by Bernard Golden

OVF: A Standard for Virtualization’s Future

Jan 08, 2009

Open Virtualization Format could solve key problems with virtualization deployments. It's got key vendors behind it. And it's likely to be even more important as the use of cloud computing increases. Here's why.

Quick: What part of IT data center operations hasn’t been transformed by virtualization?

After all, server virtualization’s value is well-established. Many, many companies have migrated significant percentages of their servers to virtual machines hosted on larger servers, gaining benefits in hardware utilization, energy use, and data center space. And those companies that haven’t done so thus far are hatching plans to consolidate their servers in the future. These are all capital or infrastructure costs, though. What does server virtualization do for human costs—the IT operations piece of the puzzle?

Base level server consolidation offers a few benefits for IT operations. It makes hardware maintenance much easier, since virtual machines can be moved to other physical servers when it’s time to maintain or repair the original server. This moves hardware maintenance from a weekend and late night effort to a part of the regular business day—certainly a great convenience.

The next step for most companies is to leverage the portability of virtual machines to achieve IT operational agility. Because virtual machines are captured in disk images, they are portable, no longer bound to an individual physical server. The ease of reproducing virtual images means that application capacity can be easily dialed up and down with the creation or tear down of additional virtual images. Server pooling allows virtual machines to be automatically migrated according to application load. More sophisticated virtualization uses include high availability, where virtual machines can be moved automatically, by the virtualization management software itself, when hardware failures occur. Seeing the magic of a virtual machine automatically being brought up on a new server after its original host is brought down, all without any human intervention, vividly demonstrates the power of more sophisticated virtualization use.

Certainly these kinds of uses of virtualization demonstrate its power to transform IT operations, enabling IT organizations to offer the kind of responsiveness and creativity that could only be dreamed of a few years ago. The deftness with which applications can be migrated, upsized, downsized, cloned, etc. is something that will forever change the way IT does its job.

But there’s one place in the entire IT operations value chain unaffected by this capability: the original installation and configuration of the app. Before you can do any of that cool stuff with a virtual machine, you have to create it. And heretofore, that has remained a manual effort, time-consuming, error-prone, and not much fun.

Of course, there are virtual appliances, which come pre-configured with software, ready to run. Their traction to date has been limited, though. One problem for them has been software licensing; there’s no way to address the licensing requirements for commercial software components within them. While the end user can obtain and install a license for a virtual appliance, it’s certainly not plug-and-play. For this reason, most virtual appliances to date have been based on open source software or evaluation copies.

Also, while the virtual appliance can contain the necessary software components within its virtual disk image, there’s no way to accompany the executable image with contextual information—things like how many virtual CPUs the application should have, or how the networking should be set up. All of that is left to the person doing the installation to configure. That work is certainly achievable, and perhaps not even a large burden compared to the usual slog of installing and configuring individual software components, but once again it is definitely not plug-and-play.

Finally, virtual appliances are well-suited for an individual virtual machine that contains all necessary software within itself, but they fall short of the ability to deliver a more complex topology. For example, it’s easy to envision an application that would be best suited to a multi-virtual machine architecture, such as an N-tier architecture. Today, such a complex architecture could be delivered as multiple virtual appliances, but it would have to be accompanied by written instructions regarding configuration that would need to be executed manually.

The logic for virtual appliances is unassailable, however. From the perspective of IT shops, virtual appliances address software installation and configuration—the last frontier of operations automation. From the perspective of software vendors, virtual appliances hold the promise of reducing support costs, a large proportion of which are correlated with initial application install. From the perspective of virtualization providers, virtual appliances represent a further buttressing of the virtualization value proposition. And, from the perspective of the future global IT infrastructure, where the boundary between internal data centers and external cloud computing resources is erased, virtual appliances represent the easiest way to integrate those two worlds—one software component that can run anywhere.

This is a situation that cries out for a standard, which would offer a way for all participants to gain the benefits of virtual appliances while reducing the friction of multiple, competitive, incompatible implementations. And, fortunately, the Distributed Management Task Force has launched just such an initiative: the Open Virtualization Format (OVF). OVF addresses the three key issues described above—licensing, deployment metadata, and multi-machine topologies—that have previously hindered virtual appliance adoption. In addition, OVF supports multiple virtual image formats, so that virtual machines for any hypervisor may be carried within an OVF payload. Furthermore, OVF payloads can be digitally signed, ensuring tamper-free distribution and user peace of mind.
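To make the idea of a deployment descriptor concrete, the sketch below builds a drastically simplified OVF-style envelope in Python. The element names (Envelope, References, NetworkSection, VirtualSystem, VirtualHardwareSection) follow the OVF specification, but XML namespaces and most required attributes are omitted for readability—this illustrates the shape of the metadata an appliance can carry (disk files, networks, virtual CPU and memory requirements), not a valid OVF document.

```python
# Illustrative sketch only: a simplified, namespace-free OVF-style
# descriptor. Real OVF descriptors are namespaced XML validated
# against the DMTF schema.
import xml.etree.ElementTree as ET

def build_descriptor(vm_name, vcpus, memory_mb, disk_file, network):
    envelope = ET.Element("Envelope")

    # References: the files (e.g. disk images) shipped with the descriptor
    refs = ET.SubElement(envelope, "References")
    ET.SubElement(refs, "File", id="file1", href=disk_file)

    # NetworkSection: logical networks the appliance expects to attach to
    nets = ET.SubElement(envelope, "NetworkSection")
    ET.SubElement(nets, "Network", name=network)

    # VirtualSystem: one VM, with its hardware requirements spelled out,
    # so the installer no longer configures vCPUs or memory by hand
    vs = ET.SubElement(envelope, "VirtualSystem", id=vm_name)
    hw = ET.SubElement(vs, "VirtualHardwareSection")
    ET.SubElement(hw, "Item", ResourceType="3",  # 3 = processor in CIM
                  VirtualQuantity=str(vcpus))
    ET.SubElement(hw, "Item", ResourceType="4",  # 4 = memory in CIM
                  VirtualQuantity=str(memory_mb))
    return envelope

desc = build_descriptor("web-tier", vcpus=2, memory_mb=4096,
                        disk_file="web-disk1.vmdk", network="AppNet")
print(ET.tostring(desc, encoding="unicode"))
```

A multi-machine package would simply carry several VirtualSystem entries (the specification groups them in a collection), which is how OVF delivers the N-tier topologies that single-image appliances cannot.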

The rise of cloud computing is likely to make OVF even more important. Being able to put a complete application payload onto a remote cloud service will be much more convenient than implementing individual virtual machines and attempting to manage the licensing nightmare of remote systems.

Major players in virtualization like Dell, HP, IBM, Microsoft, VMware, and Citrix are behind OVF. The fact that the last three companies (competing hypervisor vendors, all) are involved indicates the importance the virtualization providers attach to this initiative, and raises the probability that the standard will be widely adopted.

OVF is not perfect, of course. To this point, it is focused on packaging and deployment, not on management or migration. Migration in particular is critical for future real-world use. The DMTF has announced that both capabilities are priorities, so the standard will undoubtedly address them in the (one hopes) near future.

With installation and configuration eased by wider virtual appliance use, the potential of virtualization for completely transforming IT operations is at hand. While it would be unrealistic to expect that no manual administration will be necessary in tomorrow’s IT operations groups, the need for costly, low-payoff manual dogsbody work will be diminished enormously, allowing IT organizations to focus on their ultimate charter: helping the overall business operate more nimbly and efficiently.

So OVF: Look for it in your future.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.