by Kevin Fogarty

Microsoft’s Virtualization Big Picture: Worth the Wait?

Jun 10, 2008 | 5 mins

Microsoft has a comprehensive and integrated pitch regarding virtualization management. But the current reality is considerably less complete.

Despite naysaying by analysts, customers, tech industry observers and me, it is not true that Microsoft has no substantive answer to the multifaceted approach VMware and other competitors take to server virtualization.

Microsoft has a very good answer, in fact, one that’s more open and complete than is typical of Microsoft—the company that only decided a couple of years ago that it was undiplomatic to continue referring to server implementations that included applications or operating systems from other companies as “alien environments.”

Its answer includes sophisticated virtual- and physical-machine management, disaster recovery, automatic provisioning and several different flavors of desktop virtualization.

Unfortunately, most of the content in that answer is a little, well, premature. Three to six months premature, depending on which part of the answer you’re waiting to hear.

If you only want a dirt-cheap way to run virtual machines on your Windows Server 2008 machines—or even non-Microsoft-based servers if you use the standalone version of Microsoft’s hypervisor—you’ll get what you’re waiting for sometime this summer, when Hyper-V completes beta testing.

You can get a beta version, of course, and it’s not the only hypervisor option out there. But since so many data-center managers refuse to count on a Version 1.0 product, I’m assuming they won’t count on one whose version number is still on the wrong side of the decimal point.

Hyper-V runs beneath the operating system on a server, directly on the hardware just above the BIOS, and supports non-Microsoft guest operating systems, especially Linux, according to Zane Adam, senior director of virtualization at Microsoft.

The more comprehensive piece is the management suite Microsoft is building out of its System Center systems-management product line. It centers on Virtual Machine Manager (VMM), a completely new module custom-designed to add detailed VM management to System Center. The suite also includes backup-and-recovery and automatic failover for physical machines, as well as configuration and provisioning of both physical and virtual machines.

System Center, with VMM installed, will seamlessly manage physical and virtual servers, physical and virtual desktops, and applications through the same interface, according to Microsoft.

Version 2 of VMM—a release date for which has not been set—will also manage VMware servers.

If you’re thinking System Center isn’t synonymous with centralized enterprise-class data-center systems management, by the way, you’re right. Adam says Microsoft has been pushing the suite steadily uphill for the last several years, updating it from its incarnations as Systems Management Server and Microsoft Operations Manager into a more cohesive product set.

But the core of the high-end systems management market still belongs to HP, IBM/Tivoli and others, not Microsoft and its suite, which started life as a software-distribution tool designed to help customers roll out Windows 95.

Interestingly, Microsoft is also taking a multipronged approach to desktop virtualization. Customers can (or will be able to) virtualize desktop implementations in three major ways.

Windows Vista Enterprise Centralized Desktop runs the entire OS from a back-end server for end users on terminals or diskless workstations. Microsoft co-markets the product with Citrix.

Presentation Server runs like a terminal—showing the user screen images of an application running on a back-end server, using a minimum of compute resources on the PC.

Application virtualization, which Microsoft acquired along with Softricity two years ago and is about to reissue as Microsoft Application Virtualization 4.5, lets customers house and distribute applications from a central site and run them in a sandbox on the user’s PC. That gives customers central control of their applications, removes the need to standardize PCs on the user end, and avoids conflicts between incompatible versions of the same application, Adam says.

Adam and Microsoft call the approach a “360-degree view of virtualization,” and it feels pretty comprehensive when you’re talking to them.

But you have to check the release dates of all the various pieces and see which exist as proven products, which are beta software and which (like the VMware-management capabilities) haven’t been spotted in the wild yet.

I try not to be too skeptical when I hear a Microsoft big-picture pitch on something that has scared it as much as virtualization has (though respected grammarians, assuming there is such a thing, would agree that "experienced observer" and "skeptic" are synonymous when it comes to the tech industry, politics and the occult).

But, comprehensive and integrated as the pitch is, the reality is considerably less complete. And maybe just a little thin.

“I was at Microsoft a couple of weeks ago and looked at Hyper-V, and saw a lot of good things about administration and things like that. But it’s still too new,” Barry Brunetto, VP of IS at Portland-based logging-industry toolmaker Blount, told me on the phone recently.

“We’re standardized on Microsoft, but when you’re dealing with a production environment that’s critical for the whole business, you want to get some backing in there. You don’t want to go with something that’s not proven,” Brunetto says. “Right now I’m kind of leaning toward VMware.”