by Kevin Fogarty

VMware, Microsoft Virtualization Pitches Miss the Most Practical Issues

Opinion
Aug 01, 2008
8 mins
Virtualization

Exec at high-profile Hyper-V success-story firm says capacity planning, limiting sprawl and figuring out how to manage storage are all more important than what hypervisor you run.

One of the major drawbacks of Microsoft’s virtualization pitch is the lack of good case histories—examples of companies that have not only picked Hyper-V for their virtualization platform, but are able to talk about how they made it work.

I’ve written before about Kroll Factual Data and the advice from Chris Steffen, its lead virtualization dude, much to the displeasure of some VMware users who appear to believe Microsoft’s virtualization technology doesn’t exist, but is unutterably evil anyway.

I won’t dispute either point. I’ve worked with Microsoft products and covered the company for long enough to know a product can be evil, nonexistent and still able to drive the direction of the rest of the market. Such is the power of FUD and high market share.

I can’t testify to the diabolical nature of Hyper-V (or hazard a guess at the meaning of the Evil Inside sticker on the box) but Microsoft’s hypervisor and broad-spectrum VM management and migration utilities do actually exist, so looking at Kroll’s experience with them is useful no matter how strongly you disagree with its choices.

One additional caveat about why Kroll is not normal: unlike almost anyone other than Microsoft itself, Kroll built out its IT production-server farm using VMs on Virtual Server 2005, the precursor to the Hyper-V hypervisor that’s part of Windows Server 2008.

According to Steffen (who blogs about his experience on the Microsoft Hyper-V team blog here and here) the migration from Virtual Server to Hyper-V has been so simple and trouble-free that it’s not very interesting. The number of companies taking the same path is so small, however, that it’s not very useful, either.

So we’ll mostly skip that part, except to say that Steffen has found several applications that work on top of Hyper-V despite assurances from their own developers that they wouldn’t. Blackberry Enterprise Server, for example, would not work on Hyper-V and probably never would, according to the support techs Steffen asked for help.

“We didn’t treat it any differently than any other app we put on Windows Server 2008,” Steffen says. “We jumped through all the hoops they put you through and did all the runaround, and it works fine.”

Many of the benefits Kroll gets out of its Microsoft virtualization setup have more to do with virtualization than Microsoft’s version of it specifically, at least to hear Steffen describe why he likes it.

(Kroll is a bad romantic partner, by the way; the whole time Microsoft has been sitting on its doorstep and satisfying its every need for technology and support, Kroll has been toying with the competition—using Citrix software for its remote-host environment, and continually evaluating new editions or add-ons to VMware products. It doesn’t stick with Microsoft because it’s a Microsoft shop; it sticks with Microsoft mainly because of the cost/benefit advantage compared to VMware, Steffen says. Part of that advantage, undoubtedly, is the level of low- or no-cost support, but I may have already mentioned that factor.)

The benefit side of the cost/benefit analysis comes largely because the range of management, disaster-recovery and other functions it gets from Microsoft’s Virtual Machine Manager 2008—as well as from other systems-management tools from Microsoft and its partners—is close enough to what VMware can provide to satisfy 90 percent of Kroll’s needs, Steffen says.

That wasn’t always the case. Microsoft seemed to realize late that it would need VM-management software if its hypervisor was going to be anything more than a fringe product, Steffen says. After it bought virtualization developer Connectix in 2003, Microsoft “rushed the product out the door and didn’t pay a lot of attention to what other people were doing,” Steffen says.

At a conference a few months later, Steffen found himself comparing notes with a senior-level Microsoft IT guy who, like Steffen, had come prepared with a long list of features Microsoft’s VM-management apps would need if either of them were going to be able to do their jobs right.

“Our spreadsheets were almost identical,” he says. “If we were in school, we would have ended up in detention.”

It is that list of features — most of which were eventually implemented in VMM 2008 — that keeps Steffen reasonably happy with both Microsoft and its VM-management apps.

The other 10 percent is filled by third-party applications or custom-designed routines, processes or services that have nothing to do with virtualization. Hot-site recovery services, for example, work more smoothly with virtual-server cloning and migration, but don’t depend on them to do their job, he says.

There are other things to which virtualization makes little difference, as well, Steffen says.

Security for VMs is also much less different from security for physical servers than you’d assume, according to Steffen, whose background is in IT security.

“You probably have anti-virus and firewalls and intrusion detection and all the rest of it protecting your physical servers,” Steffen says. “Do you have something that sits between the operating system and BIOS to keep it from being attacked? Probably not. That’s about the same situation as with hypervisors and the OS. You don’t see BIOS viruses much anymore; if you did, or saw hypervisors being attacked that way, you’d change; but right now it’s not a big area of concern.”

Storage — a big deal made bigger with VMs

Storage, on the other hand, is a huge issue. People forget that being able to add virtual servers without adding physical hosts does not mean they can add virtual servers without planning for the data-storage space those servers need to work correctly.

Compared to storage, virtualization is magic; storage requires additional hardware, which takes days or weeks to spec, get delivered, get configured and installed, and put to use. It might take 15 minutes to launch a new virtual server, but it could take 15 days to bring in the storage to support it.

“People are having a really hard time decoupling their storage capacity planning from servers,” Steffen says. “You can’t do them together. You have to figure what your storage requirements are going to be and be really disciplined about buying it on that schedule and on not growing your VM farm faster than your storage can support it.”

Even changing standard host configurations to require far more disk space can backfire if you don’t put hard limits on the number of VMs you support.
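To make the buy-storage-on-its-own-schedule discipline concrete, here is a rough back-of-envelope sketch. The per-VM footprint, growth rate and procurement lead time below are invented for illustration; they are not Kroll’s figures.

```python
# Back-of-envelope: when do we have to order more storage so VM growth
# doesn't outrun the array? All figures below are illustrative assumptions.

VM_GROWTH_PER_MONTH = 12        # planned new VMs per month (assumed)
GB_PER_VM = 60                  # disk allocated per VM, in GB (assumed)
FREE_STORAGE_GB = 8_000         # space left on the current array (assumed)
PROCUREMENT_LEAD_MONTHS = 1.5   # time to spec, order, rack and configure (assumed)

monthly_demand_gb = VM_GROWTH_PER_MONTH * GB_PER_VM
months_until_full = FREE_STORAGE_GB / monthly_demand_gb
order_deadline = months_until_full - PROCUREMENT_LEAD_MONTHS

print(f"Current array fills in ~{months_until_full:.1f} months; "
      f"order new capacity within {order_deadline:.1f} months.")
```

The point of the exercise is the one Steffen makes: the storage schedule is driven by its own numbers, not by whichever VM request happens to arrive next.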

Disaster Recovery

Disaster recovery is also a great application for virtualization; mirror your servers and you can shift over to another VM in seconds after the primary server crashes — far more quickly and easily than with normal disaster recovery setups, Steffen says.

But using a virtual server for disaster recovery also saddles you with 40, 50, or 60-gig files for each VM server in a DR scheme. Get into the several-hundred-server range, and that approach is going to eat up a lot more disk space than you’d expect from either a traditional DR setup or from the footprint of the VM server farm.
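The arithmetic behind that warning is easy to sketch. The image size and server count here are just the ranges mentioned above, treated as assumptions rather than measured figures.

```python
# Rough DR footprint: one mirrored image per protected VM.
# Image size and VM count are illustrative, taken from the ranges above.

vm_count = 300          # "several-hundred-server range"
image_size_gb = 50      # 40 to 60 GB per VM image; take the middle

dr_footprint_tb = vm_count * image_size_gb / 1024
print(f"~{dr_footprint_tb:.1f} TB of disk just to hold the DR copies")
# Roughly 15 TB, before snapshots, retention copies or growth are counted.
```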

Hardware Capacity Planning

Virtual machines fit ideally into hardware-consolidation plans, but get too carried away and your servers will look more like clown cars than high-efficiency, high-utilization IT assets. Leaving any hardware unutilized or underutilized is anathema to data-center managers, but without a dedicated amount of server and storage capacity under permanent reserve, there’s no space to expand when you need to back a server down a rev because a new version of a homegrown app doesn’t behave the way it’s supposed to, or because business has increased enough to require an additional VM or two in the mix.

Some companies with which Steffen trades best-practices experience have simply stuck to the same IT requisition procedures, even when the asset being requisitioned is virtual rather than physical. Forcing a departmental manager to request servers a week or so before they have to be online — a delay created so a physical server could be trucked over and plugged in — gives IT the time to verify the storage and bandwidth requirements of the new server, and truck them over if necessary, Steffen says.

Sometimes that’s the only way to control the proliferation of servers that can be spawned by a junior tech copying configuration files across a network in just a couple of minutes.

“Virtual-server sprawl,” Steffen says several times, “is a big damn deal.”

“VMM 07 and 08 can help determine where you have space to grow, and you can use the sizing tool to tell you exactly where is the best place to put a VM at any given time,” Steffen says. “But not a lot of people have heavily virtualized environments, so they may not be prepared to control sprawl. Bad as it can be, if you’re having to deal with a lot of bureaucratic inertia, that can give you enough breathing room to get ready for a VM based on how long something would have taken if you had to order it by phone and deliver it by hand.”

Kroll deals with both storage and compute-power limits through fixed overhead reserves; hardware purchases are triggered not by requests for new servers, but by projections of when and how the company will eat into the 10 percent or 15 percent of unused systems capacity.
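Here is a minimal sketch of how that kind of projection-driven trigger might be modeled; the utilization, growth and lead-time numbers are hypothetical, not Kroll’s.

```python
# Trigger hardware purchases from projected capacity burn-down, not from
# individual server requests. All numbers are hypothetical.

used_capacity = 0.82       # fraction of total capacity in use today
reserve = 0.15             # keep 15% of capacity permanently unused
monthly_growth = 0.02      # projected utilization growth per month
lead_time_months = 2       # how long a hardware purchase takes to land

headroom = (1 - reserve) - used_capacity          # usable slack left
months_to_threshold = headroom / monthly_growth   # when the reserve gets touched

if months_to_threshold <= lead_time_months:
    print("Start the hardware purchase now.")
else:
    print(f"Reserve is reached in ~{months_to_threshold:.1f} months; "
          f"order within {months_to_threshold - lead_time_months:.1f} months.")
```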

That discipline is the most important caveat to the decoupling of physical and virtual assets, Steffen says. Converting most of your physical servers into virtual ones might save electricity, space and other assets, but storage capacity, administrator time and other requirements keep growing at the same speed, whether the new gear is physical or imaginary.