For Munder Capital Management, server virtualization was all about improving its disaster recovery operations and ensuring that failed servers could be brought back online quickly.
“One of the biggest things we focused on with virtualization was disaster recovery between disparate locations,” says Mike Dufek, director of information systems for Munder Capital, a $30 billion financial services firm based in Birmingham, Mich.
“Virtualization gave us the ability to recover very quickly from physical to virtual machines and virtual back to physical machines,” Dufek says. Recovering a failed server, which once took days or weeks of manual work, is now a matter of provisioning a new virtual server and restoring it with data from a storage area network (SAN), he says.
“We’ve been able to drop back the number of physical boxes we have and the time we spend maintaining those boxes,” Goerlich says. “Along with that, we save on cooling costs by about $1,000 a month. That’s huge.”
Picking the Platform: Three Requirements
Munder began its virtualization project in June 2007. The project, which ultimately eliminated 42 physical servers, was guided by three criteria:
The platform had to match or enhance the skills of the company’s existing staff; it had to fit well with the ecosystem of operating systems, applications and hardware already in use at the company; and it had to deliver the best mix of features and performance for the price, according to Wolfgang Goerlich, network operations and security manager for Munder Capital.
To bolster the company’s disaster recovery plans, Munder Capital Management installed two dual-controller Compellent Storage Center SANs, one in Birmingham and the other at a remote site seven miles away. The setup relies on snapshots that back up the company’s data, and replication that shuttles the snapshots between the sites to keep the data safe. (For more advice on storage virtualization planning, see CIO.com’s How to Do Storage Virtualization Right.)
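The snapshot-and-replicate pattern behind this setup can be sketched in a few lines. The Python below is purely illustrative — the `Site` class and `replicate` function are invented for the example and are not Compellent’s or Microsoft’s APIs; they just model two locations exchanging point-in-time copies so either site can recover.

```python
from datetime import datetime, timezone

class Site:
    """Illustrative model of one SAN location holding point-in-time snapshots."""
    def __init__(self, name):
        self.name = name
        self.snapshots = {}  # volume name -> list of (timestamp, data) tuples

    def take_snapshot(self, volume, data):
        # Record a point-in-time copy of a volume at this site.
        stamp = datetime.now(timezone.utc)
        self.snapshots.setdefault(volume, []).append((stamp, data))
        return stamp

def replicate(source, target):
    """Copy any snapshots the target is missing, so either site can recover."""
    for volume, snaps in source.snapshots.items():
        existing = {t for t, _ in target.snapshots.get(volume, [])}
        for stamp, data in snaps:
            if stamp not in existing:
                target.snapshots.setdefault(volume, []).append((stamp, data))

# Two sites, as in Munder's setup: primary in Birmingham, remote seven miles away.
primary = Site("Birmingham")
remote = Site("remote")
primary.take_snapshot("vm-exchange", b"...block image...")
replicate(primary, remote)
assert remote.snapshots["vm-exchange"] == primary.snapshots["vm-exchange"]
```

The point of the sketch is the direction of flow: snapshots are taken locally and replication only ships copies the other site lacks, which is what lets a surviving site restore servers after the other is lost.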
Starting in 2007, the company evaluated VMware, Microsoft’s Hyper-V, Microsoft Virtual Server and Xen, Goerlich says. It also looked at Virtual Iron and Parallels’ Virtuozzo before choosing Microsoft’s Hyper-V.
“We started with Hyper-V when it was first a release candidate,” Goerlich says. “Right away we were impressed with its performance.”
“VMware comes with a couple of features that Hyper-V doesn’t have—VMotion being the main one and memory allocation being the other,” Goerlich says. “One of the discussions we had was, ‘Were those feature sets going to give us enough return for the extra price?’ After looking at those features, we decided we could probably forgo them. And Microsoft has been saying all along they are going to catch up the feature set anyway.”
Goerlich says Citrix XenServer also performed well, but that it lacked support from the company’s backup and storage partners, who were not developing for it.
New SAN Plays Key Role
Hyper-V also allowed native access to the disk—another advantage over the other systems Munder evaluated.
“We boot virtual servers from our Compellent SAN,” says Goerlich. “Since we have multiple sites, if we lose disks at a site, we can bring them back up from images stored on the SAN. And Hyper-V supports replay or snapshot capability.”
“While VMware does replay phenomenally, from our perspective we’d like to keep the replay on the SAN,” Goerlich says. “Hyper-V takes a snapshot the same way VMware does. If you are using Hyper-V, you can use and recover those snapshots and design your workflow around them in the same way for both physical and virtual servers.”
For a look at how two other companies revamped their DR strategies using virtualization, see CIO.com’s recent case studies on Marriott’s bold effort and Transplace’s money-saving revamp.