Choosing just enough virtual machines, but not too many, for a given server has always been a challenge. Running a set of virtual servers and the applications that they support on one physical server running just one operating system seems easy enough — at first.
But making sure the hardware can support that additional load is a real trick because of the almost infinite variety of the software that runs within the virtual environment — each application making a slightly different set of demands on the host OS and the hardware, says Chris Wolf, analyst at The Burton Group.
Consolidating physical servers into VMs should save money, of course, but you can’t scrimp too much on the hardware without dragging down application performance — and aggravating end users, says Ian Scanlon, IS operations manager for Computacenter, a data center and IT services company based in London but covering most of Europe.
“If you put five VMs on a server, you’re running six operating systems and all the applications, so you have to ramp up to be able to handle that and keep the service levels, the performance, high for the applications,” Scanlon said. “We ended up having to put on a lot more memory than we figured during the capacity planning.”
Getting detailed and accurate estimations of how well a server will perform as a VM host is complicated further by the varying ability of different chipsets to support virtual workloads and hypervisors, according to Gordon Haff, high-performance computing analyst at Illuminata.
Virtual machines stress a processor’s cache memory harder than a physical server does, and processors differ in their ability to switch between the demands of applications and hypervisors, he says.
Both Intel and AMD build in circuits specifically to support both virtualization and the migration of virtual servers. A given server could have between two and eight processors, each of which has between two and eight processing cores. How well your particular server configuration will fare with an idiosyncratic load of software is almost impossible to predict without very specific and painstaking analysis, says Andi Mann, analyst at Enterprise Management Associates (EMA).
Even asking vendor technical or sales reps directly won’t get you a specific answer unless you’ve examined the workloads you intend to put on the server. There are no hard-and-fast rules, but a couple of rules of thumb can get you close enough to spot the weak points and figure out where or how to reinforce them, says Massimo Re Ferre, a senior IT architect in IBM’s Systems and Technology Group. First, for every core on a new Intel or AMD processor, you can add three to five virtual machines, he says.
That’s a more optimistic outlook than that of Scanlon, who says he puts five or six VMs on a single server. If the applications are resource-intensive databases or ERP apps, he only runs two.
Bottom line: Less is more, Mann says. Too much focus on consolidation inevitably leads to poor performance and user dissatisfaction. Re Ferre’s second rule of thumb: for every core on a new processor, add between 2GB and 4GB of memory. That squares with Scanlon’s assessment, and with the 48GB of RAM he runs on each high-end blade server.
“Once we got the memory up, we didn’t experience any performance problems to speak of,” he says.
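Re Ferre’s two rules of thumb reduce to simple per-core arithmetic. A minimal sketch, with function and parameter names of my own invention (only the ratios — three to five VMs and 2GB to 4GB of RAM per core — come from the article):

```python
# Rough VM-host sizing based on the per-core rules of thumb quoted above.
# The ratios are Re Ferre's; everything else here is illustrative.

def size_host(cores, vms_per_core=(3, 5), gb_per_core=(2, 4)):
    """Return low/high estimates for VM count and RAM for a host."""
    return {
        "vm_capacity": (cores * vms_per_core[0], cores * vms_per_core[1]),
        "ram_gb": (cores * gb_per_core[0], cores * gb_per_core[1]),
    }

# A two-socket, six-core-per-socket server (12 cores total):
estimate = size_host(cores=12)
print(estimate)  # {'vm_capacity': (36, 60), 'ram_gb': (24, 48)}
```

Note that the high end of the memory estimate for a 12-core host lands exactly at the 48GB Scanlon settled on for his blades — and that his five or six VMs per server sits far below the per-core VM ceiling, which is what “less is more” looks like in practice.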
Other gotchas to watch out for:
First: don’t forget the plumbing. Consolidating many servers onto one host concentrates their I/O demands, so make sure you have enough links to back-end storage and to the network to accommodate them, Haff says.
Second: build a fence. VMs are easy to launch and hard to see, so server sprawl — having too many VMs running without actually being used — is appallingly common. Killing off unused servers and reclaiming the disk space they had reserved freed up so many resources that Computacenter was able to put off a major upgrade until the following budget cycle, Scanlon says.
Third: Use what tools are available to give you a detailed look at your setup, Wolf says.
VMware offers vCenter CapacityIQ 1.0, and Microsoft offers its Assessment and Planning Toolkit for Hyper-V to guide customers. Neither is particularly good, however, at analyzing environments that include both VMware and Hyper-V.
Many third-party tools do cover both platforms — HP alone has one sizing tool designed specifically for VMware and another for Hyper-V. Tools less affiliated with VMware or Microsoft can provide a more independent evaluation as well, Mann says.
Among them are Novell’s PowerRecon, CiRBA Inc.’s Data Center Intelligence, Akorri’s BalancePoint and VKernel’s Capacity Modeler. To learn more about how one customer used CiRBA’s tools, see the CIO.com case study “How Underwriters Laboratories Plans Virtualization Moves Wisely.”
In the end, it’s possible to build a detailed profile of what you’re demanding of a VM host and what you’re expecting from it. But considering the amount of time and money it would take to do that yourself or pay for a professional assessment, it may be cheaper to just buy more power than you need and enjoy a server that runs well at 70 percent capacity, rather than one that’s overstressed at 95 percent, Re Ferre says.
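Re Ferre’s closing argument amounts to a headroom check: size the host so its peak load stays comfortably below full utilization. A minimal sketch, assuming a 70 percent target taken from his comment (the function name and load units are illustrative, not from any vendor tool):

```python
# Flag hosts whose peak load would push them past a target utilization.
# The 0.70 default follows Re Ferre's "runs well at 70 percent" remark.

def needs_more_capacity(peak_load, capacity, target=0.70):
    """True if peak load would exceed the target utilization."""
    return peak_load / capacity > target

# A host with headroom (70 of 100 units at peak) is fine;
# one pushed to 95 of 100 is overstressed.
print(needs_more_capacity(70, 100))   # False
print(needs_more_capacity(95, 100))   # True
```

The point of paying for the extra 25 points of capacity is that the check above stays False even when the workload estimate turns out to be optimistic — which, as Scanlon’s memory experience shows, it often does.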