Thursday afternoon, Microsoft released its virtualization product, Hyper-V, to manufacturing. Previously, the company had promised to make a production-supported version of Hyper-V available to Windows Server 2008 customers within 180 days of the official release of the operating system itself.
By releasing Hyper-V in late June, Microsoft beat its self-imposed deadline by about a month, although it delivered less than was originally promised. Public release candidates have been available since early 2008. In this piece, I'll look specifically at the release-to-manufacturing, or RTM, edition, noting improvements and changes.
The most notable change between the initial release candidate (RC0) of Hyper-V and the RTM edition is improved performance. Most of the performance work was done between RC0 and RC1, but few people knew about it, for two reasons: RC1 saw a relatively narrow release, and Microsoft prohibited publishing performance test results because it wasn't ready for the product to be benchmarked on a wide scale.
QLogic, a vendor of storage area network (SAN) components, tested Hyper-V RC1 with one of its own Fibre Channel host bus adapters (HBAs). QLogic hooked the HBA up to a storage array to compare the number of I/O operations per second a virtual machine could sustain against real, physical hardware. The results were impressive, and they apply equally to RTM, since the underlying performance plumbing didn't change.
In particular, a setup modeling I/O-intensive applications, a profile common to mail server and database server configurations, showed that 120,426 I/O operations per second were possible on real hardware, compared with 116,720 per second in a Hyper-V virtual machine. In other words, the virtual machine was able to achieve 97 percent of the storage performance of a physical server.
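As a quick sanity check, that percentage falls straight out of the two IOPS figures above (a trivial sketch; the numbers come from QLogic's published test):

```python
# QLogic's reported throughput figures for the I/O-intensive scenario.
physical_iops = 120_426  # I/O operations per second on physical hardware
virtual_iops = 116_720   # same workload inside a Hyper-V virtual machine

ratio = virtual_iops / physical_iops
print(f"Virtual machine reached {ratio:.1%} of physical throughput")
# -> Virtual machine reached 96.9% of physical throughput
```

The exact figure is 96.9 percent, which the article rounds up to 97.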
QLogic performed similar tests in a variety of configurations, each mimicking a specific type of storage operation, with comparable results: as low as 88 percent of real hardware in one scenario and as high as 99.93 percent in another. (QLogic has published the full results.)
Indeed, performance has been good enough that Microsoft says it has been running its popular msdn.microsoft.com and technet.microsoft.com sites on Hyper-V RC1 virtual machines for months now. Combined, the sites receive around 4 million hits per day. Each IIS 7 virtual machine runs with four virtual CPUs and 10GB of RAM; the physical hosts each have two quad-core CPUs and 32GB of RAM and run three virtual machines apiece. Microsoft will migrate these production machines to the RTM version of Hyper-V very soon, as part of its major push to virtualize up to 25 percent of its internal IT infrastructure this year.
Guest OS support
The other big change since the release candidate versions of Hyper-V is support for many more guest operating systems. At this time, more than 50 guest operating systems have been validated, including Windows 2000 Server, Windows Server 2003, Windows XP and Windows Vista, in x86 and x64 flavors and in a variety of multiprocessor scenarios. There is also support for SUSE Linux Enterprise Server 10 with Service Pack 1 or 2, in both x86 and x64 architectures. Integration components will be made available for all of these operating systems, further enhancing their compatibility.
The Hyper-V team runs an extensive battery of tests on each operating system it validates, ensuring that the OS's performance and behavior when hosted in a virtual machine are as close to indistinguishable from real, physical hardware as possible. This is all part of the Server Virtualization Validation Program. In addition, OEMs have qualified more than 214 individual systems to run Hyper-V, and 57 applications are being tested for qualification by independent software vendors.
I personally tested Windows Server 2003 and Windows Server 2008, in both 32-bit and 64-bit flavors, and found the virtual machines (in my non-scientific examinations) to be quite snappy and realistic. My test system, a Dell Precision Workstation 490 with dual Xeon processors and 4GB of RAM running Windows Server 2008 (which, famously, reports itself as Service Pack 1 out of the box, since it shares a codebase with Windows Vista SP1), ran VMware Workstation and Virtual PC virtual machines quite happily.
But there was a noticeable lag when interacting with those VMs. Application installation was slow and jerky, cutting and pasting material from one program to another with the mouse took some trundling, and so on. It was usable, but one could certainly tell the difference between a physical PC and a virtual machine.
In contrast, I was unable to distinguish virtual machines running on Hyper-V from native hardware once the Integration Components package was installed in each virtual machine. (Integration Components is part of Hyper-V.) Performance over the network, using Hyper-V Manager to route screen output and keyboard and mouse input to and from virtual machines hosted on another system, was quite good as well.
Despite stellar performance and its prominence as an "in-box" solution, Hyper-V still suffers from a few shortfalls. Among them:
- There is no USB support for guest operating systems. That's fine for many server-style deployments, but client operating systems and some applications depend on USB devices for usability and licensing. Solutions from VMware, Microsoft's most direct competitor, currently offer this. There is a workaround, however, for common USB-based devices like smart cards and storage products: you can easily share these devices with a VM through the Remote Desktop Client. Within MSTSC (the shorthand name for the Remote Desktop Client), navigate to the "Local Resources" tab; under the "Local devices and resources" section, click "More" and select any device you like.
- VMware's VMotion live migration feature, which lets administrators move virtual machines, along with their associated storage and networking components, to another physical host with no downtime, has no parallel on the Hyper-V side. That means there is still some end-user impact when a machine hosting Hyper-V goes down. If uptime is absolutely, non-negotiably mission-critical, Hyper-V isn't there yet. And it's unclear at this point if and when this feature might be added.
- You cannot add resources on the fly, a feature known as hot-add.
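The Remote Desktop workaround for USB-class devices mentioned above can also be preconfigured rather than clicked through each session, by saving the redirection settings in an .rdp file. A minimal sketch; the host name is a placeholder, and the two redirection lines are standard Remote Desktop Client settings for smart cards and local drives:

```
full address:s:hyperv-guest.example.com
redirectsmartcards:i:1
drivestoredirect:s:*
```

Double-clicking the saved file opens a session with smart cards and all local drives mapped into the remote (virtual) machine, which covers the most common USB scenarios.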
Hyper-V won't kill VMware or any other competitor right out of the gate. In my opinion, it's not yet ready for "five-nines" deployments, given the lack of zero-downtime migration and the inability to hot-add resources.
On balance, however, it's a fantastic solution where the price is right, and for modest and even important business needs it is an excellent fit because of its solid performance and ease of deployment.
The author's books include RADIUS, Hardening Windows, Using Windows Small Business Server 2003 and Learning Windows Server 2003. His work appears regularly in such periodicals as Windows IT Pro magazine, PC Pro and TechNet Magazine. He also speaks worldwide on topics ranging from networking and security to Windows administration. He is currently an editor for Apress, Inc., a publishing company specializing in books for programmers and IT professionals.
This story, "RTM Edition of Microsoft Hyper-V Adds Speed," was originally published by Computerworld.