If 2008 was the year vendors turned the virtualization market into competitive chaos, then 2009 was the year end-user companies moved en masse beyond test beds and into half-scale production.
Well, maybe not half scale. More like one-fifth to one-third scale, according to industry analysts and some of the biggest virtualization vendors.
Executives at enterprises who see the potential benefits of widespread virtualization of their servers have already picked the low-hanging fruit—replacing test-and-development lab servers with virtual machines, consolidating departmental servers on VMs and virtualizing non-critical servers within the data center, says Bob Quillin, senior director of marketing for EMC’s Ionix services division.
“Companies get to 20 percent or 30 percent virtualized and then slow down,” he says. “Once you get the test/dev, tier-2 and tier-3 apps virtualized, there are a lot of challenges about how to virtualize tier-1 apps. You have to have confidence you can deliver the same level of service, or you lose line-of-sight control as the app goes through virtualization, and the infrastructure team has to deal with an increased rate of change.”
Dealing with the fluid movement of a particular application from one server or VM to another—which Quillin says his customers jokingly refer to as “VMotion sickness”—is an organizational problem, as are “line of sight” limitations caused by management applications that can monitor a physical server’s activity in detail but not the VM’s.
Ionix and VMware are working on services and management software to improve those things, but Citrix Systems and partner Microsoft think the problem isn’t visibility — it’s recoverability, according to Biki Malik, senior director of product marketing at Citrix. (Citrix has been pushing a new push-button disaster-recovery application called StorageLink Site Recovery.)
Customers, of course, worry about both types of control and recovery, and most don’t have nearly the range of tools they need to feel comfortable with the number and variety of applications they’re hoping to virtualize, according to Gordon Haff, infrastructure and enterprise computing analyst at Illuminata.
“The big guys—VMware, Microsoft, Citrix—are all moving pretty quickly, so they’re clearly on the list of companies to keep an eye on,” Haff says. “The foundation stuff is primarily their domain. But beyond that, in things like compliance, I/O virtualization, security, stuff like KVM management, some of the smaller guys are pretty interesting.”
“Anybody who can fill the gaps the big guys don’t in helping virtualization admins provision and control their infrastructure is worth a look,” adds Mark Bowker, virtualization specialist at Enterprise Strategy Group. “The real missing piece, though, is the ability to optimize performance across both physical and virtual assets.”
That’s good news, in the sense that smaller vendors continue to add to the overall manageability of virtual infrastructures, Bowker says. Management tools remain hotly desired by customers, a demand also noted in the 2008 and 2009 editions of CIO’s Top 10 Virtualization Vendors to Watch.
Looking beyond VMware, Microsoft and Citrix, here are 10 virtualization tools companies to watch in 2010:
For most vendors, promising customers that a tool will let them run more VMs on fewer servers, and save money in the bargain, is risky when the promise rests on cutting-edge technology. A capacity analyzer, a tool that inventories your servers and computing resources and figures out how many applications of a given size they can support, is not a revolutionary idea; you can download similar tools for free if all you want is to benchmark your laptop.
In the virtual world, however, capacity management is something of a black art — not because few people have thought of it, but because few have built tools to look at both the physical and virtual servers and see how many of one will overwhelm the other. VKernel’s product works on both VMware and Microsoft’s Hyper-V. Without detailed capacity planning based on real data — not imagination — large-scale virtualization of production systems is not practical, according to Chris Wolf, analyst at The Burton Group.
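The arithmetic behind that kind of capacity planning is simple enough to sketch. The following is a minimal illustration with invented resource names and figures, not VKernel's actual model: for each resource, the binding constraint is whichever one runs out first.

```python
# Illustrative capacity-planning sketch; hosts and VM "sizes" are
# invented examples, not any vendor's real data model.

def vms_per_host(host, vm, headroom=0.2):
    """How many copies of `vm` fit on `host`, reserving fractional headroom."""
    usable = {k: v * (1 - headroom) for k, v in host.items()}
    # The binding constraint is whichever resource is exhausted first.
    return min(int(usable[k] // vm[k]) for k in vm)

host = {"cpu_ghz": 32.0, "ram_gb": 128, "disk_iops": 4000}
vm = {"cpu_ghz": 2.0, "ram_gb": 8, "disk_iops": 300}

print(vms_per_host(host, vm))  # CPU and RAM allow 12 each, IOPS only 10 -> 10
```

Real capacity analyzers work from measured utilization over time rather than nominal sizes, but the host-by-host "first resource to run out" calculation is the core of the exercise.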
Hyper9 broadly promises to help customers “achieve higher virtualization management maturity to meet more sophisticated business requirements.”
Look into the details, though, and the benefits become clearer, Bowker says. Hyper9’s Virtual Environment Optimization keeps track of workloads and virtual machines, categorizing them by geography, business unit or other criteria, and then reports on both performance levels and resource utilization. In other words, it tracks who is using how much of the available compute power and traces performance problems to their real or virtual sources to make troubleshooting easier. Says Bowker: You can’t optimize anything without seeing what it’s doing first.
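As a rough illustration of that kind of roll-up reporting (the field names and figures here are invented, not Hyper9's schema), tagging each usage sample with a business unit makes per-group utilization a simple aggregation:

```python
# Toy sketch of per-business-unit utilization reporting.
# Sample data and field names are invented for illustration.
from collections import defaultdict

samples = [
    {"vm": "web-01", "unit": "marketing", "cpu_pct": 35},
    {"vm": "web-02", "unit": "marketing", "cpu_pct": 20},
    {"vm": "db-01",  "unit": "finance",   "cpu_pct": 70},
]

usage = defaultdict(list)
for s in samples:
    usage[s["unit"]].append(s["cpu_pct"])

# Average CPU utilization per business unit.
report = {unit: sum(v) / len(v) for unit, v in usage.items()}
print(report)  # {'marketing': 27.5, 'finance': 70.0}
```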
DynamicOps breaks ranks with most other V-management vendors by looking at both server and desktop infrastructures as well as across VM platforms. It’s also one of the few companies in the market launched by an end-user company that liked its home-grown management app so much executives decided to make it a product. (See CIO.com’s related article, Credit Suisse Sells Its Own Virtualization Software to Others.)
The DynamicOps technology was born at Credit Suisse as a Web-based mechanism to let business units provision their own virtual resources—with built-in limits on the amount of resources they could demand and end-of-life requirements, too. Because the workflow behind the portal isn’t tied to one vendor, it’s not difficult to tweak to cover desktop VMs as well as servers. The commercial version promises quicker deployment of VMs and more control for IT, which is able to create standardized images and access limits, and track configuration and behavior of each VM through its operational life.
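The portal pattern described above, self-service requests bounded by per-unit quotas with an expiry attached at creation, can be sketched as follows. Every class, method and field name here is hypothetical, not DynamicOps' API:

```python
# Hypothetical sketch of quota-limited, expiring self-service provisioning.
from datetime import date, timedelta

class Portal:
    def __init__(self, quotas):
        self.quotas = quotas          # e.g. {"trading": 10} VMs per unit
        self.vms = []                 # (unit, expiry_date) records

    def request_vm(self, unit, lifetime_days=90):
        """Grant a VM if the unit is under quota; attach an end-of-life date."""
        in_use = sum(1 for u, _ in self.vms if u == unit)
        if in_use >= self.quotas.get(unit, 0):
            raise RuntimeError(f"quota exceeded for {unit}")
        expires = date.today() + timedelta(days=lifetime_days)
        self.vms.append((unit, expires))
        return expires

    def reap_expired(self, today=None):
        """Decommission VMs past their end-of-life; return how many were reaped."""
        today = today or date.today()
        live = [(u, e) for u, e in self.vms if e > today]
        reaped = len(self.vms) - len(live)
        self.vms = live
        return reaped
```

The point of the pattern is that the limits and the end-of-life date are enforced at request time, so sprawl is constrained before a VM ever exists.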
Embotics focuses on limiting VM sprawl—a headache many virtualization administrators complain can eat up much of the time and resource savings virtual servers deliver in the first place. The company’s V-Commander runs the same kind of discovery and inventory management scans that physical-network managers rely on, and allows users to create policies that treat provisioning as a life-cycle issue rather than a one-time event. It is designed to monitor VMs, classify them according to groups affected by different policies and automate their consolidation or recycling. The product works with VMware, Microsoft and Citrix VMs, and feeds data to third-party systems-management tools to limit sprawl in management consoles as well as VMs.
HyTrust won Best of Show and a Gold Award for security and virtualization at VMworld 2009 for the HyTrust Appliance, which is designed to create a single point of control for virtual infrastructures—including access, policy-based management, security and compliance. Because its management-policy abilities are object-based, HyTrust policies integrate with existing management structures, network and storage systems, using standard protocols. HyTrust also has the ability to get as granular as you want in controlling user access, application performance and IP address use, as well as providing heavy-duty audits of what all those objects are doing. “Management is certainly a big missing piece,” Illuminata’s Haff says, “and compliance is becoming a bigger part of that.”
Catbird, a direct competitor of HyTrust, also won awards at VMworld for VM security apps that range from policy compliance to network access to security assessment to securing cloud-computing links. Its capabilities are built on the Catbird V-Agent, a software-based security agent that can run as a virtual machine or within a virtual machine, keeping track of a VM’s activity, tracking communication between the VM and its host, and streaming data to a central control portal so none is lost when the VM or the agent shut down. That adds flexibility to the system, Catbird claims, and allows the network of agents to expand or contract smoothly with changes in the VM infrastructure.
Netuitive focuses on making performance management simpler by automating the process of setting performance thresholds and baselines, eliminating many of the false-positive alarms network managers spend their days chasing. The basis of that automation is an analysis engine called the Netuitive Service Analyzer, which monitors the behavior of both physical and virtual machines and creates a set of baselines it defines as “normal.” Once those are set, Netuitive intercepts and analyzes alarms, sending its own alerts to administrators only when performance falls outside those “normal” (rather than optimal) bounds. That ability continues to be Netuitive’s strength, Burton Group’s Wolf says. But many larger, more mainstream systems management vendors are adding similar physical and virtual capabilities as well, making that “normal” part of the management market far more crowded.
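The baseline-then-alert approach can be sketched in a few lines. This is an illustrative stand-in (a simple mean-plus-three-sigma band over recent history), not Netuitive's actual analysis engine:

```python
# Illustrative baseline alerting: learn a "normal" band from history,
# then alert only on deviations from it. Sample data is invented.
from statistics import mean, stdev

def baseline(history):
    """Normal band = mean +/- 3 standard deviations of observed values."""
    m, s = mean(history), stdev(history)
    return (m - 3 * s, m + 3 * s)

def should_alert(value, band):
    low, high = band
    return not (low <= value <= high)

cpu_history = [41, 44, 39, 42, 45, 40, 43, 41]   # steady daytime load, %
band = baseline(cpu_history)

print(should_alert(44, band))   # within normal variation -> False
print(should_alert(90, band))   # genuine anomaly -> True
```

A static threshold of, say, 50 percent would stay quiet here too, but on a host whose normal load is 80 percent it would fire constantly; learning the band per machine is what suppresses the false positives the article describes.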
Though desktop virtualization is less sexy and far more problematic—due to the number of machines to be virtualized, if nothing else—it is becoming more mainstream in concept. Making it more mainstream in practice will depend on how well desktop virtualization vendors emulate the performance and capabilities of standalone PCs, according to Andi Mann, analyst at Enterprise Management Associates.
Liquidware Labs’ Profile Unity is designed to do exactly that, allowing individual end users to create and store—on a server—profiles, configuration settings and documents so they can get the same “desktop” every time they log in to the company’s virtual desktop infrastructure (VDI). The profile management system also monitors activity of the virtual desktops for compliance reporting, security and service-level monitoring. Liquidware also does capacity assessments of existing desktops to let you know which machines are fit for XenDesktop or VMware View, and which are not.
AppSense, a direct competitor to Liquidware Labs, also focuses on user profiles, referring to its configuration storage and management system as a way to let users keep “personality” with their virtual desktops. Both companies promise quicker, more consistent provisioning and better customization. AppSense focuses more on user environment and responsiveness, storing user data where it can be most quickly retrieved to reduce login times, and promising to add not only the ability for end users to store documents on the server but also, sometime later this year, applications as well.
“AppSense is a company in the right place at the right time,” Wolf says. “They offer a mature product suite focused around ensuring a user’s personal settings remain consistent across a variety of delivery mechanisms, such as a physical desktop, virtual desktop and XenApp session. AppSense is closely aligned with Citrix, and I would not be surprised if Citrix eventually acquired them.”
Another potential player in desktop virtualization is RingCube, whose vDesk Virtual Desktop Solution and RingCube Workspace Virtualization Engine offer what Wolf calls a cost-effective alternative to traditional desktop virtualization.
“The technology is complementary to VMware and Citrix,” he says, “making it a good complement to existing plans, or for organizations that need to address immediate needs while waiting on further maturity and more competitive pricing from VMware and Citrix.”