The virtualization market took a sharp turn toward the nasty, practical and cheap during 2008. Microsoft finally shipped a version of Windows Server with a native hypervisor, effectively giving away the ability to create multiple virtual servers from one machine. That’s the cheap. The nasty: Microsoft also pulled marketing stunts such as sending guerrilla marketers into VMware’s biggest U.S. tradeshow bearing casino chips with an anti-VMware message. Market-dominating VMware shrugged off Microsoft’s bravado, rolled out a host of higher-level management and data-assurance products and services, and ramped up its marketing to emphasize its vision of an ambitious virtual data center operating system. (Catch up on what VMware CEO Paul Maritz had to say about all this at the VMworld show with this video.)
In the IT trenches, virtualization users, mostly unconcerned with the vendor spitefulness, will expand beyond the test, development and evaluation installations they ran in 2008, running more production applications in VMs during 2009, predicts Chris Wolf, senior analyst at The Burton Group. In addition to running virtual servers and server-consolidation projects, many IT shops are adding deeper layers of storage and network virtualization. That makes it even harder to see and manage the proliferation, performance and interaction of the applications, networks and VMs that make up an increasingly complex virtualized infrastructure, Wolf says.
In a mid-2008 survey Enterprise Strategy Group conducted among VMware users, only a quarter of respondents said the tools they had available to track and monitor their virtual infrastructures were sufficient to let IT maintain current or contractually required service levels, according to ESG analyst Mark Bowker. (That caution and hunger for tools match up with what we heard in CIO.com’s January 2008 survey of IT leaders on virtualization.)
While tools to manage that opacity are proliferating, most tools can see only the performance data visible to hypervisors—which are purposely blinded to the underlying storage and networking topologies on which they run. The really valuable tools will be those with a deep awareness of storage-area networks and network I/O connections, to match applications with both resources and elements that could affect their performance on the network and storage side, Wolf says.
Sure, VMware, Microsoft and Citrix all continue to make and push their own management tools. Industry stalwarts such as CA, Symantec, Cisco and Sun are in the virtualization game for real now too. But among the many other smaller, innovative vendors in the virtualization management arena, which ones deserve your attention? Here’s a look back at our list for 2008. And here is CIO.com’s look at ten makers of VM management, configuration and monitoring tools who are, unquestionably, worth watching in 2009.
1. Akorri

Akorri’s BalancePoint management applications are designed to measure the capacity of physical servers and the performance capacity of the virtual machines they support, in order to keep workloads from affecting application service levels. Its particular strength is the ability to find and gather performance data on software and systems beyond the virtual infrastructure, according to a lab review by the Enterprise Strategy Group.
BalancePoint collects data from servers, storage and applications to identify relationships among them and interactions that can affect performance. It creates an application fingerprint for each set of relationships and builds models to identify potential conflicts ahead of time. The automated discovery and predictive modeling are the most valuable functions, ESG concluded, with the simple implementation that automated discovery allows a close second.
2. CiRBA

CiRBA remains one of the leaders in management of virtualized infrastructures. CiRBA’s tools are designed to help map out data center consolidations and virtual infrastructure development, combining capacity planning for both physical and virtual servers. That planning takes into account factors such as application middleware, database query loads and required service levels. Pre-packaged analyses are designed to help determine optimum workloads for specific configurations of physical and virtual servers. Customers can also create their own criteria and evaluate either new designs or additions to existing configurations according to their idiosyncratic requirements.
To learn more about how one customer used CiRBA’s tools, see the CIO.com case study “How Underwriters Laboratories Plans Virtualization Moves Wisely”. As this IT shop told us, CiRBA’s data center intelligence tools help it continue to expand its virtual server infrastructure while eliminating the guesswork from what-if scenarios.
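At its core, the kind of what-if consolidation analysis described above is a packing problem: fit VM workloads onto the fewest physical hosts without exceeding each host’s capacity. The sketch below uses a generic first-fit-decreasing heuristic; it is not CiRBA’s actual algorithm, and the VM names, demand figures and headroom percentage are all illustrative.

```python
# Generic consolidation sketch: pack VMs (by peak CPU demand) onto hosts,
# largest workloads first, keeping headroom in reserve for demand spikes.
# This is a first-fit-decreasing heuristic, not any vendor's real algorithm.

def plan_consolidation(vm_demands, host_capacity, headroom=0.2):
    """Assign VMs (name -> peak CPU demand, GHz) to hosts, reserving headroom."""
    usable = host_capacity * (1 - headroom)   # capacity left after headroom
    hosts = []                                # each host: list of (name, demand)
    # Place the largest workloads first (first-fit-decreasing).
    for name, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if sum(d for _, d in host) + demand <= usable:
                host.append((name, demand))
                break
        else:
            hosts.append([(name, demand)])    # nothing fits; add another host

    return hosts

# Hypothetical workload mix on 8 GHz hosts with 20 percent headroom.
vms = {"web1": 2.0, "web2": 1.8, "db1": 4.5, "mail": 1.2, "batch": 3.0}
plan = plan_consolidation(vms, host_capacity=8.0)
print(f"{len(plan)} hosts needed")   # → 2 hosts needed
```

A real planning tool layers in the middleware, query-load and service-level factors the article mentions; the packing step above is only the skeleton.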
3. Embotics

Embotics V-Commander is designed to take a lot of the stress off an IT staff by automating the control of VMs according to policies based on performance or business criteria. V-Commander and the entry-level V-Scout (available in a free version) are both designed to manage the lifecycle of individual VMs or the virtual infrastructure as a whole.
As we noted in earlier coverage of the product, Version 2.0 of V-Commander is able to ride herd on VMware, Microsoft and Citrix Xen VMs, as well as share data with VMware’s VirtualCenter management application. The tool also includes brokers that can exchange both data and commands with mainstream management applications. Ad hoc and canned reports are designed to help rein in rogue VMs and sprawl by enforcing limits on VM growth and on the lifespan of those already running.
4. Marathon Technologies
Marathon stays near the top of many virtualization users’ lists of favorites by adding fault-tolerance, high-availability and disaster-recovery capabilities to infrastructures that were supposed to eliminate many of those risks by their very nature. The company’s everRun offers failover clustering; component-level fault tolerance that also protects storage and system components; and system-level control that backs up the memory in use by critical applications. everRun keeps a real-time copy of those applications’ in-memory data, so their workloads can be shifted to another VM with zero downtime and no loss of transactions or data. Marathon supported only Citrix VMs until January, when it announced a partnership with Microsoft to extend its capabilities to virtual infrastructures running on Windows Server 2008 with Hyper-V.
5. Neterion

Neterion, a network-interface card manufacturer, has been energetic in addressing a problem most virtualization and networking vendors have ignored: no matter how many VMs you squeeze onto a single physical server, each VM still needs to get data into and out of that server. Most physical servers do fine with a single 1Gbit/sec network interface card, but running many VMs on the same machine causes I/O bottlenecks that can gum up the works. Most VM users solve the problem by adding two, three or even four 1Gbit/sec NICs to the physical server. Neterion’s solution is to put a single 10Gbit/sec card on those servers, adding both a huge network pipe and I/O-regulation software designed to make it easier to juggle demands for bandwidth among the virtual switches VMs use as interfaces to their network connections, or to guarantee minimum bandwidth for specific applications. Check out CIO.com’s recent case study on how clothing company Loro Piana USA used Neterion’s 10Gbit/sec cards to its advantage.
The approach is apparently working, in both the real and virtual networking markets. Neterion sales are growing at more than 150 percent per year and its market share in 10Gbit/sec networking is 49 percent, according to a 2008 report from The Linley Group.
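The bandwidth-juggling idea Neterion is chasing can be modeled simply: honor each VM’s guaranteed minimum first, then split whatever link capacity remains in proportion to unmet demand. The sketch below is a generic allocation model under those assumptions, not Neterion’s actual scheduler; all the figures are made up.

```python
# Generic model of sharing one 10 Gbit/s link among VMs: guaranteed minimums
# first, then pro-rata division of leftover capacity by unmet demand.
# Not any vendor's real scheduler; VM names and figures are illustrative.

def allocate_bandwidth(link_gbps, vms):
    """vms maps name -> (guaranteed_min, demand), all in Gbit/s."""
    # Honor each VM's guaranteed minimum (capped at its actual demand).
    alloc = {name: min(g, d) for name, (g, d) in vms.items()}
    leftover = link_gbps - sum(alloc.values())
    # Split remaining capacity pro rata across unmet demand.
    unmet = {n: d - alloc[n] for n, (g, d) in vms.items() if d > alloc[n]}
    total_unmet = sum(unmet.values())
    if total_unmet > 0 and leftover > 0:
        for n, gap in unmet.items():
            alloc[n] += min(gap, leftover * gap / total_unmet)
    return alloc

vms = {"db": (3.0, 5.0), "web": (1.0, 4.0), "backup": (0.5, 6.0)}
print(allocate_bandwidth(10.0, vms))
```

The point of the model: with a single fat pipe, guarantees and proportional sharing can be enforced in one place, instead of being hard-wired by which of several 1Gbit/sec NICs a VM happens to use.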
6. Netuitive

Netuitive offers software that not only discovers, maps and monitors virtual machines and virtual infrastructures, but also learns how applications and VMs normally run and responds when it spots trouble. That, according to Netuitive, is a far more accurate way to monitor the health of a virtual infrastructure than systems that require IT managers to plug in performance metrics and respond to thresholds that may or may not reflect the real-world experience of the applications.
The ability to trigger a response to performance anomalies is especially interesting, Wolf says, though Netuitive’s software can still respond only to factors the VMs themselves can see. Adding the ability to respond to glitches or changes in storage-area networks or other systems a VM sees only as attached storage would be a huge improvement, he says. Netuitive is one of a number of vendors moving in that direction.
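The contrast with static thresholds is easy to see in miniature: instead of alerting when a metric crosses a fixed value an administrator typed in, learn the metric’s normal range from recent history and flag readings that fall far outside it. The sketch below is a bare-bones statistical illustration of that approach, not Netuitive’s proprietary analytics.

```python
# Minimal baseline-driven check: learn a metric's mean and spread from
# history, flag readings far outside that baseline. A generic illustration
# of anomaly detection, not any vendor's actual algorithm.

from statistics import mean, stdev

def is_anomalous(history, reading, sigmas=3.0):
    """Flag a reading more than `sigmas` standard deviations from baseline."""
    mu, sd = mean(history), stdev(history)
    return abs(reading - mu) > sigmas * max(sd, 1e-9)   # guard flat baselines

cpu_history = [22, 25, 24, 23, 26, 24, 25, 23, 24, 25]  # percent CPU, recent samples
print(is_anomalous(cpu_history, 24))   # → False (within normal range)
print(is_anomalous(cpu_history, 60))   # → True  (spike far outside baseline)
```

Note that a fixed 80-percent CPU threshold would have missed the 60-percent spike entirely, even though it is wildly abnormal for this workload — which is the argument for baselines.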
7. Reflex Systems
Reflex Systems hits the three characteristics at the top of the 2008 virtualization hot-button list: security, automated management and cross-platform support. Its Virtual Management Center (VMC) includes modules for configuration management and provisioning, compliance monitoring and reporting, VM lifecycle management, and security and performance management for both VMs and the applications that run on them. The company’s Security Appliance runs an agent on each physical host, adding deep packet inspection, reporting and application control to VMC’s list of capabilities.
Reflex actually began with a toolset focused on security, then added further management capabilities during the last year or two, the most recent wave of which was announced in January. Reflex’s list of capabilities is impressive, as is the direction of its product development; but its real-world capabilities vary by platform, according to Wolf. The full list of VMC features is available only for VMware’s ESX, for example; it’s not clear which are currently available for Microsoft Hyper-V or Citrix XenServer, he says.
8. Scalent Systems
Moving VMs from one physical server to another can be an effective way to match demand for computing power with supply, but not if the VMs forget where their data is stored and where to find a good network connection after they land in their new home. Scalent agents sit on each physical and virtual machine, maintaining a persistent memory of its network and storage relationships. The agents can manage local performance or configuration under the direction of a central controller and scripts written by IT managers, who can build those scripts using Java, Web services, third-party management software and Scalent’s software development kit. The concept isn’t rocket science, but the execution can be tricky, and Scalent’s approach works well for a number of clients who use it to dynamically provision and manage their virtual infrastructures, Wolf says. Scalent’s technology should continue to be useful as IT runs more apps in VMs, or what some call the internal cloud.
9. Third Brigade
Security specialist Third Brigade has expanded its host-based virtual-server security model to cover not only non-virtualized systems, but also cloud-based applications that live in shared infrastructures such as Amazon’s EC2. The company’s Deep Security offers firewall, intrusion protection and integrity monitoring, plus validation of compliance with higher-level requirements such as the Payment Card Industry Data Security Standard (PCI DSS), which secures remote debit-card and other electronic funds transfers. Deep Security lives on a centralized server and installs clients on VMs and guest OSes in both internal and external cloud infrastructures, Burton Group’s Wolf says. Centralizing security saves the effort of building separate clusters of VMs or physical servers to isolate certain applications, and the ability to expand into the cloud gives Third Brigade flexibility few other security companies can match, he says. It also offers a free version, VM Protection, for up to 100 virtual machines, a good move that has been well received among both existing and potential customers, Wolf says.
10. VKernel

VKernel’s tools address a major blind spot in virtual infrastructures: not just the difficulty of knowing which VMs are running at any given time, but what CPU, storage, network and other data-center resources they’re using, individually or as a group. The software runs on a SuSE Linux kernel as a VM within VMware’s ESX to measure the resources the VMware setup is using, and generates chargebacks to make accounting and usage reporting on virtual machines simpler. (We expect chargeback capabilities to emerge as a more-demanded management-tool feature this year.) Its ability to extrapolate from data on past performance using “predictive analytics” is useful in capacity planning as well, helping IT groups hold off on additional server purchases, eliminate bottlenecks as they develop, and plan for peaks in usage from particular user groups or seasonal changes in business, according to Bowker of ESG.
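At its simplest, a chargeback is just measured resource consumption multiplied by per-unit rates to produce a bill per business group. The sketch below shows that arithmetic; the rates, metrics and usage records are made up for the example and have nothing to do with VKernel’s actual pricing model.

```python
# Illustrative chargeback arithmetic: per-unit rates times measured usage,
# summed per business group. Rates and usage figures are hypothetical.

RATES = {"cpu_hours": 0.05, "gb_storage": 0.10, "gb_transfer": 0.02}  # $ per unit

def chargeback(usage_by_group):
    """usage_by_group: {group: {metric: amount}} -> {group: dollar total}."""
    return {
        group: round(sum(RATES[m] * amt for m, amt in usage.items()), 2)
        for group, usage in usage_by_group.items()
    }

# One month of metered usage for two hypothetical business groups.
usage = {
    "finance":   {"cpu_hours": 720, "gb_storage": 500, "gb_transfer": 120},
    "marketing": {"cpu_hours": 240, "gb_storage": 200, "gb_transfer": 300},
}
print(chargeback(usage))
# → {'finance': 88.4, 'marketing': 38.0}
```

The hard part a product solves is not this multiplication but the metering: attributing CPU, storage and network consumption to the right VM and the right group in the first place.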