by John Moore

How to Evaluate High Availability Options for Virtualized IT Environments

Apr 29, 2013 | 5 mins
Server Virtualization | Servers | System Management

One upside to virtualization is that it puts more applications on fewer servers. One downside is that the availability of those servers becomes of greater importance.

As virtualization puts more application eggs into fewer server baskets, design principles such as high availability require more attention.

Consultants and IT managers working in the virtualization field say enterprises take a variety of approaches to keeping their apps online. Many use high availability software associated with the widely used VMware hypervisor. Various forms of clustering also aim to minimize downtime. Efforts to boost the reliability of physical hardware are also underway.

More Apps, Fewer Servers Equals Demand for High Availability

The high availability and fault tolerance revivals stem from a couple of factors, among them the greater concentration of applications on consolidated servers. “If you add enough non-mission-critical applications to the same server, it makes it mission critical,” says Nigel Dessau, chief marketing officer at Stratus.

Overall, continuous availability has become much more important in the cloud era compared with the days of client/server, Dessau contends. Back then, fat clients shared more of the workload, so users could keep working on their PCs in the event of a server crash. Apps that couldn’t afford downtime were backed by fault tolerant servers, which Dessau says only represented perhaps 5 percent of the market.


The market reach of availability technology may grow amid virtualization and cloud computing, however. “In the cloud world, you have thin clients or even mobile devices,” Dessau says. “In this case, when the server is not available, [users] become severely limited as to what they can do. The server’s availability is much more essential.”

Kris Lamberth, chief technology officer at Paranet Solutions, a Dallas-based CRM and IT outsourcing company that offers virtualization services, notes that customers are becoming increasingly aware of the cost of downtime. “I think everybody is looking for high availability today just because everyone is so dependent on IT,” he says.

An enterprise’s actual investment in availability ultimately depends on the importance of a given application—and how eager the firm is to pay for uptime.

VMware High Availability Approach Focuses on Clusters

Availability options for virtualized customers often start at the hypervisor level. The VMware vSphere High Availability feature, for example, checks the health of virtual machines and detects problems.

If an operating system failure is detected, VMware HA automatically restarts the virtual machine. If a virtual machine’s underlying physical server fails, VMware restarts the application on another server. (A cluster of vSphere ESXi hosts makes that failover possible.)
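The restart-on-failure pattern the article describes can be sketched in a few lines. This is an illustrative simulation only, not VMware's implementation; the heartbeat timeout, data structures, and field names are assumptions made for the example.

```python
import time

def monitor(vms, hosts, heartbeat_timeout=30):
    """Illustrative HA loop: restart VMs whose guest OS stops sending
    heartbeats, and fail VMs over when their physical host dies."""
    for vm in vms:
        if time.time() - vm["last_heartbeat"] > heartbeat_timeout:
            # Guest OS failure: restart the virtual machine in place.
            vm["state"] = "restarting"
        if not hosts[vm["host"]]["alive"]:
            # Host failure: restart the VM on a surviving cluster host.
            survivors = [h for h, info in hosts.items() if info["alive"]]
            vm["host"] = survivors[0]
            vm["state"] = "restarting"
```

The key point the sketch captures is that both failure modes end in a restart, which is why this class of protection tolerates a brief outage rather than preventing one.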

Milton Lin, master cloud specialist at Force 3, a Crofton, Md.-based systems integrator, says VMware tends to be the hypervisor of choice among Force 3 clients due to its simple availability approach. “For any VMware administrator or environment with VMware already, VMware HA is easy.”

Lin says VMware HA calls for an n+1 design to ensure sufficient resources for failover or server maintenance. In a two-host cluster, for instance, the combined workload should not exceed the CPU and memory capacity of a single host. In that case, 50 percent of cluster resources would be reserved for failover.

A cluster with more hosts offers higher utilization and diminishes the impact of a host failing, Lin notes. A four-host cluster, for instance, would reserve 25 percent of its resources for failover.
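The arithmetic behind those reservation figures is simple: tolerating one host failure in an n-host cluster means holding back 1/n of its capacity. A minimal sketch (the function name and parameters are ours, not VMware's):

```python
def failover_reservation(num_hosts, host_failures_tolerated=1):
    """Fraction of cluster CPU/memory to hold in reserve so the
    surviving hosts can absorb the failed hosts' workloads."""
    return host_failures_tolerated / num_hosts

# Two-host cluster: half the cluster sits in reserve.
# Four-host cluster: only a quarter does.
```

This is why larger clusters achieve higher utilization: the reserved fraction shrinks as hosts are added, while a single host failure removes a smaller share of total capacity.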

Christian Teeft, vice president of engineering at Latisys, says solutions such as VMware HA are a good fit for customers who seek availability but can tolerate a brief interruption while workloads are reloaded and started on another server. Latisys, a multi-tenant data center company, builds availability into its solutions.

Teeft says some customers—those with big data analytics applications, for example—may run hundreds of virtual machines for number crunching, and the loss of a node in the cluster won’t severely impact overall performance. For organizations with that profile, Latisys builds data center solutions around VMware HA and the Hewlett-Packard Converged Infrastructure platform, Teeft notes.

SQL Server Clusters Provide More Uptime, More to Manage

Customers, however, can move beyond VMware HA for more protection. Lin suggests that customers could look into an additional cluster for a specific application such as SQL Server.


A cluster, in this scenario, would consist of an active node running a SQL Server database and a passive node on standby. The passive node starts SQL Server when the active node fails. Customers can use Microsoft clustering technology or third-party software such as Vision Solutions’ Double-Take Availability, Lin says.
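The active/passive arrangement Lin describes can be sketched as follows. This is a toy model of the failover handoff only; real clustering products (Microsoft's failover clustering, Double-Take Availability) also handle quorum, shared storage, and split-brain protection, none of which appears here, and the class and node names are illustrative.

```python
class ActivePassivePair:
    """Toy active/passive application cluster: the passive node
    starts the service only when the active node fails."""

    def __init__(self):
        self.active = "node_a"   # currently running SQL Server
        self.passive = "node_b"  # standby, service stopped

    def on_active_failure(self):
        # Promote the standby and start the database service there.
        self.active, self.passive = self.passive, self.active
        return f"starting SQL Server on {self.active}"
```

Because the standby tracks the specific service, this kind of cluster can react to a database-level failure that a hypervisor-level monitor, which only sees the VM, would miss.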

A cluster specific to SQL Server will provide more uptime than what VMware offers out of the box, Lin says. Clusters of this kind are more tightly integrated with the application itself, he adds, whereas VMware HA “has no idea what application you’re running.”

The drawbacks to this option, Lin says, include the creation of another cluster that customers must manage. If a third-party product is used, there’s also the complexity of an additional piece of software.

Several other vendors have high availability clustering offerings. These include Dell, Enhance Technology, Hewlett-Packard, IBM, Intel and Proxmox.