If certain server and virtualization vendors get their way, end-user companies will be buying many fewer individual servers in a few years, and many more integrated packages of infrastructure.
Virtualization has allowed many companies to reduce the number of their physical servers, but it has increased demand for compute power, I/O capacity and storage, according to an April report from IDC.
Cisco Systems and Hewlett-Packard responded by creating “converged infrastructure” packages that include servers, storage and networking components attached to a backplane that makes the whole package one big chunk of compute power that can be divided easily among virtual or physical servers.
That converged approach is a huge advantage, for reasons that are technical, financial and, in some cases, surprisingly mundane, according to Jim Levesque, the systems programmer who manages the virtualized infrastructure for the LA Dept. of Water and Power (LADWP).
The most efficient way to pack servers into a data center right now is to use blades fixed in a chassis, according to Levesque. But a loaded chassis generates tremendous heat, uses a lot of energy and is a nightmare to install or reconfigure, because the back becomes a tangle of wiring that is extremely labor-intensive to manage.
Virtual I/O servers can make that simpler because they connect to each blade directly, rather than requiring each to have an HBA or NIC installed. Levesque cut networking hardware costs, increased per-server bandwidth and reduced support time for LADWP’s 300 physical servers and 350 VMs using virtual I/O servers from Xsigo Systems.
Other User Advantages
Virtual I/O is just one example of a function that can be stripped out of server designs in systems built specifically for virtual infrastructures, however.
“In traditional server design, each block or component had all the bits and pieces required to operate as a standalone unit,” according to Craig Thompson, VP of product marketing for I/O server vendor Aprius. “When we look at the direction of the OEMs, the server is quickly becoming a CPU, memory and a couple of high-bandwidth I/O ports that connect to shared resources on a network fabric of some sort.”
Forrester analyst John Rymer calls the concept “distributed virtualization” when applied to application servers. The goal: deliver high performance by virtualizing everything an application needs and provide it dynamically when it’s needed.
In the hardware world, it's harder to convince end users that the technical challenge is worth the effort if they don't understand the advantages of virtual I/O, Thompson says. Neither Aprius nor Xsigo has made as much progress as it wanted to along those lines, company spokespeople said.
Packaging virtual I/O and innovative server design has been much easier for HP and Cisco, whose pre-fab virtual infrastructures let customers buy servers in whatever quantity they want and install them as a unit, rather than building up one chassis at a time.
“We make the whole blade system enclosure look like one server to the outside world,” according to Gary Thome, VP of strategy and architecture for Hewlett-Packard’s Industry Standard Servers and Software group. “Instead of having to unbox everything and look at the MAC address on the NICs before you can configure anything, you can do all these complex network configurations from a console and decide how many Ethernet or Fibre Channel connections you want on the server and allocate how much bandwidth each one gets.”
HP and Cisco Play Cost, Complexity Angles
The approach has been successful for both HP and Cisco, attracting customers who want to build out virtual infrastructures and understand the complexity of doing so, according to an April IDC report on Cisco’s Data Center roadmap.
Both companies are still fighting the perception that blade servers—which make up only 15 percent of the total market—are more expensive than other servers and that consolidated infrastructure products would be more expensive still, the report says.
Buying integrated chunks of data center is less expensive than buying each server and component separately, however, according to Paul Durzan, director of product management for Cisco’s Server Access and Virtualization Group.
“In the past you’d want two Ethernet and two Fibre Channel connections per server, which adds up to about $45,000 per chassis in switch costs,” Durzan says. “One chassis with Fabric Extenders is going to cost around $70,000, but the extenders themselves are only $4,000. For two chassis it’s $74,000; for three it’s $78,000.”
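Durzan's arithmetic can be sketched as a simple cost model. The $45,000, $70,000 and $4,000 figures come from his quote above; the assumption that each model scales linearly with chassis count, and the function names, are illustrative only:

```python
# Rough cost comparison based on Durzan's quoted figures (illustrative sketch).
# Assumption: costs scale linearly with the number of chassis.

TRADITIONAL_SWITCH_COST = 45_000  # per chassis: two Ethernet + two FC connections
FABRIC_BASE_COST = 66_000         # first chassis ($70,000) minus its $4,000 extender
FABRIC_EXTENDER_COST = 4_000      # one extender per chassis

def traditional_cost(chassis: int) -> int:
    """Switch cost when every chassis carries its own switching."""
    return chassis * TRADITIONAL_SWITCH_COST

def fabric_extender_cost(chassis: int) -> int:
    """Shared fabric plus one low-cost extender per chassis."""
    return FABRIC_BASE_COST + chassis * FABRIC_EXTENDER_COST

for n in (1, 2, 3):
    print(f"{n} chassis: traditional ${traditional_cost(n):,}, "
          f"fabric extenders ${fabric_extender_cost(n):,}")
# 1 chassis: traditional $45,000, fabric extenders $70,000
# 2 chassis: traditional $90,000, fabric extenders $74,000
# 3 chassis: traditional $135,000, fabric extenders $78,000
```

The model makes Durzan's point visible: the fabric approach costs more for a single chassis, but each additional chassis adds only the price of an extender, so the crossover comes at the second chassis.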
HP’s approach also costs less in total, and saves costs in power and support. “It’s the exact opposite of what you might think,” Thome says.
A June IDC report based on interviews with three HP customers found the company's Converged Infrastructure systems cut administration and support time by as much as 30 percent.
More important than support or total-systems cost, however, is the ability to scale to support ever-larger virtual infrastructures and reassign resources such as storage, bandwidth or even memory according to need, rather than physical location, according to Sumit Dhawan, vice president of product marketing for Citrix XenDesktop.
“The landscape for servers and storage is changing very rapidly and the infrastructure required per user needs to be reduced, perhaps be cut in half, in the very near future. That is one of the issues customers will have to overcome,” Dhawan says.