by Deni Connor

I/O Virtualization: How to Ease Network Congestion Caused by Virtualized Servers

May 06, 2008

Virtualized servers put big I/O demands on enterprise networks, but new I/O virtualization technologies from vendors including Gear6 and Xsigo can help ease the traffic jam. Bottom-line benefits: improved application speeds for users and reduced cabling and networking costs.

Virtual servers may make the utilization of physical servers more efficient, but that efficiency comes with a price. The physical servers that support virtual machines (VMs) have to be more powerful. Less obviously, they also need better input/output (I/O) capability than standalone machines, which means more cabling to connect the virtual machines to the network switch and more switch ports to accommodate the increased number of virtual machines contending for the network’s resources.


“Virtualization is about consolidation and concentration,” says Bernd Herzog, an analyst at research firm Application Performance Management Experts. “It takes many disparate and distributed systems and concentrates them together onto a relatively few physical servers, and therefore onto a much smaller set of network pipes and storage area network (SAN) access infrastructure.”

According to Gartner, four million virtual machines will exist by 2009. IDC (a sister company to CXO Media) predicts that by 2010, 14.6 percent of servers shipped will host VMs.

A number of companies are addressing the impact of server virtualization on the IP and storage networks. Gear6 and Xsigo have both introduced hardware products in recent months that virtualize I/O operations, much as VMware, Citrix XenServer and Microsoft Virtual Server allow the virtualization of the server’s CPU.

Each has taken a different approach to virtualizing I/O between servers and the network.

Gear6’s CACHEfx appliance, which was introduced in May 2007, uses random access memory to provide a virtualized I/O environment for data that applications need to retrieve and process quickly.

As server speeds increase and as more virtual machines are placed on physical servers, applications become I/O bound.

“Pre-virtualization, IT shops had one physical server with one I/O channel,” says Gary Orenstein, VP of marketing for Gear6. “There was complete synchronization between the application, the operating system, the motherboard, the I/O driver and the I/O link. As soon as they put multiple virtual machines on that link, none of them is coordinated with the other. If they had virtual machines, all the VMs were grasping for that link simultaneously without coordination.”
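The idea behind a RAM-based appliance like Gear6’s is that a small, hot working set of blocks can be served from memory instead of hitting the storage network on every read. The sketch below is a generic least-recently-used read cache, purely to illustrate the concept; it is not Gear6’s implementation, and the class and block names are hypothetical.

```python
from collections import OrderedDict

class RamReadCache:
    """Toy LRU read cache: serve hot blocks from RAM, fall back to storage."""
    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.backing = backing_store       # dict-like: block_id -> data (the "slow" tier)
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # mark most recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]          # slow path: storage I/O
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

# Repeated reads of a small working set are absorbed by RAM,
# sparing the storage network the traffic.
store = {i: f"block-{i}" for i in range(100)}
cache = RamReadCache(capacity_blocks=10, backing_store=store)
for _ in range(5):
    for i in range(10):
        cache.read(i)
# Only the first pass misses; the next four passes hit entirely in RAM.
```

With many VMs sharing one physical I/O link, absorbing even a fraction of reads in memory directly reduces contention on that link.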

Xsigo takes a different approach: Its I/O Director, which debuted at VMworld last fall in San Francisco, lets IT control I/O between servers and the network by creating virtual I/O channels that can be used by applications and other processes.

Virtualizing I/O with the I/O Director, according to company estimates, lets IT departments reduce cabling between servers and the network by as much as 70 percent and cut their connectivity costs to network and storage switches by up to 50 percent.

Dr. James Zhu, CIO of Alcatel Shanghai Bell in China, is planning to deploy a Xsigo I/O Director in his data center when it opens this June.

“The value we recognize is that the I/O Director can reduce our cabling and network interface card cost,” Zhu says. “More importantly, we can change our server’s I/O configuration without changing the network configuration and [in doing so] reduce our operating and capital expense costs.”

Xsigo estimates that with a $6,000 multi-core server containing ten Gigabit Ethernet adapters and two Fibre Channel host bus adapters, and electricity costing 17.5 cents per kilowatt-hour, the average user of an I/O Director will save more than $379,000 on an initial investment of $780,000.
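The article does not disclose how Xsigo’s model arrives at those figures, but the general shape of such a consolidation estimate can be sketched: price out the adapters and matching switch ports a server fleet needs before and after I/O virtualization. Every per-component price below is a hypothetical assumption, not Xsigo’s data; only the per-server adapter counts (ten GigE, two FC HBAs) come from the article.

```python
# Hypothetical illustration of an I/O-consolidation savings estimate.
# All unit prices and the fleet size are assumptions for illustration only.

def connectivity_cost(servers, nics_per_server, hbas_per_server,
                      nic_cost=100, hba_cost=800,
                      eth_port_cost=300, fc_port_cost=1500):
    """Total cost of server-side adapters plus the switch ports they consume."""
    adapters = servers * (nics_per_server * nic_cost
                          + hbas_per_server * hba_cost)
    switch_ports = servers * (nics_per_server * eth_port_cost
                              + hbas_per_server * fc_port_cost)
    return adapters + switch_ports

servers = 50
# Before: ten Gigabit Ethernet adapters and two FC HBAs per server (per the article).
before = connectivity_cost(servers, nics_per_server=10, hbas_per_server=2)
# After: virtual I/O replaces them with, say, one shared fabric link per server
# (assumed pricier per link and per port, but far fewer of each).
after = connectivity_cost(servers, nics_per_server=1, hbas_per_server=0,
                          nic_cost=500, eth_port_cost=500)
savings = before - after
print(f"before=${before:,}  after=${after:,}  savings=${savings:,}")
```

The real model would also fold in power (at the quoted 17.5 cents per kilowatt-hour), cabling and management overhead; the sketch only shows the adapter-and-port term.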

Upcoming 10 Gigabit Ethernet adapters from Neterion, set to debut in servers in the second half of 2008, should also help improve the I/O situation on servers running many virtual machines. The new Neterion X3100 Series products provide 17 independent I/O paths in the adapter’s silicon, so the various VMs and applications won’t have to fight for one swath of I/O bandwidth as they do today. Additional I/O bandwidth can be routed to applications when needed; IT groups can set quality of service (QoS) levels for different applications and borrow bandwidth when necessary, Neterion President Dave Zabrowski told CIO in a preannouncement briefing in February. The marketing lingo will be that the server has a “virtualized NIC (network interface card)” or “VNIC,” he says.
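The set-a-guarantee-then-borrow behavior Zabrowski describes can be sketched in a few lines: each VM is promised a minimum share of the link, and capacity left unused by underloaded VMs is lent to those that want more. This is a generic illustration of the allocation idea, not Neterion’s scheduler; the VM names and numbers are invented.

```python
# Sketch of QoS-style bandwidth allocation with borrowing.
# Illustrative only; not Neterion's actual algorithm.

def allocate(link_gbps, guarantees, demands):
    """guarantees/demands: dicts of vm -> Gbps; guarantees sum to <= link_gbps."""
    # Step 1: each VM gets the lesser of its guarantee and its actual demand.
    alloc = {vm: min(guarantees[vm], demands[vm]) for vm in guarantees}
    spare = link_gbps - sum(alloc.values())
    # Step 2: lend spare capacity, proportionally, to VMs still wanting more,
    # never exceeding what a VM actually asked for.
    wanting = {vm: demands[vm] - alloc[vm]
               for vm in alloc if demands[vm] > alloc[vm]}
    total_want = sum(wanting.values())
    for vm, want in wanting.items():
        alloc[vm] += min(want, spare * want / total_want)
    return alloc

guarantees = {"web": 4.0, "db": 4.0, "backup": 2.0}   # sums to the 10Gbps link
demands    = {"web": 2.0, "db": 6.0, "backup": 1.0}   # db wants more than its share
alloc = allocate(10.0, guarantees, demands)
# web and backup underuse their guarantees; db borrows the slack.
```

Because the web and backup VMs leave part of their guarantees idle, the database VM temporarily borrows that headroom, which is the behavior the article attributes to per-application QoS on the adapter.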