This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
A private cloud architecture leverages the power of end-to-end virtualization so workloads can be fluidly distributed among a pool of servers, but this ideal cannot be achieved with traditional network infrastructure.
Conventional server I/O is costly, complex and inflexible, and results in applications being effectively locked to specific groups of servers. A fresh approach to infrastructure is required, one that leverages virtual I/O technologies.
Two hardware layers form the foundation of the private cloud: virtualized servers and a virtualized infrastructure. Server virtualization lets you run any application on any server. Virtual infrastructure lets you flexibly link those servers to whatever network and storage resources are required.
Think of it as universal, any-to-any connectivity. Any server, regardless of vendor, can be connected to any data center resource -- regardless of interface type -- and that connectivity can be managed in real time on live servers.
Traditional server I/O was not designed for this. It was designed for static servers, each configured with the I/O needed for that device's specific applications within the three-tier data center model.
Private clouds completely change the traditional deployment model, thus creating the need for a new infrastructure model. The objective of a private cloud is to create a dynamic pool of compute resources that can be deployed as needed: any application on any server, and any collection of servers assigned to any group of users. Think of it as dynamically configured virtual data centers within the data center. But to achieve this objective, servers need software-configurable connections to all networks and storage.
Virtual I/O technologies are available to solve this problem. Virtual I/O is a hardware and software solution that provides data centers with a simpler, converged infrastructure. Instead of deploying multiple I/O cards and cables to every server, virtual I/O allows data center managers to configure connections flexibly in software, up to 64 isolated connections per device.
All elements in the data center -- all Ethernet networks and all storage types, including Fibre Channel, iSCSI and NAS -- can be connected to blade and rack servers via a single, software-configurable fabric. The result is 70% fewer cards, cables and switch ports and 50% less infrastructure cost. Most importantly, you achieve the flexible configuration capabilities essential to the private cloud.
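The savings figures above follow from simple arithmetic: converging several single-purpose adapters onto one software-configurable fabric shrinks the card, cable and switch-port counts in proportion. The sketch below is illustrative only, using hypothetical per-server adapter counts (not figures from any specific deployment); with two Ethernet and two Fibre Channel cards per server, convergence to a single adapter yields a 75% reduction, in the same range as the 70% cited above.

```python
# Illustrative arithmetic only: a hypothetical rack of servers, each with
# separate Ethernet and Fibre Channel adapters, converged onto one fabric.
def infrastructure_counts(servers, eth_cards=2, fc_cards=2):
    """Per-server I/O component counts before and after convergence."""
    per_server = eth_cards + fc_cards
    before = {
        "cards": servers * per_server,
        "cables": servers * per_server,        # one cable per card
        "switch_ports": servers * per_server,  # one port per cable
    }
    # With virtual I/O, each server needs one converged adapter and cable.
    after = {
        "cards": servers,
        "cables": servers,
        "switch_ports": servers,
    }
    return before, after

before, after = infrastructure_counts(servers=100)
reduction = 1 - after["cables"] / before["cables"]
print(f"Cables: {before['cables']} -> {after['cables']} ({reduction:.0%} fewer)")
# Cables: 400 -> 100 (75% fewer)
```

The exact percentage depends, of course, on how many adapters each server carried before convergence.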
Virtual I/O technologies: What to look for
You'll find virtual I/O solutions from several system vendors and from a few vendors focused on the connectivity business. What should you look for when comparing these solutions? Here are the key features that underpin the essential elements of the private cloud:
1) Flexible any-to-any connectivity
Top virtual I/O solutions let you configure connections on live servers with no downtime. This is critical because virtualized servers may run 10 to 20 or more applications on each physical host; if a host must be taken out of service to perform a configuration task, it could take hours to evacuate the device, configure it, then bring it back up and re-populate it. Multiply that time across the number of servers in the data center and even a small task becomes a major burden. With dynamic provisioning, it can be done across hundreds of servers simultaneously.
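To make the contrast concrete, here is a minimal sketch of what software-defined provisioning looks like from an operator's point of view. The `Server`, `VirtualNIC` and `provision` names are hypothetical, not any vendor's actual API: the point is that the same connection definition is pushed to hundreds of live hosts in one pass, with no evacuation or reboot step.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualNIC:
    name: str
    network: str

@dataclass
class Server:
    hostname: str
    vnics: list = field(default_factory=list)

    def attach(self, vnic: VirtualNIC):
        # With virtual I/O, the connection is defined in software and takes
        # effect on the live server -- no evacuation or downtime required.
        self.vnics.append(vnic)

def provision(servers, vnic_name, network):
    """Push the same connection definition to every server in one pass."""
    for server in servers:
        server.attach(VirtualNIC(vnic_name, network))

# Provision a backup network across a 200-host fleet in one operation:
fleet = [Server(f"host{i:03d}") for i in range(200)]
provision(fleet, "vnic-backup", "backup-vlan")
print(all(s.vnics[0].network == "backup-vlan" for s in fleet))  # True
```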
2) Network isolation
Top virtual I/O solutions let you configure multiple isolated network connections within a single fabric. This means you can configure multiple Layer 2 environments within a group of servers, or even within a single server. All I/O is converged to a single cable per server, without violating any network isolation requirements. This is not true for all virtual I/O solutions. In some cases, the hosts share a single Layer 2 switch, which means you will need separate converged networks for each Layer 2 environment, thus limiting their usefulness in a private cloud. For true "any-to-any" connectivity, look for the flexibility to connect from any server to any network segment.
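The isolation property described above can be sketched as a toy model, assuming a fabric that tags each virtual connection with an isolated Layer 2 segment and forwards traffic only between endpoints on the same segment (the class and names below are illustrative, not a real product API):

```python
# Toy model: each virtual connection is tagged with a Layer 2 segment; the
# fabric only "forwards" between endpoints that share a segment, even though
# all connections ride a single converged cable per server.
class ConvergedFabric:
    def __init__(self):
        self.endpoints = {}  # (server, vnic) -> segment

    def connect(self, server, vnic, segment):
        self.endpoints[(server, vnic)] = segment

    def can_communicate(self, a, b):
        return self.endpoints[a] == self.endpoints[b]

fabric = ConvergedFabric()
# Two isolated environments on the same server, over one physical cable:
fabric.connect("host001", "vnic0", "prod-l2")
fabric.connect("host001", "vnic1", "dev-l2")
fabric.connect("host002", "vnic0", "prod-l2")

print(fabric.can_communicate(("host001", "vnic0"), ("host002", "vnic0")))  # True
print(fabric.can_communicate(("host001", "vnic1"), ("host002", "vnic0")))  # False
```

A solution that instead hangs all hosts off one shared Layer 2 switch cannot express the second, isolated segment without a separate converged network.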
3) High bandwidth
Today's multi-core servers can drive a lot of bandwidth, particularly when running a large number of virtual machines or executing I/O-intensive tasks such as VMotion or backups. Some virtual I/O solutions limit bandwidth to only 10Gb per link, which is less than half the capacity of many servers. The top virtual I/O solutions deliver bandwidth up to 40Gb/s per link, exceeding the capacity of the server itself. This ensures that the I/O link never becomes a performance bottleneck.
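A back-of-the-envelope check shows why link capacity matters. The VM count and per-VM bandwidth below are assumed figures for illustration, not measurements: a host running 20 VMs at roughly 1 Gb/s each during a backup window saturates a 10Gb link but leaves headroom on a 40Gb link.

```python
# Back-of-the-envelope check: aggregate VM demand vs. link capacity.
# The workload figures are illustrative assumptions, not measurements.
def link_is_bottleneck(vm_count, gbps_per_vm, link_gbps):
    """True if aggregate VM bandwidth demand exceeds the link's capacity."""
    return vm_count * gbps_per_vm > link_gbps

# A host running 20 VMs at ~1 Gb/s each during a backup window:
print(link_is_bottleneck(20, 1.0, 10))  # True: a 10Gb link saturates
print(link_is_bottleneck(20, 1.0, 40))  # False: a 40Gb link has headroom
```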
4) Future proof
A key benefit of the private cloud is the ability to run any application on any machine. But this requires that all servers have access to a common set of I/O resources. What happens when something changes, such as an upgrade to 10G Ethernet or 8G Fibre Channel? Do you then upgrade all servers? Or do you minimize cost and disruption by only upgrading a portion of them? If you chose the latter, you've now limited the flexibility of your cloud; you no longer have the same I/O on all servers. Look for a virtual I/O solution that will let you propagate new capabilities out to all servers without having to modify the server or its cabling. That will maintain the integrity of your cloud down the road while saving you time and money.
5) Open standards-based
When shopping around, select a virtual I/O solution that is built on open standards and proven interoperable across systems from all leading vendors. This way, you can reuse your existing hardware and achieve infrastructure convergence without being forced to standardize on one vendor. This will reap big dividends down the road when it's time for your next equipment purchase and you have the flexibility to shop among multiple vendors.
As we move to the next level of virtualization -- to the private cloud model -- we are finally fulfilling the true promise of virtualization. The result is dramatically higher efficiency, thanks to as much as 10x better equipment utilization and far greater compute density. Best of all, the user experience improves while costs come down -- the ultimate win-win.
Achieving this requires rethinking the infrastructure. Just as we re-engineered the servers with server virtualization a few years ago, we now need to rethink the server connectivity. Fortunately, the needed virtual I/O technologies are here, and they are proven in large scale deployments. The payoff is incredible savings and flexibility for companies of all sizes. The challenge lies simply in transitioning from a traditional infrastructure and selecting the best virtual I/O solution.
Xsigo Systems Inc. is the leader in virtualized infrastructure, helping organizations reduce costs and improve business agility. Based on patented and award-winning technology, Xsigo offers a complete family of infrastructure virtualization products, including the Xsigo I/O Director and the Xsigo Server Fabric. For more information, visit www.xsigo.com.
This story, "Leveraging Virtual I/O Technologies to Achieve Converged Infrastructures: the Road to the Private Cloud," was originally published by Network World.