Even if you've never heard of Flextronics, you have probably used or benefited from one of the products this global electronics manufacturer has helped assemble -- a Microsoft Xbox 360, for example, or components that show up in everything from Cisco and Motorola devices to aerospace and automotive equipment.
With 250,000 employees in 30 countries, Flextronics has more than 10,000 servers, about half of which are virtualized in two major data centers - one in the U.S. and the other in Hong Kong - while the rest are scattered across 130 locations around the world. CIO David Smoley - who will soon become CIO of AstraZeneca - says his overarching goal is to migrate as much hardware as possible from these individual sites to the data center hubs: "The opportunity to push virtualization to its limits in an effort to improve our ability to supply resources to our users is super important to us."
Smoley and other executives envision a virtual data center nirvana where behind-the-firewall IT resources handle mission-critical workloads and a connected, secure public cloud provides additional capacity.
Getting there is a challenge, but what's making it feasible is the convergence of multiple technologies. Virtualization has rewritten the rules for compute and is now making waves in storage and networking, and cloud computing and converged infrastructure options are coming on strong. Yet speed bumps abound. IT shops are wrestling with everything from heterogeneous hypervisor environments to concerns about security, reliability, availability, performance and even staffing and expertise. So where are we in getting to that nirvana vision? "It's not a reality yet," Smoley says, "but we're getting close."
Virtualization and its vices
Forrester virtualization analyst David Bartoletti says 59% of compute workloads are virtualized in enterprise data centers today, up from 45% just two years ago, and climbing to an expected cap of about 80% of workloads in the coming years. "Most of the easy workloads have been virtualized," he says. There will always be some apps that just run better on dedicated hardware, Bartoletti says, but the majority will be virtualized given the cost, efficiencies and agility that abstracting the compute layer from hardware provides.
The virtualization market has undergone dramatic shifts in the past 12 to 18 months. VMware, an EMC company, still dominates the industry, but other hypervisor platforms are making inroads - most notably, Microsoft Hyper-V. Independent analyst Zeus Kerravala found that 20% of VMware customers he surveyed last year were already implementing Hyper-V. This increasingly multi-hypervisor world brings with it new challenges.
For example, VMware's vMotion allows for the transfer of virtual machines and applications from one cluster of virtualized servers to another, but only if both clusters are running VMware. Transferring active virtual machines across disparate hypervisors requires additional tools. Smoley, for example, is piloting software from HotLink, a startup founded in 2010 by former VMware executives, to manage heterogeneous hypervisor environments across the company's sites around the globe.
"A beautiful end state would be a data center with different hardware pieces throughout, all brought together under a software layer so resources can be placed where they are most needed," Forrester's Bartoletti says. But a world of increasing heterogeneity on the hypervisor layer is making that vision more difficult to attain, not easier.
[PRODUCT SAMPLER: 9 hot offerings from network virtualization, SDN and data center startups]
Bringing up the rear
While compute virtualization has become mainstream, network and storage virtualization are less mature, even though the latter has been around for some time. It turns out, this stuff can be pretty difficult to implement.
The fundamental idea of storage virtualization is similar to that of compute virtualization: storage no longer has to be dedicated to specific servers or even virtual machines; software instead pools storage resources and makes it possible to manage them centrally. As a result, heterogeneous storage components can be presented as a single resource to virtual machines, obviating the need to manage those disparate storage disks separately. Dru Borden, CEO of cloud storage provider Nirvanix, estimates that less than one-fifth of enterprise customers have true virtualized storage environments deployed - the market is still young.
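The pooling idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual API: heterogeneous backends register with a single logical pool, and volume allocations draw from whichever device has capacity, so consumers never deal with the individual arrays.

```python
# Hypothetical sketch of storage pooling: heterogeneous backends are
# presented as one logical resource. Real storage hypervisors also handle
# tiering, striping and failover; this shows only the abstraction.

class Backend:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb

class StoragePool:
    def __init__(self, backends):
        self.backends = backends

    def total_free_gb(self):
        # The pool reports one number, regardless of vendor or device count.
        return sum(b.free_gb() for b in self.backends)

    def allocate(self, size_gb):
        # Naive placement: put the volume on the emptiest backend.
        target = max(self.backends, key=Backend.free_gb)
        if target.free_gb() < size_gb:
            raise RuntimeError("pool exhausted")
        target.used_gb += size_gb
        return target.name

pool = StoragePool([Backend("emc-array", 500), Backend("commodity-jbod", 1000)])
vol_location = pool.allocate(300)   # lands on the larger, emptier backend
print(vol_location, pool.total_free_gb())
```

The point of the sketch is the single `allocate` entry point: whether the bytes land on a brand-name array or commodity disk is a placement decision hidden from the virtual machine.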
"When you're talking storage you're talking speeds and feeds," he says - more specifically, input/output operations per second (IOPS). Storage hypervisor vendors that enable resource pooling are reluctant to provide IOPS guarantees when managing competitors' hardware. And if storage virtualization providers can't guarantee performance, IT managers will be reluctant to virtualize storage for their IOPS-heavy tier-1 applications. The promise of storage virtualization is real, though: the ability to use commodity storage hardware instead of proprietary systems from big-name vendors like EMC or NetApp can yield savings of 30% to 60%, Borden says.
An alternative to on-premises storage virtualization is to use a cloud storage option, which is what Borden's Nirvanix offers. Customers can choose to have storage on their premises on hardware managed by Nirvanix, or in the company's cloud. Virtualized data can migrate between the on-premises hardware and the cloud. EMC, NetApp and other storage giants offer similar services. Borden argues this too is storage virtualization because the data is portable - it uses either internal hardware components or external cloud resources.
But this approach comes with the usual concerns about cloud - perceived security and multi-tenancy risks and questions about bandwidth needs. The potential gains in value, efficiency and agility are appealing, but implementing storage virtualization is easier said than done - hence the light market adoption.
Even less mature is network virtualization - the glue that promises to bring together all the elements in the virtual data center.
The fundamental premise of network virtualization is to centralize switching and routing control to make networks easier to manage, more dynamic and easier to scale. Network virtualization "will drastically reduce the provisioning time of network resources, and in an age where IT lives and dies by the time it takes to get things done, that's an advantage," says Martin Casado, VMware's chief networking architect and a pioneer of OpenFlow, a key protocol in the software-defined networking (SDN) world.
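The centralized-control idea behind that provisioning-time claim can be sketched as follows. This is an illustrative toy in the OpenFlow spirit, with hypothetical class and method names rather than any real controller's API: one controller computes match-and-action rules and pushes them to every switch, instead of an administrator configuring each device by hand.

```python
# Illustrative sketch of centralized SDN control: a single controller
# programs match -> action forwarding rules across the whole fabric.
# (Names are hypothetical; not a real controller API.)

class Switch:
    def __init__(self, dpid):
        self.dpid = dpid
        self.flow_table = {}          # match tuple -> action string

    def install_flow(self, match, action):
        self.flow_table[match] = action

class Controller:
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def provision_path(self, src, dst, out_port):
        # One call programs every switch - the provisioning-time win
        # Casado describes, versus box-by-box manual configuration.
        match = (src, dst)
        for sw in self.switches:
            sw.install_flow(match, f"output:{out_port}")

ctl = Controller()
s1, s2 = Switch("sw1"), Switch("sw2")
ctl.register(s1)
ctl.register(s2)
ctl.provision_path("10.0.0.1", "10.0.0.2", out_port=3)
print(s1.flow_table[("10.0.0.1", "10.0.0.2")])  # same rule lands on every switch
```

In a real OpenFlow deployment the controller speaks a wire protocol to hardware or virtual switches, but the division of labor is the same: policy lives centrally, forwarding stays distributed.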
SDN is just emerging, though, and most enterprises have yet to map their SDN strategies, says Matthew Palmer, an SDN consultant at Wiretap Ventures and a blogger at SDNCentral. "Customers want to see proof of concepts in deployment before they pay the big money to custom build these solutions to their specific needs," he says.
Early SDN proponents include cloud service providers supporting multi-tenant environments, and progressive large enterprises looking to get the efficiencies enjoyed by service providers. This year and next are when many proofs of concept will be piloted; 2014 or maybe even 2015 will likely be the years of significant customer adoption, Palmer predicts.
VMware is betting the mainstream market will develop faster than that. After spending $1.2 billion to purchase virtual network company Nicira last year - where Casado served as CTO - VMware just announced it will fold Nicira's technology into its vCloud suite and roll out a VMware-powered hybrid public cloud service. Through a software update and a new controller, customers will be able to create virtual network environments without having to rip and replace existing hardware, VMware says.
[MORE VMWARE: 5 Questions about VMware's virtual networking strategy]
Convergence, cloud complicate matters
Companies that are tired of integrating the components on their own might opt for one of the so-called "data center in a box" options. Proponents of the approach have a simple proposition: Why have separate appliances for compute, networking, storage, deduplication and WAN optimization when you can have one system that does it all?
"We've crossed the chasm where we're in the phase of the early majority of adopters using this technology," says Trey Layton, CTO of converged infrastructure conglomerate VCE, a partnership among EMC, VMware and Cisco that delivers unified systems. Boxes come pre-configured, ready to install and scale by adding more of whatever is necessary.
There are also a variety of companies, like Nutanix and SimpliVity, offering hyper-converged boxes - systems that have been engineered from the ground up to integrate a number of services. That contrasts with VCE's strategy of optimizing products from three companies to create a single solution.
While the ease of use is tempting, it comes at a cost, Smoley from Flextronics says: "In my mind, it raises concerns about vendor lock-in."
If an enterprise succeeds in aligning all the pieces of the virtual data center on premises, it will ultimately want to complement them with cloud resources. Customers have more choice than ever when it comes to cloud computing, but many still question whether the cloud is ready for enterprise-grade production workloads. "While the technology is rapidly maturing, it has catching up to do," Sanjib Sahoo, CTO at tradeMONSTER, wrote in a recent Network World Tech Debate.
Jacques Greyling, vice president of data center infrastructure for outsourcing provider Rackspace, says cloud is typically best for dynamic workloads that have demand spikes, because servers can be spun up and down quickly. Static and sensitive workloads are more likely to run on infrastructure dedicated to individual customers in a managed hosting environment, he says. There are exceptions to any rule: Netflix proudly runs most of its business-critical video streaming services on Amazon Web Services' cloud, for example, as do many Web startups that live entirely in AWS's cloud.
The technology to support the nirvana vision of hybrid cloud connectivity that allows dynamic scaling is still some way off, Greyling says. It requires significant changes in network topology, which the company is addressing by implementing software-defined networking functionality. Doing so allows Rackspace to easily create virtual LANs for individual customers and segment them off from one another. Combine that with a common management platform - in Rackspace's case, OpenStack - that spans the customer's site and the Rackspace cloud, and the hybrid cloud model starts to become a reality.
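The per-customer segmentation Greyling describes boils down to handing each tenant its own isolated network segment. A minimal sketch, assuming a simple sequential allocator (in production, OpenStack Neutron and the SDN layer do this work):

```python
# Sketch of per-tenant network segmentation: each customer is assigned
# its own VLAN ID, so tenant traffic is isolated without re-cabling.
# (Hypothetical illustration; real allocators track releases and reuse.)

class VlanAllocator:
    def __init__(self, first=100, last=4094):   # 4094 is the top usable 802.1Q VLAN ID
        self.next_id = first
        self.last = last
        self.tenants = {}

    def vlan_for(self, tenant):
        # Reuse an existing assignment; otherwise carve out a fresh VLAN.
        if tenant not in self.tenants:
            if self.next_id > self.last:
                raise RuntimeError("VLAN pool exhausted")
            self.tenants[tenant] = self.next_id
            self.next_id += 1
        return self.tenants[tenant]

alloc = VlanAllocator()
a = alloc.vlan_for("customer-a")
b = alloc.vlan_for("customer-b")
assert a != b                               # tenants are segmented from each other
assert alloc.vlan_for("customer-a") == a    # lookups are idempotent
print(a, b)
```

The SDN piece is what makes this dynamic: because the segment is created in software, spinning up an isolated network for a new customer takes an API call rather than a switch-room visit.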
There is hope
Despite all the challenges - from virtualization management, to converged infrastructure to the cloud - companies are making progress toward the virtual data center. Take Michael Ferguson, director of IT for mid-sized Miami law firm Rennert, Vogel, Mandler & Rodriguez, who two years ago made a switch to embrace compute and storage virtualization.
It's not necessarily the perfect nirvana vision, but it has significantly eased IT management. He now centrally controls all of the servers from a single screen using VMware vCenter, and his virtualized storage array, powered by DataCore's SANsymphony storage hypervisor, has created a highly available environment, backed up with off-site colocation disaster recovery.
"After two years, I've already paid off all of my investments, plus more," says Ferguson, who hasn't bought a new server since installing the system, even though the firm has continued to grow.
Network World senior writer Brandon Butler covers cloud computing and social collaboration. He can be reached at BButler@nww.com and found on Twitter at @BButlerNWW.
Read more about data center in Network World's Data Center section.
This story, "Building, and Managing, the 21st Century Data Center" was originally published by Network World.