Babu Kudaravalli, senior director of IT operations at SXC Health Solutions, knows about the havoc an acquisition can wreak on a company's storage infrastructure. While overseeing National Medical Health Card Systems' (NMHC) IT department, he watched the pharmacy benefits manager grow nearly 40 percent per year, primarily through acquisitions. The result was a mishmash of more than 60 servers running at 90 percent utilization, hurting performance and creating a constant challenge for storage and system administrators, he recalls.
Fortunately, that had changed by the time SXC acquired NMHC last February. Gone was the hodgepodge of arrays and, in its place, a high-capacity, easy-to-manage storage infrastructure made possible through storage virtualization. "SXC was very impressed," says Kudaravalli, adding that SXC plans to preserve NMHC's storage environment.
But accolades aren't the only reason companies are turning to storage virtualization. Cutting costs, easing management headaches, simplifying data migrations across multiple tiers—these are just a few of the factors pushing them into the arms of vendors including Hewlett-Packard, EMC, Symantec and DataCore Software. A study by research firm TheInfoPro reveals that 35 percent of Fortune 1000 storage organizations are using the technology and plan to expand their investment during the next two years.
Not unlike server virtualization, which simplifies the management of disparate server hardware and operating system platforms, storage virtualization masks the complexities of heterogeneous storage arrays by aggregating them into a centralized structure. And it's earning plenty of fans. But with all the hype surrounding this technology, many CIOs fail to consider the hurdles—from interoperability glitches to deployment snafus—that can greatly impact storage virtualization success.
"In the course of putting [multiple storage devices and arrays] into one consolidated pool, companies risk introducing new problems, like performance issues," warns Greg Schulz, founder of consulting firm StorageIO Group.
Far from Plug-and-Play
Kudaravalli agrees. Today's NMHC storage environment consists of two HP StorageWorks XP24000 Disk Arrays, which supply enterprise-class capacity to applications from a pool of virtualized storage. And two HP StorageWorks Enterprise Virtual Arrays support near-mission-critical applications requiring high availability and midrange capacity. The result is 55 terabytes of virtualized storage. But, Kudaravalli admits that "it took a long while to get there."
For a company to make the most of storage virtualization, a solution must be able to accommodate existing storage hardware as well as satisfy the requirements of future storage systems. In NMHC's case, Kudaravalli needed a solution that would be compatible with the company's existing servers, host bus adapters, Fibre Channel cards and switches, operating systems and multiple business applications.
For this reason, NMHC spent nearly nine months testing evaluation copies of HP's technology, and decided to limit itself to a single vendor. By doing so, Kudaravalli hoped to reduce the interoperability headaches that can arise from deploying disparate solutions from competing vendors.
Schulz of StorageIO recommends requesting a compatibility matrix from vendors that outlines not only the products each supports but the versions and configurations, too. While a storage virtualization solution may accommodate a competing vendor's hardware, interoperability issues may prevent it from taking full advantage of a device's functionality.
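In practice, a compatibility matrix is just a lookup of exact product, version and configuration combinations a vendor has certified. The sketch below shows how an IT team might encode one to screen its own inventory before a deployment; the model names echo the HP arrays mentioned above, but every firmware and OS value here is a hypothetical placeholder, not a real support listing:

```python
# Hypothetical compatibility matrix: for each array model, the firmware
# versions and host operating systems the virtualization layer certifies.
# All version strings are illustrative placeholders.
COMPATIBILITY_MATRIX = {
    "StorageWorks XP24000": {
        "firmware": {"fw-1.0", "fw-1.1"},
        "host_os": {"Windows Server 2003", "RHEL 5"},
    },
    "Enterprise Virtual Array": {
        "firmware": {"fw-2.0"},
        "host_os": {"Windows Server 2003", "HP-UX 11i"},
    },
}

def is_supported(model: str, firmware: str, host_os: str) -> bool:
    """True only if this exact model/firmware/OS combination is certified.

    A model that appears in the matrix but runs uncertified firmware still
    fails the check -- mirroring Schulz's point that a device may attach
    yet not deliver its full functionality.
    """
    entry = COMPATIBILITY_MATRIX.get(model)
    if entry is None:
        return False
    return firmware in entry["firmware"] and host_os in entry["host_os"]
```

Screening an inventory against such a matrix before cutover turns an interoperability surprise into a line item on a pre-deployment checklist.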
A Multistep Process
Another obstacle that can stand in the way of a high-functioning, virtualized environment is a botched deployment. Because implementation errors can result in data loss and reduced service, experts warn that deploying virtualization across an entire enterprise in one fell swoop can easily spell disaster.
Rather than risk "putting its business in jeopardy," Kudaravalli says NMHC adopted a piecemeal approach to implementation that spanned more than a year and involved the use of test servers for development, quality assurance and production trials. As a result, Kudaravalli was able to standardize the deployment process, avoid having to hire top-dollar consultants, reduce the complexity of the overall project and gain time to properly troubleshoot unanticipated deployment glitches.
Time certainly wasn't on the side of Gerry McCartney, CIO of Purdue University, in April of 2007. But he knew that wresting control of the institution's complex and overloaded storage environment called for a carefully plotted procedure. Purdue selected EMC's Invista network-based storage virtualization solution. But before deploying it, McCartney made certain Purdue's existing storage-area network was robust enough for virtualization, ensured adequate switch port capacity, certified connected hosts, and updated firmware and operating system patches.
"It was a lot of work but it was the best method to preserve the integrity of our operating environment as we proceeded," says McCartney.
McCartney also opted to deploy the virtualized environment "host by host," to allow system administrators to become familiar with it and make certain they did not run into performance issues. This slow-and-steady approach also afforded Purdue University's IT team the time needed to determine which systems could be virtualized in place, and which required scheduled migrations to avoid disruptions.
However, not all experts agree that a slow deployment is a smart move. John Sloan, a senior research analyst with Info-Tech Research Group, cautions that "a graduated approach" delays "reaping the benefits of a streamlined infrastructure." All the more reason for companies to test the waters before pooling their storage resources via virtualization.