Babu Kudaravalli, senior director of IT operations at SXC Health Solutions, knows about the havoc an acquisition can wreak on a company's storage infrastructure. While overseeing National Medical Health Card Systems' (NMHC) IT department, he watched the pharmacy benefits manager grow nearly 40 percent per year, primarily through acquisitions. The result was a mishmash of more than 60 servers running at 90 percent utilization, degrading performance and creating a constant challenge for storage and system administrators, he recalls.

Fortunately, that had changed by the time SXC acquired NMHC last February. Gone was the hodgepodge of arrays; in its place stood a high-capacity, easy-to-manage storage infrastructure made possible through storage virtualization. "SXC was very impressed," says Kudaravalli, adding that SXC plans to preserve NMHC's storage environment.

But accolades aren't the only reason companies are turning to storage virtualization. Cutting costs, easing management headaches and simplifying data migrations across multiple tiers are just a few of the factors pushing them into the arms of vendors including Hewlett-Packard, EMC, Symantec and DataCore Software. A study by research firm TheInfoPro reveals that 35 percent of Fortune 1000 storage organizations are using the technology and plan to expand their investment during the next two years.

Not unlike server virtualization, which simplifies the management of disparate server hardware and operating system platforms, storage virtualization masks the complexities of heterogeneous storage arrays by aggregating them into a centralized pool. And it's earning plenty of fans.
But with all the hype surrounding this technology, many CIOs fail to consider the hurdles, from interoperability glitches to deployment snafus, that can greatly impact storage virtualization success.

"In the course of putting [multiple storage devices and arrays] into one consolidated pool, companies risk introducing new problems, like performance issues," warns Greg Schulz, founder of consulting firm StorageIO Group.

Far from Plug-and-Play

Kudaravalli agrees. Today's NMHC storage environment consists of two HP StorageWorks XP24000 Disk Arrays, which supply enterprise-class capacity to applications from a pool of virtualized storage, and two HP StorageWorks Enterprise Virtual Arrays, which support near-mission-critical applications requiring high availability and midrange capacity. The result is 55 terabytes of virtualized storage. But Kudaravalli admits that "it took a long while to get there."

For a company to make the most of storage virtualization, a solution must accommodate existing storage hardware as well as satisfy the requirements of future storage systems. In NMHC's case, Kudaravalli needed a solution compatible with the company's existing servers, host bus adapters, Fibre Channel cards, Fibre Channel switches, operating systems and multiple business applications.

For this reason, NMHC spent nearly nine months testing evaluation copies of HP's technology, and decided to limit itself to a single vendor. By doing so, Kudaravalli hoped to reduce the interoperability headaches that can arise from deploying disparate solutions from competing vendors.

Schulz of StorageIO recommends requesting a compatibility matrix from vendors that outlines not only the products each supports but the versions and configurations, too.
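Schulz's advice about compatibility matrices amounts to treating vendor support as structured data you can query before buying or deploying. A minimal sketch of that idea follows; every product name, firmware version and limit here is invented for illustration, not taken from any real vendor's matrix:

```python
# Hypothetical compatibility matrix: for each array model a virtualization
# layer claims to support, the firmware versions and configuration limits
# it has actually been qualified against. All entries are invented.
COMPATIBILITY_MATRIX = {
    "ArrayModelA": {"firmware": {"6.1", "6.2"}, "max_luns": 512},
    "ArrayModelB": {"firmware": {"3.4"}, "max_luns": 256},
}

def is_supported(model: str, firmware: str, luns: int) -> bool:
    """Return True only if this exact model/firmware/config combination
    appears on the vendor's matrix; a listed model with an unlisted
    firmware revision still fails the check."""
    entry = COMPATIBILITY_MATRIX.get(model)
    if entry is None:
        return False  # model not qualified at all
    return firmware in entry["firmware"] and luns <= entry["max_luns"]

print(is_supported("ArrayModelA", "6.1", 300))  # qualified combination
print(is_supported("ArrayModelB", "3.5", 100))  # firmware not on the matrix
```

The point of the structure is the failure mode it surfaces: an array can be "supported" in the brochure sense while the specific firmware or configuration in your data center is not, which is exactly the gap Schulz warns about.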
While a storage virtualization solution may accommodate a competing vendor's hardware, interoperability issues may prevent it from taking full advantage of a device's functionality.

A Multistep Process

Another obstacle that can stand in the way of a high-functioning, virtualized environment is a botched deployment. Because implementation errors can result in data loss and reduced service, experts warn that deploying virtualization across an entire enterprise in one fell swoop can easily spell disaster.

Rather than risk "putting its business in jeopardy," Kudaravalli says, NMHC adopted a piecemeal approach to implementation that spanned more than a year and involved test servers for development, quality assurance and production trials. As a result, Kudaravalli was able to standardize the deployment process, avoid hiring top-dollar consultants, reduce the complexity of the overall project and gain time to properly troubleshoot unanticipated deployment glitches.

Time certainly wasn't on the side of Gerry McCartney, CIO of Purdue University, in April of 2007. But he knew that wresting control of the institution's complex and overloaded storage environment called for a carefully plotted procedure. Purdue selected EMC's Invista network-based storage virtualization solution. But first, McCartney made certain Purdue's existing storage-area network was robust enough for virtualization, ensured adequate switch port capacity, certified connected hosts, and updated firmware and operating system patches.

"It was a lot of work but it was the best method to preserve the integrity of our operating environment as we proceeded," says McCartney.

McCartney also opted to deploy the virtualized environment "host by host," allowing system administrators to become familiar with it and making certain they did not run into performance issues.
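Purdue's pre-flight checks (adequate switch port capacity, certified hosts, current firmware) lend themselves to a scripted readiness gate run before each host is migrated. The sketch below assumes that model; every check function is a hypothetical stand-in for a query against real SAN management tooling, and the inventories are invented:

```python
# Sketch of a per-host readiness gate for storage virtualization, modeled
# on the checks described above. Placeholder data stands in for real
# queries to fabric switches and configuration-management systems.
from typing import Callable

def free_switch_ports(needed: int = 4) -> bool:
    available = 12  # placeholder: would query the fabric switches
    return available >= needed

def host_certified(host: str) -> bool:
    certified = {"db01", "app01", "app02"}  # placeholder inventory
    return host in certified

def firmware_current(host: str) -> bool:
    return True  # placeholder: would compare against a firmware baseline

def ready_to_virtualize(host: str) -> tuple[bool, list[str]]:
    """Run every gate for one host; return pass/fail plus any failed checks."""
    checks: dict[str, Callable[[], bool]] = {
        "switch ports": free_switch_ports,
        "host certified": lambda: host_certified(host),
        "firmware current": lambda: firmware_current(host),
    }
    failures = [name for name, check in checks.items() if not check()]
    return (not failures, failures)

ok, failures = ready_to_virtualize("db01")
print(ok, failures)
```

Gating each host individually, rather than certifying the whole environment once, mirrors the "host by host" rollout McCartney describes: a host that fails a check simply waits, without blocking the rest of the migration schedule.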
This slow-and-steady approach also afforded Purdue University's IT team the time needed to determine which systems could be virtualized in place and which required scheduled migrations to avoid disruptions.

However, not all experts agree that a slow deployment is a smart move. John Sloan, a senior research analyst with Info-Tech Research Group, cautions that "a graduated approach" delays "reaping the benefits of a streamlined infrastructure." All the more reason for companies to test the waters before pooling their storage resources via virtualization.