When Marty Garrison became CTO of ChoicePoint three years ago, the storage situation was messy. That’s no small matter at a company that manages 16 billion records, such as background checks and insurance applications, eating up two petabytes of storage—that’s 2,048 terabytes. And growing. Like many IT leaders, he faced lots of data in lots of silos. “Storage had grown organically by project, and it was not managed in terms of cost. So we had eight to 10 SAN [storage area network] infrastructures as islands, none of which could talk to each other. We couldn’t share storage space across islands, and we couldn’t tier our data,” he recalls.

The silos meant there could be no cost efficiencies from bulk purchases, from better utilization of the existing storage capacity or from a unified management approach that would lower staffing needs. So Garrison created a central, common storage architecture and strategy. He removed storage management responsibilities from local Unix administrators and hired dedicated storage experts to manage storage globally. He consolidated the SANs into one, reducing management costs and allowing more efficient data utilization. He pared down the vendors to just a couple for each type of technology. That let him simplify management and buy in bulk to get greater discounts. When you buy hundreds of terabytes of storage each quarter, Garrison says, “it really does drive costs down.”

He also introduced tiering, which uses cheaper, slower drives for data that doesn’t need the highest level of availability. “Before that, we had done no performance testing to determine service requirements. The staff played it safe and got Tier 1 Hitachi and EMC disks for everything,” Garrison recalls—at nearly double the price per terabyte of Tier 2 or Tier 3 disks.
Altogether, he has slashed storage costs by 40 percent, both for the disks themselves and for the management overhead. And he hasn’t had to significantly grow his staff despite escalating storage requirements.

Garrison is now exploring new ways to keep costs in check, including storage virtualization and single-instance storage. “Now it’s time to go into the next phase,” he says.

Analysts say you must move to a simplified storage architecture to reduce total cost of ownership. Even as the cost of new storage media falls by up to 34 percent annually, demand for capacity and higher service levels can grow by more than 60 percent a year, says Stewart Buchanan, a research director at Gartner. “Enterprises need more business discipline in IT asset management of storage,” he says.

Lay the Right Foundation

The good news: CIOs have more storage choices, and more mature choices, than they did just a few years ago. Some approaches that were once novel and untested, such as tiered storage and its related archival approach of hierarchical storage management, are now proven, says Nik Simpson, a storage analyst at the Burton Group consultancy. The same is true for SANs.

A Fast Guide to Big Storage Providers

Many of the technologies that support structural storage efficiencies are widely available, such as storage area networks (SANs), disk-to-disk backup (also called virtual tape libraries) and tiered storage. “You can use your existing vendors for these if you don’t want to work with a startup,” says Burton Group’s Simpson.

Providers of both fibre channel and iSCSI products include 3Par, Compellent, EMC, Hewlett-Packard, Hitachi Data Systems, IBM, Network Appliance (NetApp) and Sun Microsystems. LeftHand Networks and Symantec offer software for such networks, while Sanrad offers an appliance to interlink the two technologies.
Providers of iSCSI-only SANs include EqualLogic, Isilon Systems and Pillar Data Systems.

For the recently emerged area of network storage virtualization, mainstream providers include EMC, HP, Hitachi, IBM, LSI, NetApp and Sun. “NetApp and Hitachi are at the top of my list, and IBM is a reasonable third,” says Simpson. Software-only providers include DataCore Software, FalconStor Software, Incipient and Symantec.

In the also-emerging area of single-instance storage and deduplication, leading players include Data Domain, Diligent Technologies, EMC, ExaGrid, FalconStor, NetApp, Quantum and Sepaton.

One increasingly popular category of savings comes from replacing tape backup with disk backup (also called virtual tape libraries), says Dave Dillehunt, CIO of the integrated delivery network FirstHealth of the Carolinas. Tape capacity has not kept up with hospital storage requirements—about 185 terabytes at FirstHealth—and physically managing the tapes has become too burdensome, he says. A caveat: one danger of relying on disk-based backup is the temptation to keep the data online, which can overload storage networks because people will use the data if it is available. That’s why Dillehunt keeps the disk backup disconnected from the rest of the network.

If your storage needs are modest, tape does continue to make sense because the medium costs so much less, notes Rich O’Neal, senior vice president of operations at the online rewards-tracking site Upromise. That’s the case for his 4 terabytes of data.

Of the established approaches, tiering offers the most significant bottom-line benefit, says Gartner’s Buchanan. It not only lets you increase the amount of cheap storage relative to expensive storage but also forces you to understand the service levels for all your data. Then you can reduce costs by deleting, or at least not backing up, unneeded data.
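To make the economics of tiering concrete, here is a back-of-the-envelope sketch in Python. The per-terabyte prices and the `blended_cost` helper are hypothetical illustrations, not figures from any vendor; the only ratio taken from the article is Tier 1 disk costing roughly double Tier 2 or Tier 3.

```python
# Hypothetical $/TB figures for illustration only; the article says only
# that Tier 1 runs at nearly double the price of Tier 2 or Tier 3.
TIER_PRICE_PER_TB = {1: 10_000, 2: 5_000, 3: 2_500}

def blended_cost(total_tb, mix):
    """Disk cost of a capacity split across tiers; mix maps tier -> fraction."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(total_tb * frac * TIER_PRICE_PER_TB[tier]
               for tier, frac in mix.items())

# "Play it safe" (everything on Tier 1) vs. a 30/40/30 split, for 100 TB.
all_tier1 = blended_cost(100, {1: 1.0})
mixed     = blended_cost(100, {1: 0.3, 2: 0.4, 3: 0.3})
print(f"all Tier 1: ${all_tier1:,.0f}")
print(f"tiered mix: ${mixed:,.0f}")
print(f"disk savings: {1 - mixed / all_tier1:.1%}")
```

Under these assumed prices, shifting 70 percent of capacity off Tier 1 cuts the disk bill by more than 40 percent, which is in line with the kind of reduction Garrison describes.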
You can move rarely used data to offline storage to keep network traffic under control. And you can begin to manage demand by users, by showing them the entire data lifecycle costs for their requested applications. “Tiering lets you find the total cost of ownership of your storage,” he says.

A good target: keep 30 percent of your data in Tier 1 storage and the rest at lower tiers, advises Burton Group’s Simpson, though the exact ratio depends on the performance and availability requirements for your data.

It’s critical for the CIO to make sure that the business takes responsibility for its data demands. “It’s not the role of the storage team to define the data requirements—that has to go to business management,” Buchanan says. But the CIO has to lay the groundwork by having effective asset management in place and exhibiting efficiency.

Cheaper Storage Networks Through iSCSI

Among newer technologies that can help reduce storage costs, the most notable in recent years is iSCSI (Internet Small Computer System Interface). A simple, easy-to-manage protocol that connects drives to each other and to servers over standard IP networks, iSCSI lets organizations of all sizes deploy SANs.
Before iSCSI, the major SAN option was fibre channel, but “fibre channel is not suited outside larger enterprises,” Simpson notes, because of its complexity and its high management cost.

The simplicity and fit of iSCSI for a larger range of organizations make it the fastest-growing interconnect technology for storage, reports IDC (a sister company to CIO’s publisher); the research firm expects 25 percent of all external storage sold in 2011 to be iSCSI-based.

Regional accounting firm Schenck Business Solutions dropped its EMC fibre channel array three years ago because of its complexity, replacing it with an EqualLogic iSCSI-based SAN. “We had struggled with configuration and day-to-day usage,” recalls CIO Jim Tarala. Since then, the company’s storage capacity has increased from about 330 gigabytes to 20 terabytes. But he’s got a handle on overall cost. “We spent approximately 120 percent of what we did on the EMC gear (330 gigabytes) to get the EqualLogic (20 terabytes), and our management costs are a maximum of 60 to 65 percent of what they were previously,” Tarala says. He expects to upgrade the storage to 30 terabytes soon.

Associated Bank, which serves several Midwestern states, had a similar experience. In 2005, it needed to rethink its storage strategy to prepare for volumes of expected image data such as electronic check images and customer records, since the bank was implementing a program to let customers start an application at one branch and finish it at any other. When the storage initiative began in 2005, the bank had about 20 terabytes of data; it now has 300 terabytes.

The bank built its SAN using iSCSI arrays because it wanted an IP-based network to take advantage of its staff’s existing networking skills, recalls Preston Peterson, the assistant vice president of infrastructure design.
Still, just in case fibre channel becomes necessary later on, the bank made sure its Compellent storage arrays could support both fibre channel and iSCSI.

The move to iSCSI did raise questions, notes Kory Kitowski, the bank’s vice president of IT. For example, engineers from Microsoft and other vendors weren’t familiar with iSCSI, so they questioned unfamiliar server and SAN settings when installing or troubleshooting their own products. Internally, despite having IP-savvy IT staff, the bank still needed to reeducate the storage administrators. “We went through a major paradigm shift,” Kitowski says.

But the result was a 30 percent overall savings compared with what the bank had expected to spend on traditional SANs, Peterson says.

Even within large enterprises, there’s no longer a need to rely solely on fibre channel, says ChoicePoint’s Garrison, who uses either iSCSI or fibre channel based on the specific storage’s availability needs.

Prepare for the Next Wave

As enterprises get these structural changes in place, both Simpson and Buchanan advise that, for further savings, CIOs should begin looking at two emerging technologies: network storage virtualization and single-instance storage. Network storage virtualization moves management out of the arrays and other disk hardware and implements it as part of the SAN’s operating environment. This lets IT treat all the disks as a virtual resource pool.

Single-instance storage saves space by keeping just one copy of data in your frontline systems (such as application servers), substituting pointers to the source for any copies, while the related deduplication technology saves just one copy of a file or data block during backup or archiving and substitutes pointers for any later copies found.
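The pointer mechanism behind single-instance storage and deduplication can be sketched as a content-addressed store: each block of data is hashed, a block seen before is stored only once, and every later copy is recorded as a reference to that hash. The sketch below is a minimal illustration; the `DedupStore` class, the fixed 4 KB block size and the choice of SHA-256 are all assumptions made for the example, not how any particular product works.

```python
import hashlib

class DedupStore:
    """Toy single-instance store: one physical copy per unique block,
    pointers (hashes) standing in for every duplicate."""

    def __init__(self):
        self.blocks = {}  # hash -> block bytes, each stored only once

    def write(self, data, block_size=4096):
        """Store data; return the pointer list that reconstructs it."""
        pointers = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            key = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(key, block)  # duplicates are not re-stored
            pointers.append(key)
        return pointers

    def read(self, pointers):
        """Reassemble the original data by following the pointers."""
        return b"".join(self.blocks[k] for k in pointers)

store = DedupStore()
ptrs = store.write(b"A" * 8192)   # two identical 4 KB blocks
print(len(ptrs), len(store.blocks))  # 2 pointers, but only 1 stored block
```

The data-loss worry Fox raises later in the article is visible even in this toy: every duplicate depends on the single stored copy, so losing one block (or suffering a hash collision) affects every file that points to it.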
Long a feature of e-mail servers, single-instance technology is becoming available both in backup and archival systems and in frontline storage systems, notes Burton Group’s Simpson.

But several factors limit these technologies’ adoption, says Gary Fox, national practice director for the consultancy Dimension Data. Network storage virtualization, Fox says, proves complex to manage despite vendors’ characterization of it as plug-and-play.

As for single-instance storage, data-loss worries surround the pointer approach, and most companies are still piloting it, Fox says. Also, the technology comes primarily from startup vendors, though Fox expects that to change. Still, despite its nascency, “We see a lot of interest from clients,” he says. After all, they also foresee continued unbridled storage growth.

Galen Gruman is a frequent contributor to CIO. You can reach him at email@example.com.