When Marty Garrison became CTO of ChoicePoint three
years ago, the storage situation was messy. That’s no
small matter at a company that manages 16 billion records, such
as background checks and insurance applications, eating up two
petabytes of storage—that’s 2,048 terabytes. And
growing. Like many IT leaders, he faced lots of data in lots of
silos. “Storage had grown organically by project, and it
was not managed in terms of cost. So we had eight to 10 SAN
[storage area network] infrastructures as islands, none of
which could talk to each other. We couldn’t share storage
space across islands, and we couldn’t tier our
data,” he recalls.
The silos meant there could be no cost efficiencies from
bulk purchases, from better utilization of the existing storage
capacity or from a unified management approach that would lower
staffing needs. So Garrison created a central, common storage
architecture and strategy. He removed storage management
responsibilities from local Unix administrators and hired
dedicated storage experts to manage responsibilities globally.
He consolidated the SANs into one, reducing management costs
and allowing more efficient data utilization. He pared down the
vendors to just a couple for each type of technology. That let
him simplify management and buy in bulk, to get greater
discounts. When you buy hundreds of terabytes of storage each
quarter, Garrison says, “it really does drive costs down.”
He also introduced tiering, which uses cheaper, slower
drives for data that doesn’t need the highest level of
availability. “Before that, we had done no performance
testing to determine service requirements. The staff played it
safe and got Tier 1 Hitachi and EMC disks for
everything,” Garrison recalls—at nearly double the
price per terabyte as Tier 2 or Tier 3 disks. Altogether, he
has slashed storage costs by 40 percent, both for the disks
themselves and for the management overhead. And he’s not
had to significantly grow his staff despite escalating storage demands.
Garrison is now exploring new ways to keep costs in check,
including storage virtualization and single-instance storage.
“Now it’s time to go into the next phase,” he says.
You must move to a simplified storage architecture to reduce
total cost of ownership, analysts say. Even as the cost of new
storage media falls by up to 34 percent annually, capacity and
service-level demands can grow by more than 60 percent a year,
says Stewart Buchanan, a research director at Gartner.
“Enterprises need more business discipline in IT asset
management of storage,” he says.
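Buchanan's two numbers explain the squeeze: demand growth outruns the price decline. A rough back-of-the-envelope calculation (the 34 percent and 60 percent figures are the Gartner estimates above; everything else is illustrative):

```python
# Illustrative: how total storage spend moves when capacity demand
# outpaces falling media prices (rates from the Gartner estimates above).
price_decline = 0.34   # media price falls by up to 34% per year
demand_growth = 0.60   # capacity/service demand grows by 60%+ per year

# Next year's spend relative to this year's:
spend_multiplier = (1 + demand_growth) * (1 - price_decline)
print(f"annual spend multiplier: {spend_multiplier:.3f}")  # ~1.056
```

Even with prices dropping by a third each year, total spend still rises roughly 5 to 6 percent annually, which is why structural fixes, not cheaper disks alone, are needed.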
Lay the Right Foundation
The good news: CIOs have more storage choices, and more mature
choices, than they did just a few years ago. Some approaches
that were once novel and untested, such as tiered storage and
its related archival approach of hierarchical storage
management, are now proven, says Nik Simpson, a storage analyst
at the Burton Group consultancy. This is also true for the use of disk-to-disk backup.
A Fast Guide to Big Storage Providers
Many of the technologies to support structural
storage efficiencies are widely available, such as
storage area networks (SANs), disk-to-disk backup (also
called virtual tape libraries) and tiered storage.
“You can use your existing vendors for these if
you don’t want to work with a startup,”
says Nik Simpson, a storage analyst at the Burton Group.
Providers of both fibre channel and iSCSI products
include 3Par, Compellent, EMC, Hewlett-Packard, Hitachi Data Systems, IBM, Network Appliance (NetApp) and
Sun Microsystems. LeftHand Networks and Symantec offer software for such
networks, while Sanrad offers an appliance to
interlink the two technologies. Providers of
iSCSI-only SANs include EqualLogic, Isilon Systems and Pillar Data Systems.
For the recently emerged area of network storage
virtualization, mainstream providers include EMC, HP,
Hitachi, IBM, LSI, NetApp and Sun. “NetApp and
Hitachi are at the top of my list, and IBM is a
reasonable third,” says Simpson. Software-only
providers include DataCore Software, FalconStor Software, Incipient and Symantec.
In the likewise emerging area of single-instance storage
and deduplication, leading players include Data Domain, Diligent Technologies, EMC,
ExaGrid, FalconStor, NetApp,
Quantum and Sepaton.
One increasingly popular category of savings comes from
replacing tape backup with disk backup (also called virtual
tape libraries), says Dave Dillehunt, CIO of the integrated
delivery network FirstHealth of the Carolinas. Tape capacity
has not kept up with hospital storage requirements—about
185 terabytes at FirstHealth—and physically managing the
tapes has become too burdensome, he says. A caveat: One danger
in relying on disk-based backup is the temptation to keep the
data online (which can overload storage networks, because
people will use the data if it is available). That’s why
Dillehunt keeps the disk backup disconnected from the rest of the network.
If your storage needs are modest, tape does continue to make
sense because the medium cost is so much less, notes Rich
O’Neal, senior vice president of operations at the online
rewards-tracking site Upromise. That’s the case for his 4
terabytes of data. Of the established approaches, tiering
offers the most significant bottom-line benefit, says
Gartner’s Buchanan. It not only lets you increase the
amount of cheap storage relative to expensive storage that you
use but also forces you to understand the service levels for
all your data. Then you can reduce costs by deleting or at
least not backing up unneeded data. You can move rarely used
data to offline storage to keep network traffic under control.
And you can begin to manage demand by users, by showing them
the entire data lifecycle costs for their requested
applications. “Tiering lets you find the total cost of
ownership of your storage,” he says.
A good target: Keep 30 percent of your data in Tier 1
storage and the rest at lower tiers, advises Burton
Group’s Simpson, though the exact ratio depends on the
performance and availability requirements for your data.
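A quick sketch of the economics behind that target, using the "nearly double" Tier 1 price premium Garrison cites and Simpson's 30/70 split (the per-terabyte prices here are illustrative, not quoted figures):

```python
# Illustrative tiering math: Tier 1 disk runs roughly double the
# price per terabyte of lower tiers (per Garrison's experience above).
tier1_price = 2.0   # relative cost per TB (illustrative)
tier2_price = 1.0

all_tier1 = 1.0 * tier1_price                     # everything on Tier 1
mixed = 0.30 * tier1_price + 0.70 * tier2_price   # Simpson's 30/70 target

savings = 1 - mixed / all_tier1
print(f"blended savings vs. all-Tier 1: {savings:.0%}")  # 35%
```

That 35 percent figure on disk spend alone is in the same neighborhood as the 40 percent overall savings Garrison reports once management overhead is counted.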
It’s critical for the CIO to make sure that business
takes responsibility for its data demands. “It’s
not the role of the storage team to define the data
requirements—that has to go to business
management,” Buchanan says. But the CIO has to lay the
groundwork by having effective asset management in place.
Cheaper Storage Networks Through iSCSI
Among newer technologies that can help reduce storage costs,
the most notable in recent years is iSCSI (Internet Small
Computer System Interface). A type of storage that connects
drives to each other and to servers using a simple,
easy-to-manage protocol, it lets organizations of all sizes
deploy SANs. Before iSCSI, the major SAN option was fibre
channel, but “fibre channel is not suited outside larger
enterprises,” Simpson notes, because of its complexity
and its high management cost.
The simplicity and fit of iSCSI for a larger range of
organizations make it the fastest-growing interconnect
technology for storage, reports IDC (a sister company to
CIO’s publisher); the research firm expects 25 percent of
all external storage sold in 2011 to be iSCSI-based.
Regional accounting firm Schenck Business Solutions dropped
its EMC fibre channel array three years ago because of its
complexity, replacing it with an EqualLogic iSCSI-based SAN.
“We had struggled with configuration and day-to-day
usage,” recalls CIO Jim Tarala. Since then, the
company’s storage capacity has increased from about 330
gigabytes to 20 terabytes. But he’s got a handle on
overall cost. “We spent approximately 120 percent of what
we did on the EMC gear (330 gigabytes) to get the EqualLogic
(20 terabytes) and our management costs are a maximum of 60 to
65 percent of what they were previously,” Tarala says. He
expects to upgrade the storage to 30 terabytes soon.
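Tarala's figures imply a steep drop in unit cost. A hypothetical calculation, normalizing the original EMC spend to 1.0 (the dollar amounts are not disclosed in the article):

```python
# Hypothetical unit-cost comparison from Tarala's figures above.
old_spend, old_tb = 1.0, 0.33   # EMC gear, ~330 GB (spend normalized to 1.0)
new_spend, new_tb = 1.2, 20.0   # EqualLogic: ~120% of old spend, 20 TB

old_per_tb = old_spend / old_tb
new_per_tb = new_spend / new_tb
ratio = new_per_tb / old_per_tb
print(f"new cost per TB is {ratio:.1%} of the old")  # ~2.0%
```

In other words, roughly 60 times the capacity for about 20 percent more money, before the 35 to 40 percent reduction in management costs is even counted.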
Associated Bank, which serves several Midwestern states, had
a similar experience. In 2005, it needed to rethink its storage
strategy to prepare for volumes of expected image data such as
electronic check images and customer records, since the bank
was implementing a program to let customers start an
application at one branch and finish it at any other. When the
storage initiative began in 2005, the bank had about 20
terabytes of data; it now has 300 terabytes.
The bank built its SAN using iSCSI arrays because it wanted
an IP-based network to take advantage of its staff’s
existing networking skills, recalls Preston Peterson, the
assistant vice president of infrastructure design. Still, just
in case fibre channel becomes necessary later on, the bank made
sure its Compellent storage arrays could support both fibre
channel and iSCSI.
The move to iSCSI did raise questions, notes Kory Kitowski,
the bank’s vice president of IT. For example, engineers
from Microsoft and other vendors weren’t familiar with
iSCSI, so they questioned unfamiliar server and SAN settings
when installing or troubleshooting their own products.
Internally, despite having IP-savvy IT staff, the bank still
needed to reeducate the storage administrators. “We went
through a major paradigm shift,” Kitowski says.
But the result was a 30 percent overall savings compared with what they had expected to spend using traditional SANs, Peterson says.
Even within large enterprises, there’s no longer a
need to rely solely on fibre channel, says ChoicePoint’s
Garrison, who uses either iSCSI or fibre channel, based on the
specific storage’s availability needs.
Prepare for the Next Wave
As enterprises get these structural changes in place, both
Simpson and Buchanan advise that, for further savings, CIOs
should begin looking at two emerging technologies: network
storage virtualization and single-instance storage. Network
storage virtualization moves management out of the arrays and
other disk hardware, and implements it as part of the
SAN’s operating environment. This lets IT treat all the
disks as a virtual resource pool.
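The pooling idea can be sketched in a few lines. This toy model (class and array names are our own, not any vendor's product) shows the key property: the requester asks the virtualization layer for capacity and never needs to know which physical array backs the volume:

```python
# Toy sketch of the pooling idea behind network storage virtualization:
# the virtualization layer hides which physical array backs each volume.
class StoragePool:
    def __init__(self):
        self.arrays = {}  # array name -> free capacity in TB

    def add_array(self, name: str, capacity_tb: float) -> None:
        self.arrays[name] = capacity_tb

    def provision(self, size_tb: float) -> str:
        """Carve a volume out of whichever array has room;
        the requester never needs to know which one."""
        for name, free in self.arrays.items():
            if free >= size_tb:
                self.arrays[name] = free - size_tb
                return name
        raise RuntimeError("pool exhausted")

pool = StoragePool()
pool.add_array("array_a", 10.0)   # hypothetical array names
pool.add_array("array_b", 50.0)
backing = pool.provision(25.0)    # only array_b has room
print(f"volume provisioned from: {backing}")
```

Real implementations also handle striping, migration between arrays and failover, which is where the management complexity Fox describes below comes from.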
Single-instance saves on storage by keeping just one copy of
data in your frontline systems (such as application servers),
substituting pointers to the source for any copies, while the
related deduplication technology saves just one copy of a file
or data block during backup or archiving and substitutes
pointers for any later copies found. Long available for e-mail
servers, single-instance technology is becoming available as a
feature both in backup and archival systems and in frontline
storage systems, notes Burton Group’s Simpson.
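The pointer mechanism behind deduplication can be illustrated with a toy content-addressed store (the block size and function names here are our own, not any vendor's implementation): each unique block is stored once, keyed by its hash, and repeats become pointers.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed-size blocks

store = {}  # block hash -> block contents (each unique block kept once)

def dedup_write(data: bytes) -> list[str]:
    """Split data into blocks; store each unique block once and
    return a list of pointers (hashes) that can reconstruct it."""
    pointers = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        key = hashlib.sha256(block).hexdigest()
        store.setdefault(key, block)  # skip blocks already stored
        pointers.append(key)
    return pointers

def dedup_read(pointers: list[str]) -> bytes:
    """Reassemble the original data by following the pointers."""
    return b"".join(store[key] for key in pointers)

# Two backups of mostly identical data share nearly all their blocks:
backup1 = dedup_write(b"A" * 8192 + b"unique tail 1")
backup2 = dedup_write(b"A" * 8192 + b"unique tail 2")
print(f"blocks stored: {len(store)}, pointers written: {len(backup1) + len(backup2)}")
```

The data-loss worry Fox raises below follows directly from this design: if the single stored copy of a popular block is lost or corrupted, every file that points to it is affected.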
But several factors limit these technologies’
adoption, says Gary Fox, national practice director for the
consultancy Dimension Data.
Fox says that network storage virtualization technology
proves complex to manage, despite vendors’
characterization of it as plug-and-play.
As for single-instance storage technology, data loss worries
surround the pointer approach; most companies are in pilot mode
for it, Fox says. Also, the technology comes primarily from
startup vendors, though Fox expects that to change. Still,
despite its nascency, “We see a lot of interest from
clients,” he says. After all, they also foresee continued
unbridled storage growth.
Galen Gruman is a frequent contributor to CIO. You can reach
him at email@example.com.