Before you can even think about getting more value from your storage dollar, you need to have a strategy in place. That became clear to David Corwin, senior director of technology services at Yellow Technologies, the IT division of transportation company Yellow Corp. During the past 18 months, his group has worked to recast existing storage policies into a strategy aimed squarely at future corporate needs. For Yellow, that meant understanding where storage needs were growing, which were growing fastest (collaborative environments such as e-mail and file/print services, as it turned out) and what business needs were driving that growth. One result of the strategy was the decision to consolidate three storage area networks into one. “Three SANs is three times the administration [costs],” says Corwin. Sometime before the end of 2003, Yellow will further define its policies for retaining data as part of its overall strategy.
Jamie Gruener, a senior analyst at the Yankee Group, notes that in the ’90s many IT execs bought storage infrastructure without a master plan to guide them. “We’re no longer in a period of time when you can’t have some sort of strategic planning initiative in place,” he says. That means assessing what you have, doing an annual forecast that includes capacity needs, figuring out what department or unit is consuming what amount of storage, having a backup and recovery plan, and deciding on the best management tools.
Create a Dedicated Team
Two years ago, execs at Alliant Energy, a midwestern energy provider, challenged a group selected from its Intel server, Unix and database administration teams to find common solutions to storage problems. The team consolidated disk space using its SAN and developed processes for managing the company’s backup and recovery. “Before, all server administrators needed to be storage experts. Now a few people manage all storage, and they’re the experts,” says Gregg Lawry, Alliant’s IT managing director. The result has been a reduction in the number of server admins who need to worry about storage from 15 to three and, thanks to disk space consolidation and other money-saving decisions, a reduction of 58 percent in Alliant’s unit price for storage.
At Paccar, a global truck manufacturer, storage responsibilities are being shifted from server teams to a two-person storage management team. “You’ve got to peel somebody off and say, ’Your job is to manage storage across the whole organization,’” says vice president and CIO Patrick Flynn. At Paccar, project managers know they need to sit down with a storage resource manager to think about file sizes, backup frequencies and data security issues. With common processes in place, Paccar has saved money by consolidating servers, reducing the amount of direct-attached storage (DAS) and utilizing capacity more efficiently.
DAS connects storage resources directly to a single server. A storage area network pools storage on a dedicated network shared by many servers. And increased asset utilization can be a quick SAN benefit. “So if you go from a 40 percent rate on DAS, by pooling storage on a SAN, you may get a 60 percent utilization rate. When dealing with terabytes of storage, that’s a lot of money,” says Phil Goodwin, senior program director at Meta Group. He also says SANs can help increase organizational agility by making it easier to redeploy storage resources from one application to another.
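Goodwin’s utilization math is easy to sanity-check. The sketch below uses only his 40 and 60 percent rates; the capacity and cost-per-terabyte figures are hypothetical, for illustration:

```python
# Back-of-the-envelope comparison of DAS vs. pooled SAN utilization.
# Only the 40% and 60% rates come from Goodwin's example; the data
# volume and dollar figures are hypothetical.

def raw_capacity_needed(used_tb: float, utilization: float) -> float:
    """Raw capacity you must buy to hold `used_tb` at a given utilization rate."""
    return used_tb / utilization

used_tb = 20.0            # actual data to store (hypothetical)
cost_per_raw_tb = 10_000  # hypothetical dollars per raw terabyte

das_raw = raw_capacity_needed(used_tb, 0.40)  # 50 TB of raw disk on DAS
san_raw = raw_capacity_needed(used_tb, 0.60)  # ~33.3 TB once pooled on a SAN

savings = (das_raw - san_raw) * cost_per_raw_tb
print(f"DAS raw: {das_raw:.1f} TB, SAN raw: {san_raw:.1f} TB, "
      f"capital avoided: ${savings:,.0f}")
```

At these assumed numbers, pooling avoids roughly a third of the raw disk purchase, which is the “lot of money” Goodwin is pointing at.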
Another benefit of SAN technology is faster application development testing. Alliant’s Lawry says that in his company’s previous DAS environment, after running a test, it could take a day or two to restore the data attached to the server and set up the environment again. With a SAN, multiple versions of data can be replicated in a short period of time so that developers can run parallel tests without having to restore data.
At Denver Health Hospital Authority, which provides care to 30 percent of Colorado residents, CTO Jeff Pelot uses two SANs, from EMC and LeftHand Networks, to maintain system availability—which can mean the difference between life and death. He says that with its old DAS setup, the medical records system once went down for 36 hours. “The restore was incredibly difficult,” he says. “With SAN, our data is pretty well protected at any given time. If there’s any outage, we can get back to a point in time, say an hour before the failure, where the data was synched.”
SANs aren’t perfect, of course. DAS can be superior for high-level security purposes (a nuclear power plant might want to physically isolate data on DAS) or if a data warehouse is attached to a single server (in which case SAN connectivity doesn’t buy you anything). And, of course, SANs still require an up-front investment, which may be a hard sell given tight IT budgets.
Storage resource management (SRM) tools can provide some clarity in a complex environment. “SRM software is one of the best ways to look at capacity—who’s consuming it, and who last accessed it,” says Gruener.
Goodwin likes SRM tools because they can help identify duplicate and obsolete copies of data, which can slow storage growth. He says that the real culprit in growth is not primary storage (such as transactional data) but secondary storage requirements, such as duplications for backup, disaster recovery and data mining. According to Meta Group research, secondary storage requirements will exceed primary by seven to 15 times through 2008. “[SRM] is really a storage reporting tool,” Goodwin says. “You have to understand what you’ve got and how it’s used before you can make a decision on how to improve it.”
Classify Your Data
Companies are sitting on mountains of data, especially in recent years with the growth of data-intensive ERP and CRM systems, newsfeeds, Web-based marketing programs and the like. Adding to the information explosion are regulatory requirements, such as the Health Insurance Portability and Accountability Act. Richard Scannell, vice president of corporate development and strategy at GlassHouse Technologies, a storage analyst company, says that reference data—data about data—outstrips the amount of original data being created. He recommends segmenting data into two or three discrete tiers. For example, 20 percent of a company’s data might be deemed critical; 30 percent very important but could be lived without for eight hours; and the remainder necessary to keep for regulatory purposes, but a company could wait for three days to recover it from tape.
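Scannell’s tiering example can be written down as a simple table. In this sketch the tier names, the helper function and the 100-terabyte total are hypothetical; the percentage splits and recovery windows mirror his example:

```python
# A minimal sketch of the two- or three-tier data classification Scannell
# describes. Tier names and the capacity_by_tier helper are hypothetical;
# the shares and recovery targets follow the article's example.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    share_of_data: float      # fraction of total data in this tier
    max_recovery_hours: float # how long the business can wait for a restore

TIERS = [
    Tier("critical", 0.20, 0.0),   # must be continuously available
    Tier("important", 0.30, 8.0),  # can be lived without for eight hours
    Tier("archival", 0.50, 72.0),  # kept for regulatory reasons; restore from tape
]

def capacity_by_tier(total_tb: float) -> dict:
    """Split total capacity across tiers by the assumed shares."""
    return {t.name: total_tb * t.share_of_data for t in TIERS}

print(capacity_by_tier(100.0))
```

The point of the exercise is that each tier can then be mapped to a different (and differently priced) storage and recovery technology.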
At Alliant, Louis Chiang, manager for IT applications hosting, says his company is classifying data now and hopes to do more in the future. He cites a customer support application whose storage resided on a two- or three-year-old disk array that the company recently upgraded. Instead of being discarded, the older disk now stores less critical file and print applications.
Know Your Customer
As CIO at Case Western Reserve University, Lev Gonick deals with customers—faculty and research administrators—who can be downright prickly when it comes to data ownership. About a year ago, Gonick began moving from a highly distributed DAS environment to a highly centralized, more secure SAN architecture. As part of that security, Gonick thought it made sense to have the SAN management team be responsible for restoring enterprise data that had been deleted. But Gonick says many of the faculty felt ownership of the data and wanted the ability to restore it themselves. So Gonick decided to allow faculty and their research administrators to access servers with digital IDs, even though it increased security risks to some extent. The value? “Significantly fewer Tylenol 3 headaches” from dealing with peeved users, says Gonick.
Measure Your Decisions
Make sure decisions take into account your favored metrics, whether total cost of ownership (TCO), ROI or something else. In terms of TCO, Randy Kerns, senior partner at storage analyst company Evaluator Group, says the dominant metric should not be the total amount of storage, but the amount of managed storage, or capacity per administrator. That number can affect decisions about such issues as utilization. He says that SRM software vendors will tell you you’re only using, say, 40 percent of your storage and that their product could push that to 60 percent. But Kerns advises that increasing your utilization rate may not make economic sense if, for example, it would require a higher cost to manage that capacity.
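Kerns’s caution reduces to a break-even test: the capital saved by higher utilization has to exceed whatever it costs to manage the denser configuration. A rough sketch, with every dollar figure hypothetical:

```python
# Kerns's utilization caution as arithmetic: raising utilization from 40%
# to 60% avoids capital spend, but only nets out if the extra management
# cost is smaller than the savings. All figures here are hypothetical.

def net_benefit(used_tb, util_before, util_after, cost_per_tb, extra_mgmt_cost):
    """Capital avoided by higher utilization, minus the added management cost."""
    capital_saved = (used_tb / util_before - used_tb / util_after) * cost_per_tb
    return capital_saved - extra_mgmt_cost

# Worth doing when the management overhead is modest...
print(net_benefit(20, 0.40, 0.60, 10_000, 50_000))   # positive

# ...but not when chasing utilization requires expensive tooling and staff.
print(net_benefit(20, 0.40, 0.60, 10_000, 200_000))  # negative
```

The same structure applies to any vendor claim: plug in your own management costs before accepting the headline utilization number.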
Another metric he advocates is time-to-deployment—the time it takes from the moment you need more storage to the time it goes live. “If it takes two weeks, how much value have I lost?” he asks.
Get a Sensible Recovery Plan
What is the dollar value associated with time to recovery? How long will it take to get systems back up, and how much is that time worth? Those are a few of the questions you should ask as you put a plan in place, says Kerns. “If I’m back in business in two hours, it will cost me X amount of revenue. If eight hours, I’ve lost revenue and maybe lost customers. In two days, maybe the survival of the company comes into play. The value of knowing that tells you the importance and expenditures you need to make to implement a [storage] solution,” he adds.
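Kerns’s questions translate directly into arithmetic. A minimal sketch, assuming a hypothetical revenue-at-risk figure; only the two-hour, eight-hour and two-day scenarios come from his example:

```python
# Kerns's downtime-cost reasoning as a calculation: revenue at risk at
# each recovery-time target. The hourly revenue figure is hypothetical.

REVENUE_PER_HOUR = 50_000  # hypothetical dollars of revenue at risk per hour

def downtime_cost(hours_down: float) -> float:
    """Revenue lost while systems are unavailable."""
    return hours_down * REVENUE_PER_HOUR

for hours in (2, 8, 48):  # two hours, eight hours, two days
    print(f"{hours:>2} hours down -> ${downtime_cost(hours):,.0f} at risk")
```

The output of this kind of model is what justifies (or caps) the budget for a given recovery solution.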
GlassHouse’s Scannell agrees that the cost of a recovery or backup strategy should be measured. He cites one customer who had a tape-based backup environment with multiple libraries of tapes it didn’t need. Hundreds of tapes were only 10 percent to 30 percent full because of a configuration option that was chosen when the tapes were purchased. By simply tweaking the configuration, fewer tapes went offsite, reducing hardware and processing needs and requiring fewer people to manage the process. Total savings: $1 million.
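The tape savings Scannell describes follow from simple fill-rate arithmetic: the same backup set on better-filled cartridges needs far fewer of them. In this sketch the backup-set size, cartridge capacity and corrected fill rate are hypothetical; only the 10-to-30-percent-full figure comes from his example:

```python
# Tape-consolidation arithmetic behind Scannell's example. Data volume,
# cartridge size and the post-tweak fill rate are hypothetical; the
# 10%-30% fill figure is from the article.

import math

def tapes_needed(data_tb: float, tape_capacity_tb: float, fill_rate: float) -> int:
    """Cartridges required when each tape is filled to `fill_rate` of capacity."""
    return math.ceil(data_tb / (tape_capacity_tb * fill_rate))

data_tb, tape_tb = 40.0, 0.2  # hypothetical backup set and cartridge size

before = tapes_needed(data_tb, tape_tb, 0.20)  # tapes averaging 20% full
after = tapes_needed(data_tb, tape_tb, 0.90)   # after the configuration tweak
print(before, after)
```

Fewer cartridges means fewer offsite shipments, less hardware and fewer people, which is where the seven-figure savings came from.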
Scannell also says the area of data replication may be ripe for potential cost savings. Say a company has two data centers with data copied automatically—in real-time—between each center. Yet the company also employs a snapshot solution that replicates data every 15 minutes. Does it still make sense to take snapshots as often when the data is also replicated in the two data centers? “These questions of policy and the domino effect they have are very poorly understood,” Scannell says.
Look for Creative Ways to Reduce Costs
There may be a number of ways to save money that you haven’t had the time, resources or brainpower to consider. Yellow Technologies now acquires storage in larger portions, in a single procurement cycle per year instead of several, says Corwin. Instead of buying 1 terabyte four times a year, for example, the company buys 4 terabytes once a year. That allows Yellow to leverage price breaks from its vendors, and it reduces the overhead of running multiple procurement cycles. “Four RFPs a year is quite time consuming,” he says.
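The procurement math is straightforward. A sketch with hypothetical prices and per-cycle overhead; only the one-terabyte-times-four versus four-terabytes-once pattern comes from Corwin:

```python
# Rough model of Yellow's annual vs. quarterly buying. The per-terabyte
# prices and RFP overhead are hypothetical; the purchase pattern
# (1 TB x 4 vs. 4 TB x 1) is from the article.

QUARTERLY_PRICE_PER_TB = 12_000  # hypothetical list price for a small buy
ANNUAL_PRICE_PER_TB = 10_500     # hypothetical volume-discounted price
RFP_OVERHEAD = 8_000             # hypothetical internal cost per procurement cycle

quarterly_total = 4 * (1 * QUARTERLY_PRICE_PER_TB + RFP_OVERHEAD)
annual_total = 4 * ANNUAL_PRICE_PER_TB + RFP_OVERHEAD

print(f"Four 1 TB buys: ${quarterly_total:,}; one 4 TB buy: ${annual_total:,}; "
      f"saved: ${quarterly_total - annual_total:,}")
```

Both levers show up in the total: the volume discount on the hardware and the three avoided RFP cycles.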
By paying attention to his faculty, Case Western’s Gonick tries to figure out how much storage he’ll need at a future date. If he needs 10 terabytes of storage now, but thinks that he may need five more terabytes in a year, he gets his vendor to commit to one price in advance. “We have an option to scale at the same price point,” he says.
Goodwin also advises charging storage costs back to the business units. “When there’s no relationship between cost and consumption, there will be unlimited demand for consumption,” he says.
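In practice, chargeback can start as nothing more than a rate multiplied by measured consumption per unit. A minimal sketch along the lines Goodwin suggests, with hypothetical rates, department names and usage figures:

```python
# A minimal storage chargeback sketch: bill each business unit for the
# capacity it consumes. The rate, unit names and usage are hypothetical.

RATE_PER_TB_MONTH = 500  # hypothetical internal rate, dollars per TB per month

usage_tb = {"marketing": 12.0, "engineering": 30.0, "finance": 8.0}

chargeback = {unit: tb * RATE_PER_TB_MONTH for unit, tb in usage_tb.items()}
print(chargeback)
```

Even a crude rate like this restores the cost-consumption link Goodwin describes; the usage numbers themselves would come from SRM reporting.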
Buying the cheapest hardware or software may save you money up front, but how does it fit into your long-term storage plan? The majority of your storage budget is spent on administration, not product; and hardware and software that costs less initially might cost significantly more down the line. Ultimately, says Evaluator’s Kerns, the business requirements must drive the storage purchase. “Maybe the cost of implementing fibre channel versus IP [channel] is twice as much money. But if you’re worried about that, you better find a new job because it’s a bigger picture issue,” he says.
Storage must be managed like a resource, says Paccar’s Flynn, so you need to invest in the people and tools to best manage it. Now is a great time to negotiate with vendors, he says, noting that there’s probably never been a better time to get them to compete for your business. “We can continue to invest and drive out some screaming deals,” he says.
Yes, rock-bottom bargains are a good thing. But if you don’t have a handle on what you’re buying and why, today’s deals are doomed to failure. Make sure you have a storage management strategy in place and people dedicated to carrying it out. Then apply the rest of these tips to wring the most value from your storage investments.