More than half of U.S. companies have plans to relocate or expand their data centers. Here's how to avoid the five mistakes that can turn your data center relocation into a disaster.

No C-level executive, whether the CIO or the CFO, wants to invest in the company's data center, especially not now, when the economy is executing an almost perfect swan dive into an Olympic-sized recessionary pool. But an optimally (or even adequately) functioning data center is not a luxury; it's a business necessity. If it ain't right, it's got to be fixed.

And chances are, your company's data center is not right. According to a study conducted by the AFCOM Data Center Institute, an organization for data center professionals, a majority of U.S. companies (53 percent) expect to relocate or expand their data centers during the next several years. Nearly one-third say they will need to move, while 45 percent expect to make major improvements to their existing facilities.

What's wrong with data centers today? What isn't? They're old: at a 2007 Gartner conference, a third of the attendees said their data centers were seven years old or older, meaning they weren't designed for the power and cooling needs of today's high-density servers. Their TCO is growing at twice the rate of most companies' revenues. And because of the growing amount of data being collected, stored, and processed, they're often housed in facilities that, while perhaps suitable five years ago, cannot be upgraded today. It's no surprise that companies such as Alcatel-Lucent are doing radical makeovers of their data center strategy, as profiled by CIO.com earlier this year.
So, whether you want to or not, you're going to have to move, consolidate, or redesign your data center sooner rather than later, and you want to do it as well and as cost-effectively as possible. You certainly don't want it to turn into a disaster like the one that befell the state of Oregon.

Oregon's Data Center Nightmare

In 2004 the State of Oregon launched an initiative to consolidate the data centers of 12 state agencies and their approximately 1,700 servers into a single, new, Tier 3 facility. Oregon wanted to reduce the number of servers and operating systems it supported (thereby lowering hardware, licensing, and management costs), offer new and better service level agreements, improve the state's disaster recovery capability, allow for growth and technological advances, and ensure better data security.

The state's new site was completed in January 2006 at a cost of $20 million. A year later, 11 agencies had been migrated to the new facility, at a cost of $43 million. At $25,000 per relocated server, or about $4 million per agency, the move's cost was astounding, and it gets worse. In July 2008, the state issued a report concluding that only 70 of the 1,700 servers had in fact been eliminated, and that new service level agreements had not been provided. Data security was so poor that the Department of Education could not move into the new facility because it failed to meet federal privacy regulations, and another agency had to move back into its old center because the new center's power supply was inadequate. As for the projected cost savings, who knew?

What went wrong? Oregon had tumbled into almost all of the pitfalls that can ruin a data center relocation and consolidation.

The Five Pitfalls and How to Avoid Them

1. Poor Planning: Oregon's project technology administrator admitted that the relocation plan underestimated the number of servers the new facility would have to accommodate.
Underestimating the complexity of data center moves (the time they will take, the skills required to do the job, the hardware needed) is more the rule than the exception when it comes to relocation and consolidation. This is especially true when it comes to accounting for application dependencies. In the heterogeneous environments that characterize most IT application portfolios and infrastructures, with their many bolt-ons and homegrown systems developed over the years, no software tool exists today that can see all of the interdependencies. To account for them, the knowledge has to be collected from the people managing the applications, a time-consuming task. Indeed, moving a data center the right way places a large burden on IT departments that are already fully utilized and often overbooked. In our experience, it's wise either to dedicate a full-time team to planning the move or to look outside the organization for professional help.

2. Underestimating Power Requirements: The Oregon project administrator allowed that the electrical power the facility was designed to provide, 55 watts per square foot, was too low. Data centers built for today's equipment range from 150 to 300 watts per square foot. IT professionals frequently underestimate power requirements, and power costs, particularly if facilities management pays the bills, as is typically the case. In a recent survey, 68 percent of IT managers said they were not responsible for power bills related to their data center's IT equipment. It is important to make sure facilities and IT talk about their respective issues so that they gain an appreciation for each other's perspectives and areas of expertise. This is the only way to prevent their issues from turning into problems, and their problems from turning into data center relocation and consolidation disasters.

3. Failure to Establish Pre-Move Baselines: It was difficult for Oregon to determine whether the agencies it had moved were realizing any of the cost reductions originally sought because "the baseline data provided by the agencies before the consolidation was either grossly understated or nonexistent." It's an old saw that you can't improve what you can't measure. A corollary is that you can't compare one thing to another if you don't know what the first thing was. Know your current data center TCO and have the numbers in hand before moving into your new facility, or risk opening yourself up to ceaseless finger-pointing and complaining.

4. Upgrading Systems During the Move: Oregon consolidated its facilities before "the underlying architecture, standards, and licensing issues had been worked out." In our experience, any change undertaken during a move adds risk and complicates the project. This is especially significant given today's popular practice of using a data center move or consolidation to drive server virtualization. Although worthwhile, virtualization is a significant project in itself, and attempting to implement it during a move means trying to do two very difficult things at the same time, a sure recipe for disaster. In short, minimize changes during the move planning and execution periods: don't switch vendors, and certainly don't virtualize. The one exception is that it often pays to re-IP and purchase new networking gear before the move, which saves the effort of installing new gear at the new site during the move itself.

5. There's No Substitute for Experience: Because a data center move is generally a once-in-a-career event for IT professionals, few companies have the expertise on hand to do it well. Very high-density power and cooling environments require specialized expertise and coordination.
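To see how much the power-density gap described in pitfall 2 matters, here is a minimal back-of-the-envelope sketch. The 10,000-square-foot floor area is a hypothetical figure chosen for illustration, not a number from the Oregon report; only the watts-per-square-foot values come from the article.

```python
# Compare total design power capacity at the density Oregon built for
# (55 W/sq ft) versus the 150-300 W/sq ft range cited for data centers
# built for today's equipment. Floor area is a hypothetical example.
FLOOR_AREA_SQFT = 10_000

def total_capacity_kw(watts_per_sqft: float, area_sqft: float) -> float:
    """Total design power capacity in kilowatts for a given density."""
    return watts_per_sqft * area_sqft / 1_000

oregon_design = total_capacity_kw(55, FLOOR_AREA_SQFT)    # 550 kW
modern_low = total_capacity_kw(150, FLOOR_AREA_SQFT)      # 1,500 kW
modern_high = total_capacity_kw(300, FLOOR_AREA_SQFT)     # 3,000 kW

print(f"55 W/sq ft  -> {oregon_design:,.0f} kW total capacity")
print(f"150 W/sq ft -> {modern_low:,.0f} kW total capacity")
print(f"300 W/sq ft -> {modern_high:,.0f} kW total capacity")
```

At the same floor area, a facility designed to the low end of the modern range needs roughly three times the power capacity Oregon planned for, and the high end nearly six times, which is why a density assumption baked into the building's electrical design is so hard to fix after the fact.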
Unfortunately, IT knowledge does not translate into an understanding of how to move a data center, nor does knowledge of facilities (and operations) translate into an understanding of the singular requirements of today's data centers, not to mention tomorrow's. Experience counts. If your organization has someone with the requisite experience, get him or her on the moving team. If it doesn't, find someone who does.

In this economy especially, it is critical that data centers both facilitate current operations and provide the flexibility for future business growth. A botched move can stop an enterprise dead in its tracks; a poorly managed one can force an organization to incur the expense of moving again far too soon. Avoiding these five pitfalls won't ensure success, but it's a good way to start preventing disaster.

Michael Bullock is the founder and CEO of Transitional Data Services, a Boston, MA-based consulting firm that provides data center design, construction, and relocation services for large-scale, ultra-high-density data centers. Prior to starting TDS, Bullock held executive leadership positions at Student Advantage, CMGI, and Renaissance Worldwide.