by Nick Booth

Portable datacentres – a moveable feast?

Feature
Sep 16, 2010
IT Leadership, IT Strategy, Telecommunications Industry

If a week is a long time in politics, 18 months is an eternity in IT, where there seems to be a paradigm shift every week.

Yet that is the length of time it takes most enterprises to build a datacentre from scratch. If they’re lucky. In many cases it can take up to three years to get all the heating, lighting, air conditioning, power supplies and backup generators in place.

“The biggest challenges when building a new datacentre are the necessary capital expenditure and long lead times,” says CIO Robin Balen, who ordered a datacentre for network services provider Gyron Internet. “Developing an end-to-end solution is expensive and takes up to two years when you consider all the design and build requirements.”

You have to know your way around the utility providers and show you can generate your own electricity if needed, which can be a noisy business, he says. “Seeking planning permission is a drawn-out and complicated process,” says Balen.

When you finally find a site where the council doesn’t object to you constructing a noisy, polluting drain on local power and water supplies, you can then start to create a datacentre in a building that was unlikely to have been designed for creating hot and cold aisles and housing thousands of cables and heavy machinery.

You will then spend the next 18 months in endless meetings with highly expensive engineers who specialise in disciplines you will never understand. You will have to pay them to be on site even when they’re unable to work, as they wait for someone else to complete their stage of construction.

Small wonder, then, that datacentre construction projects are among the jobs all CIOs dread, and that they frequently run over budget. But what are the alternatives? Outsourcing? Containers? Modules? The pros and cons of outsourcing are well documented. A containerised datacentre offers an almost instant solution but restricts your choice, whereas the new modular datacentres promise greater compatibility and give you more say over the contents of the IT inventory. But do they save you any money?

As ever, the whole process of choice has been nebulised by a series of smoke-screen-creating IT vendors anxious to bamboozle the CIO. The conflicting claims and counter claims make it harder to rationalise each approach.

Analyst Clive Longbottom, senior researcher at Quocirca, says it’s all a bit of a grey area. “A containerised datacentre is pretty much a modular datacentre. But a modularised datacentre doesn’t have to be a containerised one,” he says.

A containerised system, such as those offered by SGI and Sun Microsystems, comes in a big metal box and you have to take what you’re given. A modular datacentre can be tailored, so you can configure it how you like. Colt, for example, is building data halls in 500m² units that can be built offsite and delivered in three months. Each unit can be used as an independent datacentre, or they can be aggregated to form a bigger system. You can even plonk them on top of each other, like the old stackable hubs that became a popular networking commodity. The rationale is that they are open and scalable.

The target is a datacentre model that can be extended just by adding further modules. For a containerised system, the module is of a defined size – a half or full container. For a modular datacentre, it can be down to a portion of a rack, a full rack, a row or a series of rows, depending on the level of granularity required. By creating a known building block, the idea is that a datacentre remains homogeneous, that its management is simpler, and that growth is controlled, says Longbottom.
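To make the building-block idea concrete, here is a minimal sketch in Python; the rack counts, server counts and power figures are invented purely for illustration and do not come from any vendor quoted here. The point is simply that once a module is defined, capacity planning becomes a matter of counting modules.

```python
# Illustrative sketch of the "known building block" idea: define one module's
# capacity and power envelope once, then scale the facility by adding modules.
from dataclasses import dataclass

@dataclass
class Module:
    racks: int              # racks per module (hypothetical figure)
    servers_per_rack: int   # identical servers in every rack
    kw_per_rack: float      # design power load per rack, in kW

    @property
    def servers(self) -> int:
        return self.racks * self.servers_per_rack

    @property
    def power_kw(self) -> float:
        return self.racks * self.kw_per_rack

def plan(module: Module, count: int) -> dict:
    """Aggregate capacity for a datacentre built from `count` identical modules."""
    return {"servers": module.servers * count, "power_kw": module.power_kw * count}

# Hypothetical module: 10 racks, 40 servers and 8kW per rack; grow by adding modules.
print(plan(Module(racks=10, servers_per_rack=40, kw_per_rack=8.0), count=1))
# {'servers': 400, 'power_kw': 80.0}
print(plan(Module(racks=10, servers_per_rack=40, kw_per_rack=8.0), count=4))
# {'servers': 1600, 'power_kw': 320.0}
```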

“All fine and dandy, until the same issues that dogged the drive towards a standard hardware configuration for desktops in the 1990s creep in,” he adds. “You need to grow, and you order another module. The module turns up and you bung it in and off you go. But something goes wrong, and you take the same steps as you have always done before – and yet you run into problems.”

Why? Because the vendor has stopped making all the bits that you defined as your master model. The 320GB hard disk drives have had to be replaced with 1TB ones, because the older drives are now too expensive to source. The connectors have moved from SATA II to SAS. The CPU is 4GHz rather than 3.2GHz, and the NIC is 10/100/1000/10000Mbit rather than 10/100/1000Mbit.

It’s inevitable, says Longbottom, and it limits the practicality of a modular datacentre. But as long as your requirements and expectations are realistic, it’s still far better than the complete and utter chaos of an old-style datacentre, where new kit was bought willy-nilly and just managed as sets of individual assets.

Modules can at least be defined to meet specific workloads and can provide known physical dimensions and heat and power loads. By defining these parameters and telling the vendor exactly what you want in terms of energy consumption, processing power, cooling and long-term compatibility, you’ve cracked the idea of a modular system.

Containers, on the other hand, also free you from the constraints of the old datacentre. And they’re a lot faster to order, although this can go disastrously wrong.

Daryl Cornelius, a director at testing company Spirent, explains how datacentres are lethal if you don’t keep them under control. Spirent makes testing systems that measure applications and networks.

“We were called on by one CIO, whose hair was on fire. His servers were failing to deliver the processing power, so he’d had a containerised datacentre installed, but it had made no difference,” says Cornelius.

Containerised datacentres don’t come cheap, and neither do the cranes that are used to deliver them. So you can imagine the poor man’s pain when he discovered his bosses’ financial services applications were plodding along, as slow as ever.

A quick test by Spirent revealed the answer. “It was an optimisation problem, nothing to do with the servers, but the bandwidth,” says Cornelius. Once you have installed a containerised datacentre, there’s not a huge amount you can do with it. “The company ended up spending a fortune on an asset it can’t re-use and won’t get a refund on,” he says.

Everyone makes the odd purchasing mistake, and the City is littered with ghost servers that were bought in haste but sit on a network doing nothing. Sadly a containerised datacentre is not a mistake you can easily hide.

It’s not really the purchasing costs that hurt, though, but the running costs. Here containers have their advantages, explains David Galton-Fenzi, director of sales and marketing at Zycko, which deals in SGI containerised datacentres. “They’re much more intense and you can pack as much computing power into 20m² of a container as you’d get into a 3,000 sq ft data hall. That’s a lot less air to cool,” he says.
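A quick back-of-the-envelope check of that quote, using only the two figures Galton-Fenzi gives plus a standard unit conversion, shows the scale of the difference (air volume also depends on ceiling height, which this ignores):

```python
# Rough floor-area comparison behind the quote above (illustrative only):
# the same compute in ~20 m² of container versus a 3,000 sq ft data hall.
SQ_FT_PER_SQ_M = 10.764

hall_m2 = 3000 / SQ_FT_PER_SQ_M   # ~279 m²
container_m2 = 20.0

print(f"Data hall: {hall_m2:.0f} m², container: {container_m2:.0f} m²")
print(f"Roughly {hall_m2 / container_m2:.0f}x less floor area to cool in the container")
# Data hall: 279 m², container: 20 m² -> roughly 14x less floor area
```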

The other positive is that at end of life you just take the container out and plug in a replacement. Old-style datacentres involve a high degree of decommissioning – and little can be done until decommissioning has been completed.

Cross-vendor compatibility is also an issue, warns Chris Smith, marketing director at infrastructure optimisation specialist on365. “They might offer high densities and efficient operating costs, but only take one vendor’s equipment,” says Smith. “They’re restrictive: it’s SGI IceCube versus the Sun Blackbox.”

An important deciding factor is the power usage effectiveness (PUE) rating. SGI claims its IceCube can offer 15 petabytes of storage at a PUE of 1.12, while the Colt modular data halls are rated at 1.2 on the PUE scale.
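For readers unfamiliar with the metric, PUE is total facility power divided by the power delivered to the IT equipment, so a rating of 1.0 would mean no cooling or distribution overhead at all. The sketch below assumes a notional 500kW IT load purely for illustration; only the two PUE figures come from the vendors’ claims above.

```python
# PUE = total facility power / power delivered to IT equipment.
# The 500kW IT load is an assumption for illustration; only the PUE figures
# come from the vendor claims quoted in the article.
it_load_kw = 500.0

for label, pue in [("SGI IceCube (claimed)", 1.12), ("Colt modular hall (rated)", 1.20)]:
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw
    print(f"{label}: total draw {total_kw:.0f}kW, non-IT overhead {overhead_kw:.0f}kW")

# At 500kW of IT load: PUE 1.12 -> 60kW of overhead; PUE 1.20 -> 100kW.
```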

Ultimately, the best long-term objective could be to create your own datacentre. It is a compelling solution for a business with a large storage requirement that can accept any brand of device. It would also be a great building block for the apparently imminent era of cloud computing.