If a week is a long time in politics, 18 months is an eternity in IT, where there seems to be a paradigm shift every week.

Yet that is the length of time it takes most enterprises to build a datacentre from scratch. If they’re lucky. In many cases it can take up to three years to get all the heating, lighting, air conditioning, power supplies and backup generators in place.

“The biggest challenges when building a new datacentre are the necessary capital expenditure and long lead times,” says CIO Robin Balen, who ordered a datacentre for network services provider Gyron Internet. “Developing an end-to-end solution is expensive and takes up to two years when you consider all the design and build requirements.”

You have to know your way around the utility providers, and show you can generate your own electricity if needed, he says, which can be noisy. “Seeking planning permission is a drawn-out and complicated process,” says Balen.

When you finally find a site where the council doesn’t object to you constructing a noisy, polluting drain on local power and water supplies, you can then start to create a datacentre in a building that was unlikely to have been designed for creating hot and cold aisles and housing thousands of cables and heavy machinery.

You will then spend the next 18 months in endless meetings with highly expensive engineers who specialise in disciplines you will never understand. You will have to pay them to be on site even when they’re unable to work, as they wait for someone else to complete their stage of construction.

Small wonder, then, that datacentre construction projects are among the jobs all CIOs dread, and that they frequently run over budget. But what are the alternatives? Outsourcing? Containers? Modules? The pros and cons of outsourcing are well documented.
A containerised datacentre offers an almost instant solution, but restricts your choice, whereas the new modular datacentres promise greater compatibility and give you more say over the contents of the IT inventory. But do they save you any money?

As ever, the whole process of choice has been nebulised by a series of smokescreen-creating IT vendors anxious to bamboozle the CIO. The conflicting claims and counter-claims make it harder to rationalise each approach.

Analyst Clive Longbottom, senior researcher at Quocirca, says it’s all a bit of a grey area. “A containerised datacentre is pretty much a modular datacentre. But a modularised datacentre doesn’t have to be a containerised one,” he says.

A containerised system, such as those offered by SGI and Sun Microsystems, comes in a big metal box and you have to take what you’re given. A modular datacentre can be tailored, so you can configure it how you like. Colt, for example, is building data halls in 500m² units that can be built offsite and delivered in three months. Each unit can be used as an independent datacentre, or they can be aggregated to form a bigger system. You can even plonk them on top of each other, like the old stackable hubs that became a popular networking commodity. The rationale is that they are open and scalable.

The target is a datacentre model that can be extended just by adding further modules. For a containerised system, the module is of a defined size – a half or full container. For a modular datacentre, it can be down to a portion of a rack, a full rack, a row or a series of rows, depending on the level of granularity required.
By creating a known building block, the idea is that a datacentre remains homogeneous, that its management is simpler, and that growth is controlled, says Longbottom.

“All fine and dandy, until the same issues that dogged the drive towards a standard hardware configuration for desktops in the 1990s creep in,” he adds. “You need to grow, and you order another module. The module turns up and you bung it in and off you go. But something goes wrong, and you take the same steps as you have always done before – and yet you run into problems.”

Why? Because the vendor has stopped making all the bits that you defined as your master model. The 320GB hard disk drives have had to be replaced with 1TB ones, as 320GB drives are just too expensive now. The connectors have moved from SATA II to SAS. The CPU is 4GHz, rather than 3.2GHz, and the NIC is 10/100/1000/10000Mbit, rather than 10/100/1000Mbit.

It’s inevitable, says Longbottom, but it limits the practicality of a modular datacentre. But as long as your requirements and perceptions are right, it’s still far better than the complete and utter chaos of an old-style datacentre, where new kit was bought willy-nilly and just managed as sets of individual assets.

Modules can at least be defined to meet specific workloads and can provide known physical dimensions and heat and power loads. By defining these parameters and outlining to the vendor exactly what you want in terms of energy consumption, processing power, cooling and sustainable compatibility, you’ve cracked the idea of a modular system.

Containers, on the other hand, also free you from the constraints of the old datacentre. And they’re a lot faster to order, although this can go disastrously wrong.

Daryl Cornelius, a director at testing company Spirent, explains how datacentres are lethal if you don’t keep them under control.
Spirent makes testing systems that measure applications and networks.

“We were called on by one CIO whose hair was on fire. His servers were failing to deliver the processing power, so he’d had a containerised datacentre installed, but it had made no difference,” says Cornelius.

Container datacentres don’t come cheap, and neither do the cranes used to deliver them. So you can imagine the poor man’s pain when he discovered his bosses’ financial services applications were plodding along, as slow as ever.

A quick test by Spirent revealed the answer. “It was an optimisation problem – nothing to do with the servers, but the bandwidth,” says Cornelius. Once you have installed a containerised datacentre, there’s not a huge amount you can do with it. “The company ended up spending a fortune on an asset it can’t re-use and won’t get a refund on,” he says.

Everyone makes the odd purchasing mistake, and the City is littered with ghost servers that were bought in haste but sit on a network doing nothing. Sadly, a containerised datacentre is not a mistake you can easily hide.

It’s not really the purchasing costs that hurt, though, but the running costs. Here containers have their advantages, explains David Galton-Fenzi, director of sales and marketing at Zycko, which deals in SGI containerised datacentres. “They’re much more intense and you can pack as much computing power into 20m² of a container as you’d get into a 3,000-foot data hall. That’s a lot less air to cool,” he says.

The other positive is that at end of life you just take the container out and plug in a replacement. Old-style datacentres involve a high degree of decommissioning – and little can be done until decommissioning has been completed.

Cross-vendor compatibility is also an issue, warns Chris Smith, marketing director at infrastructure optimisation specialist on365.
“They might offer high densities and efficient operating costs, but only take one vendor’s equipment,” says Smith. “They’re restrictive: it’s SGI IceCube versus the Sun Blackbox.”

An important deciding factor is the power usage effectiveness (PUE) rating – the ratio of total facility power to the power drawn by the IT equipment itself, where a figure closer to 1.0 is better. SGI claims its IceCube can offer 15 petabytes of storage at a PUE of 1.12, while the Colt modular data halls are rated at 1.2 on the PUE scale.

Ultimately, the best long-term objective could be to create your own datacentre. It is a compelling solution for a business with a large storage need that can accept any brand of device. It would also be a great building block for the apparently imminent era of cloud computing.
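As a sanity check on those vendor figures, PUE is simple arithmetic – total facility power divided by IT equipment power – so the quoted ratings translate directly into cooling-and-overhead wattage. A minimal sketch (the 500kW IT load is a made-up illustration, not vendor data):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power.

    1.0 is the theoretical ideal (every watt goes to computing);
    legacy data halls often run at 2.0 or worse.
    """
    return total_facility_kw / it_equipment_kw


def overhead_kw(it_equipment_kw: float, pue_rating: float) -> float:
    """Power spent on cooling, lighting, power conversion, etc."""
    return it_equipment_kw * (pue_rating - 1.0)


# Hypothetical 500kW IT load at the two PUE figures quoted above.
print(overhead_kw(500, 1.12))  # container rated at PUE 1.12
print(overhead_kw(500, 1.2))   # modular hall rated at PUE 1.2
```

On those assumptions the container wastes roughly 60kW on overhead against the modular hall's roughly 100kW – which is why a tenth of a point on the PUE scale matters at datacentre power levels.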