Despite Green Diet, Data Centers Still Gobble Power

Despite virtualization, power-saving hardware, and other green-friendly IT tech, data centers are consuming more power than ever

By Joab Jackson
Fri, May 21, 2010

IDG News Service — Like the unfortunate person who continually diets but only seems to gain more weight, power-hungry data centers -- despite adopting virtualization and power management techniques -- only seem to be consuming more energy than ever, to judge from some of the talks at the Uptime Symposium 2010, held this week in New York.

"There is a freight train coming that most people do not see, and it is that you are going to run out of power and you will not be able to keep your data center cool enough," Rob Bernard, the chief environmental strategist for Microsoft (MSFT), told attendees at the conference.

Power usage is not a new issue, of course. In 2006, the U.S. Department of Energy predicted that data center energy consumption would double by 2011 to more than 120 billion kilowatt-hours (kWh). This prediction seems to be playing out: An ongoing survey from the Uptime Institute found that, from 2005 to 2008, the electricity usage of its members' data centers grew at an average of about 11 percent a year.
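
The two figures are roughly consistent: at 11 percent a year, consumption nearly doubles over the six-year forecast window. A quick back-of-the-envelope check (the roughly 60 billion kWh starting point is inferred from the DOE's "double to more than 120 billion kWh" prediction, not stated directly):

```python
# Does 11% annual growth match the DOE's "double by 2011" forecast?
# The ~60 billion kWh 2006 baseline is inferred from the article's
# "double ... to more than 120 billion kWh" figure.
baseline_kwh = 60e9   # approx. 2006 U.S. data center consumption (inferred)
growth_rate = 0.11    # Uptime Institute's observed annual growth, 2005-2008

consumption = baseline_kwh
for year in range(2006, 2012):  # project forward to 2011
    consumption *= 1 + growth_rate

print(f"Projected 2011 consumption: {consumption / 1e9:.0f} billion kWh")
# ~112 billion kWh -- in the neighborhood of the DOE's predicted doubling
```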

But despite all the talk about green computing, data centers don't seem to be getting more power-efficient. In fact, they seem to be getting worse.

"We haven't fundamentally changed the way we do things. We've done a lot of great stuff at the infrastructure level, but we haven't changed our behavior," Bernard said.

Speakers at the conference pointed to a number of different power-sucking culprits, including energy-indifferent application programming, siloed organizational structures, and, ironically, better hardware.

One part of the problem is the way applications are developed. "Applications are architected in the old paradigm," Bernard said. Developers routinely build programs that allocate too much memory and hold on to the processor for too long. A single program that isn't written to go into sleep mode when not in use will drive up power consumption for the entire server.

"The application isn't energy-aware, it doesn't matter that every other application on the client is," he said. That one application will prevent the computer from going into a power-saving sleep mode.

Processor improvement is another culprit, at least when the data center manager doesn't handle it correctly. Thanks to the unrelenting pace of Moore's Law, under which the number of transistors on new chips doubles every two years or so, each new generation of processors can roughly double the performance of its predecessor.

In terms of power efficiency, this is problematic, even if the new chips don't consume more power than the old ones, Bernard said. Swapping out old processors for new ones may get the application to run faster, but the application takes up correspondingly less of the more powerful CPU's resources. Meanwhile, the unused cores idle, still consuming a large amount of power. This means more capacity is wasted, unless more applications are folded onto fewer servers.
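
The arithmetic behind that point: servers draw a sizable share of their peak power even when nearly idle, so a faster CPU that drops utilization saves little unless workloads are consolidated. A rough model (the 200 W idle draw, 400 W peak, and linear scaling are illustrative assumptions, not figures from the conference):

```python
def server_power(utilization: float,
                 idle_watts: float = 200.0,
                 peak_watts: float = 400.0) -> float:
    # Rough linear model: idle_watts at 0% utilization, peak_watts at
    # 100%. Real power curves vary; these numbers are illustrative.
    return idle_watts + (peak_watts - idle_watts) * utilization

# An app that kept an old server 60% busy may need only 30% of a CPU
# twice as fast -- but the server's idle draw doesn't shrink with it.
old = server_power(0.60)                  # 320 W
new_unconsolidated = server_power(0.30)   # 260 W, mostly idle draw
# Folding two such apps onto one new server reclaims the wasted capacity:
new_consolidated = server_power(0.60) / 2  # 160 W per app

print(f"old server, per app:         {old:.0f} W")
print(f"new, unconsolidated:         {new_unconsolidated:.0f} W")
print(f"new, consolidated (per app): {new_consolidated:.0f} W")
```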
