EPA Urges Efficiency, Many Data Centers Still Far From It

The White House plan to cut carbon dioxide pollution by 30% seeks to meet its goals, in part, through efficiency improvements. This could put further pressure on data centers, many of which are powering servers that are doing very little work or none at all, to improve efficiency.

For instance, a recent Uptime Institute survey asked enterprise data professionals: "What percentage of your servers are likely comatose?" About 60% of respondents said the number of comatose servers was under 5%. But nearly 25% said at least 10% of their servers fell into that category.

The problem may be bigger than the Uptime survey indicates.

"Most data center operators can't even tell you how many servers they have, never mind their utilization, so caution in interpreting those numbers is indicated," said Jonathan Koomey, a research fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University. "The percentages for comatose servers are likely much bigger."

The EPA plan to reduce carbon emissions is aimed at power plant generation, but it also includes overall energy efficiency as a building block for reducing carbon emissions.

The coal industry may be unhappy with the EPA's plan, but one company, Johnson Controls, which makes environmental systems for buildings and data centers, issued a statement Monday saying it supports "the inclusion of energy efficiency and distributed energy systems" in the federal rule. The company pledged to reduce carbon and create jobs.

Many high-profile data centers, run by the likes of Apple, eBay and Google, already incorporate alternative energy into their power mix, and all tout their overall efficiency. But the Uptime survey, drawn from more than 1,000 enterprise data center operators and executives, suggests that a large number of data centers are wasting energy by running substantial numbers of servers that are doing nothing.

Managing underutilized servers and improving efficiency is not a simple task, as one large financial services company, Barclays, has discovered.

Barclays decommissioned about 9,100 physical servers last year, representing about 12% to 17% of its total footprint. Those servers, in total, consumed 2.5 megawatts (MW), and Barclays has since become a model for the industry, recently cited by Uptime as the leader in the area.
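To put the 2.5 MW figure in perspective, some back-of-the-envelope arithmetic is possible. The assumptions below — continuous draw and a flat $0.10/kWh rate — are illustrative and not from the article:

```python
# Rough energy math for the Barclays figures above.
# Assumptions (not from the article): the 2.5 MW is a continuous draw,
# and a hypothetical flat commercial rate of $0.10/kWh.

DECOMMISSIONED_LOAD_MW = 2.5
HOURS_PER_YEAR = 24 * 365          # 8,760 hours
RATE_USD_PER_KWH = 0.10            # illustrative rate only

annual_mwh = DECOMMISSIONED_LOAD_MW * HOURS_PER_YEAR   # 21,900 MWh
annual_cost_usd = annual_mwh * 1_000 * RATE_USD_PER_KWH

print(f"Energy avoided per year: {annual_mwh:,.0f} MWh")
print(f"Illustrative cost avoided: ${annual_cost_usd:,.0f}")
```

Under those assumptions, retiring 2.5 MW of load avoids roughly 21,900 MWh a year — on the order of $2 million in annual electricity cost, before counting cooling overhead.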

Barclays' results, which are being repeated year after year, are part of a multi-year effort to develop a decommissioning process that considers the human element alongside the technology.

"What we learned was the biggest impediment to success was people's reluctance to click the approve button" on a change ticket, said Paul Nally, a director at Barclays, in an interview.

The path for Barclays began around 2009, after it acquired the North American operations of Lehman Brothers. There were duplicated systems that needed to be migrated.

Initially, the decommissioning process was chaotic. Shutting down a database server might involve up to a dozen tickets, but there was no order to it. The IT operation turned to an orchestration software system to ensure orderly steps were adhered to: database administrators would go first, followed by storage, then the operating system, and finally the people who would do the physical work.
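The core idea of that orchestration — each team signs off in a fixed order before the next is engaged — can be sketched in a few lines. The team names and the workflow class below are illustrative, not Barclays' actual tooling:

```python
# A minimal sketch of a role-ordered decommissioning workflow, as described
# in the article. Phase names are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class DecommissionWorkflow:
    # Order matters: each team must approve before the next is engaged.
    phases: list = field(default_factory=lambda: [
        "database_admins",      # release the databases first
        "storage",              # reclaim storage volumes
        "operating_system",     # shut down and deregister the OS
        "physical_removal",     # finally, pull the hardware
    ])
    completed: list = field(default_factory=list)

    def approve(self, team: str) -> None:
        """Record a team's sign-off; reject out-of-order approvals."""
        expected = self.phases[len(self.completed)]
        if team != expected:
            raise RuntimeError(f"Out of order: expected {expected!r}, got {team!r}")
        self.completed.append(team)


wf = DecommissionWorkflow()
for team in ["database_admins", "storage", "operating_system", "physical_removal"]:
    wf.approve(team)
print("Decommission complete:", wf.completed == wf.phases)
```

The point of encoding the order in software, rather than convention, is that a storage ticket simply cannot be approved while the database team's step is still open.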

To overcome concerns about decommissioning, Nally said the decommissioning process was made as "safe as possible" and "reversible." With those controls in place, "people's reluctance to hit 'approve' sort of abated," he said.

To help manage the process safely, the firm's IT department spreads the shutdown process over three weekends.

The first is an inventory check to make sure the server is in the location it's believed to be; the following weekend, the server is shut down via an automated workflow; and on the third weekend, the server is removed from the rack.

The three-weekend process gives Barclays' IT teams the option of a quick recovery if there is a mistake in the decommissioning. They also created an internal Web page to simplify the process and build metrics.
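The staged cadence above — verify, power off, remove — can be modeled as a small state machine whose recovery window is what makes a mistake cheap to undo. The stage names and class below are assumptions for illustration only:

```python
# A hedged sketch of the three-weekend cadence described above:
# weekend 1 verifies inventory, weekend 2 powers off, weekend 3 removes.
# A mistake caught before physical removal can be rolled back.

STAGES = ["inventory_verified", "powered_off", "removed_from_rack"]


class StagedDecommission:
    def __init__(self, server_id: str):
        self.server_id = server_id
        self.stage = -1          # no stage completed yet

    def advance(self) -> str:
        """Run the next weekend's step and return its name."""
        if self.stage >= len(STAGES) - 1:
            raise RuntimeError("Server already fully decommissioned")
        self.stage += 1
        return STAGES[self.stage]

    def roll_back(self) -> None:
        """Reverse the last step (e.g. power the server back on)."""
        if self.stage < 0:
            raise RuntimeError("Nothing to roll back")
        if self.stage == len(STAGES) - 1:
            raise RuntimeError("Hardware already removed; cannot roll back")
        self.stage -= 1


job = StagedDecommission("srv-0042")   # hypothetical server ID
print(job.advance())   # weekend 1: inventory_verified
print(job.advance())   # weekend 2: powered_off
job.roll_back()        # a mistake was caught; the server comes back
print(job.advance())   # retry: powered_off
print(job.advance())   # weekend 3: removed_from_rack
```

Note that rollback is only offered before the hardware leaves the rack — which mirrors why the physical removal is deliberately the last, separate weekend.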

Early in the process, there were some errors in decommissioning servers, but results have improved steadily through a repeatable and consistent plan.

Barclays also has two staff people whose full-time job is to manage the decommissioning. "Having some people dedicated to this also lends itself toward a consistent program," said Nally.

"A huge part of this challenge is just, quite honestly, dealing with human nature," said Nally, because turning off something is a risk. "So you have to have some skin in the game to accept that risk."

To get that buy-in, Barclays runs a large awareness program around cost savings and efficiency, he said.

Decommissioning servers is "cleaning up after yourself," and in doing so "you remove a lot of noise from the environment," said Nally. The end result is a more nimble and efficient IT operation, he said.

Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed. His e-mail address is pthibodeau@computerworld.com.

This story, "EPA Urges Efficiency, Many Data Centers Still Far From It" was originally published by Computerworld.
