Data Center Definition and Solutions

Data Center topics covering definition, objectives, systems and solutions.

Page 2 of 3

DCiE (Data Center Infrastructure Efficiency): Created by members of the Green Grid, DCiE is another metric used to determine the energy efficiency of a data center, and it is the reciprocal of PUE. It is expressed as a percentage and is calculated by dividing IT equipment power by total facility power. Efficiency improves as the DCiE approaches 100%. A data center's DCiE typically ranges from about 33% (bad) to 77% (good), with an average DCiE of 40% (not so good).
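The arithmetic behind these two metrics is simple enough to sketch. Below is a minimal Python example (the 1,000 kW / 400 kW readings are illustrative, not from any real facility) showing that DCiE is just the reciprocal of PUE expressed as a percentage:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """DCiE: IT equipment power / total facility power, as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Illustrative example: a 1,000 kW facility carrying a 400 kW IT load.
print(pue(1000, 400))   # 2.5
print(dcie(1000, 400))  # 40.0 -- the "average" DCiE cited above
```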

LEED Certified: Developed by the U.S. Green Building Council (USGBC), LEED is an internationally recognized green building certification system. It provides third-party verification that a building or community was designed and built using strategies aimed at improving performance across the metrics that matter most: energy savings, water efficiency, CO2 emission reduction, indoor environmental quality, and stewardship of resources and sensitivity to their environmental impact. For more information on LEED, visit the USGBC website.

The Green Grid: A not-for-profit global consortium of companies, government agencies and educational institutions dedicated to advancing energy efficiency in data centers and business computing ecosystems. The Green Grid does not endorse vendor-specific products or solutions; instead, it seeks to provide industry-wide recommendations on best practices, metrics and technologies that will improve overall data center energy efficiency. For more on the Green Grid, visit the consortium's website.

Telecommunications Industry Association (TIA): TIA is the leading trade association representing the global information and communications technology (ICT) industries. It helps develop standards, gives ICT a voice in government, provides market intelligence and certification, and promotes business opportunities and worldwide environmental regulatory compliance. With support from its 600 members, TIA enhances the business environment for companies involved in telecommunications, broadband, mobile wireless, information technology, networks, cable, satellite, unified communications, emergency communications and the greening of technology. TIA is accredited by ANSI.

TIA-942: Published in 2005, the Telecommunications Infrastructure Standard for Data Centers was the first standard to specifically address data center infrastructure, and it was intended for use by data center designers early in the building development process. TIA-942 covers:

• Site space and layout

• Cabling infrastructure

• Tiered reliability

• Environmental considerations

Tiered Reliability — The TIA-942 standard for tiered reliability has been adopted by ANSI based on its usefulness in evaluating the general redundancy and availability of a data center design.

Tier 1 — Basic, no redundant components (N): 99.671% availability

• Susceptible to disruptions from planned and unplanned activity

• Single path for power and cooling

• Must be shut down completely to perform preventive maintenance

• Annual downtime of 28.8 hours

Tier 2 — Redundant Components (limited N+1): 99.741% availability

• Less susceptible to disruptions from planned and unplanned activity

• Single path for power and cooling includes redundant components (N+1)

• Includes raised floor, UPS and generator

• Annual downtime of 22.0 hours

Tier 3 — Concurrently Maintainable (N+1): 99.982% availability

• Enables planned activity (such as scheduled preventative maintenance) without disrupting computer hardware operation (unplanned events can still cause disruption)

• Multiple power and cooling paths (one active path), redundant components (N+1)

• Annual downtime of 1.6 hours

Tier 4 — Fault Tolerant (2 (N+1)): 99.995% availability

• Planned activity will not disrupt critical operations and can sustain at least one worst-case unplanned event with no critical load impact

• Multiple active power and cooling paths

• Annual downtime of 0.4 hours

Due to the doubling of infrastructure (and space) over Tier 3 facilities, a Tier 4 facility will cost significantly more to build and operate. Consequently, many organizations prefer to operate at the more economical Tier 3 level as it strikes a reasonable balance between CAPEX, OPEX and availability.
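The downtime figures for the four tiers follow directly from the availability percentages, using an 8,760-hour year. A quick sketch (note that the published tier figures are rounded, so the raw arithmetic can differ slightly — Tier 2, for example, works out to about 22.7 hours):

```python
HOURS_PER_YEAR = 8760  # 365 days x 24 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Convert an availability percentage into expected annual downtime."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

for tier, availability in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    print(f"Tier {tier}: {annual_downtime_hours(availability):.1f} hours/year")
```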

Uptime Institute: This is a for-profit organization formed to promote consistency in the data center industry. The Uptime Institute provides education, publications, consulting and research, and stages conferences for the enterprise data center industry. It maintains its own four-tier classification system, which informed the TIA-942 tier ratings, and offers formal data center tier certification. However, it is important to remember that a data center does not need to be certified by the Uptime Institute in order to be compliant with TIA-942.

Is the federal government involved in data centers?

Since data centers are among the fastest-growing consumers of electricity on the power grid, they have attracted the attention of the federal government and global regulatory agencies.

Cap and Trade: Sometimes called emissions trading, this is an administrative approach to controlling pollution by providing economic incentives for reducing polluting emissions. In concept, the government sets a limit (a "cap") on the amount of pollutants an enterprise can release into the environment. Companies that need to increase their emissions must buy (or trade) credits from those who pollute less. The entire system is designed to impose higher costs (essentially, taxes) on companies that don't use clean energy sources. The Obama administration has proposed cap-and-trade legislation, which is expected to affect U.S. energy prices and data center economics in the near future.

DOE (Department of Energy): The U.S. Department of Energy's overarching mission is to advance the national, economic, and energy security of the United States. The EPA and the DOE have initiated a joint national data center energy efficiency information program. The program engages numerous industry stakeholders who are developing and deploying a variety of tools and informational resources to assist data center operators in their efforts to reduce energy consumption in their facilities.

EPA (Environmental Protection Agency): The EPA is responsible for establishing and enforcing environmental standards in order to safeguard the environment and thereby improve the general state of America's health. In May 2009 the EPA released Version 1 of the ENERGY STAR® Computer Server specification, detailing the energy efficiency standards required by the agency. Servers that meet the specification are eligible to carry the ENERGY STAR label.

PL 109-431: Passed in December 2006, this law instructs the EPA to report to Congress on the status of IT data center energy consumption, along with recommendations to promote the use of energy-efficient computer servers in the U.S. It resulted in a "Report to Congress on Server and Data Center Energy Efficiency," delivered in August 2007 by the EPA ENERGY STAR Program. The report assesses trends in the energy use and energy costs of data centers and servers in the U.S. and outlines existing and emerging opportunities for improved energy efficiency. It provides particular information on the costs of data centers and servers to the federal government and opportunities for reducing those costs through improved efficiency. It also makes recommendations for pursuing these energy-efficiency opportunities broadly across the country through information and incentive-based programs.

What should I consider when moving my data center?

When a facility can no longer be optimized to provide sufficient power and cooling — or it can't be modified to meet evolving space and reliability requirements — then you're going to have to move. Successful data center relocation requires careful end-to-end planning.

Site selection: A site suitability analysis should be conducted before leasing or building a new data center. There are many factors to consider when choosing a site. For example, the data center should be located away from areas prone to natural disasters such as floods, earthquakes and hurricanes. As part of risk mitigation, locations near major highways and aircraft flight corridors should be avoided. The site should be on high ground, and it should be physically secure. It should have multiple, fully diverse fiber connections to network service providers, and redundant, ample power for long-term needs. The list can go on and on.

Move execution: Substantial planning is required at both the old and the new facility before the actual data center relocation can begin. Rack planning, application dependency mapping, service provisioning, asset verification, transition plans, test plans and vendor coordination are just some of the factors that go into data center transition planning.

If you are moving several hundred servers, the relocation may be spread over many days. In that case, you will need to define logical move bundles so that interdependent applications and services move together, allowing you to stay in operation until the day the move is completed.
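One way to derive such bundles is to treat the application dependency map as a graph and move each connected component together. The sketch below assumes a simple list of pairwise dependencies; the application names are hypothetical:

```python
from collections import defaultdict

def move_bundles(dependencies):
    """Group applications into connected components of the dependency graph,
    so that interdependent apps land in the same move bundle."""
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)
    seen, bundles = set(), []
    for app in graph:
        if app in seen:
            continue
        # Depth-first walk collects everything reachable from this app.
        stack, bundle = [app], set()
        while stack:
            node = stack.pop()
            if node in bundle:
                continue
            bundle.add(node)
            stack.extend(graph[node] - bundle)
        seen |= bundle
        bundles.append(bundle)
    return bundles

# CRM and billing share a database, so all three move together;
# the wiki and its search index form a second, independent bundle.
deps = [("crm", "db1"), ("billing", "db1"), ("wiki", "search")]
print(move_bundles(deps))
```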

On move day, everything must go like clockwork to avoid downtime. Real-time visibility into move execution, through a war room or a web-based dashboard, will allow you to monitor the progress of the move and be alerted to potential delays that require immediate action or remediation.

What data center technologies should I be aware of?

Alternative Energy: Solar, wind and hydro show great potential for generating electricity in an eco-friendly manner, and nuclear and hydro show promise for grid-based green power. However, the biggest challenge in using alternative energy for data center applications is the need for a constant supply at high service levels. If you use alternative energy but still need to buy from the local power company during peak loads, many of the economic benefits you're reaping from the alternative source will quickly disappear. As new storage mechanisms are developed that capture excess capacity so it can be accessed when needed, alternative energy sources will play a much greater role in the data center than they do today. Water- and air-based storage systems show great potential as eco-friendly energy storage options.

Ambient Return: This is a system whereby air returns to the air conditioner unit naturally and unguided. This method is inefficient in some applications because it is prone to mixing hot and cold air, and to stagnation caused by static pressure, among other problems.

Chiller-based cooling: A type of cooling in which chilled water (rather than glycol or refrigerant) is used to dissipate heat in the CRAC unit. The heat exchanger in a chiller-based system can be air- or water-cooled. Chiller-based systems provide CRAC units with greater cooling capacity than DX-based systems. Besides removing the DX limitation of a 24°F spread between output and input, a chiller system can adjust dynamically based on load.
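To see why the temperature spread matters, the sensible cooling capacity of an air stream can be estimated with the standard HVAC rule of thumb Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F). The airflow figure below is illustrative:

```python
def sensible_cooling_btu_hr(cfm: float, delta_t_f: float) -> float:
    """Sensible cooling capacity of standard air: Q = 1.08 * CFM * dT(F)."""
    return 1.08 * cfm * delta_t_f

# A CRAC moving 10,000 CFM at the DX limit of a 24 F spread:
print(sensible_cooling_btu_hr(10_000, 24))  # ~259,200 BTU/hr (about 21.6 tons)
```

Capacity scales linearly with the spread, which is why removing the 24°F DX ceiling lets a chiller-based CRAC deliver more cooling from the same airflow.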

Chimney effect: Just as your home chimney leverages air pressure differences to drive exhaust, the same principle can be used in the data center. This has led to a common design in which cool air is fed below a raised floor and pulled into the data center as hot air escapes above through the chimney. This design creates a very efficient circulation of cool air while minimizing air mixing.

Cloud computing: This is a style of computing that is dynamically scalable through virtualized resources provided as a service over the Internet. In this model the customer need not be concerned with the technical details of the remote resources. (That's why it is often depicted as a cloud in system diagrams.) There are many different types of cloud computing options with variations in security, backup, control, compliance and quality of service that must be thoroughly vetted to assure their use does not put the organization at risk.
