Data Center Definition and Solutions

Data Center topics covering definition, objectives, systems and solutions.

What is a data center?

Also known as the server farm or the computer room, the data center is where the majority of an enterprise's servers and storage are located, operated and managed. There are four primary components to a data center:

White space: This typically refers to the usable raised-floor environment, measured in square feet (anywhere from a few hundred to a hundred thousand square feet). For data centers that don't use a raised floor, the term "white space" may still be used to refer to the usable square footage.

Support infrastructure: This refers to the additional space and equipment required to support data center operations, including power transformers, uninterruptible power supplies (UPS), generators, computer room air conditioners (CRACs), remote transmission units (RTUs), chillers, air distribution systems, etc. In a high-density, Tier 3 class data center (i.e., a concurrently maintainable facility), this support infrastructure can consume 4-6 times more space than the white space and must be accounted for in data center planning.

IT equipment: This includes the racks, cabling, servers, storage, management systems and network gear required to deliver computing services to the organization.

Operations: The operations staff ensures that the systems (both IT and infrastructure) are properly operated, maintained, upgraded and repaired when necessary. In most companies, there is a division of responsibility between the Technical Operations group in IT and the staff responsible for the facilities support systems.

How are data centers managed?

Operating a data center at peak efficiency and reliability requires the combined efforts of facilities and IT.

IT systems: Servers, storage and network devices must be properly maintained and upgraded. This includes things like operating systems, security patches, applications and system resources (memory, storage and CPU).

Facilities infrastructure: All the supporting systems in a data center face heavy loads and must be properly maintained to continue operating satisfactorily. These systems include cooling, humidification, air handling, power distribution, backup power generation and much more.

Monitoring: When a device, connection or application fails, it can take down mission-critical operations. Sometimes one system's failure cascades to applications on other systems that rely on data or services from the failed unit. For example, a complex process such as eCommerce checkout involves multiple systems, including inventory control, credit card processing and accounting; a failure in any one of them compromises all the others. Additionally, modern applications typically have a high degree of device and connection interdependence. Ensuring maximum uptime therefore requires 24/7 monitoring of the applications, systems and key connections involved in the enterprise's various workflows.
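As a simple illustration of this kind of dependency monitoring, the Python sketch below polls each service a checkout workflow depends on and raises an alert if any of them fails its health check. The service names and URLs are hypothetical placeholders, not part of any particular monitoring product.

# Minimal availability check: poll each dependency that an eCommerce checkout
# relies on and report any failure, since one broken link compromises the rest.
# The endpoint URLs below are illustrative placeholders, not real services.
import urllib.request

DEPENDENCIES = {
    "inventory": "http://inventory.internal/health",
    "payments": "http://payments.internal/health",
    "accounting": "http://accounting.internal/health",
}

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the dependency answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    failures = [name for name, url in DEPENDENCIES.items() if not is_healthy(url)]
    if failures:
        print("ALERT: checkout at risk, failed dependencies:", ", ".join(failures))
    else:
        print("All checkout dependencies healthy.")

In practice a monitoring platform would run checks like this continuously and feed the results into alerting and escalation, but the principle is the same: watch every link in the chain, not just the front-end application.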

Building Management System: For larger data centers, the building management system (BMS) will allow for constant and centralized monitoring of the facility, including temperature, humidity, power and cooling.

The management of IT and data center facilities is often outsourced to third-party companies that specialize in the monitoring, maintenance and remediation of systems and facilities on a shared-services basis.

What is a green data center?

A green data center is one that can operate with maximum energy efficiency and minimum environmental impact. This includes the mechanical, lighting, electrical and IT equipment (servers, storage, network, etc.). Within corporations, the focus on green data centers is driven primarily by a desire to reduce the tremendous electricity costs associated with operating a data center. That is, going green is recognized as a way to reduce operating expense significantly for the IT infrastructure.

The interest in green data centers is also being driven by the federal government. In 2006, Congress passed Public Law 109-431, asking the EPA to "analyze the rapid growth and energy consumption of computer data centers by the Federal Government and private enterprise."

In response, the EPA developed a comprehensive report analyzing current trends in the use of energy and the energy costs of data centers and servers in the U.S. and outlined existing and emerging opportunities for improving energy efficiency. It also made recommendations for pursuing these energy-efficiency opportunities broadly across the country through the use of information and incentive-based programs.

According to the EPA report, the two largest consumers of electricity in the data center are:

• Support infrastructure — 50% of total

• General servers — 34% of total

Since then, significant strides have been made to improve the efficiency of servers. High-density blade servers and storage now offer much more compute capacity per watt of energy, server virtualization is allowing organizations to reduce the total number of servers they support, and the introduction of Energy Star servers has added still more options for both the public and private sectors to reduce the 34% of electricity being spent on general servers.

Of course, the greatest opportunity for further savings is in the support infrastructure of the data center facility itself. According to the EPA, most data centers consume 100% to 300% more power for their support systems than for their core IT operations. Through a combination of best practices and migration to fast-payback facility improvements (such as ultrasonic humidification and tuning of airflow), this overhead can be reduced to about 30% of the IT load.
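To make that overhead arithmetic concrete, the short Python sketch below works through the figures above for a hypothetical 100 kW IT load; the numbers are illustrative assumptions, not measurements from any specific facility.

# Rough facility-overhead arithmetic using the EPA figures above.
# All numbers are illustrative; 100 kW is an assumed IT load.
it_load_kw = 100.0

typical_total_kw  = it_load_kw * (1 + 1.0)   # 100% overhead -> 200 kW from the utility
worst_case_kw     = it_load_kw * (1 + 3.0)   # 300% overhead -> 400 kW
improved_total_kw = it_load_kw * (1 + 0.3)   # ~30% overhead after improvements -> 130 kW

print(f"Typical draw:        {typical_total_kw:.0f} kW")
print(f"Worst case:          {worst_case_kw:.0f} kW")
print(f"After improvements:  {improved_total_kw:.0f} kW")
print(f"Savings vs. typical: {typical_total_kw - improved_total_kw:.0f} kW")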

What are some top stakeholder concerns about data centers?

While the data center must provide the resources necessary for the end users and the enterprise's applications, the provisioning and operation of a data center is divided (sometimes uncomfortably) between IT, facilities and finance, each with its own unique perspective and responsibilities.

IT: It is the responsibility of the business's IT group to decide which systems and applications are required to support the business's operations. IT directly manages those aspects of the data center that relate to the IT systems, while relying on facilities to provide the data center's power, cooling, access and physical space.

Facilities: The facilities group is generally responsible for the physical space — for provisioning, operations and maintenance, along with other building assets owned by the company. The facilities group will generally have a good idea of overall data center efficiency and will have an understanding of and access to IT load information and total power consumption.

Finance: The finance group is responsible for aligning the near-term and long-term capital expenditures (CAPEX) needed to acquire or upgrade physical assets, and the operating expenses (OPEX) needed to run them, with overall corporate financial operations (balance sheet and cash flow).

Perhaps the biggest challenge confronting these three groups is that, by its very nature, a data center will rarely be operating at or even close to its optimal design point. With a typical life cycle of 10 years or longer, it is essential that the data center's design remain flexible enough to support increasing power densities and varying degrees of occupancy over that span. This built-in flexibility should apply to power, cooling, space and network connectivity. When a facility approaches its limits of power, cooling and space, the organization will be confronted with the need to optimize its existing facilities, expand them or establish new ones.

What options are available when I'm running out of power, space or cooling?

Optimize: The quickest way to address this problem and increase available power, space and cooling is to optimize an existing facility. The biggest gains come from reducing overall server power load (through virtualization) and from improving the efficiency of the facility itself. For example, up to 70% of the power required to cool and humidify the data center environment can be conserved with currently available technologies such as outside-air economizers, ultrasonic humidification, high-efficiency transformers and variable frequency drives (VFDs). Combining these techniques with new, higher-density IT systems will allow many facilities to increase IT capacity while simultaneously decreasing facility overhead.
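As a rough back-of-the-envelope estimate of how much capacity optimization can free up, the Python sketch below combines an assumed virtualization consolidation ratio with the cooling savings cited above; every figure in it is an assumption for illustration, not a benchmark from a real facility.

# Back-of-the-envelope optimization estimate (all ratios are assumptions):
# consolidate servers via virtualization and cut cooling/humidification power
# with the techniques listed above.
it_load_kw      = 500.0   # assumed current IT load
cooling_load_kw = 350.0   # assumed current cooling + humidification load

consolidation_ratio = 0.6  # assume virtualization retires 40% of server load
cooling_savings     = 0.7  # "up to 70%" of cooling power conserved (best case)

new_it_load_kw      = it_load_kw * consolidation_ratio
new_cooling_load_kw = cooling_load_kw * (1 - cooling_savings)

freed_kw = (it_load_kw + cooling_load_kw) - (new_it_load_kw + new_cooling_load_kw)
print(f"Power freed for growth: {freed_kw:.0f} kW")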

Move: If your existing data center can no longer be upgraded to support today's more efficient (but hotter running and more energy-thirsty) higher-density deployments, there may be nothing you can do except to move to a new space. This move will likely begin with a needs assessment/site selection process and will conclude with an eventual build-out of your existing facility or a move to a new building and site.

Outsource: Besides moving forward with your own new facility, there are two other options worth considering:

• Colocation: This means moving your data center into space in a shared facility managed by an appropriate service provider. As there is a broad range of business models for how these services can be provided (including business liability), it is important to make sure the specific agreement terms match your short- and long-term needs and (always) take into account the flexibility you require so that your data center can evolve over its lifespan.

• Cloud computing: The practice of leveraging shared computing and storage resources — and not just the physical infrastructure of a colocation provider — has been growing rapidly for certain niche-based applications. While cloud computing has significant quality-of-service, security and compliance concerns that to date have delayed full enterprise-wide deployment, it can offer compelling advantages in reducing startup costs, expenses and complexity.

What are some data center measurements and benchmarks and where can I find them?

PUE (Power Usage Effectiveness): Created by members of the Green Grid, PUE is a metric used to determine a data center's energy efficiency. A data center's PUE is calculated by dividing the total amount of power entering the data center by the power used to run the IT equipment within it. Expressed as a ratio, with efficiency improving as the value approaches 1, PUE typically ranges from about 1.3 (good) to 3.0 (bad), with an average of around 2.5 (not so good).

DCiE (Data Center Infrastructure Efficiency): Created by members of the Green Grid, DCiE is another metric used to determine the energy efficiency of a data center, and it is the reciprocal of PUE. It is expressed as a percentage and is calculated by dividing IT equipment power by total facility power. Efficiency improves as the DCiE approaches 100%. A data center's DCiE typically ranges from about 33% (bad) to 77% (good), with an average DCiE of 40% (not so good).
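Both metrics come straight from two metered figures, total facility power and IT equipment power, so they are simple to compute once the facility is instrumented. The Python sketch below shows the calculation using hypothetical meter readings chosen purely for illustration.

# PUE and DCiE from metered power figures (example numbers only).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE, as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

total_kw, it_kw = 1000.0, 400.0   # hypothetical meter readings
print(f"PUE  = {pue(total_kw, it_kw):.2f}")    # 2.50 -- the "not so good" average
print(f"DCiE = {dcie(total_kw, it_kw):.0f}%")  # 40%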

LEED Certified: Developed by the U.S. Green Building Council (USGBC), LEED is an internationally recognized green building certification system. It provides third-party verification that a building or community was designed and built using strategies aimed at improving performance across all the metrics that matter most: energy savings, water efficiency, CO2 emission reduction, the quality of the indoor environment, the stewardship of resources and the sensitivity to their impact on the general environment. For more information on LEED, go to www.usgbc.org.
