by Bernard Golden

Cloud CIO: Are You Making Your Data Centers Cloud-Friendly?

Opinion
Jul 06, 2011 | 6 mins
Cloud Computing | Data Center | Green IT

Data center construction is growing ever more complex and expensive as data center facilities expand to meet enterprises' seemingly limitless computing needs.

Last week I keynoted a conference that took me far afield from the cloud computing events I usually speak at. The DatacenterDynamics event focuses on the physical side of data centers: the facilities, the computing infrastructure, and the operations groups that run them. I spoke on “IT as the New Cloud Service Provider” and found the entire conference enlightening.

At most cloud computing conferences, the nearest one gets to hardware is server vendors; this conference featured nary a server company, but plenty of UPS firms, cable manufacturers, air conditioning installers, and even construction companies.

And let me tell you, attendees had very specific topics on their minds. For example, the session before mine, “ASHRAE Is Expanding the Temperature and Humidity Limits—Again,” was absolutely mobbed (more on temperatures later). I have never heard data center temperature mentioned at a cloud computing conference. And unlike at most conferences, cloud computing was not front and center at this event. Only two sessions besides mine had “cloud” in their titles, one by Zynga CTO Allan Leinwand and the other by an Accenture representative discussing the firm’s survey of IT managers.

However, despite its lack of presence in sessions, cloud computing hovered like a spectre over all the proceedings, with much of every session’s content shaped by the changing nature of computing—and particularly by its exploding scale.

Leinwand’s session was fascinating, especially where he outlined how his company, which developed FarmVille and other online games, has built an internal cloud to supplement its use of Amazon Web Services. In essence, Zynga launches games in a public cloud environment, and those that really take hold are transferred back to the company’s private cloud. As you might imagine, the growth of Zynga’s games can be explosive, so the ability to provision infrastructure rapidly is crucial. Leinwand described how Zynga can take a thousand servers, still in their packaging on the loading dock, and add them to its cloud computing resource pool in less than 24 hours.
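A minimal sketch of the placement policy that workflow implies; the names and thresholds below are invented for illustration, not Zynga’s actual rules:

```python
# Hypothetical sketch of the hybrid placement policy described above: launch
# new games on the public cloud, repatriate sustained hits to the private
# cloud. All names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class GameMetrics:
    name: str
    daily_active_users: int
    weeks_live: int

def placement(game: GameMetrics) -> str:
    """Decide where a game's infrastructure should run."""
    # New titles stay on the public cloud, where capacity is elastic
    # and a flop costs little.
    if game.weeks_live < 8:
        return "public-cloud"
    # Established hits move in-house, where steady-state capacity is
    # cheaper to own than to rent.
    if game.daily_active_users > 1_000_000:
        return "private-cloud"
    return "public-cloud"

print(placement(GameMetrics("new-title", 50_000, 2)))         # public-cloud
print(placement(GameMetrics("breakout-hit", 5_000_000, 20)))  # private-cloud
```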

In his discussion of cost assignment, Sean Peterson from Accenture made a very cogent recommendation I had never considered. His general observation is that cost assignment—and particularly chargeback—is a vital part of corporate cloud computing environments, ensuring appropriate use of computing resources; absent chargeback, there is no feedback mechanism to guide user behavior.

His recommendation was that chargeback be preceded by a period of “showback,” in which resource use is reported to user organizations without any attempt at cost assignment. User organizations usually have no idea how much compute their applications consume, and an immediate switch to chargeback risks sticker shock when actual costs are passed on. Starting with showback offers the opportunity to evaluate use before any invoice is generated—and may avoid some very heated discussions. We always recommend chargeback mechanisms to our customers, as we feel price signaling is the most effective resource rationing mechanism available to organizations, but going forward we will recommend a period of showback to ease the transition.
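A toy example of what showback versus chargeback looks like in practice; the departments, usage records, and rate are all made up:

```python
# Toy showback report: meter resource use per department and report it
# without a price, then attach the price signal later for chargeback.
# Departments, usage records, and the rate are all made up.

from collections import defaultdict

RATE_PER_CPU_HOUR = 0.12  # hypothetical internal rate, used only in phase two

usage_records = [  # (department, cpu_hours), as a metering system might emit
    ("marketing", 1_200), ("marketing", 800),
    ("finance", 300), ("engineering", 9_500),
]

totals = defaultdict(float)
for dept, cpu_hours in usage_records:
    totals[dept] += cpu_hours

print("-- Showback: consumption only, no invoice --")
for dept, hours in sorted(totals.items()):
    print(f"{dept:12s} {hours:10,.0f} CPU-hours")

print("-- Chargeback: the same data with costs attached --")
for dept, hours in sorted(totals.items()):
    print(f"{dept:12s} ${hours * RATE_PER_CPU_HOUR:10,.2f}")
```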

Energy efficiency was an explicit or implicit part of almost every presentation. In mine, I described how leading Internet and cloud companies like Facebook, Amazon, Google, Microsoft and Yahoo are pushing the envelope on data center efficiency. Another presentation discussed how one could build an efficient data center without having to build a chicken coop. The chicken coop is a reference to a Yahoo data center near Buffalo that resembles one: long, narrow buildings open to outside air to allow natural cooling. Yahoo achieves a power usage effectiveness (PUE) of 1.08 in this data center, compared to a PUE of around 2.0 for most corporate infrastructures. Yahoo also uses one central network operations center (NOC) to manage five of these chicken coop buildings. In other words, it improves productivity and efficiency by multiplexing labor across a large number of managed infrastructure environments.
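To put those numbers in perspective: PUE is total facility power divided by the power delivered to the IT equipment itself, so everything above 1.0 is overhead. A quick sketch, using a made-up 1 MW IT load:

```python
# PUE = total facility power / power delivered to IT equipment, so a
# PUE of 1.08 means just 8 percent overhead. Comparison for a
# hypothetical 1 MW IT load (the load figure is made up).

it_load_kw = 1_000

for pue in (1.08, 2.0):
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw
    print(f"PUE {pue:4.2f}: facility draws {total_kw:,.0f} kW, "
          f"{overhead_kw:,.0f} kW of it spent on cooling and power distribution")

# PUE 1.08: facility draws 1,080 kW, 80 kW of it spent on cooling and power distribution
# PUE 2.00: facility draws 2,000 kW, 1,000 kW of it spent on cooling and power distribution
```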

I happened to sit next to a man at lunch who works for Mortenson, a large construction company that, among its other projects, builds data centers. He shared some of his firm’s practices, which gave me real insight into how sophisticated this type of construction is becoming. His firm uses virtual design and construction software to create a 3D model of a data center, allowing design conflicts (e.g., a duct that crosses through a cabling run) to be identified before construction begins.
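At its core, that kind of clash detection is a geometry problem. A toy version, with a duct and a cable tray whose coordinates are invented for illustration:

```python
# Toy version of the clash detection such virtual design software performs:
# model each element as an axis-aligned bounding box and flag overlaps.
# The duct and cable tray coordinates below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Box:
    name: str
    min_xyz: tuple  # (x, y, z) lower corner, in meters
    max_xyz: tuple  # (x, y, z) upper corner, in meters

def clashes(a: Box, b: Box) -> bool:
    """Two boxes clash if their extents overlap on all three axes."""
    return all(
        a.min_xyz[i] < b.max_xyz[i] and b.min_xyz[i] < a.max_xyz[i]
        for i in range(3)
    )

duct = Box("supply duct", (0, 0, 3.0), (10, 1, 3.6))
tray = Box("cable tray", (4, 0, 3.4), (5, 8, 3.5))

if clashes(duct, tray):
    print(f"Clash: {duct.name} intersects {tray.name}; reroute before construction")
```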

Clearly, the days of putting some racks in unused office space are long gone. Today data centers are sophisticated environments designed to very high standards, despite being labeled “chicken coops.” This poses a problem for many companies: the capital investment required to build something to this level of sophistication and efficiency goes well beyond what most can afford. The extent to which data centers are becoming a rich organization’s game was reinforced by the Mortenson rep’s observation that his firm is currently only building large data centers; the 5,000 to 25,000 square foot project market has completely dried up, he told me.

The final panel of the day outlined the lengths to which Internet/cloud companies will go in their stretch for low data center costs. Current or former data center gurus from Yahoo, Facebook and Google described their experiences running their companies’ compute environments. One panelist described his firm’s iron insistence on cycling out servers on a three-year rotation. While the practice requires additional capital investment, he noted that three years represents two generations of Moore’s Law, which means at the end of three years your servers offer only 25 percent of the compute efficiency of new machines, meaning a 75 percent waste of power.
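The arithmetic behind that claim is straightforward, assuming the classic 18-month doubling cadence (my numbers, not the panelist’s exact figures):

```python
# The panelist's arithmetic: at one performance doubling per ~18 months,
# a three-year-old server has been lapped twice by current hardware.

months_per_generation = 18  # classic Moore's Law cadence, assumed here
server_age_months = 36

generations = server_age_months / months_per_generation  # 2.0
relative_efficiency = 1 / (2 ** generations)             # 0.25

print(f"{generations:.0f} generations behind: old servers deliver "
      f"{relative_efficiency:.0%} of a new machine's work per watt")
# 2 generations behind: old servers deliver 25% of a new machine's work per watt
```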

Another panelist discussed the use of higher temperatures in data center environments. While ASHRAE, a technical society for the HVAC industry, has recommended considering a move of data center temperatures up a few degrees, he recommended a far bigger bump: up to 104 degrees Fahrenheit (40 degrees Celsius)! When queried about possible OSHA issues, he said that once a set of servers is installed, very little human contact should be necessary until they’re decommissioned, and any contact could be limited to ten minutes or less. In fact, he said, if an organization has someone interacting with hardware very often, it’s a sign they’re doing something wrong.

What this conference brought home to me is that IT is becoming a field of specialists. It’s no longer enough to be middling-to-good at every aspect of the field, from data center operations to infrastructure support on through to application development and delivery. With limited budgets, and with commercial providers now offering specialized services built on large investments and accumulated best practices, IT organizations have to figure out where they can deliver distinctive differentiation. The right strategy is to identify the specific areas where IT can add value and focus there, while leveraging outside resources for everything else.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.

Follow Bernard Golden on Twitter @bernardgolden. Follow everything from CIO.com on Twitter @CIOonline