by Joe Weinman

While digital growth explodes, energy use remains flat

Jun 29, 2016
Cloud Computing | Data Center | Energy Industry

Even as digitalization pervades our society, technologies for cloud, virtualization and energy efficiency are keeping energy use essentially constant, according to a new DOE report.

An authoritative analysis titled the “United States Data Center Energy Usage Report” has just been released by the U.S. Department of Energy, and it contains shocking conclusions regarding electricity use. After energy use grew with wild abandon through the first decade of the new millennium, approaches such as cloud and virtualization have driven so much efficiency that U.S. data center electricity use has barely budged, even as virtually every dimension of our world has become more, well, virtual. Specifically, in essentially the same period in which companies such as Instagram grew from an idea to half a billion users, overall U.S. electricity usage for “data centers”—spanning the range from server closets to hyperscale facilities run by companies such as Amazon, Google, and Microsoft—has increased less than 1 percent per year.

The 66-page report contains a comprehensive analysis from the U.S. Department of Energy’s prestigious Lawrence Berkeley National Laboratory, derived in collaboration with globally recognized experts such as Jonathan Koomey from Stanford, Eric Masanet from Northwestern, and Inês Azevedo from Carnegie Mellon. It uses a sophisticated modeling approach combined with voluminous real-world data.

Specifically, the report indicates that energy used by digital infrastructure in the U.S. grew only 4% over the period from 2010 to 2014, reaching an estimated 70 billion kilowatt-hours in 2014. This figure is expected to barely change through 2020, when it is projected to reach 73 billion kilowatt-hours.
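The growth rate implied by those two figures can be checked with a few lines of arithmetic. This is a quick sketch: the 70 and 73 billion kWh values come from the report, and the compound-growth formula is the standard one.

```python
# Compound annual growth rate (CAGR) implied by the DOE report's figures:
# 70 billion kWh in 2014, projected to reach 73 billion kWh in 2020.
usage_2014 = 70.0  # billion kWh (from the report)
usage_2020 = 73.0  # billion kWh (projected, from the report)
years = 2020 - 2014

cagr = (usage_2020 / usage_2014) ** (1 / years) - 1
print(f"Implied growth: {cagr:.2%} per year")
```

The result works out to roughly 0.7% per year, consistent with the “less than 1 percent per year” characterization above.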

It’s no surprise to any reader of this CIO column that digitalization is an unstoppable trend. Products, services, and processes increasingly rely on digital software and hardware. Various observers have pointed out that such a shift comes at a price—growth in electricity use in devices, networks, and data centers.

One counterargument, which I made a few years ago, is that one can’t just look at the cost side of the equation. For a balanced perspective, one would need to look at the benefit side: the reduction in energy use thanks to mechanisms such as optimization and substitution.

In the same way that it’s worth paying a few dollars for the cab ride to cash in a winning Powerball lottery ticket, it’s worth using some energy to run calculations to reduce the energy used by physical processes. One example is UPS’s ORION system, which optimizes the routes that drivers follow, sequencing the stops and minimizing left turns and thus reducing the total fuel usage and carbon footprint.

Substitution, where digital solutions replace physical ones, is another driver of benefits. Sure, the article you’re reading took energy to deliver to your device, which also uses power, but this substitutes for the physical processes of driving logging trucks into forests, cutting down trees, turning pulp into paper, and printing and delivering a paper magazine. Similar logic applies to holding a web conference rather than using jet fuel to meet in person.

Under this logic, an increase in energy use from digital technologies is not necessarily a bad thing, if it is more than offset by savings elsewhere. For these cases, more is better, because more energy used means even larger energy savings elsewhere.

But the industry has accomplished even more than this, because these kinds of benefits have increased even as energy costs have stayed flat. The report lays out how this has occurred.

One mechanism is server virtualization, where multiple applications or workloads that used to profligately waste hardware and thus electricity are now consolidated into fewer servers. Similar benefits have arisen through storage and network virtualization.

Another big driver is the rise of the cloud. Rather than running applications solely with fixed capacity running at an average single digit utilization, cloud servers can be dynamically allocated among different workloads and customers as individual needs vary, leading to extremely high utilization rates and thus energy efficiency. These kinds of benefits can be achieved to some extent in enterprise data centers through private cloud approaches, but are most evident with larger cloud providers.
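A back-of-the-envelope sketch shows why raising utilization saves so much energy. The numbers here are illustrative assumptions, not figures from the report, and the linear idle-plus-dynamic power model is a common simplification.

```python
# Illustrative comparison: many lightly loaded servers vs. a consolidated pool.
# Assumes a simple linear power model: idle_watts + utilization * dynamic_watts.
IDLE_W = 100.0     # power drawn even when idle (assumed)
DYNAMIC_W = 100.0  # additional power at 100% utilization (assumed)

def fleet_power(servers: int, utilization: float) -> float:
    """Total power draw (watts) for a fleet at a given average utilization."""
    return servers * (IDLE_W + utilization * DYNAMIC_W)

# Same total work either way: 100 servers at 10% == 20 servers at 50%.
before = fleet_power(100, 0.10)  # 11,000 W
after = fleet_power(20, 0.50)    #  3,000 W
print(f"Energy saved by consolidation: {1 - after / before:.0%}")
```

Because idle servers still draw substantial power, packing the same work onto fewer, busier machines cuts fleet power by roughly 73% in this toy scenario.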

Yet another big driver is the increased energy efficiency of modern server designs. Not only can they perform more operations at a lower energy cost, but they scale their power use more cleanly, using less power when idling or performing fewer computations. Similarly, storage has grown more efficient, with larger drives capable of I/O operations that also use less energy per I/O or bit stored, and additional improvements due to the shift from physical disks to solid state drives. And of course, the same thing has happened with network equipment, as port speeds increase every few years without a corresponding increase in energy use.

One driver that has had a bit less of an overall impact over the past few years is improvements in power usage effectiveness (PUE). This metric characterizes the ratio of energy used by the data center in total—including heating, cooling, lighting, and energy losses—compared to the amount actually used by IT infrastructure to do useful work. Although some newer data centers are achieving excellent PUEs closing in on a perfect ratio of 1.0, the average PUE has improved more slowly, even with rapidly advancing technologies for data center infrastructure management, data center services optimization, airflow modeling and control, and the like.
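As a concrete illustration of the metric (a minimal sketch: the resulting 1.8 matches the report’s current average, but the kWh inputs are made-up example values):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy.
    A perfect data center would score 1.0 (all energy goes to IT work)."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility: 1,000 kWh to IT gear, plus 800 kWh for cooling,
# lighting, power distribution losses, and other overhead.
print(pue(1_000 + 800, 1_000))  # 1.8
```

Lower is better; everything above 1.0 is overhead that does no computing.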

In fact, the average PUE has improved only from an estimated 2.0 in 2007 to 1.8-1.9 today, according to the DOE report. According to Arman Shehabi, a primary author of the report, the average PUE has in reality improved to roughly 1.7 since the report’s detailed data collection phase, and should improve further to 1.5 by 2020. The report’s authors attribute the slow rate at which best practices are improving the average PUE to the relatively slow turnover of physical data centers and infrastructure—e.g., heating, cooling, and power distribution—compared to IT equipment—servers, storage, and networks. This demonstrates both the need for CIOs to focus on PUE and the further potential for improvements in U.S. and global energy efficiency as best practices are more broadly adopted.

Looking toward the future, we can expect even further efficiency improvements. Some will derive from continued adoption of proven best-practice designs and approaches. Others may be extrapolated from a continuing history of improvements, such as in server power efficiency. Still others will arise from new technologies—such as “serverless” computing, where digital services only use resources when they are invoked—and new architectures—such as fog/edge computing, where data is processed through distributed queries or compressed at the edge prior to transport.

In short, predictions that aggregate data center energy use would spiral out of control have been disproven thanks to a variety of innovations at all layers.