An authoritative analysis titled the “United States Data Center Energy Usage Report” has just been released by the U.S. Department of Energy, and it contains a surprising conclusion about electricity use. After energy use grew with wild abandon through the first decade of the new millennium, approaches such as cloud computing and virtualization have driven so much efficiency that U.S. data center electricity use has barely budged, even as virtually every dimension of our world has become more, well, virtual. Specifically, in essentially the same period in which companies such as Instagram grew from an idea to half a billion users, overall U.S. electricity usage for “data centers” (spanning the range from server closets to hyperscale facilities run by companies such as Amazon, Google, and Microsoft) has increased less than 1 percent per year.
The 66-page report contains a comprehensive analysis from the U.S. Department of Energy’s prestigious Lawrence Berkeley National Laboratory, produced in collaboration with globally recognized experts such as Jonathan Koomey of Stanford, Eric Masanet of Northwestern, and Inês Azevedo of Carnegie Mellon. It uses a sophisticated modeling approach combined with voluminous real-world data.
Specifically, the report indicates that energy used by digital infrastructure in the U.S. grew only about 4 percent from 2010 to 2014, reaching an estimated 70 billion kilowatt-hours in 2014. This figure is expected to barely change through 2020, when it is projected to reach 73 billion kilowatt-hours.
It’s no surprise to any reader of this CIO column that digitalization is an unstoppable trend. Products, services, and processes increasingly rely on digital software and hardware.
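As a back-of-the-envelope check on the report’s headline figures, the 70-billion-kilowatt-hour 2014 estimate and the 73-billion-kilowatt-hour 2020 projection can be converted into an implied compound annual growth rate. The figures come from the report as cited above; the short script below is just illustrative arithmetic, not anything from the report itself:

```python
# Back-of-the-envelope check on the DOE report's headline figures:
# 70 billion kWh in 2014, projected 73 billion kWh in 2020.

def annual_growth_rate(start_value, end_value, years):
    """Compound annual growth rate between two values over the given years."""
    return (end_value / start_value) ** (1 / years) - 1

use_2014 = 70e9  # kWh, per the report
use_2020 = 73e9  # kWh, projected by the report

rate = annual_growth_rate(use_2014, use_2020, years=6)
print(f"Implied annual growth, 2014-2020: {rate:.2%}")  # about 0.70% per year
```

The result, roughly 0.7 percent per year, is consistent with the “less than 1 percent per year” figure cited above.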
Various observers have pointed out that such a shift comes at a price: growth in electricity use in devices, networks, and data centers.
One counterargument, which I made a few years ago, is that one can’t just look at the cost side of the equation. For a balanced perspective, one must also look at the benefit side: the reduction in energy use thanks to mechanisms such as optimization and substitution.
In the same way that it’s worth paying a few dollars for the cab ride to cash in a winning Powerball lottery ticket, it’s worth using some energy to run calculations that reduce the energy used by physical processes. One example is UPS’s ORION system, which optimizes the routes that drivers follow, sequencing stops and minimizing left turns, thus reducing total fuel usage and carbon footprint.
Substitution, where digital solutions replace physical ones, is another driver of benefits. Sure, the article you’re reading took energy to deliver to your device, which also uses power, but this substitutes for the physical processes of driving logging trucks into forests, cutting down trees, turning pulp into paper, and printing and delivering a paper magazine. Similar logic applies to holding a web conference rather than burning jet fuel to meet in person.
Under this logic, an increase in energy use from digital technologies is not necessarily a bad thing, provided it is more than offset by savings elsewhere. In such cases, more is better, because more energy used means even larger energy savings elsewhere.
But the industry has accomplished even more than this, because these kinds of benefits have grown even as energy use has stayed flat. The report lays out how this has occurred.
One mechanism is server virtualization, where multiple applications or workloads that once ran on dedicated, underutilized hardware, profligately wasting electricity, are now consolidated onto fewer servers.
Similar benefits have arisen through storage and network virtualization.
Another big driver is the rise of the cloud. Rather than running each application on fixed capacity at an average utilization in the single digits, cloud servers can be dynamically allocated among different workloads and customers as individual needs vary, leading to much higher utilization rates and thus energy efficiency. These benefits can be achieved to some extent in enterprise data centers through private cloud approaches, but they are most evident with the larger cloud providers.
Yet another big driver is the increased energy efficiency of modern server designs. Not only can they perform more operations at a lower energy cost, but they also scale their power use more proportionally, drawing less power when idling or performing fewer computations. Similarly, storage has grown more efficient, with larger drives that use less energy per I/O operation or per bit stored, plus additional improvements from the shift from spinning disks to solid-state drives. The same is true of network equipment, as port speeds increase every few years without a corresponding increase in energy use.
One driver that has had somewhat less impact over the past few years is improvement in power usage effectiveness (PUE). This metric is the ratio of the total energy used by a data center (including heating, cooling, lighting, and power distribution losses) to the energy actually used by the IT infrastructure to do useful work. Although some newer data centers are achieving excellent PUEs closing in on a perfect ratio of 1.0, the average PUE has improved more slowly, even with rapidly advancing technologies for data center infrastructure management, data center services optimization, airflow modeling and control, and the like.
In fact, the average PUE has only improved from an estimated 2.0 in 2007 to 1.8-1.9 today, according to the DOE report.
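To make the PUE figures above concrete, a small sketch can show what each PUE level means in terms of overhead: the fraction of a facility’s total energy that goes to cooling, lighting, and power losses rather than to IT work. The PUE values used below are the ones discussed in this article; the overhead calculation is simply a restatement of the PUE ratio:

```python
# PUE = (total facility energy) / (IT equipment energy), so the
# overhead fraction of total energy is (PUE - 1) / PUE.

def overhead_share(pue):
    """Fraction of total facility energy that is overhead, not IT load."""
    return (pue - 1) / pue

# PUE levels mentioned in the article: 2.0 (2007 average), 1.8-1.9
# (today's average), 1.7 and 1.5 (recent and projected averages).
for pue in (2.0, 1.9, 1.8, 1.7, 1.5):
    print(f"PUE {pue}: {overhead_share(pue):.0%} of energy is overhead")
```

The drop from PUE 2.0 to 1.5 cuts overhead from half of all facility energy to a third, which is why PUE remains worth a CIO’s attention even though it has moved slowly.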
According to Arman Shehabi, a primary author of the report, the average PUE has in fact improved to roughly 1.7 since the report’s detailed data collection phase, and should improve further to 1.5 by 2020. The authors attribute the slow rate at which best practices are improving the average PUE to the relatively slow turnover of physical data centers and their infrastructure (heating, cooling, and power distribution) compared to IT equipment (servers, storage, and networks). This demonstrates both the need for CIOs to focus on PUE and the potential for further improvements in U.S. and global energy efficiency as best practices are more broadly adopted.
Looking toward the future, we can expect even further efficiency improvements. Some will derive from continued adoption of proven best-practice designs and approaches. Others may be extrapolated from a continuing history of improvements, such as in server power efficiency. Still others will arise from new technologies, such as “serverless” computing, where digital services consume resources only when they are invoked, and from new architectures, such as fog/edge computing, where data is processed through distributed queries or compressed at the edge prior to transport.
In short, predictions that aggregate data center energy use would spiral out of control have been disproven, thanks to a variety of innovations at all layers.