by Bernard Golden

Cloud CIO: The Next Generation Cloud Offering

Opinion
Apr 14, 2011
CIO | Cloud Computing | Data Center

Recent announcements from Facebook and VMware reflect how profoundly cloud computing will change the nature of corporate IT in the future. CIO.com's Bernard Golden discusses the takeaways from these announcements for cloud CIOs.

Last week I wrote about how CIOs should go about rolling out cloud computing initiatives. In the piece, my conclusion was headed “An Unusually Fast Platform Shift”; I noted that the pace of innovation and adoption regarding cloud computing far outstrips any previous platform shift.

This week, two announcements reinforced that perspective. Both of them are critical for CIOs going forward with cloud computing plans and, though not much commentary has linked them, I believe they are complementary and reflect how profoundly cloud computing will change the nature of corporate IT in the future.

The first was Facebook’s announcement of its Open Compute Project. Facebook set out two years ago to rethink data center design, custom-building an integrated facility that uses bespoke hardware, outside air in place of air conditioning, an innovative floor design, and more, to provide a much less expensive approach to operating a data center. For details on Facebook’s own data center that implements this design, see James Hamilton’s excellent writeup here.

The second was VMware’s announcement of its CloudFoundry project, which provides an open source-licensed, generalized Platform-as-a-Service (PaaS) framework. VMware has purchased a number of open source software companies whose products are included in the CloudFoundry framework. CloudFoundry goes well beyond an integrated product suite, however: it provides a programming framework that allows developers to create elastic, scalable applications without needing to trouble themselves with the details of coordinating individual compute resources like virtual machines, databases, and so on. Moreover, VMware is providing a hosted version of CloudFoundry for developers to use. Finally, CloudFoundry is cloud-agnostic, supporting the deployment of CloudFoundry-based applications to a variety of cloud providers, including ones not based on VMware software (as well as internal private clouds located on-premises within companies). For more on CloudFoundry, here is a video interview of VMware’s Tod Nielsen by Robert Scoble (@scobleizer) of Rackspace.

Each announcement is important in its own right, but, taken together, they provide a vision of what the future of computing will look like. Let’s drill down on each one to understand the implications.

First, Open Compute presents a new benchmark for operating a data center, and, unlike, say, Google and Microsoft, who also have implemented extremely efficient data centers but have kept the details proprietary, Facebook has chosen to share its design.

Essentially, Facebook has rethought every element of a data center in order to make the overall aggregation of components and operations as efficient as possible. In announcing the project, Facebook noted that its data center implementing the Open Compute design uses 38% less energy to do the same amount of work as an older data center, while costing 24% less. As already noted, the data center uses no air conditioning; this is because it’s sited in Oregon, where outside air temperatures obviate the need for mechanical cooling. Additional measures include using a specialized energy distribution system, custom-designing servers that have no unnecessary components, and eliminating a central UPS. By treating its data center as an integrated aggregation, rather than an assembled collection of standardized components, Facebook has made the integrated whole far more efficient. Its announcement stated that the data center operates at a power usage effectiveness (PUE) of 1.07, significantly better than its previous 1.5.
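To put those PUE numbers in perspective: PUE is simply total facility energy divided by the energy delivered to the IT equipment, so the closer to 1.0, the less energy is lost to cooling and power distribution. A quick back-of-the-envelope calculation (the 1.5 and 1.07 figures are Facebook’s; the arithmetic is mine) shows how large the difference is:

```python
# Rough illustration of what the PUE figures imply.
# PUE = total facility energy / energy delivered to IT equipment,
# so a PUE of 1.0 would mean zero overhead for cooling, power distribution, etc.

def overhead_fraction(pue: float) -> float:
    """Fraction of total facility energy spent on non-IT overhead."""
    return (pue - 1.0) / pue

for label, pue in [("Conventional facility (PUE 1.5)", 1.5),
                   ("Open Compute facility (PUE 1.07)", 1.07)]:
    print(f"{label}: {overhead_fraction(pue):.1%} of energy goes to overhead")

# Approximate output:
#   Conventional facility (PUE 1.5): 33.3% of energy goes to overhead
#   Open Compute facility (PUE 1.07): 6.5% of energy goes to overhead
```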

Facebook’s announcement reinforces our perspective that the nature of IT operations is going to change: IT will increasingly be a service provider, but not necessarily an asset owner. How can one justify owning and operating a data center unless it achieves the kind of efficiency now possible? The coming role for infrastructure operations will be selecting among physical infrastructure providers and implementing operational oversight and monitoring. There will undoubtedly be many challenges in pursuing this strategy, including security, bandwidth sufficiency, and SLA monitoring; however, failing to pursue a low-cost infrastructure strategy is inappropriate when options are available.

Turning to CloudFoundry, it comes at an interesting time for software efforts within IT organizations as they seek to adopt cloud computing. While there has been significant use of Amazon Web Services and other cloud providers, much of what has been implemented is what we refer to as “Hosting 2.0.” By this we mean that developers are using virtual machines within the cloud infrastructure, but designing and implementing applications the same way they did when physical machines provided the computing environment. Specifically, applications are installed as though they will run in a persistent virtual machine, with any configuration or application topology changes implemented manually, and with no redundancy to protect availability in the event of resource failure.

The problem is that all the assumptions underlying that approach are inappropriate in a cloud computing environment. To achieve the agility and elasticity that cloud computing promises, applications must be designed and operated differently. Our conclusion, though, is that many, many software engineers will struggle to acquire the skills needed to develop these types of applications.
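To make the contrast concrete, here is a deliberately simplified sketch (the host address, paths, and object store are invented for illustration). The first function embodies the Hosting 2.0 assumptions described above; the second embodies the assumptions a cloud environment actually imposes.

```python
import os

# Hosting 2.0 habit: assume this particular machine lives forever, so local disk is "safe"
# and configuration is baked in by hand.
DB_HOST = "10.0.1.12"  # hand-edited whenever the application topology changes

def save_upload_hosting_2_0(name: str, data: bytes) -> None:
    with open(f"/var/myapp/uploads/{name}", "wb") as f:  # gone if this VM disappears
        f.write(data)

# Cloud habit: assume any instance can vanish or be cloned at any moment, so configuration
# is injected per deployment and durable state lives in a shared, replicated service.
DB_URL = os.environ.get("DATABASE_URL", "")  # supplied by the environment, not hand-edited

def save_upload_cloud(object_store, name: str, data: bytes) -> None:
    object_store.put(name, data)  # shared storage that outlives any single instance
```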

Consequently, I have begun to conclude that for many organizations, a PaaS software infrastructure will be critical to enable “true” cloud application development. Furthermore, that PaaS infrastructure has to support the languages and application design patterns that developers know and are comfortable with. Only with a framework that abstracts the details of achieving scalable elasticity, automatic database replication, integration with other platform services, and so on, will many organizations be able to obtain the promise of cloud computing.

This is why the CloudFoundry initiative looks so promising. It supports “traditional” Java development as well as later-generation languages such as Ruby. It envisions easy access to platform services like queues. And it promises to help organizations avoid lock-in by enabling transparent deployment to any number of public clouds as well as internal clouds.
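As a sketch of what this looks like from the developer’s side, the snippet below simply asks the platform which database has been bound to the application, rather than configuring hosts, replication, or failover itself. CloudFoundry surfaces bound services to an application through an environment variable (VCAP_SERVICES); the JSON structure and field names here are simplified for illustration, not a reference for the real format.

```python
import json
import os

# Illustrative only: a PaaS-deployed application does not manage database hosts,
# replication, or failover. It asks the platform which services were bound to it
# at deployment time. The structure parsed below is a simplified stand-in.

def bound_database_credentials():
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for offerings in services.values():       # e.g. {"mysql-5.1": [ {...}, ... ]}
        for instance in offerings:
            creds = instance.get("credentials", {})
            if "hostname" in creds:
                return creds                   # host, port, user, password chosen by the platform
    return None

creds = bound_database_credentials()
print("Platform-provided DB endpoint:", creds["hostname"] if creds else "none bound")
```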

This appears to me to be an exactly on-target approach to what is required: Specialized software engineers take care of the complicated plumbing bits, while mainstream application developers focus on solving business problems, secure in the knowledge that what they develop will automatically obtain the benefits promised by cloud computing.

Of course, it’s early yet. Much of the initial commentary has evaluated this initiative as a threat to Amazon or other cloud providers. The “who wins, who loses” approach is tempting, but in such a rapidly growing field, cloud service providers (CSPs) should be focused on easing adoption, not on who is top dog, which means they’ll welcome anything that reduces barriers to use. In any case, CSPs make money by selling computing service, and anything that eases consumption is a positive for them. A more telling question is to what extent CloudFoundry will build a community and ecosystem around the technology offering. And beyond the typical issues of community building, one would think two critical constituencies will have questions about their potential involvement.

For CSPs, the question is less which PaaS is more powerful than whether to commit to an offering from a vendor that is a key supplier to their competitors. This will probably be even more true for CSPs not based on VMware software. There could be a natural concern that information shared during a business strategy discussion with the CloudFoundry team might find its way to the ears of the unit that supplies software to other CSPs.

For end user organizations, a natural concern would be the depth of VMware’s commitment to CloudFoundry. The software product landscape is littered with the abandoned open source initiatives of vendors who launched a product with great fanfare and then lost enthusiasm once they realized how long it takes and how much work is required to build a vibrant community. This abandonment isn’t the result of a lack of desire at launch; it simply reflects the fact that when it later comes time to invest in the product’s growth, higher-priority items claim the money or headcount, and one day it becomes evident that the product has grown stale and the community has petered out. Obviously, an end user is going to evaluate the prospects for an open source product’s longevity very carefully before jumping on board.

These reservations are not criticisms of the CloudFoundry initiative itself. I hope I’ve made clear how important and necessary such an offering is. The fact that a proprietary software vendor has had to step up to make this happen reflects, in my view, a paucity of imagination on the part of the venture capital community. The need for high-productivity, streamlined application development is obvious; why wasn’t CloudFoundry or an equivalent initiative funded as a startup?

For a cloud CIO, the takeaways from these announcements are the following:

1. Evaluate your cloud infrastructure plans very carefully in light of the accelerated innovation in data center design. Sticking a cloud infrastructure into a cost-uncompetitive data center (and that includes operational inefficiency as well as kit inefficiency) consigns you to a high-cost provider position, not a comfortable place to be in the future.
2. Accelerate your plans to move to cloud computing. A two-year plan to stand up a dev/test cloud is just too long a timeframe. It risks solving yesterday’s problems and being bypassed by tomorrow’s solution.
3. Consider where to invest your staff skill development dollars. If an external company can operate a high-efficiency PaaS that your applications can use, perhaps you would be better served by leveraging that and directing your staff training and recruitment toward applications, building up people who can place business logic on top of the PaaS infrastructure.
4. Ruthlessly standardize offerings. I made this recommendation last week and make it again even more strongly in the light of these new developments. “Have it your way” is a great restaurant jingle, but it’s hopelessly expensive in IT.
5. Consider using differential pricing to guide business unit behavior. Many business units will insist on “having it their way,” and in the absence of any reason not to, will expect IT to deliver custom configurations, etc. Offering the standard configuration at one price and the custom configuration at, say, 3X the standard price sends a price signal and allows the BU to evaluate how important the custom configuration is (hint: at a higher price, it may not be nearly as important).

This is, perhaps, the most interesting time in IT since the rise of the PC. Certainly the democratization and explosion of applications enabled by cloud computing resembles that heady period, when it seemed like new developments sprang forth daily. It’s fantastic to see companies like Facebook and VMware contribute to this innovation, and it raises anticipation for the new developments on the horizon.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.

Follow Bernard Golden on Twitter @bernardgolden. Follow everything from CIO.com on Twitter @CIOonline