Last week’s OpenStack Summit left me with some strong impressions, along with some questions and reservations. The bottom line: If you’re an enterprise CIO, OpenStack is going to be very important to your future.
First, in terms of overall energy and tone, the Summit reminded me of last November’s Amazon Web Services re:Invent conference. Both conferences exuded the unmistakable sense of, to quote Charlie Sheen, “Winning.”
The OpenStack Summit had more than 2,500 attendees, with large contingents from Rackspace, Dell, Hewlett-Packard and IBM. There were even a few end-user types sprinkled among the attendees. All seemed ebullient about OpenStack and in no doubt that it represents the future.
Based on the sessions I attended and conversations I took part in, here are my takeaways from the Summit:
Game Over for Private Cloud?
The most striking takeaway from the Summit is the depth of commitment to OpenStack on the part of the large systems companies. Dell, HP and IBM all seem to be defining OpenStack as their choice for cloud orchestration.
IBM presented a session on upcoming products that are all standards-based, including OpenStack and the TOSCA cloud management standard. As far as I can tell, IBM says OpenStack will be its favored orchestration product going forward.
Now, IBM is a canny company that, better than any other, has figured out how to use open source to make money—and, just as importantly, how to use open source to compete with others. To me, this announcement indicates that IBM has concluded that cloud orchestration is going to be a commodity product, that it will leverage OpenStack to achieve commodity economics, and that it will look to other layers in the stack—and hardware as well, no doubt—for revenue and profits.
Dell and HP have both suffered from customer (and analyst) confusion about their primary cloud strategy. Both have had a bit of a smorgasbord approach in the past, offering multiple products from different vendors to (quite understandably) befuddled customers. From what I saw and heard at the Summit, both have fastened on OpenStack as their primary orchestration product and cloud strategy foundation going forward.
With the three largest systems companies choosing OpenStack as their orchestration vehicle, the question arises: What’s going to happen to the other contenders for IT private clouds?
My interpretation of what the companies were saying at the Summit is that their main recommendation to users will be to implement OpenStack. This obviously excludes VMware’s vCloud, although crucially—and no doubt to the relief of IT organizations that have large investments in VMware’s ESX—it does not exclude VMware’s hypervisor. OpenStack can (at least theoretically) run on ESX. I would expect to see greater priority in the future placed on certifying OpenStack on ESX.
It seems the future battleground for private clouds will be VMware and its army of system integration partners fighting the large system vendors that will be pushing OpenStack. It’s for this reason I say OpenStack is going to be important for IT organizations going forward. When all the vendors align on a single solution, it’s hard to imagine it’s not going to become a core piece of IT infrastructure.
Deployment Matters, Not Just Development
On the other hand, don’t let all the hoopla and triumphalism distract you from a vital reality: OpenStack is a young product that’s not yet mature. The most vivid evidence of this is the Summit’s overemphasis on developers and development—or, perhaps I should say, the absence of deployment discussions and sessions.
If OpenStack is truly to achieve its destiny as the de facto private cloud orchestration product, it has to be easy to consume. That means deployment has to be given priority. It’s an open secret that upgrading OpenStack from one release to another is not seamless, and the releases come hot and heavy every six months. In fact, it’s typically easier to start fresh than to try to migrate an existing installation to a new release. That’s unacceptable to a mainstream enterprise IT market.
OpenStack release plans and decisions need to incorporate end-user deployment requirements much more than has been done in the past. It can’t just be lip service, either. Once a technology moves past the early adopter market, stability and manageability commonly become much more critical to the rest of the potential user base. Ignoring these requirements in favor of developer priorities will greatly hinder OpenStack mainstream market uptake.
Interoperability: How Should OpenStack Providers Compete?
Interoperability came up frequently at the Summit, and it’s interesting that it did. When OpenStack was originally launched, founders trumpeted its use of the Apache license, citing the license’s flexibility, which would let OpenStack providers modify the product for competitive differentiation purposes.
The downside of that licensing approach is that everyone’s OpenStack product is different and incompatible. “Incompatible” is a word enterprises hate, because they know that, over time, they will end up with at least one of everything—and if those products can’t work together or, worse, behave differently, they’re a nightmare to manage.
I participated in an interoperability panel and made the following points:
- Interoperability is typically important to users. At a minimum, users want to know that major versions of a product, such as OpenStack Grizzly, provide the same functionality. To date, OpenStack “distros” vary significantly more than Linux distros do. That’s unacceptable. A Grizzly distro has to be nearly identical for core functionality, no matter who it comes from.
- Specification interoperability—for example, when a vendor certifies that it developed its product to a certain standard—always gives way to creation of a test suite and certification of conformance to the standard by successful completion of the suite. Critical to this is definition of APIs and testing of API conformance. Using the same source base, while useful, is not sufficient for interoperability. Interoperability is based on behavior, not contents, so a test suite validating API conformance will be the true measure of OpenStack interoperability.
- OpenStack providers are going to feel significant end user pressure in the future to provide better interoperability. My prediction: This will lead to creation of a test suite and certification process and, once one large vendor (say, HP) certifies, everyone will have to do so ASAP for competitive reasons.
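To make the second point concrete: what a conformance test suite actually checks is the behavior of an API, not the source code behind it. Here is a minimal sketch of that idea. The endpoint names and required fields below are illustrative assumptions, not the actual OpenStack API contract, and a real suite (run against live endpoints) would cover far more.

```python
# Sketch of behavior-based API conformance checking. The contract below is
# hypothetical: it maps an API call to the fields every conforming
# distro's response must include, regardless of which code base served it.

EXPECTED_CONTRACT = {
    "GET /servers": {"id", "name", "status"},
    "GET /images": {"id", "name", "min_disk"},
}


def check_conformance(call, response_body):
    """Return a sorted list of required fields missing from the response.

    An empty list means the response conforms for this call. Note that
    conformance is judged purely on what the API returns (behavior),
    not on whether the provider shares a source base with anyone else.
    """
    required = EXPECTED_CONTRACT.get(call, set())
    return sorted(required - set(response_body))


# Two hypothetical distros answering the same call:
distro_a = {"id": "42", "name": "web-1", "status": "ACTIVE"}
distro_b = {"id": "42", "name": "web-1"}  # omits "status"

print(check_conformance("GET /servers", distro_a))  # conforms: []
print(check_conformance("GET /servers", distro_b))  # missing: ['status']
```

The point of structuring the check this way is that two providers built from entirely different code could both pass, while two providers sharing a source base could still fail each other—which is exactly why a test suite, not common code, is the true measure of interoperability.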
In summary, the Summit seemed to be OpenStack’s moment in the spotlight, an emergence into a role that is likely to lead to a position of preeminence in the private cloud world.
As the cliché goes, with great power comes great responsibility. As OpenStack moves along its path of destiny, it’s going to encounter the new requirements that invariably accompany mainstream success—ease of installation, stability, interoperability and the like. My recommendation: The OpenStack Foundation should recognize the looming presence of these market needs and promptly move to include them in its plans.
Bernard Golden is the vice president of Enterprise Solutions for Enstratius Networks, a cloud management software company. He is the author of three books on virtualization and cloud computing, including Virtualization for Dummies. Follow Bernard Golden on Twitter @bernardgolden.