An article last week in
Network World displayed all of the glorious promise, challenge, and contradictions of cloud computing in less than 1000 words. The article focused on
the quality of support for Amazon Web Services’ user forums, based on a study by a team of researchers from IBM and the University of
The primary finding of the study is that technical support questions posted to the AWS forums take a long time to get answered, with administrators
(I assume these are AWS employees who provide feedback and suggestions to questions posted in the forums) taking up to 20 hours to respond to
queries. The article implies, but does not say outright, that Amazon is falling short of providing the kind of support that should be part of a cloud
computing offering. It offers a couple of interesting anecdotes about customer support expectations that illustrate crucial aspects of cloud
computing that adopters must understand to succeed.
To address the first issue squarely: does Amazon’s forum support fall short of user expectations?
If one looks at the actual adoption numbers, it seems clear that there is strong market adoption of AWS. As I wrote last year, an analysis
conducted by Guy Rosen (@guyro) and RightScale showed that around 50K EC2 instances were being started in the US each day. The numbers are
bound to be higher today. So it seems that somebody must find the AWS offerings more than acceptable.
By the same token, it is also clear that the foundation of AWS support is more akin to open source than to traditional “enterprise” compute offerings.
But is that a shortcoming?
Many of the adopters of AWS are people or organizations that have found the established mode of computing (aka, the traditional “enterprise”
mode) unsatisfactory. Developers, of course, have flocked to AWS. Likewise, many business units of companies, both large and small, are finding their
way to AWS as a way of bypassing the expensive and unresponsive enterprise IT groups. That characterization may be inflammatory, of course. A
different way to say it is that the established processes and cost structures do not align with the agile and inexpensive ways AWS adopters seek to
meet their specific business challenges.
Clayton Christensen, author of The Innovator’s Dilemma, characterized this mismatch of offering and needs as “overserving the customer,” which is to say, the
established offering (in the case of enterprise IT, deliberate, risk-averse, process-heavy approaches) delivers far more than what
developers and business units need. In his theory, users turn to new offerings that fall short of meeting everything the current market leaders offer, but
help those users solve their business problems without all the overhead of the current market leader offerings. In this way, the new product serves a new
market, one that is overserved by the established products.
Christensen goes on to say that the “inadequate” new offerings gradually improve to the point where they are sufficiently robust to meet the market
requirements of the existing customers, which causes a tipping point in the overall market, in which the new offerings displace the existing ones and drive
the old ones out of the market.
He gives examples from many different markets, including disk drives, earth excavation equipment, and department stores. In examining department
stores, he showed that a new breed of discount stores eventually drove out of business the mainstream department stores that could not develop a
new approach to the market. For example, discounters came to dominate electronic appliance distribution (think TVs, audio equipment, etc.), displacing the
full service (and expensive) mainstream department stores.
Left unaddressed in Christensen’s theory is how customers accustomed to knowledgeable sales staff employed by the mainstream department stores
coped with discount stores that assumed the buyer would have sufficient knowledge to make a selection on his or her own.
And I think that is at the heart of the matter with respect to cloud support — how will IT organizations that operate with assumptions about
vendor support availability respond to the new world of cloud computing? But to my mind, it goes well beyond just having someone at the end of a phone
line ready to take a call.
The most fascinating part of the Network World article was an anecdote proffered by Lydia Leong, an analyst from Gartner. She noted that she had
just interacted with a large enterprise in which some developers had created a new application and thought it was fantastic that Amazon was going to
support it, freeing IT from having to do so. Except, as she noted, Amazon doesn’t provide end-to-end application support. Amazon takes responsibility for
the compute infrastructure, but leaves the application support to the user. To quote the article:
“It turns out that these developers believed in the magical cloud — an environment where everything was somehow mysteriously being taken care
of by Amazon, so they had no need to do the usual maintenance tasks, including worrying about security — and had convinced IT operations of …”
I can vouch for this expectation. In one of our workshops, we had an operations manager from a very large, global consumer goods company. His
executive management was hot to trot to move to AWS, figuring that no outsourcer could meet AWS prices of 8.5 cents per hour for a supported
database server. Two days later, he was trying to figure out how to break the news to his boss that 8.5 cents per hour only addressed part of their
application support needs.
This sort of misplaced expectation is common, and it will become a significant theme over the next couple of years, as IT organizations that have
embraced cloud computing learn that cloud computing support is a shared responsibility, that the user plays a significant role in operating a
cloud-based app — and turn to internal IT operations to take on application operations responsibility. In fact, we’ve seen so much of this
phenomenon that we have christened it “The Cloud Boomerang.” If you’d like to learn more about the nature of the Boomerang and what to do about it,
you can view a YouTube video we put together about it.
Even more striking, and more challenging, is what Leong goes on to say about this enterprise customer:
“Plus, they also assumed that auto-scaling was going to make their app magically scale. It’s not designed to automatically scale horizontally. Somebody is
going to be an unhappy camper.”
A different way to say this is that an elastic infrastructure does not automatically make an application elastic, or, as I said in a recent post on this
topic, Cloud Computing is not Hosting 2.0. By this
I meant that placing traditionally architected applications into a cloud computing environment (i.e., treating cloud computing as the next generation of
application hosting) is a recipe for disappointment — as evinced by the client Leong was speaking to.
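To make the difference concrete, here is a minimal Python sketch; all class and variable names are invented for illustration, not drawn from any real framework. An app that keeps session state in an instance’s local memory cannot scale horizontally, because every request for a given user must land on the same instance. An app that externalizes its state to a shared store can be served by any number of identical instances joining and leaving the pool.

```python
# Sketch: why local state defeats horizontal scaling.
# "shared" stands in for a durable external store (a database or
# cache in practice); it is a plain dict so the sketch is self-contained.

class StatefulServer:
    """Keeps sessions in local memory: a user's requests must always
    hit the same instance, so adding instances doesn't add capacity."""
    def __init__(self):
        self.sessions = {}

    def handle(self, user, data):
        self.sessions.setdefault(user, []).append(data)
        return len(self.sessions[user])  # how much history this instance sees

class StatelessServer:
    """Keeps sessions in a shared external store: any instance can
    serve any request, so instances can join and leave freely."""
    def __init__(self, store):
        self.store = store

    def handle(self, user, data):
        self.store.setdefault(user, []).append(data)
        return len(self.store[user])

# Two "instances" behind a load balancer, sharing one store:
shared = {}
a, b = StatelessServer(shared), StatelessServer(shared)
a.handle("alice", "req1")
count = b.handle("alice", "req2")  # instance b sees alice's full history
```

With two `StatefulServer` instances instead, the second instance would see none of the first one’s sessions, which is exactly the “it’s not designed to automatically scale horizontally” problem Leong describes.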
Simply put, I believe the primary challenge for most enterprises with regard to cloud computing will not end up being whether to use a public provider
or to build their own, but, rather, learning how to build and run applications suited for cloud computing, wherever the cloud is located. And don’t
underestimate the changes this platform shift requires. Also, don’t imagine dealing with these changes can be avoided. They are part of any cloud-based
application, and we will see an explosion of applications as the implications of low-cost, on-demand computing infrastructures permeate businesses.
What should you do to prepare for this new world of low-touch, different architecture applications?
Here are several suggestions:
1. Recognize application support is a shared responsibility. The infrastructure provider (whether external like AWS or internal, provided by
IT operations) takes responsibility for a portion of the operations and support of an application. You, or your proxy, are responsible for the remainder.
This is similar to the fact that security is a shared responsibility in a cloud environment. Understand your portion of the responsibility and account for it in
your project plan.
2. Standardize and implement enterprise architecture. Your application portfolio is going to grow — a lot. Trying to manage a much
larger collection of one-off customized application environments will be ruinously expensive. Move toward a common application infrastructure and
standardized application components and design patterns. Anything else will be overwhelming.
3. Architect for cloud environments. Our phrase for this is “Build cloud apps, not apps in the cloud,” shorthand for a whole range of
practices including: planning for short-duration, unreliable compute resources; creating dynamic application elasticity via transparent resource joining (and
leaving); and adapting existing processes to incorporate cloud application development. See the previously mentioned “Cloud Computing is not Hosting 2.0” for tips on how
to do this.
4. Implement application management frameworks suited for cloud computing environments. Trying to use management frameworks
designed for physical, stable environments in a cloud computing world is like bringing a knife to a gunfight. Whatever application management system you
use has to fully support the new application architectures and operations approaches. Don’t be like a general fully prepared to fight the last war and
rendered irrelevant by the new one.
5. Above all, re-examine your assumptions about what applications are, how they’re built, and how they’re run in light of cloud computing.
It’s a new world, and your old operating patterns are outmoded. Leave your old baggage behind for this journey.
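As one concrete illustration of suggestion 3 (planning for short-duration, unreliable compute resources), here is a minimal Python sketch; the function and variable names are invented for illustration. A batch worker checkpoints its progress to durable shared storage after each item, so that when its instance disappears mid-run, a replacement instance resumes from the checkpoint rather than starting over.

```python
# Sketch: resumable batch work for unreliable cloud instances.
# "checkpoints" stands in for durable shared storage (an object store
# or database in practice); here it is a dict so the sketch runs as-is.

def process_items(job_id, items, checkpoints, fail_after=None):
    """Process items in order, recording progress after each one.
    If the instance dies mid-run (simulated via fail_after), a new
    worker calling this function resumes from the last checkpoint."""
    start = checkpoints.get(job_id, 0)  # resume point: 0 on first run
    results = []
    for i in range(start, len(items)):
        if fail_after is not None and i >= fail_after:
            raise RuntimeError("instance terminated")  # simulated failure
        results.append(items[i] * 2)      # the "work" for this item
        checkpoints[job_id] = i + 1       # record durable progress
    return results

checkpoints = {}
items = [1, 2, 3, 4, 5]
try:
    process_items("job-1", items, checkpoints, fail_after=3)  # dies mid-run
except RuntimeError:
    pass
remaining = process_items("job-1", items, checkpoints)  # replacement resumes
```

After the simulated termination, the replacement worker processes only the two unfinished items instead of repeating the whole job, which is the sort of design that makes short-lived, interruptible instances an asset rather than a liability.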
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in
virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to
date. Follow Bernard Golden on Twitter @bernardgolden. Follow everything
from CIO.com on Twitter @CIOonline