by Bernard Golden

Cloud Computing: What Clayton Christensen Can Teach Us

Jan 15, 2009 16 mins
Data Center Virtualization

Is cloud computing a sustaining or disruptive innovation? Both. But it's ultimately likely to be disruptive, despite the efforts of many IT organizations to corral it as a sustaining innovation. If that's you, your job's in peril.

Clayton Christensen’s book, The Innovator’s Dilemma, is a touchstone here in Silicon Valley. His book examines the process of innovation as it attempts to answer the question “why do most new technologies seem to come from startups and not from established companies that are also familiar with the technology?” He cites many markets as examples, including tube table radios (displaced by transistor radios), cable-driven steam diggers (displaced by hydraulic diggers), and disk drives (where successive waves of technology were represented in shrinking form factors) that brought new companies to the fore at each new wave. In each of these markets, according to Christensen, innovation shook up the established way of doing things and propelled new market entrants past companies that had dominated the previous technology.

So ubiquitous is Christensen’s book in the Valley that its title is regularly used as a kind of talisman to explain the promise of a new product, or why a former technology powerhouse (e.g., SGI) has fallen on hard times. Like the phrase “How about this weather?” “the innovator’s dilemma” can be trotted out on almost any occasion and found suitable for purpose.

In fact, Christensen’s theory is far more powerful and much more subtle than might be expected, given its typical application in everyday conversation. He posits that there are actually two kinds of innovation: sustaining and disruptive. Sustaining innovation is when an existing technology is improved; it’s very hard for a new market entrant to be successful by delivering an improved version of an existing product. So, for example, if someone invented a more sensitive tuner for a tube table radio, it would be hard for a new company to make much headway in the market; existing players could improve their current products with the technology, and prevent the market entrant from gaining a foothold.

However, Christensen states, products can end up being improved past the point of usefulness to their existing market. A 2000-watt tube table radio is too good; no one has any use for a full 2000 watts to amplify music. Christensen labels this an “overserved” market. This opens up the possibility for a new kind of innovation to come forth. Typically inadequate compared to the state-of-the-art existing products, the new innovation delivers functionality at a far lower price point or in a way that the former product champions can’t.

Another way an innovation succeeds in the marketplace is by not competing with the existing solution; that is, the innovation serves a new market for which the previous market leader is unworkable. So, for example, transistor radios couldn’t hold a candle to tube table radios; they put out far less power with much higher distortion rates. This was true despite the best efforts of the dominant tube radio manufacturers to fashion a workable transistor table radio. Consequently, transistor radios achieved a market foothold by selling to teenagers who wanted to listen to rock and roll with their friends. They didn’t mind the fact that the sound was terrible. They were listening to their music in their environment. Transistor radios sold by the carload to teenagers, a market that would never have bought a tube radio. This kind of innovation—one which upends the existing order within a market—Christensen calls disruptive innovation.

Ultimately, what happens, according to Christensen, is that the new technology improves over time until it is functionally equivalent to the existing product, but is still available at a lower price point. Then the market shifts dramatically, and the existing dominant players are forced out of the market almost overnight. And—and this is vitally important to understand Christensen’s theory—the established players, even though they recognize the potential and, indeed, the importance of the new technology, fail to bring competitive products using it to market.

The reasons for incumbent failure might be summed up in the phrase “nothing fails like (previous) success.” Usually the incumbent is reluctant to ship a product that is feature-deficient compared to its established products (a good example of this is SAP’s continuing inability to bring a SaaS product to market while Salesforce continues to grow like crazy; SAP continues to cite the need to deliver all the functionality of its packaged product in the on-demand one, a Sisyphean task if ever there was one).

Another reason incumbents fail to succeed with new technologies is that the technology is often accompanied by new business models, and the incumbents are reluctant to embrace a new technology that appears to threaten established margins and ways of doing business (this can be seen in the fact that most mainframe software vendors never made the shift to minicomputer or PC product success; they weren’t willing to sell a product that didn’t cost hundreds of thousands of dollars. Meanwhile, a little company called Microsoft did pretty well with software in the low hundreds to thousands of dollars).

Silicon Graphics also falls into this camp: it was used to selling its graphics software and specialized hardware for huge amounts of money. When commodity x86 boxes with “good enough” software came along, SGI refused to respond, believing that users would be dissatisfied with the new products. SGI later went bankrupt. One of the most telling (and poignant) things I ever read was a former SGI employee’s lament that “We’d all read The Innovator’s Dilemma and understood its message—but we still went down the path to failure.”

I’ve only given a brief overview of Christensen’s theory. It’s elegant and (to my mind) unassailable. I’ve heard many people criticize it, but never heard anyone refute it. If you haven’t read the book, you owe it to yourself to do so.

Now that we’ve reviewed the theory, what about the original question: Is cloud computing a sustaining innovation—one that makes it possible for end users to do the same thing, only easier, while still using the same old products in the same way? Or is cloud computing a disruptive innovation—a new technology that changes the way users operate? There’s no obvious answer; much depends on how one believes cloud computing will be applied in real-world situations. My answer is that cloud computing can be seen as both sustaining and disruptive, but is ultimately likely to be disruptive, despite the efforts of many IT organizations to corral it as a sustaining innovation. And if you’re an IT organization on the wrong side of that corralling attempt, cloud computing may be the ultimate outsourcing option—and a real threat to your position in the company.

Why is this?

The foundation of cloud computing is virtualization—the abstraction of logical resources from their physical underpinnings. This abstraction allows these resources to be moved around, executing on actual physical resources where most convenient or most appropriate. The movement can be from one machine to another located in an organization’s own data center, or it can be from one machine in the data center out to someone else’s (e.g., Amazon’s EC2) data center. But virtualization is key to this abstraction.

You don’t need to be an investigative reporter to know that virtualization has been on fire the past few years. This new technology has cut costs, saved energy, reduced data center floor space use, and cured the common cold. Actually, that last one is a joke. But virtualization has taken IT organizations by storm, there’s no denying it.

Some people maintain that virtualization is a disruptive technology. But is that true? Well, the leading vendor (VMware) was a newcomer to the IT industry, so it seems like virtualization might be disruptive because no incumbent brought it to market. On the other hand, the consumers of the virtualization solution (operating systems installed on a hypervisor which is installed on bare metal) are the same organizations that consumed the previous solution (operating systems installed on bare metal)—that is, IT organizations. Consequently, virtualization is a textbook example of a sustaining innovation—it is a twist on established technology that aligns with existing patterns of use—just as an improved tuner in a tube table radio could be easily used by owners of the earlier versions of table radios. The fact that virtualization is a sustaining innovation accounts for why it has been so rapidly taken up. VMware didn’t need to find new users who were overserved by standard solutions; it sold its product to a well-established market that could continue behaving in established ways. Virtualization requires only minor changes to existing user practices and processes; certainly it does not require wholesale rethinking about the way the technology is used.

From the perspective of IT organizations, virtualization is an excellent technology. It enables them to make their operations more efficient and less expensive, but imposes no changes on end users in terms of getting applications purchased, installed, and so on. The overall process flow established at companies—end users make IT requests, IT evaluates the options, makes a recommendation, obtains funding, and so on—remains in place unchanged. Indeed, many IT organizations praise virtualization as being invisible to the end user organization—its process effect resides solely within the IT organization and allows external interactions with other corporate groups to go on unmodified.

So, given that cloud computing leverages virtualization, and virtualization is a sustaining innovation, does that mean cloud computing is also a sustaining innovation? Perhaps, but it’s just as likely to be a disruptive innovation, not only to established technology vendors but also, crucially, to technology users, i.e., IT organizations.

Let me explain.

From the point of view of vendors, it seems clear that cloud computing holds the potential to be a disruptive innovation. The first entrant, Amazon, comes from outside the industry (i.e., is a new entrant). Cloud computing can substitute for self-hosted hardware, which is to say, a cloud computing application provider can run its application with no need to purchase or install hardware. This clearly poses a threat to incumbent hardware vendors, which have traditionally sold to every organization that wants to run apps. For software vendors, cloud computing also appears potentially disruptive, since new SaaS providers can offer their services more easily and with less capital expense.

Of course, vendors are responding to cloud computing with their own initiatives. IBM, for example, is building out cloud data centers. HP is participating in a cloud computing research initiative with Yahoo and Intel, along with several academic institutions. And, of course, Microsoft has launched Azure. It remains to be seen for all of these vendors whether they will successfully pursue cloud computing or will, as in so many of Christensen’s examples, end up implementing half-hearted solutions, hamstringing them in an effort to support or reinforce existing successful business offerings.

There seems to be no doubt, though, that from the point of view of vendors, cloud computing is disruptive—a new offering, from a new entrant, at a lower price point. (And please, no comments that cloud computing is just like timesharing systems from the good old days; that’s like saying the Pony Express is like the Internet because they both deliver information.)

But is cloud computing a disruptive threat to IT organizations? After all, they don’t sell products, they offer services. So maybe they’re immune to innovation. They’re still the chartered providers of IT to the rest of the company.

Christensen doesn’t restrict his theory to products, though. He addresses services as well. One of the examples in his book relates to retailers, specifically department stores. In the 50s, a new breed of retailer, the discounter, came into being. Offering less service, they charged lower prices, accepting lower margins on individual products, but managing inventory more efficiently, obtaining a couple of extra “turns” of inventory each year. Despite the lower margin, they still made plenty of money; as the old saying goes, they made it up on volume.

Incumbent department stores noticed this trend and responded by creating their own discount chains. One department store, however, decided to merge its discount chain with the main department store chain; after all, it was less efficient to manage two separate operations. By combining them, putative advantages in warehousing, overhead operations (HR, etc.), and purchasing discounts would be available.

Christensen notes that this ultimately led to the failure of the discount arm of the company—because the managers of the company applied department store expectations and practices to the discount division. They were used to higher margins, so they raised prices. They were used to less inventory turnover, so they managed supplier relations less effectively. The discount chain began to resemble the department chain—resulting in it failing in its own market to other discount chains that operated by more appropriate rules.

This tale indicates that cloud computing might also pose a disruptive threat to IT. Let me explain.

Unlike virtualization, cloud computing can have a significant impact on end users. In particular, three aspects of cloud computing provide much more direct interaction for application creators and users:

The first is self-provisioning. You can go to Amazon and create a running instance of a virtual machine with no interaction with another human being. All you need is a valid credit card. This enables an end user to create a virtual machine instance with no need to interact with IT. This is far different from the typical extended process required to obtain internal IT resources. And, by the way, an end user organization, even if it lacks technical capability itself, can hire a consulting company to provision an Amazon instance. In other words, end users can obtain IT capacity without interacting with IT. In this respect, cloud computing is analogous to SaaS, but with far more flexibility: an end user isn’t limited to what the SaaS provider offers; rather, the end user can create any kind of application it wants.
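To make concrete how little ceremony self-provisioning involves, here is a minimal sketch of launching an EC2 instance programmatically. It uses Python; the boto3 SDK call shown in the comment postdates this article, and the AMI ID and instance type are placeholder values, not real ones.

```python
# Sketch of programmatic self-provisioning on EC2.
# The AMI ID and instance type below are illustrative placeholders.

def launch_params(ami_id, instance_type, count=1):
    """Build the keyword arguments for an EC2 RunInstances request."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

params = launch_params("ami-00000000", "m1.small")

# With valid AWS credentials, the actual launch is a single call
# (boto3 SDK, commented out here because it requires an account):
#
#   import boto3
#   ec2 = boto3.client("ec2")
#   response = ec2.run_instances(**params)

print(params)
```

No purchase order, no procurement cycle, no rack space request: the entire “provisioning process” is one API call billed to a credit card.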

The second is metered usage. Instead of a lump-sum payment or a long-term commitment, Amazon allows payment according to actual usage. Most IT organizations don’t even have basic chargeback tied to use; instead, they rely on a crude accounting based on the number of servers, or a cost allocation according to end user organization size. A by-the-minute or by-the-gig charge mechanism is out of the question. Amazon allows users to pay according to use, shifting costs from up-front to pay-as-you-go, and from capital expense to operational expense. This has significant implications for the budget process, which is where end users and IT organizations meet.

The third is greater end user control. An end user can directly manage (or have someone manage on its behalf) its cloud instance. There’s no need to put in a request to a help desk, or to a monthly project prioritization meeting where it will be placed on a priority queue the requester doesn’t control. Amazon enables direct control via the Internet.
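The capital-to-operational expense shift can be made concrete with a little arithmetic. A sketch follows; the server price and hourly rate are illustrative assumptions, not actual Amazon pricing.

```python
# Compare an up-front server purchase with metered, pay-per-use pricing.
# All figures are illustrative assumptions, not real prices.

SERVER_CAPEX = 3000.00   # up-front purchase price of a physical server (USD)
HOURLY_RATE = 0.10       # metered price per instance-hour (USD)

def metered_cost(hours_used):
    """Pay only for the hours an instance actually runs."""
    return hours_used * HOURLY_RATE

# A workload that runs 8 hours a day, 22 business days a month, for a year:
yearly_hours = 8 * 22 * 12
print(f"Metered: ${metered_cost(yearly_hours):.2f} vs. capex: ${SERVER_CAPEX:.2f}")
```

For an intermittent workload like this, the metered bill is a small fraction of the server purchase price, and it appears in the budget as an operational expense rather than a capital request.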

All of these aspects of cloud computing allow end users to bypass IT, should they choose. While Amazon EC2/S3 does not address many issues that concern IT (data retention and discovery, to name just one), it delivers a refreshing immediacy of availability and a comprehensible cost structure clearly tied to use.

How IT treats cloud computing will dictate whether it ends up being a sustaining or a disruptive innovation.

To the extent that IT examines the aspects of cloud computing I’ve outlined above and figures out a way to modify existing processes to incorporate them, cloud computing can be a big win for it and the larger organization. IT will continue to be the premier provider of data services.

However, to the extent that IT does not make those cloud capabilities available to the end user—in effect, shielding the end user from the self-provisioning, metered usage, and greater control available via cloud computing—it will create an incentive for end users to go directly to an outside cloud provider.

One of the most worrisome things I’ve heard from IT organizations about cloud computing is the intention to treat it as a great internal optimization (“this lets us configure systems much more quickly”) while still expecting end user organizations to maintain the same rules of engagement with respect to IT services. The phrase you often hear is “internal cloud.” All too often that’s a code word for “internal to IT,” not a more transparent use pattern for end users. Treating cloud computing as a purely internal IT initiative, a sustaining innovation, will ultimately cause end users to treat IT as an impediment to be bypassed. Once they’ve tasted the fruits of direct control, it won’t be easy to serve up the same old menu.

Many IT organizations raise issues about why they can’t provide the same services as Amazon. Data treatment, already mentioned, is one. Another common one is security. A third is legal and regulatory requirements. And one often hears that internal IT can’t justify investing in hardware that sits idle except at peak periods. While all of these are valid and important, posing them as reasons why IT can’t provide more cloud-like capabilities will only further cement a reputation as unresponsive, and cause these issues to be viewed as rationalizations.

What is Christensen’s antidote for Innovator’s Dilemma? What can an organization that wants to successfully respond to innovation do? Christensen recommends setting up a separate subsidiary and forcing it to play by the rules appropriate to the new market. In the example of the discount stores, he would have recommended keeping the discount chain separate, despite the supposed higher costs of running two different organizations. Only in this way can the organization not be outflanked by innovation.

Applying this prescription to IT and cloud computing would mean setting up a separate IT cloud computing organization and staffing it with people committed to its requirements and processes. Have the new organization mirror common cloud provider practices. Let both the established IT organization and the new cloud-based IT group offer their services to end users with a complete description of the strengths and weaknesses of each offering; e.g., “Cloud IT allows you to directly provision systems that we provide or that you obtain from somewhere else. The data is always current, with no data snapshots made, unless you wish to take advantage of that service, which has an incremental cost.”

I can hear your response now: that all seems complicated, and a lot of work, and jumbles up all our carefully designed processes. Why should we go to all that bother? That’s what disruptive innovation always looks like: a lot of bother that one would rather avoid dealing with. It’s much easier to go on doing things the way they’ve always been done. And that’s why established players fail when confronted with disruption. But there’s always someone else ready to take on the bother and succeed in its face. So really the question is, do you want to meet the challenge, or be found obsolete in the future?

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.