Commodity Clouds, the ‘Tuning Tax’ and What Cloud Users Really Need
The application-tuning capabilities lacking in today's commodity cloud offerings are more than many users need anyway. Just like broadband Internet, though, it's only a matter of time before these 'overserved' users turn to the commodity cloud to meet 'unserved' needs. Will this leave enterprise cloud deployments in the cold?
In conjunction with Interop two weeks ago, analysts Ben Kepes and Krishnan Subramanian organized Cloud2020, an attempt to move the often jejune cloud computing discussion beyond the hackneyed trope of “Hybrid Cloud Means X,” where X is whatever product the vendor’s representative is pushing that week.
One topic became the central question of the day: Is public cloud computing already a fait accompli for the commodity cloud providers, exemplified by Amazon Web Services but also represented by Microsoft and Google?
Amazon makes no bones about its bare-bones offering:
Standardized services are delivered via APIs in a completely hands-off fashion.
Customers cannot request specific hardware or infrastructure configurations.
Commodity hardware components are selected for cost and made robust through redundancy.
Low prices, along with a history of dropping prices frequently, significantly undercut traditional offerings from hosting and managed service providers.
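To make the hands-off model concrete, here's a minimal sketch of what provisioning against a commodity IaaS API looks like. The function, the parameter names and the menu of instance types are hypothetical stand-ins (modeled loosely on EC2's RunInstances), not a real provider's API; the point is the shape of the interface.

```python
# Hypothetical stand-in for a commodity-cloud provisioning API (loosely
# modeled on EC2's RunInstances). Note what's absent: no knob for CPU model,
# NUMA layout, NIC firmware or RAID geometry -- only a fixed menu of sizes.

INSTANCE_MENU = {"t3.micro", "m5.large", "c5.xlarge"}  # illustrative, not exhaustive

def run_instances(image_id: str, instance_type: str, count: int = 1) -> dict:
    """Provision `count` VMs of a standard size; the hardware underneath is
    the provider's business, made robust through redundancy, not tuning."""
    if instance_type not in INSTANCE_MENU:
        raise ValueError(f"unsupported instance type: {instance_type}")
    return {"ImageId": image_id, "InstanceType": instance_type, "Count": count}

request = run_instances("ami-12345678", "m5.large", count=2)
```

Everything is a parameterized API call, and the parameters a customer can set stop well short of the hardware: that is the whole bargain.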
Notwithstanding Amazon’s enormous success, many other providers and users maintain that an “enterprise” cloud market will emerge over time. The clear implication, for many of them, is that users will come to find commodity cloud offerings insufficient for typical corporate workloads and will eventually shift to an enterprise cloud service provider (CSP).
Underpinning this position is the observation that many of today’s workloads require significant tuning to achieve acceptable performance—even in infrastructure environments made up of enterprise gear from the giants of the industry such as Cisco Systems and EMC. There’s a lot of truth in this observation: Tuning an application configuration to perform adequately can require adjusting an amazing number of factors, including memory allocations, thread counts, network settings and hardware, and storage distribution.
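To see why this tuning is so labor-intensive, consider how quickly the combinations multiply. The knob names and value ranges below are invented purely for illustration, but even a handful of coarse choices produces a search space too large to explore casually by hand.

```python
# Toy illustration of the tuning burden: a few coarse knobs (all names and
# ranges invented for illustration) already yield 144 candidate configurations,
# each of which could in principle need to be benchmarked against the workload.
from itertools import product

knobs = {
    "heap_gb":        [4, 8, 16, 32],           # memory allocation
    "worker_threads": [16, 32, 64, 128],        # thread count
    "tcp_buffer_kb":  [64, 256, 1024],          # network setting
    "storage_layout": ["single", "striped", "mirrored"],  # storage distribution
}

configs = list(product(*knobs.values()))
print(len(configs))  # 4 * 4 * 3 * 3 = 144
```

Real systems have far more knobs than this, and the knobs interact, which is why this work falls to specialists and gets rationed to the applications that justify it.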
A corollary is that the applications requiring this kind of effort are the “crown jewels” of the corporation: transactional systems that, should they not operate adequately, could damage the company's overall financial performance. Indeed, configuring these applications properly is so important that IT organizations skew their operational processes toward ensuring they are well taken care of.
Consequently, it’s not surprising that many cloud providers believe that use of commodity clouds will fall away in favor of offerings that support this kind of tuning. Nor is it surprising that many users feel the same way. They believe the commodity providers’ focus on average applications and low costs precludes those providers from addressing the complexities of the enterprise market.
Moreover, they believe the tuning-required application profile outlined above actually represents the majority of the market as it will eventually emerge. After all, those applications represent a majority of today’s market, right?
Commodity Cloud Is No Fad
Put another way, this perspective sees today’s commodity cloud user base as a brief explosion of simple applications that will eventually give way to a market composed primarily of applications resembling the workloads that run in today’s corporate data centers.
From this perspective, the cloud computing market of the future simply mirrors today’s: dominated by complex, tuning-intensive enterprise applications, with simple commodity workloads a small slice.
It’s a mistake, however, to examine today’s dominant application profile and operational practices and project that they represent the majority of tomorrow’s market. Clayton Christensen, in his groundbreaking work The Innovator’s Dilemma, observed that many complex and expensive technologies are overkill for many potential users. In his terms, these technologies overserve those users.
Consequently, for reasons of cost or hassle, these overserved potential users forgo the technology and aren’t represented in the market as it exists at the moment. Put differently, these overserved users see the rich tuning capability as a tax: an extra cost that must be paid even though it’s undesired and unnecessary. And the tax is high enough to dissuade many of them from implementing applications at all.
When a cheaper, easier-to-use technology emerges, these overserved users embrace it, even though it’s initially inadequate to satisfy the existing market’s user base. Ultimately those users come to dominate the market, the emergent technology improves enough to satisfy the users formerly committed to the incumbent technology, and the market shifts to the new technology.
Seen in this light, the cloud computing market actually looks quite different.
In other words, the commodity cloud offerings will enable a whole set of users who previously couldn’t use the existing technology approach, either because their applications weren’t important enough to command IT’s hands-on tuning attention or, even more commonly, because they couldn’t economically justify the cost of that approach. Far from representing a minority of the ultimate cloud computing market, these users will come to represent a far larger portion of it than the enterprise users.
Commodity Cloud Won’t Be For Everyone
This isn’t the end of the question, though. When what Christensen calls a disruptive technology emerges, it does more than grow the pie by letting overserved users finally get their mitts on a way to satisfy their needs. Disruptive technology also fosters an entirely new set of users, ones we might call unserved. This is a market that couldn’t exist absent the characteristics of the new technology, one that emerges only as its capabilities are explored. Likewise, the new businesses built on those capabilities come to be understood only after the technology reaches the market.
As an example of the unserved market (or markets, really), consider broadband Internet connectivity. While higher bandwidth supports more complex websites, its downstream implications for the business world are only now unfolding: the explosion of user-generated video, the potential of telemedicine and the entire “Internet of Things,” which posits billions of devices applied to specialized tasks, generating data and enabling everything from thermostats that learn to cars that drive themselves.
Seen from this perspective, the cloud computing market probably looks broader still, with unserved uses layered atop the overserved ones.
One could nevertheless argue that this is all well and good: Commodity clouds have a very bright future, but there will still be a need for providers that offer enterprise-quality configuration and tuning to support those applications that cannot, for technical reasons, be satisfied by commodity offerings.
That may be true, but it remains to be seen how big a cloud market that becomes. It’s hard to see how a service provider can make those finicky technology knobs and gears available as services that can be manipulated via an API.
While much of the cloud computing (and networking) vendor community is currently gaga over software-defined networking, most of the discussion revolves around relatively simple configuration domains, such as setting IP addresses and maintaining relationships (e.g., this server needs to talk to that server). I haven’t seen any discussion of configuration and tuning capabilities of the sort required to wring the last drop of performance out of a network configuration.
In other words, can this complex tuning capability required for a certain domain of applications truly be offered as a cloud service, or are we really talking about spiffed-up colocation or managed service hosting presented as a shiny cloud offering? The crucial question is what depth of tuning is required for the applications that reside at the top of the pyramid and what proportion of those applications’ performance requirements can be served via an API.
Overall, I’m not convinced that performance-sensitive applications that require tuning represent the future of the cloud computing market.
Bernard Golden is part of the Cloud Computing Enterprise Solutions group at Dell. Prior to that, he was vice president of Enterprise Solutions for Enstratius Networks, a cloud management software company, which Dell acquired in May 2013. He is the author of three books on virtualization and cloud computing, including Virtualization for Dummies. Follow Bernard Golden on Twitter @bernardgolden.