You don't have to spend much time around cloud computing before you run into arguments about cloud economics, and you will undoubtedly encounter the phrase "Capex vs. Opex." This refers to the fact that stocking your own data center requires capital expenditure, while using an external pay-as-you-go cloud service falls under ongoing operating expenditure: thus the contrast of "Capex vs. Opex." The next go-round of the argument then devolves into a tussle over which alternative is cheaper.
There have been many discussions comparing the cost of 24x7 use of an Amazon EC2 instance against the cost of hosting a server within a company's data center. Usually people take the average selling price of a 1U server, divide it by 36 (the number of months in the typical expected service life of a piece of equipment), and show that the result is less per month than renting from Amazon. Therefore, they conclude, cloud computing is bound to be more expensive than self-owned hardware, which makes it inappropriate for typical corporate apps that require round-the-clock availability. A further nuance often thrown in is the faintly ad hominem attack that, since cloud providers seek to make a profit, they are ipso facto more expensive than internal data centers.
In my opinion, much of this discussion is wrong. It misunderstands the real key issues for most companies, and it misdirects the conversation away from where it should be directed: what proportion of the total portfolio of corporate applications is appropriate for external cloud hosting, and what decision criteria should be used to make that assessment. And for sure, economics is not the sole criterion -- I just completed a series on "The Case Against Cloud Computing," and TCO was only one of the five issues discussed.
Let me turn to the most easily grasped part of the discussion: the actual cost of internal data centers versus the external cloud. Comparing the monthly cost of an EC2 server against a putatively similar piece of hardware in a data center is simple-minded, because it overlooks:
1. The direct costs that accompany running a server: power, floor space, storage, and IT operations to manage those resources.
2. The indirect costs of running a server: network and storage infrastructure and IT operations to manage the general infrastructure.
3. The overhead costs of owning a server: procurement and accounting personnel, not to mention a critical resource in short supply: IT management and its attention.
When added to the cost of an internal server, these factors significantly raise the overall monthly cost of hosting it. In the recent UC Berkeley Cloud Computing Paper (which I discussed last week), the RAD Lab estimates that cloud providers' costs are 75 to 80 percent lower vis-a-vis internal data centers. Some of this advantage is due to purchasing power through volume, some to more efficient management practices, and, dare one say it, some to the fact that these businesses are managed as profitable enterprises with strong attention to cost.
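To make the arithmetic concrete, here is a minimal sketch of a fully loaded monthly cost calculation covering the three categories above. Every figure is a hypothetical assumption chosen for illustration, not actual pricing from Amazon or any hardware vendor.

```python
# Illustrative only: every dollar figure below is a hypothetical
# assumption, not real vendor or cloud pricing.

def fully_loaded_monthly_cost(
    server_price=3000.0,      # assumed 1U server purchase price ($)
    service_life_months=36,   # the 36-month service life used in the naive comparison
    direct_monthly=60.0,      # power, floor space, storage ($/month)
    indirect_monthly=40.0,    # share of network/storage infrastructure and its operation
    overhead_monthly=50.0,    # procurement, accounting, IT management attention
):
    """Monthly cost of an owned server once operating costs are included."""
    hardware_only = server_price / service_life_months
    return hardware_only + direct_monthly + indirect_monthly + overhead_monthly

hardware_only = 3000.0 / 36          # the figure the naive comparison uses
loaded = fully_loaded_monthly_cost()
print(f"hardware only: ${hardware_only:.2f}/month")
print(f"fully loaded:  ${loaded:.2f}/month")
```

Under these made-up assumptions, the fully loaded figure is nearly triple the hardware-only amortization -- the point being the shape of the calculation, not the specific numbers.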
Therefore, the typical cost discussion of internal data center versus cloud provider costs is usually over-simplified and fails to assign a true cost structure to the internal data center side of the comparison. This isn't really surprising, given that most IT organizations don't have a clear understanding of their true costs to begin with, as my discussion of Activity-Based Costing pointed out (see the section headed "Do the Math Correctly"). Another perspective on why this comparison falls down comes from a blog posting by Steven Oberlin, Chief Scientist at Cassatt, commenting on a recent post of mine discussing cloud TCO. He noted that these kinds of cost comparisons ignore the utilization of the internal server: if it's running at 20% utilization, the effective cost of a given level of computing is actually five times higher than typically assumed in these cost comparisons.
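Oberlin's utilization point is simple arithmetic: divide the server's monthly cost by the hours of useful work it actually performs. A sketch, with the monthly cost figure purely assumed:

```python
def effective_cost_per_useful_hour(monthly_cost, utilization):
    """Cost per hour of compute actually used, given average utilization."""
    hours_per_month = 730  # roughly 24 hours x 365 days / 12 months
    return monthly_cost / (hours_per_month * utilization)

monthly = 250.0  # hypothetical fully loaded monthly server cost ($)
full = effective_cost_per_useful_hour(monthly, 1.00)
partial = effective_cost_per_useful_hour(monthly, 0.20)
print(f"at 100% utilization: ${full:.2f} per useful hour")
print(f"at  20% utilization: ${partial:.2f} per useful hour")
```

At 20% utilization, the cost per useful hour comes out exactly five times the 100%-utilization figure, whatever monthly cost you assume.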
However, even this does not fully explore the reasons IT organizations and, more crucially, senior corporate management are interested in cloud computing. This is where we move into capex vs. opex territory -- and even most cloud advocates, who pontificate about the opex advantages of cloud computing, fail to limn the full range of reasons cloud computing is attractive in this respect. On the other hand, I have heard people describe the distinction between the two types of expenditure as unimportant for cloud computing -- after all, they point out, it's all cash flow, and whether the expenditure is for a capital good or an EC2 payment, it's still the same amount of money. (This naturally sidesteps the point just discussed, which is that they probably aren't the same amount of money, but you get the idea -- the assertion is that, given a set amount of payment, the type of "bucket" it comes from is irrelevant.) This is enormously wrong, and fails to comprehend why cloud computing will be so popular among senior corporate management.
For starters, even if the cash outflow were roughly the same, the cloud alternative would be more attractive. This is because a payment on a capital good like a server is one of a series -- each of which the enterprise is committed to, whether or not the server is being used. Once you purchase a capital good, you're stuck with it, as anyone who has purchased a car understands. Even if you're no longer excited about owning it, the finance company still expects its monthly payment.
By contrast, if you rent a car, you are committed to it only as long as you want to use it -- and once you've paid for that use, you have no further financial obligation. And guess what, pretty much everyone understands that you pay a premium for that flexibility, i.e., a rental car costs more per day than the same car would, if purchased. In MBA-speak, there is an option value in that flexibility, for which a premium is paid.
Consequently, even if the cloud alternative were more expensive over a given duration, the premium is understandable, since there is no implied commitment beyond that duration. Furthermore, there is an imputed value to the scalability offered by the cloud alternative -- the fact that I can easily grow my consumption in a short period is itself valuable, and, naturally, carries an option value. Therefore, one can conclude that, given the option values associated with cloud computing, companies might be willing to pay more than the cost of an equivalent amount of internal server capability.
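The value of not committing shows up as soon as demand varies. Here is a toy comparison with a hypothetical demand profile and hypothetical prices (including a 40% cloud premium per server): owning means sizing for peak demand and paying for it every month, while renting means paying only for what each month actually uses.

```python
# Hypothetical: servers needed each month over one year (two seasonal peaks).
demand = [2, 2, 3, 8, 8, 3, 2, 2, 2, 10, 4, 2]

own_monthly = 250.0   # assumed fully loaded cost per owned server per month ($)
rent_monthly = 350.0  # assumed cloud cost per server per month ($), a 40% premium

# Owning: capacity must cover the peak month, and you pay for it all year.
own_total = max(demand) * own_monthly * len(demand)

# Renting: pay only for the servers each month actually uses.
rent_total = sum(demand) * rent_monthly

print(f"own (sized for peak): ${own_total:,.0f}")
print(f"rent (pay as you go): ${rent_total:,.0f}")
```

With this made-up demand curve, renting comes out well below owning despite the per-server premium -- the premium buys the option to stop paying when demand falls, which is exactly the option value described above.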
The really crucial aspect about the opex character of cloud computing should be based on a better understanding of the role of capital expenditure within companies. Companies are limited by the public markets in the amount of capital expenditure they are able to make (in the case of privately held companies, the limitation is not imposed by public markets, but by lenders who benchmark against public company ratios). Because capital investment is limited, companies usually want to direct their investment toward revenue-generating activities. This is why many companies prefer to lease real estate rather than purchase -- they don't want to tie up precious capital in dead assets.
Rightly or wrongly, many companies view IT as the latter type of investment, and manage it with an eye to minimizing its cost. This is why IT reports to the CFO in many companies. As one colleague who consults on financial strategy with many large companies said, "You know what we think of IT? We think it always shows up, spouts unintelligible jargon, and asks for huge lumps of cash." With a perspective like that, it's easy to understand why any initiative that promises to reduce lumpy capital investment and transform it into smoother operational expenditure would be extremely attractive to bean counters.
Given all these factors, trying to fight cloud computing by making comparisons between the cost of running an internal server and the cost of a cloud-based one is off-target. Unless the cloud numbers are significantly higher, there are many attractive aspects of cloud economics that would cause senior general management to view it as very desirable. A much better strategy would be to identify decision criteria for determining whether a given application should be hosted internally or could be moved to a cloud environment. With defined criteria, a portfolio analysis can be undertaken to make a set of recommendations and create an action plan. But providing "proof" that internal servers are cheaper is a losing strategy, and reminds me of arguments I've heard made by HR, legal, and other administrative groups -- just before their responsibilities were outsourced to providers offering fixed costs, more transparency, and more flexibility.
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.