Cloud CIO: The Cost Advantage Controversy of Cloud Computing
There is an enormous amount of controversy about whether the capex or opex approach to cloud computing is less expensive. CIO.com's Bernard Golden explores some possible outcomes of a shift from capex to opex and utilization risk.
Tue, July 19, 2011
CIO — One of the topics most associated with cloud computing is its cost advantages, or lack thereof. One way the topic gets discussed is "capex vs. opex," a simple formulation, but one fraught with meaning.
At its simplest, capex vs. opex is how compute resources are paid for by the consumer of those resources. For example, if one uses Amazon Web Services, payment is made on a highly granular level for the use of the resources — either time (so much per server-hour) or consumption (so much per gigabyte of storage per month). The consumer does not, however, own the assets that deliver those resources. Amazon owns the server and the storage machinery.
From an accounting perspective, owning an asset is commonly considered a capital expenditure (thus the sobriquet capex). It requires payment for the entire asset and the cost becomes an entry on the company's balance sheet, depreciated over some period of time.
By contrast, operating expenditure is a cost associated with operating the business over a short period, typically a year. All payments during this year count against the income statement and do not directly affect the balance sheet.
From an organizational perspective, the balance sheet is the bailiwick of the CFO, who typically screens all requests for asset expenditure very carefully, while operating expenditures are the province of business units, who are able to spend within their yearly budgets with greater freedom.
Summing this up: running an application and paying for its compute resources on an "as-used" basis means the costs run through the operating budget (i.e., are operating expenditures — opex), while running the same application on resources purchased as an asset makes the cost of those resources a capital expenditure (capex), with the yearly depreciation becoming an operating expenditure.
It might seem obvious that the opex approach is preferable: after all, you pay only for what you use. By contrast, the capex approach means that a fixed depreciation fee is assigned no matter what use is made of the asset.
However, the comparison is made more complex by the fact that cloud service providers that charge on an as-used basis commonly add a profit margin to their costs. An internal IT group adds no margin, so it charges only what its costs add up to. Depending upon the use scenario of the individual application, paying a yearly depreciation fee may be more attractive than paying on a more granular basis. The logic of this can be seen in auto use: it's commonly more economical to purchase a car for daily use in one's own city, but far cheaper to rent one for a one- or two-day remote business trip.
There is an enormous amount of controversy about whether the capex or opex approach to cloud computing is less expensive. We've seen this in our own business — at one meeting, when the topic of using AWS as a deployment platform was raised, an operations manager stated flatly "you don't want to do that, after two years you've bought a server." Notwithstanding his crude financial evaluation (clearly not accounting for other costs like power and labor), his perspective was opex vs. capex — that the cost of paying for resources on a granular basis would be more expensive than making an asset purchase and depreciating it.
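The operations manager's back-of-the-envelope claim can be made concrete with a simple break-even calculation. The figures below are illustrative assumptions only (not actual AWS or hardware prices), and — as the article notes — the sketch deliberately ignores power, labor, and other costs to isolate the utilization effect:

```python
# Break-even sketch for the "after two years you've bought a server" argument.
# All prices here are assumed for illustration, not real quotes.

HOURLY_RATE = 0.12       # assumed rental cost per server-hour (opex)
SERVER_PRICE = 2100.00   # assumed purchase price of a comparable server (capex)
HOURS_PER_MONTH = 730    # average hours in a month

def months_to_break_even(hourly_rate, server_price, utilization=1.0):
    """Months of rented hours whose cumulative cost equals buying the server.

    utilization is the fraction of each month the rented server actually runs;
    a fully utilized server reaches the purchase price fastest.
    """
    monthly_cost = hourly_rate * HOURS_PER_MONTH * utilization
    return server_price / monthly_cost

# Run around the clock and the rental bill matches the purchase price
# in roughly two years; at 25% utilization, paying as-used stays
# cheaper for far longer.
print(round(months_to_break_even(HOURLY_RATE, SERVER_PRICE), 1))
print(round(months_to_break_even(HOURLY_RATE, SERVER_PRICE, 0.25), 1))
```

The point of the sketch is that the manager's conclusion holds only for a heavily utilized server; at low utilization the break-even point recedes, which is exactly the rent-a-car logic above.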
The move to private clouds added to the complexity of this comparison. Heretofore, most organizations worked on the basis of one application, one server, so the entire depreciation for the server was assigned to a single application, making the calculation of what the capex approach would cost relatively straightforward.
This became further complicated with the shift to virtualization, in which multiple applications share one server. Now yearly depreciation needs to be apportioned among multiple applications — and this becomes even more complex if one attempts to apportion cost by anything other than dividing it by the number of VMs on the machine. Assigning cost by the percentage of total memory or processor time an application consumes requires instrumentation and more sophisticated accounting methods, so most organizations just work on a rough "X dollars, Y number of VMs, each one costs X divided by Y" basis.
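The two apportionment approaches can be put side by side. The dollar figures and per-VM memory numbers below are made up for illustration; the point is the contrast between the flat "X divided by Y" split and a usage-weighted split that needs instrumentation data:

```python
# Two ways to apportion a server's yearly depreciation across its VMs.
# All figures are assumed for illustration.

yearly_depreciation = 6000.00  # "X dollars" of depreciation per year

# Flat approach from the article: X dollars / Y VMs.
vms = ["app-a", "app-b", "app-c", "app-d"]
flat_cost = yearly_depreciation / len(vms)
print({vm: flat_cost for vm in vms})  # every VM charged the same

# Usage-weighted alternative: apportion by share of memory consumed.
# Collecting these per-VM figures is what requires instrumentation.
memory_gb = {"app-a": 16, "app-b": 8, "app-c": 4, "app-d": 4}
total_gb = sum(memory_gb.values())
weighted = {vm: yearly_depreciation * gb / total_gb
            for vm, gb in memory_gb.items()}
print(weighted)  # heavier consumers carry more of the depreciation
```

Under the flat split every application is charged the same regardless of what it uses; under the weighted split the large-memory application absorbs half the depreciation, which is fairer but demands the measurement and accounting effort most organizations skip.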