Cloud CIO: The Cost Advantage Controversy of Cloud Computing

There is an enormous amount of controversy about whether the capex or opex approach to cloud computing is less expensive. CIO.com's Bernard Golden explores some possible outcomes of the shift from capex to opex, and the utilization risk that shift creates.


Today, though, organizations using compute resources don't want to pay a flat fee; after all, their use may be transitory, spinning up resources for a short-term test or a short-lived business initiative. Why should they commit to a five-year depreciation schedule? Resource consumers expect to pay on an operating expenditure basis; after all, that's what's out there in the market. They want to pay only for what they use, no matter who the provider is.

IT organizations are intrepidly preparing for this world, implementing private clouds and moving toward granular pricing of resources, a task made difficult, it must be admitted, by the fact that most IT organizations do not have accounting systems designed to support detailed cost tracking.

So it will be the best of all worlds — resource consumers getting granular, use-based costing, IT organizations providing private cloud capability with support for sophisticated cost assignment, and no provider profit motive imposing additional fees beyond base costs.

Or will it?

Here's the thing: for every opex user there is a capex investor. For every user who delights in paying only for the resources used, there must be a provider who stands ready to supply those resources on an as-needed basis. Someone must own the assets.

For that asset holder, a key variable in setting prices is utilization: what percentage of total capacity is actually being used. To return to that crude pricing formula, one measure of cloud utilization is the percentage of a server's total available processing hours that are sold. The crucial factor is to sell sufficient hours, that is, to generate sufficient utilization, to pay for the asset.
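To make that relationship concrete, here is a minimal sketch in Python. The server price, depreciation period, and utilization levels are purely hypothetical assumptions for illustration, not figures from the article; the point is simply that the lower the utilization, the higher the price each sold hour must carry to pay for the asset.

# Illustrative break-even arithmetic for a provider recovering server capex.
# All figures below are hypothetical assumptions, not data from the article.

server_capex = 10_000.0         # assumed purchase price of one server (USD)
amortization_years = 3          # assumed depreciation schedule
hours_per_year = 24 * 365       # total available processing hours per year

total_hours = amortization_years * hours_per_year
hourly_cost_at_full_use = server_capex / total_hours   # ~$0.38/hour at 100% utilization

def break_even_price(utilization: float) -> float:
    """Price per sold hour needed to recover capex at a given utilization (0 to 1)."""
    return hourly_cost_at_full_use / utilization

for u in (1.0, 0.5, 0.2):
    print(f"utilization {u:>4.0%}: break-even price ${break_even_price(u):.2f}/hour")

# At 20% utilization the provider must charge five times the full-utilization
# price just to recover the asset cost, which is why utilization is the
# crucial factor described above.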

This means that IT organizations need to become much more sophisticated about managing load and shaping use. This is typical of any capital-intensive industry — think of airlines and the sophisticated yield management measures they implement.

I have heard some people assert that utilization won't be much of a problem because most applications are not very volatile; that is, their resource use doesn't vary much. Therefore, high utilization rates can be achieved in private clouds by building a cloud to support typical use plus some spare capacity to support occasional spikes in demand.

I think this misreads likely experience and extrapolates inappropriately from the past. It underestimates how behavior changes as application groups absorb the capabilities of cloud computing. For one, now that highly variable loads can be supported, application groups will begin creating more applications of this type; heretofore, because it was extremely difficult to get sufficient resources for such applications, people didn't even bother thinking about them. Now that highly variable load applications are feasible, people will start developing them.

A second way this perspective underestimates future outcomes is that it fails to account for behavior changes as organizations learn that they can reduce costs by squeezing application capacity use during low-demand periods. James Staten of Forrester characterizes this as "down and off," meaning cloud computing costs are reduced by finding ways to scale applications down or turn resources off. This cost reduction benefits users, but causes problems for providers.
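A quick sketch illustrates the "down and off" effect. The hourly rate, fleet size, and schedule below are hypothetical assumptions, not figures from Staten or Forrester: turning a development and test fleet off outside business hours cuts the consumer's bill sharply, while the provider is left holding the idle capacity.

# Illustrative "down and off" arithmetic: the effect of turning resources off
# outside business hours. Rate, fleet size, and schedule are assumptions.

hourly_rate = 0.40              # assumed price per instance-hour (USD)
instances = 20                  # assumed dev/test fleet size

always_on_hours = 24 * 7        # hours per week if left running continuously
business_hours = 12 * 5         # on 12 hours a day, weekdays only

weekly_always_on = instances * always_on_hours * hourly_rate
weekly_down_off = instances * business_hours * hourly_rate
savings = weekly_always_on - weekly_down_off

print(f"always on:  ${weekly_always_on:,.2f}/week")
print(f"down & off: ${weekly_down_off:,.2f}/week")
print(f"savings:    ${savings:,.2f}/week ({savings / weekly_always_on:.0%})")

# The consumer's bill drops by roughly 64%, but from the provider's side those
# same hours are now unsold capacity dragging down utilization.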
