Last week, I wrote about the first of cloud computing's revolutions: the revolution in IT operations. This week I want to turn to the revolution that will occur because of the changed nature of cost in a cloud computing environment. The Berkeley RAD Lab report on cloud computing identifies "pay-as-you-go pricing" as a key characteristic of cloud computing. Pay-as-you-go pricing refers to the fact that computing resources in a cloud environment are typically charged for on a fine-grained usage basis.
In Amazon, for example, one pays by the hour for processing capacity, and by the GB for network transmission and storage of data. Instead of receiving a bill for, say, a month's worth of managed hosting, the bill might be for 63 hours of computing. Because many applications are used only for a portion of any given month, the managed hosting model charges for hours in which no actual computing is being done; by contrast, Amazon charges only for the hours the application was actually up and running. Of course, this is only possible if one monitors the application and brings it down when it is not needed. If no monitoring is performed and the application is left running even though no load is being processed, charges continue to accrue.
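The billing arithmetic can be sketched in a few lines of Python. The rates below are purely illustrative assumptions for the sake of the comparison, not actual Amazon or managed-hosting prices:

```python
# Illustrative comparison of pay-as-you-go vs. flat-rate managed hosting.
# All rates are hypothetical, chosen only to show the shape of the bills.
HOURLY_COMPUTE = 0.10          # $ per instance-hour (assumed)
PER_GB_TRANSFER = 0.15         # $ per GB transferred (assumed)
FLAT_MONTHLY_HOSTING = 200.00  # $ per month, regardless of usage (assumed)

def pay_as_you_go_cost(hours_used, gb_transferred):
    """Fine-grained bill: pay only for hours actually run and GB actually moved."""
    return hours_used * HOURLY_COMPUTE + gb_transferred * PER_GB_TRANSFER

# An application that was needed for only 63 hours this month:
usage_bill = pay_as_you_go_cost(63, 40)
print(f"Pay-as-you-go bill: ${usage_bill:.2f}")
print(f"Managed hosting bill: ${FLAT_MONTHLY_HOSTING:.2f}")
```

Under these assumed rates the usage-based bill comes to $12.30, while the flat-rate bill is the same whether the application runs for 63 hours or the full month — which is exactly why unmonitored, always-on workloads erode the pay-as-you-go advantage.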
This type of pricing is often labeled "opex instead of capex." (For more on this distinction, see my previous post on understanding opex vs. capex.) That phrase summarizes the type of budgeting that takes place for pay-as-you-go costs, and illustrates the changed nature of computing procurement in a cloud environment — and that goes to the heart of the revolution. The ultimate goal of running a computer is to execute some code, which has — putatively, anyway — business benefits. Historically, one could only achieve the aim of executing code on a server that one owned and operated.
In other words, a capital investment in computing gear was a prerequisite to running an application. Capital investments (or capital expenditures, aka capex) are large outlays that pay benefits over time and are therefore depreciated for tax purposes. In most organizations, capital expenditure is very carefully monitored, since companies need to keep financial ratios under control. That monitoring is done by the finance organization, which rations capital access.
Opex, on the other hand, stands for operating expenditure, and is more typically managed by the organization doing the spending; that is, the organization is given a certain operating budget, told to meet certain financial targets, and pretty much left on its own. As long as it stays within its operating budget and “makes its numbers,” it can chart its own course.
Because previous generations of computing required relatively large capital investments well before the benefits of the application began to flow, getting IT projects approved was not easy. Everything had to pass by the gimlet-eyed denizens of finance. This had the inevitable result that only the safest, lowest-risk, most necessary applications got funded. Also inevitable was that IT decision-making shifted from the business units that would benefit from the investment to the finance group.
Because no capital investment is required for most cloud computing use, the decision-making regarding which applications should be implemented will reside with the group providing the operating expenditure, i.e., the business unit. Finance will have much less influence over which applications receive funding. A side question is how this reduced need for finance approval will affect the organizational politics regarding where IT reports; in many companies IT reports to the CFO because of the capital-heavy nature of its business processes. Perhaps the move to cloud computing will result in IT moving out from under the CFO and reporting to business units or the CEO. It will be fascinating to watch this development.
This capex-vs.-opex distinction is the basis for most discussion about the financial impact of cloud computing; in my view, though, it falls far short of capturing the revolution that will occur as a result of fine-grained operational cost assignment. We haven't even begun to think about the downstream effects for line-of-business organizations when costs are more directly assigned to resource use. Here are a few of the changes we can expect to see due to the new cost mechanisms of cloud computing:
Control Shift From IT to Business Units
The need for large capital investment, coordinated by IT (and overseen by finance), has meant that business units have had less control over the computing resources supporting their business efforts. With far less capital investment being needed, IT will have less say over how business units choose to direct their IT spend — and, given the reduced barrier to "shadow IT spend" (also discussed in last week's post), perhaps far less knowledge of what computing business units are doing. This will give business units much more discretion regarding where they choose to invest.
Also, one can expect this environment to foster far less patience for the "this year, we have to invest in a big upgrade of the XYZ application, so your business unit application goals will have to wait until next year" refrain. As the old saying goes, "he who pays the piper calls the tune," and cloud computing will make it clearer exactly who the piper is — and the piper is going to ask for different tunes going forward.
We’ll see much more of the “I can afford it, and this is what I want” attitude in the future. We’re already seeing the impact of this—witness the firing of SAP’s CEO this week. While the financial crisis undoubtedly exacerbated the situation, the fact that business units were resistant to plunging huge amounts of money into an ERP system that does essentially nothing to assist innovation played a part as well. Being on the wrong side of the “assists innovation” equation is dangerous in a cloud computing world.
Direct Tracking of Resource Use to Business Value
In today's environment, it's not easy to make a direct financial connection between a business initiative that uses computing resources and the value resulting from that use. The lumpy cost assignment typical of most computing environments makes it very difficult to match cost and benefit. Much more common is the situation in which a certain amount of value is realized (e.g., more visitors as a result of an online campaign, with each visitor assessed as being worth a certain amount), but very little knowledge is available regarding the investment made to implement the initiative.
Using resources that are treated as “sunk cost,” failing to account for ancillary costs like network traffic, and, of course, failing to even countenance the overhead that should be assigned to the initiative make a realistic accounting nigh-on impossible. This situation will change when all of the computing resources are paid for “by-the-drink,” and can be more directly assigned to the value they generate. At a time when business operations are increasingly infused with IT characteristics, this tracking is coming along just in time.
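A minimal sketch of that cost-to-value matching, using wholly hypothetical campaign figures: when every resource is billed by the drink, the cost side of the ledger becomes as concrete as the value side.

```python
# Sketch of matching fine-grained cloud costs to the value of an initiative.
# All figures are hypothetical, for illustration only.
def campaign_roi(visitors, value_per_visitor, compute_cost, transfer_cost):
    """Return (value, cost, roi) when every resource is directly billed.

    With pay-as-you-go pricing there are no 'sunk costs' to hand-wave away:
    the compute and network charges for the campaign are itemized on the bill.
    """
    value = visitors * value_per_visitor
    cost = compute_cost + transfer_cost  # fully attributable to this campaign
    return value, cost, (value - cost) / cost

value, cost, roi = campaign_roi(
    visitors=10_000, value_per_visitor=0.50,   # assumed worth per visitor
    compute_cost=120.00, transfer_cost=30.00)  # itemized cloud charges (assumed)
print(f"value=${value:.2f} cost=${cost:.2f} ROI={roi:.0%}")
```

The calculation itself is trivial; the point is that the inputs on the cost side, which today are buried in shared infrastructure and unallocated overhead, arrive pre-itemized on a cloud bill.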
Low Cost Fosters Experimentation
An aspect of cloud computing that isn't emphasized enough in most discussions is that it is ideally suited for application experimentation. Just as the high-cost, capital-intensive IT of the past caused investment to focus on the safest, lowest-risk applications, the low-cost, capital-lite IT of cloud computing will motivate business organizations to experiment with new business initiatives. Initiatives that, in the past, could never have justified a share of precious capital to take a flyer on them will find a far friendlier environment in cloud computing.
A good example of this is the NASDAQ Market Replay application that leverages Amazon Web Services. Buying enough equipment for this application would have been prohibitively expensive, even though the application's value seemed intuitive. Using AWS, the application could be developed for much less, which made launching it much lower risk. New applications can be tried out at a cost of hundreds or thousands of dollars, rather than the hundreds of thousands of dollars required heretofore. If you are a line-of-business executive with innovative ideas, cloud computing is going to make your prospects much brighter.
In the “low cost fosters experimentation” perspective, cloud computing is much like open source. In his book Here Comes Everybody, Clay Shirky noted that open source’s low cost encourages experimentation and making mistakes. When the stakes are low, trials that don’t work out are much more acceptable—and increasing the numbers of trials increases the odds for success.
Scaled Cost Encourages Large-Scale Applications
The flip side of cloud computing's low cost encouraging experimentation is that the linear cost increases that accompany growing application scale make the prospect of wild success palatable. Today, it is all too common for a very successful application to be stifled because not enough capital can be found to support the necessary increase in resources, or, even if the capital can be located, because the physical infrastructure can't be brought on stream quickly enough. Success overwhelms systems, and very successful applications end up starved of resources.
When business owners can confidently assume that, should their initiative gain traction, the necessary resources will be available at a reasonable cost, they can be much bolder in their business initiatives. A great example of this is what happened for a client of ours, the Silicon Valley Education Foundation. Its Lessonpoly application serves up lesson plans for teachers, allowing sharing, mashing up, and so on. Recently SVEF was offered the opportunity to host a large number of lesson plans associated with the Winter Olympics, with NBC featuring the partnership in its broadcasts. SVEF could confidently agree to support the program, secure in the knowledge that its AWS-based application could easily scale: the admin merely shut down the 32-bit small instance and launched a 64-bit large instance, immediately quadrupling the memory available to the application. If load increased beyond the 64-bit machine's ability to handle it, partitioning the application across several machines could be accomplished easily. In its old hosted environment, SVEF would have had to think twice about supporting the Olympics program; with cloud computing, the decision could be based on the attractiveness of the opportunity.
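The linear-scaling argument can be illustrated numerically. The per-hour rate and per-instance capacity below are assumptions invented for the sketch, not actual cloud prices or capacities:

```python
# Sketch: pay-as-you-go cost grows linearly with load, so a traffic surge
# means a proportionally larger monthly bill rather than a large up-front
# capital purchase. All rates and capacities are hypothetical.
import math

HOURS_PER_MONTH = 720
CLOUD_RATE = 0.10        # $ per instance-hour (assumed)
LOAD_PER_INSTANCE = 50   # requests/sec one instance can carry (assumed)
SERVER_CAPEX = 3000.00   # up-front cost of one owned server (assumed)

def cloud_monthly_cost(load):
    """Add instances only as load requires them; cost tracks success linearly."""
    instances = math.ceil(load / LOAD_PER_INSTANCE)
    return instances * HOURS_PER_MONTH * CLOUD_RATE

def capex_outlay(load):
    """Owned capacity must be bought up front, before the traffic arrives."""
    return math.ceil(load / LOAD_PER_INSTANCE) * SERVER_CAPEX

for load in (40, 200, 1000):  # traffic growing 25x as the initiative succeeds
    print(f"{load:>5} req/s: cloud ${cloud_monthly_cost(load):8.2f}/month, "
          f"capex ${capex_outlay(load):9.2f} up front")
```

Under these assumptions a 25x traffic increase raises the monthly cloud bill roughly 20x (from $72 to $1,440), paid as the traffic materializes — whereas the capex path demands $60,000 before the first surge request is served, which is precisely the approval hurdle that starves successful applications today.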
In nearly any environment one cares to think about, transparency of costs and direct assignment of costs to benefits causes behavior change. As cloud computing’s pay-as-you-go pricing model begins to permeate business unit thinking, we’ll see more change in the way they plan their IT activities than we’ve seen in the past thirty years.
Out will be the big-bang, multi-year, multi-million-dollar forced marches; in will be agile, low-cost experiments designed to identify nascent business opportunities and exploit them. Business units will more confidently push their agendas in IT, secure in the knowledge that more control is theirs in a cloud computing world. A new environment of assessing costs and business benefits relative to IT initiatives will arise. Cloud computing will upend the traditional relationship between business units and IT, and the business world will never look the same again.
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.