Cloud Makes Capacity Planning Harder: 3 Fight-Back Tips

Cloud computing will kill capacity planning as CIOs know it and utilization risk will move to cloud providers, says CIO.com's Bernard Golden. Here's his take on what IT and airlines will soon have in common -- plus three strategies for avoiding trouble.

By Bernard Golden
Thu, January 13, 2011

CIO — One of the issues we focus on in conversations with companies evaluating moving to cloud computing is the importance — and challenge — of capacity planning in a cloud environment. The bottom line is that cloud computing is going to make capacity planning much more difficult for CIOs who intend to keep all or most of their company's computing in internal data centers. Moreover, utilization becomes a highly risk-laden topic as utilization risk shifts onto the cloud operator.

Why is this?

As a starting point, it's important to recognize that the scale of computing — the sheer number of applications an organization runs — is about to explode. I wrote about this last week and noted that we in the industry typically underestimate, by a factor of 100 or more, the growth unleashed by new computing platforms. This recent comment by longtime analyst Amy Wohl on a Google (GOOG) group mailing list reinforced my perspective: "On the day the IBM (IBM) PC was announced I had a one-on-one call with IBM about their new product (I couldn't get to the press announcement) and they assured me the total market for PCs was 5,000." Which explains why I found laughable this forecast by Bernstein Research analyst Toni Sacconaghi. With all due respect, we are on the cusp of seeing server demand explode as more and more applications are envisioned, funded, and implemented. The odds of server demand shrinking are vanishingly small.

Which brings us to the issue of capacity planning. The traditional mode of capacity planning — focused on new servers funded by applications that have secured capital investment funding — is finished off by cloud computing. If an application group assumes that resources will be available on demand, and can be paid for by assigning an operating budget funding code, far less forecast insight into total demand is possible. Put another way, fewer signals about total demand are available, and the window of insight is much shorter.

Some organizations feel they have dealt with this by imposing a limit on the number of servers that can be provisioned at any one time. The thinking is that a cap of, say, 10 servers is imposed, and any larger request has to go through an exception-handling process. Which is fine, but the assumption underpinning it is that the number of applications will remain relatively stable — if the resources each app can request are limited, total resource demand is limited, thereby making capacity planning manageable.
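To make the quota approach concrete, here is a minimal sketch of such a provisioning cap. All names here (the 10-server cap, `request_servers`, the app names) are illustrative assumptions, not any particular organization's policy or tooling:

```python
# Hypothetical sketch of a per-request server cap with an exception path.
# The cap value and function names are illustrative, not a real API.

MAX_SERVERS_PER_REQUEST = 10  # the kind of limit described above

def request_servers(app_name: str, count: int) -> str:
    """Approve a provisioning request, or route it to manual exception review."""
    if count <= MAX_SERVERS_PER_REQUEST:
        return f"approved: {count} servers for {app_name}"
    # Requests above the cap go through an exception-handling process
    return f"exception-review: {count} servers for {app_name}"

print(request_servers("billing", 4))
print(request_servers("analytics", 25))
```

Note that the cap bounds each request, not the total: as the number of applications grows, aggregate demand can still climb unchecked, which is exactly the assumption the article questions.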
