by Bernard Golden

The Case Against Private Clouds

Jun 04, 2009
Virtualization's Bernard Golden recently explored the advantages of private clouds. This week, he looks at the downsides.

Over the past few weeks, I’ve examined the role of private clouds in cloud computing. In posts one and two, I discussed the functional services that comprise a private cloud, both from the infrastructure as well as the application management perspective. Last week, I discussed the case for private clouds — why they are the most likely way mainstream IT organizations will implement cloud computing.

This week, I want to take the contrary position — why private clouds are not the future of cloud computing, and, in fact, will prove too daunting for IT organizations.

First, one of the main reasons posited for private clouds is that they enable IT organizations to repurpose existing infrastructure. But is that really true? The key to automating the bottom half of the chart — the infrastructure portion — is to use equipment that can be configured remotely by automated means. In other words, the equipment must be capable of exposing an API that an automated configuration system can interact with. This kind of functionality is the hallmark of up-to-date equipment. Unfortunately, most data centers are full of equipment that lacks it; instead, they hold a mishmash of equipment of various vintages, much of which requires manual configuration. In other words, automating much of the existing infrastructure is a non-starter.
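To make the point concrete, here is a minimal sketch of why mixed-vintage equipment blocks end-to-end automation. All of the class and device names below are invented for illustration (they do not correspond to any vendor's actual API): an orchestrator can only drive devices that expose a programmatic configuration interface, so every legacy box in the inventory turns back into a manual ticket.

```python
# Hypothetical sketch: an orchestrator can automate only devices that
# expose a programmatic configuration interface. All names are invented.

class Device:
    def __init__(self, name, vintage):
        self.name = name
        self.vintage = vintage  # year the equipment was purchased


class ModernDevice(Device):
    def configure(self, settings):
        # Stands in for a real remote API call (REST, NETCONF, vendor SDK...)
        return f"{self.name}: applied {settings} automatically"


class LegacyDevice(Device):
    pass  # no configure() API -- someone has to log in and do it by hand


def provision(inventory, settings):
    """Split the data center into what automation can reach and what
    still requires a manual change ticket."""
    automated, manual_tickets = [], []
    for device in inventory:
        if hasattr(device, "configure"):
            automated.append(device.configure(settings))
        else:
            manual_tickets.append(f"{device.name}: open a change ticket")
    return automated, manual_tickets


inventory = [
    ModernDevice("switch-42", 2008),
    LegacyDevice("san-old-1", 2002),
    ModernDevice("lb-7", 2009),
    LegacyDevice("router-3", 2001),
]
automated, manual = provision(inventory, {"vlan": 110})
print(f"{len(automated)} devices automated, {len(manual)} manual tickets")
```

With half the inventory predating remote configuration, half of every provisioning request still ends up in a human queue — which is exactly why "repurpose what you have" falls short of a cloud.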

Worse, even if the data center equipment is mostly of recent vintage, additional investment will be necessary. Add-on equipment to manage networking and storage will be required for cloud capability. Absent sufficient investment to ensure every part of the kit is automation-ready, the end-to-end automation that defines cloud computing will fall short. That is to say, more capital investment is going to be required to reach the Nirvana of a private cloud. And in these economic times, when IT budgets are being slashed, an initiative that calls for more investment is not really practicable.

Second, what most vendors aren’t saying right now is that, even if you’re ready and capable of supporting incremental capital investment, a private cloud is not an easy-to-assemble arrangement like something from Ikea. Far from it. In order to make all the pieces ready to work together and be provisioned in a holistic fashion, a great deal of manual work is required. Each of the vendors offers a range of professional services designed to help IT organizations get their cloud up and running. Left unsaid is the fact that creating a private cloud requires great slathers of expensive consulting. So on top of the additional hardware costs, there will be additional fees for the vendor’s personnel to “help you take advantage of your existing infrastructure.”

All in all, the ambition to leverage existing infrastructure to create a low cost private cloud may be a chimera — a misguided vision of achieving vast improvement with little additional investment. Far more likely is vastly lowered marginal costs achieved by the investment of large amounts of capital and consulting.

Another thing that becomes clear if you look at the private cloud chart is that there are lots of services that either don't exist today or, if they do exist, are accomplished via manual processes. Just to take one example: policy. This is the function that determines whether an individual has the authority to request IT resources — a new server, storage, etc. Today, that is hashed out at some kind of project meeting, or an architectural review board, or something of the sort. The infrastructure representative takes notes that a new system needs to be set up, leaves the meeting, sends one or more pieces of e-mail (or convenes another meeting), and ultimately a new collection of compute resources is available. This is communicated back to the requesting party via an e-mail or, just as likely, in the next sit-down meeting. Finally, the requestor can get to work.

In a private cloud, the rules for who has authority to request resources must be captured in rules that can be checked during an automated request for resources. That is, someone fills out a Web form requesting compute resources, and the request flows in an automated fashion through a number of steps, one of which is a check to see whether the requestor has the authority to obtain resources.
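The policy step described above can be sketched as a small rule table plus a check function. This is a hypothetical illustration — the roles, quotas, and function names are all invented — but it shows the shape of the transformation: authority rules that today live in meeting notes become data that an automated request pipeline can evaluate.

```python
# Hypothetical sketch of the policy-check step in an automated request
# flow. Roles and quota limits are invented for illustration.

POLICY = {
    # role: (max_cpus, max_gb_storage)
    "developer": (4, 100),
    "team_lead": (16, 500),
    "architect": (64, 2000),
}


def authorize(role, cpus, gb_storage):
    """Return (approved, reason) for a resource request submitted via a
    Web form -- replacing the meeting, the e-mail, and the follow-up."""
    if role not in POLICY:
        return False, f"unknown role: {role}"
    max_cpus, max_gb = POLICY[role]
    if cpus > max_cpus or gb_storage > max_gb:
        return False, "request exceeds quota; escalate for manual review"
    return True, "approved; proceeding to provisioning step"


ok, reason = authorize("developer", 2, 50)
print(ok, reason)
```

The hard part, of course, is not writing the check — it is getting the organization to agree on what goes in the rule table, which is precisely the cultural transformation discussed next.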

All of this illustrates a fundamental fact about private clouds: For them to operate effectively, a huge number of informal and manual processes must be changed. Put bluntly, the operating assumptions of IT — the culture — require transformation, which makes the challenges of upgrading the equipment look small. The cost of human capital, embodied in processes and executed by emotion-laden individuals, each with their own motivations, dwarfs that of physical capital. The inertia to be overcome is enormous. There is no question that the benefit of cloud computing is tremendous — but at a cost of organizational upheaval, that benefit comes at too high a price.

Speaking of which, that brings us to our third issue — the disruption. Remember, all of this capital changeout and process disruption occurs in a setting of working systems. Every major IT organization has hundreds, even thousands, of applications running, and the mind boggles at the amount of disruption imposed by the need to reconfigure all of the hardware the systems run on, not to mention the disruption imposed on everyday working processes by the move to automation.

One of the first rules of IT is not to mess with stuff that’s working if at all possible. Only a brave — or foolhardy — CIO willingly invites disruption in order to putatively improve things. The more likely — and more rational — response is to leave what’s in place alone, and create a segmented section of the data center devoted to a cloud environment. That section initially can be dedicated to low-risk uses like test/dev, internal Web-based apps, and so on, which can prove out whether there is real value in cloud computing.

The real challenge will come a year or so later, when the undoubted benefits of cloud computing are manifest. Then the CIO faces the question — impose disruption on the existing infrastructure or not? On the one hand, there’s obvious value in the agility and scalability of the cloud. On the other, all those existing apps in the existing infrastructure have managed to operate for months or years without cloud capabilities, so do they really need it?

It’s quite possible that the long-term cloud strategy vis-à-vis the data center will be to leave it as is and use external resources — whether a cloud-capable outsourcer (inevitably referred to by trendy cloud cognoscenti as a CSP) or an external cloud provider like Amazon, Microsoft, Google or their brethren. Gradually, over time, as the systems running in the data center are replaced by new ones running in some cloud environment, the square footage of the internal center will shrink, until all that is left are critical systems that, for one reason or another, can’t be moved outside.

So there you have it — the con case against private clouds. While the instinctual response by many IT organizations is to assume that a private cloud is the obvious, nay only, way forward, it should be clear that migrating a typical IT environment to a private cloud is not trivial. It may be, as the saying goes, a bridge too far. Certainly, one should not lightly assume that the logic of private clouds is obvious and unassailable.

Next week I’ll sum up this series of postings on private clouds and try to provide some guidelines, or action steps, that IT organizations should pursue toward this initiative.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.

Cloud Computing Seminars

HyperStratus is offering three one-day seminars. The topics are:

1. Cloud fundamentals: key technologies, market landscape, adoption drivers, benefits and risks, creating an action plan

2. Cloud applications: selecting cloud-appropriate applications, application architectures, lifecycle management, hands-on exercises

3. Cloud deployment: private vs. public options, creating a private cloud, key technologies, system management

The seminars can be delivered individually or in combination. For more information, see
