by Bernard Golden

The Case For and Against Private Clouds: Conclusion

Opinion
Jun 11, 2009 | 10 mins
Cloud Computing, Virtualization

CIO.com's Bernard Golden wraps up his look at private clouds with practical advice on a smart start for enterprise IT groups.

For the past few weeks I’ve been discussing private clouds—clouds devoted to a single entity. The very term private cloud is a bit loaded, in that some people feel that what one is really talking about is an internal cloud that is located in an organization’s own data center. Others point out that a dedicated cloud can also be hosted by a hosting provider or an outsourcer; indeed, many hosting providers and outsourcers are scrambling to implement cloud environments, seeing public clouds as a threat that must be answered lest business slip away.

[ Read the whole CIO.com series by Bernard Golden on private clouds. See Defining Private Clouds, Part One; Defining Private Clouds, Part Two; The Case For Private Clouds; and The Case Against Private Clouds. ]

My view is that private cloud is probably the better term; however, one must be careful to distinguish the implementation location, since some aspects of a private cloud hosted externally differ from an internal counterpart. For example, a formal contract containing an SLA will likely be in place with an external provider of a private cloud; negotiating and enforcing that SLA will probably differ from addressing an internal SLA.

In this post I’d like to summarize the series, draw some lessons, and offer some thoughts on what steps to take as you plan a private cloud implementation.

In terms of summing up, one factor to keep in mind is the “why” of private clouds: why does it make sense to consider implementing one?

The most important factor is that implementing a private cloud allows an IT organization to bypass many of the issues raised against public cloud services like Amazon EC2. First, one does not need to rely on the public cloud provider’s security measures. Second, a private cloud, as mentioned just above, can provide for an SLA, whereas a public cloud may have an inadequate or non-existent SLA. Third, and quite critically, certain privacy issues that arise with public cloud use can be avoided; an example of this type of issue is the ability of the U.S. government to access an organization’s data in a public cloud without the data owner knowing anything about the access. If the cloud is privately hosted, that unknown access is not an issue.

Also quite important is that implementing a private cloud offers an opportunity for IT to address some of the age-old criticisms it receives: IT is slow, unresponsive, paperwork-ridden. A private cloud enables business IT groups to provision compute resources in a matter of minutes, without any need for someone from the infrastructure groups to be involved at all.
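To make the contrast concrete, here is a minimal sketch of what self-service provisioning might look like against a private cloud’s API. Everything here is hypothetical (the endpoint, the token handling, the parameter names); each private cloud stack exposes its own interface, but the shape of the interaction is the point: one authenticated request, resources in minutes.

    import requests

    # Hypothetical private-cloud endpoint and token; a real stack
    # exposes its own API and authentication scheme.
    CLOUD_API = "https://cloud.internal.example.com/api/v1"
    API_TOKEN = "replace-with-a-real-token"

    def provision_vm(name, cpus=2, memory_gb=4):
        """Request a virtual machine and return its identifier."""
        response = requests.post(
            f"{CLOUD_API}/instances",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"name": name, "cpus": cpus, "memory_gb": memory_gb},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["instance_id"]

    # No ticket, no hand-off to the infrastructure group.
    print(provision_vm("analytics-test-01"))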

A third factor, though somewhat less important, is that a private cloud enables existing equipment to be repurposed for a cloud environment. It’s great that existing equipment can be reused, but unless there’s a real payoff for moving to cloud computing independent of repurposing, this is irrelevant. Said another way, equipment repurposing should be a beneficial byproduct of the decision to move to cloud computing, not a major factor. I’ll say more about this in a moment.

If these factors were the only ones associated with implementing a private cloud, the decision would be obvious. But challenges accompany the decision as well, and they deserve equal attention.

One challenge, or at least question, is how well existing infrastructure can be repurposed to serve as a private cloud. In my piece last week, I said that the visions of companies providing private cloud offerings depended upon late-model hardware kit, which most organizations don’t have—or at least don’t have throughout their data centers. To quote the piece directly: “Unfortunately, most data centers are full of equipment that does not have this functionality; instead they have a mishmosh of equipment of various vintages, much of which requires manual configuration. In other words, automating much of the existing infrastructure is a non-starter.”

I came in for some (mild) criticism by another writer who noted that identity management and CMDB systems exist that can support automation. The writer went on to say “Any network or systems’ administrator worth their salt can whip up a script (PowerShell, bash, korn, whatever) that can automatically SSH into a remote network device or system and launch another script to perform X or Y and Z. This is not rocket science, this isn’t even very hard. We’ve been doing this for as long as we’ve had networked systems that needed management.”

Fair enough. Identity management and CMDB systems do exist and certainly assist in implementing automated provisioning, but they are by no means universally deployed in a fashion that supports it. As for the ability to install scripts on network endpoints: while true, this is as much a problem as a solution. Home-grown scripts reflect the approach and skills of the individual who wrote them, and IT organizations often find themselves in a bind when the script’s creator leaves and someone else has to excavate the systems to understand how the scripts work and exactly what they do. The purpose of the new, automation-ready systems (e.g., Cisco UCS) is to implement a standardized (or at least consistent) approach to endpoint automation, and to serve as the basis for straight-through automation.
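For illustration, here is a minimal sketch (in Python, though the same logic is routinely written in bash or PowerShell) of the kind of home-grown script the writer describes. The host list and remote command are hypothetical; the point is that which hosts, which commands, and what counts as failure all live in the author’s head rather than in any standardized system.

    import subprocess

    # Hypothetical inventory; in home-grown scripts a list like this
    # is typically hard-coded and documented nowhere else.
    HOSTS = ["web01.internal", "web02.internal", "db01.internal"]
    REMOTE_CMD = "sudo /opt/admin/configure_vlan.sh 42"  # hypothetical

    def run_remote(host, command):
        """SSH into a host and run one command, assuming key-based auth."""
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host, command],
            capture_output=True, text=True, timeout=60,
        )
        status = "ok" if result.returncode == 0 else f"FAILED: {result.stderr.strip()}"
        print(f"{host}: {status}")

    for host in HOSTS:
        run_remote(host, REMOTE_CMD)

Easy to whip up, exactly as the writer says; also easy to orphan when its author leaves.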

The writer went on to say that the real challenge is orchestration, the ability to aggregate a number of individual automated configuration activities into one transaction. To quote the article: “automating a series of tasks, i.e. a process, is much more difficult because it not only requires an understanding of the process but is also essentially “integration”. And integration of systems, whether on the software side of the data center or the network and application network side of the data center, is painful.”

On this, I completely agree. Orchestration is critical, and it is not trivial. Implementing process change is much more difficult than configuring any piece of equipment. I summed it up as “human capital is much more expensive than physical capital,” which is a flip way of saying that organizations, made up of individuals with varying skills, interests, and motivations, are extremely difficult to redirect. And make no mistake about it, moving IT organizations to a streamlined, automated, orchestrated method of doing business qualifies as a redirect. I don’t mean to suggest this is all due to obstinacy from IT staff; many of the processes in place are the result of hard-fought battles to address other issues. For example, many IT groups, as a result of ITIL, ISO, and the like, have fixed change-control processes that make sense from one perspective but are at cross-purposes with any kind of cloud implementation.
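To make the orchestration point concrete, here is a minimal sketch of what aggregating individual automation steps into a single transaction involves. The step names are hypothetical placeholders; the essential feature is that a failure partway through rolls back everything already done, and wiring real systems into those apply and undo hooks is precisely the integration work described as painful.

    # Minimal orchestration sketch: each step can apply itself and undo
    # itself, and the orchestrator treats the sequence as one transaction.

    class Step:
        def __init__(self, name, apply_fn, undo_fn):
            self.name, self.apply_fn, self.undo_fn = name, apply_fn, undo_fn

    def orchestrate(steps):
        """Run steps in order; on failure, undo completed steps in reverse."""
        completed = []
        for step in steps:
            try:
                print(f"applying: {step.name}")
                step.apply_fn()
                completed.append(step)
            except Exception as exc:
                print(f"failed at {step.name}: {exc}; rolling back")
                for done in reversed(completed):
                    done.undo_fn()
                raise

    # Hypothetical provisioning sequence; real hooks would call the
    # underlying automation for each system.
    orchestrate([
        Step("allocate VM", lambda: None, lambda: None),
        Step("configure VLAN", lambda: None, lambda: None),
        Step("register in CMDB", lambda: None, lambda: None),
    ])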

Underpinning the orchestration, of course, is fully automated infrastructure that can be driven by dynamic interaction rather than slow-paced manual processes. A question remains, perhaps, about how much truly automation-ready infrastructure is actually deployed in the world; after all, Cisco would not be releasing its UCS if the need for automated infrastructure components had already been met.

Aligned with the need for orchestration is the need for governance. Governance is the human authorization aspect of cloud computing: it ensures that the right projects and people are interacting with the orchestration system to provision compute resources. Absent governance, compute resources will inevitably be exhausted by demand that is not aligned with need. Governance ensures that even authorized resource requesters (who are, naturally, listed as approved within the identity management system) consume resources in line with organizational policy. To be truly effective, the identity management system must carry data in the user record beyond the usual name, location, and role fields: data that maps to organizational policy regarding resource use.
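As a sketch of what such a policy-aware check might look like, assuming the identity record has been extended with resource-policy fields (the field names and limits below are hypothetical):

    # Hypothetical identity records extended beyond name, location,
    # and role with the data needed to govern resource requests.
    IDENTITY_STORE = {
        "jsmith": {"role": "developer", "project": "test-dev",
                   "max_vms": 5, "vms_in_use": 3},
    }

    def authorize_request(user, vms_requested):
        """Approve a provisioning request only when policy allows it."""
        record = IDENTITY_STORE.get(user)
        if record is None:
            return False, "not in the identity management system"
        if record["vms_in_use"] + vms_requested > record["max_vms"]:
            return False, f"would exceed quota of {record['max_vms']} VMs"
        return True, "approved"

    print(authorize_request("jsmith", 2))  # (True, 'approved')
    print(authorize_request("jsmith", 4))  # quota exceeded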

If you want to move forward with a private cloud effort, what are the right steps? Here are some suggestions.

1. Start tactically.

I know this goes against common sense and, for that matter, against the advice you’ll get from vendors and from the IT staff itself. From the vendor perspective, they’re ready to sell you a ton of kit to build out your new, improved data center. They’ll pitch that you only get real value once everything is agile. Their interest in convincing you of this is pretty obvious.

Less obvious is why the IT staff would propose a strategic start. It’s because one of the most effective ways to kill an initiative is to set up a study along the lines of “total private cloud value when applied throughout the data center.” Politicians use studies all the time to kill politically unpalatable initiatives. Don’t get caught up in this.

2. Create a small, self-contained cloud environment of fewer than 50 machines.

This is large enough to deliver a proof point and determine whether there’s value in a private cloud initiative—without having to bet the farm on the answer.

3. Start with an app that begs for cloud implementation.

One of the best use cases for cloud computing is agile scaling, both up and down, so for your first cloud effort find an app that matches that profile. A particularly good candidate is test/dev, as sketched below. It’s always a pain to get resources assigned for these purposes, and the amount of work often seems out of proportion to the importance of the effort. Test/dev is by its nature transitory, yet many IT processes are oriented toward permanent installation.
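As a sketch of why test/dev fits so well: the environment’s entire lifecycle can be scripted, so resources exist only while the tests run. The provisioning calls below are hypothetical stand-ins for whatever API the private cloud exposes.

    import contextlib

    def provision(name):        # hypothetical stand-in for a cloud API call
        print(f"provisioning {name}")
        return name

    def deprovision(instance):  # hypothetical stand-in
        print(f"deprovisioning {instance}")

    @contextlib.contextmanager
    def test_environment(name):
        """Resources exist only for the duration of the test run."""
        instance = provision(name)
        try:
            yield instance
        finally:
            deprovision(instance)

    with test_environment("nightly-build") as env:
        print(f"running test suite on {env}")
    # The environment is torn down here, even if the tests raised.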

4. Start with a new, fairly self-contained application.

You don’t want to get bogged down in a “to move this application from the data center we have to arrange for 14 different integration points” conversation. Start with something new that is relatively standalone. Obviously, if you’ve started with test/dev, this issue should not be a major one.

5. Evaluate the application post-implementation.

Take a look at the TCO compared with what it would have been had the application been provisioned the established way; a rough framing is sketched below. Far better than one of the studies mentioned in #1 above is a real-world example with dramatic cost reduction.
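One rough way to frame the comparison follows; every figure below is a hypothetical placeholder to be replaced with your own measured costs, not a claim about actual savings.

    # Hypothetical placeholder figures; substitute your own numbers.
    established = {"hardware": 40000, "provisioning_labor": 8000,
                   "ops_per_year": 12000}
    private_cloud = {"hardware": 15000, "provisioning_labor": 500,
                     "ops_per_year": 6000}

    def three_year_tco(c):
        """Up-front costs plus three years of operations."""
        return c["hardware"] + c["provisioning_labor"] + 3 * c["ops_per_year"]

    a, b = three_year_tco(established), three_year_tco(private_cloud)
    print(f"established: ${a:,}  private cloud: ${b:,}  delta: ${a - b:,}")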

This brings my five-part series on private clouds to a close. I’m sure I’ll return to the topic frequently, because I am convinced that over the next year vendors, press, and IT organizations will focus on private clouds as the best and quickest way to move to the next phase of IT infrastructure.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.

Cloud Computing Seminars

HyperStratus is offering three one-day seminars. The topics are:

1. Cloud fundamentals: key technologies, market landscape, adoption drivers, benefits and risks, creating an action plan

2. Cloud applications: selecting cloud-appropriate applications, application architectures, lifecycle management, hands-on exercises

3. Cloud deployment: private vs. public options, creating a private cloud, key technologies, system management

The seminars can be delivered individually or in combination. For more information, see http://www.hyperstratus.com/pages/training.htm
