by Bernard Golden

Defining Private Clouds, Part One

Opinion
May 14, 2009
Virtualization

VMware recently weighed in with its definition of private clouds. CIO.com cloud guru Bernard Golden explains his definition -- and his assessment of how IT groups can benefit from this arrangement.

The topic of private clouds is heating up. A private cloud is, essentially, a cloud computing capability dedicated to one organization. The term “internal cloud” is often used for this kind of functionality, but as many people point out, it conflates functionality with location. That is to say, one can imagine a company’s cloud that is physically hosted at an EDS site, where the equipment, infrastructure, and cloud computing software would all be dedicated to the customer company; that would make it a dedicated, or private, cloud, despite the fact that it is not located within a company-owned-and-operated data center. Therefore, private cloud is probably the more appropriate term, and is the one used throughout this post (and the subsequent post, which will address private clouds from the perspective of the cloud user).

VMware Defines Private Clouds

VMware has its own take on what makes a private cloud, and its view stresses automation and chargeback capability. Get details from CIO.com’s recent story, VMware vSphere: Does It Solve IT’s Biggest Worries About Cloud?

Now that we have the terminology straight, what constitutes a private cloud, beyond being dedicated to one using organization?

The easiest way to characterize a private cloud is to identify its functionality as being similar to a public cloud, except that it is not available for use by just anyone. While there are many definitions of cloud computing, I like the one from the Berkeley RAD Lab’s cloud computing report, which I recently discussed here. The RAD Lab report says that cloud computing has three characteristics:

1. Huge Resources: Computing resources are available upon demand and appear to be infinitely scalable, thereby enabling highly agile and scalable applications.

2. No Commitment: Computing is immediately available and may be used without the commitment to an on-going or long-term agreement.

3. Pay-by-the-Drink: Users pay only for the computing resources actually used; when resources are released, no further payment is required.

Note that the RAD Lab specifically states that it does not consider internal (i.e., private) clouds to be “real” clouds, since there is inevitably some point at which additional resources are not available, as when an individual company runs out of room in its data center.

What this implies for private clouds is that they should offer the same functionality: highly scalable for applications that require lots of headroom; no commitment beyond actual use, with dropped resources no longer charged to the former user of the resources (i.e., once I’m done with one or more virtual machines, I shut them down, and am no longer charged for them); and payment tied to actual resource consumption.
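
To make the pay-by-the-drink element concrete, here is a minimal metering sketch in Python; the hourly rate and the charge_for_vm function are illustrative inventions of mine, not any vendor’s billing API:

```python
from datetime import datetime, timedelta

# Hypothetical hourly rate for one virtual machine (illustrative only).
VM_HOURLY_RATE = 0.10  # dollars per VM-hour

def charge_for_vm(started: datetime, stopped: datetime) -> float:
    """Bill only for the interval the VM actually ran; once it is
    shut down, the meter stops and nothing further accrues."""
    hours = (stopped - started) / timedelta(hours=1)
    return round(hours * VM_HOURLY_RATE, 2)

# A VM used for six hours and then released costs $0.60, and nothing after that.
print(charge_for_vm(datetime(2009, 5, 14, 9, 0), datetime(2009, 5, 14, 15, 0)))
```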

It also implies that IT organizations will implement a clear-cut distinction between infrastructure provisioning and cloud services consumption: a clean demarcation between the granular services the infrastructure group provides and the computing resources the application groups consume. If you view my graphic representation of this concept, you can see that it portrays a private cloud divided between core IT infrastructure cloud provisioning and business IT (i.e., application creators/users); the dark line halfway down the figure indicates the areas of responsibility.

I’ve taken a different approach to describing private cloud computing. Most other discussions focus on the hardware and software components used to create a private cloud. Instead, I’ve chosen to focus on service capabilities, since a functional characterization is, in my view, central to understanding the capabilities that the cloud consumer (i.e., the Business IT application group) expects and the granular services that the cloud provider (in this case, the private cloud provider, a.k.a. IT infrastructure services) must deliver.

Focusing on the services is, to my mind, critical, because the whole point of cloud computing is to deliver a more satisfactory set of IT services, not to make infrastructure management more convenient. Put another way, unless cloud computing provides greater responsiveness and higher satisfaction to application users, it misses the mark. For this reason, I have chosen to depict the internal cloud as a set of services rather than a set of software components.

Below the line resides Data Center Operations, which is responsible for infrastructure management and delivery of core computing capabilities:

Virtual resources: Storage, machines, and networking. Each of these resources must be available to Business IT on demand, immediately and at the necessary scale; of necessity, this implies a fully virtualized set of resources. The ability to supply requested capacity is a challenge; as the Berkeley RAD Lab report noted, at some point internal resources will be exhausted. For most service requests, though, exhaustion will not be a problem; resource requests are likely to vary, with the amount of variability falling along some kind of distribution curve. One should expect, however, that reducing the friction of obtaining computing resources will increase the overall amount of resources called for. The effort to achieve a fully virtualized environment should not be underestimated. Assigning storage and network resources as virtualized resources requires that capacity be staged and be capable of being assigned without human intervention. A key issue for storage and networking is that additional hardware is typically required to support dynamic assignment.
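
To illustrate what on-demand assignment looks like from the Business IT side, here is a small sketch; the ProvisioningClient class and its request_vms method are hypothetical stand-ins for whatever interface fronts the virtualized pool, not any real product’s API:

```python
class ProvisioningClient:
    """Hypothetical front end to a pool of virtualized machines."""

    def __init__(self, capacity_vms: int):
        self.available = capacity_vms

    def request_vms(self, count: int) -> list:
        # As the RAD Lab notes, internal resources exhaust at some point.
        if count > self.available:
            raise RuntimeError("insufficient capacity: request exceeds pool")
        self.available -= count
        # Immediate, software-initiated assignment; no human in the loop.
        return [f"vm-{i}" for i in range(count)]

cloud = ProvisioningClient(capacity_vms=100)
print(cloud.request_vms(3), cloud.available)  # ['vm-0', 'vm-1', 'vm-2'] 97
```

The failure path matters as much as the success path here: the pool can run dry, which is exactly why capacity planning (discussed below) becomes so important.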

Automated Sys Admin: To emulate public cloud providers like Google and Amazon in terms of both responsiveness and economics, system administration must move from a manual or semi-automated process (often via home-grown Perl scripts indecipherable to anyone but their original creator) to one that is completely hands-off for individual actions like assigning storage. In fact, administration needs to be completely automated and driven by policy.

That is to say, once physical resources are put into operation, fully virtualization-capable and ready to have capacity assigned, no further human intervention should be required to assign or manage them. The individual actions that assign those virtual resources must be initiated by calls from other software applications. To the extent that the effort of an individual sys admin is required to fulfill a resource request, the private cloud has a hole in its fabric. With the resources themselves fully virtualized and the management of those resources fully automated, the data center may be said to support the “huge resources” and “no commitment” elements of the Berkeley RAD Lab definition. We will return to the third element (pay-by-the-drink) in a subsequent post.
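
As a sketch of what policy-driven administration might mean in practice, consider the following; the profiles, limits, and allocate function are assumptions of mine, not a description of any product:

```python
# Policy limits per application profile (hypothetical values).
POLICY = {
    "web-tier":  {"max_vms": 20, "storage_gb": 500},
    "batch-job": {"max_vms": 5,  "storage_gb": 100},
}

def allocate(profile: str, vms: int, storage_gb: int) -> dict:
    """Grant the request only if it fits the profile's policy limits;
    no sys admin judgment call anywhere in the path."""
    limits = POLICY.get(profile)
    if limits is None:
        raise ValueError(f"no policy defined for profile {profile!r}")
    if vms > limits["max_vms"] or storage_gb > limits["storage_gb"]:
        raise PermissionError("request exceeds policy limits for profile")
    # In a real system this call would drive the virtualization layer.
    return {"profile": profile, "vms": vms, "storage_gb": storage_gb}

print(allocate("web-tier", vms=4, storage_gb=200))  # granted automatically
```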

Capacity Planning: This is the private cloud’s inventory management. It doesn’t take a rocket scientist to see that doling out resources in a highly automated fashion, especially when the reduced friction promises to increase overall demand for computing resources, means that private clouds will require much more careful attention to the inventory of underlying physical resources. An analogy might be the highly optimized supply chain of Wal-Mart, which keeps its shelves filled in the face of high demand through careful monitoring of overall demand and a very robust ability to deliver goods as needed to replenish stock. In a private cloud, IT operations will need to track available computing resources with an eagle eye to ensure that capacity is always available to be assigned as application users demand it.

Absent excellent capacity planning, resource demands from applications may come up dry, leading to insufficient capacity to meet computing demands, as well as to uncomfortable meetings regarding the functionality of the private cloud. In a world of increased variability and low-visibility demand (after all, a private cloud implies that application managers can just press a button and get more of everything), capacity planning will move into a key role.
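
A toy version of that eagle-eye monitoring, assuming a fixed pool and an arbitrary 20 percent reorder threshold of my own choosing:

```python
TOTAL_VMS = 500
REORDER_THRESHOLD = 0.20  # replenish when less than 20% headroom remains

def headroom(allocated: int, total: int = TOTAL_VMS) -> float:
    """Fraction of the physical pool still unassigned."""
    return (total - allocated) / total

def needs_replenishment(allocated: int) -> bool:
    """True when free capacity drops below the reorder point,
    i.e., time to stage more physical hardware."""
    return headroom(allocated) < REORDER_THRESHOLD

for allocated in (300, 410, 460):
    print(allocated, f"{headroom(allocated):.0%}", needs_replenishment(allocated))
# 300 40% False / 410 18% True / 460 8% True
```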

Security: Rather than being tied to resource provisioning, as the layers nearer the Hardware and Software Resources are, security is an underpinning for all computing, no matter what specific resource is being used. The Cloud Security Alliance (CSA) just published a report on cloud computing security, which is well worth reading. I will discuss the report in a future posting, but I want to address a few specifics regarding how security must operate in cloud environments, including private clouds.

In a private cloud, security cannot be evaluated on a case-by-case basis, subject to manual review. Instead, security for individual applications must be available just like the computing resources themselves: invoked by external actions and applied in an automated fashion. Security policy should be evaluated and defined generally, then captured as individual rules to be applied according to application profile.

So, for example, if an application requires compliance with certain regulations, there must be a way to ensure those compliance measures are codified in rules that can be executed during application setup. Any need to examine an application’s security requirements via human intervention, and then implement them via manual configuration, introduces friction into the provisioning process and stalls the benefits of a private cloud.
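
Here is one way such executable rules might look, as a sketch only; the profile names and rule functions are hypothetical:

```python
def require_encrypted_storage(config: dict) -> dict:
    config["storage_encrypted"] = True
    return config

def require_audit_logging(config: dict) -> dict:
    config["audit_logging"] = True
    return config

# Regulatory requirements captured once, as executable rules per profile.
SECURITY_PROFILES = {
    "pci-compliant": [require_encrypted_storage, require_audit_logging],
    "internal-only": [require_audit_logging],
}

def apply_security(profile: str, config: dict) -> dict:
    """Apply every rule for the profile automatically at setup time;
    no human review sits in the provisioning path."""
    for rule in SECURITY_PROFILES[profile]:
        config = rule(config)
    return config

print(apply_security("pci-compliant", {"app": "billing"}))
```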

The CSA’s report goes into great detail about the security issues impinging on cloud environments; it’s too much detail to address in this post, but the key point is that for a private cloud to really operate as a cloud, it must move all discussion and manual configuration out of the path of application initiation. The security capabilities of the internal cloud must be defined with sufficient granularity that an application can state its security requirements and have them applied to the resulting software components and hardware resources automatically.

Bandwidth: This is a critical resource, and one that is going to be stressed in the future. Low-friction application creation and (especially) the enormous growth in data storage that enterprises will continue to experience, along with the increasing use of external services as part of application architectures, mean that much more traffic is going to be pushed through the pipes. Moreover, it’s not enough that raw bandwidth be available; it must support the latency levels needed by the various application components, which, to reiterate the point, are likely to be more numerous than in the old, manual-oriented data center world and, as a result of larger data volumes, will require much more bandwidth. This is another resource for which capacity planning will need to stay on top of demand patterns.

Identity Management: A robust identity management system needs to be in place to enable automation. Requests for computing services will come not from a sit-down meeting, where authentication and authorization are done on a personal basis (direct face-to-face interaction lets the resource granter judge the legitimacy of the request and the requestor), but from a service request via a software-enabled mechanism like an internal portal. This means an identity management system must be in operation that encompasses all potential resource requesters, as well as any system administration people involved in deploying resources. This identity management will need to incorporate roles to enable appropriate workflow: e.g., a project engineer requests resources (the identity management system is checked to see whether this is an appropriate request for this individual), the request is passed on to the engineer’s manager for approval (the system is checked to see who the appropriate approver is and whether that person has authority to approve), and so on.
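
A bare-bones sketch of that workflow, with made-up users, roles, and approval limits standing in for a real identity system:

```python
# Hypothetical identity store: who may request, who approves, up to what.
ROLES = {
    "alice": {"role": "project-engineer", "manager": "bob"},
    "bob":   {"role": "manager", "approval_limit_vms": 10},
}

def submit_request(user: str, vms: int) -> str:
    """Verify the requester's role, then route to the right approver."""
    entry = ROLES.get(user)
    if entry is None or entry["role"] != "project-engineer":
        raise PermissionError(f"{user} may not request resources")
    return entry["manager"]

def approve(manager: str, vms: int) -> bool:
    """Check both approver identity and authority to approve this size."""
    entry = ROLES.get(manager, {})
    return entry.get("role") == "manager" and vms <= entry.get("approval_limit_vms", 0)

approver = submit_request("alice", vms=4)
print(approver, approve(approver, vms=4))  # bob True
```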

Policy: A policy engine that contains the rules for resource requests is a necessary complement to the identity management system. The policy system ties together resource requests, authentication and authorization checks, assignment of resources, and the return of access information to the requester.
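
To show how those pieces connect, here is a minimal sketch of such an engine; the authenticate, authorize, and assign hooks are stubs of mine standing in for the real identity and provisioning systems:

```python
def handle_request(user: str, vms: int,
                   authenticate, authorize, assign) -> dict:
    """Tie authentication, authorization, resource assignment, and the
    access-info response into one automated pipeline."""
    if not authenticate(user):
        raise PermissionError("unknown requester")
    if not authorize(user, vms):
        raise PermissionError("request violates policy")
    handles = assign(vms)
    return {"user": user, "access": handles}  # returned to the requester

result = handle_request(
    "alice", 2,
    authenticate=lambda u: u == "alice",          # identity system stub
    authorize=lambda u, n: n <= 5,                # policy rule stub
    assign=lambda n: [f"vm-{i}" for i in range(n)],  # provisioning stub
)
print(result)  # {'user': 'alice', 'access': ['vm-0', 'vm-1']}
```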

The important things to keep in mind regarding a private cloud are that: (1) it creates a greater segregation between resource provisioning and resource demand; (2) it is likely to increase the overall demand for resources, given that reduced friction in resource requests will inevitably cause people to use more; (3) implementing the automated data center will require additional hardware resources to enable remote resource association with individual application instances; and (4) capacity planning to ensure resource availability will become a core data center operations skill.

With respect to number 4, capacity planning needs to implement something like “just-in-time” infrastructure availability. Having too much capacity, with resources lying fallow, wastes capital (a no-no these days), while having too little capacity restricts application agility, which is one of the main reasons for implementing a private cloud in the first place.

Next week, I’ll look at private clouds from the User perspective, and discuss what processes and resources need to be in place for private clouds to be a workable solution.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.

Cloud Computing Seminars

HyperStratus is offering three one-day seminars. The topics are:

1. Cloud fundamentals: key technologies, market landscape, adoption drivers, benefits and risks, creating an action plan

2. Cloud applications: selecting cloud-appropriate applications, application architectures, lifecycle management, hands-on exercises

3. Cloud deployment: private vs. public options, creating a private cloud, key technologies, system management

The seminars can be delivered individually or in combination. For more information, see http://www.hyperstratus.com/pages/training.htm
