by Bernard Golden

What The Private Cloud Supply Chain Looks Like

Aug 27, 2009

Cloud computing will require IT to be masters of a private supply chain, argues Bernard Golden. Here's a look at the stack of services that must be in place for a cloud supply chain to operate.

Last week’s posting on the changes cloud computing will bring to what I termed “The Private Cloud Supply Chain” engendered really interesting and thought-provoking comments, so much so that I decided to expand on the posting and address the substance of those comments. To aid the discussion, I created a graphic outlining the cloud stack of services that must be in place for a cloud supply chain to operate. Let me briefly describe this (admittedly) high-level visualization, starting from the bottom:

Hardware Infrastructure: Computing operates, after all, on hardware, and cloud computing, no matter where it runs, requires hardware. For a private cloud, this is a data center festooned with servers, networking, and storage. Traditionally, these were installed and made available via manual work performed by sys admins. This is a well-understood domain, though it is worth noting that over the past few years these products have been redesigned to be more virtualization-friendly, with larger complements of resources such as memory built in to support deployment in a virtualized environment.


Virtualized Resources: As just noted, hardware released over the past few years is designed to support large-scale virtualization. In essence, virtualized resources are becoming the norm: when someone wants to use a machine, of course it’s going to be a virtual machine. In a few more years, we will regard deploying an application/middleware/OS stack directly on bare metal as an aberration. For cloud computing to exist, with its emphasis on agile provisioning and easy scalability, virtualization is de rigueur. Unless and until a complete set of virtualized compute resources is available, cloud computing cannot happen. Note that this can cover all or just a portion of a data center, but it must include server, network, and storage as virtualized resources that can be deployed and re-deployed without anyone laying hands on physical devices.

Orchestration: A software layer that coordinates the assignment of virtualized resources. Orchestration is what allows someone to request a defined manifest of compute resources (a virtual server with so much memory, network connectivity of a certain specification and on a certain VLAN, storage of a certain amount) and have it provisioned as a single request transaction; the entire request flows through automatically as one transaction.
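To make the manifest idea concrete, here is a minimal sketch in Python. The `Manifest` and `Orchestrator` names are illustrative assumptions, not any vendor's actual API; in a real system each step would call the virtualization layer, and a failure in any step would roll back the whole transaction.

```python
from dataclasses import dataclass

@dataclass
class Manifest:
    """A defined request for compute resources: server, network, and storage."""
    vcpus: int
    memory_gb: int
    vlan: str
    storage_gb: int

class Orchestrator:
    """Coordinates server, network, and storage assignment as one request."""
    def provision(self, m: Manifest) -> dict:
        # Each of these would be a call into the virtualized-resource layer.
        server = {"vcpus": m.vcpus, "memory_gb": m.memory_gb}
        network = {"vlan": m.vlan}
        storage = {"size_gb": m.storage_gb}
        # Returned together: the caller sees one transaction, not three tickets.
        return {"server": server, "network": network, "storage": storage}

vm = Orchestrator().provision(
    Manifest(vcpus=2, memory_gb=8, vlan="app-42", storage_gb=100))
```

The point of the design is that the requester specifies the whole manifest once, rather than negotiating server, network, and storage with three separate teams.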

IT Services Interface: This indicates a demarcation between IT operations and application groups and symbolizes the changed relationship between the two groups. Last week’s posting discussed the critical difference between pre-cloud and post-cloud provisioning processes. Pre-cloud, the provisioning process is a largely manual process, punctuated by repeated meetings, phone calls, emails, instant messages, chance encounters in the hallway, etc., the net result of which is a provisioning process which can stretch to weeks or months. Post-cloud, the provisioning process is fully automated, with requests flowing into a service interface (a computerized interface, not a help desk!), and virtual resources provisioned in an orchestrated fashion being returned, ready to use, in a matter of minutes. A key theme of the posting was that a services interface is opaque with respect to resource need signaling, since requests are issued in real-time, rather than over a period of weeks or months; therefore, the task of capacity planning is much more difficult in a private cloud setting.
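As a rough illustration of the "computerized interface, not a help desk" point, the sketch below (all names hypothetical) shows a request flowing straight through to an orchestration layer, with no meeting, email, or approval step in the loop:

```python
import time

def service_interface(request: dict, provision) -> dict:
    """Accepts a resource request and returns provisioned resources.

    No tickets or manual approvals: the request is handed directly to the
    orchestration layer, so turnaround is minutes, not weeks.
    """
    started = time.time()
    resources = provision(request)  # delegate to the orchestration layer
    return {"resources": resources, "elapsed_s": time.time() - started}

# A stand-in for the orchestration layer (instant here; minutes in practice).
result = service_interface({"vcpus": 2, "memory_gb": 8},
                           provision=lambda req: {"vm": dict(req)})
```

Note what IT operations does not see here: any advance signal of demand. The request arrives and is satisfied in real time, which is exactly why capacity planning becomes harder.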

Application Cloud Management: In a cloud environment, the application group assumes more responsibility for managing the operational environment of the application. It decides on provisioning, both timeframe and resource assignment. It is responsible for monitoring the application’s uptime and performance, and for deploying additional resources (that is, scaling the application) to meet increased demand. Perhaps it would be better to say that the application group has the opportunity to take responsibility for those things; some organizations may choose to have a central group like IT Operations execute these activities. However, even if a centralized group performs these operations, the timescales they are executed in must conform to the “minutes, not weeks” characteristic of a cloud environment.
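A minimal sketch of what the application group's scaling responsibility might look like in code; the threshold rule and names are assumptions for illustration, not a real monitoring API:

```python
def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.6, max_instances: int = 10) -> int:
    """Simple scale-out rule: grow the pool when average CPU exceeds the
    target, shrink it when utilization falls well below, within bounds."""
    if cpu_utilization > target and current < max_instances:
        return current + 1  # demand is up: request another instance
    if cpu_utilization < target / 2 and current > 1:
        return current - 1  # demand is down: release an instance
    return current

# The application group (or its automation) runs this on each monitoring tick
# and submits the resulting provision/deprovision request to the cloud.
```

Whether a person or a script applies the rule, the decision-to-deployment loop has to close in minutes for the "minutes, not weeks" characteristic to hold.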

Application Software: This is the collection of software components that operates to deliver application functionality. There is no logical difference between application software in a cloud vs. non-cloud environment, though the ability to dynamically scale applications typically requires an architecture suited to support this, including horizontal partitioning and loose coupling.
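The horizontal-partitioning and loose-coupling point can be sketched as follows; the in-memory `shared_store` is a stand-in for an external cache or database (an assumption for illustration):

```python
# State lives outside the application instances, so any instance can serve
# any request, and adding instances adds capacity.
shared_store = {}

def handle(instance_id: str, user: str, msg: str) -> list:
    """A stateless handler: it keeps no local session state, so instances
    are interchangeable and can be cloned horizontally."""
    shared_store.setdefault(user, []).append(msg)
    return list(shared_store[user])

# Requests for one user can hit different instances with the same result.
handle("vm-1", "alice", "hello")
out = handle("vm-2", "alice", "again")
```

An application that instead pinned session state to one server could not be scaled this way, which is why dynamic scaling typically requires this kind of architecture.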

Business Objective: The raison d’être of the whole shooting match. Nothing happens (or at least, should happen) without a business objective driving the exercise. One of the exciting things about cloud computing is that it holds the potential to link resource assignment and costs more directly with business objectives, thereby aligning IT execution more closely to business results.

Two arrows are present in the graphic. The one on the left symbolizes the point made in the last paragraph: in a cloud environment, IT resources are deployed to directly address business objective demands. The one on the right symbolizes that, in a cloud environment, business outcomes (defined for the particular business objective) result from the deployed resources.

One could say that the layers from “Application Cloud Management” through “Virtualized Resources” constitute cloud computing and represent the transformation it brings to the table. Absent these layers, IT would operate in the time-tested traditional “one application, one server” world, where provisioning is a multi-step, prolonged effort. As last week’s posting noted, cloud computing requires more than the insertion of additional software layers into the traditional IT world; it also requires organizational and process changes to adapt to the changed operational environment. Supporting an IT Services Interface requires a coordinated mix of software, organizational role shifts, and process re-engineering.

What’s interesting about the comments to last week’s posting is how they represent some of the ways that various participants in the industry are reacting to the cloud. I’d like to describe these reactions and what they suggest about the prospects for clouds, particularly the private variant. Here are the common positions that IT participants (both internal IT organizations and vendors) take vis-à-vis cloud computing:

The cloud isn’t necessary: One commenter stated “I have read many cloud articles, blogs, and threads and the constant focus seems to be that users can self provision at will. Why would you ever allow this?” He then went on to describe processes his organization had put into place to support faster provisioning, including request, second-level evaluation of the request versus existing resources, and a final stop where the request is approved or denied. He said that, with this in place, his organization was able to perform 48-hour provisioning. It seems that his organization has done a lot of the hard work of re-engineering to streamline the process. It isn’t clear whether there is automation in place (i.e., whether the group has dynamic provisioning based on a service request/orchestration capability) or whether it just works really, really fast.

One might say that this approach represents optimizing current practices to operate as efficiently as possible. It’s less clear whether a 48-hour provisioning timeframe will remain acceptable in a world that comes to expect 10-minute provisioning, not to mention whether this approach supports granular cost assignment. As a side note, if last week’s posting implied that end users would drive provisioning and scaling decisions, I wasn’t clear enough. The cloud vision places responsibility for these activities with the application group, not application users. These groups usually comprise IT professionals and are often located within central IT, though many organizations place them with line-of-business organizations. In any case, most IT organizations haven’t done the work to get to the 48-hour responsiveness outlined in this comment, which means the contrast between as-is and cloud could-be will continue to be stark, and unlikely to allow a “cloud isn’t necessary” argument to hold much water.

Capacity planning is not that big a problem: The argument here runs that most enterprise applications are pretty predictable with regard to use patterns and growth, and that IT has a pretty good process for visibility into how many and what type of applications are coming down the pike; therefore, the opaque-demand problem asserted for private clouds isn’t that serious. It’s true that many of the uber-scalable applications used as examples of why cloud agility and scalability are important are Web 2.0-ish, so perhaps this opaqueness isn’t that big an issue. Left unsaid in this reaction is any explanation of why enterprises would be so excited about cloud computing if everything is so predictable. The thirst for on-demand provisioning and application experimentation suggests that expectations born of current use and growth patterns fail to address unsatisfied application demands, which would imply significantly more (and more variable) demand than this reaction expects. Moreover, this reaction fails to address the phenomenon described in last week’s posting: less provisioning friction will result in more, and more variable, demand overall.

The problem will be ring-fenced: At least in the short term, many organizations are focusing on test/dev as the cloud application. It’s clear that test/dev is poorly served by current practices and processes, focused as they are on ensuring production environments are defined and controlled. When someone is ready to develop, he or she wants to start now; waiting several weeks for resources hinders productivity, not to mention being a real drag. So many organizations plan to start their cloud computing initiatives with test/dev, devoting a portion of the data center to this activity (remember, a cloud environment can occupy only part of a data center, while the remainder continues to operate in established fashion). This isn’t a bad strategy; it’s just not clear how limited or sustainable it is. For one, I’ve heard people like the head of IBM’s cloud initiative assert that test/dev accounts for 40% of all activity; while “activity” wasn’t defined, it seems that 40% would require a pretty widespread deployment of cloud computing throughout a data center, which is to say that 40% doesn’t sound very ring-fenced! This level of test/dev resource use is echoed in a just-published article outlining Dell’s cloud environment, which it describes as consisting of 2,500 production VMs, with 3,700 VMs being used for test/dev. In any case, this strategy, while sensible, may founder on expectations; once developers and testers get immediate resource assignment, how satisfied are the application production groups going to be with a weeks-long deployment timescale?

A tool will solve the problem: A couple of commenters described how an orchestration tool can solve all these issues. Orchestration, as the graphic illustrates, is a key capability of a private cloud. However, an orchestration tool effects neither process change nor the reallocation of organizational role responsibilities. In a way, these comments reflect a common reaction among those of us in IT: if we just implement the right tool embodying the right process, everything else will fall into place. We have a touching faith that organizations operate according to a Vulcan-like logic; it’s just those darn humans that keep failing us. Experience shows that tools support organizational adaptation; they do not supplant it. Failing to recognize the issues that accompany a move to cloud computing is sub-optimal, to say the least. With regard to the challenges described last week, the lack of capacity planning visibility and the need for process re-engineering, one could say that, far from solving them, orchestration tools exacerbate or even cause them!

The CAPEX vs. OPEX issues are just accounting entries: One commenter indicated that the financial flow changes required by moving hardware funding from applications to a central IT group would not pose too large a problem, since IT would develop a group responsible for forecasting investment needs and disbursing funds. In the long run, this is undoubtedly true. It’s the transition that is going to be sticky. Disbursing funds allots power, and those who currently hold that power won’t abdicate it casually. Transitioning CAPEX responsibility to central IT will be an interesting process.

It’s clear that the implications of private cloud computing implementation are a hot topic, and one that many people have strong opinions about. Generally speaking, I’ve found that when passions are stirred, the whiff of personal and/or organizational opportunity, or threat, is in the air. After all, why get worked up about something that won’t bring much change? It’s a cliché that people resist change, and an inaccurate cliché at that. We eagerly embrace change when we perceive it to be in our interest; just look at the mania for the iPhone. People love the change it brings to how they communicate and operate in their daily lives. We resist change when we perceive it to be unsettling or dangerous to our position. This is particularly true in organizational settings, where most of us obtain some measure of personal esteem. Consequently, it’s not likely that the charge of cloud computing is going to be permanently side-tracked; it’s just that the journey is likely to be dramatic, not placid.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.
