As I noted last week, Gartner calls cloud computing the next big thing. I characterized the ability to move from talent-constrained, capital-intensive data center management to inexpensive, pay-as-you-go cloud infrastructure as too logical to be denied. Oh, there will be plenty of FUD spread about the cloud’s shortcomings, but there was plenty of FUD about today’s champions when they first got started. New things always look risky compared to what you’re now using: you’ve internalized its risks, while the new solution’s risks are front-of-mind.
So what should you be thinking about if you want to get going with cloud computing?
Here are key factors for you to recognize:
Get used to virtualization: The foundation of cloud computing is virtualization. Cloud computing is completely different from the superficially similar external hosting, aka ASP, a trend that crashed and burned precisely because of those differences. External hosting merely moves the machine from your data center to someone else’s. You pay to manage a physical machine and are subject to that machine’s issues: hardware breakdowns, resource limitations, and inflexible hosting arrangements. By contrast, cloud computing starts off by turning the machine into a virtual image, which resides on some physical server in the cloud provider’s hosting environment; however, that virtual image can be shunted around, breaking the hardware dependency associated with external hosting. With hardware dependency no longer an issue, your system is insulated from hardware breakdowns: the cloud provider will automatically move your system image to another piece of hardware while it fixes the original. And if your system begins to outstrip the resources assigned to it, more can easily be added to ensure your system doesn’t suffer.
This means that you’ll need to get comfortable with virtualization. While cloud providers like Amazon will attempt to provide management abstractions shielding you from the virtual systems, if you drill down into your system, you’ll soon see virtualization. So building up virtualization skills is a prerequisite for moving to cloud computing.
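To make the insulation from hardware concrete, here is a minimal sketch of the behavior described above: VM images placed on physical hosts, then automatically shunted elsewhere when hardware fails. All the names, capacities, and the scheduling logic are invented for illustration; a real cloud provider's scheduler is far more sophisticated.

```python
# Hypothetical sketch (all names invented): how a provider might reassign
# virtual machine images when the physical hardware underneath them fails.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: int            # e.g., spare CPU cores
    healthy: bool = True
    vms: list = field(default_factory=list)

def place(vm_name: str, demand: int, hosts: list) -> Host:
    """Place a VM image on the first healthy host with spare capacity."""
    for host in hosts:
        if host.healthy and host.capacity >= demand:
            host.capacity -= demand
            host.vms.append(vm_name)
            return host
    raise RuntimeError("no capacity available")

def evacuate(failed: Host, hosts: list, demand_per_vm: int) -> None:
    """On hardware failure, shunt each VM image to another physical server."""
    failed.healthy = False
    for vm in list(failed.vms):
        failed.vms.remove(vm)
        place(vm, demand_per_vm, hosts)

hosts = [Host("rack1", capacity=8), Host("rack2", capacity=8)]
place("erp-app", demand=4, hosts=hosts)       # lands on rack1
evacuate(hosts[0], hosts, demand_per_vm=4)    # rack1 dies; VM moves to rack2
print(hosts[1].vms)                           # ['erp-app']
```

The point of the toy: your system never knows or cares which rack it runs on, which is exactly the hardware independence that external hosting lacked.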
Get used to Linux: In order for cloud providers to deliver inexpensive computing, they’ve all leveraged Linux as their virtualization platform. While one or more of the providers will undoubtedly explore using Windows Server as the virtualization platform, it will prove difficult for them to get the numbers to pencil out, not to mention the challenges of license management in the cloud, since Microsoft licenses are designed for a more static environment. Just as the cloud providers will attempt to shield users from virtualization, they’ll attempt to do the same regarding the underlying OS, with the same results. When it comes to guest virtual machines (aka VMs), the cloud providers will support them, but you’ll face the burden of license management and the reality that bringing up a Microsoft virtual machine will require intervention to manage license input, a disadvantage compared with the immediate availability of Linux-based virtual machines.
Get used to a new type of application delivery: Given that you’ll be running virtual machines on top of a hypervisor and a virtual machine is a complete image containing OS, middleware, and application, soon application providers will deliver their product not on a CD, not in an installable image, but in a complete virtual machine, preconfigured, with all the other required software also installed and configured. You’ll simply plop the VM down onto a hypervisor and it will be ready to run. You may have to do some final configuration to tune the VM, but the
time-consuming manual work of installation and basic configuration will be done. As I say in my book, Virtualization for Dummies, once this new mode of application delivery takes hold, we’ll look back on the old way of installing applications the way we watch old movies and see someone making a long-distance call by telephoning an operator to make the connection: wow, so that’s how they had to do it; sure am glad we don’t have to do it that way anymore. Not only will you like this, as it frees you from a lot of tedious, error-prone hands-on work; the vendor will like it too, as a significant percentage of their support calls occur during the initial installation and configuration phase.
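The preconfigured-appliance idea can be sketched in a few lines. Everything here, from the appliance contents to the `deploy` step, is invented for illustration; the point is simply that the vendor ships the stack already installed, and the buyer supplies only final, site-specific tuning.

```python
# Illustrative sketch (all names invented): a "virtual appliance" arrives
# with OS, middleware, and application preinstalled and preconfigured.

appliance = {
    "os": "Linux",
    "middleware": ["app-server", "database"],   # already installed by vendor
    "app": "crm-suite",
    "config": {"db_pool_size": 10, "hostname": "CHANGE_ME"},
}

def deploy(appliance: dict, site_tuning: dict) -> dict:
    """'Plop the VM down': no install phase, just merge in final tuning."""
    vm = dict(appliance)
    vm["config"] = {**appliance["config"], **site_tuning}
    vm["state"] = "running"
    return vm

vm = deploy(appliance, {"hostname": "crm.example.com"})
print(vm["config"]["hostname"])   # crm.example.com
```

Notice that the only input the buyer provides is the site-specific tuning; everything that used to be an install script is baked into the image.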
Get used to a porous firewall: Once you get used to the idea that applications are obtained in ready-to-run VMs, the next logical step is that the VM can be run in any convenient location — either inside your data center or somewhere in the cloud. So the former rigid boundary separating “inside” the firewall from “the Internet” will become porous as application VMs shunt back and forth between internal and external hosting. The decision about where the VM should be run will depend on factors like data throughput, risk assessment regarding data privacy, and internal capacity. Forward-looking IT groups will begin to look at their data center as something like local cache on a chip — nearby and very high performance, while the cloud will be like disk — remote, not as fast, but cheaper by one or two orders of magnitude.
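The placement decision described above can be sketched as a simple policy function. The criteria come from the paragraph (data privacy risk, throughput, internal capacity), but the thresholds and field names are invented for illustration; real policies would weigh many more factors.

```python
# Sketch (criteria from the text, thresholds invented): decide whether a
# VM belongs in the internal "cache" or out on the cheaper cloud "disk".

def choose_location(vm: dict, internal_capacity_free: int) -> str:
    # Sensitive data: keep it behind the firewall regardless of cost.
    if vm["data_sensitivity"] == "high":
        return "internal"
    # Heavy data throughput favors the nearby, high-performance data center,
    # if there is room for the VM.
    if vm["throughput_gbps"] > 1 and internal_capacity_free >= vm["cores"]:
        return "internal"
    # Otherwise take the slower but far cheaper option: the cloud.
    return "cloud"

print(choose_location({"data_sensitivity": "low",
                       "throughput_gbps": 0.1, "cores": 2}, 16))  # cloud
```

Because the same VM image runs in either location, the policy can be re-evaluated over time and the VM shunted back and forth as conditions change.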
Get used to a new mode of data center operations: To echo the late, great Don LaFontaine, in a world where applications are delivered in VMs, and the VMs can be located (and relocated) either in the data center or in the cloud, the role of operations will change dramatically. Much of the traditional grunt work will disappear because app installation and configuration are no longer necessary, and the amount of physical server management will also drop since a cloud provider will take care of it. A new set of challenges for IT operations will arise:
- Keep track of VMs: Many naysayers about virtualization have raised the specter of “VM proliferation,” saying that the ease of creating VMs (as compared to the slog of creating a physical server loaded with an app ready to run) will cause VMs to breed like rabbits, overrunning the data center. Leveraging the cloud certainly reduces the risk of running out of data center capacity, but controlling VMs, tracking their location, and ensuring they’re located in the appropriate place to meet system objectives will be important.
- The need for more sophisticated management tools: System management tools today are focused on local physical hardware and some aspects of application availability. While some of them manage VMs as well (or at least claim to do so), none of them have yet been designed to manage a data center/cloud mashup infrastructure. Having a unified management capability that enables VM location and relocation based on policies like processor headroom requirements, network capacity, etc., will be the next frontier of management. While it will be possible to use two tools — one for inside the data center and one for the cloud — that arrangement is suboptimal.
- Budget allocation: The question of chargeback is an interesting one: some observers say most organizations do it, while others say most organizations only say they’d like to, and instead fall back on crude arrangements of “the project will buy three servers, and we’ll split up the general overhead of IT operations according to the relative size of the different departments.” If a significant part of the hardware is in the cloud, those crude arrangements are no longer appropriate. So budget and cost allocation will be trickier in the future.
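The first challenge above, tracking VMs and their placement, amounts to keeping an inventory that can flag policy violations. Here is a minimal sketch; the registry class, its fields, and the placement policies are all invented for illustration.

```python
# Sketch (all names invented): track where each VM is and whether that
# location still satisfies its placement requirement.

class VMRegistry:
    def __init__(self):
        self._vms = {}   # name -> {"location": ..., "required": ...}

    def register(self, name: str, location: str, required: str):
        self._vms[name] = {"location": location, "required": required}

    def relocate(self, name: str, new_location: str):
        self._vms[name]["location"] = new_location

    def misplaced(self) -> list:
        """VMs whose current location violates their placement policy."""
        return [n for n, v in self._vms.items()
                if v["required"] != "any" and v["location"] != v["required"]]

reg = VMRegistry()
reg.register("payroll", location="cloud", required="internal")
reg.register("web-frontend", location="cloud", required="any")
print(reg.misplaced())        # ['payroll']
reg.relocate("payroll", "internal")
print(reg.misplaced())        # []
```

Even a toy like this shows why proliferation is a tracking problem rather than purely a capacity problem: the cloud supplies capacity, but only an inventory catches a payroll VM drifting outside the firewall.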
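The chargeback point lends itself to a small worked example. The departments, headcounts, and bills below are invented for illustration; the contrast is between the crude split-by-size arrangement and allocation from a metered, pay-as-you-go cloud bill, which is already itemized per consumer.

```python
# Sketch (numbers invented): why crude splits break down once a
# significant part of the hardware is metered in the cloud.

cloud_bill = {"marketing": 1200.0, "finance": 300.0, "engineering": 4500.0}
headcount  = {"marketing": 40,     "finance": 20,    "engineering": 40}

def crude_split(total: float, headcount: dict) -> dict:
    """Old way: split overhead by relative department size."""
    staff = sum(headcount.values())
    return {d: round(total * n / staff, 2) for d, n in headcount.items()}

def usage_based(cloud_bill: dict) -> dict:
    """New way: charge each department its actual metered spend."""
    return dict(cloud_bill)

total = sum(cloud_bill.values())          # 6000.0
print(crude_split(total, headcount))      # marketing is charged 2400.0 ...
print(usage_based(cloud_bill))            # ... but only consumed 1200.0
```

In this toy, the crude split charges marketing double its real consumption and lets engineering off lightly; once the cloud bill itemizes usage, there is little excuse for the crude arrangement.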
The march of virtualization and cloud computing is accelerating. The old ways of running data centers are rapidly being transformed as computing moves from being a hand-crafted, labor-intensive activity to one more automated and more prepared to meet business objectives with agility and cost-effectiveness. Where are you in your cloud virtualization planning?