Writer Nicholas Carr will earn the enmity of even more tech veterans with his newest prediction: Cloud computing will put most IT departments out of business. “IT departments will have little left to do once the bulk of business computing shifts out of private data centers and into the cloud,” Carr writes in his new book, “The Big Switch: Rewiring the World, from Edison to Google.”
An exaggeration? Of course. But there’s a kernel of truth beneath the hyperbole. Cloud computing, once a concept as murky as its name suggests, is becoming a legitimate emerging technology and piquing the interest of forward-looking CIOs. Out-of-control costs for power, personnel and hardware, limited space in data centers, and above all, a desire to simplify, have encouraged significant numbers of startups—and a still small number of enterprises—to move more infrastructure into a third-party provided cloud.
“The concept of cloud computing makes enormous sense,” says André Mendes, the CIO of Special Olympics. “It helps the CIO abstract another layer of complexity from the organization and concentrate on providing the higher levels of value.” Mendes, who’s now moving much of his data center outside his enterprise via conventional hosting services, says he expects to move toward the cloud in the next few years.
Why now? Enabling technologies, including nearly ubiquitous bandwidth and widespread server virtualization, plus the lessons learned from the rapid ascent of software as a service (SaaS), are encouraging CIOs to think further outside of the data center.
To be sure, it’s still the early days of cloud computing. Concerns around security and application latency, to name two of the issues most commonly raised by the IT community, are real. Also, providers have not fully formulated their business and pricing models, which is one reason that some CIOs who did not reap the desired ROI from SaaS now look at cloud computing skeptically. Yet another issue: transparency. Entrusting mission critical applications and data to a third party means the customer has to know exactly how cloud providers handle key security and architectural issues. How transparent providers will be about those details remains an open question.
A New Level of Scalability
Unlike many “next big things,” cloud computing didn’t just spring fully formed from the brain of a Silicon Valley whiz kid. “It’s the logical corollary of what happened in computing over the last 30 years. In a sense, it’s a return to the past; time-sharing on steroids,” says Mendes.
True enough, but it’s easier to get analysts and IT insiders to talk about the features and goals of a cloud than it is to pin down an exact definition. Keep in mind, too, that different vendors will spin cloud computing differently. Salesforce.com’s vision of the cloud looks much like the SaaS you know today; IBM’s vision includes mashups of massive customer data sets on the fly.
“The cloud is basically a combination of grid computing, which was mostly about raw processing power, and software as a service,” says analyst Dennis Byron of Research 2.0. “In effect the cloud is network virtualization.”
Dennis Quan, CTO of IBM’s High Performance On Demand Solutions, says “We’ve designed the cloud around virtualization. You have a data center with many servers and they are all turned into virtual machines.”
One difference from the now familiar, multi-tenant SaaS model, in which numerous clients access a provider’s application: Cloud computing environments also allow the customer to run his own applications on the provider’s infrastructure.
At the provider level, the goal is to dynamically assign computing workloads as customer jobs come in, notes William Fellows, an analyst with The 451 Group. That approach helps the vendor maximize its resources and lets the customer ask for more computing power on the fly. That’s a key point. A big goal of cloud computing, whether IBM’s Blue Cloud or Amazon’s EC2 (Elastic Compute Cloud), is rapid scalability.
But elasticity is probably a better term, says Barney Pell, founder and CTO of Powerset, a San Francisco-based startup building a natural language search engine. By elastic, Pell means the ability to stretch out when needed, then snap back. His company is attempting to index an enormous chunk of the Web, a compute-intensive task that runs most of the time. On top of that steady load come major spikes in user demand that would exceed the company’s normal computing capacity.
Rather than buy enough servers and other infrastructure to meet peak needs, Powerset became an early customer of Amazon’s EC2 and S3, Amazon’s related storage service. Powerset pays for the resources as it uses them, freeing up significant amounts of cash, Pell says.
Pell suggests that IT executives considering cloud services start by closely examining which resources their data center uses all the time—and which are only needed during periods of peak demand. What’s more, the use of an elastic service gives IT time to establish a baseline, that is, the minimum level of resources needed to run the business at all times.
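Pell’s baseline-versus-peak analysis comes down to simple arithmetic: compare the cost of owning enough capacity for the worst spike against owning only the baseline and renting the rest. A minimal sketch in Python, with all server counts and prices purely hypothetical:

```python
# Hypothetical figures: own enough servers for peak demand, versus
# owning only the baseline and renting elastic capacity during spikes.

BASELINE_SERVERS = 20        # always-on resources needed to run the business
PEAK_SERVERS = 100           # capacity needed only during demand spikes
SPIKE_HOURS_PER_MONTH = 50   # hours per month the extra capacity is used

OWNED_COST_PER_SERVER_MONTH = 250.0   # amortized hardware, power and space
ELASTIC_COST_PER_SERVER_HOUR = 0.50   # pay-as-you-go hourly rate

def monthly_cost_all_owned() -> float:
    """Buy enough infrastructure to cover the peak, idle or not."""
    return PEAK_SERVERS * OWNED_COST_PER_SERVER_MONTH

def monthly_cost_elastic() -> float:
    """Own only the baseline; rent the difference during spikes."""
    owned = BASELINE_SERVERS * OWNED_COST_PER_SERVER_MONTH
    rented = ((PEAK_SERVERS - BASELINE_SERVERS)
              * ELASTIC_COST_PER_SERVER_HOUR
              * SPIKE_HOURS_PER_MONTH)
    return owned + rented

if __name__ == "__main__":
    print(f"All owned: ${monthly_cost_all_owned():,.2f}/month")  # $25,000.00
    print(f"Elastic:   ${monthly_cost_elastic():,.2f}/month")    # $7,000.00
```

The interesting variable is spike duration: the fewer hours per month the peak capacity is actually needed, the more lopsided the comparison becomes in favor of the elastic model.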
Similarly, groups or departments within enterprises often have the need to prototype or handle a specific project, but don’t have the budget or desire to buy the needed infrastructure. Indeed, IBM itself is using its internal cloud to supply the resources needed for prototyping new applications or services, says Quan. Not every project uses that internal cloud, but more than 100 have, he adds.
The New York Times, for example, used Amazon Web Services (EC2 and S3) to generate PDFs of 11 million articles from the paper’s archives in less than 24 hours, running 100 EC2 instances rather than buying hardware for the project, Derek Gottfrid, senior software architect for the Times, wrote in his blog.
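The Times job illustrates the basic pattern behind this kind of elastic batch work: split a large corpus into chunks and fan them out across many identical instances. A minimal sketch of the partitioning step in Python (the chunking scheme and figures are illustrative, not the Times’ actual code):

```python
def partition(total_items: int, workers: int) -> list[range]:
    """Split total_items work units into contiguous ranges,
    one per worker, with sizes differing by at most one."""
    base, extra = divmod(total_items, workers)
    ranges, start = [], 0
    for w in range(workers):
        size = base + (1 if w < extra else 0)
        ranges.append(range(start, start + size))
        start += size
    return ranges

# 11 million archive articles spread across 100 worker instances:
chunks = partition(11_000_000, 100)
assert len(chunks) == 100
assert sum(len(c) for c in chunks) == 11_000_000
print(len(chunks[0]))  # 110000 articles per instance
```

At 110,000 articles per instance over roughly 24 hours, each machine averages on the order of 4,600 articles an hour, a throughput no single owned server would approach.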
Flexibility Up, Costs Down
For some enterprises, cloud computing can help a CIO tackle several problems at once, as was the case for Schumacher Group CIO Doug Menefee. Upon joining the Lafayette, Louisiana-based company three years ago, Menefee had to tackle a disaster-planning gap and find new ways for IT to keep up with rapid business growth.
Headquartered two hours west of New Orleans and 35 miles north of the Gulf of Mexico, Schumacher staffs emergency rooms for 150 hospitals across the U.S. It only takes a glance at the map to see how close it came to being hit by hurricanes Katrina and Rita. “It was an eye-opener,” says Menefee. “We didn’t have disaster recovery and business continuity capabilities. Had our headquarters gone down, it would have taken all of the regional offices down with it.”
At the same time, Schumacher’s IT group was struggling to keep up with the demands of a company whose revenue was growing 20 percent to 30 percent a year—even faster when measured by the number of complex contracts it needed to manage. “We can go out and turn on five or six hospitals tomorrow. We need the flexibility to move data quickly,” Menefee says. But setting up and provisioning new regional offices was taking months.
As Menefee settled into his new job, he realized that running at least some of his applications outside Schumacher’s data center would solve a number of problems. He decided to combine a custom application built by Apptus, an ISV, with a Salesforce.com CRM application to handle the thousands of contracts among his company, the hospitals and the doctors. Those moves, which involved about half of the company’s IT infrastructure, spared him from hiring three to five additional full-time IT staffers at $40,000 to $80,000 a year each, plus a large outlay for additional hardware, he says.
Security, of course, poses an issue. “Single sign-on service and password management were the biggest pain points,” says the CIO.
While very upbeat about his experience in the cloud, Menefee says his data center isn’t going away anytime soon. The company deals with very large image files and charts scanned into the system, which means that latency becomes an issue. So for now, that type of work stays in house. There’s also “a beast” of a legacy billing system to deal with that wouldn’t fit well into a hosted environment, he says.
Is Schumacher utilizing cloud technology, or is it really SaaS? “There’s a lot of gray area around that term [cloud computing],” Menefee says. “But for me, the idea of us using an infrastructure that isn’t our own, that is managed outside makes it a cloud. But I’m not looking to be part of a trend. I find a problem and look for a solution.”
Security, latency, service levels and availability are issues that rightly concern IT executives when the talk turns to cloud computing. Vendors will have plenty of work to do in the next few years to resolve them to IT’s satisfaction. But there’s also a less concrete, but important, issue on the cloud computing table: culture.
“Some people still view this as a loss of control,” says Adam Selipsky, Amazon’s vice president for product management and developer relations. “They’re starting to come to terms with the idea of data leaving their four walls, but we’re not there yet.”
Indeed, when asked what advice he has for other CIOs considering cloud computing, Schumacher’s Menefee says, “Your traditional IT staffer is going to be resistant. Enlist the guys who have experience developing for the Web.”
More caveats: Although it’s not a common issue, some applications call for specific hardware. If that’s the case, says Forrester principal analyst James Staten, forget about running the application in the cloud. And database performance in the cloud can still be problematic, says John Engates, CTO of Rackspace, an IT hosting company based in San Antonio, Texas.
On the other side of the ledger, though, CIOs will find benefits from cloud services, including more scalability, faster deployment times, and a simpler data center. There’s no rush, but while you keep your feet firmly on the ground, it’s time to take a peek into the cloud.