How Not to Build a Cloud

Here are six no-no's for cloud computing planning in your enterprise. From legacy software to outdated metrics, these potential pitfalls deserve careful consideration.

The metaphor of cloud computing may go all the way back to mainframe computing, though some cloud gurus heartily dispute that view. Still, the implementation is new and complex enough that many of the basic rules are still being set, according to analysts and IT departments building heavy-duty cloud infrastructures.

As many as 17 percent of companies are interested in using virtual desktops provided by an external company, says Mark Bowker, analyst at Enterprise Strategy Group.


Despite the rampant interest, few battle-tested best-practice guides exist for hybrid internal/external cloud networks. Data-center guidelines apply, but don't cover many of the complexities of combining external IT with internal IT, or delivering applications to the desktop fast enough that end users can't tell which applications come from inside and which don't, Bowker says.

There are a few rules for what not to do with your cloud computing efforts, however. Here are six starting points to keep in mind.

1. Don't centralize too much

Using IT services concentrated in data centers is the whole point of cloud computing and virtualization. But keep your eye on end-user response times, says Vince DiMemmo, general manager of cloud and IT services at Equinix, which provides the platform and infrastructure services on which other companies build their own offerings. Customers include Verizon, AT&T, Japan Telecom, MCI, Comcast and YouTube, among others.

The lag between when a user presses a key and the response from a server that could be anywhere in the world could spell life or death for cloud computing projects, he says.

"If the user experience isn't fast enough, VDI services won't be accepted, just as SaaS wouldn't be," Bowker says.

Equinix spreads user access nodes around the edge of its network—close to concentrations of end users. "We cut down the physical distance between the network and the end user—even inside the building if we can connect directly," he says. "Every hop is a piece of equipment that contributes its own network latency and extra response time, even if it's only microseconds. Add that to every packet for every node, and it adds up."
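DiMemmo's per-hop arithmetic can be sketched in a few lines. The hop counts, per-hop delays, and base round-trip times below are illustrative assumptions, not Equinix measurements:

```python
# Illustrative sketch: per-hop device latency compounds across every packet
# and every round trip. All numbers here are hypothetical, not measured.

def round_trip_latency_ms(hops, per_hop_us, base_rtt_ms):
    """Round-trip time: base network RTT plus per-hop device latency both ways."""
    return base_rtt_ms + (hops * per_hop_us * 2) / 1000.0

edge = round_trip_latency_ms(hops=3, per_hop_us=50, base_rtt_ms=5)
distant = round_trip_latency_ms(hops=12, per_hop_us=50, base_rtt_ms=40)

print(f"edge-placed node:    {edge:.1f} ms per round trip")
print(f"distant data center: {distant:.1f} ms per round trip")
```

Most of the gap in this toy example comes from physical distance, but the per-hop term is charged to every packet, which is the point of the quote: an application making thousands of round trips multiplies both costs.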

2. Don't forget about the hardware

Virtualization and cloud computing are supposed to make the hardware invisible, but that doesn't mean the people providing servers and storage can cut corners with servers that are less powerful or even I/O that restricts the flow of data, according to Gordon Haff, high-performance computing analyst at Illuminata.

"Every cycle a command is waiting in a queue or to get through the [server or storage I/O bus] just adds more latency," Haff says. "The faster, more powerful the servers are the shorter the latency to the user, whether they're in the cloud or in a more traditional data center."

3. Watch the legacy issues

"Most legacy applications weren't designed to run in clouds or other elastic environments, so even their data structures are wrong to work well in a cloud environment," according to Steve Yaskin, CTO and founder of Queplix, which makes tools to port legacy apps to clouds. "All the data on one person could be spread around three or four databases—transactions in one, addresses in another."

Metadata catalogs from Queplix or SpringSource, which was acquired by VMware in August, can reduce the number of times an app has to assemble data from multiple sources. Good caching can keep frequently used data available, and structuring the storage behind the apps by how frequently data is used, rather than by its value or other criteria, will also drastically cut response time, Yaskin says.
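The caching idea can be sketched simply: assemble a person's record from its scattered stores once, then serve repeat requests from memory. The `fetch_*` helpers below are hypothetical stand-ins for queries against separate legacy databases, not Queplix or SpringSource APIs:

```python
# Sketch: cache the assembled record so the app doesn't re-query three or
# four databases on every request. All names here are illustrative.
from functools import lru_cache

def fetch_transactions(customer_id):
    return (("2024-01-05", 120.00),)   # placeholder for a transactions DB query

def fetch_address(customer_id):
    return "1 Main St"                 # placeholder for an address DB query

@lru_cache(maxsize=1024)
def customer_record(customer_id):
    """First call assembles from multiple sources; repeat calls hit the cache."""
    return {
        "id": customer_id,
        "transactions": fetch_transactions(customer_id),
        "address": fetch_address(customer_id),
    }

customer_record(42)
customer_record(42)  # second call is served from the in-memory cache
print(customer_record.cache_info())
```

A real deployment would also need invalidation when the underlying databases change, but the shape is the same: pay the multi-source assembly cost once per record instead of once per request.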

4. Don't let your software get chatty

Speeding up the network and the servers addresses only two-thirds of the latency problem, DiMemmo says.

Many cloud-based applications use standard browsers rather than interfaces designed for fast response across the WAN, or matched client and server components that size their packets and limit administrative chatter between the two to keep performance as high as possible.

"A lot of those APIs are pretty chatty," DiMemmo says. "They have to carve the message up three, four, five times and each of them add 40 or 50 milliseconds. We used to think if we got a 150-millisecond round trip from user to the server, that was pretty good. And it would be if you were only doing it once. These are doing it over and over for every communication."

5. Don't measure success the way you did before

Service-level agreements that lean heavily on wonky, IT-centric metrics such as the IPPM guidelines don't cut it for cloud-based apps. Quality of Experience (QoE), a subjective measure of how well an application performs from the end user's perspective rather than a count of packets through a network, is the metric cloud customers are demanding more frequently, DiMemmo says.

Traditional SLAs have focused on how well IT departments do their jobs, not on how happy end users are with the result, according to Chris Wolf, analyst with The Burton Group. Clouds have to focus on the end-user experience, which may be easier because IT itself will be the customer in many cloud relationships, he says.

6. Don't save pennies and waste dollars

Adding internal or external cloud services lets IT fundamentally change the way it builds and supports infrastructure. That alone justifies going beyond the cost-saving goals many companies set for their server virtualization projects and may carry over to desktop virtualization, according to IDC desktop virtualization analyst Ian Song.

Pinching pennies when building a cloud infrastructure will short-change a company, not only by delivering cloud systems with performance that doesn't match end-user expectations, but also by restricting the potential of an extraordinarily flexible technology, Song says.

"When companies go to cloud computing, they really have to do it with high-end capabilities—high availability, fault tolerance, and plan for flexible SaaS," Song says. "They should expand on what they did with server virtualization, rather than go with an end goal of saving money or limit the intent to the capabilities of virtualization. Cloud pushes them beyond that.
