How Not to Build a Cloud

Here are six no-no's for cloud computing planning in your enterprise. From legacy software to outdated metrics, these potential pitfalls deserve careful consideration.

By Kevin Fogarty
Tue, April 13, 2010

CIO — The metaphor of cloud computing may go all the way back to mainframe computing, though some cloud gurus heartily dispute that view. Still, the implementation is new and complex enough that many of the basic rules are still being set, according to analysts and IT departments building heavy-duty cloud infrastructures.

As many as 17 percent of companies are interested in using virtual desktops provided by an external company, says Mark Bowker, analyst at Enterprise Strategy Group.

Despite the rampant interest, few battle-tested best-practice guides exist for hybrid internal/external cloud networks. Data-center guidelines apply, but they don't cover many of the complexities of combining external IT with internal IT, or of delivering applications to the desktop so quickly that end users can't tell which applications come from inside and which don't, Bowker says.

There are a few rules for what not to do with your cloud computing efforts, however. Here are six starting points to keep in mind.

1. Don't centralize too much

Using IT services concentrated in data centers is the whole point of cloud computing and virtualization. But keep your eye on end user response times, says Vince DiMemmo, general manager of cloud and IT services at Equinix, which essentially provides the platform and infrastructure services on which other companies build their own services. Customers include Verizon (VZ), AT&T, Japan Telecom, MCI, Comcast (CMCSA) and YouTube, among others.

The lag between a user's keystroke and the response from a server that could be anywhere in the world can spell life or death for cloud computing projects, he says.

"If the user experience isn't fast enough, VDI services won't be accepted, just as SaaS wouldn't be," Bowker says.

Equinix spreads user access nodes around the edge of its network—close to concentrations of end users. "We cut down the physical distance between the network and the end user—even inside the building if we can connect directly," he says. "Every hop is a piece of equipment that contributes its own network latency and extra response time. Even if it's only microseconds, if you add that to every packet for every node, it adds up."
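For a rough sense of how those per-hop delays compound over a round trip, here is a minimal back-of-the-envelope sketch. The hop counts, per-hop delays and propagation times are illustrative assumptions, not figures from Equinix.

```python
# Back-of-the-envelope illustration of cumulative per-hop latency.
# All figures below are assumed for illustration; they are not Equinix measurements.

def round_trip_ms(hops: int, per_hop_delay_ms: float, propagation_ms: float) -> float:
    """One-way delay (propagation plus per-device switching/queuing delay),
    doubled for a request/response round trip."""
    one_way = propagation_ms + hops * per_hop_delay_ms
    return 2 * one_way

# A desktop session served from a distant data center: 20 ms of raw propagation
# and 8 intermediate devices each adding roughly 0.5 ms.
print(round_trip_ms(hops=8, per_hop_delay_ms=0.5, propagation_ms=20))  # 48.0 ms round trip

# The same session served from a nearby edge node: 5 ms of propagation, 2 hops.
print(round_trip_ms(hops=2, per_hop_delay_ms=0.5, propagation_ms=5))   # 12.0 ms round trip
```

Multiply that difference by every keystroke and screen refresh in a virtual desktop session, and the argument for pushing access nodes toward the edge becomes concrete.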

2. Don't forget about the hardware

Virtualization and cloud computing are supposed to make the hardware invisible, but that doesn't mean the people providing servers and storage can cut corners with underpowered servers or with I/O that restricts the flow of data, according to Gordon Haff, high-performance computing analyst at Illuminata.
