by Bernard Golden

Cloud and Web 2.0 Insights from Structure 09 Conference

Jul 08, 2009
Cloud Computing | Virtualization

At GigaOM's Structure 09 Conference, cloud guru Bernard Golden gained some perspective on bandwidth fears, commodity hardware questions, and real-life cloud challenges from companies already racing to keep up with computing demand.

Last week I attended the GigaOM Structure 09 Conference, which is an innovation-oriented cloud computing conference. One of the interesting things about the Silicon Valley-based event: it brought together a mix of different types of companies, emerging technology products and services, and people with cloud challenges:

1. Mainstream vendors discussing their cloud computing plans, with an emphasis on enterprise offerings: their challenge is convincing potential customers that there’s a pony somewhere in the cloud (please excuse the mixed metaphor).

2. Startups, which in the main are focused on helping cloud users increase productivity by providing services that make existing cloud providers (particularly Amazon) easier to use: their challenge is the challenge of startups everywhere, which is making people aware of their solution and getting them interested enough to try something from an untested (and unproven) vendor.

3. Large-scale Web 2.0 companies like Facebook, LinkedIn, and the like: their challenge is two-fold: (1) managing the enormous amounts of compute, storage, and network traffic they experience; and (2) if that isn’t difficult enough, scaling those resources as more and more people pile onto the apps and the companies roll out new features as well.

The large Web 2.0 companies were represented in a panel on managing huge cloud systems, which was really fascinating for two reasons:

1. Large Web 2.0 companies are somewhat akin to scouts, in that the challenges they face will inevitably come to most compute users. One of the things we talk about with companies is that the nature of compute tasks is morphing away from transaction-focused to interaction-focused, with the latter category generating much more data from clickstreams, sensor outputs, and internal and external collaboration. Most companies will eventually face the same scale challenges explored in the panel.

2. One really got the sense from these companies that they are running as fast as they can to keep up with demand growth. In other words, they’re creating solutions and fixing problems in real-time, which must make their daily lives interesting.


Another important panel was “Better Broadband: Enabling the Cloud Era,” which focused on one of the key issues confronting cloud computing—getting data to and from cloud providers.

We refer to this issue as the “skinny straw,” which vividly describes the issue: the limited bandwidth available to many organizations and enterprises to communicate with cloud-based applications and storage. This issue is particularly thorny for organizations that want to take advantage of one characteristic of cloud environments: huge storage. While clouds can be great for storing and manipulating terabyte-scale data stores, if it’s impossible to actually get the data into the cloud, the scalable storage available there is a tantalizing but unobtainable goal.
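The arithmetic behind the skinny straw is easy to sketch. Assuming a sustained uplink (the link speeds below are illustrative assumptions, not figures cited at the conference), moving even a single terabyte into a cloud provider can take days:

```python
# Rough illustration of the "skinny straw": time required to push a
# terabyte-scale data set to a cloud provider over typical uplinks.
# The speeds chosen are illustrative, not conference figures.

def transfer_days(data_terabytes: float, uplink_mbps: float) -> float:
    """Days to move `data_terabytes` at a sustained `uplink_mbps`."""
    bits = data_terabytes * 8 * 10**12       # decimal terabytes -> bits
    seconds = bits / (uplink_mbps * 10**6)   # sustained throughput only
    return seconds / 86400

for mbps in (10, 100, 1000):
    print(f"1 TB over {mbps:>4} Mbps uplink: {transfer_days(1, mbps):5.1f} days")
```

At a sustained 10 Mbps, a single terabyte takes more than nine days to upload, which is why terabyte-scale cloud storage can remain out of practical reach even when the storage itself is cheap.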

One participant in the panel was from AT&T and, after hearing other panel members complain about bandwidth, he observed that the issue is not bandwidth availability but the business case: it’s not that bandwidth cannot be obtained, it’s a question of whether one wants to pay for it. Interestingly, in a conversation with a networking company earlier in the day, they maintained that the cost of very significant bandwidth is relatively modest, around $3,500 per month: not trivial, but in the overall picture, not an insurmountable barrier for many organizations.

So, at the end of the conference, I was left with a puzzle: if the cost isn’t that great, why were the people on the panel complaining so much? The impression they left was that bandwidth availability, not cost, was the issue. I guess this will remain unresolved for the moment!

I moderated a panel on “Hosting Cloud on Commodity Hardware.” I posed the question: “If Google purchases stripped-down, custom-designed hardware to power its cloud, and mainstream vendors like IBM and HP propose high-end blade servers as the right hardware for private clouds, what does Google know that IBM and HP don’t, or, conversely, what do IBM and HP know that Google doesn’t?”

The panel members kicked this around without coming to any concrete answer. One participant proposed that Google arranges its data centers and purchases the kind of hardware it does because it has lots of sysadmins racing around on roller skates who can remove and replace inexpensive servers, which obviates the need for more robust hardware. I don’t think that’s the answer, however. Google and its brethren are famed for their system administration automation, which means they actually have a much lower ratio of people to servers than their enterprise counterparts.

Actually, it’s more likely that Google and its ilk recognize that hardware is cheap, but people are expensive, and it makes sense to optimize for labor costs via automation, rather than optimize for hardware costs. Enterprises creating private clouds continue to operate on historical assumptions that hardware is rare and precious, despite the massive increase in both hardware power and sheer numbers of overall devices. Consequently, they carry forward the traditional practice of keeping individual hardware devices up and running, rather than migrating to the practice of achieving application robustness by over-provisioning hardware, thereby protecting themselves from hardware failure.
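The over-provisioning argument rests on simple probability: if each commodity server is independently up with probability p, then N redundant replicas give an availability of 1 − (1 − p)^N. A minimal sketch, with purely illustrative availability figures (not vendor numbers):

```python
# Why over-provisioning cheap hardware can beat babying individual
# high-end boxes: redundancy compounds availability. The per-server
# availability figures below are illustrative assumptions.

def fleet_availability(per_server: float, replicas: int) -> float:
    """Probability that at least one of `replicas` independent servers is up."""
    return 1 - (1 - per_server) ** replicas

# One "robust" server at 99.9% vs. three commodity servers at 99% each:
print(f"single 99.9% server: {fleet_availability(0.999, 1):.6f}")
print(f"3x 99% commodity   : {fleet_availability(0.99, 3):.6f}")
```

Three cheap 99%-available servers yield 99.9999% availability as a group, comfortably beating a single, more expensive 99.9% machine, which is the bet Google and its ilk appear to be making.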

Participants at the conference were obviously excited about cloud computing and its potential. That’s understandable, of course, given that they were at a cloud computing conference and most of them work for cloud-oriented companies. The atmosphere was reminiscent of early Internet conferences, where observers might quite reasonably have wondered whether the attendees were overly febrile in their enthusiasm. Of course, we know how the Internet thing turned out, so maybe the ambiance wasn’t inappropriate.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.

Follow everything from CIO.com on Twitter @CIOonline