Last week marked the second OpenStack Design Summit. OpenStack, if you’re not familiar with it, is an open source project founded through a joint effort and code contribution of NASA and Rackspace; however, the project has grown rapidly and has many more participants today. Companies participating in the OpenStack project include Cisco, Dell, NTT, Citrix, and many others.
The energy at the conference was quite amazing and attendance went well beyond what the organizers expected. In the interest of full disclosure,
I chaired the Service Provider track, which brought together presenters from AT&T, KT (Korea Telecom), and other companies rolling out OpenStack
offerings. I expect that this track will become a fixture at future Design Summits, as many service providers throughout the world will be interested in a
low-cost, high-quality open source-based cloud computing software stack.
Beyond the technical presentations there were a couple of items that I found really interesting. They both applied well beyond OpenStack itself and
offered insights and opportunities for users no matter what cloud infrastructure is used.
The first item was the keynote presentation by Neal Sample, VP Architecture for eBay. eBay is considering Rackspace as a platform for its use of public cloud (although Sample was careful to note that eBay is considering Azure as well).
What was fascinating about Sample’s presentation (which you can review on SlideShare here) was that he walked the audience through eBay’s thinking on the topic and offered real financials about the numbers driving its decision.
eBay’s Load Lessons
The fundamental reality confronting eBay is that its computing use is extremely spiky (see slide 4 of the presentation). Even after taking all the obvious, straightforward actions (e.g., moving non-critical computing to off hours to reduce peak load, and moving remaining excess computational load to off-hours locations to take advantage of unused capacity), eBay still faces peaky load that has, in the past, required it to own more capacity than it typically needs.
So eBay set out to investigate how it could leverage public cloud computing to reduce its computing costs. eBay spends about $80 million per year
on data centers, and each “computational unit” (its normalized measure) runs it around $1.07. What eBay found was that, for a broad range of public
cloud computing costs, it could reduce its total spend significantly. In fact, even if the public cloud cost for a comparable computational unit was up to
four times eBay’s internal cost — in other words, even if a computational unit from a cloud provider cost $4.28 — eBay would still save
money. A lot of money. (See slide 10 for the graphical presentation of the cost curves).
Why is this? The primary factor driving the public cloud computing benefit is the fact that an eBay data center’s cost structure is almost entirely fixed
— $0.88 of the $1.07 unit cost is incurred whether any work is done in the data center or not (see slide 8 of the presentation).
If eBay can avoid purchasing computing capacity that sits idle by using a public provider, it can save money even if the public provider costs
significantly more than eBay’s own resources.
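To make the arithmetic concrete, here is a minimal sketch of that break-even logic. The per-unit figures ($1.07 internal, $0.88 of it fixed, $4.28 for the four-times cloud case) come from the presentation as quoted above; the demand series is invented purely for illustration:

```python
# Per-unit figures from Sample's presentation; demand numbers are
# hypothetical, chosen only to illustrate a spiky workload.
INTERNAL_COST = 1.07   # $ per computational unit of owned capacity
FIXED_SHARE   = 0.88   # fixed portion, paid even when capacity is idle
CLOUD_COST    = 4.28   # 4x the internal unit cost

# Illustrative spiky demand: a steady baseline with occasional peaks.
demand = [100, 100, 100, 180, 100, 100, 250, 100, 100, 140]

# Option A: own enough capacity for the worst peak. The fixed cost
# accrues on ALL owned units every period, used or not.
peak = max(demand)
own_everything = sum(
    peak * FIXED_SHARE + used * (INTERNAL_COST - FIXED_SHARE)
    for used in demand
)

# Option B: own only the baseline, and burst the excess to a public
# cloud, paying the (much higher) cloud rate only for units consumed.
baseline = min(demand)
hybrid = sum(
    baseline * FIXED_SHARE
    + min(used, baseline) * (INTERNAL_COST - FIXED_SHARE)
    + max(used - baseline, 0) * CLOUD_COST
    for used in demand
)

print(f"own everything: ${own_everything:,.2f}")
print(f"hybrid burst:   ${hybrid:,.2f}")
print(f"savings:        {1 - hybrid / own_everything:.0%}")
```

Even at four times eBay’s internal unit cost, the hybrid option comes out cheaper in this toy model, because the owned fleet no longer pays $0.88 per unit for capacity that sits idle outside the peaks; at more typical cloud prices the gap widens dramatically.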
Essentially, this is an example that illustrates something we preach all the time — data center utilization rates are the key to cloud computing
economics. Unless one can guarantee that a cloud data center will operate — on a sustained basis — at 70% or more of capacity, it will be
hopelessly uncompetitive from a financial perspective.
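The utilization point falls out of the same numbers. Treating eBay’s fixed/variable split as an assumed cost model, the effective cost of each unit actually consumed climbs steeply as utilization falls, because the fixed $0.88 is spread over fewer used units:

```python
# Assumed cost model: $0.88 fixed plus $0.19 variable per computational
# unit, per the eBay figures quoted above. Fixed cost is paid on all
# capacity, so each *used* unit must carry fixed / utilization.
FIXED, VARIABLE = 0.88, 0.19

for utilization in (0.9, 0.7, 0.5, 0.3):
    effective = FIXED / utilization + VARIABLE
    print(f"{utilization:.0%} utilization -> ${effective:.2f} per used unit")
```

At 70% utilization a unit effectively costs about $1.45; at 30% it is over $3, which is why a lightly loaded cloud data center cannot compete on price.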
If eBay can manage its capacity such that its own data centers operate at high utilization rates and it can harvest additional capacity from public
providers at anything like typical rates, it will drop its overall computing cost by something like 40%.
Dell’s Secret: Sleds
The second interesting item at the Design Summit related to hardware that Dell was showing off. The company demonstrated a high-density
collection of servers and storage. Particularly interesting was the configuration of the server portion of this collection. Dell’s offering does not use the
common blade design that is often used to increase computing density. This is because blade designs commonly fail to offer redundant system services, especially network connections; if the network connectivity of the blade chassis fails, the entire blade collection is unable to communicate.
Dell’s design, by contrast, provides separate system resources for each computing device, which Dell refers to as a “sled,” although to me they looked more like trays. Each sled holds one or two sockets, a boatload of memory, and network connectivity completely separate from the other sleds. The only resource shared among all the sleds in a system is the power supply, and two are included for robustness.
Each sled is connected to 12 2.5-inch drives, making very large storage capacity part of this system. You can see an actual sled and get a sense of how they are constructed in this video I shot of Rob Hirschfeld of Dell describing one.
Dell’s hardware demonstration is indicative of another aspect of cloud computing: the rapid evolution of different components within the total
aggregation of resources necessary to support a cloud computing environment. Two weeks ago I wrote
about Facebook’s Open Compute initiative, which addresses the physical infrastructure of a cloud environment; Dell’s offering complements it with a high-density, power-efficient computing platform. Suddenly, data centers constructed with existing components seem horribly out-of-date and inefficient.
I don’t expect that Dell’s offering is the pinnacle of what we’ll see on the hardware side — far from it. But it is, for sure, one of an ongoing series of steps that will be taken to support the vastly higher scale of computing in the future, offering products that push the boundaries of capability and cost.
These are just two of the elements that struck me about the Design Summit, and this doesn’t even address the main subject of the conference,
which was to enable collaboration and help push OpenStack toward increased functionality and quality. As I noted at the beginning of this post, the
energy at the conference was palpable.
What these two elements do illustrate, however, is how cloud computing continues to morph as providers and users gain more experience with
the domain. eBay’s presentation indicates why cloud computing has so much end user attention — the cost structures associated with
traditional computing environments in the face of scale growth make existing infrastructure approaches obsolete. Dell’s “sled” computing shows how
new infrastructure products are being created by vendors to better suit these new computing environments.
From my perspective, cloud computing, far from falling into Gartner’s famous (or notorious, if you will) “trough of disillusionment,” appears to be picking up steam and gaining even more prominence. To quote Al Jolson, the star of “The Jazz Singer,” the first motion picture talkie: “You ain’t seen nothin’ yet!”
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in
virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.
Follow Bernard Golden on Twitter @bernardgolden. Follow everything from CIO.com on Twitter.