Last week marked the second OpenStack Design Summit. OpenStack, if you're not familiar with it, is an open source project founded through a joint effort and code contribution by NASA and Rackspace; the project has grown rapidly, however, and has many more participants today. Companies participating in the OpenStack project include Cisco, Dell, NTT, Citrix, and many others.
The energy at the conference was quite amazing and attendance went well beyond what the organizers expected. In the interest of full disclosure, I chaired the Service Provider track, which brought together presenters from AT&T, KT (Korea Telecom), and other companies rolling out OpenStack offerings. I expect that this track will become a fixture at future Design Summits, as many service providers throughout the world will be interested in a low-cost, high-quality open source-based cloud computing software stack.
Beyond the technical presentations, there were a couple of items that I found really interesting. Both apply well beyond OpenStack itself and offer insights and opportunities for users no matter what cloud infrastructure is used.
The first item was the keynote presentation by Neil Sample, VP Architecture for eBay. eBay is considering Rackspace as a platform for its use of public cloud (although Sample was careful to note that eBay is also evaluating Azure).
What was fascinating about Sample's presentation (which you can review on SlideShare here) was that he walked the audience through eBay's thinking on the topic and offered real financials about the numbers driving its decision.
eBay's Load Lessons
The fundamental reality confronting eBay is that it has extremely spiky computing use (see slide 4 of the presentation). Even after taking all the obvious, straightforward actions (e.g., moving non-critical computing to off hours to reduce peak load, and moving remaining excess computational load to off-hours locations to take advantage of unused capacity), eBay still faces spiky load that has, in the past, required it to own more capacity than it typically uses.
So eBay set out to investigate how it could leverage public cloud computing to reduce its computing costs. eBay spends about $80 million per year on data centers, and each "computational unit" (its normalized measure) runs it around $1.07. What eBay found was that, for a broad range of public cloud computing costs, it could reduce its total spend significantly. In fact, even if the public cloud cost for a comparable computational unit was up to four times eBay's internal cost — in other words, even if a computational unit from a cloud provider cost $4.28 — eBay would still save money. A lot of money. (See slide 10 for the graphical presentation of the cost curves).
Why is this? The primary factor driving the public cloud computing benefit is the fact that an eBay data center's cost structure is almost entirely fixed — $0.88 of the $1.07 computational unit remains whether there is any work done in the data center or not (see slide 8 of the presentation).
If eBay can avoid purchasing computing capacity that sits idle by using a public provider, it can save money even if the public provider costs significantly more than eBay's own resources.
Essentially, this is an example that illustrates something we preach all the time — data center utilization rates are the key to cloud computing economics. Unless one can guarantee that a cloud data center will operate — on a sustained basis — at 70% or more of capacity, it will be hopelessly uncompetitive from a financial perspective.
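To make the fixed-cost arithmetic concrete, here is a minimal sketch of the owned-versus-rented comparison. The $1.07 total and $0.88 fixed figures per computational unit come from the presentation; the simple duty-cycle framing (an owned unit pays its fixed cost every period, a rented unit is paid for only while in use) is my own illustrative assumption, not eBay's actual model.

```python
# Illustrative cost model -- the per-unit dollar figures are from the talk,
# but the duty-cycle framing is a simplifying assumption of this sketch.

FIXED = 0.88     # $/period per owned unit, paid whether the unit works or idles
VARIABLE = 0.19  # additional $/period per owned unit while it is doing work

def owned_cost(duty_cycle):
    """Per-period cost of owning a unit that does work duty_cycle of the time."""
    return FIXED + VARIABLE * duty_cycle

def rented_cost(price, duty_cycle):
    """Per-period cost of renting a public cloud unit only while it is needed."""
    return price * duty_cycle

def break_even_price(duty_cycle):
    """Public cloud price at which renting and owning cost the same."""
    return FIXED / duty_cycle + VARIABLE

# A fully utilized owned unit costs eBay's quoted $1.07 per period...
print(round(owned_cost(1.0), 2))          # 1.07
# ...but a unit needed only ~21% of the time breaks even against a cloud
# price near $4.28 -- roughly the 4x figure from slide 10.
print(round(break_even_price(0.215), 2))  # 4.28
```

The same formula illustrates the utilization point: a data center running at utilization u delivers work at an effective cost of FIXED / u + VARIABLE per unit, so in this sketch dropping from 70% to 30% utilization more than doubles the effective unit cost.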
If eBay can manage its capacity such that its own data centers operate at high utilization rates and it can harvest additional capacity from public providers at anything like typical rates, it will drop its overall computing cost by something like 40%.