Amazon Hails Era of "Utility Supercomputing"
Thu, March 08, 2012
Techworld — Cloud computing giant Amazon Web Services is heralding the era of utility supercomputing, whereby massive computational resources and storage requirements can be accessed on demand.
Speaking at the launch of Intel's Xeon E5 processor family in London this week, Dr. Matt Wood, AWS technology evangelist in residence, said that cloud computing is a utility service like electricity and gas, in that it allows consumers and businesses to pay for consumption of a service on demand.
The cloud's capacity for storing and processing Big Data is limited only by the infrastructure it sits on, explained Wood. While technology can act as "friction", extending the time it takes to move from an idea to a result, more powerful processors are helping to reduce this lag, opening up new opportunities for a whole range of industries.
In particular, scientific and financial organisations with massive computational demands will be able to rent resources from the cloud to do their work - whether product modelling, simulation or informatics - without having to invest in huge infrastructure of their own.
"We are entering the era of utility supercomputing where anybody can dial up computational resources and massive storage requirements on the fly," said Wood. "Traditionally these organisations would have to provision for 10-15 percent over the peaks in demand, but the cloud allows for bursty scalability, lowering the barriers to entry and allowing them to spend at least 70 percent of their time on differentiated work, rather than keeping the lights on."
Wood's assertion builds on the ideas of Jason Stowe, CEO of Cycle Computing, who first proposed the concept of utility supercomputing in October 2011. Cycle Computing helps researchers and businesses run supercomputing applications on Amazon's EC2 infrastructure.
"The problem is, today, researchers are in the long-term habit of sizing their questions to the compute cluster they have, rather than the other way around. This isn't the way we should work. We should provision compute at the scale the questions need," said Stowe in October.
"We're talking about taking questions that require a million hours of computation, and answering them in a day. Securely. At reasonable cost.
"Scratch the surface of this idea, and you'll see a world of research the way I see it. No more waiting. No more R&D folks task-switching for days or weeks while compute is run. Only answers at the speed of thought, at the speed of invention, at the scale of the question."
Amazon in November launched a public beta of Cluster Compute Eight Extra Large (CC2), its most powerful cloud service yet. Every CC2 instance has two Intel Xeon E5 processors, each with eight hardware cores, as well as 60.5GB of RAM and 3.37TB of storage. It communicates with other instances - or virtual servers - in a cluster using 10 Gigabit Ethernet.
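Stowe's "million hours in a day" claim can be checked with back-of-envelope arithmetic against the CC2 specs above. The sketch below assumes perfect parallel scaling (which real workloads rarely achieve) and counts one core-hour of work per core per hour:

```python
# Back-of-envelope: how many CC2 instances would it take to compress
# a million core-hours of computation into a single day?
# Assumes ideal parallel scaling - an illustrative upper bound only.

CORE_HOURS_NEEDED = 1_000_000   # "a million hours of computation"
DEADLINE_HOURS = 24             # "answering them in a day"
CORES_PER_CC2 = 2 * 8           # two Xeon E5 chips, eight cores each

# Ceiling division via negation: -(-a // b) rounds up for positive ints.
cores_needed = -(-CORE_HOURS_NEEDED // DEADLINE_HOURS)
instances_needed = -(-cores_needed // CORES_PER_CC2)

print(cores_needed)      # 41667 cores running in parallel
print(instances_needed)  # 2605 CC2 instances
```

So on the order of 2,600 CC2 instances, rented for a single day, would match the scale Stowe describes; the same capacity bought outright would sit idle between questions.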