Cloud computing is slowly becoming a mainstream technology. However, even before the hype cycle has died down, cloud providers are already starting to see downward price pressures. Every hosting provider under the sun is now repackaging itself as a cloud provider, and basic capacity and services are quickly becoming commodities.
With downward price pressures, the excess capacity that providers must have on hand for peak traffic becomes a drag on profits. After the cloud gold-rush era is over, providers will need to find ways to turn that cost center into a profit center. Otherwise, they'll lose out to competitors.
"Cloud providers have had a tough time figuring out their capacity projections," says Antonio Piraino, CTO of ScienceLogic, a provider of cloud management tools. "These days service providers are judged by their margins. Many have impressive gross margins, but for almost all of them, their net margins are in single digits. Once you account for power, software licensing, real estate and so many other costs, your profits evaporate."
Piraino argues that this puts smaller cloud providers at a disadvantage. A behemoth like Amazon can afford to have excess capacity on hand, while for smaller providers, it's a challenge that must be solved in order to compete and survive.
Larger cloud providers have already started to differentiate themselves through services. That's how they compete with Amazon, but it's also how they entice large enterprises to get off the sidelines and participate in the cloud revolution.
"Larger enterprises look for holistic services. They seek out more robust solutions, with more services layered on top," says Ellen Rubin, vice president of cloud services for Terremark/Verizon. As larger enterprises sign up for cloud services, they expect detailed SLAs, security, disaster recovery capabilities and management tools.
The focus on services beyond basic capacity is starting to trickle down-market as well, and this challenge could end up being a huge opportunity for service providers.
Why Excess Capacity Is an Opportunity
Excess capacity, after all, needn't be idle capacity. If non-time-sensitive services can use the capacity, then something that previously dragged down profit margins becomes a revenue driver. With businesses already demanding services like security, disaster recovery and management, cloud providers conveniently have a service roadmap laid out ahead of them.
Of course, if cloud providers are slow to act, startups will rush in to seize this opportunity. Startup OnApp was founded to help providers roll out new services. In 2010, as providers rushed to get into the cloud game, OnApp rolled out a cloud deployment and management platform that helped simplify the cloud roll-out process.
As the company worked with its service provider customers, it soon realized that excess capacity in data centers could be put to good use, and that it could help providers earn more money. OnApp then rolled out a cloud-based content delivery network (CDN) service that relies on excess capacity at service providers all over the world. Not only do providers get an extra service to sell, but CDN services can now be offered at prices that smaller companies can afford.
Czech hosting provider SuperHosting.cz used OnApp's CDN service to roll out its own CDN service, CDN77.com. Because of how the service is designed, the company can offer "no commitment" CDN services to its customers for as low as $4.90 per 100 GB. That would have been unheard of two or three years ago.
Another service that used to be out of reach for smaller and even mid-market companies is unified communications (UC).
"UC has traditionally been deployed either as a static cloud service with few features and no customization or in a highly customizable, chassis-based environment on the company's premises," says Jon Brinton, president of Mitel's NetSolutions division.
With this excess cloud capacity available, Mitel has been able to create a UC cloud service by offering not just the UC software, but a virtual private data center on which to run and manage it. The service includes computing, memory, storage, connectivity and SIP trunking. In other words, it takes a complex, expensive solution and offers it up as an affordable, easy-to-consume service.
"This new kind of cloud-based service, which wouldn't be possible without the excess of cloud capacity we enjoy today, allows customers to rapidly deploy our solution in a scalable, manageable, resilient environment that delivers the benefits of centralization and virtualization without the heavy lifting associated with sourcing a cloud provider," Brinton adds.
Grid Computing Helps Search for Intelligent Life
Some of these advances actually harken back to the past. Before cloud became all the rage, a number of "grid computing" companies raked in serious VC funding. Probably the most famous grid project is SETI@home. This UC Berkeley-hosted project uses idle computing capacity to help search for extraterrestrial intelligence (SETI stands for Search for Extraterrestrial Intelligence). Remember the movie Contact? That's pretty much what SETI is up to.
SETI uses radio telescopes to listen for narrow-bandwidth radio signals from space. Such signals are not known to occur naturally, so detection would provide evidence of extraterrestrial technology. SETI was originally powered by supercomputers, but then researcher David Gedye proposed creating a virtual supercomputer instead. SETI@home distributes screen saver software to volunteers; when the screen saver kicks in, indicating that a PC is idle, the idle capacity is used to analyze data from the radio telescopes.
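The scavenging model described above can be sketched in a few lines of Python. This is purely illustrative: `fetch_work_unit`, `analyze` and `is_idle` are hypothetical stand-ins for SETI@home's real client-server protocol and signal-processing code.

```python
def fetch_work_unit(queue):
    # Stand-in for downloading a chunk of radio-telescope data.
    return queue.pop(0) if queue else None

def analyze(work_unit):
    # Stand-in for the real narrow-band signal analysis;
    # here we just sum the samples in the work unit.
    return sum(work_unit)

def scavenge_idle_cycles(queue, is_idle):
    """Process pending work units only while the host reports idle."""
    results = []
    while queue and is_idle():
        results.append(analyze(fetch_work_unit(queue)))
    return results

# Simulated run: three pending work units, host always idle.
pending = [[1, 2, 3], [4, 5], [6]]
print(scavenge_idle_cycles(pending, is_idle=lambda: True))  # [6, 9, 6]
```

The key design point is the `is_idle` check on every iteration: the moment the user returns, the loop yields the machine back, which is what makes "excess" capacity safe to borrow.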
Lately, most grid projects have been expensive and confined to compute-intensive fields such as drug discovery.
Now, grid's cousin, cloud computing, is poised to upend the grid model too, and here again startups are pioneering new models.
Startup Cycle Computing was founded to offer cloud-based utility supercomputing services. Recently, the company spun up a 50,000-core virtual supercomputer using Amazon Web Services. The cluster was used to help develop drug compounds for cancer research. The customer, computational chemistry company Schrödinger, used the cloud-based virtual supercomputer to analyze 21 million drug compounds in three hours at a cost of under $5,000.
Using traditional computing models, a drug discovery company would have to spend $20 to $30 million on infrastructure alone, and even with that in place, the drug analysis process would take hundreds of hours to complete, not three.
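Taking the figures above at face value (21 million compounds, three hours, 50,000 cores, under $5,000), a quick back-of-envelope calculation shows why the economics are so striking:

```python
compounds = 21_000_000
hours = 3
cost_dollars = 5_000   # upper bound quoted for the cloud run
cores = 50_000

cost_per_compound = cost_dollars / compounds            # fractions of a cent
compounds_per_core_hour = compounds / (cores * hours)   # 140 exactly

print(f"${cost_per_compound:.5f} per compound analyzed")
print(f"{compounds_per_core_hour:.0f} compounds per core-hour")
```

At roughly two hundredths of a cent per compound, the cloud run costs a vanishing fraction of the $20 to $30 million a dedicated cluster would require, before the cluster has analyzed anything at all.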
"Having access to a virtual supercomputer frees researchers up," says Jason Stowe, CEO of Cycle Computing. "In the past, companies would have to scale down their research questions to match their infrastructure. You couldn't think too big because your internal 1,500-core cluster could only handle so much."
Stowe also says he believes that excess capacity represents an opportunity for more than just cloud providers. As businesses increasingly adopt virtual desktop infrastructures, powerful servers will operate at near capacity from 9 to 5 and then sit completely idle until the next morning.
Perhaps, like people with solar panels on their roofs, in the near future businesses will be able to sell capacity back to service providers. Drug discovery, financial risk management and engineering projects could all benefit from cheap, large-scale virtual supercomputing.
And what company wouldn't want to be able to say that it helped cure cancer while lowering its computing costs in the process?
Jeff Vance is a Los Angeles-based freelance writer who focuses on next-generation technology trends. Follow him on Twitter @JWVance.