With the increasing maturity and acceptance of cloud infrastructure services in the enterprise, migrating important workloads to the cloud—public, private or hybrid—is becoming a tantalizing proposition. But migrating mission-critical applications, especially ones built on legacy architecture, can be challenging to say the least.
Many enterprises are now planning to move a significant portion of their infrastructure to the cloud, says Dave LeClair, senior director of strategy at Stratus Technologies, a specialist in high availability solutions.
A recently released survey conducted by Stratus, North Bridge Venture Partners and GigaOM Research found that 75 percent of firms now report using some sort of cloud platform, and projected that the worldwide addressable market for cloud computing will reach $158.8 billion by 2013, an increase of 126.5 percent from 2011.
"Enthusiasm for the cloud continues to grow, moving beyond historical concerns such as security," LeClair says. "But what is also clear from this year's results is that for cloud adoption to continue accelerating beyond its current pace, companies are going to be looking to vendors to enable always-on infrastructures that support their more critical business applications in this new environment."
Companies need to take a hard look at which applications they are putting in the cloud, then consider what's involved in managing this shift from a resource, skill set, cost and complexity standpoint, LeClair says. "We know first-hand these considerations are not a one-size-fits-all answer, and rewriting applications for the cloud will not be the solution in many cases."
Availability an Issue for Mission-Critical Apps in the Cloud
The value proposition of moving applications to the cloud seems clear: It can vastly improve agility and the scalability of applications. In many cases, your mission-critical applications stand to benefit the most from cloud infrastructure. But availability remains the bugbear, LeClair says.
Clouds are architected for scale and elasticity, LeClair says. Individual cloud components may fail and not get replaced. Unless the workload itself is architected to tolerate and work around those failures, you can run into serious problems, he says.
"We're seeing a lot of basic applications moved over," he says. "We're seeing new applications being built in a cloud environment. But we're not seeing a lot of tier 1 applications moving over."
The High Price of Downtime of Mission-Critical Apps
Downtime of a mission-critical application paralyzes a business. Just before Thanksgiving last year, for example, United Airlines suffered a nationwide glitch in the software that controls its ground operations, which caused a two-hour outage. That, in turn, caused United passenger delays and missed flights across the country. And on Christmas Eve, a malfunction in Amazon's AWS cloud infrastructure kept Netflix from streaming content to millions of customers just as they were sitting down to watch their favorite shows and movies.
According to research by the Aberdeen Group, the estimated average cost of downtime is now $138,888 per hour.
"Over 50 percent of IT decision makers want to have less than 30 minutes of downtime a year," LeClair says. "They're actually not getting anything close to that [in the cloud]. They're actually getting two 9s availability today. What they're asking for is four 9s or five 9s."
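The gap between two 9s and five 9s is easy to quantify: each additional "9" of availability cuts the allowable annual downtime by a factor of ten. A quick illustrative sketch, using the Aberdeen Group hourly cost estimate cited above:

```python
# Annual downtime implied by an availability percentage, and what it
# costs at the Aberdeen Group's estimate of $138,888 per hour.

HOURS_PER_YEAR = 24 * 365   # 8,760 hours
COST_PER_HOUR = 138_888     # Aberdeen Group estimate cited above

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours per year a system at this availability level may be down."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("two 9s", 99.0), ("three 9s", 99.9),
                   ("four 9s", 99.99), ("five 9s", 99.999)]:
    hours = annual_downtime_hours(pct)
    print(f"{label:>9} ({pct}%): {hours:8.3f} h/yr ≈ ${hours * COST_PER_HOUR:,.0f}")
```

Two 9s permits roughly 87.6 hours of downtime a year; the "less than 30 minutes" that decision makers say they want implies availability better than four 9s.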
In the end, LeClair says, certain applications may simply never migrate to the cloud because the expense and risk don't justify it. They'll remain on bare metal, or in virtualized but noncloud environments. These applications may require dedicated hardware for performance or functionality reasons. Or perhaps regulatory compliance demands locked-down, secure environments.
Enterprises will have to evaluate each application case-by-case to determine whether it's best-suited to a physical environment, virtualized environment, private cloud, public cloud or hybrid cloud, he says. In every case, trade-offs will be required.
Three Important Availability Considerations
The first step to take when considering any application deployment, LeClair says, is to evaluate the cost of downtime.
"Whether you're looking at a cloud-based opportunity or an on-premises opportunity, we really recommend that you know the cost of downtime of your applications," he says. "The cost of downtime can be measured in dollars, reputational damage—it can even be measured in loss of lives in the case of public safety applications. This lets you understand the level of availability you actually need to apply to these applications and how best to deploy them."
If you do decide to go the service provider route, LeClair says it's vital to examine the service level agreements (SLAs) closely to determine what actually happens if you don't get the availability you're promised.
"Some SLAs may say things like, 'we guarantee 100 percent uptime,' but look at the actual contract details," he says. "It might say, 'If we fail to meet that, we'll give you a 20 percent credit on next month's bill.' A typical tier 1 application may cost you $150,000 per hour of downtime and you're going to credit me 20 bucks? Great. The solutions need to be guaranteed at a much stronger level."
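LeClair's arithmetic on SLA remedies can be made concrete. A minimal sketch, assuming a hypothetical $100 monthly bill (the bill amount is not from the article; the $150,000-per-hour figure is the one he cites):

```python
# Compare a percentage-of-bill SLA credit with the actual business
# cost of an outage. Monthly bill is a hypothetical example figure.

def sla_credit(monthly_bill: float, credit_pct: float) -> float:
    """Credit the provider owes under a percentage-of-bill SLA clause."""
    return monthly_bill * credit_pct / 100

def outage_cost(hours_down: float, cost_per_hour: float) -> float:
    """Business loss from an outage of the given length."""
    return hours_down * cost_per_hour

credit = sla_credit(monthly_bill=100.0, credit_pct=20)    # $20 credit
loss = outage_cost(hours_down=1, cost_per_hour=150_000)   # $150,000 loss
print(f"SLA credit: ${credit:,.2f}; actual loss: ${loss:,.0f}")
```

The credit covers a vanishing fraction of the loss, which is the point: read the remedy clause, not just the advertised uptime number.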
He notes that Stratus offers a $50,000 guarantee if customers suffer any downtime at all.
LeClair says it's also essential to consider data protection.
"It's one thing to protect the application at the transaction layer, but you also need to consider other kinds of downtime that can occur," he says. "What happens if a tsunami takes down my entire building? How far back do I need to back up my data?"
Thor Olavsrud covers IT Security, Big Data, Open Source, Microsoft Tools and Servers for CIO.com. Follow Thor on Twitter @ThorOlavsrud.