Most of today's applications, and all of tomorrow's, are built with the cloud in mind. That means yesterday's infrastructure, and the accompanying assumptions about resource allocation, cost and development, simply won't do.
Cloud computing is starting a revolution in applications. If your perspective on cloud computing is that it’s like your established computing mode, just outsourced to an external provider, you’re wrong — and you face a painful transition period during which your assumptions and practices will be wrenched and dislocated beyond all recognition.
A combination of the infrastructure capabilities of cloud environments and next-generation application requirements is causing this transformation. The result will be something faintly recognizable as an application but vastly different in design and operation from what you used to call an “application.”
You can easily understand the reasons for this sea change by contrasting the characteristics of traditional applications with the new applications enabled by and depending upon cloud infrastructure. The following table illustrates key differences between traditional and cloud applications:

  Traditional applications                 Cloud applications
  Predictable load, forecast upfront       Unpredictable, highly variable load
  Known user population (employees)        Unknown mix of customers, partners and bystanders
  Limited set of IT-approved devices       Ever-growing variety of devices
  Dedicated, static infrastructure         Shared, elastic infrastructure
  Fixed capital cost, paid upfront         Pay-per-use cost that tracks consumption
As you can see, traditional and cloud applications differ in several critical aspects. The assumption underlying traditional applications is that their characteristics can be forecast upfront: How many people will use them, what devices will be used to access them and how much infrastructure will be required to run them.
Cloud applications mean elastic infrastructure
Today’s applications are completely different. It’s impossible to predict load; the very makeup of the user population is unknown, since it includes not only employees (traditionally the only significant user population) but also an unknown number of customers, partners and interested parties — that is, floating populations of bystanders directed to your application due to its novelty or notoriety.
By the way, that unknown user population won’t be accessing your application with a limited number of IT-approved devices. It will be using an ever-increasing number of computing devices (PCs, tablets, and smartphones) and, in the future, objects barely recognizable as “computing devices” — think smartwatches and single-application specialized hardware, not to mention special-purpose devices such as medical monitoring machines.
The back end of your application won’t run in the stable confines of a dedicated infrastructure, either. First off, it probably won’t be your infrastructure; it will come from an external provider such as Amazon Web Services, Google or Microsoft. It certainly won’t be running on dedicated hardware; the practices and economics of cloud providers depend upon shared infrastructure that fluidly shifts from one customer’s workload to another’s.
The configuration of your application’s infrastructure definitely won’t be static, either. It will grow and shrink as application loads vary. This will be due in large part to the way you’re charged for the infrastructure resources you use.
In traditional application design, you forecast how much infrastructure you need, then purchase that amount as a capital expenditure. While it’s difficult to predict how many resources you’ll need to run an application, the amortized cost of those resources is consistent: It’s the inexorable depreciation of the infrastructure, which never varies whether the resources are 100 percent loaded or sitting completely idle.
Cloud applications, on the other hand, impose a cost for all resource consumption. Resources that aren’t performing any useful work still cost money, so there’s a strong incentive to shed them. Your infrastructure won’t be static.
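The cost contrast is easy to see with a bit of arithmetic. The sketch below uses entirely hypothetical prices and amortization periods; the point is only the shape of the two cost curves, not the specific numbers.

```python
# Hypothetical numbers for illustration only.
SERVER_CAPEX = 12_000          # purchase price of one dedicated server
AMORTIZATION_MONTHS = 36       # straight-line depreciation period
CLOUD_RATE_PER_HOUR = 0.10     # assumed pay-per-use rate for a comparable instance

def traditional_monthly_cost(utilization: float) -> float:
    """Depreciation is fixed: the bill is identical at 0% or 100% load."""
    return SERVER_CAPEX / AMORTIZATION_MONTHS

def cloud_monthly_cost(hours_running: float) -> float:
    """Cloud cost tracks consumption: every running hour is billed."""
    return hours_running * CLOUD_RATE_PER_HOUR

# A dedicated server idling all month still incurs the full depreciation charge ...
assert traditional_monthly_cost(0.0) == traditional_monthly_cost(1.0)

# ... while a cloud instance that only runs during business hours costs a third as much.
full_month = 24 * 30
print(cloud_monthly_cost(full_month))        # running nonstop
print(cloud_monthly_cost(full_month / 3))    # running 8 hours a day
```

Under any realistic rates, the dedicated server's cost is a flat line regardless of load, while the cloud bill rises and falls with usage; that is the economic pressure behind elastic infrastructure.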
I’ve heard some people pooh-pooh the need for highly variable public cloud environments, based on the fact that most IT applications run with predictable loads and therefore can leverage static infrastructure environments. Don’t use this breezy assumption as a crutch for avoiding the hard work of architecting applications for cloud computing.
The fact is, traditional infrastructure is inflexible and extremely difficult to modify — and impossible to modify quickly. Therefore, traditional IT environments perform as Procrustean beds: Fixed environments in which applications are “right-sized” through stretching or lopping without adjusting the size of the bed to fit the need.
That approach won’t be acceptable for next-generation applications. Once it’s obvious that these artificial limitations are no longer necessary, developers will insist that whatever infrastructure is used must support flexibility and elasticity. Critically, once developers internalize the assumption that infrastructure is easily available and malleable, they’ll discover new application needs that require cloud infrastructure environments — so that once-tenable assumption about the highly stable nature of application infrastructure requirements will be outmoded.
As the saying goes, past experience is no guarantee of future performance. Simply put: Future applications are all cloud applications and need to be designed and operated as such.
The need for better application management
With this in mind, these four assumptions and practices should guide you as you design and implement future applications:
Assume a dynamic application topology. You’ll have virtual machines joining and leaving the application pool frequently, so be sure your application can gracefully accept and release resources. One way to enable dynamic application topology is to …
Separate code and state. It’s tempting to use sticky state settings in the load balancer to direct all session interactions to a single server. However, that can cause unbalanced server loads. Worse, if a server crashes, user state can be lost; that can be disastrous.
The right approach is to move state into a separate storage location, such as a database with built-in redundancy, so that any server can pick up state and continue the session interaction. Of course, this can make the database a bottleneck, so prepare for the next step and …
Move state into cache. Cache tiers keep session data in fast RAM, obviating the need for time-consuming disk access and improving session data retrieval, thereby improving overall application performance. Cache solutions typically incorporate redundant infrastructure, protecting against data loss by resource failure. It’s not uncommon to have two or more caching tiers in a highly dynamic app.
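A minimal sketch of what externalizing session state looks like in code. In production the backing store would be a redundant cache tier such as Redis or Memcached; a plain dictionary stands in here so the sketch is self-contained, and all names are illustrative.

```python
import json

class SessionStore:
    """Keeps session state outside the web server so any server can resume it."""

    def __init__(self, backend=None):
        # In a real deployment, `backend` would be a cache client (e.g. Redis).
        self._backend = backend if backend is not None else {}

    def save(self, session_id: str, state: dict) -> None:
        # Serialize so the entry could live in an external cache unchanged.
        self._backend[session_id] = json.dumps(state)

    def load(self, session_id: str) -> dict:
        raw = self._backend.get(session_id)
        return json.loads(raw) if raw is not None else {}

# Server A writes the session; server B (a separate process in real life)
# picks it up after A crashes. No sticky load-balancer routing required.
store = SessionStore()
store.save("sess-42", {"user": "alice", "cart": ["book"]})
state_on_server_b = store.load("sess-42")
print(state_on_server_b["cart"])
```

Because no server holds session state in its own memory, instances can join or leave the pool freely, which is exactly the dynamic topology described above.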
Naturally, you’re now faced with another challenge: Managing all these dynamic resources and multiple tiers. This suggests you should …
Leverage a sophisticated application management solution that treats your application topology as a coordinated set of resources and can dynamically (and automatically) add and remove resources, ensuring there are always enough resources available. Automated management also removes the need for error-prone manual operations interaction, a common source of application failure.
Finally, and quite importantly, dynamically adjusting the amount of resources assigned to an application ensures that resource cost matches user load. This should allow better cost/value balancing.
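The management loop those last two points describe can be sketched in a few lines. This is a toy reconciler, not any particular product's API: the capacity figure, minimum fleet size and function names are all invented for illustration.

```python
REQUESTS_PER_INSTANCE = 100   # assumed capacity of one instance
MIN_INSTANCES = 2             # keep redundancy even when idle

def desired_instances(current_load: int) -> int:
    """Return how many instances the current request rate calls for."""
    needed = -(-current_load // REQUESTS_PER_INSTANCE)  # ceiling division
    return max(needed, MIN_INSTANCES)

def reconcile(running: int, current_load: int) -> int:
    """One pass of the control loop: grow or shrink the fleet toward the target."""
    target = desired_instances(current_load)
    if target > running:
        # a real manager would launch an instance for each missing slot here
        return target
    if target < running:
        # ... or drain and terminate the surplus instances here
        return target
    return running

fleet = 2
for load in (150, 750, 90):          # load rises sharply, then falls off
    fleet = reconcile(fleet, load)
    print(load, "requests ->", fleet, "instances")
```

Running the loop automatically, on a schedule or on monitoring alerts, is what keeps resource cost matched to user load without error-prone manual intervention.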
Developers building cloud applications have new expectations
In closing, let’s return to a statement made above: “Once it’s obvious that these artificial limitations are no longer necessary, developers will insist that whatever infrastructure is used must support flexibility and elasticity.”
It’s important that we, as an industry, internalize the implications associated with the new developer expectations. The history of IT is that new platforms enable new application types that rapidly become the vast preponderance of every company’s total application portfolio. Oh, and by the way, that portfolio explodes in size, since every new platform represents at least an order of magnitude cost/benefit improvement.
Given what’s now available via public cloud computing, here are the new benchmarks developers expect:
Resource availability within minutes, not hours or days.
Full infrastructure malleability; pairing on-demand virtual machines with firewall changes that take weeks is unacceptable.
A rich set of supporting services, such as highly scalable object storage, redundant database, queues and email.
I believe we’re working in the most exciting time ever for IT. Ten years from now, the landscape of what we call “IT” will look so different from today that we’ll scarcely recognize it. The key is to recognize that all value of the field of IT is associated with applications. The critical task is to optimize our environments, our processes and our thinking around that reality.
Named by Wired.com as one of the 10 most influential people in cloud computing, Bernard Golden serves as vice president of strategy for ActiveState Software, an independent provider of CloudFoundry. He is the author of four books on virtualization and cloud computing, his most recent book being Amazon Web Services for Dummies. Learn more about him at www.bernardgolden.com.