by Bernard Golden

Cloud Computing: The Future of IT Application Architectures

Jan 19, 2010

Assumptions we have traditionally used to design application architectures are increasingly outmoded, says Bernard Golden. Application architectures will change just as much as IT operations over the next five years due to the nature of cloud computing applications.

Last week I wrote about the impact cloud computing will have on IT operations. I noted that the increasing scale of data dramatically changes the expectations of how data centers are operated. This week I want to turn to how cloud computing affects IT application architectures, specifically examining the flip side of the coin of data growth: application load. Succinctly put, the assumptions we have traditionally used to design application architectures are increasingly outmoded due to the changing nature of applications. Application architectures are going to change — just as much as IT operations — over the next five years due to the nature of cloud computing applications.

What are the reasons that applications are going to change so much?

All of that big data means software applications will need to change to manage it.

As I noted last week, IDC projections indicate that the average company will experience a seven-fold increase in unstructured data (think click stream capture and video storage, etc., etc.), accompanied by a doubling of structured data (think database row-and-column info). I actually think that IDC’s projections are understated on the structured data side, because of the constrained assumptions it (very reasonably) brought to its analysis. The remarkable decrease in the cost of IT brought about by cloud computing will — no surprise to economics majors everywhere — lead to much larger amounts of computing being done, which, in its turn, will lead to larger application architectures and topologies.

The Business Use of IT is Changing

In the past, IT was used to automate repeatable business processes — taking something that already exists and computerizing it. The archetype for this kind of transformation is ERP — the automation of ordering, billing, and inventory tracking. That “paving the cow paths” approach to computing is changing. Today, businesses are delivering new services infused and made possible by IT — in other words, creating new offerings that could not exist without IT capabilities. A dramatic example of this is the way music services have developed. Pandora leverages the knowledge of experts to deliver customized song streams to its customers; Pandora tracks the preferences and feedback of every one of its listeners to ensure each receives a personalized offering. Pandora’s service could not exist without the support of massive amounts of computing power, which forms the core of the business. Less dramatic, but no less reliant on IT infusion, is the personalized service offered by high-end hotel chains. The personal attention that employees offer guests — going *way* beyond the “prefers non-smoking room” of yore to, say, “likes to see avant-garde theater and new museum exhibitions” — enables highly specific employee interaction with customers. And, guess what, it’s all driven by new applications.

The Nature of Applications is Changing

Heretofore, most computing has been driven by human action — someone making a purchase, requesting a Web page, and so on. In the future, a growing percentage of computing will be driven by non-human activities from devices like sensors. As an example, much has been made of the move to smart electric meters — instead of your meter being read by a human walking through your neighborhood, the meter itself will connect back to the electric company data center and upload its billing information. However, one of the other ballyhooed characteristics of these smart meters is their ability to give real-time readouts of load to users. This data about electric usage — the metadata, if you will — will be invaluable to electric companies to help understand how usage changes with immediate pricing feedback. This will result in far more data than just a monthly reading being sent to their data centers. And — wait for it — that data will be transmitted in irregular patterns, leading to highly variable loads, thus affecting the nature of application architectures.

Given how the number, type, and nature of applications are changing, what does this imply for the future of applications and, to address the specific topic of this post, the future architecture of applications? The implications are fourfold:

Application load variability will increase: Variable application use is the driver behind the vast changes in resource load. For hotels, the traditional busy times are early morning (checkout) and late afternoon/early evening (checkin). In the future, personalized attention will mean high application load at other times. In essence, application load will vary throughout the entire day — all 24 hours of it — rather than being focused during business hours. Applications will need to be much more able to dynamically scale.

Application interfaces will change: Instead of being human- (and thereby screen-) focused, data will pour into applications from other applications, sensors, file uploads, and, undoubtedly, things we haven’t even thought of yet. So service interfaces and upload interfaces will join terminal interfaces. Applications will need to be able to gracefully — and dynamically — add new data streams as inputs.
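
The ability to gracefully add new data streams can be sketched as a small dispatcher that registers and routes handlers at runtime. This is an illustrative sketch only; the `InputDispatcher` class and the `smart_meter` stream name are invented for the example, not taken from any particular product:

```python
import threading

class InputDispatcher:
    """Routes incoming records to per-stream handlers. Streams
    (sensors, file uploads, partner feeds) can be added or removed
    while the application keeps running."""

    def __init__(self):
        self._handlers = {}
        self._lock = threading.Lock()

    def add_stream(self, name, handler):
        # Register a new input source dynamically -- no restart needed.
        with self._lock:
            self._handlers[name] = handler

    def remove_stream(self, name):
        with self._lock:
            self._handlers.pop(name, None)

    def dispatch(self, name, record):
        with self._lock:
            handler = self._handlers.get(name)
        if handler is None:
            raise KeyError("no handler registered for stream %r" % name)
        return handler(record)

# A hypothetical sensor feed is wired in while the app is live.
dispatcher = InputDispatcher()
dispatcher.add_stream("smart_meter", lambda reading: reading["kwh"] * 2)
print(dispatcher.dispatch("smart_meter", {"kwh": 21}))  # prints 42
```

The point is the shape, not the code: inputs are pluggable entries in a registry rather than assumptions baked in at startup.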

Application characteristics will change: The increasing importance of geolocation in apps will necessitate the rapid ability to shift context and data sets. If I’m driving in a taxi, the “nearby” services change quickly as the car moves down the road. Being able to shunt data in and out of working sets quickly (not to mention being able to blend contexts as applications support multiple people sharing a “nearby” context) will become vital. Naturally, this requires high performance.

Application topologies will become more complex: As scale and variability increase, architecture designs must change. I hinted at this last week, when I mentioned the use of memcached as a data caching mechanism used to increase throughput. Complex applications often incorporate asynchronous processing for compute-intensive tasks; message queues are often used as part of this approach. Therefore, application architectures need to change to incorporate new software components and application design.
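
The asynchronous-processing pattern mentioned above can be pictured with a toy example. Python's standard-library `queue` stands in for a real message broker here, and the "expensive task" is just squaring a number; both are assumptions made for illustration:

```python
import queue
import threading

# In production the queue would be an external broker; the stdlib
# queue.Queue shows the shape of the producer/worker pattern.
work_queue = queue.Queue()
results = []

def worker():
    # Pull compute-intensive jobs off the queue asynchronously,
    # so the request path never blocks on them.
    while True:
        job = work_queue.get()
        if job is None:            # sentinel value: shut the worker down
            break
        results.append(job * job)  # stand-in for an expensive task
        work_queue.task_done()

t = threading.Thread(target=worker)
t.start()

for n in range(5):                 # the "web tier" enqueues and moves on
    work_queue.put(n)
work_queue.put(None)               # tell the worker to stop
t.join()
print(results)  # prints [0, 1, 4, 9, 16]
```

The front end hands work off and returns immediately; throughput at the user-facing tier no longer depends on how long the heavy computation takes.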

What are some practical steps you can take to ensure your cloud-targeted application can support these new application requirements? Here are some suggestions:

1. Review software components that you plan to use in the application. Many software components were designed to be used in a static environment with manual configuration and occasional updating. A common design pattern for these components is the use of a “conf” file which is edited by hand to configure the component context. Once the conf file is complete, the component is started (or restarted), reads the configuration information into memory, and goes into operation. In a cloud world, in which context changes constantly as new connections and integration points join and drop, the “edit and restart” model is unsustainable. Look for components that have online interfaces to update context and dynamically add or delete connections. Nothing is worse than rolling out an application and later realizing that some part of it can’t really support dynamic topology shifts.
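
The contrast between "edit the conf file and restart" and an online configuration interface might look like this minimal sketch; the `ReconfigurableComponent` class and its settings keys are hypothetical:

```python
import threading

class ReconfigurableComponent:
    """Unlike the edit-and-restart pattern, configuration here is
    updated through an online interface while the component keeps
    serving -- no conf file re-read, no restart."""

    def __init__(self, settings):
        self._settings = dict(settings)
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return self._settings[key]

    def update(self, **changes):
        # Would be called by an admin API or orchestration tool as
        # connections and integration points join and drop.
        with self._lock:
            self._settings.update(changes)

svc = ReconfigurableComponent({"upstream": "db-1", "pool_size": 4})
svc.update(upstream="db-2")   # topology shifts without a restart
print(svc.get("upstream"))    # prints db-2
```

When evaluating a component, the question to ask is whether it exposes an equivalent of `update()` at runtime, or whether every context change forces a restart.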

2. Plan for load balancing throughout the application. Many applications support load balancing at the Web server layer, but assume constant numbers (and IP addresses) for application components at other layers. With very large load variability, other layers need to be scalable and need to support load balancing to ensure consistent throughput. Don’t design an application with the expectation that only two application components will reside at certain layers. Plan for dynamism and load balancing at all layers.
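
A sketch of what balancing at a non-web tier could look like, assuming a simple round-robin policy over a pool whose membership changes at runtime (the class and backend names are illustrative):

```python
import threading

class DynamicPool:
    """Round-robin load balancing over a backend pool whose
    membership can change at runtime -- applicable at any tier,
    not just the web server layer."""

    def __init__(self, backends):
        self._backends = list(backends)
        self._i = 0
        self._lock = threading.Lock()

    def add(self, backend):
        with self._lock:
            self._backends.append(backend)

    def remove(self, backend):
        with self._lock:
            self._backends.remove(backend)

    def next(self):
        # Hand out backends in rotation, whatever the current pool size.
        with self._lock:
            backend = self._backends[self._i % len(self._backends)]
            self._i += 1
            return backend

app_tier = DynamicPool(["app-1", "app-2"])
app_tier.add("app-3")   # scale out the middle tier under load
print([app_tier.next() for _ in range(4)])
# prints ['app-1', 'app-2', 'app-3', 'app-1']
```

The design point is that no caller hard-codes "there are two application servers"; the pool size is a runtime fact, not a design-time constant.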

3. Plan for application scalability. Maybe this is hammering the point home too many times, but double or triple your capacity planning and application architecture assumptions — maybe even factor in a 10X growth possibility. When you plan for much larger scales, you pay attention to bottlenecks and plan how to relieve them dynamically. If you don’t expect scalability, you don’t examine your architecture assumptions critically. So review your architecture for scalability bottlenecks.

4. Plan for dynamic application upgrades. Forty years ago, auto manufacturers took two weeks to change over factories to prepare for new model manufacturing. Toyota figured out how to do it in two hours. That meant they had to design for dynamic factory upgrades. Cloud computing, with its 24-hour use cycles, means no downtime for application upgrades. Architecting applications so that the topologies can be changed while users continue to access individual servers requires Toyota-like planning. Likewise, upgrading database schemas (and data sets) to support new application versions necessitates Toyota-like approaches.
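
One way to picture a downtime-free upgrade is a rolling pass over the server pool: drain one server from rotation, upgrade it, return it to service, repeat. This is a toy sketch; the `Pool` class and the version map are invented for the example:

```python
def rolling_upgrade(pool, upgrade):
    """Upgrade each server in turn, so the application as a whole
    never goes down: at any moment all but one server are serving."""
    for server in list(pool.live):
        pool.live.remove(server)   # drain: stop routing new requests
        upgrade(server)            # apply the new version
        pool.live.add(server)      # back into rotation

class Pool:
    def __init__(self, servers):
        self.live = set(servers)

versions = {}
pool = Pool(["web-1", "web-2", "web-3"])
rolling_upgrade(pool, lambda s: versions.__setitem__(s, "v2"))
print(sorted(versions))  # prints ['web-1', 'web-2', 'web-3']
```

In a real deployment, "drain" also means waiting for in-flight requests to finish, and schema changes must stay compatible with both the old and new application versions for the duration of the pass.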

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.

Follow Bernard Golden on Twitter @bernardgolden. Follow everything from CIO.com on Twitter @CIOonline