5 Elements Your Cloud Infrastructure Needs to Enable Application Agility

Cloud computing offers affordability and agility, but that doesn't mean it automatically enables business agility. To achieve that, you may need to rethink the way you design, deploy and manage the application development lifecycle.

A couple of weeks ago, I discussed how cloud computing enables affordable agility. Automated provisioning and easy scalability make it possible, for the first time, for companies to experience infrastructure agility.

However, it's a mistake to assume that agile infrastructure equals application agility (by which I mean both delivering applications into production more quickly and having those applications easily grow and shrink in terms of scale). In fact, one commonly encounters IT personnel who assume that the mere fact of hosting an application in a cloud environment will magically transform it into an all-singing, all-dancing agile application.

That's far from the truth. Cloud infrastructure is necessary but insufficient to achieve overall application agility. In order to achieve the nirvana of application agility, a number of other conditions need to be in place—principally relating to how the application itself is designed, deployed and managed—along with the organization and its processes surrounding the application lifecycle.

Here are five elements that need to change along with the shift to cloud infrastructure in order to allow business agility to take place.

1. Agile Development: Making Business Agility Dreams Come True

Agile development is so well-established in the industry that it seems like it would be a given for organizations seeking agility. However, many IT organizations have not yet implemented agile development practices or are still working toward implementing them in their development lifecycle.

Commentary: Kicking Waterfall Project Management Habits Requires Agility

Counterpoint: Why Agile Isn't Working: Bringing Common Sense to Agile Principles

While agile development is a concept rather than a specific set of practices, common to most perspectives on the term are short delivery cycles, small and incremental functionality releases, and ongoing interaction between the development team and the application sponsor who defines its functionality. This avoids the long, "big bang" development timelines that, at the end of an extended development cycle, deliver something unusable or unwanted. It's safe to say, though, that absent agile development, dreams of business agility will remain just that: dreams.

2. Organizational Silos: Leave Them on the Farm

Manual handoffs between organizations are anathema to business agility. Unfortunately, many companies fail to recognize that their processes need to be automated just like their infrastructure. Without streamlined processes, there's a mismatch between the speed of development and the speed of application delivery.

Some organizations propose a philosophy of continuous integration (agile development practices fostered by constant integration of incremental application functional improvements) and occasional deployment (release of updated applications into production on a less-frequent basis). That's not necessarily a bad approach, but if it's proposed as a rationalization for organizational silos and manual handoffs, then it's not a great solution.

Case Study: How Qualcomm Broke Down Silos, Improved Application Integration

The experience of leading Web-based companies is that pushing out frequent updates is far more successful and far less disruptive than occasional large code drops. The challenge for many IT departments is that organizational structures designed to meet previous infrastructure requirements are ill-suited to agile development and deployment. Particularly unhelpful are manual handoffs that result in each group redoing the application deployment to align with its internal practices. Each do-over takes time and poses the risk of errors.

3. Common Artifacts: Many Cooks, One Kitchen

One issue with this silo approach is that each group creates and uses its own artifacts. Developers use Amazon Web Services virtual machines and, often, GitHub and Jenkins. QA pulls code and rebuilds it using its own tools. When it's turned over to operations, the code is redefined in an automated runbook tool. After the application goes into production, changes flow through a change control board that uses manual tracking methods—which result in trouble tickets that, in turn, lead to hands-on configuration changes.

Analysis: Software Testing Lessons Learned From Knight Capital Fiasco

With all these different artifacts, it's no wonder everything takes forever and suffers from a constant barrage of errors, in the IT version of iatrogenesis. Unless there's one version of code that's consistently used by all groups, chaos will be the norm.
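One lightweight way to keep every group on that single version of the code is to fingerprint the build artifact once, at build time, and have each downstream group verify the same checksum before deploying rather than rebuilding. A minimal sketch of that idea (the function names and stages here are illustrative, not from the article):

```python
import hashlib

def artifact_fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest uniquely identifying a build artifact."""
    return hashlib.sha256(data).hexdigest()

def verify_handoff(artifact: bytes, expected: str, stage: str) -> None:
    """Refuse a handoff if the artifact was rebuilt or altered along the way."""
    actual = artifact_fingerprint(artifact)
    if actual != expected:
        raise ValueError(f"{stage}: artifact mismatch, expected {expected[:12]}..., got {actual[:12]}...")

# The build is produced once by CI; every group deploys this exact artifact.
build = b"app-release-1.4.2"
digest = artifact_fingerprint(build)

for stage in ("qa", "staging", "production"):
    verify_handoff(build, digest, stage)  # identical bytes at every stage, so all handoffs pass
```

The point of the sketch is the policy, not the hashing: any stage that rebuilds the code from scratch produces a different fingerprint and the handoff fails loudly, instead of silently introducing the do-over errors described above.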

4. Consistent Tools: One Screwdriver Is Enough

The constant problems associated with manual interaction with the code are only exacerbated by each group using its own set of tools. This kind of retranslation of an application so that it can be managed by each group's chosen tools preordains confusion and errors. A better solution: Find a single set of tools that can be shared across multiple workgroups. Chef or Puppet, along with GitHub, are good choices for tools that can be used throughout the lifecycle of an application.

5. Incremental Application Change With Conditional Execution

It makes sense to have the application deployment model match the development model—agile, that is, with frequent releases of small amounts of incremental functionality. For many organizations, the overhead of releasing code into production is so high that many functionality changes are bundled into a single update. This inevitably causes problems when some part of the update fails, because it's then difficult to isolate which part caused the failure.

How-to: 3 Ways to Be More Agile With Software Shipping Decisions

More: How to Deal With Software Development Schedule Pressure

There's a better way. Solve the problem of releasing into production frequently, then push continuous, small updates. The risks can be minimized by gating the new functional code behind a conditional expression: for example, display new feature A only if the DISPLAY_NEW_FEATURE_A environment variable is set. That way, the functionality can be exposed in only a subset of the total computing pool and can be easily shut off if it appears to be problematic.
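That conditional-execution idea can be sketched in a few lines. This is a minimal illustration, assuming an environment-variable flag; the variable and function names are hypothetical, not from any particular feature-flag library:

```python
import os

def feature_enabled(name: str) -> bool:
    """Treat a feature as on only when its environment variable is explicitly '1' or 'true'."""
    return os.environ.get(name, "").lower() in ("1", "true")

def render_homepage() -> str:
    page = "homepage"
    if feature_enabled("DISPLAY_NEW_FEATURE_A"):
        # New code path, exposed only on hosts where the flag is set.
        page += " + new feature A"
    return page

# Roll out to a subset of the pool by setting the variable on those hosts only;
# shut the feature off everywhere by unsetting it, with no redeploy needed.
os.environ["DISPLAY_NEW_FEATURE_A"] = "1"
print(render_homepage())  # homepage + new feature A

del os.environ["DISPLAY_NEW_FEATURE_A"]
print(render_homepage())  # homepage
```

Because the new code ships dark and is enabled per-host, a problematic feature is turned off by flipping the flag rather than rolling back the release.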

Full achievement of the promise of cloud computing is only possible when infrastructure agility is married to application agility. Failing to implement automated streamlining of the application lifecycle dooms IT organizations to dashed dreams of faster time to market.

Bernard Golden is the author of three books on virtualization and cloud computing, including Virtualization for Dummies. He is senior director of the Cloud Computing Enterprise Solutions group at Dell. Prior to that, he was vice president of Enterprise Solutions for Enstratius Networks, a cloud management software company, which Dell acquired in May 2013. Follow Bernard Golden on Twitter @bernardgolden.

