by James Staten, VP, Forrester Research

5 Best Practices for Test and Development Projects in the Cloud

Opinion
Nov 09, 2010
Cloud Computing

Application test and development projects are a natural starting place for companies moving to cloud computing services. Forrester Research's James Staten discusses how to tell whether your project is a good match for the cloud.

The hype around cloud computing is undeniable, especially regarding how much money it can save the enterprise. Unfortunately, the hype paints a very generic picture of cost savings that doesn’t hold up in practice. After countless client inquiries, it has become clear to Forrester that positive ROI from cloud computing can’t be achieved as a blanket business case, because the benefits vary by application and use case. This makes testing and developing new applications in the cloud an ideal way to build the business justification and to ensure the right fit among the application, the infrastructure-as-a-service (IaaS) solution, and the IT ops protections and processes that surround it.

A fundamental of IaaS cloud computing is that these platforms offer a standardized, automated virtual environment that multiple constituents can consume easily, with less intervention by IT professionals. Public (and increasingly, private) cloud platforms meter resource consumption, which drives a different consumption behavior: cost savings are achieved through proactive efforts to reduce consumption. That concept is crucial to understanding the business case for cloud computing.
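
To make that concept concrete, here is a back-of-the-envelope comparison in Python. The hourly rate and usage hours are illustrative placeholders, not any provider's actual pricing:

    # Illustrative only: assumes a hypothetical $0.10/hour instance rate.
    RATE_PER_HOUR = 0.10

    # An always-on instance is billed for every hour of a 30-day month.
    always_on_cost = RATE_PER_HOUR * 24 * 30    # $72.00

    # The same instance shut down outside working hours
    # (8 hours/day, 20 business days) accrues far less.
    metered_cost = RATE_PER_HOUR * 8 * 20       # $16.00

    print(f"always-on: ${always_on_cost:.2f}  metered: ${metered_cost:.2f}")

The meter runs only while resources exist, so tearing environments down when they are idle translates directly into savings.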

For example, ask a test lab manager about the challenges they face, and they’re likely to complain about how demanding developers can be and how much time is “wasted” setting up and tearing down test environments for them. Then look for the developers viewed as the most productive and innovative on a team, and ask how they manage to be so capable amid so many complaints. If they’ll tell you (and often they won’t), they might just confess that they’re productive because they bypass the lab resources provided by IT ops and go directly to a platform-as-a-service (PaaS) or IaaS cloud, where they can get resources within minutes and pay for those resources only when they need them.
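
As an illustration of that self-service speed, here is a minimal sketch of requesting and releasing a test server through an IaaS API. The column names no specific provider, so Amazon EC2 and the boto3 Python SDK are assumed purely for illustration, and the image ID is a placeholder:

    # Minimal sketch: launch a short-lived test VM via an IaaS API.
    # Assumes AWS credentials are configured; the AMI ID is a placeholder.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-12345678",    # placeholder test image
        InstanceType="t3.micro",   # small, metered instance class
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Test instance {instance_id} is launching")

    # When the test run finishes, terminate it so the meter stops.
    ec2.terminate_instances(InstanceIds=[instance_id])

A few lines like these replace a ticket to the lab team and days of waiting, which is exactly the agility those developers are chasing.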

Of course, when developers turn to the cloud to get their jobs done, IT ops may worry that they’re putting the organization at risk. Instead of discouraging test and development in the cloud, consider endorsing it, with guidance. For example, create a work-in-progress cloud use policy, or funnel developer use of the cloud through a central resource request tool.
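
One lightweight way to provide that guidance is a validation step in the request path. The sketch below is hypothetical; the approved regions, lifetime cap, and instance cap are placeholder limits, not recommendations:

    # Hypothetical policy gate for a central resource request tool.
    # All limits here are illustrative placeholders.
    ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}
    MAX_LIFETIME_DAYS = 30
    MAX_INSTANCES = 5

    def validate_request(region: str, lifetime_days: int, instance_count: int) -> list[str]:
        """Return a list of policy violations; an empty list means approved."""
        violations = []
        if region not in ALLOWED_REGIONS:
            violations.append(f"region {region} is not approved for test workloads")
        if lifetime_days > MAX_LIFETIME_DAYS:
            violations.append(f"environments must be torn down within {MAX_LIFETIME_DAYS} days")
        if instance_count > MAX_INSTANCES:
            violations.append(f"request exceeds the {MAX_INSTANCES}-instance cap")
        return violations

    print(validate_request("us-west-2", 45, 2))

The point is not the specific limits but that developers keep their self-service speed while IT ops keeps visibility and a checkpoint.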

One important caveat: not all development cases fit in the cloud. It will take time and cooperation between the IT ops team and APM to define crisp criteria for the best fits. Here are five common best practices to use as starting points:

1. Test deployments that can be accommodated on a standard virtual environment. IaaS platforms expose server virtual machines and virtual storage volumes. PaaS clouds expose higher level abstractions—middleware or directories where applications can be deployed. In most cases, dedicated physical resources are not provided.
2. Test environments that stand alone. If the development project can be tested in isolation, meaning it does not require integration with production systems, it can normally be tested on a cloud platform. The most cost-effective uses of cloud platforms are those that do not consume outbound bandwidth. They also represent the lowest risk to the company as no firewall ports must be opened.
3. Projects that have a lifespan of fewer than 12 months. Most cloud platforms are priced per hour, and when consumed perpetually over a 12-month span they typically cost more than traditional hosting options; longer-lived projects will cost less to operate internally in a virtualized lab environment (see the break-even sketch after this list). Here’s a basic rule of thumb: if you’re constantly striving to return your public cloud platform bill to zero, you’re using it effectively.
4. Projects that don’t expose the company to new compliance or regulatory risk. Hold off on putting test projects into public clouds until you have confidence in your ability to ensure the compliance of these uses.
5. Multi-VM applications that use Web services. Public clouds spread customer workloads across standardized virtual infrastructures and interlink them using IP protocols and Web services. If developers need application components to talk to each other, don’t expect support for protocols that are latency sensitive, require specific network configurations, or demand tight coupling of components. Most public clouds don’t support multicast protocols, and clustering can often be a challenge; hold off on endorsing these uses until you can ensure they’ll perform as expected. Most intercommunication based on existing Web services should work well (a minimal example follows this list).
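
On point 3, a rough break-even calculation makes the 12-month rule of thumb concrete. Both figures below are illustrative placeholders, not real quotes:

    # Illustrative break-even: metered cloud pricing vs. a fixed internal cost.
    CLOUD_RATE_PER_HOUR = 0.10        # hypothetical IaaS rate
    INTERNAL_COST_PER_YEAR = 500.00   # hypothetical amortized cost of a lab VM

    # Run around the clock for a year, the metered bill overtakes the fixed cost.
    always_on_year = CLOUD_RATE_PER_HOUR * 24 * 365    # $876.00

    print(f"12 months always-on in the cloud: ${always_on_year:.2f}")
    print(f"12 months in the internal lab:    ${INTERNAL_COST_PER_YEAR:.2f}")

Short-lived or intermittently used projects stay well under the fixed internal cost; perpetual ones do not, which is why the bill-to-zero habit matters.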
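
And on point 5, the kind of intercommunication that does work well is plain, loosely coupled HTTP. A minimal sketch using only the Python standard library, with a placeholder address standing in for a peer component in the same test environment:

    # Minimal sketch: one component calling another over an HTTP Web service,
    # the loosely coupled style public clouds support well. The address below
    # is a placeholder for a peer VM; multicast-style discovery, by contrast,
    # typically won't cross a public cloud's virtual network.
    import json
    import urllib.request

    payload = json.dumps({"order_id": 42, "action": "validate"}).encode("utf-8")
    request = urllib.request.Request(
        "http://10.0.0.12:8080/api/orders",   # peer component (placeholder)
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        print(response.status, response.read().decode("utf-8"))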

If accelerating time-to-market for new applications and services is a priority, test-and-development projects that meet the above criteria will deliver developer agility that justifies the expense of cloud platform consumption. The investment is also easier to defend because it replaces long-term capital and operating expense commitments with a more flexible, pay-as-you-go operating expense. Depending on the volume of development projects across the enterprise and the percentage that fit the cloud, it can also reduce demand for internal lab resources, freeing the allocated budget to be repurposed more efficiently.

James Staten is Vice President at Forrester Research, serving infrastructure & operations professionals. His research provides insight and best practices on emerging infrastructure technology and services trends, including cloud computing, strategic rightsourcing, infrastructure consolidation, and application-specific infrastructure optimizations.
