by Dr. Bill Curtis

How to Control Application Risk with Quality Measures

Opinion
Oct 06, 2010

How CIOs can quantify application risk and use the information to make management decisions.

With IT becoming the virtual bricks and mortar of modern enterprises, is it any wonder that most CIOs believe IT failures are their greatest risks? A recent report from the Economist Intelligence Unit found that business executives ranked the threat of IT failures ahead of such headline-grabbers as terrorism, natural disaster and regulatory constraints¹. Historically, neither the business nor IT could measure application risk because they lacked useful measures of the underlying weaknesses in the software. Today, CIOs can quantify application risk and use this information to make management decisions.

(¹Coming to grips with IT risks — a report from the Economist Intelligence Unit, 2007)

Four converging challenges elevate the importance of measuring application risk:

Application malfunctions: Although many inconvenience only a few customers, some threaten the business. Inoperable corporate websites, corrupted financial data, breached personal data and miscalculated account statements are only a few of the fiascos.

Business agility: An enterprise's agility is directly linked to the internal quality of its critical applications, and it declines unless the quality of each application is sustained throughout its useful life. As software quality erodes with age, the ability to rapidly implement new functionality declines just as demand for that functionality accelerates.

Supplier dependence: Critical application software is increasingly supplied by external sources such as contractors, vendors, and outsourcers. The quality of externally supplied software presents a business risk that is difficult to control proactively, since service-level agreements usually focus on post-delivery performance.

Application ownership costs: Without constant attention to software quality, today’s state-of-the-art application quickly devolves into tomorrow’s legacy monstrosity. As low-quality applications age, the percentage of time devoted to understanding their construction and fixing their defects increases relative to the percentage of time spent implementing new business functionality, severely diluting their ROI.

Why Is Application Risk Harder to Measure and Control Today?

Modern business-critical applications are no longer developed as monolithic systems written in one, or at most two, languages. Rather, these systems consist of millions of instructions, written in various programming languages, interacting with a complex data model that is controlled by hundreds of business rules. For example, a simple J2EE application may be composed of multiple technologies, including JSP/JSF, JavaScript, or HTML for the presentation layer, XML for the coordination layer, Java for the business layer, and SQL for the database layer.

Controlling the risk of a business-critical application is therefore a multi-technology challenge in which many quality problems occur at the interfaces between technologies. The technical complexity of such polyglot applications exceeds the expertise of any single developer or project team because of the multiple languages, technologies, and platforms involved. For this reason, the quality of an application is more than the sum of the qualities of its components. Application quality should be treated as an additional level of quality that presents unique risks to the business.
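To make the interface problem concrete, the sketch below shows a small, hypothetical example (the table, column, and function names are invented for illustration, and Python with an in-memory SQLite database stands in for the Java-and-SQL stack described above). The data layer and the application layer each look reasonable in isolation, yet the SQL the application emits no longer matches the schema the database team shipped; only a view that spans both technologies catches the defect before it reaches production.

```python
import sqlite3

# Data layer: schema owned by the database team (illustrative only).
# A recent change renamed the column "cust_name" to "customer_name".
SCHEMA_SQL = """
CREATE TABLE customers (
    id            INTEGER PRIMARY KEY,
    customer_name TEXT NOT NULL,
    balance       REAL NOT NULL
);
"""

# Business layer: a query written by the application team against the old
# column name. It is syntactically valid and looks fine in a review of the
# application code alone; the defect lives at the boundary between layers.
REPORT_QUERY = "SELECT cust_name, balance FROM customers WHERE balance > ?"

def run_report(conn, threshold):
    """Return customers whose balance exceeds the threshold."""
    return conn.execute(REPORT_QUERY, (threshold,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA_SQL)
    try:
        run_report(conn, 1000.0)
    except sqlite3.OperationalError as exc:
        # The mismatch surfaces only when both layers are exercised together.
        print(f"Cross-layer defect: {exc}")
```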

Figure 1 displays the complex web of interactions among myriad languages and technologies that must be mastered to ensure application quality and reduce the business risks of a modern application. Is it any surprise that 50 percent of the effort required to modify a business application is spent trying to figure out what is going on in the system and how it is connected²?

(²Prof. Mordechai Ben-Menachem, Software Quality: Producing Practical and Consistent Software)

[Figure 1: The web of interactions among the languages and technologies in a modern business application]

Why Is Testing Insufficient?

The traditional solution to application quality risks has been testing; however, testing can provide only part of a quality solution. Most tests are based on the application’s requirements. Consequently, they focus primarily on whether an application functions correctly — in other words, whether developers “built the right thing.” It is typically the non-functional aspects of the application — whether developers “built it right” — that cause devastating outages, performance degradation and security leaks during business operations.
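A hedged illustration of the distinction (the function and its test below are invented for this example): the code returns the right answer and passes a requirements-based test, yet it carries structural weaknesses that functional testing rarely exposes until the application is under production load. It reads the whole data set into memory, never closes the file handle, and silently swallows errors.

```python
def total_overdue(accounts_file):
    """Functionally correct: returns the sum of overdue account balances."""
    total = 0.0
    try:
        # Structural weakness: the entire file is read into memory and the
        # handle is never closed; fine on test data, costly at production volume.
        for line in open(accounts_file).readlines():
            account_id, balance, overdue = line.strip().split(",")
            if overdue == "Y":
                total += float(balance)
    except Exception:
        # Structural weakness: errors are swallowed, so corrupt input
        # yields a plausible-looking but wrong total.
        pass
    return total

if __name__ == "__main__":
    # A requirements-based test: small, clean input, correct answer.
    with open("accounts.csv", "w") as f:
        f.write("1001,250.00,Y\n1002,75.00,N\n1003,500.00,Y\n")
    assert total_overdue("accounts.csv") == 750.00   # "built the right thing"
    print("Functional test passed; the non-functional weaknesses remain.")
```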

Adding to the challenge, few modern business-critical applications are developed in a single project. Rather, the multiple subsystems that provide business functionality, data management, user interface, Web access and other capabilities are often developed in separate projects on separate continents by separate organizations. Most quality practices were designed for use within a single project and focus on evaluating an application subsystem. Unless this distributed application work is integrated from a quality perspective, problems in the critical interactions among technologies, the very interactions that produce the biggest business headaches, can go undetected.

Development decisions in complex applications involve tradeoffs among performance, maintainability, security and other quality factors that cannot be fully understood without comprehensive knowledge of the interactions among application components and technologies. Quality practices such as testing and peer reviews have proven less effective at detecting non-functional problems because the limited breadth of technical expertise on development and test teams restricts their diagnostic coverage. Consequently, a thorough evaluation of application quality cannot rely solely on human-dependent processes such as peer reviews and test case design.

The evaluation of application quality must therefore be automated, performed at the system level, and capable of presenting objective, quantitative information about the quality-related attributes of the system. Measures of the internal structure and attributes of an application provide diagnostic indicators that point to areas of suspected weakness in its construction. Further, an automated evaluation of whether an application’s code complies with rules of good coding practice provides even more direct evidence of potential risks and problems.
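The sketch below suggests what such automated evaluation can look like at its simplest. The coding rules, the structural threshold, and the reporting are simplified assumptions made for this illustration, not a description of any particular analysis product; a real system would cover many languages and, critically, the interactions among them.

```python
import re
import sys
from pathlib import Path

# Two illustrative coding rules; real rule sets are far larger and are
# aware of each technology (Java, SQL, JSP, and so on) in the application.
RULES = {
    "empty-except-block": re.compile(r"except\s*\w*\s*:\s*\n\s*pass"),
    "string-built-sql": re.compile(r"(SELECT|INSERT|UPDATE|DELETE)[^\n]*['\"]\s*\+"),
}
MAX_FUNCTION_LINES = 50  # illustrative structural threshold

def check_file(path):
    """Return a list of (rule, detail) findings for one source file."""
    text = path.read_text(errors="ignore")
    findings = [(name, str(path))
                for name, pattern in RULES.items() if pattern.search(text)]
    # Crude structural measure: flag functions longer than the threshold.
    for match in re.finditer(r"^def \w+\(.*\):", text, re.MULTILINE):
        body = text[match.end():].split("\n\ndef ", 1)[0]
        if len(body.splitlines()) > MAX_FUNCTION_LINES:
            findings.append(("oversized-function", str(path)))
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    findings = [f for p in sorted(root.rglob("*.py")) for f in check_file(p)]
    for rule, location in findings:
        print(f"{rule}: {location}")
    print(f"{len(findings)} finding(s) across the scanned code.")
```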

What Application Quality Factors Affect Business Risk?

Application risk can be broken down into five factors of software quality, each of which can be related to specific business benefits and risks. Taken together, these quality factors describe the overall health of an application — the extent to which it is free from pathological conditions — and its ability to serve the current and future demands of the business.

Each of these application quality factors can be quantified in a series of metrics that can be presented in quality profiles and summarized in composite measures presented on management dashboards. Each factor is a combination of measurable software attributes and adherence to rules of good software engineering practice. Development teams can use these application quality factors proactively for early detection and remediation of problems. They can be used surgically during maintenance to drill down to specific sections of an application that need correction. They can also be used diagnostically to track changes in software quality across a portfolio of applications.
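As a hedged illustration of such a roll-up (the factor names, weights, scale, and scoring formula below are assumptions made for this sketch rather than a published standard), violation counts can be converted into per-factor scores and combined into a composite figure suitable for a management dashboard.

```python
# Illustrative roll-up of rule findings into per-factor scores and a
# composite application score on a 1-to-4 scale (4 = healthiest).
FACTOR_WEIGHTS = {
    "robustness": 0.25,
    "security": 0.20,
    "performance": 0.20,
    "transferability": 0.15,
    "changeability": 0.20,
}

def factor_score(violations, lines_of_code, worst_density=0.02):
    """Map violation density (violations per line of code) onto 1..4."""
    density = violations / max(lines_of_code, 1)
    penalty = min(density / worst_density, 1.0)   # 1.0 = at or beyond worst case
    return round(4.0 - 3.0 * penalty, 2)          # 4 = clean, 1 = worst case

def composite_score(per_factor_violations, lines_of_code):
    """Weighted combination of the per-factor scores."""
    scores = {factor: factor_score(count, lines_of_code)
              for factor, count in per_factor_violations.items()}
    composite = sum(FACTOR_WEIGHTS[f] * s for f, s in scores.items())
    return scores, round(composite, 2)

if __name__ == "__main__":
    violations = {"robustness": 420, "security": 35, "performance": 180,
                  "transferability": 260, "changeability": 610}
    scores, composite = composite_score(violations, lines_of_code=250_000)
    print("Per-factor scores:", scores)
    print("Composite application score:", composite)
```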

How Should Application Quality Measures Be Used?

IT must take a four-tiered approach to reporting and using application quality metrics. The four tiers involve IT governance and decision-making, project tracking and control, code remediation, and outsourced development. Figure 2 displays these four uses and the different constituencies for application quality measures.

[Figure 2: Four uses of application quality measures and their constituencies]

IT Executives — Summary application quality metrics should be used to assess and govern risk across the application portfolio. These measures can be used to initiate conversations with the business about its tolerance and priorities for different types of application risk. Summary measures can also reveal quality trends across the portfolio that affect the cost of ownership and inform decisions about application investments and retirements.

Application and Project Managers — Measures collected at defined events such as code builds can be used to track quality trends throughout application development and maintenance. Managers can use these measures to track progress and to set priorities for remediating weaknesses in the code. Target values for the most critical quality attributes can be established as gating criteria for releasing the application into production (a sketch of such a gate follows this list). These measures can also be used to predict future costs and risks related to the application.

Application Developers — Quality measures that assess structural attributes and violations of good coding practices can guide developers to specific areas in the application that need to be remediated to eliminate or reduce application risks. Experience has proven that this diagnostic information helps development teams learn about interactions among languages and technologies they had not previously understood. Simply knowing that non-functional quality will be measured causes developers to be more meticulous in the development of their code.

Vendor Managers — Complaints about the quality of outsourced application work are growing. IT organizations frequently have little visibility into the quality of outsourced application code. The use of measures as quality gates for code provides an objective basis for accepting deliveries against contractually agreed quality targets. Measurable criteria for accepting code have proven to improve outsourcing relationships by moving them onto a more objective footing.
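The sketch below illustrates how such release gates or acceptance criteria might be applied at a build or delivery event. The threshold names and values are assumptions invented for this illustration; in practice the measured values would come from the automated analysis run against the delivered code, and the targets from internal standards or the outsourcing contract.

```python
import sys

# Illustrative targets for a release or acceptance gate; names and
# values are assumptions, not recommended thresholds.
TARGETS = {
    "max_critical_violations": 0,     # no critical rule violations allowed
    "min_security_score": 3.0,        # on the 1-to-4 scale from the roll-up
    "min_composite_score": 2.8,
}

def evaluate_gate(measured):
    """Compare measured values against the targets; return any failures."""
    failures = []
    if measured["critical_violations"] > TARGETS["max_critical_violations"]:
        failures.append("critical rule violations present")
    if measured["security_score"] < TARGETS["min_security_score"]:
        failures.append("security score below target")
    if measured["composite_score"] < TARGETS["min_composite_score"]:
        failures.append("composite score below target")
    return failures

if __name__ == "__main__":
    # In practice these values come from the analysis of the latest build
    # or vendor delivery; they are hard-coded here for the sketch.
    measured = {"critical_violations": 2,
                "security_score": 3.4,
                "composite_score": 3.1}
    failures = evaluate_gate(measured)
    if failures:
        print("Quality gate FAILED:", "; ".join(failures))
        sys.exit(1)                    # fail the build or reject the delivery
    print("Quality gate passed; the build can be promoted or accepted.")
```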

IT organizations should begin their use of automated analysis and measurement of application quality on a pilot project to gain experience. As they learn key success factors from early deployments, they can begin integrating application quality activities into their standard development processes. For instance, collecting quality metrics can become a standard task integrated into build procedures and supported by a separate build or configuration management function. The interpretation and use of quality measures may be aided by product assurance staff who are broadly knowledgeable in the technologies that make up most applications. In an environment where managers and developers take pride in the quality of the applications they provide to the business, quality measures can become powerful aids for helping them achieve professional objectives while reducing business risks.

Dr. Bill Curtis is Senior Vice President & Chief Scientist at CAST, Inc., & Director of the Consortium for IT Software Quality (CISQ).
