by Fred Hapgood

The Importance of Automated Software Testing

News
Aug 01, 2001 | 4 mins
Developer

As complicated as our relationship with computers has been during the past half-century, there is at least one constant: Wherever you find a computer you will find a swarm of bugs. For decades users and managers have searched for weapons to use against infestations. Some try structured programming tools, environments supposedly too sterile for bugs to breed. Others rely on ambitious alpha and beta testing programs. But conceptually most satisfying is the idea of getting computers to detect and report (and maybe even fix) their own bugs automatically.

For some bugs, automated detection requires a very high level of sophistication, one verging on true artificial intelligence. Not all of these pests are out of reach, however, and products devoted to their extermination started to appear on the market in the ’80s. As a rule, those programs worked by letting managers capture or create typical user sessions (sequences of keystrokes and mouse movements) for a piece of software. Programmers could then run these sessions through the program being debugged and examine the output for errors. By the early ’90s the automated testing sector was sufficiently developed for us to publish a survey on the technology (“Bug Busters,” CIO, March 1993).
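
A rough sketch of the capture/replay idea might look like the following; the session file, golden output and billing_app command are hypothetical stand-ins for what a real capture tool would actually record.

```python
import json
import subprocess

def replay_session(app_cmd, session_file, expected_output_file):
    """Replay a recorded user session against an application and
    compare its output with a known-good run."""
    # A recorded session here is just an ordered list of input lines,
    # standing in for the keystrokes a capture tool would have logged.
    with open(session_file) as f:
        session = json.load(f)

    # Feed the recorded inputs to the program under test and capture its output.
    result = subprocess.run(
        app_cmd,
        input="\n".join(session["inputs"]),
        capture_output=True,
        text=True,
        timeout=60,
    )

    with open(expected_output_file) as f:
        expected = f.read()

    # Any divergence from the golden output is flagged for a programmer to review.
    if result.stdout != expected:
        return {"session": session["name"], "status": "FAIL"}
    return {"session": session["name"], "status": "PASS"}

if __name__ == "__main__":
    # Hypothetical file names; a capture tool would produce these.
    print(replay_session(["./billing_app"], "session_checkout.json",
                         "golden_checkout.txt"))
```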

Generally we were unimpressed. The tools seemed expensive and clumsy, and they missed a lot of problems. They also presented a steep learning curve. “In the short run, organizations deploying such products should expect protracted production schedules, increased demands on development staff and a falloff in software quality,” we wrote. “IS directors are doing well if a suite of tools pays off…after three years.”

A manager at the time might have been skeptical that network systems of any scale could survive their own bug plagues. Yet, perhaps counterintuitively, the development of the Internet turned out to be a boon both for the war on bugs in general and automated testing in particular. Code sharing among developers became easier, cheaper and faster, dramatically leveraging the effectiveness of manual bug hunting. (The high reliability of open-source software is the leading illustration of the benefits.) Patches became easier to distribute. Networks permitted the installation of “flight recorders,” sensors that sit inside applications and send reports of dysfunctional sessions back to the vendor. “This is a big deal because it means technical support doesn’t have to try to replicate the bugs on its end,” says Oliver Cole, president of OC Systems, a system availability tools vendor in Fairfax, Va. Flight recorders can also generate high-quality test sessions as input for automated software testing. (Atesto Technologies of Fremont, Calif., produces such tools.)
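
In rough outline, a flight recorder is little more than a rolling buffer of recent user actions plus an unhandled-error hook that ships the buffer home. The sketch below illustrates the idea in Python; the report URL and payload fields are hypothetical, not any vendor’s actual protocol.

```python
import json
import sys
import traceback
import urllib.request

RECENT_EVENTS = []          # rolling buffer of user actions, the "flight recorder"
REPORT_URL = "https://vendor.example.com/bug-reports"   # hypothetical endpoint

def record_event(description):
    """Append a user action to the rolling buffer, keeping only the last 50."""
    RECENT_EVENTS.append(description)
    del RECENT_EVENTS[:-50]

def report_crash(exc_type, exc_value, exc_tb):
    """Send the recorded session and stack trace back to the vendor."""
    payload = json.dumps({
        "events": RECENT_EVENTS,
        "error": "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
    }).encode()
    try:
        urllib.request.urlopen(
            urllib.request.Request(REPORT_URL, data=payload,
                                   headers={"Content-Type": "application/json"}),
            timeout=5,
        )
    except OSError:
        pass  # never let reporting itself crash the application
    # Fall back to the default handler so the user still sees the error.
    sys.__excepthook__(exc_type, exc_value, exc_tb)

# Install the recorder as the application's unhandled-exception hook.
sys.excepthook = report_crash
```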

And while networks did not make automated testing directly smarter, they did introduce a wave of simple, stupid bugs (such as dead HTML links) that were perfect for automated testing, even at its then-current skill level. As a result, investment and revenue flowed into the sector, and in time the technology did get smarter. For instance, several companies (including OC Systems) have developed tools that “watch” while test sequences run to determine which instructions are invoked. If the test sequences fail to exercise some instructions, the tools generate new sequences specifically designed to cover the missed code.
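
A dead-link checker is about as simple as automated testing gets, which is exactly why the Web made such tools practical. The following sketch, using only the Python standard library, fetches a page, collects its anchor tags and probes each target; the starting URL is a placeholder.

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_dead_links(page_url):
    """Fetch a page, probe every link on it and report the ones that fail."""
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    parser = LinkCollector()
    parser.feed(html)

    dead = []
    for href in parser.links:
        target = urljoin(page_url, href)   # resolve relative links
        try:
            urllib.request.urlopen(target, timeout=10)
        except OSError as err:             # covers HTTP errors and network failures
            dead.append((target, str(err)))
    return dead

if __name__ == "__main__":
    # Hypothetical site; any static page with links will do.
    for link, reason in find_dead_links("https://www.example.com/"):
        print("DEAD:", link, reason)
```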

Much hinges on whether the testing tool market can repeat that success in the next decade. The exciting possibilities for the future of IS, from ubiquitous or pervasive computing, semantic networks, adaptive programs, full device independence and distributed computing to IBM’s initiative in “autonomic computing” (self-managing and self-repairing networks), all require considerable improvement in our capacity to find and fix errors. For example, embedded systems usually do not have writable storage or often even network connectivity, which means they cannot be fixed with repair patches or maintenance releases. If they cannot be built without error, embedded systems will fail to deliver on their promise.

Perfection is being pursued from several directions. Some companies are hoping that the new Unified Modeling Language will let device engineers design inside simulations of end-user environments, so that they will be able to do a kind of field testing in the earliest stages of design. Lucent is developing a system that will automatically generate logical representations of running code, which can be tested to see whether specific kinds of bugs, such as feature interaction conflicts in distributed systems, are possible even in theory. As mentioned, the Internet has introduced large-scale peer review to more types of software.
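
To give a flavor of what testing whether a bug is possible “even in theory” means, the toy sketch below exhaustively enumerates the configurations of two telephony features, call forwarding and do-not-disturb, and flags any state in which they demand contradictory actions. It is an illustration of exhaustive state exploration, not a description of Lucent’s actual system.

```python
from itertools import product

def handle_incoming_call(forwarding_on, forward_target, dnd_on):
    """Return the actions a switch would take for an incoming call,
    given the current settings of the two features."""
    actions = []
    if dnd_on:
        actions.append("reject")
    if forwarding_on:
        actions.append(f"forward to {forward_target}")
    if not actions:
        actions.append("ring")
    return actions

def find_conflicts():
    """Enumerate every feature configuration and flag states where the
    two features demand more than one action at once."""
    conflicts = []
    for forwarding_on, dnd_on in product([False, True], repeat=2):
        actions = handle_incoming_call(forwarding_on, "x5501", dnd_on)
        if len(actions) > 1:      # more than one action: the features interact
            conflicts.append(((forwarding_on, dnd_on), actions))
    return conflicts

if __name__ == "__main__":
    for state, actions in find_conflicts():
        print("conflict in state", state, "->", actions)
```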

It is probably not right to say that the future of software depends on automated bug detection and repair. But automated bug busting may ultimately push software development to new levels that until now have been only the stuff of dreams.