by Meridith Levinson

11 Ways to Improve Software Testing

Nov 15, 2005

Three years ago, Station Casinos came up with a great promotion to lure customers: $25 worth of free slot play on their electronic loyalty cards. It worked like a charm too. Gamblers flocked to the casino in droves.

That should have been a good thing.

But one Friday night, shortly after the promotion began, when players inserted their cards into the slot machines, nothing happened. The sheer number of people trying to access the machines—at the same time the accounting department was running a number of financial applications—caused the servers that stored all the promotional information to freeze. Irate, players threw their loyalty cards on the floor and raised a ruckus.

That was a bad thing.

The source of the problem? Testing. Marshall Andrew, Station Casinos’ VP of information technology and CIO, says Station Casinos never anticipated such an overwhelming response to the promotion. Consequently, IT did not test the system for such large volumes of activity, and certainly not while other programs were running. Station lost the cash it would have made that Friday, alienated customers and had to run another campaign to apologize; the casino invited some customers to return another weekend for $50 worth of free slot play.

The moral: Testing is essential to developing high-quality software and to ensuring smooth business operations. It can’t be given short shrift; the consequences are too dire. Businesses—and, in some cases, lives—are at risk when a company fails to adequately and effectively test software for bugs and performance issues, or to determine whether the software meets business requirements or end users’ needs. (See “The High Cost of Flawed Testing” on Page 66.)

“The important thing when you roll out a system is to make sure it works,” says Andrew, who has made significant changes to his testing organization (known as quality assurance, or QA) since then. First, he changed the testing process itself. Previously, developers had a great deal of freedom to change code while it was being tested, to keep the project moving. Now, there are tight controls on the developers’ access to test code. To keep everyone honest, Andrew had the QA specialists begin reporting to the business analyst group rather than to the development group, whose work QA was evaluating. Next, he hired more QA specialists—with business training—and involved them in the development process earlier, when business analysts are creating requirements documents, so that they can develop test scripts based on business specifications right from the beginning.

The following list of best practices for testing software and running your testing organization was gleaned from interviews with companies that have rigorous testing needs and standards. These tips go beyond the “test early and often” mantra and will improve your IT organization’s testing capabilities—not to mention the quality of the software you release.

1] Respect your testers. In many companies, testing is an entry-level job. As a result, testing isn’t done well. Instead of hiring people off the turnip truck, recruit candidates who are detail-oriented, methodical and patient. Look for people who know how to code. Your developers will respect them more, and they can code some of their own testing tools. “If the development organization and the QA organization don’t respect each other, we won’t be able to achieve our high-level quality goals,” says eBay’s VP in charge of QA, David Pride.

2] Colocate your testers and developers. Putting developers and testers together goes a long way toward improving communication between two groups that often lock horns (after all, testers are paid to find fault with developers’ work). Physical proximity “facilitates the nuances of testing” that are best communicated through personal interaction rather than by e-mail or an application development workflow tool, says Pride.

3] Set up an independent reporting structure. Testing should not report to any group that’s evaluated on meeting deadlines or keeping costs down for a project, according to John Novak, senior VP of hotel chain La Quinta. Having testers report to the development group is the worst choice of all, Novak says. If developers are behind or having trouble with code, they will be tempted to keep testers out of the loop. Instead, Novak has testers report directly to him. Andrew has testing report into his business analyst group as a way to foster communication and to get testers involved in the development lifecycle early.

4] Dedicate testers to specific systems. At Barnes & Noble, one group of testers focuses on store systems, while others tackle financial and warehouse systems. Barnes & Noble CIO Chris Troia says focusing testers on one set of systems deepens their understanding of how those systems are supposed to work and gives them the expertise to identify problems that might not show up in a formal test document. EBay takes the same approach, but goes one step further. The company has three distinct testing groups: one for site functionality, one for payments and one for data warehousing applications.

5] Give them business training. Station Casinos’ Andrew makes members of his testing department work the front desk, the casino floor and in different corporate departments so they can learn the lingo and better understand the systems they’re testing. (Most of his 125-person IT staff had never placed a bet on a sporting event at a casino prior to joining the company.)

6] Allow business users to test too. Most testing involves banging on systems and fiddling with code—technical stuff—which can tempt IT to leave business users out of the loop. Bad mistake. At La Quinta, “the testers are always coming out of the business community,” says Novak, to ensure that the systems IT is developing meet their specs. For some applications, especially those that run in hospitals, getting end users to test applications is a matter of life and death. “Technology people can only go so far,” says Patricia Skarulis, vice president of information systems and CIO of Memorial Sloan-Kettering Cancer Center. “We need to have users involved.”

7] Involve network operations. Nate Hayward, vice president and director of quality management with HomeBanc Mortgage, says that during testing, his company’s network operations group uses a software tool (Compuware’s ServerVantage) to monitor servers for performance issues that could originate from the way hardware or software is configured. Involving the network operations experts in testing also gives them the opportunity to rehearse a deployment before a system goes into production, ensuring that the actual implementation will proceed smoothly.

8] Build a lab that replicates your business environment. Four years ago, Station Casinos built a costly test lab that looks like a minicasino with slot machines, point-of-sale terminals and Web-based kiosks that simulate the computing environments at all 13 of Station Casinos’ properties. Ninety percent of the applications the company runs, including wireless apps, are duplicated in the test lab. For the other 10 percent of applications, which are too big or complex to create an exact testing replica, Andrew comes up with a scaled-down subset of the app to predict how it will run when it’s fully rolled out. Or he gets help. With Station Casinos’ last system rollout, he used Microsoft’s test labs to run simulation models.

9] Develop tests during the requirements phase. Companies traditionally have waited to do testing until requirements have been established and coding has begun—or finished. A growing school of thought says that testing can still be done effectively even if the requirements have not been developed fully. Fans of “agile programming” (see “Fixing the Requirements Mess,” Page 52) believe that testing should be done continually from the beginning of the project until the end.
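The test-first idea can be illustrated with a minimal sketch: a requirement is written down, an executable test is derived from it, and only then is the code filled in to make the test pass. The requirement wording, the `LoyaltyCard` class and its methods are all hypothetical, invented here for illustration; they are not from any system described in this article.

```python
# Requirement (hypothetical): "A loyalty card credited with a $25 promotion
# must show a $25 balance, and redeeming $10 of slot play must leave $15."
#
# In a test-first workflow, the test below is written from that sentence
# before any production code exists; the class is then implemented to pass it.

class LoyaltyCard:
    def __init__(self):
        self.balance = 0  # dollars of free slot play

    def credit(self, dollars):
        self.balance += dollars

    def redeem(self, dollars):
        if dollars > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= dollars

def test_promotion_credit_and_redeem():
    card = LoyaltyCard()
    card.credit(25)
    assert card.balance == 25
    card.redeem(10)
    assert card.balance == 15

test_promotion_credit_and_redeem()
print("requirements-derived test passed")
```

Because the test exists before the code, it doubles as an unambiguous record of what the business analysts actually asked for.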

10] Test the old with the new. EBay uses a statistical analysis tool it built in-house to compare defects discovered by testers to the code that was tested during a particular testing cycle. The goal is to make sure that previously tested pieces of software still work properly when new features are added. Pride says the statistical analysis tool pinpoints where testers need to add test cases in the current project and also helps determine the overall effectiveness of current regression tests for forthcoming software projects. EBay needs to continually refine the tests because some new projects may contain the same functionality as previous projects. The better those tests are, the better future projects will be.
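EBay’s tool is proprietary, but the underlying idea can be sketched simply: map the defects found in a cycle to the modules they occurred in, compare that against how many regression cases each module already has, and flag modules where defects are escaping existing coverage. The module names, counts and threshold below are invented for illustration.

```python
# A sketch of defect-vs-coverage analysis (data and threshold are hypothetical).
from collections import Counter

# Modules where defects were logged this testing cycle.
defects = ["checkout", "checkout", "search", "payments"]

# Number of existing regression test cases per module.
regression_cases = Counter({"search": 40, "payments": 25, "checkout": 5})

defect_counts = Counter(defects)

# Flag modules whose defect count is high relative to regression coverage:
# many escapes per existing test case suggest the suite needs new cases there.
needs_more_cases = {
    module for module, found in defect_counts.items()
    if found / max(regression_cases[module], 1) > 0.1
}
print(sorted(needs_more_cases))  # prints ['checkout']
```

Even this crude ratio makes the point of the practice: regression suites should grow where defects cluster, not uniformly.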

11] Apply equivalence class partitioning. This is a mathematical technique that testers can use to identify additional functional requirements that business analysts and users might have overlooked or not articulated, says Magdy Hanna, chairman and CEO of the International Institute for Software Testing. He says equivalence class partitioning gives testers a clear picture of the number of test cases they need to run to adequately exercise all of a system’s functional requirements. Pride says equivalence class partitioning is one way his group can determine all the ways in which eBay’s 157 million users might use its online auction platform.
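The technique is easiest to see on a single input field. The domain is split into classes whose members the system should treat identically, so one representative test per class exercises the whole requirement. The bid-validation function and its limits below are hypothetical, chosen only to illustrate the partitioning; they are not eBay’s actual rules.

```python
# Hypothetical requirement: a bid amount must be a number from 1 to 10000.
def validate_bid(amount):
    """Accept a bid if it is a number in the inclusive range 1..10000."""
    if not isinstance(amount, (int, float)):
        return False
    return 1 <= amount <= 10000

# Equivalence class partitioning: one representative value per class is
# enough, because every member of a class should behave the same way.
test_cases = [
    (500,   True),   # valid class: 1 <= amount <= 10000
    (0,     False),  # invalid class: below the minimum
    (20000, False),  # invalid class: above the maximum
    ("abc", False),  # invalid class: not a number
]

for value, expected in test_cases:
    assert validate_bid(value) == expected
print("all equivalence classes covered with", len(test_cases), "tests")
```

Four tests cover the field completely; without partitioning, a tester has no principled way to know when to stop adding values, which is exactly the "how many test cases do we need" question Hanna describes.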