by Andy Hayler

How to estimate the correct testing requirements

Opinion
May 31, 2012 | 4 mins
IT Leadership | Mobile Apps | Telecommunications Industry

Back in the days when I actually developed software, I became interested in software testing. Once you write some code, what is the right amount of effort to devote to making sure it actually works?

Clearly, the criticality of the software will affect the degree of effort: a cruise missile control system demands a higher level of care and attention than a program shuffling some data around for a file conversion.

The first thing to be aware of is that it is virtually impossible to test a piece of non-trivial software perfectly.

Consequently, testing becomes an issue of economics: how much is it worth to test the software relative to its development cost?

There are also numerous styles of testing, from peer reviews of code through to building batteries of expected outcomes, so that you can repeatedly run tests and compare what is supposed to happen with what the software actually produces.
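As a minimal sketch of that expected-outcomes style (the conversion function and the data below are invented purely for illustration), the idea is simply to record what should happen and compare it with what the software actually does:

```python
# Minimal sketch of expected-outcome testing: each case pairs an input
# with the result the software is supposed to produce. The conversion
# function and the data are hypothetical examples.

def convert_record(record: dict) -> dict:
    """Toy file-conversion step: uppercase the code, strip whitespace from the name."""
    return {"code": record["code"].upper(), "name": record["name"].strip()}

EXPECTED_OUTCOMES = [
    ({"code": "gb", "name": " Acme Ltd "}, {"code": "GB", "name": "Acme Ltd"}),
    ({"code": "fr", "name": "Dupont"},     {"code": "FR", "name": "Dupont"}),
]

for given, expected in EXPECTED_OUTCOMES:
    actual = convert_record(given)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {given} -> {actual} (expected {expected})")
```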

If you think: “We just buy packages, so this is someone else’s problem”, then think again.

Commercial software packages should have received some level of testing from the vendor, but the vendor won't have tested your particular implementation of the package.

That implementation will have your own local data, parameters and choices, plus those little tweaks and add-ons that your users insisted upon, all deployed in the unique environment of your company with its labyrinthine combination of operating system, browser and database versions.

No one has ever installed exactly that package in exactly that way before, so you still need to do testing.

If you follow a waterfall project methodology, there are some well-established rules of thumb suggesting that catching errors early, at the design stage, is much cheaper than catching them after the software has shipped. That is common sense, but it does not really help in deciding how much effort to allocate.

Of course it all depends on complexity and the consequences of failure, but as a guideline most projects should allocate between 10 and 30 per cent of their effort to testing.
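As a rough illustration (the project size here is hypothetical; only the 10 to 30 per cent guideline comes from the rule of thumb above), the arithmetic is straightforward:

```python
# Hypothetical illustration: applying the 10-30 per cent rule of thumb
# to a project estimated at 200 person-days of total effort.
total_effort_days = 200
low, high = 0.10 * total_effort_days, 0.30 * total_effort_days
print(f"Testing budget: roughly {low:.0f} to {high:.0f} person-days")
```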

So you have set your testing budget – now what?

There is plenty you can do to make your approach to testing more efficient. From code and design reviews through to testing specific outcomes, there are many choices to be made.

I once knew a contractor who made a living from destruct-testing software. He had a knack for finding bugs in apparently well-tested software, and was employed to try to break systems that had supposedly already been thoroughly tested.

The fact that he made a steady living doing this tells you all you need to know about how robust most commercial software really is.

At my previous software company I introduced automated test software, which batches up all those inputs and expected outputs in such a way that you can replay them time after time.
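One common way to get that replayability (pytest and the module under test here are my own assumptions, not something from the original setup) is to keep the input and expected-output pairs as data and drive a single test from them:

```python
# Sketch of a replayable test battery: inputs and expected outputs are
# stored as data, so the same cases can be re-run on every build.
# pytest, the converter module and the test_cases.json file are
# hypothetical choices made for illustration.
import json

import pytest

from converter import convert_record  # hypothetical module under test

with open("test_cases.json") as fh:
    # e.g. [{"given": {...}, "expected": {...}}, ...]
    CASES = json.load(fh)

@pytest.mark.parametrize("case", CASES)
def test_replay_case(case):
    assert convert_record(case["given"]) == case["expected"]
```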

Having such repeatability is a huge advantage, because testing software is, let’s face it, inherently dull and open to human error.

Just how alert will your tester be when running through some mind-numbing test scenario for the nth time?

While there are certain types of people who love software testing, you are going to get variation in the quality of testers, whether they are in-house or offshore.

Computers don’t get bored, so by using automated testing tools you can build up an ever more comprehensive battery of tests which you can add to as you discover new bugs.
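In practice, that "add to as you discover new bugs" step is simply appending a new case to the stored battery so the same failure can never silently reappear. A sketch, with a hypothetical file name and case:

```python
# Sketch: when a new bug is found, the input that reproduces it and the
# corrected expected output are appended to the battery, so every future
# run re-checks it. The file name and case content are hypothetical.
import json

new_case = {
    "given": {"code": "de ", "name": "Müller GmbH"},   # trailing space triggered the bug
    "expected": {"code": "DE", "name": "Müller GmbH"},
}

with open("test_cases.json") as fh:
    cases = json.load(fh)
cases.append(new_case)

with open("test_cases.json", "w") as fh:
    json.dump(cases, fh, indent=2, ensure_ascii=False)
```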

I was genuinely surprised at how little use the software industry makes of such automated testing, judging by the bewildered looks my development team gave me when I insisted on implementing it some years ago.

Such software is not a panacea, as the software itself changes with each new version (screen X may no longer exist, may produce a different result, or may gain an extra field, for instance).

This means that the test batteries themselves need to be checked just as the software itself does, and this has a cost.

The tests can be fragile and break, causing tension between the testers and the development team; automated testing can also instil complacency, with developers assuming that any bugs will get caught by the test suite.

Nonetheless, in most cases the benefits of having a battery of automated tests outweigh the costs.

The question is, just how confident are you in your own test processes and your use of the best test tools and techniques?