Testing to avoid destruction

It's a good day for some when a bank's ATMs start dishing out far more money than customers requested, and the story brightens many a commuter's journey to work as they read all about it in the morning paper. But it's clearly not a great day for the bank, and it's a downright stinker, career-wise, for those who designed and tested – or failed to test, or inadequately tested – the ATM software involved. And for much applications software – for air traffic control, government-held personal databases or production lines – there is emphatically no lighter side to the catastrophes that can arise when the testing process is shown to have been wanting.

Small wonder, then, that growing dependence on IT in all parts of the economy has led to explosive growth in the software testing business, which, despite its cost, is seen as relatively inexpensive insurance against the far larger losses that can arise when testing is skimped. The growing weight of legislation and regulation has further sharpened the focus on testing for the many industries that face shut-down if they fail compliance tests – from drug companies and banks to airlines and utilities.

So why is it, given the huge expansion in testing that has occurred, that we continue to see spectacular software failures traceable to inadequacies in the testing process? Analysis of such failures reveals that in many cases the root cause is a wrong-headed testing culture – one that views testing as an end-of-line activity, the last step before new software goes into live production: what might be called a Y-Z, as opposed to an A-Z, approach.

If you look at the reasons why new software is developed, you can see why the Y-Z approach fails. An organisation rarely sanctions a significant software development just for fun, or to keep its IT people happy. It is generally a response to a real business need – a new market opportunity, a drive to cut costs, a takeover or merger, new regulation (or deregulation), a new product launch, or the perceived benefits of a large-scale IT transformation. In all cases plans are made and a business case put together. Invariably, new IT systems are needed to support the new venture, and a statement of requirements for the new system is produced as a preliminary to developing the applications that will meet them.

In theory it's a logical progression, but in practice things can easily go wrong at every stage. There are many examples of disastrous breakdowns in communication when the statement of requirements is drawn up and signed off by the business sponsors, even when the business and IT people concerned are all highly competent professionals. Why so? Simply because the two groups often 'speak different languages', with their own jargon, assumptions and modes of thought – and, often, blissful ignorance of the other side's. Further problems can occur at the next stage, when the senior IT people heading up the project spell out the requirements to the teams actually developing the new applications. The former often lack the latter's knowledge of the arcana of Java, C++, SAP, Oracle or whatever, and more communication lapses can all too easily creep in.

Finally, at the testing stage, test scenarios are often devised by IT specialists who lack a realistic appreciation of the real-life business situations likely to arise. The net result can be a new application that runs just fine for a while, but is highly prone to stumbling and breaking its leg – or its neck – when the unforeseen happens.
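
To make the gap concrete, here is a minimal sketch – in Python, with a hypothetical dispense_cash function and cash-machine rules invented purely for illustration – of the difference between the happy-path test an end-of-line tester might stop at and the real-life scenario tests a business-minded tester would insist on:

    import pytest

    NOTE_DENOMINATION = 20  # hypothetical: the machine stocks only one note size

    def dispense_cash(requested, balance):
        # Return the amount to pay out, refusing any invalid request.
        if requested <= 0:
            raise ValueError("amount must be positive")
        if requested > balance:
            raise ValueError("insufficient funds")
        if requested % NOTE_DENOMINATION != 0:
            raise ValueError("amount cannot be made up from stocked notes")
        return requested

    def test_withdrawal_happy_path():
        # The test an implementation-focused tester might stop at.
        assert dispense_cash(100, balance=500) == 100

    def test_overdrawn_request_is_refused():
        # Real-life scenario: the customer asks for more than they have.
        with pytest.raises(ValueError):
            dispense_cash(600, balance=500)

    def test_unpayable_amount_is_refused():
        # Real-life scenario: an amount the machine cannot assemble from its notes.
        with pytest.raises(ValueError):
            dispense_cash(55, balance=500)

The happy-path test alone would pass, and the application would indeed 'run just fine for a while'; only the scenario tests probe what happens when the unforeseen arrives.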
