23 March 2010

Supertest

By Andrew Clifford

Fully automated regression tests are worth the high cost of creating them, but it takes confidence and discipline to realise their value.

I am feeling smug. I have just finished refactoring the test packs on our Metrici Advisor product, and, though I say it myself, they are now superb.

Before we did this work, the tests were pretty good. We had automated tests for every part of the system, from low-level code to web pages and user interaction. But there were a few problems. Although it is an OS-independent, Java-based system, a lot of the test code was Windows-specific. Many of the tests required detailed inspection of the output. The tests were split across a number of separate runs. Running them took 45 minutes or more, and you needed to be an expert in the system to interpret the results.

That has now changed. All 2500+ tests run from a single command on Windows or on Linux, and automatically indicate if and where there are any errors. The tests take about 15 minutes to run. (We run most of the tests using a home-written XML-based test harness, with plug-ins to test XML-based services and XSLT, and to retrieve, navigate and test the output from the web front-end.)
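
To give a flavour of what one of these checks looks like, here is a rough sketch of an XSLT test written with plain JUnit and the standard Java XSLT API. It is not our actual harness, and the stylesheet and sample input are made up, but the shape is the same: feed in a known document, transform it, and assert on the output.

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class XsltRegressionTest {

        // Apply a stylesheet to an input document and return the result as a string.
        private String transform(String stylesheetPath, String inputXml) throws Exception {
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(stylesheetPath));
            StringWriter output = new StringWriter();
            transformer.transform(new StreamSource(new StringReader(inputXml)),
                                  new StreamResult(output));
            return output.toString();
        }

        @Test
        public void reportStylesheetRendersTheTitle() throws Exception {
            // Hypothetical stylesheet and input, for illustration only.
            String result = transform("styles/report.xsl",
                    "<report><title>Quarterly review</title></report>");
            assertTrue("Title should appear in the rendered output",
                    result.contains("Quarterly review"));
        }
    }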

We made these changes because we want to carry out some major development on Metrici Advisor, potentially involving more developers, and we need to make running the tests as simple as possible. The tests allow us to respond to new requirements quickly, with little effort, and with confidence that changes will not break other parts of the system.

To realise this value, we now have to be prepared to rely on the tests. We have to be confident that their coverage is good enough that there is no point in running a few extra manual checks just to be safe.

We also need to be disciplined. We have to add tests for all new code, for any bugs we find, and for any other conditions that come to mind. If we cannot think how something can be tested, we have to redevelop the code so that it can be tested.
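
In practice the discipline is mechanical: when a bug turns up, we write a test that fails for that bug before we fix it, and the test stays in the pack forever. The example below is purely illustrative, not code from Metrici Advisor, with the method under test reduced to a self-contained sketch.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PercentageBugTest {

        // The method under test, reduced to a self-contained example. The
        // (hypothetical) bug: a zero total used to cause a division by zero.
        static int percentage(int part, int total) {
            if (total == 0) {
                return 0;
            }
            return (part * 100) / total;
        }

        // The regression test written when the bug was found. It failed against
        // the old code and now guards against the bug ever returning.
        @Test
        public void zeroTotalScoresZeroPercent() {
            assertEquals(0, percentage(5, 0));
        }
    }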

These fully automated regression tests are a significant asset. If we did not have them, we would spend much more money on manual testing, and would not have such a high quality product. However, if we had not built the tests at the same time as the code, I estimate it would take a couple of years to develop this depth of testing. Few organisations would justify this level of effort for retro-fitting automated regression tests to an existing system.

However, for any system that you intend to keep for some years, adding fully automated regression tests is both possible and worthwhile. Start by creating an almost empty test pack, with simple examples of each type of processing. Over time, add to it to cover all changes and bug fixes. This adds little or nothing to your testing costs. Gradually, the coverage of the automated tests will increase, and testing time and cost will fall as less and less functionality requires manual testing.
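
An almost empty pack can be as modest as one placeholder test per area of processing, gathered into a suite that runs from a single command. The sketch below uses JUnit 4's suite runner; the names are illustrative rather than a prescription. The point is that the structure exists from day one, so every change and bug fix has an obvious place to add its test.

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;
    import static org.junit.Assert.assertTrue;

    // A deliberately thin starting point: one placeholder test per area of
    // processing, run as a single suite.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({ StarterTestPack.XmlServices.class,
                          StarterTestPack.WebFrontEnd.class })
    public class StarterTestPack {

        public static class XmlServices {
            @Test
            public void simpleServiceCall() {
                // Replace with one real call into the XML service layer.
                assertTrue(true);
            }
        }

        public static class WebFrontEnd {
            @Test
            public void simplePageRetrieval() {
                // Replace with one real page retrieval and check.
                assertTrue(true);
            }
        }
    }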

As we have found, fully automated regression tests are very valuable, and are achievable even with complicated systems. All you need is clarity of purpose, confidence and discipline.