20 November 2007

The testing time bomb

By Andrew Clifford

Testing is an increasingly important part of IT. We face serious problems with the long-term management and support of systems because testing tools are not based on standards.

I once investigated time usage in a systems development department: programming took 12% of the time and testing took 40% (the rest was analysis and support).

In more recent work, I have been using test-driven development and automated regression tests. In our source code repository, 9% of the content is documentation, 30% is functional code, 6% is test code, and 55% is test data.

These examples are typical. We produce more tests than programs, and spend longer testing than programming.

Our emphasis on testing is growing. Testing has moved on: from debugging, through demonstrating requirements, to test-driven development. Automated regression testing is becoming more important as our portfolios of systems grow and age.

There are many tools to help us test. JUnit and similar tools help unit testing. There are tools for planning tests, tracing requirements, running high-level system tests, and checking test coverage. There are session capture and replay tools. There are tools to simulate multiple users for performance and stress testing.
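
To make that dependency concrete, here is a minimal JUnit 4 test. The PriceCalculator class and its discount rule are invented for illustration; the point is that the input data, the operation and the expected output all live inside JUnit's own idiom of annotations and assertions.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class under test, included only so the example compiles.
    class PriceCalculator {
        double priceWithDiscount(double price) {
            return price >= 100.00 ? price * 0.9 : price;
        }
    }

    public class PriceCalculatorTest {

        // The input, the operation and the expected output are all bound
        // to JUnit's annotations and assertion methods. The test cannot be
        // run, or even read mechanically, without JUnit itself.
        @Test
        public void discountAppliesToOrdersOfOneHundredOrMore() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(90.00, calculator.priceWithDiscount(100.00), 0.001);
        }
    }

Multiply that coupling by hundreds of tests and the test suite becomes an asset tied to one tool.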

These tools make testing more effective and more efficient, but they create a dependency between the system under test and the testing tool. Because testing is critical to ongoing support and to the long-term viability of systems, using a testing tool ties a system to the upgrade path and success of that tool's vendor (or open source project). If the tool fails to stay current, we have to redevelop the tests, which could easily cost twice the effort we put into programming.

Despite this importance, we give the choice of testing tools relatively little attention. We spend far longer arguing about design approaches, application frameworks and programming languages, even though we will use testing tools for longer and arguably depend on them more.

There are three ways out of this problem.

The third option interests me most. We need a standard, implementation-neutral syntax for tests to remove our dependency on specific tools. This would be a complete specification of the input data, operations, and expected outputs, not just documentation of test requirements.

Testing tools could then work from this standard specification, rather than holding tests in formats of their own.
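
As a rough sketch (the record layout and the little runner below are illustrative assumptions of mine, not an existing standard), a tool-neutral test might reduce to plain data that any tool could read and execute:

    import java.util.Arrays;
    import java.util.List;

    // Sketch of an implementation-neutral test record: input data, the
    // operation to perform, and the expected output, held as plain data
    // rather than tool-specific code. All names here are hypothetical.
    public class NeutralTestSketch {

        static class TestCase {
            final String operation;
            final double input;
            final double expected;

            TestCase(String operation, double input, double expected) {
                this.operation = operation;
                this.input = input;
                this.expected = expected;
            }
        }

        // Stand-in for the system under test.
        static double priceWithDiscount(double price) {
            return price >= 100.00 ? price * 0.9 : price;
        }

        // A testing tool could load records like these from XML, a database
        // or flat files and run them; the tests would not depend on which
        // tool executes them.
        public static void main(String[] args) {
            List<TestCase> cases = Arrays.asList(
                new TestCase("priceWithDiscount", 100.00, 90.00),
                new TestCase("priceWithDiscount", 50.00, 50.00));

            for (TestCase tc : cases) {
                double actual = priceWithDiscount(tc.input);
                boolean passed = Math.abs(actual - tc.expected) < 0.001;
                System.out.println(tc.operation + "(" + tc.input + ") expected "
                    + tc.expected + ", got " + actual
                    + (passed ? " PASS" : " FAIL"));
            }
        }
    }

The exact format matters less than the principle: the tests would outlive any one tool.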

Testing tools are a time bomb waiting to wreck the long-term management and support of our systems. If we can find a way to standardise, we can reduce our exposure to this risk significantly.