Test Automation: How Far Should It Go?
In the old days (and I have been in the profession for about 25 years) testing of code modules was an intensely manual process. Test harnesses needed setting up by hand, then populating with carefully chosen input values and laboriously calculated expected outputs. If you wanted to monitor test coverage, this generally had to be done with trace statements and some kind of manual analysis after the event. Results were output in formats that needed skilled decryption before a verdict could be reached as to whether the test had passed or not.
The end result was (usually) code of good quality, because it had had all this care and attention lavished on it, but also a sense that there were better things to do: a productivity ratio of 3:1 test:coding was not considered good! As the kind of work that novice programmers were often put onto, it could also be damaging from a morale point of view. But we could all see the potential for a high degree of automation…
Surveying the scene now, 25 years on, it is possible to see massive changes: test harnesses can be generated in a matter of seconds, results are output in a clean and readable form, coverage analysis is there at the end of the test run, and, most crucially, the opportunity to generate tests which ‘pass’ is now on the table. What I am referring to here is the option to have the test tool automatically select inputs to drive code down a full set of paths, and even calculate what the expected results should be. Is this going too far? Surely the point of testing code is to demonstrate that it does (or doesn’t) do what it is supposed to do. Does not the process described above have the potential to lead to an entirely mistaken sense of confidence? Are there circumstances when testing in this way has validity?
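The risk described above can be made concrete with a small sketch. This is a hypothetical illustration, not the behaviour of any particular tool: the function `discount` and its supposed specification are invented for the example. The point is that when the ‘expected’ outputs are calculated from the implementation itself, the generated tests achieve full branch coverage and all pass, even though the code violates its specification.

```python
# Hypothetical sketch of auto-generated 'passing' tests.
# The tool picks inputs to cover both branches, then records the
# implementation's own outputs as the expected results.

def discount(total):
    # Code under test. Suppose the spec says: 10% off orders OVER 100.
    # The programmer has written >= instead of > -- a real defect.
    if total >= 100:
        return total * 0.9
    return total

# Inputs a tool might select automatically to exercise both branches
generated_inputs = [99, 100]

# 'Expected' outputs computed from the implementation itself
generated_cases = [(x, discount(x)) for x in generated_inputs]

# The generated test suite: full branch coverage, guaranteed green,
# yet silent about the off-by-one in the branch condition.
for x, expected in generated_cases:
    assert discount(x) == expected
print("all generated tests pass")
```

A human-written test, starting from the specification rather than the code, would have asserted `discount(100) == 100` and exposed the defect immediately; the generated suite instead enshrines the bug as expected behaviour.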
In a world where lives and livelihoods depend on decisions made within computers, driven in turn by their software, is it a good or a bad thing to be trusting software to test software?
Ian Gilchrist entered the software profession as an Assembler language programmer in the early 1980s. He has worked at various levels, including project manager, in a variety of environments since then, using a variety of languages including Fortran, Ada and C. Most recently he has been involved in the production of IPL’s testing tools and their application to safety-related and mission-critical projects.