Cutting through the noise: moving towards an objective discipline of testing
Combinatorial methods, multiplicative coverage, cause-and-effect graphing – the list of proposed test case design methods goes on. The debate on how best to test software looks set to continue indefinitely, as the advocates of one method remain unconvinced by the arguments put forward in favour of the others.
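To make the first of those methods concrete: pairwise (combinatorial) testing aims to cover every pair of parameter values with far fewer cases than the exhaustive cross-product. The sketch below is a deliberately naive greedy selector, not an optimal all-pairs algorithm, and the parameter names and values are invented for illustration:

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedily keep full combinations until every value pair across
    every two parameters appears in at least one kept test case."""
    names = list(params)
    # Every (parameter, value) pairing that some test case must cover.
    needed = {((a, va), (b, vb))
              for a, b in combinations(names, 2)
              for va in params[a] for vb in params[b]}
    suite = []
    for values in product(*params.values()):
        case = dict(zip(names, values))
        covered = {((a, case[a]), (b, case[b]))
                   for a, b in combinations(names, 2)}
        if covered & needed:      # keep only cases that add new pairs
            suite.append(case)
            needed -= covered
        if not needed:
            break
    return suite

# Hypothetical configuration space, for illustration only.
params = {"browser": ["Firefox", "Chrome"],
          "os": ["Linux", "Windows", "macOS"],
          "locale": ["en", "cy"]}
suite = all_pairs(params)
print(len(suite), "cases instead of", 2 * 3 * 2, "exhaustive")
```

Even this crude greedy pass covers all value pairs with fewer cases than the full product; dedicated tools do considerably better, which is precisely the kind of claim Jones argues should be assessed from first principles rather than anecdote.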
Llyr Wyn Jones recognises this impasse in the testing community. In an upcoming ebook series, A Critique of Testing, he considers the reason why it exists, and offers a means to cut through the ‘noise’ of the debate, providing testing with a solid basis.
Observing the deadlock reached when comparing methods of test case design, Jones concludes that methods of analysis tend to be subjective, a posteriori and anecdotal. Because of this, he argues, no consensus can be reached, as there are no first principles to build upon when answering the question at hand: how can one be sure that a piece of software has been fully tested?
The subjectivity Jones describes is the reason why defects slip through the net, even as the time and resources spent on testing increase. (Currently, testing can take up to 50% of the SDLC, while expenditure on testing has increased by 23% in the past year.)
Without formal principles, testers are marking their own homework, leading to “WYGIWYG” testing, where ‘what you get is what you get’, not what you wanted. Lacking a formalized discipline with which to assess their work, testers are manually creating test cases based on what they think needs testing, assuming that this constitutes 100% functional coverage. Instead, they need grounds to know when they have fully tested a system.
This is what Jones sets out to offer. He strips testing to its first principles, establishing a fundamental framework for objective analysis. Drawing on quantum mechanics, information theory and the notion of entropy in particular, Jones derives five primary tenets of testing.
He applies these a priori tenets consistently in assessing each test case design method, working from the foundations up. The principles of his favoured test case design method are therefore justified by definition, and one can be confident that this method – flowcharting – really offers the best achievable coverage (for reasons he explains, Jones does not believe exhaustive testing to be truly possible).
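Flowcharting derives test cases from paths through a model of the system under test. As a rough illustration of the idea (the flow graph below is a made-up login flow, not an example taken from Jones's book), each start-to-end path through the chart becomes one test case:

```python
def all_paths(graph, node, end, prefix=()):
    """Enumerate every start-to-end path in an acyclic flow graph;
    each complete path corresponds to one derived test case."""
    path = prefix + (node,)
    if node == end:
        return [path]
    return [p for nxt in graph.get(node, ())
            for p in all_paths(graph, nxt, end, path)]

# Hypothetical login flowchart (nodes and edges invented for illustration).
flow = {
    "start":       ["check_creds"],
    "check_creds": ["accepted", "rejected"],
    "accepted":    ["prompt_2fa", "skip_2fa"],
    "prompt_2fa":  ["end"],
    "skip_2fa":    ["end"],
    "rejected":    ["end"],
}

for case in all_paths(flow, "start", "end"):
    print(" -> ".join(case))
```

Because the test cases fall mechanically out of the model rather than out of a tester's intuition, one can reason about what the resulting suite does and does not cover – the kind of objective grounding Jones is after.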
By contrast, testing methods which rely on anecdotal, subjective analysis for support do not fare well – the reasons for their shortcomings (e.g. poor coverage) are laid bare, with the explanation of these shortcomings also explaining why they cannot be overcome without changing methods.
Amidst the quantum mathematics and conceptual analysis, Jones provides visual diagrams and gives each test case methodology a score out of ten, based upon the five criteria derived from first principles. The reader can therefore see for themselves the respective merits, and shortcomings, of each method considered, alongside the conceptual explanation for this.
By returning to first principles, Jones engages in a Critique in the full sense. Moving beyond the subjective accounts, he hopes to provide the discipline of testing with objective foundations comparable to “the bedrock … on which most mathematical and computational progress in the last century or so has been achieved.”
If you would like to read Jones’ innovative analysis of test case design methodology, as well as why he thinks flowcharting is best, read the primer to A Critique of Testing.
The full eBook A Critique of Testing is currently being written, and will be released in a series by Grid-Tools.
Reference: http://www.zdnet.com/software-testing-and-qa-budgets-keep-rising-with-more-emphasis-on-the-new-stuff-7000034517/
Llyr Wyn Jones is a Senior Programmer at Grid-Tools, the leading Test Data Management Company. He has invented, developed and implemented new software solutions for multiple worldwide clients. He tweets at @gridtools. Profile: https://www.grid-tools.com/llyr-jones/