A New Model of Testing
Don’t we know everything there is to know about testing? The response from pretty well everyone who knows anything about testing is ‘certainly not’. I am proposing a New Model because I believe that the testing world is being shaken up quite dramatically. The current confused state of affairs [1,2] could mean that some testers will lose their jobs or be reassigned to do other things, and some of the value that testers add (but few can articulate, by the way) will be lost. The software industry will be the poorer for it.
The current perspectives, styles or schools of testing will not accommodate emerging approaches to software development such as continuous delivery and, for example, new technologies such as Big Data, the Internet of Things and pervasive computing. These approaches require new test strategies, approaches and thinking. Our existing models of testing (staged, scripted, exploratory, agile, interventionist) are mostly implementations of testing in specific contexts.
I believe there is an underlying model of testing that is context-neutral, and I have tried to shed some light on what this might be by postulating the Test Axioms. The Axioms are an attempt to identify a set of rules or principles that govern all testing. Some people who have used them think they work well. They don’t change the world; they just represent a set of things to think about, that’s all. But, if you accept them as true, then it becomes possible to avoid the quagmire of debates about scripted versus unscripted testing, the merits and demerits of (current) certifications, the value of testing and so on.
The model of testing presented in this paper is an extension of this thinking. The model represents the thought processes that I believe are going on in my own head when I explore and test. You might recognise them and, by doing so, gain a better insight into how you test. I hope so. As George Box said, ‘essentially, all models are wrong, but some are useful’. This model might be wrong, but you might find it useful. If you do find it useful, let me know. If you think it’s wrong, please let me know how I might improve it.
This paper presents an alternative view of the core activities of testing and a New Model of it. The aim of the paper is to make this model available and, through discussion and challenge, to improve it or kill it. It is a straw man. It is a model. It is wrong. It might be useful.
I will use my selected definition of testing and suggest a model based on a belief that ALL testing is exploratory.
When tests are performed on-the-fly, based on mental models, the thought processes are not visible to others; the thinking might take seconds or minutes. At the other extreme, complex systems might have thousands of things to test in precise sequence, in complicated, expensive, distributed technical environments with the collaboration of many testers, technicians and tool-support, taking weeks or months to plan and apply.
Depending on the approach used, very little might be written down, or large volumes of documentation might be created. I’ll call the environmental and documentary aspects ‘test logistics’. These are logistical challenges, not testing challenges. The scale and complexity of test logistics can vary dramatically, but the essential thought processes of testing are the same in all environments.
So, for the purpose of the model, I am going to ignore test logistics. Imagine that the tester has a perfect memory and can perform all of the design and preparation in their head. Assume that all of the necessary environmental and data preparations for testing have been done, magically. Now we can focus on the core thought processes and activities of testing.
The model assumes an idealised situation (like all models do), but it enables us to think more clearly about what testers need to do.
At the most fundamental level, all testing can be described thus:
1) We identify and explore sources of knowledge to build test models
2) We use these models to challenge and validate the sources
3) We use these models to inform (development and) testing.
I make a distinction between exploration and testing. The main difference from the common testing view is that I will use the term Exploration to mean the elicitation of knowledge about the system to be tested from sources of knowledge.
There are two modes of thinking in our test approach – exploration and testing – that have distinctly different goals. By separating the two, we allow our minds to focus on the different goals at hand. Our thinking is clearer because our judgement on whether a source is reliable is not clouded by whether (or not) we have found a good test of the system (and vice versa). This is not an argument for staged testing. Rather, I make the case for clear thinking, depending on what your goal is at the time – creating good models from trusted sources or creating and applying effective tests.
We start by exploring our sources: we formulate models; we use the models to challenge our sources through examples, improving both our sources and our models. When we are satisfied that a model is adequate, we use the model to inform our testing. I use the term ‘inform’ deliberately. The model may be formulated in such a way that test cases are readily obtained. Some models, for example state diagrams, boundary values or decision tables, expose test cases readily. Other models, such as checklists of risks or design heuristics, require further thinking. For example, ‘which tests will best demonstrate whether a mode of failure is possible or likely?’
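To make the idea of a model that ‘exposes test cases readily’ concrete, here is a minimal sketch (not from the paper; the names and the age-range example are illustrative) of boundary value analysis: once the model is expressed as a valid input range, the test inputs fall out mechanically.

```python
def boundary_values(low, high):
    """Derive the classic boundary test inputs for an inclusive range [low, high]:
    just below, on, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def is_valid(value, low, high):
    """The model itself: a value is valid iff it falls within the range."""
    return low <= value <= high

# Model: an age field that accepts 18..65 inclusive.
# The model alone tells us both the inputs to try and the expected outcomes.
cases = [(v, is_valid(v, 18, 65)) for v in boundary_values(18, 65)]
for value, expected_valid in cases:
    print(value, "valid" if expected_valid else "invalid")
```

A checklist of risks, by contrast, yields no such mechanical derivation: the tester must still judge which tests would expose each mode of failure.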
Some (perhaps most) mental models cannot easily be described. They could be based on our experience, imagination, prejudices or biases. They might exist only in our subconscious mind. There may be several, or several thousand, different visualisations, patterns or forms that our models might take. The workings of our brains are still a mystery. There might be a more satisfactory description of how brains work in the future, but right now, for the purpose of this paper, we need only believe that models are formulated in the brain of the tester.
The schematic above illustrates the two modes of thought – exploration and testing – governed by the mission of the tester. Judgement is required when moving from one mode to the other. The two modes of thought represent two different processes followed by testers. I describe each in more detail in the full paper.
Without any further explanation, here is the full model. I’m not going to explain it here, but if you are interested in reading a full description, you can download it from the link below.
I believe that our existing models of testing are not fit for purpose: they are inconsistent, controversial, partial, proprietary and stuck in the past. They will not support us as new technologies and approaches emerge.
A New Model of testing might be a useful framework for thinking about testing and how testers think. I have tried to be consistent with the intent and content presented in the two pocketbooks [3, 4]. Some more obvious challenges to the model have been considered and discussed.
The certification schemes that should represent the interests and integrity of our profession don’t, and we are left with schemes that are popular, but have low value, lower esteem and attract harsh criticism. My goal in proposing the New Model is to stimulate new thinking in this area.
The model has been presented to substantial conference audiences in Finland, Poland and the UK during April, May and June this year. It was challenged and debated by test managers, senior testers and consultants in a workshop at the Test Management Summit in April. The feedback and response has been notably positive in all cases. I am planning to present the New Model at several more public and company-internal conferences in the UK and elsewhere during 2014.
This is a work in progress. I am actively seeking feedback and guidance on the New Model and the narrative in the full paper at: http://dev.sp.qa/download/newModel.
1. Testing is in a Mess, Paul Gerrard, http://gerrardconsulting.com/?q=node/591
2. Will the Test Leaders Stand Up?, Paul Gerrard, http://gerrardconsulting.com/?q=node/621
3. The Testers Pocketbook, Paul Gerrard, http://testers-pocketbook.com/
4. Business Story Pocketbook, Paul Gerrard & Susan Windsor
EuroSTAR Conference Chair 2014, Paul Gerrard
Paul Gerrard is a consultant, teacher, author, webmaster, developer, tester, conference speaker, rowing coach and a publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, and has occasionally won awards for them.
Educated at the universities of Oxford and Imperial College London, Paul won the EuroSTAR European Testing Excellence Award in 2010. In 2012, with Susan Windsor, he co-authored “The Business Story Pocketbook”.
He is Principal of Gerrard Consulting Limited and is the host of the UK Test Management Forum and the UK Business Analysis Forum.