As EuroSTAR 20 Year Club member Paul Gerrard has long reminded us, testers use models all the time. We use visual models to map complex systems and requirements on whiteboards and during sprint planning meetings. When we spot unusual behaviour or a bug, we create “implicit” models of the system in our heads to work out what to test next. When defining test cases and expected results, we are creating a model of how the system should work.
Testing uses models all the time, but doesn’t always make these models “explicit”. This talk will explore what happens when testing makes its modelling processes explicit. It will discuss how the modelling work already performed in testers’ heads and on whiteboards can be formalised, exploring the value of these different model-based approaches. This value touches on every stage of testing and development, and typically includes:
- The maintenance of an agreed understanding of fast-changing systems. Creating “living documentation” reduces miscommunication and the impact of poor software requirements. Knowledge is also maintained over time, as it exists in artefacts as well as in people’s heads.
- Close collaboration with non-testers, including Business Analysts, Product Owners and Developers. “Business” stakeholders are typically already familiar with certain types of models, while formal models provide the logical precision developers need to understand complex systems. With different levels of abstraction, models can be used to collaborate seamlessly across roles, while individual stakeholders can drill into the level of detail they need to work as effectively as possible.
- Rapid test creation and maintenance. Formal, logically precise models offer scope to apply algorithmic test creation techniques. This automatically generates a set of functional test cases that can be optimised to cover the whole model, or to target certain areas. This can be a huge time-saver when compared to designing test cases one-by-one. It also helps boost overall test coverage.
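To make the idea of algorithmic test creation concrete, here is a minimal sketch (not any specific tool’s implementation) that treats a hypothetical login flow as a finite state machine and derives one test per transition, so that the generated suite covers the whole model. The states, actions and model itself are illustrative assumptions.

```python
from collections import deque

# Illustrative formal model of a login flow: (state, action) -> next state.
# These states and actions are hypothetical, chosen only for the example.
MODEL = {
    ("LoggedOut", "enter_valid_credentials"): "LoggedIn",
    ("LoggedOut", "enter_bad_credentials"): "LoggedOut",
    ("LoggedIn", "open_settings"): "Settings",
    ("Settings", "close_settings"): "LoggedIn",
    ("LoggedIn", "log_out"): "LoggedOut",
}

def path_to(model, start, target_state):
    """Breadth-first search over the model: shortest action sequence
    from `start` to `target_state`."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == target_state:
            return actions
        for (src, action), nxt in model.items():
            if src == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [action]))
    return None

def generate_tests(model, start):
    """Generate one test case per transition: walk to the transition's
    source state, then fire it. Together the tests cover every
    transition in the model."""
    tests = []
    for (state, action) in sorted(model):
        prefix = path_to(model, start, state)
        tests.append(prefix + [action])
    return tests

tests = generate_tests(MODEL, "LoggedOut")
for t in tests:
    print(" -> ".join(t))
```

A real model-based tool would optimise these walks (covering many transitions per test, or weighting risky areas), but even this naive generator shows how a suite falls out of the model rather than being designed case by case.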
- The ability to target testing effectively based on change. Creating central, formal models also creates a place to pool data from across the entire SDLC. Using interfaces like APIs, data can be gathered automatically and used to inform the central models. As the models change, testing can be fine-tuned to target the areas of changing systems where it will deliver the greatest value.
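As a hedged sketch of change-based targeting: suppose the central model records which areas of the system each test exercises, and change data (which could in practice be pulled automatically from a version-control or CI API) names the areas that just changed. Test selection then becomes a simple intersection. The coverage map, area names and test names below are all invented for illustration.

```python
# Hypothetical central model linking test cases to the system areas
# they exercise. In practice this map would be derived from the formal
# models and kept up to date automatically.
COVERAGE_MAP = {
    "test_login_happy_path": {"auth", "session"},
    "test_password_reset": {"auth", "email"},
    "test_invoice_totals": {"billing"},
    "test_report_export": {"reporting", "billing"},
}

def select_tests(coverage_map, changed_areas):
    """Return the tests that touch at least one changed area,
    ordered so that tests hitting the most changed areas run first."""
    scored = [
        (len(areas & changed_areas), name)
        for name, areas in coverage_map.items()
        if areas & changed_areas
    ]
    return [name for _, name in sorted(scored, reverse=True)]

# Example: a change confined to billing code selects only the
# billing-related tests, leaving the rest of the suite untouched.
print(select_tests(COVERAGE_MAP, {"billing"}))
```

The point is not the ten lines of Python but the shape of the workflow: once change data flows into a central model, targeting testing at the areas of greatest change becomes a query rather than a judgement call.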
Come and see this value, and more, and learn why testing should make its models more explicit!