Track Talk

In Praise of Manual Testing

Sue Atkins

14:15-14:45 CEST Wednesday 29th September

With the current drive to automate so much, there is a concern that we will lose the valuable insights that manual testing provides. To understand the value of manual testing, we first need to understand how humans learn and interact with the world – we learn by doing, finding out what ‘feels’ right.

As an example, I recently installed a washbasin and tap in my bathroom – on paper the design worked. Mechanically, it passed all the checks. But ultimately it failed the human use test – you can’t get your hands under the flow of water. Added to that, if you fill the basin to a useful level you may get splashes over the side (leading to water ingress). The manual (in-person) test found these issues very quickly (and with a very simple use case).

In this presentation I will outline a number of factors that contribute to the strength of manual testing. These include:

Human perceptions of systems – It is interesting how we view and interpret the systems and products we work with. In some instances we may look upon them as tools – products used to extend our capabilities. At other times we may view them as places – portals or areas where useful information lies. This shift in view is difficult to express in any way other than through human experience.

Being ‘Present’ – Julian Harty prefers the term In-Person Testing when referring to human-based test activities. Personally, I believe this is a far more appropriate term than Manual Testing, as it highlights the need for the tester to be fully there, fully ‘present’ in the moment, bringing all their awareness of where they are and what they’re doing to bear on the system under test.

The use of Personas – Humans have a vast library of experiences to pull from – if you were asked to use a system like a child or someone with hearing loss, you would be able to draw on those experiences and adapt the test accordingly.

In short, manual testing, in all its forms (from scripted testing to exploratory sessions), provides a human insight into our systems under test that is difficult (if not impossible) to obtain from automated testing.