G(r)ood Testing Volume 9 – Test Strategy Revisited
Last year an English colleague spent a few days in the Netherlands. This was a good opportunity to meet him and exchange some ideas. We met in Café American in the heart of Amsterdam. While enjoying a good cup of coffee, we exchanged project experiences and excitedly told each other about our activities. Soon our discussion came to the core topic of the day: agile versus methodical testing.
We discussed practical ways to create an effective testing strategy. It seems to be a recurring theme, because many testers wonder how their test strategy will change if they move to agile. If I were to create a test approach by the book, I would perform a risk analysis and decide what quality attributes I want to cover with my tests. I would define test types and maybe even assign which test design techniques to use while testing the various system components.
In our company it is common practice to facilitate an interactive Product Risk workshop. This yields a Product Risk Matrix (PRIMA) that includes input from the stakeholders and creates a collective understanding of the items that are important to test. But does this work? Does this approach really yield the results that we pursue as testers? I think it sometimes does, but certainly not always.
There are two major bottlenecks. First, we have to translate the results of the PRIMA into testing activities. If we do not do this extremely well, our stakeholders will fail to understand how the tests align with their input. Consequently, we will lose their trust and attention. Second, for the stakeholders, quality attributes and IT itself are abstract concepts. We must realize that they are not IT professionals. “I just want it to be fast,” I heard a business manager state in a meeting. “Now don’t you start asking me about queries again!”
When the waiter brought us our second cup of coffee we were on a roll. Both my colleague and I recognized the above situation; we devised a more user-oriented approach. Would that work better? The requirements were very clear:
An effective strategy should clearly be based on the input from the stakeholders,
It should fit within an agile context,
It should give direction to the tester on what to test, but leave him/her the freedom to define tests on the basis of his/her test, system and domain knowledge
We defined three steps to create a test strategy that meets the above requirements:
Step 1: User Analysis.
Find out who uses the system. Define user profiles. Discuss them with the business and create personas. Dividing the population of users into groups will gain you a lot of understanding of what they expect from the system, and it will help you create personas. “A persona is a description of a fictional person representing a user segment of the software you are developing”, says Dr. Charles B. Kreitzberg. “Of course, the word ‘fictional’ applies to the person, not the description; that should be as grounded in reality as possible.” Personas are a well-known tool in both UX design and marketing, so maybe information is already available within your organization.
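As a minimal sketch of this step, personas can be captured as simple, shareable records. The fields and the two example personas below are my own hypothetical assumptions; real personas should come out of the sessions with the business.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A fictional person representing a segment of the user population."""
    name: str
    segment: str
    goals: list          # what this user wants to achieve with the system
    expectations: list   # what would make the system feel right to them

# Hypothetical examples -- real personas must be grounded in business input.
power_user = Persona(
    name="Paula",
    segment="daily professional user",
    goals=["process a batch of orders quickly"],
    expectations=["keyboard shortcuts", "fast screen changes"],
)
casual_user = Persona(
    name="Carl",
    segment="occasional consumer",
    goals=["check an order status once a month"],
    expectations=["obvious navigation", "no manual needed"],
)

personas = [power_user, casual_user]
for p in personas:
    print(f"{p.name} ({p.segment}): expects {', '.join(p.expectations)}")
```

Keeping personas in such a lightweight form makes it easy for testers to refer back to them when deriving qualifiers and disqualifiers later on.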
Step 2: Qualifiers.
Organize a workshop to learn what the solution should be like in order to prompt a WOW effect in the user. Qualifiers are those attributes of the system that make users really appreciate it. In a mobile context, the qualifiers are what trigger you to show the app at a birthday party: “Have a look, I found a cool app!”
Identifying the qualifiers can be done in a brainstorm or brown-paper session, or by drawing a mind map collectively. When doing so, we touch upon the field of the business analysts, so why not invite them to the session as well? They may seem reluctant at first and state that everything has already been incorporated into the design. Yet these sessions prove to provide useful insights for both testers and stakeholders. I have done sessions where testers suddenly realized that certain qualifiers were only important for a limited group of users, and that sometimes different personas had contradictory needs.
Even if there is no design, these sessions can be fruitful. Based on the identified qualifiers we can derive the positive or hygiene tests, which we use to verify whether the application does what it is supposed to do.
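To make this concrete, here is a sketch of how identified qualifiers might be tied to positive (hygiene) checks. The qualifier names and the stubbed measurements are hypothetical; in a real project each check would measure the actual application.

```python
def search_time_ms():
    return 420   # stub: would measure the real app's search response time

def checkout_steps():
    return 1     # stub: would count the real app's checkout steps

# Hypothetical qualifiers from the workshop, each paired with a positive check.
qualifiers = {
    "fast search": lambda: search_time_ms() < 1000,
    "one-tap checkout": lambda: checkout_steps() <= 1,
}

def run_hygiene_tests(qualifiers):
    """Verify each qualifier holds; a pass confirms the WOW factor is present."""
    return {name: check() for name, check in qualifiers.items()}

results = run_hygiene_tests(qualifiers)
for name, passed in results.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

Because each test is named after a qualifier the stakeholders themselves proposed, the test report speaks their language rather than that of quality attributes.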
Step 3: Disqualifiers.
In the third step we look at the properties, omissions or errors that will result in a negative experience. In a mobile context, the disqualifiers are what make you delete the app from your phone because it is trash. Information about the disqualifiers will help you define aggressive tests. It is up to the tester to define effective attacks and prove that the application does not actually exhibit the negative behavior.
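The attack idea can be sketched as follows. The disqualifier, the stubbed `submit_order` function and its validation rule are all hypothetical; each attack tries to provoke the negative behavior, and the application “passes” when the attack fails to trigger it.

```python
def submit_order(quantity):
    # Stub for the system under test: it should reject nonsense input.
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return "order accepted"

def attack_negative_quantity():
    """Disqualifier: 'the app silently accepts a garbage order'."""
    try:
        submit_order(-5)
    except ValueError:
        return True    # the app defended itself: disqualifier absent
    return False       # the attack succeeded: disqualifier present

print("negative-quantity attack repelled:", attack_negative_quantity())
```

Note the inverted logic compared to hygiene tests: here a passing test is evidence of the *absence* of behavior that would make users throw the app away.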
There are various ways to complete these steps. You can hold interviews, or brainstorm sessions like the previously mentioned brown-paper or mind-map sessions. You can include users, IT or business managers: whoever has an opinion on quality and testing. If you want to try it, read some more about the Kano analysis technique; it takes a comparable approach. I have given several workshops in which participants got hands-on experience with this approach. You can view my session slides on my blog.
I believe this approach is useful because, if we report our test results in terms of the qualifiers and disqualifiers, our stakeholders will understand their value. Passed tests will confirm the presence of qualifiers that make the user happy, and will indicate that the app is not likely to be put out with the trash. The above approach forces testers to be critical and creative, because ultimately testing needs to be thorough. That remains. But, my English colleague and I concluded while ordering our third cup of coffee, it puts testing in the spotlight. The business is involved, understands what we are testing for and is watching us closely. Just like we want it.