How do you evaluate testers? by Shmuel Gershon
I frequent many online forums. It’s a great way to grow and learn, as the discussions are fantastic opportunities to challenge ourselves with questions we don’t face in our day-to-day work. I also enjoy the questions that I do encounter in my day-to-day work, because when answering them (or reading answers) for a context different from mine, I discover new ways of thinking about matters I handle out of routine.
In English, an interesting site is the “Testing.StackExchange” Questions & Answers site. Unfortunately the site is less active than it could be, but there are good questions and very good answers there. Even some big names like Alan Page, Michael Bolton, and BJ Rollison answer questions there, so it’s a good place to ask yours.
This week an interesting question popped up on the Testing.StackExchange forum: how can you evaluate the performance of a tester? How can you compare testers’ efficiency?
This is a question that entertains me once a year, when my company does employee evaluations, and my attempt at answering it is below.
Can you suggest other points of view? What else would you recommend considering? How do you evaluate testers?
The question goes something like this:
If I assign one requirement to two test engineers, one tester will come up with 10 test cases for the given requirement and the other will come up with 15. Both of them claim to cover all the scenarios with their tests.
How can I decide which one is better? Apart from considering the minimum number of test cases…
Are there other factors for deciding who the more efficient tester is?
If you don’t want to consider the one with the minimum number of test cases, you could consider the one with the maximum number instead… :)
Jokes aside, trying to determine the efficiency of testers based on numbers like that will lead you very far from the answer you are looking for. For instance, to be simplistic, both testers could be running the exact same tests, just dividing them differently into 10 or 15 cases. Or both testers could be doing a terrible job, and the mere numbers won’t tell you that. The number of tests executed or planned does not show the contribution of a tester to a project. By counting test cases you are looking at something very superficial.
In order to decide which tester is better, we have to do it in the same way we decide which of our friends is the best friend. It takes time and intimate acquaintance.
Which tester gives better and more useful information to managers? To programmers? To peers? Which one communicates better? Which one is funnier? Which one makes the job easier for the rest of the team?
There’s no single discipline in which you can rate the testers fairly. One may find the most bugs, another prevents the most bugs, another may help the most peers, another has a better relationship with programmers. And the other one, that one in the corner who looks less efficient than all the other testers… well, that one is doing the support work so all the other testers can shine.
Think of all the dimensions in which testers affect a product or project. You have to subjectively study the performance of the testers in all these dimensions in order to compare them.
Testing skills, communication skills, teaching skills, writing skills, team skills, truth skills, learning skills…
Yes, it sounds difficult. Maybe because it is difficult.
But for any arbitrary measurement you pick to make the task easier instead, you may as well draw name slips randomly from a hat.
While we are at it, do we really need to know which one is the most efficient? Does this comparison benefit our judgment and decisions (and consequently the team and product)?
Done wrongly, you may end up with a group of entirely non-efficient testers. Again, think of the similarities with your group of friends: if you propose a clear-cut competition among your pals to determine which one is your best friend, some of them will stop being your friend just for your proposing it, and others will become worse friends precisely by trying to meet the defined criteria.
If you have to rate, think about evaluating teammates individually rather than comparatively. It may show you facets of your team you hadn’t noticed. A conversation with each tester about his own performance can do wonders, too.
And do you really care which one is more efficient? Would you prefer efficiency over honesty? Over willingness? Over results?
Last, if either of the two testers really tells you he “covered all the scenarios”, then he is not a very efficient tester… :) An efficient tester knows it is impossible to cover all scenarios, even more so when just looking at a requirement (before actually running the app to get a feel for it).
Well, that’s my point of view, and I invite dissent and discussion.
Write your own opinion below :)