Testing waterfallacy (1 of 3)
In talking with other testers about their experiences on projects that apply iterative methods, I often hear about a lot of friction in aligning their work with that of developers. Old habits of Us and Them do not disappear overnight simply by placing all disciplines in the same team. Testers struggle to get a testable product in the first stages of an iteration, and have a hard time catching up near the end.
It doesn’t have to be like that at all. Collaboration is a matter of give and take. Showing an interest in each other’s work is a necessary first step. Second is the realization that openness in a team is a must. Too often I hear that testers have limited access to unit tests. At the very least, one should be able to read all test code, to minimize overlap in test effort.
Code should be readable, even for a tester without a programming background, given a short walkthrough by the programmer. (If not, then it’s probably not going to be easy to maintain the code at a later date.) Any tester worth his or her salt should be able to review whether the tests are adequate in terms of covering decision paths and boundary values. Quite likely, the tester can add value by suggesting improvements and additional test cases. If those were adopted into the project, to be run at every build of the system, it would strengthen the base of the so-called testing pyramid, and improve testing overall in the project.
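As a sketch of what such a review might turn up, suppose the team has a hypothetical discount function with a threshold at 100 (the function and its rule are invented for illustration). A tester could notice that only typical values are tested and suggest adding the boundary values around the threshold:

```python
def discount(order_total):
    """Hypothetical business rule: 10% off orders of 100 or more."""
    return 0.10 if order_total >= 100 else 0.0

def test_discount_boundaries():
    # A reviewing tester might find only "typical" values (say,
    # 50 and 150) covered, and suggest these boundary cases:
    assert discount(99.99) == 0.0    # just below the boundary
    assert discount(100.0) == 0.10   # exactly on the boundary
    assert discount(100.01) == 0.10  # just above the boundary

test_discount_boundaries()
```

Tests like these are cheap for a developer to adopt, and once they run at every build they catch regressions at the threshold without any manual effort.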
In an ideal world, testers and developers make a habit of designing tests together, possibly pair programming from time to time. In my experience, testers often have to prove themselves to the team, before they are consulted early enough in the design of tests at the unit and integration levels.
The integration levels (unit integration and system integration) are especially interesting for testers. Most business logic will be checkable at this level. It would be a shame if it were excluded from automated testing in the style of unit testing, just because it involves more than a single unit. Unit testing is just a label; you can use the tools for it at higher levels as well. Experience has shown that numerous defects are inserted precisely at this integration level, and that it is hard and time-consuming to expose them by end-to-end testing or through a GUI. Even worse, such tests tend to be very brittle, and hard to maintain.
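A minimal sketch of what this looks like in practice, with classes invented for illustration: a service is tested together with a real in-memory repository instead of a mock, so the same unit-testing tools exercise behavior across two units at once.

```python
class InMemoryOrderRepository:
    """Illustrative repository; a real one might wrap a database."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, total):
        self._orders[order_id] = total

    def get(self, order_id):
        return self._orders[order_id]

class OrderService:
    """Illustrative service holding the business logic."""
    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order_id, total):
        if total <= 0:
            raise ValueError("order total must be positive")
        self._repository.save(order_id, total)

def test_order_is_stored_via_real_repository():
    # No mocks: the assertion checks behavior across both units,
    # which is where integration defects tend to hide.
    repo = InMemoryOrderRepository()
    service = OrderService(repo)
    service.place_order("A-1", 42.50)
    assert repo.get("A-1") == 42.50

test_order_is_stored_via_real_repository()
```

The test still runs in milliseconds at every build, but unlike a GUI or end-to-end test it is not brittle: it breaks only when the collaboration between the two units actually changes.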
Now suppose you are a tester in a team that does not automate many tests at the integration level. My best advice is to discuss this in the team, and not to accept quick fixes. It’s very frustrating for a tester to hear that a bug was already fixed, and no test was added to the project. That means that you would need an old version of the product to reproduce the defect, and that manual testing of the fixed application only proves that you cannot easily reproduce the aberrant behavior. It’s much, much better for everyone’s confidence in the solution if you can agree on something that is called ‘defect TDD’:
1. When a defect is found, tester and developer discuss what the most appropriate spot is for a test to expose the defect. Quite often, the defect can and should be caught at the unit / unit integration level.
2. The new test is written, and it is run to see that it fails.
3. The code is fixed and the test passes.
4. Refactor if possible, and re-run the tests.
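As a sketch of the cycle, take a hypothetical off-by-one defect (the function and the bug are invented for illustration): a paginator reported 0 pages when there were items left over beyond a full page. Steps 1 and 2 produce a unit-level test that fails against the buggy code; step 3 is the fix shown here.

```python
def page_count(item_count, page_size):
    # Fixed implementation (step 3). The buggy version used plain
    # integer division (item_count // page_size), which returned
    # 0 for e.g. 5 items with a page size of 10.
    if page_size <= 0:
        raise ValueError("page size must be positive")
    return -(-item_count // page_size)  # ceiling division

def test_partial_page_is_counted():
    # Steps 1-2: this test exposed the defect and failed against
    # the buggy code before the fix was written.
    assert page_count(5, 10) == 1

def test_exact_multiple_of_page_size():
    # A neighbouring case, to guard the fix during refactoring (step 4).
    assert page_count(20, 10) == 2

test_partial_page_is_counted()
test_exact_multiple_of_page_size()
```

Once this test is in the build, the defect cannot silently return, and nobody needs an old version of the product to demonstrate what went wrong.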
Unlike regular TDD, this approach for defects comes with very little overhead. (Because the code is already complete, there is no need to write mocks for classes that are about to be written. As noted above, it’s okay if the test involves more than one class.)
Once a team is accustomed to these practices (sharing information about unit and integration tests, ensuring that a defect stays fixed, placing more and more tests at the integration level, etc.), testers will find that they have more control over their situation. There should be no debug cycle near the end. In fact, it ought to become harder to find defects.
Admittedly, a situation may still arise where a large part of the functionality is declared “ready for test” in the last week of the iteration. The point is that such functionality will already have undergone a lot of testing. Also, the team will be more quality-aware, and may see testing more as a team responsibility. With the provision that no one tests their own work, developers can certainly chip in to reduce the testing workload. Developers will be able to do some non-confirmatory testing, especially if their quality awareness has reached a point where they take pride not just in their code but also in the unit and integration tests that come with that code.
As I said, collaboration is a matter of give and take. In the next part of this series, I will discuss ways in which we as testers can make life easier for developers and designers.