Testing waterfallacy (2 of 3)

  • 10/09/2010
  • Posted by EuroSTAR

Make no mistake about the title. The style and attitude of testing that I advocate and demonstrate is a cooperative one. I strongly believe in a multi-disciplinary interpretation of testing. In my first post in this series, I said that showing an interest in each other’s work is a necessary first step. I will clarify this and take it a bit further.
As a tester, I have done all of the following:

• Provide input and feedback to improve requirements;
• Review source code, unit tests, designs and architecture;
• Analyze and fix defects;
• Write unit and integration tests.

Oh yeah, I’ve also done some system testing. My point here is that there is no reason to limit yourself to what is perceived to be a small part of the software development process.

We want developers to do a lot of testing. In return, people whose job title contains the word ‘Test’ ought to help colleagues without that label in their work.

Testers can and should involve themselves with:

• Gathering and improvement of requirements before they enter the iteration
• Application architecture
• Causal analysis that goes beyond reporting a defect

Of course, it’s easy to oppose such a broad interpretation of the testing field. People are prone to stick to what they think they are good at. However, with the fast pace of projects that we see today, it is of the utmost importance for anyone on the team to pick up those tasks that add the most value to the team at any given point in time.

Improving requirements

Testers tend to be good at this, as it involves analyzing documents and focusing on the bigger picture. In waterfall-style projects, testers find defects on documents that have already been processed into software. Such late feedback is rarely seen as helpful. In an agile context, where feedback is given prior to, or in the early stages of development, this timely feedback actually saves a lot of time.

On a past project, I saw a requirement for a visible cue if the first person handling an application had spotted signs of possible fraudulent behavior. Technically, it would have been possible to implement and test what that requirement specified. But would it really be desirable for the business? We were developing a workflow system, so I proposed handling suspected fraud as a special part of the workflow, to ensure it would be dealt with accurately.

Application architecture

In a similar fashion, the ‘helicopter view’ of testers can add great value when changes in application architecture are discussed. Consistency, testability, configuration management and expected time behavior are things to consider from a technical point of view. End user experience is another. For each of those five attributes, I have been in situations where I had to speak up to guard against a drop as a result of a proposed change. A quick fix can have a dramatic effect that is easily overlooked.
For example, I recently spoke with an architect about a performance hack: an object would be looked up through a table that was updated in a nightly batch. So, I asked him, if I add a new item, I can’t find it until I open the screen the following day? ‘That’s not what we want,’ he said, quickly sketching an update-action for the table.
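The stale-lookup problem behind that conversation can be sketched in a few lines. This is a hypothetical reconstruction, not the project’s actual code: the class, method, and field names are all mine.

```python
# Hypothetical sketch of the stale-lookup problem; names and structure
# are illustrative, not taken from the project discussed above.

class ItemLookup:
    """A denormalized lookup table, normally rebuilt by a nightly batch."""

    def __init__(self):
        self._rows = {}

    def nightly_rebuild(self, source_items):
        # Batch job: replace the whole table from the source of truth.
        self._rows = {item["id"]: item["name"] for item in source_items}

    def upsert(self, item):
        # The fix the architect sketched: also update the table
        # immediately on insert, so a new item is visible before
        # the next batch run.
        self._rows[item["id"]] = item["name"]

    def find(self, item_id):
        return self._rows.get(item_id)


lookup = ItemLookup()
lookup.nightly_rebuild([{"id": 1, "name": "existing item"}])

# Without the immediate update, a freshly added item is invisible
# until tomorrow's batch:
assert lookup.find(2) is None

# With the update-action, it shows up right away:
lookup.upsert({"id": 2, "name": "new item"})
assert lookup.find(2) == "new item"
```

The point of the question was exactly this gap: the cached table made reads fast, but until the extra update-action existed, anything inserted during the day was simply invisible.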

Causal analysis

Far too many testers consider it ‘not their responsibility’ to think about the causes behind defects. Quite likely, their managers feel that way too. However, in many cases just reproducing a defect and logging it accurately already requires some analysis anyway. If you see something go wrong in the user interface, you might be able to establish quickly whether or not the problem also exists at the database level. By sharing such knowledge, or at least your suspicions, you save time for your colleagues.
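That kind of quick check is often a one-off query. A minimal sketch, assuming a wrong total shows up on screen; the table, column names, and figures here are purely illustrative:

```python
import sqlite3

# Hypothetical check: the UI displays a wrong order total. Before filing
# the bug, query the database to see which layer holds the defect.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_lines (order_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO order_lines VALUES (?, ?)",
    [(42, 10.0), (42, 5.5), (42, 2.0)],
)

expected = 17.5           # the total the order lines should add up to
ui_total = 15.5           # the (wrong) value observed on screen

(db_total,) = conn.execute(
    "SELECT SUM(amount) FROM order_lines WHERE order_id = 42"
).fetchone()

if db_total == expected:
    print("Stored data is correct; the defect is in the UI layer")
else:
    print("Defect already exists at the database level")
```

Attaching even this small a finding to the defect report tells the developer where to start looking, instead of leaving them to rediscover it.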

Black box testing?

I rarely test an application without having seen at least something of the code. Of course, looking at the code and the unit tests may already be enough to spot some defects.
Do not fear that seeing the actual code will prevent you from doing proper “black box testing” afterwards. Knowledge about the chosen implementation often helps me refine my test cases and reduce duplication with developer tests. When pressed for time (aren’t we always?), I allocate less time to tidy code and more time to convoluted implementations.

Testing is risk-based, and a process of learning

It makes perfectly good sense to allocate one’s time based on an assessment of the technical risks and the value to the business. We make an initial assessment of those risks during iteration planning. However, our assessment of the risks evolves over time, as we learn more from people on the business side, or about how the team has tackled technical complexity.
In short, I base many of my tests not on the initial assessment of the risks, but on a view of how things have turned out.

Keep talking

Occasionally, discussing my findings about the risks has really helped my colleagues. New explanations from people on the business side revealed inaccurate assumptions that had been made. Comments I made about complicated implementations reminded others of their intention to refactor the initial solution.


As a tester, it pays to do stuff outside your field of expertise. You may lighten the load of others, and gain respect. Knowing how the developers have tackled a problem will help you in designing your own tests more effectively.

In the last episode of this series, I will explore ways in which organizations can facilitate agile testers.
