Re-Thinking System-Level Testing – Introduction, Paul Gerrard

  • 06/10/2010
  • Posted by EuroSTAR

This is the first in a series of short essays in which I will set out an approach to test design, preparation and execution that involves testers earlier, increases their influence in projects, improves baseline documents and their stability, reduces rework and raises the quality of system and acceptance testing. The approach needs automated support, and I will propose an architecture for the next generation of test management tools. I hope that doesn’t sound too good to be true and that you’ll bear with me.

Some scene-setting needs to be done.

In this series, I’m focusing on contexts (in system or acceptance testing) where scripted tests are a required deliverable, providing the instructions (scripts, procedures or program code) used to execute tests. In this opening essay, I’d like to explore why the usual approach to building test scripts (promoted in most textbooks and certification schemes) wastes time, undermines the effectiveness of the tests and limits the influence of testers in projects. These problems are well known.
There are two common approaches to building scripted tests:

1. Create test scripts (manually or with tools) directly from a baseline (requirements or other specification documents). The scripts provide all the information required to execute a test in isolation.

2. Create tabulated test cases (combinations of preconditions, data inputs, outputs and expected results) from the baseline, together with an associated procedure used to execute each test case in turn (a sketch contrasting the two styles follows this list).
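To make the contrast concrete, here is a minimal sketch in Python. The function under test (discount_price) and its rules are invented purely for illustration, not taken from any real project. The first test hard-codes everything it needs, in the style of the first approach; the data-driven runner separates the procedure from the tabulated test cases, in the style of the second.

    # Hypothetical system under test, invented for illustration:
    # gold customers receive a 10% discount (prices held in integer cents).
    def discount_price(price_cents, customer_type):
        return price_cents * 9 // 10 if customer_type == "gold" else price_cents

    # First approach: each script embeds all the information for one test.
    def test_gold_customer_discount():
        assert discount_price(10000, "gold") == 9000

    # Second approach: test cases are tabulated as data
    # (description, inputs, expected result)...
    TEST_CASES = [
        ("gold customer gets 10% off", 10000, "gold", 9000),
        ("standard customer pays full price", 10000, "standard", 10000),
        ("zero price stays zero", 0, "gold", 0),
    ]

    # ...and a single generic procedure executes each case in turn.
    def run_data_driven_tests():
        for description, price, customer_type, expected in TEST_CASES:
            actual = discount_price(price, customer_type)
            status = "PASS" if actual == expected else "FAIL"
            print(f"{status}: {description} (expected {expected}, got {actual})")

    if __name__ == "__main__":
        test_gold_customer_discount()
        run_data_driven_tests()

The point of the separation is that adding a test case becomes a one-line change to the table, and the same table can drive a manual procedure, a home-grown runner or a commercial tool.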

By and large, the first approach is very wasteful and inflexible, and the tests themselves might not be viable anyway. The second approach is much better and is used to create so-called ‘data-driven’ manual (and automated) test regimes. (Separating procedure from data, in software and in tests, is generally a good thing!) But both approaches make two critical assumptions:

• The baseline document(s) provide all the information required to extract a set of executable instructions for the conduct of a test.

• The baseline is stable: changing requirements and designs make test development and maintenance very painful, which is why most test script development takes place late in the development cycle.

In theory, a long-term, document-intensive project with formal reviews, stages and sign-offs could deliver stable, accurate baselines providing all the information that system-level testers require. But few such projects deliver what their stakeholders want, because stakeholder needs change over time and bureaucratic projects and processes cannot respond to change fast enough (or at all).

So, in practice, neither assumption is safe. The full information required to construct an executable test script is not usually available until the system is actually delivered and testers can see how things really work. And the baseline is rarely stable anyway: stakeholders learn more about the problem to be solved, the solution design evolves over time, and ‘stability’, if it is ever achieved, arrives very late. The usual response is to bring the testers onto the project team at a very late stage.

What are the consequences?

• The baselines are a ‘done deal’. Requirements are fixed and cannot be changed. They are not testable because no one has tried to use them to create tests. The most significant early deliverables of a project may not themselves have been tested.

• Testers have little or no involvement in the requirements process. The defects that testers find in documents are ignored (“we’ve moved on – we’re not using that document anymore”).

• There is insufficient detail in baselines to construct tests, so testers have to get the information they need from stakeholders, users and developers any which way they can. (Needless to say, there is insufficient detail to build the software at all! But developers at least get a head start on testers in this respect.) The knowledge obtained from these sources may conflict, causing even more problems for the tester.

• The scripts fail in their stated objective: to provide sufficient information to delegate execution to an independent tester, an outsourced organization or an automated tool. In reality, using these scripts demands intelligence and varying degrees of system and business domain knowledge.

• The baselines do not match the delivered system. Typically, the system design and implementation have evolved away from the fixed requirements, and the requirements have not been maintained as users and developers focus on delivery. Developers rely on meetings, conversations and email messages for their knowledge.

When the time comes for test execution:

1. The testers who created the scripts have to support the people running them (eliminating the supposed cost-savings of delegation or outsourcing).

2. The testers run the tests themselves (but they don’t need the scripts, so much of the effort of creating them is wasted).

3. The scripts are inaccurate, so paper copies are marked up and corrected retrospectively to cover the backs of management.

4. Automated tests won’t run at all without adjustment. In fixing the scripts, are some legitimate test failures eliminated and lost? No one knows.

When testers arrive on a project late, they are under-informed and misinformed. They are isolated in their own projects. Their sources of knowledge are unreliable (the baseline documents cannot be trusted) and may be uncooperative: “the team is too busy to talk to you – go away!”

Does this sound familiar to you?

That’s the scene set. In the next essay, I’ll set out a different vision.
