
The risk of relying solely on automated checking in release decisions

  • 15/08/2013
  • Posted by EuroSTAR

In a project that adopts Continuous Delivery there is an emphasis on automated checking to determine whether a build is eligible for release. The idea that a release decision can be made using only the output of a tool, which presents a colour-coded indicator of whether a build has passed, is very appealing to the person who usually decides whether the product will be released.


The risk in relying on these tools for information seems to be poorly understood. Where management have become starstruck by the definitive and instant assessment of the build server, it is important for the tester to offer a voice of reason: questioning the quality of information available from the tool and educating their team to treat successful results with a healthy level of scepticism.


Automated checks are consistent, repeatable and unsurprising. They excel in quickly identifying functional regressions that are a clear reason to reject a build from the delivery pipeline. In a well-written automated suite, failure is meaningful; a lack of faith in failures, due to instability in execution, poor coding or inept test selection, must be investigated and resolved. A failed build should never be eligible for release, so it is important that failures are reported correctly.
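To illustrate that gate in code, here is a minimal sketch in Python; the run_checks.sh script it invokes is a hypothetical stand-in for whatever actually runs the automated suite in a real pipeline. The build only moves forward when the checks ran and all of them passed, and a failure to run the checks at all is treated as a rejection rather than silently ignored.

    import subprocess
    import sys

    def gate_release() -> bool:
        """Return True only if the automated checks ran and all of them passed."""
        try:
            # Hypothetical check runner; a real pipeline would call its own suite here.
            result = subprocess.run(["./run_checks.sh"])
        except OSError:
            # Being unable to run the checks at all must also reject the build;
            # reporting it as a pass would let a broken build through.
            print("Could not run checks: build rejected")
            return False
        if result.returncode != 0:
            print("Checks failed: build rejected from the delivery pipeline")
            return False
        print("Checks passed: build may proceed to the next pipeline stage")
        return True

    if __name__ == "__main__":
        sys.exit(0 if gate_release() else 1)

Treating "could not run the checks" as a rejection is part of what keeps failures meaningful; a gate that swallowed that error would report a pass it has no evidence for.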


Though the automated rejection of a build should be taken at face value, the same cannot be said for automated acceptance. It is important to understand what a pass really means. Questioning what the build is actually checking gives the tester a solid base from which to start testing. With an understanding of what is present in the automated checks, the tester can start to identify what is not. The interesting part of testing is identifying and exploring these gaps: asking questions and evaluating the application in ways that a machine cannot.


Since an automated check executes a specific path through the application, its success does not mean that the functionality works in its entirety. For example, a check that adds a user will show that, for a particular set of data, a user can be added. The success of this check is based on the verification points that the automated check executes; perhaps that the screen displays a “User saved successfully” message and that a new record is present in the database. If the screen turns purple when the save button is clicked, this will not be recognised as a failure, as the check does not include the colour of the screen in its success criteria. Perhaps the addition of a new user causes an existing user to be locked or removed due to licensing restrictions. This is not recognised as a failure either, though clearly both behaviours would be undesirable in the released application.
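To make this concrete, the sketch below shows roughly what such a check might look like. The UserAdmin class, its licence limit and the email addresses are invented stand-ins for the application under test, not anything from a real suite; the point is simply which verification points the check does and does not include.

    class UserAdmin:
        """Toy stand-in for the application: adding a user beyond the
        licence limit silently locks the oldest existing user."""
        LICENCE_LIMIT = 2

        def __init__(self):
            self.users = []        # acts as the "database"
            self.locked = set()

        def add_user(self, email):
            self.users.append(email)
            if len(self.users) > self.LICENCE_LIMIT:
                self.locked.add(self.users[0])   # undesirable side effect
            return "User saved successfully"


    def check_add_user():
        app = UserAdmin()
        app.add_user("first@example.com")
        app.add_user("second@example.com")

        message = app.add_user("new@example.com")

        # Verification points: the only things this check looks at.
        assert message == "User saved successfully"
        assert "new@example.com" in app.users

        # Not asserted: the colour of the screen, or whether an existing
        # user was locked by the licence limit. The check passes even
        # though "first@example.com" is now locked.


    if __name__ == "__main__":
        check_add_user()
        print("PASS - and yet an existing user has been locked")

Run as written, the check reports a pass even though an existing user has been locked as a side effect; gaps like this are exactly what the tester needs to go looking for.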


In addition, few applications are devoid of non-functional requirements, which are often overlooked when creating automated suites as they call for assessment based on skilled observation. Security testing requires creativity in interactions with the application. Usability testing demands knowledge of human behaviour and how this is applicable in the design of good software. Performance testing may be aided by tools, but the identification and resolution of issues requires critical thinking. These areas cannot be covered by automated checking alone and consequently a release decision made purely on the output of automated results fails to properly consider these aspects of the application.


A successful build should provide some confidence in the behaviour of essential functionality, but it’s important to remember that an automated check is only looking at what it has been told to look at. Release decisions made solely on successful automated results expose the organisation to risks that the tester is responsible for questioning. In some organisations that adopt continuous deployment these risks may be acceptable, but for many they are not. Be sure that you know what your checks cover, and communicate this to management.

What are your thoughts on these risks? How do you cope with them? Leave a comment below and share your thoughts!

Biography

Katrina Edgar is a software tester from Wellington, New Zealand. She is a regular participant of KWST, co-founder and organiser of WeTest Workshops, and has recently been convinced to start blogging and tweeting about testing.
