From the webinar: Acceptance Testing in Agile – What Does it Mean to You? with Fran O’Hara
Below are Fran’s responses to questions posed at the webinar ‘Acceptance Testing in Agile – What Does it Mean to You?’ which took place on Monday 27th May 2013.
If you missed the webinar you can view the recording in the EuroSTAR Webinar Archive here.
Q: If we don’t follow the agile process, do we still need acceptance testing? Can you tell us the benefit of acceptance testing?
Slides 8 and 9 show typical test levels in a sequential lifecycle. The most common type of acceptance testing there is User Acceptance Testing, but you may have variations of this as shown in slide 9. The main benefit of these forms of acceptance testing is getting business input into testing.
Acceptance testing in agile has a different (broader) meaning that does not apply well to sequential lifecycles. The benefit of acceptance testing as defined in agile depends on how you define the scope of ‘acceptance testing’ there, as this can vary from story-based testing to all of Q2, Q3 and Q4 testing.
Q: If we don’t follow the agile process, do we still need acceptance testing?
See above answer.
Talmon Ban Cnaan:
Q: When moving from waterfall to agile, what training do testers need, in addition to learning what agile is?
Training in agile testing would be helpful, particularly Q2 and Q3 type testing if they are functional testers. Training in any automation framework in place would also be useful, as would training in exploratory testing (e.g. session-based testing).
Q: What is the cost of UAT in terms of % of project’s costs?
In sequential lifecycles I have often seen figures of 10–20% of total testing cost/schedule on projects spent on UAT. Again, this will depend on your context.
Q: What are the major differences in practising the BDD and ATDD automation methodologies?
BDD uses scenarios with the Given When Then structure as part of the acceptance criteria of user stories. Frameworks such as Cucumber help to create automated tests for these scenarios in a form that is readable/understandable by all project stakeholders. ATDD also creates automated tests based on the acceptance criteria of user stories but the acceptance criteria do not have the scenario structure above.
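To make the distinction concrete, here is a minimal sketch of the Given–When–Then structure BDD uses. The scenario and the domain object (a locked-account login check) are hypothetical examples, not taken from the webinar; in a real BDD setup a framework such as Cucumber or behave would map each Given/When/Then line of a feature file to a step definition, but the same structure can be illustrated in a plain test:

```python
# A hypothetical BDD-style scenario, shown as a plain test for illustration.
# In a Gherkin feature file the same scenario would read:
#
#   Scenario: Locked account rejects login
#     Given an account that has been locked
#     When the user attempts to log in
#     Then the login is rejected

class Account:
    """Hypothetical domain object used only for this example."""
    def __init__(self, locked=False):
        self.locked = locked

    def log_in(self):
        # A locked account must never allow login
        return not self.locked

def test_locked_account_rejects_login():
    # Given an account that has been locked
    account = Account(locked=True)
    # When the user attempts to log in
    result = account.log_in()
    # Then the login is rejected
    assert result is False
```

An ATDD test for the same story would assert the same acceptance criterion, but the test would not be expressed in the Given/When/Then scenario form.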
Q: What about considering sprint demos as a type of AT?
Yes, hands-on demos by customer representatives/product owners provide valuable feedback and find bugs. In some cases the product owner may even be happy to accept stories as done based on the Q2 acceptance tests run by testers in the agile team and on successful demos – it will depend on the context.
Q: Can you please explain why you include security testing in the Tools section (Q4)?
Because technical security testing is non-functional in nature, focuses on the product, and is ‘technology-facing’, typically using tools. Functional security testing, like testing access levels, could be done as part of Q2 testing.
Q: Why do we not include short (changes-only) manual testing before TDD?
TDD at the unit level develops automated unit tests before code is written. ATDD develops acceptance level automated tests based on the stories even before that. So you can do manual testing as part of your agile strategy (typically done by testers in the team either as documented manual testing or exploratory testing or both) but this would come after the automated testing. The automated testing provides an enhanced definition of the specification (ATDD augments the stories and their acceptance criteria for example) as well as providing rapid initial feedback on the implementation. Automated tests are then also used for regression purposes moving forward to maintain stability.
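The test-first ordering described above can be sketched in a few lines. This is an illustrative example only (the discount rule and function names are invented, not from the webinar): the test is written before any production code exists and fails first, then the simplest implementation that passes it is written:

```python
# A minimal sketch of the TDD cycle at unit level (hypothetical example).

def test_discount_is_capped_at_50_percent():
    # Written FIRST: this test defines the expected behaviour before the
    # production code exists. Running it at this point fails ("red").
    assert apply_discount(price=100, percent=80) == 50

def apply_discount(price, percent):
    # Written SECOND: the simplest implementation that makes the test
    # pass ("green"); it can then be refactored with the test as a safety net.
    capped = min(percent, 50)
    return price * (100 - capped) / 100
```

Once passing, the test stays in the suite and serves exactly the regression purpose described above: any later change that breaks the capping rule is caught immediately.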
Q: How do you initiate testing in the beginning (first sprints) if you are not engaged by the developers or team leaders? How can we stress the importance of testing at an early stage rather than at the end if they are used to the waterfall methodology?
In agile, the tester should be part of the team right from the start. If you are not engaged initially and performing testing later then you are less ‘agile’ in your approach as there is less focus on working software. XP has TDD to emphasise the importance of early testing, Scrum has a cross-functional team with all skills needed to deliver the product (including testers). The agile manifesto talks about the importance of ‘working software’. So to be agile is to test early and throughout. In slide 15 (Evolving from sequential to iterative) you want to be in scenario C rather than scenario A!
In waterfall we can still do test analysis and design early, as soon as the baseline spec becomes available. Again, this is a principle in ISTQB; it was emphasised in Bill Hetzel’s book on software testing many years ago and has been since at numerous conferences such as EuroSTAR. Simple metrics can help convince managers of the value of early testing, as the earlier you find bugs the cheaper they are to fix. Software Productivity Research collects industry metrics and has reported that static testing, such as formal reviews, has one of the highest returns on investment of any engineering practice. So early defect detection and prevention are good economics.
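The “earlier is cheaper” argument is easy to show with simple arithmetic. The multipliers and costs below are purely illustrative assumptions (a commonly cited 1x/10x/100x heuristic, not figures from the webinar or from Software Productivity Research):

```python
# Illustrative arithmetic only: assumed relative cost of fixing the same
# defect depending on the phase in which it is found.
relative_cost = {"review": 1, "system_test": 10, "production": 100}

defects = 20      # hypothetical number of defects
cost_unit = 50    # hypothetical cost per fix at the cheapest phase

# Total fix cost if all 20 defects are caught in a given phase.
totals = {phase: defects * mult * cost_unit
          for phase, mult in relative_cost.items()}
for phase, total in totals.items():
    print(f"{phase}: {total}")
```

Even with invented numbers, the two-orders-of-magnitude spread between catching defects in reviews versus in production is the kind of simple metric that tends to convince managers.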
Q: Is each quadrant inside each sprint?
Ideally yes, but usually there are test activities that either can only be partially done within sprints and therefore have to be completed after the last sprint (e.g. performance testing), or that because of some constraint have to take place later, prior to release (that is why some organisations define different levels of done). Slides 23 and 24 talk to this point. For example, in slide 24 (Large/multiple teams) Janet Gregory talks about the ‘end game’, which would include any testing that is not feasible to do within sprints. My advice would be to strive to do more of the testing within sprints rather than take the easy option of leaving the hard stuff until the end.
Q: The QA should know when the demo to the customer will be and what it will cover, right? And QA should be involved at the beginning of “iteration 0”, right?
QA are part of the agile team, and the team runs the demo with the customer/product owner. Yes, QA should be involved from sprint 0. The test strategy will affect release planning, so testing starts at the beginning, not when the first code is cut. Testers are usually good at supporting product owners in the creation of acceptance criteria.
Q: Hi, I have a question: how do we incorporate the principles of risk-based testing into our agile acceptance methodologies? Thanks in advance.
I have seen some attempts to do this in a structured way, with documentation and traceability between stories and prioritised risks, but in my opinion this does not appear to be effective: agile is too dynamic, and maintaining this documentation creates too much overhead. However, risk-based testing, going through the thought process and having conversations around product risk, is totally relevant to agile. Discussing risks around stories with product owners and developers, for example, should help formulate acceptance criteria and the associated tests. Thinking about and discussing risk at the epic level should help identify the higher-level testing needed. In fact, slide 23 (Sprints and testing strategy) is really the output from discussions about risk in your context.
Q: And when releasing in every Scrum sprint, is there any strategy for doing the acceptance testing of the final product (full working software) from different teams, with only one QA owner per release?
Slide 24 (Large/multiple teams) is relevant to this question: you try to get as much testing as possible done within the sprints, as part of and supported by Continuous Integration. Teams therefore need to focus not only on their own scope but on its integration with other teams’ work. QA can help play a co-ordinating role here. You may start off with all integration between teams (and therefore integration testing) happening after the sprint and performed by a separate QA role… but the agile approach would be to have more of this happening as you go rather than storing up this risk until the end.
Also if there is only one professional tester shared between multiple teams of developers I would suggest you are under-resourced in testing!
Fran O’Hara is Director and Principal Consultant of Inspire Quality Services. With over 28 years’ experience in the software industry, he specialises in pragmatic approaches to lean/agile, software process improvement, quality/testing, and associated practices. For the last 5 years, his main focus has been providing agile/lean coaching, training and support to organisations transitioning to agile/lean particularly with Scrum/XP and Kanban. Fran is a regular speaker at agile, process improvement and quality/testing conferences. He is a certified ScrumMaster, an ISTQB testing tutor at advanced level, a CMMI lead assessor, a fellow of the Irish Computer Society, a director of the TMMi Foundation, a trained TMMi Assessor and co-founder of the Irish SIG in Software Testing – SoftTest.