Q&A from the webinar ‘Let Test Automation Play the Role it Deserves’ with Ruud Teunissen

  • 13/08/2013
  • Posted by EuroSTAR

Q from David Baak: What is the most common pitfall you encountered in test automation?

A: There are a few common pitfalls. One is that test automation is tool driven instead of goal driven. Two is that the actual goals and expectations for test automation are not clearly defined upfront. Three might be that it is not a separate project (but part of another project with a different goal). Last but not least: the wrong resources (manual testers that have to automate).

 

Q: Can you tell us about your experience in changing the client’s mind from tool orientation to goal orientation?

A: Funny thing is that the actual client (business and/or project management) is goal driven: they have a clear target they want to achieve with test automation. It is usually the test automation team and their management that we need to convince that the tool is just an aid, not the goal itself. A great approach can be to work bottom-up: show that it works with your set of tools and in that way convince management and the team that your way is the way!

 

Q: How much attention do you pay to testing the automated tests themselves?

A: Great question! It depends on the tooling and the maturity of the (technical) environment. We prefer to work in teams so that close cooperation and peer reviewing can take place. If it is critical, we sometimes consider pair programming as well. Next to that, one of the great things about working in a SCRUM fashion is the DEMO at the end of each sprint. It’s a natural trigger for the test automation team to ensure that what they create works, and is therefore tested. By the way, the best test for test automation is running it and thus proving that it works.

 

Q from Mikhail Vassiliev: How do you achieve reliability in test automation? When a lot of tools are involved in an automated process, the process becomes unreliable and it takes a lot of time to make it robust.

A: Agreed, this is a challenge. For me, robust means that the test automation solution can handle (un)expected changes to the System Under Test. Therefore, we strongly focus on a solid test automation architecture. And test automation standards and guidelines. These will incorporate not only test automation itself, but also the tooling and any tools it needs to integrate with. Robustness, in our experience, highly depends on the stability of the tooling you use and the way you create your test automation solution. Consider using element polling and retry mechanisms.
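To make that last point concrete, here is a minimal sketch (not part of the original answer) of element polling and a simple retry mechanism using Selenium 4 WebDriver in Java; the locator, timeout and the exception handled here are purely illustrative.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.StaleElementReferenceException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class RobustActions {

    // Poll for up to 10 seconds until the element is clickable, instead of failing immediately.
    public static WebElement waitForClickable(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(ExpectedConditions.elementToBeClickable(locator));
    }

    // Retry the click a few times to absorb timing issues such as stale element references.
    public static void clickWithRetry(WebDriver driver, By locator, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                waitForClickable(driver, locator).click();
                return;
            } catch (StaleElementReferenceException e) {
                if (attempt == maxAttempts) {
                    throw e; // give up after the last attempt
                }
            }
        }
    }
}
```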

 

Q from Raska Khandker: Sometimes concurrent scenarios happen in the user’s context that are not reproducible in the testing context, because testing contexts have limitations and the testing load is not the same as the user load. Can automation help catch those bugs?

A: Automation can help but is limited by the environment you have available. If your environment does not allow you to reproduce or simulate the actual load/performance/circumstances, test automation won’t be able to do it either. And of course, test automation can only “test” what you “tell” it to test. And therefore, it is not able to “do” what a common user does. Thus: run your automated tests in isolation and spend your freed up time on exploratory testing the edge cases.

 

Q from Mohsin Ali: Do you have a preferred mobile application testing tool for performance / functional testing?

A: The tooling all depends on the context, the goal, the scope, etcetera. Therefore, we do not have one preferred tool or set of tools for mobile application testing.

 

Q: If you have software which is changing frequently, so that you have to keep changing the automation scripts, which is time consuming, what would you do? Any suggestions?

A: Always look at the business case: how much effort/time/money is needed to keep test automation up-to-date and running versus how much effort/time/money is saved? And of course, a solid test automation architecture, good coding standards etcetera help to keep automated tests maintainable. Consider applying design patterns! Maybe it is possible to use the refactoring capabilities of your IDE (Integrated Development Environment). Try to avoid test scripts written in flat text files, Excel or a wiki. Another idea is to analyse which part of the system changes frequently. In our experience, it is usually the user interface. So a good option is to test the API and data/processing “below” the UI automatically and the UI manually.
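As an illustration of testing “below” the UI, here is a minimal sketch (ours, not part of the original answer) of an API-level check in Java using the built-in HttpClient and JUnit 5; the endpoint URL and the expected response content are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class OrderApiTest {

    // Hypothetical endpoint; in a real project this would come from configuration.
    private static final String ORDERS_URL = "http://localhost:8080/api/orders/42";

    @Test
    void orderCanBeRetrievedThroughTheApi() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(ORDERS_URL))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Checks the service layer directly, without driving the (frequently changing) UI.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"orderId\":42"));
    }
}
```

Because such a test does not touch the UI at all, it usually needs far less maintenance when screens change.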

 

Q from Nuno Vieira: Usually test automation is used with stable versions, or for regression tests. Do you think test automation should start at the beginning of every project/development or at a later stage?

A: Another great question. Traditional test automation starts once the system is stable and regression testing is required. However, in the current context – Agile, SCRUM, Continuous Integration – test automation should start at the beginning of the project. Especially in SCRUM/Agile, test automation is a prerequisite to enable continuous delivery of features at the end of each sprint. By the way, you can even implement your automated test scripts based on screen designs (if well prepared).

 

Q from Li Fang: Can you describe the strategy or approach to verify and test the test software/code?

A: This is a bit similar to an earlier question. As stated before, it depends on the tooling and the maturity of the (technical) environment. The test strategy for testing the test automation should of course be risk based. We prefer techniques like pair programming as well as peer reviewing.

 

Q from Moshie George: Hi Ruud, IMHO test automation patterns are also important; I use the page object model. What is your opinion regarding automation patterns?

A: When using patterns in a well-organized manner, the scripter will create a model. This can then be used by a navigator to create test cases within that model. Reuse is a great advantage in those situations, preventing the same mistakes from being made multiple times.
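As a rough illustration of such a model (our sketch, not Ruud’s), a page object in Java with Selenium might look like this; the page names and locators are hypothetical.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object for a hypothetical login screen: locators and actions live in one place,
// so a UI change only has to be fixed here, not in every test case that logs in.
public class LoginPage {

    private final WebDriver driver;
    private final By userField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("login");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public DashboardPage loginAs(String user, String password) {
        driver.findElement(userField).sendKeys(user);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
        return new DashboardPage(driver); // hand over to the next page object in the model
    }
}

// The next screen in the model; test cases navigate from page object to page object.
class DashboardPage {

    private final WebDriver driver;

    DashboardPage(WebDriver driver) {
        this.driver = driver;
    }

    public boolean isDisplayed() {
        return driver.findElement(By.id("dashboard")).isDisplayed();
    }
}
```

Test cases then read as navigation through the model, which fits the scripter/navigator split described above.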

 

Q from Mohinder Khosla: What do the numbers and colour highlights 1-4 stand for in the roadmap checklist table?

A: The numbers in the table represent questions/checkpoints. Each key area has 3-4 questions/checkpoints per maturity level. The colours represent the outcome of an assessment and are included as an example: blue describes the current situation, green describes the required situation.

Q: Are your experiences with vendor tools or open source tools such as Selenium?

A: Our automation experiences are based on the use of commercial tools as well as open source tools.

 

Q from Arshad Mohammed: Can we be assured that the intended scope is fully covered by the automated tests, with no need to inject a bit of manual testing in that area?

A: Let’s be clear and honest: you cannot automate all (regression) tests. Therefore, manual testing and checking of results will always be required. An automated solution always acts as designed and programmed, whereas a human tester will (slightly) deviate, learn and adapt his way of working based on what he sees. A good idea is to relate the automated test scripts to the requirements or to the test case management tool, and automatically update the results so you can always see what is covered by automated test scripts and what you need to cover manually.
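One possible way to make that link, sketched here as an assumption rather than a prescribed tool chain, is to tag automated tests with requirement IDs, for example with JUnit 5’s @Tag, and group the reported results per tag; the requirement ID below is hypothetical.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class PaymentRegressionTest {

    // Hypothetical requirement ID; a reporting step can group results per tag to show
    // which requirements are covered by automation and which still need manual testing.
    @Test
    @Tag("REQ-PAY-017")
    void refundIsBookedOnTheOriginalPaymentMethod() {
        assertTrue(true); // placeholder for the actual check
    }
}
```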

 

Q from Roy Smith: Time savings seem to be a very easy measure for analysing the benefits of test automation, but sometimes this alone doesn’t allow for a good ROI on the automation of a set of tests. What other business benefits do you usually like to measure with test automation?

A: The ROI is always related to the goal of test automation. You’re right, saving time is one of the most obvious things to measure. But maintaining a (high) level of quality and preventing regression is also measurable: monitor the number of defects found and prevented by test automation.

 

Q from Bob Marius: Did you apply this roadmap in every project? Or are these best practices you’ve experienced whilst working on several projects?

A: The approach is based on experiences (successes and failures) within multiple organizations. We decided to consolidate these experiences and combine them with our experience in test improvement. The approach I presented was the result.

 

Q from Arli Türk: Do you prefer test data that is created by scripts straight into the database when the tests are started, or test data that is created by the tests themselves while they are running?

A: What we learned is that test data is crucial. Especially for regression testing at a system and/or End-to-End level, it needs to be in line with what the system will be using in production, so we tend to create scripts that create the required test data through the system. For unit testing, for example, using data created straight in the database can be a great solution. Another option we sometimes use is defining pre and post conditions for each test script. This enables you to implement @Before and @After methods which run before and after the test script, ensuring the right data is available beforehand and a clean system is left afterwards. At one of our customers, we adopted a few basic rules:

1) scripts and cases are independent from each other (also with respect to data);
2) test data is queried from the database;
3) test data needs to be suitable for the test case, but the query needs to be reusable;
4) test data requires domain knowledge, so ensure domain experts are available to check and discuss;
5) database knowledge is a necessity!

These rules enabled us to support automation for several systems throughout the customer’s application landscape.
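As a minimal sketch of the pre/post condition idea (our illustration, with an in-memory stand-in for the real system or database), a JUnit 4 test might create its data in @Before and clean up in @After:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class CustomerOrderTest {

    // Minimal stand-in for a helper that would create data through the real system or database.
    static class TestDataHelper {
        private final Map<String, String> customers = new HashMap<>();

        String createCustomer(String name) {
            String id = UUID.randomUUID().toString();
            customers.put(id, name);
            return id;
        }

        String placeOrder(String customerId, String product) {
            return customers.containsKey(customerId) ? UUID.randomUUID().toString() : null;
        }

        void deleteCustomer(String customerId) {
            customers.remove(customerId);
        }
    }

    private TestDataHelper data;
    private String customerId;

    @Before
    public void createTestData() {
        // Precondition: make sure the customer this script needs exists before the test runs.
        data = new TestDataHelper();
        customerId = data.createCustomer("Test Customer");
    }

    @Test
    public void customerCanPlaceAnOrder() {
        assertNotNull(data.placeOrder(customerId, "PRODUCT-001"));
    }

    @After
    public void cleanUpTestData() {
        // Postcondition: leave a clean system behind so scripts stay independent of each other.
        data.deleteCustomer(customerId);
    }
}
```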

Q from Jamie Bowdidge: Do you find that sometimes software is developed quicker than you can write the automated tests? If so, how do you cope with this?

A: When your automated unit and integration tests are falling behind, you should reconsider your planning! Get the developers involved, or better yet, make them responsible for automating these tests. When you’re looking at system and/or End-to-End tests, you should always go back to your test automation strategy and the business case: is it worthwhile keeping up with development, or should we only automate the core and essence of the regression test? And using the right patterns will help you as well.

 

Q from Dakshesh Shah: What do the numbers 1, 2, 3, 4 mean in the checklist?

A: The numbers in the table represent questions/checkpoints. Each key area has 3-4 questions/checkpoints per maturity level.

 

Q from Rainer Deussen: Is the TI4Automation matrix, which looks similar to the TPI maturity matrix, available online with categories and checkpoints?

A: No, not yet. We’re thinking about making it available through our website. We’ll keep you posted about this through Twitter and of course this website.

 

Q from Crisley Espoza: Maybe I missed it, but if the client already has a test automation tool, how do you work with that?

A: Good question! Part of the approach is finding out what (kind of) tool is required to achieve the set goals. This is done even if the client is already using a tool or set of tools. Instead of a selection process, an evaluation process follows: find out what can be achieved with the current tool set. Especially when commercial, relatively expensive tools have been implemented, you might need to adjust your goals to what can be achieved with the available tooling.

If you have any other questions for Ruud leave them as a comment below!

 

Biography

Ruud Teunissen is best described as a passionate software tester. Throughout his career he has played almost every possible role in testing (tester, test manager, test trainer, coach, sales, manager, test consultant …) in a variety of environments and companies. Ruud is co-author of several books on structured testing, including Software Testing: A Guide to the TMap® Approach. Currently he is a senior test consultant with Polteq Test Services BV and focuses on test improvement and management in any context. Ruud is frequently invited to speak at conferences. Within Polteq, Ruud is responsible for TI4Automation, the approach for successful implementation and improvement of test automation based on hands-on experiences and good practices.
