Test Leaders Live Webinar Q&A – Simon Stewart

  • 17/10/2012
  • Posted by EuroSTAR

I was lucky enough to lead a webinar about how to reduce flakiness in Selenium tests. There was some time for answering questions, but not enough. Fortunately, Paul, the moderator, was kind enough to collect some of the unanswered questions and send them to me. The questions fell into a small set of groups. I’ve picked out representative questions and am answering them here. Hopefully, this will leave you with all the answers you need, but if it doesn’t, you can always join the selenium-users Google Group and ask there!

Q: We have a full automation suite on Selenium RC. What are the advantages of moving to WebDriver?

Q: Will Selenium RC still be maintained, or do we have to move to WebDriver in the near future?

These questions are essentially about the future of the project, and understanding that future makes it easy to understand what we’re doing now and why. The short version is that the WebDriver API, introduced in Selenium 2.0, is being turned into a W3C standard. Despite the fact that the standard isn’t complete, Opera and Chrome already ship with WebDriver support built-in, and Mozilla are working hard on their Marionette project, which will enable WebDriver support natively for Firefox and their new mobile OS: Boot to Gecko.

The future we’re working hard towards is that browser vendors are responsible for implementing a standard based on Selenium WebDriver, and that future is already partly here.

This has several consequences for our users. The first is that if you’re working on new code, it’s best to use Selenium WebDriver. The team of Open Source developers working on Selenium are also focusing their efforts on the WebDriver API, which is why we announced at Selenium Conf ’12 that the existing Selenium RC code was being put in maintenance mode. That means that we’re not working on new features for RC, but we are working on keeping existing functionality working as well as it does now.

As the first question mentions, however, many teams and projects have a heavy investment in RC. It’s completely unreasonable to expect those teams to rewrite their existing tests with this new API, so how are they meant to move to this promised future?

The main approach is documented in the Selenium docs, but the short version is that the Selenium project provides an implementation of the RC API that is backed by WebDriver, usable from Python, Ruby, C#, or Java. This allows teams to migrate piecemeal from RC to WebDriver; the advice I give teams is to make use of the Page Object pattern, and to rewrite each Page Object using the WebDriver API as and when it's modified in the course of normal development.

This approach may sound daunting, but I’ve had the good fortune to work with several teams with very large suites of tests to do this migration. Normally, the switch from the original RC implementation to the one backed by WebDriver is the most painful part of the process, but we’ve done our best to capture the common problems in our migration doc too.
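
To make that migration path concrete, here is a hedged sketch in Java. The site URL and locators are invented for illustration, and the exact package of WebDriverBackedSelenium has moved between Selenium releases, so treat this as the shape of the approach rather than copy-paste code:

    import com.thoughtworks.selenium.Selenium;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebDriverBackedSelenium;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class MigrationSketch {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();

            // Legacy RC-style code keeps talking to the familiar Selenium interface...
            Selenium selenium = new WebDriverBackedSelenium(driver, "http://example.com");
            selenium.open("/login");
            selenium.type("id=username", "demo");

            // ...while freshly migrated Page Objects drive the same browser via WebDriver.
            driver.findElement(By.id("login-button")).click();

            selenium.stop();
        }
    }

Because both APIs share one browser session, a suite can be moved over one Page Object at a time without a big-bang rewrite.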

In summary, therefore:

  • Write new code using the WebDriver API, as this is what’s being turned into a W3C standard, and it’s already supported directly by several browser vendors.
  • We are maintaining the existing RC API, but we are not working on new features for it.
  • It’s possible to migrate from Selenium RC to Selenium WebDriver using the “WebDriver-backed Selenium”.

Q: In WebDriver, we don’t have waitForPageToLoad now. How do I tell if a page is fully loaded after, for example, I click a button? The page may have a lot of JavaScript activity.

Q: Do you have any suggestions on how to wait for widgets to load in AJAX-heavy GWT applications?

Ahhh… the horror of the modern web application. Before every browser implemented XMLHttpRequest, it was very easy to tell that a page had finished loading, as the “onload” event would have fired. Nowadays, the “onload” event may just be the beginning of things, as a slew of additional requests are fired off and the UI constructed dynamically.

Alan Richardson and I will be offering some practical tips in our Selenium Clinic, but if you can’t attend, that’s not much help. I’ll try to offer some here and now.

Firstly, be aware of how the UI will change. Use an explicit wait to pause until that change to the UI is complete. Selenium offers the WebDriverWait and some helper methods for dealing with the common cases, but writing your own is a simple matter.
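
As a minimal sketch in Java (the locator is invented, and a WebDriver named driver is assumed to be in scope):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    // Poll for up to 10 seconds until the widget the click creates is visible.
    WebDriverWait wait = new WebDriverWait(driver, 10);
    WebElement panel = wait.until(
        ExpectedConditions.visibilityOfElementLocated(By.id("results-panel")));

The wait returns as soon as the condition is satisfied, so a fast page costs you nothing; only a genuinely slow page uses the full timeout.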

Secondly, if your UI does not change, but you can modify the AUT, then it’s reasonable to add hooks to simplify testing. I refer to these as latches, and they commonly take the form of JavaScript variables that I can query from the test. An example variable might be one representing the part of the UI that the application thinks it is currently displaying. Again, you can use the WebDriverWait to handle this case.
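
A hedged sketch of that latch idea in Java, assuming the application sets a hypothetical window.currentView variable once a screen has finished rendering:

    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.ExpectedCondition;
    import org.openqa.selenium.support.ui.WebDriverWait;

    // Wait until the app-provided latch reports that the basket view is ready.
    new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() {
        public Boolean apply(WebDriver d) {
            Object view = ((JavascriptExecutor) d)
                .executeScript("return window.currentView;");
            return "basket".equals(view);
        }
    });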

If you don’t have the ability to change the AUT, and the UI doesn’t provide any hints about whether or not an action has completed, you may be forced to use more than just the UI, perhaps dipping into the database to verify that some action or another is complete.

One common problem is that people forget that the tests will do exactly what they’re told to, whereas a human would pause occasionally without being told. As a rule of thumb, if there’s a round-trip to the server, either as a full page reload or via an AJAX request, or if there’s a significant chunk of JavaScript work to be completed, then it’s best to use an explicit wait.

We added implicit waits to the API when it became clear that many testers (and, to be fair, many developers too!) are unsure about when a round-trip to the server is going to be made. An implicit wait gives your tests some leeway: the driver waits up to a user-defined time for an element to be present before using it, which allows you to paper over some of these cracks. I tend to dislike using them, mainly because I abhor the thought of “papering over” cracks. You may feel differently.
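
Configuring one is a single call (the locator here is invented; driver is assumed to be in scope):

    import java.util.concurrent.TimeUnit;
    import org.openqa.selenium.By;

    // Every subsequent findElement call will poll for up to 5 seconds
    // before giving up and throwing NoSuchElementException.
    driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
    driver.findElement(By.id("lazy-widget")).click();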

Q: Do you think that it is a good idea to query the database directly within tests? Could you explain more, because we intend to isolate our tests from the app code. Thanks.

Q: Did I understand correctly that it’s better to verify the system state through its API than by checking elements in the UI that prove the particular state?

An end-to-end test requires the entire system to be functioning flawlessly. As applications get larger and more complex, there are more moving parts (databases, message queues, SMTP servers, and so on). The law of averages leads to the obvious conclusion that these e2e tests are going to be the most prone to flakiness in your suites, as your more focused tests will use less than everything in your app. Put another way, every time you interact with your app in an e2e test, you’re gambling that everything works. The less you hit the front end, the less you gamble, and the less likely you are to lose that gamble.

How do you avoid thumping into the front end to verify that actions have been completed successfully? Think about tests for adding, removing and modifying items in a hypothetical shopping basket. You will need one test to verify that the UI reflects a change, but your suite probably contains far more tests than just those. For those additional tests you have a choice: do you verify changes through the UI or by directly contacting the back-end used to store the data? As we’ve seen, hitting the UI involves playing the odds. I’d rather not play the odds — I’d rather just hit the backend store.

Now, there is an argument that says that everything should be working flawlessly all the time. Even if this were the case, there’s another reason to avoid querying the UI for verifications that can be done by some other mechanism. That reason is “speed”. Accessing the UI of your app is the slowest way there is of verifying results. Accessing the backends directly can be significantly quicker.
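
As a hedged sketch of what hitting the backend store directly might look like in Java, using plain JDBC (the table, column names, and connection parameters are all invented for the hypothetical shopping basket):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Confirm the item landed in the basket without re-querying the UI.
    Connection conn = DriverManager.getConnection(dbUrl, dbUser, dbPassword);
    try {
        PreparedStatement stmt = conn.prepareStatement(
            "SELECT COUNT(*) FROM basket_items WHERE user_id = ? AND sku = ?");
        stmt.setLong(1, userId);
        stmt.setString(2, "WIDGET-42");
        ResultSet rs = stmt.executeQuery();
        rs.next();
        if (rs.getInt(1) != 1) {
            throw new AssertionError("expected exactly one basket row");
        }
    } finally {
        conn.close();
    }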

Q: Do you automate stories in isolation and independently or do you write end-to-end scenarios with many verification points incorporating many stories along the scenario?

Q: When working in an iterative agile environment, the functionality of the app ‘changes’ as more stories are worked on. How can we get around having to constantly change existing scripts?

I tend to write tests of two varieties. I think of the first as “scaffolding”: relatively short-lived tests that I use while implementing a story. Just as with scaffolding when putting up a building, once the story is complete I pull the scaffolding down by deleting the tests, but only once I’m sure that the story is properly covered by a series of smoke tests (smoke tests being the second kind of Selenium test I write). These may not be as detailed as the “scaffolding” tests, but they’re fewer in number and therefore cheaper to maintain. If I’ve been doing TDD, this will be acceptable: my e2e tests will have spawned a series of tests that verify that the layers interact properly, and these will in turn have caused me to write many more unit tests around individual pieces of code. The end result? Even though the scaffolding is gone, the test coverage remains very high and my application remains stable.

One common mistake I see is for teams to simply keep all the “scaffolding” tests and use those as their regression test suite. While this works in the short term, in the long term it causes a high price to be paid. Sometimes that price is so high, it’s better to delete all the e2e tests as they’ve become impossible to maintain and too hard to keep stable. What a sorry state of affairs!

There are some guidelines to follow when writing e2e tests. One of these is that seeing the Selenium API directly in a test is normally a warning sign, as it suggests poor encapsulation. Worse, “click this, then wait a bit, click that, type this and hit that button” is far less informative to someone unfamiliar with your app than “log in as this user”. If the intent of an automated test isn’t clear, then it’s going to be a nightmare to maintain.

A common form of abstraction that I advocate people use is the Page Object pattern. This is documented on the project wiki, and it’s well worth your time to have a read of this if you’re unsure about what it is.
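
To give a flavour of the pattern, here is a minimal Page Object sketch in Java. The element IDs and the HomePage class are hypothetical; the point is that the test calls loginAs(...) and never touches the Selenium API itself:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class LoginPage {
        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        public HomePage loginAs(String username, String password) {
            driver.findElement(By.id("username")).sendKeys(username);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login")).click();
            // Navigating hands the test the Page Object for the next page.
            return new HomePage(driver);
        }
    }

When the login page changes, only LoginPage changes; every test that logs in stays untouched.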

Q: Are there any good strategies to deal with ‘flakiness’ introduced by running in different browsers? e.g. a test will always run fine in Firefox, but fail sometimes in IE.

There are! The best bit of advice I can give you is to endeavour to always run your tests in as many different browsers as possible. If all your testing relies on a single browser until the last minute, it’s far too easy to tailor your test suite to that browser. That’s fine until it’s not!

Assuming that you’ve been running tests in different browsers but are still seeing flakiness, the next thing to do is be aware of how the drivers work. In particular, be aware that native events may mean that actions you think you think are fully completed before control is returned to the user, such as “click”, may, in fact, not be fully completed. This is most often made clear when someone takes a suite of tests written using the firefox driver and tries to use the IE driver. Normally the solution is to add an explicit wait, as there’s some processing to be done in the browser, or there’s a round-trip to the server to be made.

My final suggestion to reduce flakiness is to avoid “sharing” a machine with multiple browsers running at the same time. You get better isolation, and therefore more stable tests, using a virtual machine per browser. This advice may be hard to follow, as testers’ machines can be underpowered in the first place, so it may be worth investigating Selenium Grid or commercial solutions such as that offered by Sauce Labs.
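
Pointing tests at a grid rather than the local machine is a small change. A hedged sketch in Java (the hub URL is invented; a Selenium Grid hub listens at /wd/hub by convention):

    import java.net.URL;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    // Run IE on a dedicated grid node instead of sharing the local machine.
    WebDriver driver = new RemoteWebDriver(
        new URL("http://grid-hub.example.com:4444/wd/hub"),
        DesiredCapabilities.internetExplorer());

The rest of the test code is unchanged, since RemoteWebDriver implements the same WebDriver interface as the local drivers.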
