Automated checking at the GUI level is tricky. Products and services intended for human users are sometimes not so friendly to machines. It can take plenty of time to write code that successfully works around the design of the product — for a while. Getting machines to press their own buttons looks impressive, but the time and effort required may displace the search for problems that matter more to people than to machines.
When we can get the automation working at all, we feel like we’ve achieved something. It’s easy to breathe a sigh of relief, install a check that takes a snapshot of a single factor, then move on to the next screen. But having gone to all that trouble, why not use the power of tools to take more than one picture, changing the perspective, the zoom, or the focus? Could we record a moving picture, rather than a static one? How about combining the power of the machine with the human capacity for recognizing new problems?
How can we tell whether we’re mining for gold or stuck in a rabbit hole? How can we use tools effectively to lower cost, raise value, and increase test coverage? What help do we need to get the job done more quickly, and how do we ask for it? In this talk, we’ll examine these questions and develop answers to them.