
From the webinar: Q&A with Kristian Karl

  • 13/12/2013
  • Posted by EuroSTAR

Below are Kristian’s responses to questions posed at the webinar ‘Experiences Of Test Automation At Spotify’, which took place on Tuesday 10th December 2013.

If you missed the webinar you can view the recording & the slides in the EuroSTAR Webinar Archive here.

Q: Does a python implementation of GraphWalker indicate that the Java version will be deprecated at some point in the future?

A: No. The Python implementation is an addition to the GraphWalker ecosystem. See https://github.com/spotify/python-graphwalker

Q: What is the level of granularity that you use in your models? Do you model the entire system with a single model, or do you have multiple models that handle specific functionality/areas?

A: We do use multiple models that handle specific functionality. For example, there’s one model for Login, another for Search, a third for Playlist, and so on. We use the GraphWalker keyword SWITCH_MODEL to jump between the models. See http://graphwalker.org/documentation/model-switching-switchmodel/
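
The GraphWalker documentation linked above covers the actual SWITCH_MODEL mechanics. Purely as an illustration of the idea (this is not GraphWalker syntax, and the model names are made up), a traversal that hops between small per-feature models can be pictured like this:

    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    // Conceptual sketch only: several small feature models, each owning its own
    // states, with designated points where control can jump to another model.
    public class ModelSwitchSketch {
        public static void main(String[] args) {
            Random random = new Random();
            // Each "model" is reduced here to a name plus the models it may switch to.
            Map<String, List<String>> switchTargets = Map.of(
                    "Login",    List.of("Search", "Playlist"),
                    "Search",   List.of("Playlist", "Login"),
                    "Playlist", List.of("Search", "Login"));

            String current = "Login";                       // entry model
            for (int step = 0; step < 5; step++) {
                System.out.println("Traversing model: " + current);
                // ...a real tool would walk the vertices and edges of 'current' here...
                List<String> candidates = switchTargets.get(current);
                current = candidates.get(random.nextInt(candidates.size()));  // the "switch"
            }
        }
    }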

Q: Do you encounter any disadvantages of using non-deterministic EFSMs? [Kristian: the last words were missing from this question; I added them as I remembered them.]

A: Not really. Of course, one has to take care when modeling. It’s not that hard to design impossible models, especially EFSM models. So learning how to model is important; that’s why we also have peer reviews of models when needed.

Note: EFSM, Extended Finite State Machine, is a model with internal states, like in the Login example in the webinar, where the state of the “Remember Me” setting is an internal state. Those internal states are used with triggers, or guards, which are in essence IF-statements in the model, enabling or disabling edges.
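
As a purely illustrative sketch (not GraphWalker code), an internal variable such as the “Remember Me” setting can act as a guard that decides which edges are currently enabled:

    import java.util.ArrayList;
    import java.util.List;

    // Minimal EFSM illustration: the internal boolean 'rememberMe' guards which
    // edges the model offers from the login vertex.
    public class LoginEfsmSketch {
        private boolean rememberMe = false;        // internal state of the model
        private String vertex = "v_LoginPrompted";

        // Edges are only offered when their guard (in essence an IF-statement) holds.
        List<String> enabledEdges() {
            List<String> edges = new ArrayList<>();
            if (vertex.equals("v_LoginPrompted")) {
                edges.add("e_ToggleRememberMe");
                if (rememberMe) {
                    edges.add("e_LoginAndStayLoggedIn");  // guarded by rememberMe == true
                } else {
                    edges.add("e_Login");                 // guarded by rememberMe == false
                }
            }
            return edges;
        }

        void takeEdge(String edge) {
            switch (edge) {
                case "e_ToggleRememberMe" -> rememberMe = !rememberMe;            // updates internal state
                case "e_Login", "e_LoginAndStayLoggedIn" -> vertex = "v_LoggedIn";
            }
        }

        public static void main(String[] args) {
            LoginEfsmSketch model = new LoginEfsmSketch();
            System.out.println(model.enabledEdges());  // [e_ToggleRememberMe, e_Login]
            model.takeEdge("e_ToggleRememberMe");
            System.out.println(model.enabledEdges());  // [e_ToggleRememberMe, e_LoginAndStayLoggedIn]
        }
    }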

Q: Do all squads at Spotify use test automation?

A: No. I would guess 1 out of every 6 squads has Automated End User Acceptance tests.

Q: Is one test automator per squad enough to keep up with the pace of 7-8 developers?

A: Most of the time, yes. But in order to scale properly, we need to move parts of the automation to the developers. They help out with testability work; take our Test API, for example, where developers implement the client side of the API.

Q: Is the Squad, Tribe, Guild structure something new or an existing structure at Spotify? Who implemented this? It’s a good structure.

A: It’s something new we came up with during the last 2 years.

Q: Do you monitor the testing continuously, or the application continuously, e.g. memory etc.?

A: No, we don’t do much resource metric collection or code coverage measurement when running GUI tests. We have non-functional tests, but very few.

Q: During your test design, what format are your test ideas written in before you get your automation tests developed?

A: GraphWalker requires the models to be written in the GraphML file format. The editor that works best with GraphWalker is yEd from yWorks. See: http://www.yworks.com/en/products_yed_about.html

Q: You mentioned that an application communicates with Jenkins which runs the test cases. What is the name of that application?

A: That would be Deployify, our customized version built on the Open Source project Dreadnot from RackSpace. It’s one of our Continuous Delivery/Deployment tools. See: http://www.rackspace.com/blog/rackspace-open-sources-dreadnot/

Q: A question for later on… ‘How are test automators motivated and rewarded to do their best, compared to their developer colleagues/teams, as they have different individual drivers despite a common product improvement goal?’

A: They all work in the same squad. It’s a team effort. The test automator sits next to the developer; they have the same goals. The TAs are embedded in the squads 100% of their time, as are the testers and the developers. Their individual drivers tend not to be so different from those of the developers.

Q: Who do test automators report to? The Squad Lead, or is there a Guild Lead?

A: Their hiring manager is typically a QA Chapter Lead, like me, but it could also be a Developer Chapter Lead. (Lead and Manager are the same thing at Spotify.)

Q: If squads are autonomous, what happens when two squads do something differently, e.g. different ideas of what is good code coverage? How is that resolved?

A: Nothing, really. There’s nothing to resolve. The squads define their own goals, and good code coverage could be defined differently between squads. That said, there are exceptions. The test automators have chosen to define a common code coverage definition within the Guild.

Q: This is a question regarding the models. How granular do you make the models? To me the example you showed is a very simple workflow, but I can imagine a real user would come up with more complex flow scenarios. How do you handle complex scenarios? Are there any scenarios where you don’t use models?

A: Think of the models as the validation of basic business workflows. They are for business rules what Unit Tests are for code, so they tend to be very basic. When needed, we do have some complex models. But to answer your question: yes, there are scenarios where we have other approaches, like our Monkey Tests. Those tests hammer the UI and do all sorts of random clicking and selecting of items in the client. They are very effective at finding random crashes; we use them to stabilize unstable clients :-)
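
A monkey test of this kind boils down to a loop of random interactions against whatever the UI currently shows. The sketch below is generic rather than Spotify’s implementation, and the UiClient interface is a hypothetical stand-in for the real UI-driving API:

    import java.util.List;
    import java.util.Random;

    // Generic monkey-test loop: click random visible elements until a crash shows up.
    public class MonkeySketch {
        interface UiClient {
            List<String> visibleElements();   // e.g. buttons and list items currently on screen
            void click(String elementId);
            boolean hasCrashed();
        }

        static void run(UiClient client, long seed, int steps) {
            Random random = new Random(seed); // fixed seed so a crash can be replayed
            for (int i = 0; i < steps; i++) {
                List<String> elements = client.visibleElements();
                if (elements.isEmpty()) continue;
                client.click(elements.get(random.nextInt(elements.size())));
                if (client.hasCrashed()) {
                    throw new AssertionError(
                            "Crash after " + (i + 1) + " random clicks (seed=" + seed + ")");
                }
            }
        }
    }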

Q: Do you have random failures due to the automation tools that you use or pages loading slowly, and how do you cope with that?

A: Yes, we have those failures. Depending on the situation, they are handled differently. If the failure occurs during a Continuous Delivery sequence, that failure will be scrutinized intensely: those tests verify critical paths, so they must work. Otherwise, we try to see patterns. Are they systematic errors? Do we need longer timeouts? But most importantly, is it the tests that are flaky, or the system under test?
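
For the “do we need longer timeouts?” case, one common remedy (not necessarily what is done at Spotify) is to replace fixed sleeps with a polling wait that has a generous deadline, so slow page loads still pass while genuine hangs still fail:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.function.BooleanSupplier;

    // Polls a condition until it holds or the timeout elapses; returns false on timeout
    // so the caller can decide whether the test or the SUT is at fault.
    public class Wait {
        public static boolean until(BooleanSupplier condition, Duration timeout, Duration pollInterval)
                throws InterruptedException {
            Instant deadline = Instant.now().plus(timeout);
            while (Instant.now().isBefore(deadline)) {
                if (condition.getAsBoolean()) {
                    return true;
                }
                Thread.sleep(pollInterval.toMillis());
            }
            return false;
        }
    }

A caller would use it as, say, Wait.until(() -> page.isLoaded(), Duration.ofSeconds(30), Duration.ofMillis(500)), where page.isLoaded() stands in for whatever readiness check the test framework provides.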

Q: How is communication between chapters and guilds managed when a squad has deadlines to meet? Does this happen at allocated times? And does this ensure the approach to your automation is consistent across the squads?

A: There is built-in communication between chapters and guilds, in the sense that TA members in all chapters are also members of the TA guild. The TA guild is very important when it comes to sharing information, spreading best practices, and highlighting problems. Please note that we do not always strive to work consistently across the squads.

Q: In addition to the Models (state diagrams), is any other form of requirements created?

A: We don’t use the models as requirements. But if you mean other types of test methodologies: no. That said, I would not be surprised if we also ended up using RSpec, Cucumber or something similar in the future.

Q: Sorry, flaky test vs. flaky SUT… could you explain that a little more?

A: The SUT, System Under Test, is not just the compiled code base of a product. It’s the configuration of it when launched, the servers it runs on, the data it uses, etc. So you can have a perfect test that fails due to some incorrectness in the setup. This can sometimes reflect badly on the tests themselves. Now, the key thing, I believe, is not to start a blame game. I prefer: “If a test fails, something is broken, we need to fix that!”

Q: Who is the line manager for the squad and tribe members, the product owners?

A: No. The Chapter lead is the line manager for the tribe members. The squad does not have a manager.

Q: When are the automated tests executed? Nightly, on demand, or just when needed?

A: Both. We have 24/7 tests that run continuously around the clock. The on-demand tests are run in conjunction with Continuous Delivery/Deploys.

Q: Is the mobile app testing integrated with GraphWalker?

A: Yes, it is. There’s no difference in the test architecture compared to web or desktop testing. The only thing that differs is how we interact with the SUT.

Q: How do you deal with parallel test execution? Do you use grids?

A: No. We use Jenkins to do that, and we have an internal Test Data Service that provides a locking mechanism to prevent the same data from being used simultaneously.
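
The locking idea behind such a Test Data Service can be sketched in memory (the real service is a separate networked component, and these names are invented): a test leases a piece of data, such as a test account, and releases it afterwards, so parallel Jenkins jobs never touch the same data at the same time:

    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // In-memory sketch of the leasing idea: putIfAbsent makes the claim atomic.
    public class TestDataLocks {
        private final Map<String, Boolean> leased = new ConcurrentHashMap<>();

        // Atomically claim an account; an empty result means another test already holds it.
        public Optional<String> lease(String account) {
            return leased.putIfAbsent(account, Boolean.TRUE) == null
                    ? Optional.of(account)
                    : Optional.empty();
        }

        public void release(String account) {
            leased.remove(account);
        }
    }

A test that gets an empty result back simply tries the next candidate account, or waits briefly and retries.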

Q: How and where do you store test results?

A: In our Test Result Service. It’s our own tool, based on a key/value store database, that collects test results as the tests push them continuously via a simple REST API. It also has a web-based UI with dashboards and drill-down capabilities.
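
Pushing a single result to a service like that could look roughly like the following; the endpoint URL and JSON fields are invented for the example, and only the JDK’s built-in java.net.http client is used:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Rough sketch: POST one test result as JSON to a (hypothetical) results endpoint.
    public class ResultPushSketch {
        public static void main(String[] args) throws Exception {
            String json = """
                    {"suite": "login", "test": "e_Login", "status": "PASSED", "durationMs": 1234}
                    """;
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://test-results.example.internal/api/results"))  // made-up URL
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Service answered: " + response.statusCode());
        }
    }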

Q: Could you give a bit more detail on Chapters vs Guilds?

A: A Chapter belongs to a specific Tribe and handles people. Guilds are defined as “Communities of Interest” and are spread across Tribes.

Q: Is there any risk analysis in your projects? Who makes it?

A: Yes, and it’s done in the squads themselves.

Q: Do you have any overlap between testers (who design the models) and test automators (pro developers who implement the models)? Do you have people performing both roles, or are they completely separate?

A: No, they are not separated. It’s actually more common that the test automators create and maintain the models. The good thing with the models, which are visual, is that the testers can easily review them.

Q: Do the Model-Based Tests augment or take the place of Cucumber (BDD) tests?

A: Not really. In essence, MBT is the same thing: describing the expected behavior of the system under test in a domain-specific language. In the MBT case, the DSL is a finite-state diagram. A spin-off benefit of MBT is that we can create different permutations when we generate tests from the models.
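
That permutation point can be pictured as random walks over the same state graph: each seed produces a different edge sequence to execute. The sketch below is generic (not GraphWalker’s actual generators), with made-up vertex and edge names:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    // Each random walk over the same tiny model yields a different test sequence.
    public class RandomWalkSketch {
        record Edge(String name, String target) {}

        // vertex -> outgoing edges of a small example model
        static final Map<String, List<Edge>> GRAPH = Map.of(
                "v_Start",        List.of(new Edge("e_Login", "v_LoggedIn")),
                "v_LoggedIn",     List.of(new Edge("e_Search", "v_SearchResult"),
                                          new Edge("e_OpenPlaylist", "v_Playlist")),
                "v_SearchResult", List.of(new Edge("e_Back", "v_LoggedIn")),
                "v_Playlist",     List.of(new Edge("e_Back", "v_LoggedIn")));

        static List<String> walk(long seed, int steps) {
            Random random = new Random(seed);
            String vertex = "v_Start";
            List<String> path = new ArrayList<>();
            for (int i = 0; i < steps; i++) {
                List<Edge> out = GRAPH.get(vertex);
                Edge edge = out.get(random.nextInt(out.size()));
                path.add(edge.name());
                vertex = edge.target();
            }
            return path;
        }

        public static void main(String[] args) {
            System.out.println(walk(1, 6));  // one permutation of the model
            System.out.println(walk(2, 6));  // a different permutation of the same model
        }
    }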

Q: When will the model viewer be included in GraphWalker?

A: There is already a read-only model viewer in GraphWalker. But it’s only available during execution of a test. See http://graphwalker.org/documentation/using-the-web-renderer

Q: Could you explain more about what agile coaches do within the teams? What is their main role? Do they need to understand what the team does?

A: Sure! But I would like to refer to a presentation by Kristian Lindwall and Brendan Marsh; it’s a 40-minute video: http://www.ustream.tv/recorded/41394346/highlight/442656

Also, have a look at another agile coach at Spotify, Joakim Sundén, and his blog: http://joakimsunden.com/
