In this blog, the 2019 committee recommends a conference journey for experienced testers. This journey is for people who want to look outside of “standard testing”: someone who has been around long enough and has seen plenty of test design techniques, exploratory testing and the like.
Monday
Tutorial D – Designing for Inclusiveness – Workshop on Accessibility by Parimala Hariprasad & Jyothi Rangaiah.
Does your website or mobile app create an inclusive experience for people? Does your organization incorporate accessibility standards for all its products? Did you know that accessibility is a legal mandate and a fundamental expectation in many countries? If the answer is yes, this workshop is for you.
The main objective of the workshop is to help testers become efficient at solving accessibility problems for differently abled people who use technology in varying capacities in their daily lives. The workshop helps participants recognize problems, play to their strengths, acknowledge their weaknesses and focus on problem solving for differently abled people in the technology world. This is achieved through a combination of training, demos and hands-on testing exercises.
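To give a flavour of the hands-on part, here is a minimal sketch of one check that is easy to automate: flagging images without alt text, a basic WCAG requirement. The HTML and the BeautifulSoup approach are my own illustration, not material from the workshop, and automated checks like this only ever complement manual testing with screen readers and keyboard navigation.

```python
# Minimal sketch: flag <img> elements with missing or empty alt text.
# The HTML below is a made-up example for illustration only.
from bs4 import BeautifulSoup

html = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
  <img src="divider.png" alt="">
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    alt = img.get("alt")
    if alt is None:
        print(f"Missing alt attribute: {img.get('src')}")
    elif alt == "":
        # An empty alt is valid, but only for purely decorative images.
        print(f"Empty alt (decorative image?): {img.get('src')}")
```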
Alternative: Optimising agile testing to your context by Fran O’Hara
Tuesday
Tutorial J – Working Well with PCT by Rik Marselis
In my opinion testing conferences should be about … testing. Most testers know how to (or instinctively do) apply boundary value analysis and some other basic approaches and techniques. But what about some more advanced test design techniques?
This half-day tutorial is dedicated to just one test design technique: The Process Cycle Test. This technique is very useful when testing business processes, for example in an acceptance test. The test cases that you create can be proven to test every path in the business process. Or if you want to achieve better coverage (also called a higher test-depth-level) you can test all combinations of 2 (or 3 or 4) consecutive paths.
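To make the idea concrete, here is a rough sketch (my own illustration, not Rik’s material and not the full TMap procedure) of what the coverage targets look like: model the business process as a directed graph and enumerate every sequence of consecutive paths up to the chosen test-depth-level.

```python
# Minimal sketch: enumerating the path segments a Process Cycle Test must cover.
# The order-handling process below is a toy example, not from the tutorial.
process = {
    "start":       ["check order"],
    "check order": ["approve", "reject"],
    "approve":     ["ship", "back-order"],
    "reject":      ["end"],
    "ship":        ["end"],
    "back-order":  ["check order"],
}

def segments(graph, depth):
    """All sequences of `depth` consecutive steps in the process graph.

    Test-depth-level 1 -> every single path (edge) is a coverage target,
    test-depth-level 2 -> every combination of two consecutive paths, etc.
    """
    if depth == 1:
        return [(a, b) for a, succs in graph.items() for b in succs]
    result = []
    for seg in segments(graph, depth - 1):
        for nxt in graph.get(seg[-1], []):
            result.append(seg + (nxt,))
    return result

if __name__ == "__main__":
    for seg in segments(process, 2):   # coverage targets for test-depth-level 2
        print(" -> ".join(seg))
```

The test cases you design then only have to string these segments together so that each one appears at least once, which is why the coverage claim can be proven.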
Alternative: Using quality characteristics for powerful testing and reporting by Henrik Emilsson and Rikard Edgren
Tuesday Track 2 – Immanuel Kant and deep rationality of testing by Anders Dinsen
Interacting with things always involves the complexity of experiencing and learning.
Kant’s philosophy supports the thinking, imagining, and rational tester in collaboratively engaging the team to educate people who matter on things that matter. The outcome we are looking for is driving collaboration and development in the organization.
After the talk, we’ll be able to say “transcendental knowledge” and understand what it means; we’ll understand the basics of Kant’s philosophy; we’ll know why intuitions sometimes matter more than facts when people collaborate about experiencing and learning; finally, we’ll see the power of narratives as the rational way to express what’s real and what’s not, and to help stakeholders gain trust and a good gut feeling about the system.
Tuesday Track 8 – How To Become As Agile As Your Team by Maud Lundh
What happens when your working method and processes are turned upside down? A few years into my career, I was the test manager for a new project labeled as “extremely agile”. What I learned early on was that the way to develop a project from start to finish had changed.
The project used a new process called “Conceptual Development”, a design-driven, lean development process. “Conceptual Development” focuses on customer collaboration and pushing through frequent deliveries from concept through design all the way to development. With a more traditional agile process, requirements would have first been planned and developed, before being tested near the end of a delivery. Now we tested concepts and prototypes first, and every prototype and design was its own delivery. This new process opened up new possibilities of testing since everything was testable from the get-go.
With “Conceptual Development” it is important to fail quickly and test with actual users before a single line of code is written. This new way of lean startup created new challenges for our team. Everyone needed to implement “Design Thinking” in their work. So how did we approach this new way of working as a team? Did we learn new skills and techniques, or just adapt those we had in a new environment? And what were the results?
Maybe more importantly, what did we learn?
Wednesday
W5 – Story Telling for Testers: A Crash Course by René Tuinhout
Why is one storyteller more effective than another?
And can you learn to tell stories more effectively?
You can indeed!
We’ll start this crash course with a short introduction into storytelling. Then we’ll do a number of exercises to get acquainted with the storytelling basics.
Questions like “How do you set up a good story?”, “How do you enthrall an audience?” and “How do you create a powerful story for an audience with a varied background?” will be covered in this high-speed talk.
W14 – Reframing Software Testing In The Light of AI by Dr. Chris McKillop
This workshop gives a firm grounding in what AI is, and what AI is not. We explore the deceptive nature of some AI, and the social and personal consequences of poorly designed and implemented AI. AI means that software is moving from deterministic outcomes to a variable range of outcomes, which adds extra complexity, and responsibility, to testing in the new future of AI. This tutorial gives you the knowledge and techniques to evolve your practice and update your skill set.
AI is changing the nature of software, as well as the human processes involved in developing it, as immense, rich, and complex data sets become an intrinsic part of the systems we develop. This tutorial will give you the tools you need to improve the efficiency and precision of your testing practice.
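As an illustration of what “a variable range of outcomes” means for a tester, here is a minimal sketch, not taken from the tutorial itself: instead of asserting on an exact value, a test runs a non-deterministic component many times and asserts on properties of the results. The score_image function and the tolerances are made up for the example.

```python
# Minimal sketch: testing a non-deterministic component by asserting on
# properties of many runs instead of a single exact expected value.
import random
import statistics

def score_image(image):
    # Stand-in for a model call whose output varies from run to run.
    return min(1.0, max(0.0, 0.8 + random.gauss(0, 0.03)))

def test_score_is_stable_and_in_range():
    scores = [score_image("cat.png") for _ in range(100)]
    assert all(0.0 <= s <= 1.0 for s in scores)   # outputs stay in the valid range
    assert statistics.stdev(scores) < 0.1          # variability stays within tolerance
    assert statistics.mean(scores) > 0.5           # on average, still classified as expected

if __name__ == "__main__":
    test_score_is_stable_and_in_range()
    print("passed")
```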
Thursday
TH2 – My AI Was Wrong by Laurent Bouhier
“My AI was wrong,” but isn’t it more like “I was wrong about my AI”?
The difficulty in testing AI comes from the ambivalence of this statement. To tell the two assumptions apart, the presentation will briefly show, with playful illustrations, what an AI is (a bit of history), how it learns (machine learning), how it can be misled, deliberately or not (cognitive bias or learning bias), and in which areas of our daily lives AI intervenes today. All of this emphasizes the difficulty of testing AI: on the one hand because of the unpredictability of the expected result (the problem of the “classical” test oracle), and on the other hand because of the ever-shorter development cycles of all these new features (IoT, Robotics…).
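The talk will draw its own conclusions, but one widely used answer to the test-oracle problem it describes is metamorphic testing: when you cannot say what the right output is, you can still check that related inputs give consistently related outputs. A minimal sketch, with a hypothetical classify function of my own:

```python
# Minimal sketch of a metamorphic test: without knowing the "right" answer,
# check that a barely perceptible change to the input does not flip the result.
# `classify` is a stand-in for an image classifier, not a real model.

def classify(pixels):
    return "cat" if sum(pixels) / len(pixels) > 100 else "dog"

def brighten(pixels, delta=5):
    return [min(255, p + delta) for p in pixels]

def test_label_survives_small_brightness_change():
    image = [120, 130, 90, 200, 110, 95]
    # Metamorphic relation: a slightly brighter image should keep its label.
    assert classify(image) == classify(brighten(image))

if __name__ == "__main__":
    test_label_survives_small_brightness_change()
    print("passed")
```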
TH5 – When SciFi Becomes Reality: Deep Learning & IoT by Jaroslaw Hryszko
In 1961, the science-fiction author Stanislaw Lem raised an interesting problem in one of his books: when the number of computers in the world exceeds the ability of people to control them, the correctness of their operation will also have to be verified by machines; computers will test computers.
In my talk, I would like to present, in a simple and fun way, the idea of applying deep learning to the testing of complex IoT systems. I will explain the general idea and then show solutions and tools that allow for effective testing of the Internet of Things, such as TensorFlow, an open-source library available to everyone.
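As a taste of the idea (my own toy example, not taken from the talk), here is roughly what “deep learning helping to test IoT” can look like with TensorFlow: a tiny Keras network trained on synthetic sensor readings so it can flag suspicious ones during testing. The data, architecture and thresholds are all illustrative.

```python
# Minimal sketch: a small network that learns what "normal" sensor readings
# look like, so it can flag likely-faulty readings during IoT testing.
import numpy as np
import tensorflow as tf

# Synthetic training data: (temperature, humidity) pairs, label 0 = normal, 1 = faulty.
rng = np.random.default_rng(0)
normal = rng.normal([22.0, 45.0], [2.0, 5.0], size=(500, 2))
faulty = rng.normal([60.0, 10.0], [5.0, 5.0], size=(500, 2))
x = np.vstack([normal, faulty]).astype("float32")
y = np.array([0] * 500 + [1] * 500, dtype="float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=20, batch_size=32, verbose=0)

# A reading far outside the normal range should come out close to 1 (faulty).
print(model.predict(np.array([[70.0, 8.0]], dtype="float32")))
```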
TH11 – Guess the Number by Rikard Edgren & Henrik Emilsson
In this workshop we will do an exercise called Guess The Number.
It is a simple but difficult exercise built around a Windows executable with a command-line interface, so many of you will need to bring a computer. We will split up into groups, because it is even more difficult on your own. It stimulates logical thinking, critical thinking, and creativity.
Most of you probably won’t solve the exercise in an hour, but we will break at that time so we can do a proper debrief. You will not get actual hints, but you might get inspiration about how to think. Some of you will be frustrated, or even upset, but that is not dangerous. It can be solved in 10 minutes; if that happens, we will pat you on the back and you can watch the others struggle.