EuroSTAR 2026 Call for Speakers
For three decades, the EuroSTAR Conference has been the leading software testing conference in the world, attracting attendees from 40+ countries, each on a quest for knowledge, eager to learn and to gain actionable takeaways from every talk.
**The Call for Speakers is now open.**
A Message from Elmar Jürgen, 2026 Programme Chair
At EuroSTAR 2026, we want to explore “Testing at its Best”. This theme has many aspects.
Key Component of All Talks: You MUST have clear, tangible takeaways that the audience can learn from and apply to their own work.
Theme
The theme of EuroSTAR 2026 is “Testing at its Best”. As testers, and as a community, we have always strived to do our best. This drive has led us to develop many different approaches to testing in particular and to quality assurance in general. It is also likely the central reason why we have been attending and speaking at EuroSTAR for decades. But how do we actually know how good our testing is, and whether we are (still) performing testing at its best?
What is Testing at its Best?
For testing in general, this raises the question of which combination of approaches is best suited for different project contexts. Which functionality should we test at which level of the test pyramid? Where, for example, do manual tests fit best? How did this mix evolve over time in a particular project, and which changes were for the better or for the worse? For a specific testing approach, it raises the question of how to do it best, or which flavour or variant has advantages over the alternatives. In other words: What is the best approach to exploratory testing? To UI test automation? To code reviews? And why?
How does a particular project context impact that?
A software project exists within a specific context that includes not only the software and hardware it interacts with but also all human stakeholders involved in its development, from us as testers all the way to its end users. What makes certain test approaches better suited than others for specific project contexts? What differs between business information systems, apps, and embedded safety-critical software in terms of test approach suitability, and what remains the same?
How does AI change this picture?
How do innovative test approaches change this picture? Are they small adjustments that evolve the way we test, or do they turn our established approaches upside down and amount to a real revolution? This applies especially to AI, which is transformative or even disruptive for some industries. But do AI-based techniques really make us better at testing because they help us find more bugs? Or find the same bugs faster or cheaper? Or help us find the most severe bugs? Or are these techniques overhyped, delivering less than promised?
How can we actually know if our testing is at its best?
How can we assess for ourselves how good our testing is, by honestly reflecting on our work? What is the state of the art, and what is the state of our practice? How can we successfully communicate to other stakeholders that we need more budget to do better? Or defend our work during challenging times, when we are under attack for not finding a critical bug that angered users or jeopardized money or lives?
Lastly, how can we be certain that our testing is at its best, when we can only discuss the bugs we found, not the ones we missed and are unaware of?
Topics
Topics can include, but are not limited to:
- Which specific variant of a quality assurance approach (such as code reviews, manual exploratory testing, automated UI tests, etc.) is superior to alternative variants, and why? Taking code reviews as an example, lightweight peer code reviews might be better than formal inspections. Or are they? What are your experiences, and in which project contexts?
- Which testing approach is better than others, and why? And for which types of bugs? For example, automated tests have obvious advantages over manual tests in terms of execution cost per test, shortening feedback times when executed more frequently. Manual tests, on the other hand, find bugs that automated tests cannot detect.
- When is a certain approach, e.g. exploratory testing, well suited, and when is it not?
- With which test approach do we find how many bugs, and in which stage? What percentage do we find internally, and what do customers find?
- What are your experiences employing AI-based approaches for different testing tasks? Did they make you more efficient or more effective?
- Which process change improved our testing? Which made it worse?
- How do we find the most severe bugs?
- How do we measure our testing? Which measures make sense, and which are useless, even if they are popular among testers or managers?
How to (not) use GenAI for your submission
Generative AI is great at many tasks, especially those that involve text. Shouldn't generating a conference talk submission then be a task where it can shine, making it much easier for us to create great talk abstracts?
Unfortunately, no. AI-generated submissions are typically really, really poor and get rejected. Not because they are AI-generated, but because they are bad.
We want your story, not GenAI’s story. Your personal experience, your successes, and your failures. They are most convincing in your own voice. GenAI has neither your experiences nor your voice. As a non-native speaker, I like to use GenAI to proofread my English and suggest improvements. Then I hand-select which suggestions I incorporate into my text. This is a perfectly valid use case for GenAI, as long as you do not allow it to erase your voice in the process.
But while this helps me to avoid spelling mistakes and clumsy language, I still have to do the thinking that goes into designing a compelling abstract. And while this is hard, it is also essential for a great talk. I am convinced that we simply cannot delegate this to GenAI.
In other words, use whichever tools help you, including GenAI, but we strongly encourage you not to skip the thought process that goes into writing a talk abstract by asking GenAI to do it for you.
See you in Oslo
We are interested in your experiences, your successes and failures, your projects and lessons learned, your personal stories. See you at EuroSTAR 2026!
Elmar Jürgen, Fiona Østensvig, Sophie Küster, Richard Seidl and Willem Keesman, EuroSTAR 2026 Programme Committee.