Deep Dive, DD1

From AI Mandate to AI Strategy

Katja Obring

*Booked Out*
10:30 - 12:15 CEST, Tuesday 16th June

Your leadership has spoken: “We need to adopt AI for testing.” But what does that actually mean? Generate thousands of test cases? Trust AI-written test code? Replace your existing workflow with whatever the latest tool promises?

Without a structured approach, teams either drown in AI-generated noise, waste months on experiments that don’t address real problems, or avoid AI entirely while competitors move ahead.

This tutorial teaches the QED (Question, Evidence, Develop) framework: a systematic method for turning “we should probably use AI” into concrete experiments that prove (or disprove) value in your specific context. You’ll learn to identify which quality problems AI might actually solve, design lightweight metrics that reveal truth rather than hype, and run time-boxed experiments that produce actionable decisions.

Through strategic planning exercises, you’ll design experiments addressing common AI testing scenarios: prompt quality versus test case value, coverage metrics versus actual defect detection, AI-assisted versus manual approaches, false positive costs, and signal-to-noise ratios in AI-generated test suites.

You’ll leave with experiment designs ready to run in your first sprint back, decision criteria for scaling or abandoning AI approaches, and a framework for making evidence-based choices about AI adoption rather than reacting to vendor promises or executive pressure.

This tutorial gives you the capability to answer, scientifically, whether AI helps your testing: in your environment, with your constraints, for your actual quality challenges.