Track Talk F2

Adversarial Testing for AIs: Get Ready to Fuzz Things Up!

Benjamin Johnson-Ward

09:00-10:00 Friday 14th June

Today’s applications are AI-driven and AI-built. They challenge established testing concepts like expected results and coverage. How do you “test” an infinitely large set of inputs, with non-deterministic outcomes that can change every time? How do you test applications that rely on external language models that are costly to invoke? What about chains of them, and biases in their training data?

This crash course will teach you to test AI-based systems, pipelines and models. You will learn how to scope your testing, including when to test an AI directly or not. I will then highlight forgotten testing techniques that offer advantages for AI, including fuzzing, metamorphic testing and property-based testing. Next, I will help you upskill in new AI testing techniques and failure types, including adversarial testing using glitch tokens. This will further highlight how you can repurpose current testing methods. I will discuss mocking for testing AI-driven services, profiling training data to optimize quality, and generating synthetic data for testing learning pipelines. I will further offer a “whiter-box” definition of coverage to help you measure your testing of AI models, before calling you to extend Quality Assurance into AI monitoring, engineering and training.
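To give a flavour of the property-based and metamorphic techniques above, here is a minimal sketch in Python using the hypothesis library. The classify_sentiment function is a hypothetical stand-in for any text-in, label-out model call, and the specific metamorphic relation chosen here is an illustrative assumption: a label-preserving transformation (whitespace padding) should not change the model's output.

```python
# A minimal sketch of combining property-based and metamorphic testing
# for an AI component. classify_sentiment is a hypothetical stand-in;
# swap in your own inference call.
from hypothesis import given, settings, strategies as st

def classify_sentiment(text: str) -> str:
    # Stand-in for a real (possibly costly, non-deterministic) model call.
    return "positive" if "good" in text.lower() else "negative"

@given(st.text(min_size=1))
@settings(max_examples=50)  # cap generated cases, since real model calls cost money
def test_label_invariant_under_whitespace(text: str) -> None:
    # Metamorphic relation: padding the input with whitespace
    # must not flip the predicted label.
    assert classify_sentiment(text) == classify_sentiment("  " + text + "  ")
```

Note that the test never asserts an exact expected result for any single input; it checks a relation between outputs across generated inputs, which is what makes this style of testing workable when the input space is effectively infinite.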

Let’s evolve as AI testers together!