
Track Talk T7

Beyond the Wow – Lessons of LLM-aided Test Case Generation

Theodor Hartmann

12:00 - 12:45 CEST Thursday 5th June

In the realm of software testing, the use of Large Language Models (LLMs) for test case generation has sparked both interest and skepticism among professionals. In my talk, “Beyond the Wow – Lessons of LLM-aided Test Case Generation,” I will share practical insights gained from developing a tool that combines LLMs with a basic Retrieval-Augmented Generation (RAG) approach. The tool generates test artifacts such as test cases and test data by drawing on general knowledge alongside customer-specific documentation, requirements, and user stories. I will discuss the complexities and challenges encountered in implementing and applying LLMs in testing.
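To make the setup concrete, here is a minimal, illustrative sketch of such a RAG-style pipeline. It is not the tool presented in the talk: keyword-based TF-IDF retrieval stands in for a production embedding store, and the requirements list, prompts, and model name are placeholder assumptions; the OpenAI client is used only as an example of an LLM endpoint.

    # Minimal RAG-style sketch for LLM-aided test case generation (illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from openai import OpenAI  # assumes an OpenAI-compatible API and a key in the environment

    # Placeholder customer-specific requirements / user stories.
    requirements = [
        "The login form locks an account after three failed attempts.",
        "Password reset links expire after 24 hours.",
        "Users can export their order history as CSV.",
    ]

    def retrieve(query, documents, k=2):
        # Rank documents by cosine similarity of TF-IDF vectors and return the top k.
        vectorizer = TfidfVectorizer().fit(documents + [query])
        doc_vectors = vectorizer.transform(documents)
        query_vector = vectorizer.transform([query])
        scores = cosine_similarity(query_vector, doc_vectors)[0]
        return [documents[i] for i in scores.argsort()[::-1][:k]]

    def generate_test_cases(feature, documents):
        # Combine retrieved context with the feature description and ask the LLM
        # to draft test cases with preconditions, steps and expected results.
        context = "\n\n".join(retrieve(feature, documents))
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You are a test analyst. Derive concrete, non-redundant test cases."},
                {"role": "user",
                 "content": f"Context:\n{context}\n\nFeature under test: {feature}\n"
                            "List test cases with preconditions, steps and expected results."},
            ],
        )
        return response.choices[0].message.content

    print(generate_test_cases("account lockout after failed login attempts", requirements))

Even a sketch this small surfaces the issues discussed in the session: the relevance of the output depends heavily on what the retrieval step returns, and repeated runs rarely produce identical test cases.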

This session will focus on the realities of using LLMs in test case generation, specifically addressing challenges that are often overlooked by those outside the testing field. I will cover common issues such as redundancy in generated content, the difficulty of achieving consistent results, and the varying effectiveness of LLMs depending on the type of application. Additionally, I will explore the contrasting experiences many testers face: the “fear of the blank page,” where the pressure to create quality test cases can be daunting, versus the overwhelming flood of generated content that may be verbose but sometimes nonsensical. I will illustrate this tension with examples from my own experience.

In a landscape where non-testers often believe that all aspects of testing can be automated, I will equip testers with compelling arguments for why a cautious approach is essential. These arguments emphasize the importance of human oversight and of contextually relevant outputs, and point to the inherent limitations of LLMs that challenge the notion of complete automation.