Tutorial B

Don’t say Hello to LLM: Dive into the System Under Test

Maryna Didkovska

9:30 - 12:30 CEST, Tuesday 3rd June

This tutorial is designed to demystify the core workings of Large Language Models (LLMs), focusing on the process of converting human language into machine-understandable formats, the key components of neural networks used in LLMs, and their applications in real-world AI systems. Building an efficient testing strategy for AI-infused applications is impossible without a thorough understanding of the system under test.

Participants will dive deep into the core components of the flow, such as tokenization, the embedding process, and the transformer block with its self-attention mechanism and multi-layer perceptron (MLP). On a practical level, you will see the impact of both good and bad prompts and how they can affect your budget.
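To give a flavour of the self-attention step mentioned above, here is a minimal NumPy sketch of scaled dot-product attention. The function name, dimensions, and random weights are illustrative, not material from the tutorial itself:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of token embeddings.

    x: (seq_len, d_model) embeddings; w_q, w_k, w_v: (d_model, d_k) projections.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # project into query/key/value spaces
    scores = q @ k.T / np.sqrt(k.shape[-1])       # pairwise similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ v                            # each output is a weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
out = self_attention(x,
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)))
print(out.shape)  # (4, 8)
```

Note that every step is just matrix multiplication plus a softmax, which is why the session revisits the linear algebra underneath.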

You’ll also need to recall from your university background how to multiply matrices. We will conduct a thorough analysis of the nature of LLMs, revisiting the math under the hood to reveal that there is no magic behind them. There is no “God in the machine,” but rather a complex network of connections, vast amounts of data, and biased datasets driving their behavior.
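As a refresher on the matrix multiplication the session relies on, here is the textbook definition written out in plain Python (the helper name is illustrative):

```python
# Matrix multiplication by the definition (AB)[i][j] = sum_k A[i][k] * B[k][j].
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

a = [[1, 2],
     [3, 4]]
b = [[5, 6],
     [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```

Transformer layers perform exactly this operation, just on much larger matrices and in highly optimized form.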

Finally, we will discuss how to build an effective test strategy for AI-infused applications, that is, those with a natural language layer. The focus will be on testing approaches specific to these types of applications, highlighting their challenges and limitations.

This session is ideal for professionals with a background in software development, AI, or quality engineering who are looking to deepen their understanding of LLM architectures and applications.