Programme Launch Offer: Save 20% - Book Now

Track Talk, T21

Securing LLMs: Insights into OWASP Top 10

Maryia Tuleika

16:45 - 17:30 CEST, Tuesday 16th June

What if I told you that you can trick an LLM into revealing secrets, making bad choices or even acting against its own rules? AI may seem like a black box, but when you start testing it like any other system, surprising weaknesses start to appear.

Together with me, I want you to take a look into how the OWASP Top 10 (a well-known list of the most common security issues in software) can help to understand and mitigate the risks linked to LLM applications. We’ll go beyond theory and explore real-world examples both from the news and from my experience at ongoing projects and see how these vulnerabilities play out in AI-driven systems.

We’ll check how poorly crafted prompts, weak security settings, biased training data, and lack of proper safeguards can lead to serious security flaws. How easily can an attacker manipulate an LLM? What happens when sensitive data leaks through unintended AI behavior? Can a seemingly harmless chatbot be turned into a security risk? I’ll answer these questions while using my own cartoons to illustrate key risks in a fun and easy-to-understand way supported by examples from my projects.

The good news? You don’t need to reinvent the wheel to test AI. Strong system thinking, traditional testing techniques, and a critical mindset are already powerful tools for uncovering vulnerabilities. The same skills used to break and improve software (like exploratory testing, risk analysis, extensive logging and monitoring) can help make AI systems safer and more predictable.

By the end of this talk, you’ll have a fresh perspective on AI risks, practical strategies to make your LLM integrations more secure, and, of course, a few laughs along the way.