11-30, 09:50–10:20 (Europe/Amsterdam), Auditorium
The best way to learn how to secure a system is to know how it breaks! In this talk we will take on the role of the Red Team, try to break Large Language Models using prompt injection, and learn about the kind of havoc we can cause. Learn how to safeguard your LLM applications by understanding their weaknesses.
In my work as AI Engineer I regularly talk with customers who would like to integrate large language models into their products or services. Yet there are novel cyber security risks involved in language models, which they can seriously hinder the value proposition of LLM based applications. I have taken some time to showcase examples of prompt injection, a failure mode uniquely associated with large language models, in an attempt to start a public conversation and spread some awareness on the issue. We might as well have some fun while we’re at it ;).
No previous knowledge expected
I'm Mickey Beurskens, an AI engineer with a strong foundation in machine learning, robotics and software engineering. I have a free-thinking and creatively inspired approach, and I'm especially interested in deepening my understanding of autonomous agents with a focus on AI safety. Currently I work independently through through Forge Fire AI Engineering.