Prompt injection is one of the fastest-growing risks in AI applications. In this lightning talk, we’ll cover some practical techniques attackers use to manipulate LLMs, from simple instruction overrides to more advanced escalation tricks.
You’ll get a quick, no-nonsense overview of how these attacks work and why they matter, plus a set of defenses you can start applying right away.
You’ll get a quick, no-nonsense overview of how these attacks work and why they matter, plus a set of defenses you can start applying right away.
Brian Vermeer
Snyk
Staff Developer Advocate for Snyk, Java Champion, and Software Engineer with over a decade of hands-on experience in creating and maintaining software. He is passionate about Java, (Pure) Functional Programming and Cybersecurity. Brian is a JUG leader for the Virtual JUG and the NLJUG. He also co-leads the DevSecCon community and is a community manager for Foojay. He is a regular international speaker on mostly Java-related conferences like JavaOne, Devnexus, Devoxx, Jfokus, JavaZone and many more. Besides all that, Brian is a military reserve for the Royal Netherlands Air Force and a Taekwondo Master / Teacher.
