Chapter 10. Learning from Future History

The function of science fiction is not always to predict the future but sometimes to prevent it.

Frank Herbert, author of Dune

While AI isn’t a new field, it has recently advanced to the point where today’s innovations often collide with yesterday’s science fiction. In this book’s previous chapters, we’ve reviewed many real-world case studies of security vulnerabilities and incidents relating to LLMs. However, how can you stay ahead of the game when you’re working in a field that’s moving so fast? One way is to see what we can learn from scenarios that haven’t yet happened. And, hopefully, if we do our job, these scenarios may never happen.

In this chapter, we will evaluate two famous stories (both told in blockbuster science fiction movies) where LLM-like AIs have had their security flaws exploited by villains or heroes. The stories are fictional, but the vulnerability types are very real. We’ll summarize the stories and then review the events that led to the security crises. To help ground us, we’ll do this through the lens of the OWASP Top 10 for LLM Applications.

Reviewing the OWASP Top 10 for LLM Apps

In Chapter 2, we discussed creating the OWASP Top 10 for LLM Applications, but we didn’t get into the specifics of the list. In this chapter, we’ll use the taxonomy presented by the OWASP Top 10 for LLMs to dissect our two sci-fi examples. Before diving into those examples, let’s briefly review the OWASP list and tie it to the topics ...

Get The Developer's Playbook for Large Language Model Security now with the O’Reilly learning platform.

O’Reilly members experience books, live events, courses curated by job role, and more from O’Reilly and nearly 200 top publishers.