Chapter 11. Trust the Process
If you can’t describe what you are doing as a process, you don’t know what you’re doing.
W. Edwards Deming
We’ve spent most of this book exploring the dangers of applying LLM technology in production. While the technology offers great power, it also carries many risks: security, privacy, financial, legal, and reputational risks seem to be around every corner. With that understanding, how can you move forward with confidence? It’s time to talk about actionable, durable, repeatable solutions. While we’ve discussed practical mitigation strategies for each risk, tackling them individually as a patchwork isn’t likely to cut it. To succeed, you must build security into your development process itself.
This chapter will discuss two process elements that have emerged as key ingredients in successful projects. First, we’ll trace the evolution of the DevSecOps movement and how it’s become central to application security for any large software project, including how it has expanded to address the specific challenges of AI/ML and LLMs. As part of this discussion, we’ll look at development-time tools that scan for security vulnerabilities and runtime tools (known as guardrails) that can help protect your LLM in production.
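To make the guardrail concept concrete before we dig in, here is a minimal, purely illustrative sketch of a runtime input guardrail: a check that screens a user prompt before it ever reaches the model. The patterns and function names are hypothetical stand-ins; production guardrail frameworks use far more sophisticated classifiers and policies than simple pattern matching.

```python
import re

# Hypothetical, simplified example of a runtime guardrail.
# Real guardrail tooling goes well beyond regex matching, but the
# shape is the same: inspect input, allow or block before inference.

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the guardrail, False if it is blocked."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)
```

In a deployment, a check like this would sit in front of the model-serving endpoint, with a matching output guardrail filtering responses on the way back to the user.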
We’ll also look at how security testing has evolved and the emerging field of AI red teaming. Red teams have been around for a long time in cybersecurity circles, but AI red teaming has recently gained more prominence as specific techniques ...