June 2026
Intermediate
392 pages
11h 24m
English
In cybersecurity, a red teaming exercise is a form of security testing that simulates the kinds of tactics and techniques a real adversary might use to compromise computer systems and networks. In the AI security community, you’ll hear the term used frequently to refer to the practice of testing an AI system as though a real adversary were trying to break it, then proactively mitigating any security issues found along the way. Thanks to the proliferation of language models and chatbots, AI red teaming is more accessible than ever before, and there have been numerous examples of red teaming discoveries in the press.
In 2025, ...