AI and LLM Cyber Risks and Mitigation
Published by Pearson
A hands-on approach to safeguarding AI systems and managing vulnerabilities
- Explore the emerging challenges of securing Language Model Models (LLMs) in the AI landscape
- Gain unique insights into protecting the very heart of AI systems
- See hands-on demos and get practical experience with real-world threat mitigation
AI and LLM Cyber Risks and Mitigation is a comprehensive exploration of current critical security challenges and solutions for mitigating risks and threats. Best-selling author and speaker Omar Santos will discuss top threats against LLM implementations such as GPT, Bard, DALL-E 3, Midjourney, Stable Diffusion, and open source models such as LLaMA2, Falcon, WizardLM, Gorilla, and more. You will learn about Prompt Injection, Model Denial of Service, and Supply Chain Vulnerabilities.
The course will review several best practices when using open source models from Hugging Face and other tools. We will also cover Insecure Output Handling, Training Data Poisoning, Sensitive Information Disclosure, Insecure Plugin Design, Excessive Agency, Overreliance, and Model Theft. You will not only gain a deep understanding of these threats but also acquire practical skills to identify, mitigate, and respond to them effectively.
The importance of this course cannot be overstated in today's AI-driven world. As AI and LLMs become increasingly integrated into many industries and applications, the risks associated with their misuse or exploitation are also on the rise. It is crucial to learn best practices when training and fine-tuning models using Amazon SageMaker, Microsoft Azure AI Services, and other environments. This course provides the knowledge you need, including how to protect Retrieval Augmented Generation (RAG) implementations and how to use solutions like LangChain, Chroma DB, Pinecone, and other vector databases safely, equipping you with the knowledge and skills to safeguard AI systems so you can maintain data privacy, prevent malicious attacks, and ensure the ethical and secure use of AI technology. This course empowers you to be at the forefront of AI security.
What you’ll learn and how you can apply it
By the end of the live online course, you’ll understand:
- The OWASP Top-10 Risks for Language Model Models (LLMs)
- Practical skills in identifying vulnerabilities and conducting security assessments specific to LLMs
- Strategies for securing LLMs against many attack vectors and maintaining model integrity
- ChatGPT Plugin Vulnerabilities, Prompt Injection using PDFs, and how to threat-model LLM applications
And you’ll be able to:
- Perform a security assessment of vulnerabilities in modern AI implementations. Learn how to integrate the "human in the loop" concept into their AI systems, especially for privileged operations. You will be able to design applications that require user approval for critical actions, reducing the risk of unauthorized actions carried out by LLMs.
- Implement practical defense strategies to protect LLMs from the OWASP Top-10 Risks and other potential threats.
- Effectively segregate and denote untrusted content within user prompts to limit its influence on LLM responses. You’ll learn best practices when training and fine tuning models using Amazon SageMaker, Microsoft Azure AI Services, and other environments.
- Understand techniques such as using ChatML for OpenAI API calls to specify the source of prompt input, enhancing the security of AI interactions by reducing the risk of indirect prompt injection attacks.
This live event is for you because...
- You want to learn about security in AI and ML systems.
- You are a developer, data scientist, or engineer looking to build secure and ethical AI and ML applications while considering privacy aspects.
- You are an IT professional, security specialist, or privacy officer interested in understanding the unique challenges posed by AI and ML technologies.
- You are a product manager, team leader, or executive looking to integrate secure and responsible AI and ML practices within your organization.
Prerequisites
- Basic awareness of ML and AI implementations such as ChatGPT, GitHub Co-pilot, DALL-E, Midjourney, DreamStudio (Stable Diffusion), and others.
- Familiarity with computer science concepts: Basic knowledge of data structures, algorithms, and computer systems will be beneficial in understanding the underlying mechanisms of AI and ML algorithms and their security implications.
- Curiosity and willingness to learn: A strong desire to learn about AI, ML, security, ethics, and privacy, and the ability to think critically about the implications of AI and ML technologies on society is crucial for making the most of the training.
Course Set-up
- You can follow along during the presentation with any Linux system with Python 3.x installed.
Recommended Preparation
- Attend: “AI-Enabled Programming, Networking, and Cybersecurity” by Omar Santos
- Read: Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence by Jon Krohn, Grant Beyleveld, Aglaé Bassens
- Watch: “The Complete Cybersecurity Bootcamp, 2nd Edition” by Omar Santos
- Watch: “Introduction to Transformer Models for NLP: Using BERT, GPT, and More to Solve Modern Natural Language Processing Tasks” by Sinan Ozdemir
Recommended Follow-up
- Watch: “Red Team and Bug Bounty Conference” by Omar Santos
- Attend: “AI, ChatGPT, and other Large Language Models (LLMs) Security” by Dr. Petar Radanliev and Omar Santos
- Watch: “Deep Learning for Natural Language Processing, 2nd Edition” by John Krohn
- Watch: “The Art of Hacking (Video Collection)” by Omar Santos
- Explore: “Ethical Hacking Labs” by Omar and Derek Santos
Schedule
The time frames are only estimates and may vary according to how the class is progressing.
Segment 1: Introduction to AI Threats and LLM Security (45 minutes)
- Course overview and objectives
- Understanding the significance of LLMs in AI landscape
- Surveying the OWASP Top-10 Risks for LLMs
- Exploring the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)
- Q&A and Discussion (5 minutes)
- Break (10 minutes)
Segment 2: Understanding Prompt Injection & Insecure Output Handling (45 minutes)
- Defining Prompt Injection Attacks
- Exploring real-life Prompt Injection Attacks
- Using ChatML for OpenAI API calls to indicate to the LLM the source of prompt input
- Enforcing privilege control on LLM access to backend systems
- Best practices around API tokens for plugins, data access, and function-level permissions.
- Understanding Insecure Output Handling Attacks
- Using the OWASP ASVS (Application Security Verification Standard) guidelines to protect against Insecure Output Handling
- Q&A and Discussion (5 minutes)
- Break (10 minutes)
Segment 3: Training Data Poisoning, Model Denial of Service & Supply Chain Vulnerabilities (45 minutes)
- Understanding training data poisoning attacks
- Exploring model denial of service attacks
- Understanding the risks of the AI and ML supply chain
- Best practices when using open source models from Hugging Face and other sources
- Introducing AI Bill of Materials (AI BOMs)
- Best practices when training and fine tuning models using Amazon SageMaker, Microsoft Azure AI Services, and other environments
- Protecting against queries that lead to recurring resource usage through high-volume generation of tasks in a queue with implementations such as LangChain or AutoGPT
- Sending queries that are unusually resource-consuming, perhaps because they use unusual orthography or sequences
- Q&A and Discussion (5 minutes)
- Break (10 minutes)
Segment 4: Sensitive Information Disclosure, Insecure Plugin Design, Excessive Agency, Overreliance, and Model Theft (45 minutes)
- Integrating adequate data sanitization and scrubbing techniques to prevent user data from entering the training model data
- Implementing robust input validation and sanitization methods to identify and filter out potential malicious inputs to prevent the model from being poisoned
- Protecting Retrieval Augmented Generation (RAG) implementations
- How to use solutions like LangChain, Chroma DB, Pinecone and other vector databases safely
Q&A, Course Conclusion and Wrap-up (15 minutes)
- Review of Key Takeaways
- Future Trends in LLM Security
- Closing Remarks
Your Instructor
Omar Santos
Omar Santos is a Distinguished Engineer at Cisco focusing on advanced AI security research, cybersecurity, incident response, and vulnerability disclosure. He is the co-chair of the Coalition for Secure AI (CoSAI) alongside leading AI companies such as OpenAI, Google, Anthropic, and NVIDIA. Omar has served in the board of the OASIS Open standards organization and is also the chair of the OpenEoX and the Common Security Advisory Framework (CSAF) technical committee. His work led the creation of the CSAF ISO standard. Omar's collaborative efforts extend to numerous organizations, including OWASP, FIRST, and he was the lead of the DEF CON Red Team Village for several years. Omar is the author of over 25 books, 21 video courses, and over 50 academic research papers. Omar is a renowned expert in ethical hacking, vulnerability research, incident response, and AI security. Omar's work in cybersecurity is also recognized through multiple granted patents. Prior to Cisco, Omar served in the United States Marines focusing on the deployment, testing, and maintenance of Command, Control, Communications, Computer, and Intelligence (C4I) systems.