Security Superstream: Secure Code in the Age of AI
Published by O'Reilly Media, Inc.
Navigating Risk to Build Safe and Reliable Systems
AI tools are transforming the way we write and deploy code, making development faster and more efficient, but they also introduce new risks and vulnerabilities. To protect organizations, security must remain a paramount concern across the entire AI ecosystem.
Join top security professionals, software engineers, developers, data scientists, and AI specialists as they share practical insights, real-world experiences, and emerging trends to address the full spectrum of AI security. Whether you’re focused on secure coding practices, building and deploying secure models, or protecting against AI-specific threats, this event offers valuable perspectives on ensuring that your systems remain secure in an increasingly AI-driven world.
What you’ll learn and how you can apply it
- Understand and apply AI security frameworks such as MAESTRO and the Databricks AI Security Framework
- Test the security of AI systems with AI red team best practices
- Defend AI systems from the biggest threats with MITRE ATLAS
- Explore AI safety risks and how to mitigate them
Recommended follow-up
- Take LLM Safety and Security (live course with Thomas Nield)
- Read Not with a Bug, But with a Sticker (book)
- Read The Developer’s Playbook for Large Language Model Security (book)
Schedule
The time frames are only estimates and may vary according to how the event is progressing.
Introduction – Chloé Messdaghi (5 minutes)
Chloé welcomes you to the Security Superstream.
MITRE ATLAS: Community-Driven Tools for AI Security and Assurance – Christina Liaghati and Walker Dimon (35 minutes)
MITRE ATLAS (atlas.mitre.org) is a public knowledge base of adversary tactics and techniques, drawn from real-world attack observations and realistic demonstrations by artificial intelligence red teams and security groups. A growing number of vulnerabilities can arise from the use of open source AI models or data, and ATLAS raises community awareness of and readiness for these unique threats in the broader AI assurance landscape. Christina Liaghati and Walker Dimon of the ATLAS team discuss the ATLAS community’s latest efforts to capture and share cross-community data on real-world AI incidents and to develop mitigations that defend against AI security threats and vulnerabilities.
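If you want to poke at the knowledge base before the session, here’s a minimal sketch of loading the public ATLAS data in Python. It assumes the ATLAS.yaml layout published in the mitre-atlas/atlas-data GitHub repository (a top-level matrices list holding tactics and techniques); verify the URL and schema against the current release before relying on it.

```python
# Minimal sketch: load the public ATLAS knowledge base and list techniques.
# Assumes the ATLAS.yaml layout from the mitre-atlas/atlas-data repo; the
# exact URL and schema may differ between releases.
import requests
import yaml

ATLAS_URL = (
    "https://raw.githubusercontent.com/mitre-atlas/atlas-data/main/dist/ATLAS.yaml"
)

data = yaml.safe_load(requests.get(ATLAS_URL, timeout=30).text)

for matrix in data.get("matrices", []):
    # Build a lookup from tactic ID to tactic name for readable output.
    tactics = {t["id"]: t["name"] for t in matrix.get("tactics", [])}
    for tech in matrix.get("techniques", []):
        # Top-level techniques list the tactic IDs they support;
        # sub-techniques may omit this field, so default to an empty list.
        names = [tactics.get(ta, ta) for ta in tech.get("tactics", [])]
        print(f'{tech["id"]:>12}  {tech["name"]}  ({", ".join(names)})')
```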
From Risk to Resilience: Empowering Security to Unlock Enterprise AI – Omar Khawaja (35 minutes)
While business and data teams race ahead with AI, security and governance leaders are often hesitant, slowing enterprise AI adoption. How can organizations rapidly and confidently adopt AI while managing security risks? Omar Khawaja, who leads Databricks’ Field Security practice, introduces the Databricks AI Security Framework (DASF), an operational guide that helps bridge the divide between enterprise AI enthusiasts and security professionals. You’ll learn the 12 components of a modern AI system and how its four subsystems interact, as well as how to identify the 62 risks and threats at each layer and map them to 64 actionable controls.
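To make the risk-to-control idea concrete, here’s a hypothetical sketch of how such a mapping might be encoded in code. The risk and control entries below are invented placeholders for illustration, not actual DASF content; the real framework defines 62 risks and 64 controls across 12 components.

```python
# Hypothetical sketch of a DASF-style risk-to-control mapping.
# All IDs and descriptions below are invented placeholders.
RISK_TO_CONTROLS: dict[str, list[str]] = {
    "raw-data: poisoned training data": [
        "control-01: validate and track data sources",
    ],
    "model-serving: prompt injection": [
        "control-02: filter and constrain inputs",
        "control-03: monitor model outputs",
    ],
}

def controls_for(risk: str) -> list[str]:
    """Look up the actionable controls mapped to a given risk."""
    return RISK_TO_CONTROLS.get(risk, [])

print(controls_for("model-serving: prompt injection"))
```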
Break (5 minutes)
MAESTRO: A Novel Threat Modeling Framework for Agentic AI – Ken Huang (35 minutes)
Traditional threat modeling frameworks are not sufficient to address the unique security challenges posed by agentic AI systems. Ken Huang, CEO and chief AI officer of DistributedApps.ai and a research fellow at Cloud Security Alliance, introduces MAESTRO, a new seven-layer threat modeling approach designed specifically for agentic AI. You’ll discover the limitations of legacy frameworks when applied to agentic AI systems and get a comprehensive overview of the MAESTRO framework’s layers: foundation models, data operations, agent frameworks, deployment infrastructure, evaluation and observability, security and compliance, and the agent ecosystem. Using a practical example of threat modeling for the Model Context Protocol, Ken demonstrates how to apply MAESTRO to identify and mitigate novel threats. He’ll also preview an open source tool, currently in development, that showcases how MAESTRO can be operationalized to manage threats in real-world, complex AI systems.
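As a rough illustration of how a MAESTRO-style review might be organized, here’s a minimal sketch. The seven layer names come from the session description above; the Threat structure, the example threat, and the mitigation are illustrative assumptions, not part of the framework itself.

```python
# Minimal sketch of a layered threat-modeling exercise over the seven
# MAESTRO layers named in the session description. The Threat/ThreatModel
# structures and the example entries are assumptions for illustration.
from dataclasses import dataclass, field

MAESTRO_LAYERS = [
    "Foundation models",
    "Data operations",
    "Agent frameworks",
    "Deployment infrastructure",
    "Evaluation and observability",
    "Security and compliance",
    "Agent ecosystem",
]

@dataclass
class Threat:
    layer: str
    description: str
    mitigation: str

@dataclass
class ThreatModel:
    system: str
    threats: list[Threat] = field(default_factory=list)

    def add(self, layer: str, description: str, mitigation: str) -> None:
        if layer not in MAESTRO_LAYERS:
            raise ValueError(f"Unknown MAESTRO layer: {layer}")
        self.threats.append(Threat(layer, description, mitigation))

    def coverage_gaps(self) -> list[str]:
        # Layers with no recorded threats are review gaps, not safe layers.
        covered = {t.layer for t in self.threats}
        return [layer for layer in MAESTRO_LAYERS if layer not in covered]

model = ThreatModel("MCP-based agent")
model.add("Agent frameworks", "Prompt injection via tool output",
          "Sanitize tool results before they reach the model")
print(model.coverage_gaps())  # the six layers still awaiting review
```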
Ignore Previous Directions – Ram Shankar Siva Kumar (35 minutes)
Ram Shankar Siva Kumar, author and Tech Policy Fellow at UC Berkeley, provides an overview of the art and science—along with the societal implications—of AI system attacks. He’ll take you through the evolution of adversarial examples to the now-famous jailbreaks, to consider why it’s so darn difficult to secure AI systems from adversaries.
Break (5 minutes)
Resolving Incidents at the Speed of Agents – Ralph Bird (35 minutes)
It’s 3 o’clock in the morning and PagerDuty has woken you up, again. Half asleep, you open your IDE and ask your coding assistant what’s going on. It gets straight to work, pulling the incident data from PagerDuty, grabbing the latest metrics and logs, and identifying possible causes. Your assistant goes back to PagerDuty to find similar incidents and pull up incident reviews. It proposes a code change before running your test suite and opening a PR with the fix. The incident is over, and you’re heading back to bed. Ralph Bird, staff machine learning engineer at PagerDuty, shows how incident response is being revolutionized by AI agents and the Model Context Protocol (MCP). You’ll see a real-world demo of an agent gathering intel, diagnosing issues, and taking action, all within your IDE. But with the move from demo to production, new challenges emerge, especially around security, reliability, and trust. Ralph shares practical insights into these hurdles and the robust safeguards being built to ensure these solutions solve problems without creating bigger ones.
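As a back-of-the-envelope illustration of that workflow, here’s a hypothetical sketch of the agent loop with a human approval gate as one safeguard. Every helper shown (fetch_incident, fetch_logs, propose_patch, run_tests, open_pr) is a stand-in, not a real PagerDuty or MCP API.

```python
# Hypothetical sketch of the incident-response agent loop described above.
# The `agent` object and all its methods are placeholders standing in for
# real PagerDuty and MCP integrations.
def resolve_incident(incident_id: str, agent) -> None:
    incident = agent.fetch_incident(incident_id)        # pull alert context
    evidence = agent.fetch_logs(incident["service"])    # latest metrics/logs
    similar = agent.find_similar_incidents(incident)    # past reviews
    patch = agent.propose_patch(incident, evidence, similar)

    # Guardrails: never merge autonomously; require green tests and a human.
    if not agent.run_tests(patch):
        agent.escalate(incident_id, reason="proposed fix failed test suite")
        return
    pr_url = agent.open_pr(patch, draft=True)
    agent.request_human_review(pr_url)
```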
AI Safety for Businesses – Merve Hickok (35 minutes)
General-purpose AI models amplify known risks around bias, privacy, explainability, and cybersecurity. They also introduce new risks such as hallucinated outputs, granular surveillance, and adversarial actions via deepfakes. The ability to interact with these models through simple natural language instructions also significantly increases the threat surface for adversarial attacks. Merve Hickok, president of the Center for AI and Digital Policy, discusses some of these safety and security risks and their impact on businesses.
Closing Remarks – Chloé Messdaghi (5 minutes)
Chloé closes out today’s event.
Your Hosts and Selected Speakers
Chloé Messdaghi
Chloé Messdaghi is the founder and principal advisor of Thornbridge Advisory. A recognized leader in cybersecurity and AI risk and governance, she is a trusted authority for national and industry journalists and a highly sought-after public speaker. Her insights have been featured across major media outlets, and her contributions to the field have earned her recognition as a Power Player by top publications including Business Insider and SC Media.
Christina Liaghati
Dr. Christina Liaghati leads MITRE’s Trustworthy and Secure AI Department and MITRE ATLAS, an ATT&CK-style framework of the threats and vulnerabilities of AI-enabled systems. She passionately drives research and development in the public interest, working with a collaborative global community to create and openly share actionable tools, capabilities, data, and frameworks for trustworthy and secure AI. Dr. Liaghati is a 2025 Fed 100 award winner and has worked across the community for many years to improve the common understanding of AI security concerns. Her current focus, under ATLAS and across the international community, is building protected mechanisms for increased knowledge and incident sharing across government and industry in both AI security and the broader areas of AI assurance. She also chairs the NATO STO Research Task Group on AI Assurance and Security and the 2025 Symposium on AI Security and Assurance.
Walker Dimon
Walker Dimon is the MITRE ATLAS deputy lead as well as leader of the AI for Cyber group within MITRE’s AI and Autonomy Innovation Center, where he heads up technical and research teams at the intersection of AI and cybersecurity. As MITRE ATLAS deputy lead, Dimon is focused on advancing security in the evolving AI ecosystem to help enable rapid technology adoption. Additionally, he’s exploring the unique challenges, vulnerabilities, and opportunities that arise when integrating AI technologies into government missions, including AI applications in both offensive and defensive cyber contexts. Dimon also currently serves as lead principal investigator for an internally funded research and development effort investigating the application of agentic retrieval-augmented generation large language model (RAG-LLM) architectures for a variety of US government applications within the cyber domain.
Omar Khawaja
Omar Khawaja leads Databricks’ Field Security practice globally. He teaches in Carnegie Mellon University’s CISO and CAIO programs, serves on the boards of HITRUST and the FAIR Institute, and spent nine years as CISO of a $26B enterprise. He also leads the team that developed two actionable AI risk frameworks, DASF and DAGF, which are being adopted by large organizations worldwide.
Ken Huang
Ken Huang is CEO and chief AI officer of DistributedApps.ai and a research fellow at the Cloud Security Alliance, as well as a distinguished author and expert in AI applications and agentic AI security. He holds multiple leadership roles, including cochair of the AI Safety Working Groups at the Cloud Security Alliance and the OWASP AIVSS project. An adjunct professor at the University of San Francisco, he’s also a core contributor to OWASP’s Top 10 Risks for LLM Applications and the NIST Generative AI Public Working Group. The author of numerous books on generative and agentic AI, Ken is a sought-after speaker at major technology and policy forums around the world.
Ram Shankar Siva Kumar
Ram Shankar Siva Kumar is a Data Cowboy working at the intersection of machine learning and security. He founded the AI Red Team at Microsoft, bringing together an interdisciplinary group of researchers and engineers to proactively attack AI systems and defend against attacks. His recent book on attacking AI systems, Not with a Bug, But with a Sticker, has been called “essential reading” by Microsoft’s CTO and has received wide praise from industry leaders at DeepMind and OpenAI as well as from policy makers and academics; he’s donating his royalties to Black in AI. His work on AI and security has appeared at industry conferences and been covered by Bloomberg, VentureBeat, Wired, and GeekWire. He also founded the Adversarial ML Threat Matrix, an ATT&CK-style framework enumerating threats to machine learning, and his work on adversarial machine learning notably appeared in the National Security Commission on Artificial Intelligence (NSCAI) Final Report presented to the US Congress and the president. Currently, he’s a Tech Policy Fellow at UC Berkeley and an affiliate at the Berkman Klein Center for Internet and Society at Harvard University, where he is broadly investigating the policy and legal ramifications of AI in the context of security and how to assess the safety of ML systems.
Ralph Bird
Ralph Bird, PhD, is a staff machine learning engineer at PagerDuty, where he leads the development of generative AI-powered features, including PagerDuty Advance. With deep expertise in designing and deploying AI solutions for enterprise environments and drawing on a background that spans astronomy, digital therapeutics, and SaaS, he specializes in taking innovative ideas from prototype to production. He particularly enjoys the challenge of solving complex problems in safe, secure, and ethical ways for the most demanding customers.
Merve Hickok
Merve Hickok is president of the Center for AI and Digital Policy (CAIDP) and a globally renowned, award-winning AI policy, ethics, and governance professional. CAIDP educates AI policy practitioners and advocates across more than 120 countries and advises international organizations. Merve has testified before the US Congress, the State of California, and New York City and provides AI policy expertise to OECD.AI, UNESCO, GPAI, and the Council of Europe. She’s also the author of From Trustworthy AI Principles to Public Procurement Practices and the founder of AIethicist.org, where her work focuses on the impact of AI systems on individuals, businesses, and society.