Meet the Expert: What to Do When AI Fails with Andrew Burt and Patrick Hall
Published by O'Reilly Media, Inc.
Preventing and planning for AI incidents
Artificial intelligence and machine learning have at least one thing in common with traditional software systems: they fail. Those failures might take the form of discriminatory behavior, privacy violations, or even security breaches, any of which can lead to lawsuits, regulatory fines, and more. So what can organizations do to avoid these pitfalls?
Join us for this edition of Meet the Expert with Andrew Burt and Patrick Hall—the cofounders of bnh.ai, a boutique law firm focused on AI and analytics—to learn how to prevent the inevitable failures in your ML systems from spiraling into full-blown AI incidents. As you explore a new approach to incident response specifically tailored to AI, you’ll learn when and why AI creates liability for the organizations that employ it and how those organizations should react when their AI causes major problems.
O'Reilly Meet the Expert explores emerging business and technology topics and ideas through a series of one-hour interactive events. You’ll engage in a live conversation with experts, sharing your questions and ideas while hearing their unique perspectives, insights, fears, and predictions.
What you’ll learn and how you can apply it
By the end of this live show, you’ll better understand:
- Legal and technical risks related to discrimination, privacy, and security in your ML systems
- How to mitigate these risks before you deploy your ML system
- How to plan for inevitable failures and attacks with AI incident response
This live event is for you because...
- You’re concerned about the potential negative impacts of AI and ML, and you want to do something about it.
- You’re interested in the intersection of AI technologies and law.
- You have questions or concerns about an organization’s use of AI and ML relating to discrimination, privacy, or security risks.
Prerequisites
- Come with your questions for Andrew Burt and Patrick Hall
- Have a pen and paper handy to capture notes, insights, and inspiration
Recommended follow-up
- Read Responsible Machine Learning (report)
- Read “What to Do When AI Fails” (blog post)
- Read “Why you should care about debugging machine learning models” (blog post)
- Read An Introduction to Machine Learning Interpretability, 2nd Edition (report)
Schedule
The time frames are only estimates and may vary according to how the class is progressing.
May 13, 2020, at 9:00am PT / 12:00pm ET
- Introduction and presentation (15 minutes)
- Interactive discussion and Q&A (45 minutes)
Your Guests
Patrick Hall
Patrick Hall is principal scientist at bnh.ai. Patrick also serves as a visiting professor in the Department of Decision Sciences at the George Washington University and as an advisor to H2O.ai. Previously, Patrick led H2O.ai's efforts in responsible AI, resulting in one of the world's first commercial solutions for explainable and fair machine learning. He also held global customer-facing and R&D roles at SAS Institute. Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.
Andrew Burt
Andrew Burt is managing partner at bnh.ai, a boutique law firm focused on AI and analytics, and chief legal officer at Immuta. He’s also a visiting fellow at Yale Law School's Information Society Project. Previously, Andrew was special advisor for policy to the head of the FBI Cyber Division, where he was the lead author of the FBI’s after-action report on the 2014 Sony data breach, in addition to serving as chief compliance and chief privacy officer for the division. A frequent speaker and writer, Andrew has published articles on law and technology for the New York Times, the Financial Times, and Harvard Business Review, where he’s a regular contributor. He holds a JD from Yale Law School.