Preface
The use of AI as a tool to solve real-world challenges has grown rapidly, making these systems ubiquitous in our lives. More and more, machine learning (ML) is being used to support high-stakes decisions in applications ranging from healthcare to autonomous driving. With this growth, the need to explain these often opaque AI systems has become increasingly urgent, and in many cases the lack of explainability remains a barrier to adoption where interpretability is essential.
This book is a collection of some of the most effective and commonly used techniques for explaining why an ML model makes the predictions it does. We discuss the many aspects of Explainable AI (XAI), including its challenges, metrics for success, and case studies that guide best practices. Ultimately, the goal of this book is to bridge the gap between the vast amount of work that has been done in XAI and everyday practice, providing a quick reference for practitioners who aim to incorporate XAI into their ML workflows.
Who Should Read This Book?
Modern ML and AI have been used to solve very complex real-world problems, and model explainability is important for anyone who interacts with or develops those models, from the engineers and product owners who build these systems to the business stakeholders and individuals who use them. This book is for anyone wishing to incorporate the best practices of Explainable AI into their ML solutions. Anyone with an interest in model explainability and ...