An Introduction to Machine Learning Interpretability

Understanding and trusting models and their results is a hallmark of good science. Analysts, engineers, physicians, researchers, and scientists need to understand and trust the models and modeling results that affect their work and their lives. For decades, choosing a model that was transparent to human practitioners or consumers often meant choosing straightforward data sources and simpler model forms such as linear models, single decision trees, or business rule systems. Although these simpler approaches were often the correct choice, and still are today, they can fail in real-world scenarios when the underlying modeled phenomena are nonlinear, rare or faint, or highly specific to certain individuals. Today, the trade-off between the accuracy and interpretability of predictive models has been broken (and perhaps it never really existed[1]). The tools now exist to build accurate and sophisticated modeling systems based on heterogeneous data and machine learning algorithms, and to enable human understanding of and trust in these complex systems. In short, you can now have your accuracy and interpretability cake...and eat it too.

To help practitioners make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning, this report defines key terms, introduces the human and commercial motivations for the techniques, and discusses predictive ...
