Chapter 64. The Ethical Dilemma of Model Interpretability
Grant Fleming
Progress in data science is largely driven by the ever-improving predictive performance of increasingly complex “black-box” models. However, these predictive gains have come at the cost of our ability to interpret the relationships a model derives between its predictors and its target(s), leading to misapplication and public controversy. These drawbacks reveal that interpretability is actually an ethical issue; data scientists should strive to implement additional interpretability methods that preserve the predictive performance of complex models while minimizing the harms of their opacity.
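One family of such interpretability methods is model-agnostic: it probes any fitted model from the outside rather than inspecting its internals. The sketch below illustrates permutation importance, which measures how much a model's error grows when a single feature's values are shuffled. The data, the stand-in "model," and all names here are hypothetical, chosen only to make the idea concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(500, 3))
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in "black-box" model: any callable mapping X -> predictions will do.
def model(X):
    return 3 * X[:, 0] + 0.5 * X[:, 1]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(model, X, y, n_repeats=10, seed=1):
    """How much does the model's error grow when feature j is shuffled?"""
    rng = np.random.default_rng(seed)
    baseline = mse(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            deltas.append(mse(y, model(Xp)) - baseline)
        importances.append(float(np.mean(deltas)))
    return importances

imp = permutation_importance(model, X, y)
# imp[0] dominates, imp[1] is small, imp[2] is ~0 (the model ignores x2).
```

Because the method only needs predictions, it applies equally to a linear regression, a deep neural network, or a gradient-boosted ensemble, which is what makes it attractive as an interpretability layer on top of black-box models.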
Any examination of the scholarly or popular literature on “AI” or “data science” makes apparent the profound importance placed on maximizing predictive performance. After all, recent breakthroughs in model design and the resulting improvements in predictive performance have led to models exceeding doctors’ performance at detecting multiple medical issues and surpassing human reading comprehension. These breakthroughs have been made possible by transitioning from linear models to black-box models such as deep neural networks (DNNs) and gradient-boosted trees (e.g., XGBoost). Instead of using linear transformations of features to generate predictions, these black-box models employ complex, nonlinear ...