Appendix. Taxonomy, Techniques, and Further Reading
To aid you in using this book going forward, we have put together a brief review of the topics and techniques it covers. You can use these guides and tables as a quick reference to survey your options for a new problem before diving into more detail.
ML Consumers
There are three types of users who consume and interact with ML:
- ML practitioners
- Data scientists and ML engineers who build, develop, tune, deploy, and operationalize a model.
- Observers
- Business stakeholders and regulators who are neither involved in engineering the model nor using it in deployment. They use explanations to validate model performance and build trust that a model is working as expected.
- End users
- Domain experts and affected users who use, or are impacted by, a model’s predictions. They may have a deep understanding of the context in which the model operates, or they may be affected by the outcome of a model’s prediction while having little background in ML or the domain.
Taxonomy of Explainability
There are several characteristics that help define the field of explainability. These are:
- Explainability versus interpretability
- Although the two terms are sometimes used interchangeably in industry, we define explainability as techniques that explain a model based on a prediction (or group of predictions). The technique does not need to understand how the model itself works, although it may rely on aspects of a model’s ...