Chapter 7: Understanding ML Models
Now that we have built a few models using H2O software, the next step before production is to understand how each model makes its decisions. This goal goes by several names: machine learning interpretability (MLI), explainable artificial intelligence (XAI), model explainability, and so on. The common thread is that building a model that predicts well is not enough; deploying a model you do not fully trust carries inherent risk. In this chapter, we outline a set of capabilities within H2O for explaining ML models.
By the end of this chapter, you will be able to do the following:
- Select an appropriate model metric for evaluating your models.
- Explain what Shapley values are and how they can ...
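Before diving into H2O's tooling, it can help to see the underlying idea in isolation. The following is a minimal sketch (not H2O's implementation) of the exact Shapley value formula from cooperative game theory, applied to a toy two-"feature" payoff function; the names `shapley_values` and `v` are illustrative, not part of any library.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game.

    players: list of player (feature) identifiers
    value:   function mapping a set of players to a payoff (model output)
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        # Average the marginal contribution of i over all coalitions S of the others,
        # weighted by |S|! * (n - |S| - 1)! / n!
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = frozenset(subset)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(s | {i}) - value(s))
        phi[i] = total
    return phi

# Toy payoff function standing in for a model's prediction on feature subsets.
def v(coalition):
    payoffs = {
        frozenset(): 0,
        frozenset({"a"}): 1,
        frozenset({"b"}): 2,
        frozenset({"a", "b"}): 10,  # strong interaction between a and b
    }
    return payoffs[frozenset(coalition)]

print(shapley_values(["a", "b"], v))  # → {'a': 4.5, 'b': 5.5}
```

Note the efficiency property: the attributions sum to `v({a, b}) - v({})` = 10, so the full prediction is exactly distributed across the features. In practice, H2O computes Shapley-based contributions for tree models far more efficiently than this brute-force enumeration, which is exponential in the number of features.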