Chapter 12. Interpreting Neural Networks
When trying to understand the reasons behind a model’s prediction, local per-sample feature importance can be a valuable tool. It lets you focus your analysis on a smaller slice of the input data, giving a more targeted view of the key features that contributed to the model’s output. However, it is often still unclear which patterns the model relies on when it assigns high importance to a feature. This issue can be partly circumvented by reviewing explanations for strategically chosen samples to discern the actual reason behind a prediction, a technique that will also be introduced practically later in this chapter. However, this approach is limited by the number of samples available ...
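To make this concrete, here is a minimal sketch of one common way to compute local per-sample feature importance: gradient saliency on a single input. The model, feature count, and tabular setup below are hypothetical placeholders for illustration, not the specific configuration used later in this chapter:

```python
import torch
import torch.nn as nn

# Hypothetical tabular classifier: 4 input features, 2 classes
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# One sample whose prediction we want to explain
x = torch.rand(1, 4, requires_grad=True)

# Backpropagate the logit of the predicted class to the input
logits = model(x)
target = logits.argmax()  # index of the predicted class
logits[0, target].backward()

# The absolute input gradient is a simple local importance score:
# larger values mean the prediction is more sensitive to that feature
importance = x.grad.abs().squeeze()
print(importance)
```

In practice, libraries such as Captum or SHAP provide more robust attribution variants (for example, Integrated Gradients), but the core idea is the same: attribute a single prediction back to the individual input features of that sample.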