14
Explaining Graph Neural Networks
One of the most common criticisms of neural networks is that their outputs are difficult to understand. Unfortunately, GNNs are not immune to this limitation: an explanation must account not only for which input features are important, but also for the neighboring nodes and edges that contributed to the prediction. In response to this issue, the area of explainability (in the form of explainable AI, or XAI) has developed many techniques to better understand the reasons behind a prediction or the general behavior of a model. Some of these techniques have been translated directly to GNNs, while others take advantage of the graph structure to offer more precise explanations.
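To make the idea concrete before diving into specific techniques, here is a minimal, illustrative sketch (not taken from the book) of an occlusion-style explanation on a graph: we score each edge by how much a toy model's prediction for a node changes when that edge is removed. The graph, features, and weights below are all hypothetical.

```python
import numpy as np

# Toy setup (hypothetical): a 4-node graph with 2-dimensional node
# features, an undirected edge list, and fixed "trained" weights.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.5, 0.5]])
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]  # each undirected edge listed once
W = np.array([[0.8, -0.2],
              [0.3,  0.6]])

def predict(node, edge_list):
    """One round of mean-neighbor aggregation followed by a linear layer."""
    neigh = [v for u, v in edge_list if u == node] + \
            [u for u, v in edge_list if v == node]
    if neigh:
        h = (X[node] + X[neigh].sum(axis=0)) / (1 + len(neigh))
    else:
        h = X[node]
    return h @ W

def edge_importance(node):
    """Occlusion score: drop each edge and measure the prediction shift."""
    base = predict(node, edges)
    scores = {}
    for e in edges:
        reduced = [f for f in edges if f != e]
        scores[e] = float(np.abs(predict(node, reduced) - base).sum())
    return scores

print(edge_importance(2))
```

In this sketch, edges that do not touch node 2's neighborhood get a score of zero, while the edges (1, 2) and (2, 3) receive positive scores: a simple graph-aware explanation that no feature-attribution method alone would provide.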
In this chapter, we will explore some explanation techniques to understand why a given prediction ...