One of the most common criticisms of neural networks is that their outputs are difficult to understand. Unfortunately, GNNs are not immune to this limitation: on top of explaining which input features are important, an explanation must also account for the neighboring nodes and connections that influenced the prediction. In response to this issue, the field of explainability (explainable AI, or XAI) has developed many techniques to shed light on the reasons behind an individual prediction or on a model's general behavior. Some of these techniques have been adapted to GNNs, while others take advantage of the graph structure to offer more precise explanations.
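To give a concrete feel for what such an explanation looks like, here is a minimal sketch of a graph-aware technique in practice. It assumes PyTorch Geometric 2.3 or later and its `Explainer`/`GNNExplainer` interface; the Cora dataset, the small two-layer GCN, and the node index are placeholders chosen for illustration, not taken from this chapter.

```python
import torch
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

# Load a standard citation graph for node classification
dataset = Planetoid(root='.', name='Cora')
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv2(x, edge_index)

model = GCN()
# (in practice, train the model on data.train_mask before explaining it)

# GNNExplainer learns soft masks over edges and node features that
# preserve the model's prediction for the target node
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='raw',
    ),
)

# Explain the prediction for a single node (index 10 is arbitrary)
explanation = explainer(data.x, data.edge_index, index=10)
print(explanation.edge_mask)  # importance score per edge
print(explanation.node_mask)  # importance score per node feature
```

The edge mask highlights which connections in the node's neighborhood drove the prediction, while the feature mask plays the role of classical feature-importance scores; together they illustrate why explaining a GNN requires looking beyond input features alone.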
In this chapter, we will explore some explanation techniques to understand why a given prediction ...