3. Graph Neural Networks
Giulia FRACASTORO and Diego VALSESIA
Politecnico di Torino, Turin, Italy
3.1. Introduction
The recent wave of impressive results obtained in fields as varied as computer vision, natural language processing, bioinformatics and many more can be attributed to advances in training and designing neural networks. A neural network works as a universal function approximator, meaning it can use training data to learn complex input-output mappings. However, training is a non-convex optimization problem, and it can be challenging to reach a satisfactory local minimum. This is why careful design of the network, incorporating as much prior information as is available for the problem at hand, is crucial to building successful models.

The convolutional neural network (CNN) is a successful example of this principle, and it has become the workhorse of state-of-the-art models in computer vision, audio processing and more because it explicitly exploits some underlying properties of the data. The convolutional layer leverages the three main properties of much of the natural data that is of interest, such as images: stationarity, locality and compositionality. A signal is stationary when its statistical features do not change significantly over time or space; convolution exploits this property by reusing the same filter weights over different portions of the signal, which also limits the number of trainable parameters, preventing overfitting and vanishing ...
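The weight-sharing idea described above can be made concrete with a minimal sketch: a toy 1-D convolution in NumPy (the function and variable names here are illustrative, not from the text) in which the same two filter weights are reused at every position of the signal, so the parameter count stays fixed regardless of input size, unlike a dense layer.

```python
import numpy as np

def conv1d(signal, kernel):
    """Slide the SAME kernel over every position of the signal
    (weight sharing): one set of weights, reused everywhere."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

signal = np.arange(8, dtype=float)    # toy 1-D signal of length 8
kernel = np.array([1.0, -1.0])        # a single shared filter: 2 parameters

out = conv1d(signal, kernel)          # 7 outputs, all from the same 2 weights

# Parameter count: the convolutional map needs len(kernel) = 2 weights,
# while a fully connected layer from 8 inputs to 7 outputs would need 8*7 = 56.
conv_params = kernel.size             # 2
dense_params = signal.size * out.size # 56
```

Because the statistics of a stationary signal are the same everywhere, the same filter is equally useful at every position, which is exactly what the loop above exploits.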