3 Multilayer Perceptron (in Neural Networks)

The multilayer perceptron (MLP) is the simplest type of artificial neural network (ANN). It combines multiple perceptron models into a feedforward architecture. Perceptrons are inspired by the human brain and attempt to mimic aspects of its functionality to solve problems. In an MLP, these perceptrons are densely interconnected and operate in parallel, which enables faster computation. Colloquially, MLPs with only one hidden layer are called "vanilla" neural networks.

An MLP consists of at least three fully connected layers of perceptrons: an input layer, a hidden layer, and an output layer. An MLP always has exactly one input layer and one output layer, but it may have multiple hidden layers in between, as shown in Figure 3.1. The number of layers is known as the depth, and the number of units in a layer is known as the width [1]. In addition to its layers, an MLP has weights, biases, and activation functions. Except for those in the input layer, each perceptron applies a non-linear activation function. Every perceptron in one layer connects, with its own weight, to every perceptron in the next layer.
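As a concrete illustration of these terms, the following NumPy sketch builds the weight matrices and bias vectors of a fully connected MLP and counts its parameters. The layer widths here are chosen arbitrarily for the example and are not taken from Figure 3.1.

import numpy as np

# Hypothetical architecture: 4 input units, two hidden layers of
# width 8, and 3 output units (widths chosen only for illustration).
layer_sizes = [4, 8, 8, 3]

rng = np.random.default_rng(42)

# Full connectivity: weights[k] has one entry for every pair of units
# in consecutive layers, so its shape is
# (width of layer k+1, width of layer k).
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

depth = len(layer_sizes)   # number of layers
widths = layer_sizes       # number of units per layer
n_params = sum(W.size for W in weights) + sum(b.size for b in biases)
print(depth, widths, n_params)   # 4 [4, 8, 8, 3] 139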

If an MLP has two or more hidden layers, it is called a deep neural network. Mathematically, the output of the MLP shown in Figure 3.1 is computed as in Equation 3.1 [2, 3]:

\[
\mathbf{y} = \sigma\big(\mathbf{W}_L\,\sigma\big(\mathbf{W}_{L-1}\cdots\sigma(\mathbf{W}_1\mathbf{x} + \mathbf{b}_1)\cdots + \mathbf{b}_{L-1}\big) + \mathbf{b}_L\big) \tag{3.1}
\]

where y, the output of the MLP, is a vector of shape (m, 1); x is the input vector; W_k and b_k are the weight matrix and bias vector connecting layer k − 1 to layer k; σ(·) is the non-linear activation function applied element-wise; and L is the number of weighted layers.
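A minimal NumPy sketch of this computation follows. It applies Equation 3.1 as a loop over layers, assuming for illustration the logistic sigmoid as the activation function and arbitrarily chosen layer widths.

import numpy as np

def sigmoid(z):
    # Element-wise logistic activation, one common choice of sigma.
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights, biases):
    # Equation 3.1: repeatedly apply h <- sigma(W h + b), layer by layer.
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(W @ h + b)
    return h   # the output vector y

# Example: 4 inputs, one hidden layer of width 8, m = 3 outputs.
rng = np.random.default_rng(0)
sizes = [4, 8, 3]
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros((n_out, 1)) for n_out in sizes[1:]]

x = rng.standard_normal((sizes[0], 1))
y = mlp_forward(x, weights, biases)
print(y.shape)   # (3, 1), i.e., a vector of shape (m, 1)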
