January 2018 · Intermediate to advanced · 470 pages · 11h 9m · English
A Multilayer Perceptron (MLP) is an example of a DNN that is a feed-forward neural network; that is, connections exist only between neurons in adjacent layers. It consists of one (pass-through) input layer, one or more layers of linear threshold units (LTUs) called hidden layers, and one final layer of LTUs called the output layer.
Every layer except the output layer includes a bias neuron and is fully connected to the next layer, so each pair of adjacent layers forms a complete bipartite graph. The signal flows in one direction only, from input to output; hence the name feed-forward.
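The architecture described above can be sketched in a few lines of NumPy. This is a minimal illustration, not library code: the layer sizes, random initialization, and the `step` activation (the classic LTU threshold) are assumptions chosen for the sketch; the bias neuron is represented as a bias vector added at each layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(z):
    # LTU activation: fire (1.0) when the weighted sum is non-negative
    return (z >= 0).astype(float)

def forward(x, weights, biases):
    # Feed-forward pass: the signal moves strictly from input to output
    a = x
    for W, b in zip(weights, biases):
        a = step(a @ W + b)  # bias term plays the role of the bias neuron
    return a

# Hypothetical layer sizes: 4 inputs -> 5 hidden LTUs -> 3 output LTUs
sizes = [4, 5, 3]
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=n) for n in sizes[1:]]

x = rng.normal(size=4)   # one input example
y = forward(x, weights, biases)
print(y.shape)           # one activation per output-layer LTU
```

Because every unit here is a hard-threshold LTU, the outputs are binary (0 or 1); modern MLPs replace the step function with differentiable activations such as ReLU or sigmoid so that gradient-based training is possible.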
Until recently, an MLP was trained using the back-propagation training algorithm, but now the optimized version (that ...