Knowing about deep architectures

Once you have the perceptron figured out, it makes sense to combine multiple perceptrons into a larger network, known as a multilayer perceptron (MLP). MLPs usually consist of at least three layers, where the first layer has a node (or neuron) for every input feature of the dataset, and the last layer has a node for every class label.

Any layer between the first and the last is called a hidden layer. An example of such a feed-forward neural network is shown in the following diagram:
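As a concrete illustration of this sizing rule, here is a minimal sketch using OpenCV's ml module; the feature count, hidden-layer size, and class count are placeholder values chosen for the example, not values from the book:

import numpy as np
import cv2

# Placeholder sizes: 2 input features, 8 hidden nodes, 3 class labels.
n_features, n_hidden, n_classes = 2, 8, 3

mlp = cv2.ml.ANN_MLP_create()
# One node per input feature, one per class label,
# and a hidden layer in between.
mlp.setLayerSizes(np.array([n_features, n_hidden, n_classes],
                           dtype=np.int32))
mlp.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM, 2.5, 1.0)
mlp.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)

Note that setLayerSizes takes one entry per layer, so adding more hidden layers is just a matter of inserting more sizes into the array.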

In a feed-forward neural network, some or all of the nodes in the input layer are connected to all of the nodes in the hidden layer, and some or all of the nodes in the hidden layer are connected to some or all of the nodes in the output layer. The connections run in one direction only, from input toward output, which is what makes the network feed-forward.
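To make this connectivity concrete, here is a minimal NumPy sketch of a single forward pass through a fully connected network; the random weights, layer sizes, and tanh activation are illustrative assumptions, not the book's code:

import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_classes = 2, 8, 3

# Every input node connects to every hidden node (W1), and every
# hidden node connects to every output node (W2).
W1 = rng.standard_normal((n_hidden, n_features))
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_classes, n_hidden))
b2 = np.zeros(n_classes)

x = rng.standard_normal(n_features)   # one input sample
hidden = np.tanh(W1 @ x + b1)         # hidden-layer activations
scores = W2 @ hidden + b2             # one score per class label

Information flows strictly from x through hidden to scores; there are no backward or recurrent connections anywhere in the computation.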
