There's more...

At every neuron/node in the layers of a neural network, a series of matrix operations is performed. A more mathematical way of visualizing the feedforward network is given in the following diagram, which will help you better understand the operations at each node/neuron:

  1. Intuitively, we can see that the inputs (which are vectors or matrices) are first multiplied by weight matrices. A bias is added to this term, and the result is then passed through an activation function (such as ReLU, tanh, sigmoid, threshold, and so on) to produce the output, as sketched in the example after this list. Activation functions are key in ensuring that the network is able to learn linear as well as non-linear functions.
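
To make the operation at a single layer concrete, here is a minimal NumPy sketch of the weight multiplication, bias addition, and activation described above. The array values, layer sizes, and function names are illustrative placeholders rather than anything taken from the recipe, and ReLU could be swapped for tanh, sigmoid, or another activation:

```python
import numpy as np

def relu(z):
    """ReLU activation: passes positive values through, zeroes out negatives."""
    return np.maximum(0, z)

# Illustrative input vector (3 features) and layer parameters (2 neurons).
# These weights and biases are arbitrary placeholder values.
x = np.array([0.5, -1.2, 3.0])           # input vector
W = np.array([[0.2, -0.4, 0.1],           # weight matrix: one row per neuron
              [0.7,  0.3, -0.5]])
b = np.array([0.1, -0.2])                 # bias vector: one entry per neuron

# Feedforward step at this layer: multiply by weights, add bias, then activate.
z = W @ x + b                             # linear combination (pre-activation)
output = relu(z)                          # activated output of the layer

print("pre-activation:", z)
print("output:", output)
```

Stacking several such layers, each feeding its activated output into the next layer's weight multiplication, is what produces the full feedforward network shown in the diagram.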
