May 2018 · Beginner · 490 pages · English
The output layer will now receive the result y = sigmoid(input * weights + bias) from the hidden layer, that is, the weighted sum plus bias squashed by the logistic sigmoid, as shown in the following diagram:

The output layer takes the y output of the hidden layer (on the right-hand side of the graph) and applies its own weights and bias to it. Once the multiplication by the weights and the addition of the bias are computed, the logistic sigmoid function squashes this output as well. This is a tidy way of keeping values homogeneous over the network: each layer always sends small, computable values to the next layer as implemented ...
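The layer-by-layer squashing described above can be sketched as a short forward pass. This is a minimal illustration, not the book's actual code; the layer sizes, the weight initialization, and the variable names are assumptions made for the example:

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid: squashes any real value into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical shapes: 2 inputs, 3 hidden units, 1 output unit.
rng = np.random.default_rng(0)
x = np.array([0.5, -1.2])            # input vector
W_hidden = rng.normal(size=(2, 3))   # hidden-layer weights
b_hidden = np.zeros(3)               # hidden-layer bias
W_out = rng.normal(size=(3, 1))      # output-layer weights
b_out = np.zeros(1)                  # output-layer bias

# Hidden layer: y = sigmoid(input * weights + bias)
y_hidden = sigmoid(x @ W_hidden + b_hidden)

# The output layer applies its own weights and bias to the hidden
# layer's output, then squashes the result with the same sigmoid,
# so every layer passes values in (0, 1) to the next.
y_out = sigmoid(y_hidden @ W_out + b_out)
print(y_out)
```

Because both layers end with the sigmoid, every activation stays strictly between 0 and 1 no matter how large the weighted sums grow, which is the homogeneity the text refers to.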