December 2018
Beginner to intermediate
684 pages
21h 9m
English
The hidden layer, h, projects the two-dimensional input into a three-dimensional space using the weights, Wh, and translates the result by the bias vector, bh. To perform this affine transformation, the hidden layer weights are represented by a 2 x 3 matrix, Wh, and the hidden layer bias vector by a three-dimensional vector, as follows:

    Wh = | w11  w12  w13 |          bh = (b1, b2, b3)
         | w21  w22  w23 |

The hidden layer activations, h, result from applying the sigmoid function element-wise to the dot product of the input data and the weights, plus the bias vector, as follows:

    h = σ(X · Wh + bh),  where σ(z) = 1 / (1 + e^(-z))

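The affine transformation and sigmoid activation described above can be sketched in NumPy as follows. The random parameter values and the five-sample input batch are illustrative assumptions, not values from the text:

```python
import numpy as np

def sigmoid(z):
    # Element-wise logistic function
    return 1 / (1 + np.exp(-z))

# Hypothetical parameter values for illustration
rng = np.random.default_rng(0)
Wh = rng.normal(size=(2, 3))   # hidden-layer weights, 2 x 3
bh = np.zeros(3)               # hidden-layer bias, three-dimensional

X = rng.normal(size=(5, 2))    # five two-dimensional input samples
h = sigmoid(X @ Wh + bh)       # affine transformation, then sigmoid
print(h.shape)                 # (5, 3): each sample projected to 3D
```

Each row of h lies in (0, 1)^3, since the sigmoid squashes every coordinate of the affine output into the unit interval.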
To implement ...