A better idea is to initialize your weights with small random values centered at zero. For this, we can draw values from a normal distribution with zero mean and unit variance and then scale them by some small constant, such as 0.01.
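As a minimal sketch of this idea in NumPy (the layer sizes here are hypothetical and chosen only for illustration), the initialization might look like this:

    import numpy as np

    # Hypothetical layer sizes, for illustration only
    n_in, n_out = 784, 128

    # Standard normal draws (zero mean, unit variance), scaled by 0.01
    # so the initial weights are small and centered at zero
    W = np.random.randn(n_in, n_out) * 0.01

    # Biases can start at zero; the random weights already break symmetry
    b = np.zeros(n_out)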
Doing this breaks the symmetry in the weights, which is exactly what we want: every weight is now random and unique, so during the forward and backward passes each neuron computes a different value and receives a different update. This gives the neurons the chance to learn many different features that all work together as part of one big neural network.
The only thing left to worry about is how small we set our weight values. If they are set too small, the backpropagation updates will also be very small, and the learning signal can shrink toward zero as it passes through the layers of a deep network.
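To make this concrete, here is a small illustrative experiment (a sketch assuming NumPy and a stack of tanh layers; the scale of 0.0001 and the layer width are made up for the demonstration): with a deliberately tiny weight scale, the activations collapse toward zero from layer to layer, and the gradients computed during backpropagation shrink with them.

    import numpy as np

    np.random.seed(0)
    a = np.random.randn(1000, 100)  # a batch of illustrative inputs

    # Pass the activations through several tanh layers whose weights
    # use a deliberately too-small initialization scale
    for layer in range(5):
        W = np.random.randn(100, 100) * 0.0001
        a = np.tanh(a @ W)
        print(f"layer {layer}: activation std = {a.std():.2e}")

    # The standard deviation drops by orders of magnitude at every layer,
    # so the updates flowing back through the network vanish as well.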