March 2018 · Intermediate to advanced · 272 pages · 7h 53m · English
If we make our network architecture too complicated, two things will happen:
If we add many layers, backpropagation multiplies the gradient by each layer's derivative on the way back, so the gradients get smaller and smaller until the first few layers barely train at all. This is called the vanishing gradient problem (a small numeric sketch appears after these two points). We're nowhere near that yet, but we will talk about it later.
In (almost) the words of rap legend Christopher Wallace, aka Notorious B.I.G., the more neurons we come across, the more problems we see: every extra neuron adds parameters, and more parameters mean higher variance and a greater risk of overfitting. With that said, the variance can be managed with dropout, regularization, and early stopping (sketched below), and advances in GPU computing make deeper networks possible.
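To make the first point concrete, here is a minimal numeric sketch, not from the book, of why stacking many sigmoid layers shrinks the gradient. It ignores the weight terms and just tracks the activation derivatives: the sigmoid's derivative is at most 0.25, so multiplying one per layer collapses the product toward zero.

```python
# Minimal illustration of the vanishing gradient problem (assumption:
# sigmoid activations; weight factors are ignored for simplicity).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # never larger than 0.25

rng = np.random.default_rng(0)

gradient = 1.0  # pretend the gradient at the output layer is 1
for layer in range(20):
    pre_activation = rng.normal()  # stand-in pre-activation value
    gradient *= sigmoid_derivative(pre_activation)
    if layer % 5 == 4:
        print(f"after {layer + 1:2d} layers, gradient ~ {gradient:.2e}")
```

Running this, the surviving gradient drops by several orders of magnitude every few layers, which is exactly why the first few layers of a very deep sigmoid network barely train.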
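For the second point, here is a hedged Keras sketch, again not the book's code, wiring together the three variance-control tools just mentioned: dropout layers, L2 weight regularization, and an early-stopping callback. The layer sizes, the L2 strength, the input shape, and the `x_train` / `y_train` names are illustrative placeholders.

```python
# Sketch of dropout + L2 regularization + early stopping in Keras
# (assumptions: binary classification, 20 input features, hyperparameters
# chosen only for illustration).
from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 regularization
    layers.Dropout(0.5),                                     # dropout
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Early stopping halts training once validation loss stops improving
# and rolls back to the best weights seen so far.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)

# model.fit(x_train, y_train, validation_split=0.2,
#           epochs=100, callbacks=[early_stop])  # x_train/y_train are placeholders
```

The idea is that the network is allowed to be big, but the regularizer penalizes large weights, dropout forces redundancy, and early stopping prevents it from memorizing the training set.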
If I had to pick ...