Vanishing gradient 

Neural networks have been a revelation in extracting complex features from data. Whether the input is images or text, they are able to find the combinations of features that result in better predictions. The deeper the network, the higher the chances of capturing those complex features. If we keep adding hidden layers, the learning speed of the newly added hidden layers gets faster.

However, when we get down to backpropagation, which means moving backwards through the network to compute the gradients of the loss with respect to the weights, the gradient tends to get smaller and smaller as we head towards the first layer. This means that the initial layers of a deep network become slower learners, while the later layers tend to learn faster. This is called the vanishing gradient problem.
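
To see this effect concretely, here is a minimal NumPy sketch (the depth, layer width, weight scale, and sigmoid activation are illustrative choices, not taken from the text) that pushes a random input through a stack of sigmoid layers and then backpropagates by hand, printing the gradient norm at each layer as we move backwards.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A small fully connected stack of sigmoid layers (sizes are illustrative).
n_layers = 10
width = 32
weights = [rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
           for _ in range(n_layers)]

# Forward pass: keep every activation so we can backpropagate by hand.
activations = [rng.normal(size=(width, 1))]
for W in weights:
    activations.append(sigmoid(W @ activations[-1]))

# Backward pass: start from an arbitrary gradient at the output and apply
# the chain rule layer by layer, moving back towards the first layer.
grad = np.ones((width, 1))
for layer in range(n_layers - 1, -1, -1):
    a = activations[layer + 1]                        # this layer's output
    grad = weights[layer].T @ (grad * a * (1.0 - a))  # sigmoid'(z) = a * (1 - a)
    print(f"layer {layer:2d}  gradient norm = {np.linalg.norm(grad):.3e}")

Because the derivative of the sigmoid is at most 0.25, each application of the chain rule multiplies the gradient by a factor smaller than one, so the printed norms shrink layer by layer and the earliest layers receive gradients that are orders of magnitude smaller than the last ones.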
