June 2020
Intermediate to advanced
364 pages
13h 56m
English
In Chapter 7, Feedforward Neural Networks, we covered backpropagation and gradient descent as a way of optimizing a model's parameters to reduce the loss. We also saw that this process is slow and needs many training samples, and therefore a lot of compute. To overcome this, we can use optimization-based meta-learning, where we learn the optimization process itself. But how do we do that?
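As a rough illustration of what "learning the optimization process" can mean (a hypothetical toy sketch, not the book's own code), the snippet below meta-learns the step size of gradient descent in the style of Meta-SGD: across many sampled tasks, we tune the inner-loop learning rate `alpha` so that a single gradient step from the starting weight already fits a fresh task well. The slope-regression task, the variable names, and the hyperparameter values are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each task is a 1-D regression y = a * x with its own random slope a."""
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=50)
    return x, a * x

def grad(w, x, y):
    """Gradient of the mean-squared error L(w) = mean((w*x - y)^2) w.r.t. w."""
    return 2.0 * np.mean((w * x - y) * x)

# Meta-learn the inner-loop learning rate alpha: the "optimizer" itself
# becomes a learnable parameter, trained so that ONE gradient step from
# w = 0 adapts well to whatever task we sample next.
alpha = 0.01      # learned optimizer parameter (the step size)
meta_lr = 0.05    # outer-loop learning rate for updating alpha
for _ in range(3000):
    x, y = sample_task()
    w0 = 0.0
    g0 = grad(w0, x, y)
    w1 = w0 - alpha * g0                # inner step using the learned alpha
    # d/d(alpha) of the post-update loss L(w1), by the chain rule:
    # dL(w1)/d(alpha) = dL/dw at w1, times dw1/d(alpha) = -g0.
    meta_grad = -g0 * grad(w1, x, y)
    alpha -= meta_lr * meta_grad        # outer step: improve the optimizer
```

After meta-training, a single inner gradient step with the learned `alpha` fits a brand-new task far better than the same step with the naive initial step size, which is exactly the speed-up that motivates optimization-based meta-learning.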