Hands-On Mathematics for Deep Learning

by Jay Dawani
June 2020
Packt Publishing
Content preview from Hands-On Mathematics for Deep Learning

Training and optimization

As in the neural networks we have already encountered, RNNs also update their parameters using backpropagation, by finding the gradient of the error (loss) with respect to the weights. Here, however, it is referred to as Backpropagation Through Time (BPTT) because the network is unrolled over time, and the gradient has to be propagated back through every time step. I know the name sounds cool, but it has nothing to do with time travel; it's still just good old backpropagation, with gradient descent for the parameter updates.
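
To make the unrolling concrete, the following is a minimal NumPy sketch (not the book's own code) of the forward pass of a vanilla RNN, using the U (input-to-hidden), W (hidden-to-hidden), and V (hidden-to-output) naming from this section; tanh hidden units and a softmax cross-entropy loss are assumptions made purely for illustration. Every hidden state is cached so that BPTT can reuse it on the way back:

import numpy as np

def rnn_forward(inputs, targets, U, W, V, h_prev):
    """Unroll a vanilla RNN over one sequence, caching every hidden state."""
    xs, hs, ps = {}, {}, {}
    hs[-1] = np.copy(h_prev)                               # state before the first step
    loss = 0.0
    for t in range(len(inputs)):
        xs[t] = inputs[t]                                  # input vector at step t
        hs[t] = np.tanh(U @ xs[t] + W @ hs[t - 1])         # hidden state at step t
        scores = V @ hs[t]                                 # unnormalized outputs
        ps[t] = np.exp(scores) / np.sum(np.exp(scores))    # softmax probabilities
        loss += -np.log(ps[t][targets[t]])                 # cross-entropy at step t
    return loss, xs, hs, ps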

Here, using BPTT, we want to find out how much the hidden units and the output affect the total error, as well as how much changing the weights (U, V, W) affects the output. W, as we know, is shared across every time step of the unrolled network, so we need to traverse all the way back ...
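
Continuing the same sketch, here is one way BPTT itself could look: the gradients for U, W, and V are accumulated while walking backwards through the time steps, and because W is shared, every step adds its contribution to the same dW. A plain gradient-descent update then applies the accumulated gradients (the learning rate of 0.01 is an arbitrary illustrative value):

import numpy as np

def rnn_bptt(inputs, targets, U, W, V, xs, hs, ps):
    """Backpropagation Through Time for the forward sketch above."""
    dU, dW, dV = np.zeros_like(U), np.zeros_like(W), np.zeros_like(V)
    dh_next = np.zeros_like(hs[0])        # gradient arriving from the future
    for t in reversed(range(len(inputs))):
        dy = np.copy(ps[t])
        dy[targets[t]] -= 1.0             # gradient of cross-entropy w.r.t. the scores
        dV += np.outer(dy, hs[t])         # V only sees step t directly
        dh = V.T @ dy + dh_next           # total gradient reaching the hidden state
        draw = (1.0 - hs[t] ** 2) * dh    # backpropagate through tanh
        dU += np.outer(draw, xs[t])
        dW += np.outer(draw, hs[t - 1])   # shared W: contributions from every step add up
        dh_next = W.T @ draw              # pass the gradient back to step t - 1
    return dU, dW, dV

def sgd_step(params, grads, lr=0.01):
    """Good old gradient descent for the parameter updates."""
    for p, g in zip(params, grads):
        p -= lr * g                       # update each shared weight matrix in place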
