Backpropagating through time

We just learned how forward propagation works in RNNs and how it predicts the output. Now, we compute the loss, $L_t$, at each time step, $t$, to determine how well the RNN has predicted the output. We use the cross-entropy loss as our loss function. The loss at time step $t$ can be given as follows:

$$L_t = -y_t \log(\hat{y}_t)$$

Here, $y_t$ is the actual output and $\hat{y}_t$ is the predicted output at time step $t$. The final loss is the sum of the loss at all time steps:

$$L = \sum_{t} L_t$$
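As a minimal sketch (not the book's own code), the per-step cross-entropy loss and its sum over time steps can be computed with NumPy as follows. The `cross_entropy_loss` helper and the toy sequences are illustrative names only, assuming one-hot actual outputs and softmax-normalized predictions:

```python
import numpy as np

def cross_entropy_loss(y_true, y_hat):
    """Cross-entropy loss at a single time step.

    y_true: one-hot encoded actual output at time step t
    y_hat:  predicted probability distribution at time step t
    """
    # Small constant avoids log(0) when a predicted probability is exactly zero
    return -np.sum(y_true * np.log(y_hat + 1e-12))

# Toy example over T = 3 time steps with a vocabulary of 4 symbols
y_true_seq = [np.eye(4)[i] for i in (2, 0, 3)]           # actual outputs (one-hot)
y_hat_seq = [np.array([0.1, 0.2, 0.6, 0.1]),             # predicted distributions
             np.array([0.7, 0.1, 0.1, 0.1]),
             np.array([0.2, 0.2, 0.2, 0.4])]

# Loss at each time step, then the total loss summed over all time steps
losses = [cross_entropy_loss(y, y_hat) for y, y_hat in zip(y_true_seq, y_hat_seq)]
total_loss = sum(losses)
print("per-step losses:", losses)
print("total loss:", total_loss)
```

This total loss is what backpropagation through time differentiates with respect to the shared RNN weights across all time steps.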
