Backpropagation through time

Backpropagation through time (BPTT) is the standard algorithm for training recurrent networks (Werbos: http://axon.cs.byu.edu/~martinez/classes/678/Papers/Werbos_BPTT.pdf). As the name suggests, it's based on the backpropagation algorithm we discussed in Chapter 2, Neural Networks.

The main difference between regular backpropagation and backpropagation through time is that the recurrent network is unfolded through time for a certain number of time steps (as illustrated in the preceding diagram). Once the unfolding is complete, we end up with a model that is quite similar to a regular multilayer feedforward network; that is, one hidden layer of that network represents one step through time. The only differences are that each layer has two inputs, the current input for that time step and the previous hidden state, and that the weights are shared across all the unfolded layers, unlike in a feedforward network, where each layer has its own weights. During the backward pass, the gradients therefore flow both from the output at each step and from the following step, and the contributions from all steps accumulate into the same shared weight matrices.
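To make the unfolding concrete, here is a minimal sketch of BPTT for a vanilla RNN with a tanh hidden layer and a linear output, using a summed squared-error loss. The function name `bptt` and the weight names `Wx`, `Wh`, and `Wy` are illustrative choices, not from the book's code; the structure simply mirrors the description above: the forward loop reuses the same weights at every step, and the backward loop accumulates gradients from every unfolded layer into those shared weights.

```python
import numpy as np

def bptt(x, y, Wx, Wh, Wy):
    """Forward and backward pass of a vanilla RNN unfolded over time.

    x: (T, input_dim) inputs, y: (T, output_dim) targets.
    Wx, Wh, Wy are the shared input-to-hidden, hidden-to-hidden,
    and hidden-to-output weight matrices.
    Returns the loss and the gradients with respect to the weights.
    """
    T = x.shape[0]
    hidden = Wh.shape[0]
    h = np.zeros((T + 1, hidden))           # h[0] is the initial state
    outs = np.zeros_like(y)

    # Forward pass: the same weights are reused at every time step.
    for t in range(T):
        h[t + 1] = np.tanh(x[t] @ Wx + h[t] @ Wh)
        outs[t] = h[t + 1] @ Wy
    loss = 0.5 * np.sum((outs - y) ** 2)

    # Backward pass: walk the unfolded network in reverse. Each step's
    # gradient has two sources (its own output error and the next step),
    # and everything accumulates into the shared weight matrices.
    dWx, dWh, dWy = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wy)
    dh_next = np.zeros(hidden)
    for t in reversed(range(T)):
        dout = outs[t] - y[t]
        dWy += np.outer(h[t + 1], dout)
        dh = dout @ Wy.T + dh_next          # output error + error from step t+1
        dpre = dh * (1.0 - h[t + 1] ** 2)   # derivative of tanh
        dWx += np.outer(x[t], dpre)
        dWh += np.outer(h[t], dpre)
        dh_next = dpre @ Wh.T               # flows back to step t-1
    return loss, dWx, dWh, dWy
```

A quick way to check such an implementation is a finite-difference test: perturb one weight by a small epsilon, rerun the forward pass, and compare the change in loss to the analytic gradient.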
