Vanishing and exploding gradients rescued by LSTM

RNNs suffer from vanishing or exploding gradients. As the error is backpropagated through many time steps, the gradient of the loss is repeatedly multiplied by the recurrent weights, so it either shrinks toward zero or grows without bound; in both cases, further training has little useful effect. This limits the usefulness of the RNN, but fortunately the problem was largely solved by Long Short-Term Memory (LSTM) blocks, as shown in this diagram:
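The effect is easy to see in a toy sketch (illustrative, not code from the book): backpropagating through T time steps multiplies the gradient by one chain-rule factor per step, so with a scalar recurrent weight w its magnitude behaves roughly like w to the power T.

```python
def gradient_norm_after(T, w):
    """Magnitude of a gradient after T backward steps through a
    recurrence with scalar weight w (toy model of BPTT)."""
    grad = 1.0
    for _ in range(T):
        grad *= w  # one chain-rule factor per time step
    return abs(grad)

# w < 1: the gradient vanishes; w > 1: it explodes.
print(gradient_norm_after(50, 0.9))  # ~0.005 (vanishing)
print(gradient_norm_after(50, 1.1))  # ~117   (exploding)
```

Fifty steps are already enough to make the signal either negligible or numerically unstable, which is why plain RNNs struggle to learn long-range dependencies.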

Example of an LSTM block

LSTM blocks overcome the vanishing gradient problem using a few techniques. In the diagram, an x inside a circle denotes a gate controlled by an activation function. In the ...
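The gating idea can be sketched in a few lines of NumPy (a minimal illustration under the standard LSTM formulation, not the book's code; the function and variable names are my own). Each gate is a sigmoid whose output multiplies a signal element-wise, which is what the x-in-a-circle in the diagram represents:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM forward step.
    W: (4*H, D+H) stacked gate weights, b: (4*H,) stacked biases."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[:H])        # forget gate: how much old cell state to keep
    i = sigmoid(z[H:2*H])     # input gate: how much new candidate to add
    o = sigmoid(z[2*H:3*H])   # output gate: how much cell state to expose
    g = np.tanh(z[3*H:])      # candidate cell update
    c = f * c_prev + i * g    # additive update keeps gradients flowing
    h = o * np.tanh(c)        # new hidden state
    return h, c
```

The key is the additive cell-state update `c = f * c_prev + i * g`: because the old state passes through a learned gate rather than a repeated squashing nonlinearity, the gradient can flow across many time steps without vanishing.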
