Recurrent and LSTM networks

As discussed in Chapter 1, Getting Started with Deep Learning, RNNs make use of information from the past, which lets them make predictions on data with strong temporal dependencies. A more explicit architecture can be found in the following diagram, where the temporally shared weights w2 (for the hidden layer) must be learned in addition to w1 (for the input layer) and w3 (for the output layer). From a computational point of view, an RNN consumes a sequence of input vectors and generates a sequence of output vectors. Imagine that each rectangle in the following diagram has a vectorial depth and its own hidden internal structure:

An RNN architecture
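To make the recurrence concrete, the following is a minimal sketch of an RNN forward pass in plain Java. The weight names w1, w2, and w3 mirror the diagram: w1 maps the input to the hidden state, w2 is the temporally shared hidden-to-hidden weight, and w3 maps the hidden state to the output. The tanh activation, the zero initial hidden state, and the class and method names are illustrative assumptions, not code from this book:

import java.util.Arrays;

// A minimal RNN forward pass. At each time step t:
//   h_t = tanh(w1 * x_t + w2 * h_{t-1})
//   y_t = w3 * h_t
// The same w2 is reused at every step, which is what carries
// information from the past into the current prediction.
public class SimpleRnn {

    private final double[][] w1; // input  -> hidden
    private final double[][] w2; // hidden -> hidden (shared across time)
    private final double[][] w3; // hidden -> output

    public SimpleRnn(double[][] w1, double[][] w2, double[][] w3) {
        this.w1 = w1;
        this.w2 = w2;
        this.w3 = w3;
    }

    // One recurrent step: combine the current input with the previous
    // hidden state, then squash with tanh.
    public double[] step(double[] x, double[] hPrev) {
        double[] h = add(matVec(w1, x), matVec(w2, hPrev));
        for (int i = 0; i < h.length; i++) {
            h[i] = Math.tanh(h[i]);
        }
        return h;
    }

    // Unroll the network over a whole input sequence, emitting one
    // output vector per time step.
    public double[][] forward(double[][] inputs) {
        double[] h = new double[w2.length]; // initial hidden state: zeros
        double[][] outputs = new double[inputs.length][];
        for (int t = 0; t < inputs.length; t++) {
            h = step(inputs[t], h);
            outputs[t] = matVec(w3, h); // y_t = w3 * h_t
        }
        return outputs;
    }

    private static double[] matVec(double[][] m, double[] v) {
        double[] out = new double[m.length];
        for (int i = 0; i < m.length; i++) {
            for (int j = 0; j < v.length; j++) {
                out[i] += m[i][j] * v[j];
            }
        }
        return out;
    }

    private static double[] add(double[] a, double[] b) {
        double[] out = Arrays.copyOf(a, a.length);
        for (int i = 0; i < b.length; i++) {
            out[i] += b[i];
        }
        return out;
    }
}

Note that forward() reuses the single w2 matrix at every time step; this weight sharing is what distinguishes an RNN from a stack of independent feedforward layers, and it is also the reason gradients must flow backward through time during training.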
