April 2017
The neural network architectures we discussed in the previous chapters take fixed-sized input and produce fixed-sized output. Even the convolutional networks used in image recognition (Chapter 5, Image Recognition) flatten their feature maps into a fixed-sized output vector. This chapter lifts this constraint by introducing Recurrent Neural Networks (RNNs). RNNs help us deal with sequences of variable length by defining a recurrence relation over these sequences, hence the name.
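To make the recurrence relation concrete, here is a minimal sketch of a vanilla RNN step in NumPy. The weight names `W` and `U`, the `tanh` nonlinearity, and the zero initial state are illustrative assumptions, not details taken from the chapter; the point is only that the same fixed set of weights is applied at every step, so sequences of any length can be processed:

```python
import numpy as np

def rnn_forward(xs, W, U, s0):
    """Process a variable-length sequence xs one step at a time.

    At each step the new state depends on the current input and the
    previous state: s_t = tanh(x_t @ W + s_{t-1} @ U).
    """
    states = []
    s = s0
    for x in xs:
        s = np.tanh(x @ W + s @ U)  # recurrence relation
        states.append(s)
    return states

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4)) * 0.1   # input-to-state weights (illustrative)
U = rng.normal(size=(4, 4)) * 0.1   # state-to-state (recurrent) weights
s0 = np.zeros(4)                    # initial state

# The same weights handle sequences of different lengths:
short = rnn_forward([rng.normal(size=3) for _ in range(2)], W, U, s0)
long = rnn_forward([rng.normal(size=3) for _ in range(7)], W, U, s0)
print(len(short), len(long))  # 2 7
```

Note that the loop itself is the recurrence: each output state is fed back in as input to the next step, which is what lets one set of parameters cover arbitrarily long sequences.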
The ability to process arbitrary sequences of input makes RNNs applicable to tasks such as language modeling (see section on Language Modelling) or speech recognition (see section on Speech Recognition). In fact, in ...