April 2017
Intermediate to advanced
318 pages
7h 40m
English
In Chapter 3, Deep Learning with ConvNets, we learned about convolutional neural networks (CNNs) and saw how they exploit the spatial geometry of their input. For example, CNNs apply convolution and pooling operations in one dimension for audio and text data, along the time dimension; in two dimensions for images, along the (height × width) dimensions; and in three dimensions for videos, along the (height × width × time) dimensions.
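To make the one-dimensional case concrete, here is a minimal NumPy sketch of sliding a kernel along the time dimension of a signal. The function name `conv1d` is illustrative only (it is not a Keras API); like CNN layers in practice, it computes a cross-correlation in "valid" mode:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as CNN layers compute it)."""
    k = len(kernel)
    # Slide the kernel one step at a time along the time axis.
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

# A length-3 averaging kernel applied along the time dimension.
signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1 / 3, 1 / 3, 1 / 3])
print(conv1d(signal, kernel))  # → [2. 3. 4.]
```

The same idea extends to two and three dimensions by sliding the kernel over the (height × width) and (height × width × time) axes, respectively.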
In this chapter, we will learn about recurrent neural networks (RNNs), a class of neural networks that exploit the sequential nature of their input. Such inputs could be text, speech, time series, or anything else where the occurrence of an element in the sequence is dependent on the ...