Depending on your background with earlier deep learning architectures, you will see why RNNs are special. The architectures we have covered so far are not flexible in their input or output: they accept a fixed-size vector or image as input and produce another fixed-size one as output. RNN architectures are different because they let you feed a sequence as input and get another sequence as output, or have a sequence on the input side only or the output side only, as shown in Figure 1. This kind of flexibility is very useful for applications such as language modeling and sentiment analysis.
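To make the sequence-in/sequence-out idea concrete, here is a minimal sketch of a vanilla RNN forward pass in NumPy (not taken from the book; the layer sizes and weight names are illustrative assumptions). Because the same weight matrices are reused at every time step, one network can consume and emit sequences of arbitrary length:

```python
import numpy as np

# Illustrative sizes; any values would work for this sketch.
input_size, hidden_size, output_size = 8, 16, 4
rng = np.random.default_rng(0)

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (recurrence)
W_hy = rng.normal(scale=0.1, size=(output_size, hidden_size))  # hidden -> output
b_h = np.zeros(hidden_size)
b_y = np.zeros(output_size)

def rnn_forward(inputs):
    """Run the RNN over a sequence, producing one output per time step.

    inputs: list of vectors, each of shape (input_size,)
    returns: list of output vectors, one per time step
    """
    h = np.zeros(hidden_size)            # initial hidden state
    outputs = []
    for x in inputs:                     # same weights reused at every step
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        outputs.append(W_hy @ h + b_y)
    return outputs

# The same network handles sequences of different lengths:
short_seq = [rng.normal(size=input_size) for _ in range(3)]
long_seq = [rng.normal(size=input_size) for _ in range(10)]
print(len(rnn_forward(short_seq)), len(rnn_forward(long_seq)))  # 3 10
```

Collecting an output at every step gives the sequence-to-sequence case; keeping only the last output, or repeating a single input, gives the sequence-in-only and sequence-out-only variants from Figure 1.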