Chapter 12. Neural networks that write like Shakespeare: recurrent layers for variable-length data
In this chapter
- The challenge of arbitrary length
- The surprising power of averaged word vectors
- The limitations of bag-of-words vectors
- Using identity vectors to sum word embeddings
- Learning the transition matrices
- Learning to create useful sentence vectors
- Forward propagation in Python
- Forward propagation and backpropagation with arbitrary length
- Weight update with arbitrary length
“There’s something magical about Recurrent Neural Networks.”
Andrej Karpathy, “The Unreasonable Effectiveness of Recurrent Neural Networks,” http://mng.bz/VPW
The challenge of arbitrary length
Let’s model arbitrarily long sequences of data with neural networks! ...
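As a concrete preview of the problem, here is a minimal NumPy sketch (the vocabulary, the 4-dimensional embeddings, and the sentence_vector helper are illustrative assumptions, not the book's code). A standard layer expects a fixed-size input, but sentences vary in length; averaging the word embeddings, the first technique this chapter's outline mentions, collapses a sentence of any length into one fixed-length vector:

```python
import numpy as np

np.random.seed(0)

vocab = ["the", "cat", "sat", "on", "a", "very", "soft", "mat"]
embedding_dim = 4  # illustrative size, not from the book

# One embedding (row) per vocabulary word.
embeddings = np.random.randn(len(vocab), embedding_dim)
word2index = {word: i for i, word in enumerate(vocab)}

def sentence_vector(sentence):
    """Average the word embeddings: a fixed-length
    summary vector for a sentence of any length."""
    indices = [word2index[w] for w in sentence.split()]
    return embeddings[indices].mean(axis=0)

# Two sentences of different lengths...
short_vec = sentence_vector("the cat sat")
long_vec = sentence_vector("the cat sat on a very soft mat")

# ...both map to the same fixed shape, so a normal
# fixed-input layer can consume either one.
print(short_vec.shape, long_vec.shape)  # (4,) (4,)
```

This averaging is exactly the bag-of-words summary whose weaknesses the chapter goes on to examine: it discards word order, so "cat sat on mat" and "mat sat on cat" produce the same vector, which is what motivates the transition matrices and recurrent layers covered later in the outline.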