Grokking Deep Learning

by Andrew W. Trask
February 2019
Intermediate to advanced
336 pages
9h 29m
English
Manning Publications
Content preview from Grokking Deep Learning

Chapter 12. Neural networks that write like Shakespeare: recurrent layers for variable-length data

In this chapter

  • The challenge of arbitrary length
  • The surprising power of averaged word vectors
  • The limitations of bag-of-words vectors
  • Using identity vectors to sum word embeddings
  • Learning the transition matrices
  • Learning to create useful sentence vectors
  • Forward propagation in Python
  • Forward propagation and backpropagation with arbitrary length
  • Weight update with arbitrary length

“There’s something magical about Recurrent Neural Networks.”

Andrej Karpathy, “The Unreasonable Effectiveness of Recurrent Neural Networks,” http://mng.bz/VPW

The challenge of arbitrary length

Let’s model arbitrarily long sequences of data with neural networks! ...
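As a rough sketch of where this chapter is headed, the following plain-NumPy snippet shows how a simple recurrent layer handles inputs of any length: the same transition matrix is applied once per word, so the weights do not grow with the sequence. The dimensions and variable names here are illustrative, not the book's own code.

```python
import numpy as np

np.random.seed(0)

# Illustrative dimensions (hypothetical, not from the book)
vocab_size, embed_dim, hidden_dim = 10, 4, 8

embed = np.random.randn(vocab_size, embed_dim) * 0.1   # word embeddings
W_hh  = np.random.randn(hidden_dim, hidden_dim) * 0.1  # hidden-to-hidden (transition matrix)
W_xh  = np.random.randn(embed_dim, hidden_dim) * 0.1   # input-to-hidden

def forward(token_ids):
    """Run one forward pass over a sequence of any length,
    returning the final hidden state as a sentence vector."""
    h = np.zeros(hidden_dim)                 # start from an empty state
    for t in token_ids:                      # one step per word
        h = np.tanh(h @ W_hh + embed[t] @ W_xh)
    return h

# Sequences of different lengths reuse the same weights
print(forward([1, 4, 2]).shape)        # (8,)
print(forward([3, 3, 7, 0, 5]).shape)  # (8,)
```

Because only the loop count changes with the input, one set of weights serves a three-word sentence and a thirty-word sentence alike; this is the core idea the chapter develops.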


Publisher Resources

ISBN: 9781617293702