Language Models in Plain English

Book description

Recent advances in machine learning have lowered the barriers to creating and using ML models. But understanding what these models are doing has only become more difficult. We discuss technological advances with little understanding of how they work, and we struggle to develop an intuition for what these models can and cannot do.

In this report, authors Austin Eovito and Marina Danilevsky from IBM focus on how to think about neural network-based language model architectures. They guide you through various models (neural networks, RNN/LSTM, encoder-decoder, attention/transformers) to convey a sense of their abilities without getting entangled in the complex details. The report uses simple examples of how humans approach language in specific applications to explore and compare how different neural network-based language models work.

This report will empower you to better understand how machines understand language.

  • Dive deep into the basic task of a language model to predict the next word, and use it as a lens to understand neural network language models
  • Explore encoder-decoder architecture through abstractive text summarization
  • Use machine translation to understand the attention mechanism and transformer architecture
  • Examine the current state of machine language understanding to discern what these language models are good at, as well as their risks and weaknesses
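The report itself is prose-focused, but the next-word-prediction task named in the first bullet can be illustrated with a toy count-based bigram model. This is a minimal sketch with a made-up corpus, not code from the report, and it stands in for the far more capable neural models the authors discuss:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; any text would do.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" more often than any other word
```

Neural language models tackle the same task, but instead of raw counts they learn dense representations that generalize to word sequences never seen in training.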

Product information

  • Title: Language Models in Plain English
  • Author(s): Austin Eovito, Marina Danilevsky
  • Release date: October 2021
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781098109066