Richard Socher on the future of deep learning

The O’Reilly Bots Podcast: Making neural networks more accessible.

By Jon Bruner
December 1, 2016
A two-layer feedforward artificial neural network. (source: Akritasa on Wikimedia Commons)

In this episode of the O’Reilly Bots Podcast, Pete Skomoroch and I talk with Richard Socher, chief scientist at Salesforce. He was previously the founder and CEO of MetaMind, a deep learning startup that Salesforce acquired in 2016. Socher also teaches the “Deep Learning for Natural Language Processing” course at Stanford University. Our conversation focuses on where deep learning and NLP are headed, and on interesting current and near-future applications.

Discussion points:

  • Accessibility, in a couple of senses: making deep learning easier for computer scientists to implement, and making the power of deep learning available through intuitive applications
  • AI-enabled question answering systems and dynamic co-attention networks
  • The issue of interpretability, and progress in creating more interpretable models
  • Why Socher believes a human-in-the-loop approach is the best solution for the “fake news” controversy, currently the hottest topic in NLP
  • Why quasi-recurrent neural networks (QRNNs), the subject of a recent paper co-authored by Socher, are an advance over long short-term memory networks (LSTMs)
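To make the last point concrete: a QRNN replaces an LSTM's fully recurrent gate computations with convolutions over the input, so the gates for every timestep can be computed in parallel; only a cheap element-wise pooling recurrence remains sequential. Below is a minimal NumPy sketch of this idea (the f-pooling variant). The function name, weight shapes, and window width are illustrative choices, not part of the paper's API.

```python
import numpy as np

def qrnn_forward(X, Wz, Wf, width=2):
    """Minimal QRNN layer sketch (f-pooling variant).

    Gate pre-activations come from a causal convolution over the input,
    so they are computed for all timesteps at once; only the element-wise
    pooling loop below runs sequentially.
    X: (T, d_in); Wz, Wf: (width * d_in, d_hidden).
    """
    T, d_in = X.shape
    # Left-pad so the window at step t only sees x_{t-width+1 .. t}.
    Xp = np.vstack([np.zeros((width - 1, d_in)), X])
    # Flatten each input window into one vector: shape (T, width * d_in).
    windows = np.stack([Xp[t:t + width].reshape(-1) for t in range(T)])
    Z = np.tanh(windows @ Wz)                   # candidate values, all steps in parallel
    F = 1.0 / (1.0 + np.exp(-(windows @ Wf)))   # forget gates, all steps in parallel
    # Sequential but element-wise recurrence: h_t = f_t * h_{t-1} + (1 - f_t) * z_t
    h = np.zeros(Wz.shape[1])
    H = np.empty((T, Wz.shape[1]))
    for t in range(T):
        h = F[t] * h + (1.0 - F[t]) * Z[t]
        H[t] = h
    return H
```

Because the loop involves no matrix multiplies, it is far cheaper than an LSTM's per-step recurrence, which is the source of the speedups the paper reports.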

Post topics: AI & ML

Get the O’Reilly Artificial Intelligence Newsletter