Richard Socher on the future of deep learning
The O’Reilly Bots Podcast: Making neural networks more accessible.
In this episode of the O’Reilly Bots Podcast, Pete Skomoroch and I talk with Richard Socher, chief scientist at Salesforce. He was previously the founder and CEO of MetaMind, a deep learning startup that Salesforce acquired in 2016. Socher also teaches the “Deep Learning for Natural Language Processing” course at Stanford University. Our conversation focuses on where deep learning and NLP are headed, as well as interesting current and near-future applications. Topics include:
- Accessibility, in a couple of senses: making deep learning easier for computer scientists to implement, and making the power of deep learning available through intuitive applications
- AI-enabled question-answering systems and dynamic coattention networks
- The issue of interpretability, and progress in creating more interpretable models
- Why Socher believes that human-in-the-loop systems are the best answer to the “fake news” controversy, currently one of the hottest topics in NLP
- Why quasi-recurrent neural networks (QRNNs), the subject of a recent paper co-authored by Socher, are an advance over long short-term memory (LSTM) networks
- The Stanford Question Answering Dataset (SQuAD)
- TensorFlow and Chainer, two frameworks for working with neural networks
- Summaries of recent papers by the Salesforce research team