Machine intelligence has been the subject of both exuberance and skepticism for decades. The promise of thinking, reasoning machines appeals to the human imagination, and more recently, the corporate budget. Beginning in the 1950s, Marvin Minsky, John McCarthy, and other key pioneers in the field set the stage for today’s breakthroughs in theory, as well as practice. Peeking behind the equations and code that animate these peculiar machines, we find ourselves facing questions about the very nature of thought and knowledge. The mathematical and technical virtuosity of achievements in this field evokes the qualities that make us human: everything from intuition and attention to planning and memory. As progress in the field accelerates, such questions only gain urgency.
Heading into 2016, the world of machine intelligence has been bustling with seemingly back-to-back developments. Google released its machine learning library, TensorFlow, to the public. Shortly thereafter, Microsoft followed suit with CNTK, its deep learning framework. Silicon Valley luminaries recently pledged up to one billion dollars toward the OpenAI institute, and Google developed software that bested Europe’s Go champion. These headlines and achievements, however, tell only part of the story. For the rest, we should turn to the practitioners themselves. In the interviews that follow, we set out to give readers a view into the ideas and challenges that motivate this progress.
We kick off the series with Anima Anandkumar’s discussion of tensors and their application to machine learning problems in high-dimensional space and non-convex optimization. Afterward, Yoshua Bengio delves into the intersection of natural language processing and deep learning, as well as unsupervised learning and reasoning. Brendan Frey talks about the application of deep learning to genomic medicine, using models that faithfully encode biological theory. Risto Miikkulainen sees biology in another light, relating examples of evolutionary algorithms and their startling creativity. Shifting from the biological to the mechanical, Ben Recht explores notions of robustness through a novel synthesis of machine intelligence and control theory. In a similar vein, Daniela Rus outlines a brief history of robotics as a prelude to her work on self-driving cars and other autonomous agents. Gurjeet Singh subsequently brings the topology of machine learning to life. Ilya Sutskever recounts the mysteries of unsupervised learning and the promise of attention models. Oriol Vinyals then turns to deep learning vis-à-vis sequence-to-sequence models and imagines computers that generate their own algorithms. To conclude, Reza Zadeh reflects on the history and evolution of machine learning as a field and the role Apache Spark will play in its future.
It is important to note that this report can only cover so much ground. With just ten interviews, it is far from exhaustive: indeed, for every such interview, dozens of other theoreticians and practitioners successfully advance the field through their efforts and dedication. This report, its brevity notwithstanding, offers a glimpse into this exciting field through the eyes of its leading minds.