This report explores some of the growing connections between the fields of artificial intelligence (AI) and what we understand about the brain from disciplines as varied as cognitive science, neuroscience, and developmental psychology.
Recently, AI has had a number of notable successes in perception, particularly speech and image recognition, and in action, as in AlphaGo and robotic control. These successes have been driven primarily by developments in two strands of AI: reinforcement learning and deep learning. Now, as AI scientists plot their next moves, some are turning to the human brain for inspiration about what to build next.
In this report, I share the thoughts and insights of four of today’s prominent voices in the AI field.
Geoff Hinton, for instance, a professor at the University of Toronto, has recently explored neural network models inspired by the varying dynamics of the synapses found in our brains. Dharmendra Modha at IBM has been on a decades-long quest to create an entirely new type of computer chip, one that breaks with the traditional von Neumann architecture and instead takes inspiration from the brain.
Meanwhile, within neuroscience there has been a “shedding of assumptions” about what the brain can and cannot do, which, according to MIT “neurotechnologist” Adam Marblestone, has caused scientists to become more receptive to ideas stemming from AI.
Breakthroughs in AI have also given researchers such as Leila Wehbe at UC Berkeley the computational tools they need to conduct new experiments on how the brain represents language. And Tom Griffiths, a psychology and computer science professor, has begun building complex cognitive models to explain how humans develop surprisingly accurate intuitions, models that might in turn inspire AI technologists.