March 2020
In the 1990s, neural network research was in a rut again. After the discovery of backpropagation, connectionists had reaped a few high-profile successes, including a system to read handwritten numbers for the U.S. Postal Service.[31] And yet, the AI community at large still scoffed at neural networks.
The general consensus was that yeah, regular neural networks could solve simple problems—but that was about as far as they could go. To tackle more interesting problems, you needed deep neural networks, and those were a pain in the neck. They were slow to train, prone to overfitting, and riddled with frustrating problems such as the vanishing gradient. Connectionists had cracked backpropagation, but they still couldn’t win ...