Chapter 45. Don’t Generalize Until Your Model Does
Michael Hind
The amazing advances in machine learning come from its ability to find patterns in (often large) training datasets. This ability can yield predictions that match, and often exceed, the accuracy of humans on the same task. However, these systems can sometimes be fooled by inputs that would never fool a human. One example is an ML system that correctly identifies a street sign, such as a stop sign, but incorrectly predicts that the same stop sign, defaced with a few black and white stickers, is a speed limit sign.
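To make this failure mode concrete, here is a minimal sketch of an adversarial perturbation against a toy linear classifier. Everything in it is invented for illustration (the published stop-sign attack used physical stickers on a real sign and a deep network); the point is only that a change too small for a human to notice can flip a model's prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "image": a flat vector of pixel intensities in [0, 1].
x = rng.uniform(0.4, 0.6, size=1000)

# Toy linear classifier: score > 0 means "stop sign", else "speed limit".
w = rng.normal(size=1000)
b = 0.5 - float(w @ x)   # chosen so x scores +0.5, i.e., "stop sign"

def predict(image):
    return "stop sign" if float(w @ image) + b > 0 else "speed limit"

print(predict(x))                  # -> stop sign

# FGSM-style step: nudge every pixel by epsilon against the gradient of
# the score (for a linear model, that gradient is just w).
epsilon = 0.01                     # far too small for a human to notice
x_adv = x - epsilon * np.sign(w)

print(predict(x_adv))              # -> speed limit
print(np.abs(x_adv - x).max())     # -> 0.01: no pixel moved by more than 1%
```

The same mechanics, scaled up to deep networks and physical stickers, are what turn a stop sign into a "speed limit" sign in the model's eyes.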
The reason for this surprising deficiency is that machine learning systems make their predictions differently than humans do. They look for patterns that distinguish the outcome groups, such as which loan applicants were approved and which were rejected. Humans, by contrast, apply a combination of pattern recognition and reasoning. The absence of this reasoning step in machine learning systems can lead to surprising results, as with the stop sign example.
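The loan example can be sketched in a few lines of code. The dataset, the "typed form" feature, and the training loop below are all hypothetical; the sketch only shows how a model that optimizes for patterns separating the outcome groups can latch onto a spurious signal that a human, reasoning about the task, would dismiss.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical applicants: approval truly depends on creditworthiness,
# which income only partially predicts.
income = rng.normal(50, 10, n)                  # in $1,000s
merit = income + rng.normal(0, 5, n)
approved = (merit > 50).astype(float)

# Spurious feature: in this training sample, every approved application
# happened to be typed and every rejected one handwritten.
typed = approved.copy()
X = np.column_stack([income / 100, typed])

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - approved
    w -= 0.1 * (X.T @ grad) / n
    b -= 0.1 * grad.mean()

# A high-income applicant who handed in a handwritten form. A human
# would approve; the model, keying on the spurious pattern, does not.
applicant = np.array([0.80, 0.0])               # income 80, handwritten
p = 1 / (1 + np.exp(-(applicant @ w + b)))
print(f"approval probability: {p:.3f}")         # well below 0.5
```

Because the "typed" pattern perfectly separates the two outcome groups in training, the model leans on it rather than on anything a human would recognize as a reason to approve or reject.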
The public is then left with the following impressions of machine learning (AI):
AI can sometimes “think” better than humans.
AI can easily be fooled, and thus it is not trustworthy.
The result is a superhuman technology that cannot be trusted. Insert your favorite movie reference here ...