Chapter 11. Future Trends: Toward Robust AI
This book has been about techniques for fooling AI that would not fool a human being. We should not forget, however, that humans are also susceptible to optical and auditory illusions. Interestingly, research has shown that some adversarial examples can fool time-limited human observers.1 Conversely, some optical illusions can also trick neural networks.2
These cases suggest that there is some overlap between biological and artificial perception, but adversarial inputs exploit the fact that deep learning models process data very differently from their biological counterparts. While deep learning may create models that match or exceed human capability in processing sensory input, these models are likely to be a long way from how humans actually learn and perceive visual and auditory information.
There are fascinating areas of investigation opening up in the field of deep learning that are likely to bring about greater convergence between artificial and biological perception. Such research may result in AI that has greater resilience to adversarial examples. Here is a selection.
Increasing Robustness Through Outline Recognition
Neuroscientists and psychologists have known for many years that our understanding of the world around us is built through movement and physical exploration. A baby views objects from different angles, either by moving itself or because the objects themselves move. We know that visual perception is highly dependent on movement ...
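The idea behind outline recognition can be made concrete with a simple preprocessing step. The following is a minimal sketch, assuming a grayscale image with values in [0, 1] and a hypothetical model (`classifier`) trained on edge maps rather than raw pixels; it uses a Sobel filter to discard texture and intensity, keeping only outlines:

```python
import numpy as np
from scipy import ndimage

def to_outline(image: np.ndarray) -> np.ndarray:
    """Reduce a 2-D grayscale image to an edge map using Sobel gradients.

    The returned map discards texture and absolute intensity,
    retaining only the outlines of objects in the scene.
    """
    gx = ndimage.sobel(image, axis=0)  # gradient along rows
    gy = ndimage.sobel(image, axis=1)  # gradient along columns
    magnitude = np.hypot(gx, gy)       # combined edge strength
    # Rescale so the edge map is again in [0, 1].
    if magnitude.max() > 0:
        magnitude = magnitude / magnitude.max()
    return magnitude

# Example usage: preprocess a batch before it reaches the classifier.
# `classifier.predict` is a placeholder for any model trained on edge maps.
# batch_outlines = np.stack([to_outline(img) for img in batch])
# predictions = classifier.predict(batch_outlines)
```

A model trained on such edge maps sees a texture-free signal, so small pixel-level perturbations that leave an object's outline intact have less opportunity to change the prediction; that is one plausible route to the outline-based robustness this section describes.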