Open Access
Posted Content
Comparing deep neural networks against humans: object recognition when the signal gets weaker
Robert Geirhos, David H. J. Janssen, Heiko H. Schütt, Jonas Rauber, Matthias Bethge, Felix A. Wichmann
TL;DR: The human visual system is found to be more robust to image manipulations like contrast reduction, additive noise or novel eidolon-distortions than deep neural networks, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition.
Abstract:
Human visual object recognition is typically rapid and seemingly effortless, as well as largely independent of viewpoint and object orientation. Until very recently, animate visual systems were the only ones capable of this remarkable computational feat. This has changed with the rise of a class of computer vision algorithms called deep neural networks (DNNs) that achieve human-level classification performance on object recognition tasks. Furthermore, a growing number of studies report similarities in the way DNNs and the human visual system process objects, suggesting that current DNNs may be good models of human visual object recognition. Yet there clearly exist important architectural and processing differences between state-of-the-art DNNs and the primate visual system. The potential behavioural consequences of these differences are not well understood. We aim to address this issue by comparing human and DNN generalisation abilities towards image degradations. We find the human visual system to be more robust to image manipulations like contrast reduction, additive noise or novel eidolon-distortions. In addition, we find progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition. We envision that our findings as well as our carefully measured and freely available behavioural datasets provide a new useful benchmark for the computer vision community to improve the robustness of DNNs and a motivation for neuroscientists to search for mechanisms in the brain that could facilitate this robustness.
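The image degradations the abstract names (contrast reduction, additive noise) can be sketched in a few lines of numpy; the function names and parameter values here are illustrative choices, not the exact procedure or parameters used in the paper.

```python
import numpy as np

def reduce_contrast(img, contrast):
    """Blend a [0, 1] grayscale image towards mid-grey.

    contrast=1.0 leaves the image unchanged; contrast=0.0
    yields uniform grey, i.e. no signal left at all.
    """
    return contrast * img + (1.0 - contrast) * 0.5

def add_noise(img, noise_std, rng):
    """Add zero-mean Gaussian pixel noise and clip back to [0, 1]."""
    noisy = img + rng.normal(0.0, noise_std, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((224, 224))   # stand-in for a grayscale stimulus
weak = add_noise(reduce_contrast(img, 0.3), 0.1, rng)
```

Sweeping `contrast` down and `noise_std` up is one simple way to make "the signal get weaker" in the sense of the title and compare classifier accuracy at each level.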
Citations
Posted Content
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
TL;DR: It is shown that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies.
Proceedings Article
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
TL;DR: In this paper, the same standard architecture that learns a texture-based representation on ImageNet is shown to learn a shape-based representation instead when trained on "Stylized-ImageNet", a stylized version of ImageNet.
Posted Content
Generalisation in humans and deep neural networks
Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann
TL;DR: The robustness of humans and current convolutional deep neural networks on object recognition under twelve different types of image degradations is compared and it is shown that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on.
Journal Article
Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior.
Kohitij Kar, Jonas Kubilius, Kailyn Schmidt, Elias B. Issa, James J. DiCarlo
TL;DR: Using model- and primate behavior-driven image selection with large-scale electrophysiology in monkeys performing core recognition tasks, Kar et al. provide evidence that automatically engaged recurrent circuits are critical for rapid object identification.
Journal Article
Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-of-the-Art Deep Artificial Neural Networks
TL;DR: The results show that current DCNNIC models cannot account for the image-level behavioral patterns of primates and that new ANN models are needed to more precisely capture the neural mechanisms underlying primate object vision.
References
Journal Article
R: A language and environment for statistical computing.
TL;DR: Copyright (©) 1999–2012 R Foundation for Statistical Computing; permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and permission notice are preserved on all copies.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on the ImageNet classification task.
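The layer types named in that summary (convolution, max-pooling, a 1000-way softmax) can be illustrated with a minimal numpy sketch; this is a toy illustration of the operations, not the AlexNet implementation itself.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D convolution (strictly, cross-correlation,
    as in most deep-learning frameworks)."""
    h, w = k.shape
    out = np.empty((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max-pooling with a square window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    """Numerically stable softmax over class scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.random.default_rng(0).random((8, 8))
feat = max_pool(conv2d_valid(x, np.ones((3, 3)) / 9.0))  # 6x6 -> 3x3
probs = softmax(np.zeros(1000))                          # 1000-way softmax
```

Stacking many such conv/pool stages followed by fully-connected layers and a softmax over the 1000 ImageNet classes is the overall pattern the summary describes.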
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
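Why very small convolution filters allow the depth to be pushed up: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 layer while using fewer weights. A back-of-the-envelope check (the channel count c = 64 is an illustrative choice, not taken from the paper):

```python
def receptive_field(num_layers, kernel=3):
    """Receptive field of num_layers stacked stride-1 convolutions."""
    return num_layers * (kernel - 1) + 1

def conv_params(kernel, channels):
    """Weights in one conv layer with `channels` in- and out-channels
    (biases ignored for simplicity)."""
    return kernel * kernel * channels * channels

c = 64
stacked = 2 * conv_params(3, c)   # two 3x3 layers: 2 * 9 * c^2 = 18c^2
single = conv_params(5, c)        # one 5x5 layer:      25 * c^2
```

Since 18c^2 < 25c^2, the stacked small-filter design gets the same spatial coverage with fewer parameters plus an extra non-linearity, which is part of what makes 16-19 weight layers practical.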
Journal Article
Deep learning
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. Because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data, it will have many more successes in the near future.