
Wieland Brendel

Researcher at University of Tübingen

Publications - 69
Citations - 8,883

Wieland Brendel is an academic researcher at the University of Tübingen. His research focuses on Robustness (computer science) and Deep learning. He has an h-index of 27 and has co-authored 69 publications receiving 5,662 citations. Previous affiliations of Wieland Brendel include the Champalimaud Foundation and the University of Erlangen-Nuremberg.

Papers
Posted Content

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness

TL;DR: It is shown that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies.
Journal ArticleDOI

Shortcut learning in deep neural networks

TL;DR: A set of recommendations for model interpretation and benchmarking is developed, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.
Proceedings Article

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness

TL;DR: In this paper, the same standard architecture that learns a texture-based representation on ImageNet is shown to learn a shape-based representation instead when trained on "Stylized-ImageNet", a stylized version of ImageNet.
Proceedings Article

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

TL;DR: The Boundary Attack is introduced, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial; it is competitive with the best gradient-based attacks on standard computer vision tasks such as ImageNet classification.
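The core loop described above — walk along the decision boundary, shrinking the perturbation while every accepted step keeps the input misclassified — can be illustrated with a minimal NumPy sketch. This is not the paper's reference implementation: the function name `boundary_attack`, the `is_adversarial` callback, and the fixed step sizes `spherical_eps` / `source_eps` are illustrative assumptions (the published attack adapts its step sizes dynamically).

```python
import numpy as np

def boundary_attack(is_adversarial, x_orig, x_adv_init, steps=1000,
                    spherical_eps=0.01, source_eps=0.01, seed=None):
    """Simplified Boundary Attack sketch (fixed step sizes, illustrative only).

    is_adversarial: callback returning True if the black-box model still
                    misclassifies the candidate (only decisions are used,
                    no gradients -- hence "decision-based").
    x_adv_init:     any starting point that is already adversarial,
                    typically far from x_orig.
    """
    rng = np.random.default_rng(seed)
    x_adv = x_adv_init.astype(float)
    for _ in range(steps):
        delta = x_adv - x_orig
        # 1) Orthogonal ("spherical") step: small random move, then rescale
        #    so the candidate stays at the same distance from the original.
        noise = rng.normal(size=x_orig.shape)
        noise *= spherical_eps * np.linalg.norm(delta) / np.linalg.norm(noise)
        candidate = x_adv + noise
        cand_delta = candidate - x_orig
        candidate = x_orig + cand_delta * (
            np.linalg.norm(delta) / np.linalg.norm(cand_delta))
        # 2) Source step: contract towards the original image, shrinking
        #    the perturbation by a factor (1 - source_eps).
        candidate = candidate + source_eps * (x_orig - candidate)
        # 3) Accept the candidate only if it is still misclassified.
        if is_adversarial(candidate):
            x_adv = candidate
    return x_adv
```

On a toy 2-D "model" whose decision boundary is `x[0] = 0.5`, the loop contracts an adversarial point towards the original while never crossing back to the correct class, which is the essence of the method.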
Posted Content

On Evaluating Adversarial Robustness

TL;DR: The methodological foundations are discussed, commonly accepted best practices are reviewed, and new methods for evaluating defenses to adversarial examples are suggested.