Jonas Rauber
Researcher at University of Tübingen
Publications - 22
Citations - 3746
Jonas Rauber is an academic researcher from the University of Tübingen. He has contributed to research on topics including robustness (computer science) and artificial neural networks. The author has an h-index of 15 and has co-authored 22 publications receiving 2913 citations.
Papers
Proceedings Article
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
TL;DR: The Boundary Attack is introduced, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial; it is competitive with the best gradient-based attacks on standard computer vision tasks like ImageNet.
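The core loop of such a decision-based attack can be illustrated with a toy sketch. This is not the paper's implementation: the linear classifier, step sizes, and iteration count below are illustrative assumptions; only the model's decision is ever queried, never its gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "black box": we may only query its decision, not gradients.
w = np.array([1.0, -1.0])
def decision(x):
    return int(x @ w > 0)

x_orig = np.array([2.0, 0.0])    # original input
x_adv = np.array([-2.0, 0.0])    # starting point, already misclassified
label = decision(x_orig)

for _ in range(2000):
    # Random perturbation followed by a small step toward the original.
    candidate = x_adv + 0.1 * rng.normal(size=2)
    candidate += 0.05 * (x_orig - candidate)
    # Accept only if the point stays adversarial and gets closer.
    if decision(candidate) != label and \
       np.linalg.norm(candidate - x_orig) < np.linalg.norm(x_adv - x_orig):
        x_adv = candidate

dist = np.linalg.norm(x_adv - x_orig)  # shrinks toward the decision boundary
```

The walk contracts toward the original input until the decision boundary blocks it, then drifts along the boundary toward the closest adversarial point.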
Posted Content
On Evaluating Adversarial Robustness
Nicholas Carlini,Anish Athalye,Nicolas Papernot,Wieland Brendel,Jonas Rauber,Dimitris Tsipras,Ian Goodfellow,Aleksander Madry,Alexey Kurakin +8 more
TL;DR: The methodological foundations are discussed, commonly accepted best practices are reviewed, and new methods for evaluating defenses to adversarial examples are suggested.
Posted Content
Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models
TL;DR: Foolbox is a new Python package that provides reference implementations of most published adversarial attack methods alongside some new ones, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation.
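The notion of a minimum adversarial perturbation can be illustrated with a small sketch. This is plain NumPy, not Foolbox's actual API; the linear model, the fixed search direction, and the step bounds are illustrative assumptions.

```python
import numpy as np

# Toy linear model standing in for a real classifier.
w = np.array([0.5, -1.5])
b = 0.2
def predict(x):
    return int(x @ w + b > 0)

x = np.array([1.0, 0.2])
label = predict(x)
# Perturbation direction that pushes the logit toward the boundary.
direction = -np.sign(w) if label == 1 else np.sign(w)

# Binary search for the smallest step size that flips the decision.
lo, hi = 0.0, 10.0
for _ in range(50):
    mid = (lo + hi) / 2
    if predict(x + mid * direction) != label:
        hi = mid    # still adversarial: try a smaller perturbation
    else:
        lo = mid
# hi approximates the minimum perturbation size along this direction
```

A real attack library tunes many more hyperparameters internally, but the acceptance test is the same: the smallest perturbation that still changes the model's decision.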
Posted Content
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library
Nicolas Papernot,Fartash Faghri,Nicholas Carlini,Ian Goodfellow,Reuben Feinman,Alexey Kurakin,Cihang Xie,Yash Sharma,Tom B. Brown,Aurko Roy,Alexander Matyasko,Vahid Behzadan,Karen Hambardzumyan,Zhishuai Zhang,Yi-Lin Juang,Zhi Li,Ryan Sheatsley,Abhibhav Garg,Jonathan Uesato,Willi Gierke,Yinpeng Dong,David Berthelot,Paul Hendricks,Jonas Rauber,Rujun Long,Patrick McDaniel +25 more
TL;DR: The core functionalities of the CleverHans library are presented, namely the attacks based on adversarial examples and defenses to improve the robustness of machine learning models to these attacks.
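One of the canonical attacks such libraries implement, the fast gradient sign method, can be sketched in a few lines. This is a hand-rolled NumPy version on a toy logistic model, not CleverHans code; the weights and step size are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression model with known weights.
w = np.array([2.0, -1.0])
b = 0.1
def predict(x):
    return int(sigmoid(x @ w + b) > 0.5)

x = np.array([0.3, 0.1])
y = predict(x)

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
p = sigmoid(x @ w + b)
grad = (p - y) * w

eps = 0.5
x_adv = x + eps * np.sign(grad)  # FGSM: one step along the gradient's sign
```

A single signed-gradient step of size `eps` is enough to flip this toy model's prediction, which is exactly the failure mode adversarial-example defenses try to close.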
Posted Content
Generalisation in humans and deep neural networks
Robert Geirhos,Carlos R. Medina Temme,Jonas Rauber,Heiko H. Schütt,Matthias Bethge,Felix A. Wichmann +5 more
TL;DR: The robustness of humans and current convolutional deep neural networks on object recognition is compared under twelve different types of image degradations; DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on.
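The shape of such a robustness evaluation, accuracy measured as a function of degradation strength, can be sketched on toy data. A nearest-mean classifier and additive Gaussian noise stand in for the real networks and image distortions; all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated classes and a nearest-mean "classifier".
means = np.array([[0.0, 0.0], [4.0, 4.0]])
def predict(x):
    return int(np.argmin(((means - x) ** 2).sum(axis=1)))

X = np.concatenate([rng.normal(means[0], 0.5, size=(100, 2)),
                    rng.normal(means[1], 0.5, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Accuracy under increasing additive noise (a stand-in for image degradations).
accuracies = []
for sigma in [0.0, 1.0, 3.0]:
    X_noisy = X + rng.normal(0.0, sigma, size=X.shape)
    preds = np.array([predict(x) for x in X_noisy])
    accuracies.append(float((preds == y).mean()))
```

Plotting accuracy against degradation strength for humans and for networks trained with or without the distortion is the comparison the paper carries out at scale.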