Open Access Proceedings Article

Towards Deep Learning Models Resistant to Adversarial Attacks.

TLDR
This article studied the adversarial robustness of neural networks through the lens of robust optimization and identified methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
Abstract
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples—inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. Code and pre-trained models are available at this https URL and this https URL.
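The robust-optimization view described in the abstract is typically instantiated by solving the inner maximization with a first-order adversary (iterated projected gradient steps on the loss) and the outer minimization by training on the resulting perturbed inputs. The following is a minimal PyTorch sketch of that idea; the classifier, the L-infinity radius eps, the step size alpha, and the iteration count are illustrative assumptions, not the exact models or hyperparameters from the paper.

```python
# Minimal sketch of adversarial training with a PGD-style first-order adversary.
# All hyperparameters below are illustrative, not the paper's exact settings.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Inner maximization: search for a high-loss point inside the L-inf ball of radius eps."""
    # Random start inside the ball, then clip to the valid image range [0, 1].
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # signed gradient ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # keep a valid image
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update the model on the adversarially perturbed batch."""
    model.eval()                       # keep batch-norm statistics fixed while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the attack sees only gradients of the loss with respect to the input, which is what makes it a first-order adversary in the sense used above; training against it is one way to approximate the min-max objective.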



Citations
Proceedings Article

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

TL;DR: A thorough overview of the evolution of this research area over the last ten years and beyond is provided, starting from pioneering earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks.
Proceedings Article

Exploring Self-Attention for Image Recognition

TL;DR: This work considers two forms of self-attention, pairwise and patchwise: pairwise self-attention generalizes standard dot-product attention and is fundamentally a set operator, while patchwise self-attention is strictly more powerful than convolution.
Proceedings Article

Generating Natural Language Adversarial Examples

TL;DR: This paper used a population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively.
Posted Content

Robust Physical-World Attacks on Deep Learning Models

TL;DR: This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
Proceedings Article

Fast is better than free: Revisiting adversarial training

TL;DR: This work makes the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach previously believed to be ineffective, rendering the method no more costly than standard training in practice (see the sketch below).
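The "weaker and cheaper adversary" in that entry is a single fast-gradient-sign (FGSM) step taken from a random starting point, rather than a multi-step attack. The following is a rough PyTorch sketch of that idea under assumed settings; the function name, eps, and alpha are illustrative and not taken from the cited paper.

```python
# Rough sketch of single-step (FGSM) adversarial training with a random start.
# Hyperparameters (eps, alpha) are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_training_step(model, optimizer, x, y, eps=8/255, alpha=10/255):
    # Random initialization inside the eps-ball makes the single step far more effective.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y)
    grad, = torch.autograd.grad(loss, delta)
    # One signed gradient step, then clip the perturbation back into the eps-ball.
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    # Train on the perturbed batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

Compared with the multi-step sketch after the abstract, this variant needs only one extra forward/backward pass per batch, which is why such training can approach the cost of standard training.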