Open Access Proceedings Article

Towards Deep Learning Models Resistant to Adversarial Attacks.

TLDR
This article studied the adversarial robustness of neural networks through the lens of robust optimization and identified methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
Abstract
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples—inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. Code and pre-trained models are available at this https URL and this https URL.
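The defense the abstract describes is adversarial training formulated as a saddle-point (min-max) problem, with projected gradient descent (PGD) as the inner first-order adversary. The snippet below is a minimal sketch of that idea, assuming a PyTorch image classifier with inputs in [0, 1] and an l-infinity threat model; the epsilon, step size, and iteration count are illustrative placeholders, not the paper's exact settings.

```python
# Minimal sketch of PGD adversarial training (the min-max formulation from the
# abstract), assuming a PyTorch model and an l_inf threat model.
# Hyperparameters below are illustrative, not the paper's exact values.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """Projected gradient descent: the first-order adversary used as the inner maximizer."""
    x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)  # random start
    x_adv = x_adv.clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend the loss, then project back into the epsilon-ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update the model on adversarial examples from the inner maximizer."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's framing, the random start and multiple gradient steps are what distinguish PGD from single-step attacks such as FGSM, and training against this stronger inner adversary is what yields the reported robustness.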

Citations
Posted Content

Towards Robust Neural Networks via Orthogonal Diversity.

TL;DR: Zhang et al. propose a defense that augments the model to learn features adaptive to diverse inputs, including adversarial examples, by introducing multiple paths into the network and imposing an orthogonality constraint on these paths; a rough sketch of such a constraint follows below.
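As a rough illustration of how an orthogonality constraint between parallel paths might look, the hypothetical snippet below penalizes the cosine similarity between features produced by different paths; the function name and loss form are assumptions for illustration, not the cited paper's implementation.

```python
# Hypothetical sketch of an orthogonality penalty between parallel feature paths;
# a generic illustration of the idea in the TL;DR, not the cited paper's exact loss.
import torch
import torch.nn.functional as F

def orthogonality_penalty(path_features):
    """path_features: list of (batch, dim) tensors, one per augmentation path.
    Penalizes non-zero inner products between features from different paths."""
    normed = [F.normalize(f, dim=1) for f in path_features]
    penalty = 0.0
    for i in range(len(normed)):
        for j in range(i + 1, len(normed)):
            # mean squared cosine similarity between paths i and j
            penalty = penalty + (normed[i] * normed[j]).sum(dim=1).pow(2).mean()
    return penalty
```

In training, such a term would typically be added to the classification loss with a weighting coefficient, pushing the paths toward mutually orthogonal feature directions.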
Posted Content

Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder

TL;DR: The authors propose two adversarial training approaches, CARL and RAR, to strengthen the model's ability to defend against gradient-based adversarial attacks during training, and show that both approaches effectively improve the model's robustness.
Posted Content

A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems.

Moein Sabounchi, +1 more · 13 Sep 2021
TL;DR: In this paper, the authors propose an adversarial attack model that can practically compromise the dynamic controls of an energy system, and optimize the deployment of the proposed attacks using deep reinforcement learning (RL).
Posted Content

REGroup: Rank-aggregating Ensemble of Generative Classifiers for Robust Predictions

TL;DR: In this article, generative classifiers are built on intermediate-layer representations, and an ensemble of these classifiers rank-aggregates its predictions via a Borda count-based consensus; the aggregated predictions show unexpected robustness to adversarial attacks.
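For readers unfamiliar with Borda counting, the snippet below sketches the consensus step in the abstract sense: each classifier ranks the classes, ranks are converted to points, and points are summed across classifiers. The function name and scoring convention are assumptions for illustration, not the REGroup codebase.

```python
# Illustrative Borda-count aggregation over per-classifier class rankings,
# assuming each classifier outputs a score vector over C classes.
import numpy as np

def borda_consensus(score_matrix):
    """score_matrix: (n_classifiers, n_classes) array of per-classifier scores.
    Each classifier awards points equal to how highly it ranks each class;
    the class with the largest point total wins."""
    # rank classes within each classifier (0 = lowest score, n_classes-1 = highest)
    ranks = np.argsort(np.argsort(score_matrix, axis=1), axis=1)
    borda_points = ranks.sum(axis=0)
    return int(np.argmax(borda_points))

# Example: three intermediate-layer classifiers scoring four classes
scores = np.array([[0.1, 0.7, 0.1, 0.1],
                   [0.2, 0.5, 0.2, 0.1],
                   [0.4, 0.3, 0.2, 0.1]])
print(borda_consensus(scores))  # -> 1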
Posted Content

Beyond Categorical Label Representations for Image Classification

TL;DR: In this paper, the authors find that the way data labels are represented can have a profound effect on the quality of trained models: for example, training an image classifier to regress audio labels rather than traditional categorical probabilities produces a more reliable classifier.