Journal ArticleDOI

DAMAD: Database, Attack, and Model Agnostic Adversarial Perturbation Detector

TLDR
DAMAD is a generalized perturbation detection algorithm that is agnostic to model architecture, training dataset, and the loss function used during training; it is based on the fusion of autoencoder embeddings and statistical texture features extracted from convolutional neural networks.
Abstract
Adversarial perturbations have demonstrated the vulnerability of deep learning algorithms to adversarial attacks. Existing adversary detection algorithms attempt to detect these singularities; however, they are generally loss-function, database, or model dependent. To mitigate this limitation, we propose DAMAD, a generalized perturbation detection algorithm that is agnostic to model architecture, training dataset, and the loss function used during training. The proposed detector is based on the fusion of autoencoder embeddings and statistical texture features extracted from convolutional neural networks. The performance of DAMAD is evaluated in the challenging scenarios of cross-database, cross-attack, and cross-architecture training and testing, along with the traditional evaluation of testing on the same database with a known attack and model. Experiments on six databases (ImageNet, CIFAR-10, Multi-PIE, MEDS, the Point and Shoot Challenge (PaSC), and MNIST), covering nearly a quarter of a million adversarial and original images, and comparison with state-of-the-art perturbation detection algorithms demonstrate the effectiveness of the proposed algorithm.
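The abstract's high-level recipe (fuse an autoencoder embedding with statistical texture features, then score for perturbation) can be sketched in a toy numpy form. This is a minimal illustration, not the authors' implementation: the "autoencoder" here is a PCA projection standing in for a learned encoder, the texture features are simple first-order histogram statistics rather than CNN-derived descriptors, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def texture_features(img):
    """First-order statistical texture features (mean, variance,
    smoothness, entropy) -- a simplified stand-in for the texture
    descriptors the paper extracts from CNN feature maps."""
    p, _ = np.histogram(img, bins=32, range=(0.0, 1.0))
    p = p / p.sum()
    var = img.var()
    smoothness = 1.0 - 1.0 / (1.0 + var)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([img.mean(), var, smoothness, entropy])

def fit_linear_autoencoder(X, k=8):
    """PCA as a minimal linear autoencoder: encoder = top-k principal
    directions, decoder = their transpose."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T  # mean and (d x k) encoder

def damad_style_features(img, mu, W):
    """Fuse the autoencoder embedding (plus reconstruction error) with
    texture features into one vector for a downstream classifier."""
    x = img.ravel()
    z = (x - mu) @ W                  # embedding
    recon = mu + z @ W.T              # decode
    err = np.linalg.norm(x - recon)   # perturbations tend to inflate this
    return np.concatenate([z, [err], texture_features(img)])

# Toy demo: fit on "clean" images, featurize a clean and a noisy image.
clean = rng.random((64, 16 * 16))
mu, W = fit_linear_autoencoder(clean, k=8)
f_clean = damad_style_features(clean[0].reshape(16, 16), mu, W)
perturbed = np.clip(clean[0] + 0.3 * rng.standard_normal(256), 0, 1)
f_pert = damad_style_features(perturbed.reshape(16, 16), mu, W)
print(f_clean.shape)  # embedding (8) + error (1) + texture (4) = (13,)
```

In the paper's setting, such fused feature vectors would feed a binary classifier trained to separate original from adversarially perturbed images.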


Citations
Journal ArticleDOI

Exploring Robustness Connection between Artificial and Natural Adversarial Examples

TL;DR: This paper studies the possible robustness connection between natural and artificial adversarial examples, which can pave the way for the development of unified resiliency, since defense against one attack alone is not sufficient for real-world use cases.

Fast Adversarial Training with Noise Augmentation: A Unified Perspective on RandStart and GradAlign

TL;DR: Noise augmentation (NoiseAug), a non-trivial byproduct of simplifying GradAlign, achieves state-of-the-art results in FGSM adversarial training; the authors verify that the gain is caused not by a data-augmentation effect (injecting noise into the image) but by improved local linearity.
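The NoiseAug idea, injecting noise on the input before the single FGSM step, can be illustrated with a toy differentiable model. This is a hedged sketch under simplifying assumptions: the "network" is a linear logistic model so the input gradient is analytic, and all names here are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def input_grad(x, w, y):
    """Gradient of the logistic loss wrt the input for a linear model
    f(x) = w @ x (toy stand-in for a network's input gradient)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (p - y) * w

def fgsm_noiseaug(x, w, y, eps, noise_mag):
    """One-step FGSM with noise augmentation: perturb the input with
    uniform noise first (the NoiseAug step), then take the single
    sign-gradient ascent step, projecting back to the eps-ball
    around the original input."""
    x0 = x + rng.uniform(-noise_mag, noise_mag, size=x.shape)
    x_adv = x0 + eps * np.sign(input_grad(x0, w, y))
    return np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball

x = rng.random(10)
w = rng.standard_normal(10)
x_adv = fgsm_noiseaug(x, w, y=1.0, eps=0.03, noise_mag=0.03)
print(np.abs(x_adv - x).max())  # bounded by eps = 0.03
```

Randomizing the gradient evaluation point in this way is what the paper argues improves local linearity and prevents catastrophic overfitting in fast adversarial training.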
Journal ArticleDOI

Wavelet Regularization Benefits Adversarial Training

TL;DR: A wavelet regularization method based on the Haar wavelet decomposition, named Wavelet Average Pooling, is proposed and integrated into the wide residual neural network to form a new WideWaveletResNet model.
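The core operation can be sketched directly: a single-level 2D Haar decomposition of a feature map, keeping only the approximation (low-low) band, which up to a constant factor coincides with 2x2 average pooling. A minimal numpy sketch of that intuition (not the authors' layer):

```python
import numpy as np

def wavelet_average_pool(x):
    """Single-level 2D Haar decomposition of a feature map, keeping
    only the approximation (LL) band. The orthonormal Haar LL
    coefficient of each 2x2 block {a,b,c,d} is (a+b+c+d)/2, i.e.
    2x average pooling -- the intuition behind 'Wavelet Average
    Pooling' (illustrative sketch only)."""
    h, w = x.shape
    assert h % 2 == 0 and w % 2 == 0, "even spatial dims required"
    blocks = x.reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3)) / 2.0

x = np.arange(16.0).reshape(4, 4)
ll = wavelet_average_pool(x)
print(ll)  # [[ 5.  9.] [21. 25.]]
```

Discarding the high-frequency detail bands acts as a low-pass regularizer inside the network, which is the mechanism the paper connects to adversarial robustness.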
Posted Content

AN-GCN: An Anonymous Graph Convolutional Network Defense Against Edge-Perturbing Attack

TL;DR: In this article, an anonymous graph convolutional network (AN-GCN) is proposed to counter edge-perturbing attacks in node classification tasks; it classifies nodes without taking their positions as input, which makes it impossible for attackers to perturb edges.
Journal ArticleDOI

Towards an Accurate and Secure Detector against Adversarial Perturbations

TL;DR: Zhang et al. propose an accurate and secure adversarial example detector relying on a spatial-frequency discriminative decomposition with secret keys, which is more suitable for capturing adversarial patterns than the common trigonometric or wavelet bases.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; the resulting model won 1st place on the ILSVRC 2015 classification task.
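The residual learning idea referenced here is that layers learn a residual F(x) and an identity shortcut adds the input back, so the block computes relu(F(x) + x). A minimal fully-connected numpy sketch of that structure (the paper itself uses convolutional blocks; names here are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Minimal fully-connected residual block: the weight layers learn
    a residual F(x), and the identity shortcut adds x back, so the
    block computes relu(F(x) + x). A sketch of the idea, not the
    paper's convolutional architecture."""
    f = W2 @ relu(W1 @ x)   # residual branch F(x)
    return relu(f + x)      # identity shortcut + output activation

rng = np.random.default_rng(2)
x = relu(rng.standard_normal(8))   # non-negative toy activations
# With a zero residual branch the block is exactly the identity,
# which is why very deep stacks of such blocks remain trainable.
W0 = np.zeros((8, 8))
print(np.allclose(residual_block(x, W0, W0), x))  # True
```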
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for document recognition; it can synthesize a complex decision surface capable of classifying high-dimensional patterns such as handwritten characters.
Proceedings ArticleDOI

Densely Connected Convolutional Networks

TL;DR: DenseNet as mentioned in this paper proposes to connect each layer to every other layer in a feed-forward fashion, which can alleviate the vanishing gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
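The dense connectivity pattern described here, each layer receiving the concatenation of all preceding feature maps, can be sketched in a few lines. This is a toy 1-D illustration under stated assumptions: each "layer" is a linear map on a flat feature vector rather than a convolution, and the fixed output size plays the role of DenseNet's growth rate.

```python
import numpy as np

def dense_block(x, layers):
    """DenseNet-style connectivity sketch: each layer receives the
    concatenation of the input and all preceding layers' outputs,
    and appends its own output to the running feature list."""
    features = [x]
    for layer in layers:
        features.append(layer(np.concatenate(features)))
    return np.concatenate(features)

rng = np.random.default_rng(3)
# Two toy layers, each emitting 2 features (a "growth rate" of 2).
W1 = rng.standard_normal((2, 4))   # sees the 4-dim input
W2 = rng.standard_normal((2, 6))   # sees input + layer-1 output
layers = [lambda f: W1 @ f, lambda f: W2 @ f]
out = dense_block(np.ones(4), layers)
print(out.shape)  # 4 + 2 + 2 = (8,)
```

Because every layer sees all earlier features directly, gradients reach early layers without vanishing and features are reused instead of relearned, which is how the paper reduces parameter count.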