Open Access Book Chapter (DOI)

Enforcing Linearity in DNN Succours Robustness and Adversarial Image Generation

TL;DR: This paper proposes a novel adversarial image generation method that leverages Inverse Representation Learning and the linearity of an adversarially trained deep neural network classifier, and achieves state-of-the-art adversarial accuracy on the MNIST, CIFAR10, and SVHN datasets.
Abstract
Recent studies on the adversarial vulnerability of neural networks have shown that models trained with the objective of minimizing an upper bound on the worst-case loss over all possible adversarial perturbations improve robustness against adversarial attacks. Besides exploiting the adversarial training framework, we show that enforcing a Deep Neural Network (DNN) to be linear in a transformed input and feature space improves robustness significantly. We also demonstrate that augmenting the objective function with a Local Lipschitz regularizer boosts the robustness of the model further. Our method outperforms most sophisticated adversarial training methods and achieves state-of-the-art adversarial accuracy on the MNIST, CIFAR10, and SVHN datasets. We also propose a novel adversarial image generation method by leveraging Inverse Representation Learning and the linearity of an adversarially trained deep neural network classifier.
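As a rough illustration of the kind of objective described above, the sketch below combines standard PGD adversarial training (minimizing a surrogate for the worst-case loss over an L-inf ball) with a local Lipschitz penalty on the network outputs. This is a minimal PyTorch sketch under assumed hyperparameters (eps, alpha, steps, lambda_lip), not the authors' exact formulation; in particular, their linearity constraint on the transformed input and feature space is not reproduced here.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent: approximate worst-case perturbation in an L-inf ball.
    (Image-range clipping is omitted for brevity.)"""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

def robust_loss(model, x, y, lambda_lip=1.0):
    """Adversarial cross-entropy plus a local Lipschitz penalty on the logits."""
    delta = pgd_perturb(model, x, y)
    logits_adv = model(x + delta)
    logits_clean = model(x)
    # Penalize how much the outputs change relative to the size of the perturbation.
    lip = (logits_adv - logits_clean).flatten(1).norm(dim=1) / \
          delta.flatten(1).norm(dim=1).clamp_min(1e-8)
    return F.cross_entropy(logits_adv, y) + lambda_lip * lip.mean()
```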


Citations
Journal Article (DOI)

Image Generation: A Review

TL;DR: This paper presents the first comprehensive overview of existing image generation methods, categorized by the nature of the adopted algorithms, the type of data used, and the main objective; each image generation category is discussed by presenting its proposed approaches.
Journal Article (DOI)

Generating Adversarial Surfaces via Band‐Limited Perturbations

TL;DR: It is shown that effective adversarial attacks can be concocted for surfaces embedded in 3D, under weak smoothness assumptions on the perceptibility of the attack.
Posted Content

Smoothed Inference for Adversarially-Trained Models

TL;DR: This work examines randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks, and finds that it lends itself well to trading off model inference complexity against performance.
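For context, randomized smoothing at inference time amounts to voting over Gaussian-perturbed copies of the input. The sketch below is a generic illustration with assumed noise level and sample count, not the specific procedure of the cited work.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=100, num_classes=10):
    """Classify x by majority vote of the base model over Gaussian-perturbed copies."""
    counts = torch.zeros(x.size(0), num_classes, device=x.device)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        preds = model(noisy).argmax(dim=1)
        counts += F.one_hot(preds, num_classes).float()
    return counts.argmax(dim=1)
```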
Posted Content

ODE guided Neural Data Augmentation Techniques for Time Series Data and its Benefits on Robustness

TL;DR: Two local gradient-based and one spectral density-based time series data augmentation techniques are introduced, and it is shown that a model trained with data obtained using these techniques achieves state-of-the-art classification accuracy on various time series benchmarks.
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
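The core idea of residual learning is that each block outputs F(x) + x, so the layers only need to learn the residual mapping F(x). A minimal PyTorch block, with illustrative channel counts and kernel sizes, might look like:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut: only the residual F(x) is learned
```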
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
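The Adam update maintains exponentially decaying estimates of the gradient's first and second moments and applies a bias correction to each. A minimal NumPy sketch of a single update step, using the commonly cited default hyperparameters:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are running moment estimates, t is the 1-based step count."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for initialization at zero
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```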
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: As discussed by the authors, state-of-the-art ImageNet classification performance is achieved by a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
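The described architecture (five convolutional layers, some followed by max-pooling, then three fully-connected layers and a 1000-way softmax) can be sketched roughly as below; the layer widths here are illustrative, not the exact published configuration.

```python
import torch.nn as nn

# Assumes 3x224x224 inputs; softmax is applied in the loss, so the last layer outputs logits.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)
```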
Proceedings Article (DOI)

Going deeper with convolutions

TL;DR: Inception, as mentioned in this paper, is a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
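An Inception-style module runs 1x1, 3x3, 5x5, and pooling branches in parallel and concatenates their outputs along the channel dimension. The sketch below uses placeholder branch widths, not the published GoogLeNet configuration.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, b1, b3_red, b3, b5_red, b5, pool_proj):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, b1, 1)
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, b3_red, 1), nn.ReLU(),
                                     nn.Conv2d(b3_red, b3, 3, padding=1))
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, b5_red, 1), nn.ReLU(),
                                     nn.Conv2d(b5_red, b5, 5, padding=2))
        self.branch_pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                         nn.Conv2d(in_ch, pool_proj, 1))

    def forward(self, x):
        # Concatenate the parallel branches along the channel axis.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)
```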
Book

Deep Learning

TL;DR: Deep learning, as mentioned in this paper, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.