Proceedings ArticleDOI

SmartBox: Benchmarking Adversarial Detection and Mitigation Algorithms for Face Recognition

TL;DR: SmartBox is a Python-based toolbox that provides open-source implementations of adversarial detection and mitigation algorithms for face recognition, along with a platform for evaluating new attacks, detection models, and mitigation approaches on a common face recognition benchmark.
Abstract
Deep learning models are widely used for tasks such as face recognition and speech recognition. However, researchers have shown that these models are vulnerable to adversarial attacks: perturbations computed to generate images that degrade the performance of deep learning models. In this research, we have developed a toolbox, termed SmartBox, for benchmarking the performance of adversarial attack detection and mitigation algorithms against face recognition. SmartBox is a Python-based toolbox which provides an open-source implementation of adversarial detection and mitigation algorithms. The Extended Yale Face Database B has been used for generating adversarial examples with attack algorithms such as DeepFool, gradient methods, Elastic-Net, and the $L_{2}$ attack. SmartBox provides a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark. To assist the research community, the code of SmartBox is made available at http://iab-rubric.org/resources/SmartBox.html.
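
The gradient methods benchmarked by SmartBox share a common recipe: take the gradient of the loss with respect to the input image and step in the direction that increases it. Below is a minimal, hypothetical PyTorch sketch of the fast gradient sign method (FGSM), one such attack; the `model`, `image`, `label`, and `epsilon` names are placeholders for illustration, not SmartBox's API.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step FGSM: perturb the input along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss,
    # then clamp back to the valid [0, 1] pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0, 1).detach()
```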

Citations
Proceedings ArticleDOI

Securing CNN Model and Biometric Template using Blockchain

TL;DR: This research models a trained biometric recognition system in an architecture that leverages blockchain technology to provide fault-tolerant access in a distributed environment, and shows that the proposed approach secures both the deep learning model and the biometric template.
Posted Content

Fast Geometrically-Perturbed Adversarial Faces

TL;DR: In this article, the authors explore the extent to which face recognition systems are vulnerable to geometrically-perturbed adversarial faces and propose a fast landmark manipulation method for generating adversarial faces, which is approximately 200 times faster than previous geometric attacks.
Posted Content

Detecting Face2Face Facial Reenactment in Videos

TL;DR: A learning-based algorithm for detecting reenactment-based alterations is proposed; it uses a multi-stream network that learns regional artifacts and performs robustly at various compression levels, together with a loss function for balanced learning of the streams.
Proceedings ArticleDOI

Adversarial Examples in Deep Learning for Multivariate Time Series Regression

TL;DR: In this article, the authors leverage existing adversarial attack generation techniques from the image classification domain and craft adversarial multivariate time series examples for three state-of-the-art deep learning regression models, specifically Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU).
Proceedings ArticleDOI

DNDNet: Reconfiguring CNN for Adversarial Robustness

TL;DR: A novel "defense layer" is presented which blocks the generation of adversarial noise and prevents adversarial attacks in black-box and gray-box settings.
References
Proceedings ArticleDOI

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

TL;DR: In this paper, a Parametric Rectified Linear Unit (PReLU) was proposed to improve model fitting with nearly zero extra computational cost and little overfitting risk, achieving a 4.94% top-5 test error on the ImageNet 2012 classification dataset.
Proceedings Article

Intriguing properties of neural networks

TL;DR: It is found that there is no distinction between individual high-level units and random linear combinations of high-level units according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Proceedings Article

Explaining and Harnessing Adversarial Examples

TL;DR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
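
The linearity argument can be checked with a few lines of arithmetic: for a linear score $w \cdot x$, the worst perturbation within an $\ell_\infty$ budget $\epsilon$ is $\epsilon \cdot \mathrm{sign}(w)$, which shifts the score by $\epsilon \|w\|_1$, a quantity that grows with input dimensionality even when each pixel changes imperceptibly. A small NumPy sketch with illustrative (not paper-sourced) values:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 150 * 150             # number of pixels in a face crop (illustrative)
w = rng.normal(size=d)    # weights of a linear score w @ x
x = rng.uniform(size=d)   # a flattened input image
eps = 0.01                # tiny per-pixel change

x_adv = x + eps * np.sign(w)         # worst-case l_inf perturbation
shift = w @ x_adv - w @ x            # equals eps * ||w||_1
print(shift, eps * np.abs(w).sum())  # large score shift despite tiny eps
```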
Proceedings ArticleDOI

Towards Evaluating the Robustness of Neural Networks

TL;DR: In this paper, the authors demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability.
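
The $L_{2}$ attack from this paper (Carlini and Wagner) casts adversarial example generation as optimization: minimize $\|\delta\|_2^2 + c \cdot f(x + \delta)$, where $f$ is a margin term that reaches zero once the target class wins. Below is a simplified, hypothetical PyTorch sketch; the full attack also uses a tanh change of variables and a binary search over $c$, both omitted here.

```python
import torch

def cw_l2_attack(model, x, target, c=1.0, steps=200, lr=0.01, kappa=0.0):
    """Simplified targeted Carlini-Wagner L2 attack."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((x + delta).clamp(0, 1))
        target_logit = logits.gather(1, target.unsqueeze(1)).squeeze(1)
        # Best logit among all non-target classes.
        other_logit = logits.scatter(
            1, target.unsqueeze(1), float("-inf")).max(1).values
        # Margin loss f: zero once the target class leads by at least kappa.
        f = torch.clamp(other_logit - target_logit + kappa, min=0)
        loss = (delta ** 2).flatten(1).sum(1) + c * f
        opt.zero_grad()
        loss.sum().backward()
        opt.step()
    return (x + delta).clamp(0, 1).detach()
```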