Proceedings ArticleDOI

SmartBox: Benchmarking Adversarial Detection and Mitigation Algorithms for Face Recognition

TLDR
SmartBox is a Python-based toolbox that provides an open-source implementation of adversarial detection and mitigation algorithms for face recognition, along with a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark.
Abstract
Deep learning models are widely used for various purposes such as face recognition and speech recognition. However, researchers have shown that these models are vulnerable to adversarial attacks. These attacks compute perturbations to generate images that decrease the performance of deep learning models. In this research, we have developed a toolbox, termed SmartBox, for benchmarking the performance of adversarial attack detection and mitigation algorithms against face recognition. SmartBox is a Python-based toolbox which provides an open-source implementation of adversarial detection and mitigation algorithms. In this research, the Extended Yale Face Database B has been used for generating adversarial examples using various attack algorithms such as DeepFool, gradient methods, Elastic-Net, and the $L_{2}$ attack. SmartBox provides a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark. To assist the research community, the code of SmartBox is made available at http://iab-rubric.org/resources/SmartBox.html.
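To make the attack setting concrete, the following is a minimal PyTorch sketch of the fast gradient sign method, one of the gradient attacks benchmarked above. It is an illustration only, not SmartBox's actual API; the `model`, `image`, and `label` objects are assumed to be a trained classifier, a normalized input batch, and its ground-truth labels.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """One signed-gradient step that increases the classifier's loss.

    Illustrative sketch of the attack family SmartBox benchmarks;
    the toolbox's own interfaces may differ.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb along the gradient sign, then clip to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Detection and mitigation modules then operate on such perturbed images, either flagging them as adversarial or restoring the original prediction.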


Citations
Journal ArticleDOI

A survey on kinship verification

TL;DR: The authors survey kinship verification methods and datasets, and propose the new multi-modal Nemo-Kinship dataset as a benchmark addressing large inter-subject age variations, consisting of 4216 videos of 248 persons from 85 families.
Posted Content

Securing CNN Model and Biometric Template using Blockchain

TL;DR: In this paper, the authors model a trained biometric recognition system in an architecture that leverages blockchain technology to provide fault-tolerant access in a distributed environment, where tampering with any one component alerts the whole system and makes any possible alteration easy to identify.
Posted Content

Data Fine-tuning

TL;DR: In this paper, a small amount of noise is added to the input with the objective of minimizing the classification loss, without affecting the (visual) appearance of the input image and without changing the parameters of the model.
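Because only the input is optimized, the technique can be sketched as gradient descent over an additive noise tensor with the model frozen. A minimal sketch under those assumptions (PyTorch; `model`, `images`, and `labels` are assumed given; this is not the authors' code):

```python
import torch
import torch.nn.functional as F

def data_finetune(model, images, labels, steps=100, lr=0.01, budget=0.02):
    """Learn a small additive noise that reduces classification loss
    while the model parameters stay fixed (illustrative sketch only)."""
    noise = torch.zeros_like(images, requires_grad=True)
    optimizer = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images + noise), labels)
        loss.backward()
        optimizer.step()
        # Bound the noise so the (visual) appearance is unaffected.
        with torch.no_grad():
            noise.clamp_(-budget, budget)
    return (images + noise).detach()
```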
Journal ArticleDOI

ApaNet: adversarial perturbations alleviation network for face verification

TL;DR: ApaNet uses stacked residual blocks to alleviate latent adversarial perturbations hidden in the input facial image. During the supervised learning of ApaNet, only the Labeled Faces in the Wild (LFW) dataset is used as the training set, and legitimate examples together with the corresponding adversarial examples produced by the projected gradient descent algorithm serve as supervision and inputs, respectively.
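The training pairs described there translate directly into a supervised objective: PGD adversarial images are the inputs and the corresponding legitimate images are the targets. A hedged sketch of one such step (PyTorch; `apanet` is the stacked-residual purifier and the adversarial batch is assumed to come from a standard PGD implementation; the reconstruction loss shown is an assumption, not necessarily the authors' choice):

```python
import torch.nn.functional as F

def purifier_train_step(apanet, optimizer, clean_images, adversarial_images):
    """One supervised step: map an adversarial image back toward its
    legitimate counterpart (illustrative sketch only)."""
    optimizer.zero_grad()
    purified = apanet(adversarial_images)
    loss = F.mse_loss(purified, clean_images)  # clean images as supervision
    loss.backward()
    optimizer.step()
    return loss.item()
```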
References
Proceedings ArticleDOI

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

TL;DR: In this paper, a Parametric Rectified Linear Unit (PReLU) is proposed that generalizes the traditional rectified unit and improves model fitting with nearly zero extra computational cost and little overfitting risk; together with a robust initialization method that accounts for the rectifier nonlinearities, it achieves a 4.94% top-5 test error on the ImageNet 2012 classification dataset.
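For reference, the activation introduced in this paper replaces ReLU's hard zero on the negative side with a learned per-channel slope $a_i$ (recovering ReLU when $a_i = 0$):

```latex
% PReLU activation: the negative-side slope a_i is learned jointly
% with the other network parameters.
f(y_i) =
\begin{cases}
  y_i,        & y_i > 0 \\
  a_i \, y_i, & y_i \le 0
\end{cases}
```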
Proceedings Article

Intriguing properties of neural networks

TL;DR: It is found that there is no distinction between individual high-level units and random linear combinations of high-level units under various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Proceedings Article

Explaining and Harnessing Adversarial Examples

TL;DR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This argument is supported by new quantitative results, and the paper gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
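The linearity argument yields the paper's fast gradient sign method, one of the gradient attacks SmartBox evaluates: a single step of size $\epsilon$ along the sign of the loss gradient, for input $x$, label $y$, loss $J$, and parameters $\theta$:

```latex
% FGSM perturbation from the paper.
x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\big( \nabla_x J(\theta, x, y) \big)
```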
Proceedings ArticleDOI

Towards Evaluating the Robustness of Neural Networks

TL;DR: In this paper, the authors demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability.
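Among these is the $L_{2}$ attack used by SmartBox, which in the paper's formulation minimizes the perturbation norm plus a margin term on the logits $Z(\cdot)$ for target class $t$ (with confidence parameter $\kappa$ and trade-off constant $c$):

```latex
% Carlini-Wagner L2 attack objective.
\min_{\delta} \; \|\delta\|_2^2 + c \cdot f(x + \delta),
\qquad
f(x') = \max\Big( \max_{i \neq t} Z(x')_i - Z(x')_t,\; -\kappa \Big)
```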