Proceedings ArticleDOI

Are Image-Agnostic Universal Adversarial Perturbations for Face Recognition Difficult to Detect?

TLDR
A simple but efficient approach that uses pixel values and Principal Component Analysis as features, coupled with a Support Vector Machine classifier, to detect image-agnostic universal adversarial perturbations.
Abstract
The high performance of deep neural network (DNN) based systems has attracted many applications in object recognition and face recognition. However, researchers have also demonstrated that these systems are highly sensitive to adversarial perturbations and hence tend to be unreliable and lack robustness. While most research on adversarial perturbations focuses on image-specific attacks, image-agnostic universal perturbations have recently been proposed; these learn the adversarial pattern over the training distribution and have a broader impact on real-world security applications. Such adversarial attacks can have a compounding effect on face recognition, where these visually imperceptible attacks can cause mismatches. To defend against adversarial attacks, sophisticated detection approaches are prevalent, but most existing approaches do not focus on image-agnostic attacks. In this paper, we present a simple but efficient approach based on pixel values and Principal Component Analysis as features, coupled with a Support Vector Machine as the classifier, to detect image-agnostic universal perturbations. We also present evaluation metrics, namely the adversarial perturbation class classification error rate, the original class classification error rate, and the average classification error rate, to estimate the performance of adversarial perturbation detection algorithms. The experimental results on multiple databases and different DNN architectures show that it is not necessary to build complex detection algorithms; rather, simpler approaches can yield higher detection rates and lower error rates for image-agnostic adversarial perturbations.
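
The abstract does not specify the exact feature construction, PCA dimensionality, SVM kernel, or metric definitions, so the following is only a minimal sketch of the described pipeline under assumed choices (50 principal components, an RBF-kernel SVM, and error rates computed as simple misclassification fractions); it should not be read as the authors' reported configuration.

```python
# Minimal sketch (not the authors' implementation): PCA on raw pixel values as features,
# SVM as the binary detector separating original from adversarially perturbed face images.
# Component count, kernel choice, and metric definitions below are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def train_detector(clean_imgs, adv_imgs, n_components=50):
    """clean_imgs, adv_imgs: arrays of shape (N, H, W) or (N, H, W, C)."""
    X = np.concatenate([clean_imgs, adv_imgs]).reshape(len(clean_imgs) + len(adv_imgs), -1)
    y = np.concatenate([np.zeros(len(clean_imgs)), np.ones(len(adv_imgs))])  # 0 = original, 1 = adversarial
    pca = PCA(n_components=n_components).fit(X)
    svm = SVC(kernel="rbf").fit(pca.transform(X), y)
    return pca, svm

def error_rates(pca, svm, clean_test, adv_test):
    """Assumed definitions of the three evaluation metrics named in the abstract."""
    pred_clean = svm.predict(pca.transform(clean_test.reshape(len(clean_test), -1)))
    pred_adv = svm.predict(pca.transform(adv_test.reshape(len(adv_test), -1)))
    ocer = np.mean(pred_clean != 0)   # original class classification error rate
    apcer = np.mean(pred_adv != 1)    # adversarial perturbation class classification error rate
    acer = (apcer + ocer) / 2.0       # average classification error rate
    return apcer, ocer, acer
```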


Citations
Journal ArticleDOI

A comprehensive overview of biometric fusion

TL;DR: This paper provides a comprehensive overview of the role of information fusion in biometrics and reviews techniques for incorporating ancillary information in the biometric recognition pipeline.
Journal ArticleDOI

Adversarial Examples—Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices

TL;DR: Several COVID-19 diagnostic methods that rely on DL algorithms are tested against relevant adversarial examples (AEs), showing that DL models that do not incorporate defenses against adversarial perturbations remain vulnerable to adversarial attacks.
Journal ArticleDOI

Detecting and Mitigating Adversarial Perturbations for Robust Face Recognition

TL;DR: This paper attempts to unravel three aspects related to the robustness of DNNs for face recognition: vulnerabilities to attacks; detection of singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and corrections to the processing pipeline to alleviate the problem.
Journal ArticleDOI

On the Robustness of Face Recognition Algorithms Against Attacks and Bias

TL;DR: This paper summarizes the different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working.
Proceedings ArticleDOI

SmartBox: Benchmarking Adversarial Detection and Mitigation Algorithms for Face Recognition

TL;DR: SmartBox is a Python-based toolbox that provides an open-source implementation of adversarial detection and mitigation algorithms for face recognition, along with a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; this framework won 1st place in the ILSVRC 2015 classification task.
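
As a rough illustration of the residual learning idea (not the paper's exact configuration; the channel sizes and layer choices below are assumptions), a basic residual block adds an identity shortcut around a small stack of convolutions:

```python
# Sketch of a basic residual block with an identity shortcut (assumed sizes, PyTorch-style).
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the block learns the residual F(x); the output is F(x) + x
```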
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture, proposed in this paper, that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal ArticleDOI

Eigenfaces for recognition

TL;DR: A near-real-time computer system that can locate and track a subject's head and then recognize the person by comparing characteristics of the face to those of known individuals; the approach is easy to implement using a neural network architecture.