Author

Anannya Uberoi

Bio: Anannya Uberoi is an academic researcher from Indraprastha Institute of Information Technology. The author has contributed to research on the topic of spoofing attacks, has an h-index of 1, and has co-authored 1 publication receiving 15 citations.

Papers
Proceedings ArticleDOI
01 Jun 2019
TL;DR: This paper designs a deep learning-based panoptic algorithm for detecting both digital and physical presentation attacks using the Cross Asymmetric Loss Function (CALF) and shows superior performance in three scenarios: ubiquitous environment, individual databases, and cross-attack/cross-database.
Abstract: With the advancements in technology and the growing popularity of facial photo editing in the social media landscape, tools such as face swapping and face morphing have become increasingly accessible to the general public. This opens up the possibility of different kinds of face presentation attacks, which impostors can exploit to gain unauthorized access to a biometric system. Moreover, the wide availability of 3D printers has caused a shift from print attacks to 3D mask attacks. With increasing types of attacks, it is necessary to develop a generic and ubiquitous algorithm that takes a panoptic view of these attacks and can detect a spoofed image irrespective of the method used. The key contribution of this paper is the design of a deep learning-based panoptic algorithm for the detection of both digital and physical presentation attacks using the Cross Asymmetric Loss Function (CALF). The performance is evaluated for digital and physical attacks in three scenarios: ubiquitous environment, individual databases, and cross-attack/cross-database. Experimental results showcase the superior performance of the proposed presentation attack detection algorithm.
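The abstract names the Cross Asymmetric Loss Function (CALF) but does not define it here, so the sketch below only illustrates the general idea of an asymmetrically weighted binary cross-entropy for spoof detection; the class weighting, the attack_weight parameter, and the model interface are assumptions rather than the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def asymmetric_spoof_loss(logits, labels, attack_weight=2.0, bonafide_weight=1.0):
    """Illustrative asymmetric binary cross-entropy for presentation attack
    detection: errors on attack samples (label 1) are penalized more heavily
    than errors on bonafide samples (label 0). The weights are hypothetical;
    the paper's CALF formulation may differ."""
    weights = torch.where(
        labels == 1,
        torch.full_like(labels, attack_weight, dtype=torch.float),
        torch.full_like(labels, bonafide_weight, dtype=torch.float),
    )
    return F.binary_cross_entropy_with_logits(logits, labels.float(), weight=weights)

# Usage with a hypothetical backbone that outputs one logit per image:
# logits = backbone(images).squeeze(1)
# loss = asymmetric_spoof_loss(logits, labels)
```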

21 citations


Cited by
Posted Content
TL;DR: A capsule network is proposed that can detect various kinds of attacks, from presentation attacks using printed images and replayed videos to attacks using fake videos created with deep learning, while using far fewer parameters than traditional convolutional neural networks with similar performance.
Abstract: The revolution in computer hardware, especially in graphics processing units and tensor processing units, has enabled significant advances in computer graphics and artificial intelligence algorithms. In addition to their many beneficial applications in daily life and business, computer-generated/manipulated images and videos can be used for malicious purposes that violate security systems, privacy, and social trust. The deepfake phenomenon and its variations enable a normal user to use his or her personal computer to easily create fake videos of anybody from a short real online video. Several countermeasures have been introduced to deal with attacks using such videos. However, most of them are targeted at certain domains and are ineffective when applied to other domains or new attacks. In this paper, we introduce a capsule network that can detect various kinds of attacks, from presentation attacks using printed images and replayed videos to attacks using fake videos created using deep learning. It uses far fewer parameters than traditional convolutional neural networks while achieving similar performance. Moreover, we explain, for the first time in the literature, the theory behind the application of capsule networks to the forensics problem through detailed analysis and visualization.
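As a rough illustration of the routing-by-agreement mechanism that capsule networks rely on, the following minimal PyTorch sketch implements a dynamic-routing layer mapping primary capsules to two output capsules (e.g. real vs. fake). The RoutingCapsules module, its dimensions, and the number of routing iterations are illustrative choices and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing non-linearity: shrinks vector length into (0, 1)
    while preserving direction."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

class RoutingCapsules(nn.Module):
    """Minimal dynamic-routing layer from in_caps primary capsules to
    out_caps output capsules; output capsule lengths can serve as class scores."""
    def __init__(self, in_caps=32, in_dim=8, out_caps=2, out_dim=4, iters=3):
        super().__init__()
        self.iters = iters
        self.W = nn.Parameter(0.01 * torch.randn(out_caps, in_caps, out_dim, in_dim))

    def forward(self, u):                                     # u: (batch, in_caps, in_dim)
        u_hat = torch.einsum('oidk,bik->boid', self.W, u)     # per-capsule predictions
        b = torch.zeros(u.size(0), self.W.size(0), self.W.size(1), device=u.device)
        for _ in range(self.iters):                           # routing by agreement
            c = F.softmax(b, dim=1)                           # coupling coefficients
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=2))  # (batch, out_caps, out_dim)
            b = b + (u_hat * v.unsqueeze(2)).sum(dim=-1)      # agreement update
        return v

# Class scores are read off as output capsule lengths, e.g. v.norm(dim=-1).
```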

109 citations

Journal ArticleDOI
TL;DR: A new framework for PAD is proposed using a one-class classifier, where the representation is learned with a Multi-Channel Convolutional Neural Network (MCCNN); a novel loss function forces the network to learn a compact embedding for the bonafide class that stays far from the representations of attacks.
Abstract: Face recognition has evolved as a widely used biometric modality. However, its vulnerability against presentation attacks poses a significant security threat. Though presentation attack detection (PAD) methods try to address this issue, they often fail in generalizing to unseen attacks. In this work, we propose a new framework for PAD using a one-class classifier, where the representation used is learned with a Multi-Channel Convolutional Neural Network (MCCNN). A novel loss function is introduced, which forces the network to learn a compact embedding for the bonafide class while staying far from the representation of attacks. A one-class Gaussian Mixture Model is used on top of these embeddings for the PAD task. The proposed framework introduces a novel approach to learn a robust PAD system from bonafide and available (known) attack classes. This is particularly important as collecting bonafide data and simpler attacks is much easier than collecting a wide variety of expensive attacks. The proposed system is evaluated on the publicly available WMCA multi-channel face PAD database, which contains a wide variety of 2D and 3D attacks. Further, we have performed experiments with the MLFP and SiW-M datasets using RGB channels only. Superior performance in unseen attack protocols shows the effectiveness of the proposed approach. Software, data, and protocols to reproduce the results are made available publicly.
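A minimal sketch of the second stage of the pipeline described above: fitting a one-class Gaussian Mixture Model on bonafide embeddings only and scoring probe embeddings by log-likelihood. The embedding dimensionality, the number of mixture components, and the thresholding rule are assumptions, since the abstract states only the overall approach.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_bonafide_model(bonafide_embeddings, n_components=4, seed=0):
    """Fit a one-class GMM on embeddings of bonafide samples only."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='full', random_state=seed)
    gmm.fit(bonafide_embeddings)
    return gmm

def pad_score(gmm, embeddings):
    """Log-likelihood under the bonafide model; higher means more likely bonafide."""
    return gmm.score_samples(embeddings)

# Illustrative run with random stand-in embeddings (128-D, hypothetical);
# in the paper's setup these would be MCCNN embeddings.
rng = np.random.default_rng(0)
bonafide = rng.normal(size=(500, 128))
probe = rng.normal(size=(10, 128))
gmm = fit_bonafide_model(bonafide)
threshold = np.quantile(pad_score(gmm, bonafide), 0.05)  # hypothetical operating point
accepted = pad_score(gmm, probe) > threshold
```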

77 citations

Journal ArticleDOI
03 Apr 2020
TL;DR: This paper summarizes the different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working.
Abstract: Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications. Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged. This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working. Different types of attacks such as physical presentation attacks, disguise/makeup, digital adversarial attacks, and morphing/tampering using GANs are discussed. We also present a discussion on the effect of bias on face recognition models and showcase that factors such as age and gender variations affect the performance of modern algorithms. The paper also presents the potential reasons for these challenges and some future research directions for increasing the robustness of face recognition models.

53 citations

Journal ArticleDOI
Shan Jia, Xin Li, Chuanbo Hu, Guodong Guo, Zhengquan Xu
TL;DR: This work proposes a novel anti-spoofing method based on factorized bilinear coding of multiple color channels (MC_FBC) that achieves state-of-the-art performance on both the authors' own WFFD database and other face spoofing databases under various intra-database and inter-database testing scenarios.
Abstract: We have witnessed rapid advances in both face presentation attack models and presentation attack detection (PAD) in recent years. Compared with the widely studied 2D face presentation attacks, 3D face spoofing attacks are more challenging because face recognition systems are more easily confused by the 3D characteristics of materials similar to real faces. In this work, we tackle the problem of detecting these realistic 3D face presentation attacks and propose a novel anti-spoofing method from the perspective of fine-grained classification. Our method, based on factorized bilinear coding of multiple color channels (namely MC_FBC), targets learning the subtle fine-grained differences between real and fake images. By extracting discriminative information and fusing complementary information from the RGB and YCbCr color spaces, we have developed a principled solution to 3D face spoofing detection. A large-scale wax figure face database (WFFD) with both images and videos has also been collected as super-realistic attacks to facilitate the study of 3D face presentation attack detection. Extensive experimental results show that our proposed method achieves state-of-the-art performance on both our own WFFD database and other face spoofing databases under various intra-database and inter-database testing scenarios.
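To make the multi-channel fusion idea concrete, here is a small sketch of generic low-rank (factorized) bilinear pooling of two feature vectors, such as features extracted from RGB and YCbCr branches. The FactorizedBilinearFusion module, its rank, and its output size are illustrative and do not reproduce the paper's MC_FBC coding.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedBilinearFusion(nn.Module):
    """Low-rank bilinear pooling of two feature vectors (e.g. RGB and YCbCr
    branch features); dimensions are illustrative."""
    def __init__(self, dim_a, dim_b, rank=64, out_dim=8):
        super().__init__()
        self.U = nn.Linear(dim_a, rank * out_dim, bias=False)
        self.V = nn.Linear(dim_b, rank * out_dim, bias=False)
        self.rank, self.out_dim = rank, out_dim

    def forward(self, xa, xb):
        joint = self.U(xa) * self.V(xb)                              # element-wise interactions
        joint = joint.view(-1, self.out_dim, self.rank).sum(dim=2)   # sum-pool over rank factors
        joint = torch.sign(joint) * torch.sqrt(joint.abs() + 1e-8)   # signed square root
        return F.normalize(joint, dim=1)                             # L2 normalization

# Usage with hypothetical per-branch feature extractors producing 512-D vectors:
# fused = FactorizedBilinearFusion(512, 512)(rgb_features, ycbcr_features)
```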

36 citations

Journal ArticleDOI
TL;DR: Li et al. propose a two-head contraction-expansion convolutional neural network (CNN) architecture for robust presentation attack detection, which takes the raw image and an edge-enhanced image as inputs to learn discriminative features for binary classification.
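A minimal sketch of the two-input idea described in the summary: one branch processes the raw image and the other an edge-enhanced version, and the concatenated features feed a binary classifier. The Sobel-based enhancement, grayscale input, and layer sizes are assumptions; this does not reproduce the paper's contraction-expansion design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])

def edge_enhance(gray):
    """Rough edge map via Sobel gradients; stands in for whatever
    enhancement the paper actually uses."""
    kx = SOBEL_X.view(1, 1, 3, 3).to(gray.device, gray.dtype)
    ky = SOBEL_X.t().view(1, 1, 3, 3).to(gray.device, gray.dtype)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class TwoHeadPAD(nn.Module):
    """Two-branch CNN: raw-image branch plus edge-enhanced branch,
    concatenated features, binary bonafide-vs-attack classifier."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.raw_branch, self.edge_branch = branch(), branch()
        self.classifier = nn.Linear(32, 2)

    def forward(self, gray):                       # gray: (batch, 1, H, W)
        feats = torch.cat([self.raw_branch(gray),
                           self.edge_branch(edge_enhance(gray))], dim=1)
        return self.classifier(feats)
```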

10 citations