Author

Tejas I. Dhamecha

Bio: Tejas I. Dhamecha is an academic researcher from IBM. The author has contributed to research in topics including Facial recognition system and Linear discriminant analysis. The author has an h-index of 12 and has co-authored 35 publications receiving 788 citations. Previous affiliations of Tejas I. Dhamecha include the Indraprastha Institute of Information Technology.

Papers
Proceedings ArticleDOI
23 Jun 2013
TL;DR: A new approach for spoofing detection in face videos that uses Eulerian motion magnification to enhance facial expressions; the framework improves on state-of-the-art performance, with the HOOF descriptor yielding a near-perfect half total error rate.
Abstract: For a robust face biometric system, a reliable anti-spoofing approach must be deployed to circumvent print and replay attacks. Several techniques have been proposed to counter face spoofing; however, a robust solution that is also computationally efficient is still unavailable. This paper presents a new approach for spoofing detection in face videos using motion magnification. The Eulerian motion magnification approach is used to enhance the facial expressions commonly exhibited by subjects in a captured video. Next, two types of feature extraction algorithms are proposed: (i) a configuration of LBP that provides improved performance compared to other computationally expensive texture-based approaches, and (ii) a motion estimation approach using the HOOF descriptor. On the Print Attack and Replay Attack spoofing datasets, the proposed framework improves on state-of-the-art performance, with the HOOF descriptor yielding near-perfect half total error rates of 0% and 1.25%, respectively.
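A minimal sketch of the two feature types the abstract describes: an LBP texture histogram and a HOOF-style motion histogram, using OpenCV and scikit-image. The Eulerian motion magnification step is omitted and assumed to have already been applied to the input frames; function names and parameter choices are illustrative, not taken from the paper's implementation.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_frame, points=8, radius=1):
    """Uniform LBP histogram of a (motion-magnified) grayscale face frame."""
    lbp = local_binary_pattern(gray_frame, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def hoof_histogram(prev_gray, next_gray, n_bins=32):
    """HOOF-style descriptor: histogram of optical-flow orientations,
    each flow vector weighted by its magnitude."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi),
                           weights=mag, density=True)
    return hist
```

Either histogram, pooled over the frames of a video, could then feed a binary real-vs-spoof classifier.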

243 citations

Journal ArticleDOI
16 Jul 2014-PLOS ONE
TL;DR: An automated algorithm is developed to verify faces presented under disguise variations, using automatically localized feature descriptors that identify disguised face patches and account for this information to achieve improved matching accuracy.
Abstract: Face verification, though an easy task for humans, is a long-standing open research area. This is largely due to challenging covariates, such as disguise and aging, which make it very hard to accurately verify the identity of a person. This paper investigates human and machine performance in recognizing/verifying disguised faces. Performance is also evaluated under familiarity and match/mismatch with the ethnicity of observers. The findings of this study are used to develop an automated algorithm to verify faces presented under disguise variations. We use automatically localized feature descriptors that can identify disguised face patches and account for this information to achieve improved matching accuracy. The performance of the proposed algorithm is evaluated on the IIIT-Delhi Disguise database, which contains images pertaining to 75 subjects with different kinds of disguise variations. The experiments suggest that the proposed algorithm can outperform a popular commercial system; the paper also evaluates both against human performance in matching disguised face images.

110 citations

Proceedings ArticleDOI
04 Jun 2013
TL;DR: A framework, termed Aravrta, is proposed, which classifies the local facial regions of both visible and thermal face images into biometric (regions without disguise) and non-biometric (regions with disguise) classes, and improves performance compared to existing algorithms.
Abstract: Face verification, though it seems an easy task for humans, is a long-standing research area. With challenging covariates such as disguise or face obfuscation, automatically verifying the identity of a person is assumed to be very hard. This paper explores the feasibility of face verification under disguise variations using multi-spectrum (visible and thermal) face images. We propose a framework, termed Aravrta, which classifies the local facial regions of both visible and thermal face images into biometric (regions without disguise) and non-biometric (regions with disguise) classes. The biometric patches are then used for facial feature extraction and matching. The performance of the algorithm is evaluated on the IHTD In and Beyond Visible Spectrum Disguise database, which was prepared by the authors and contains images pertaining to 75 subjects with different kinds of disguise variations. The experimental results suggest that the proposed framework improves performance compared to existing algorithms; however, more research is needed to address this important covariate.
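A minimal sketch of the patch-level idea behind Aravrta: split a face image into a grid of local regions, describe each region with a texture histogram, and keep only the regions a trained classifier labels as biometric (disguise-free). The trained classifier and the downstream matching step are assumed to exist; all names and parameters here are illustrative, not from the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def grid_patches(gray_face, rows=4, cols=4):
    """Yield a rows x cols grid of local regions from a grayscale face."""
    h, w = gray_face.shape
    for i in range(rows):
        for j in range(cols):
            yield gray_face[i * h // rows:(i + 1) * h // rows,
                            j * w // cols:(j + 1) * w // cols]

def biometric_patches(gray_face, patch_clf):
    """Keep only patches the classifier labels as disguise-free (label 1)."""
    kept = []
    for patch in grid_patches(gray_face):
        lbp = local_binary_pattern(patch, 8, 1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        if patch_clf.predict(hist.reshape(1, -1))[0] == 1:
            kept.append(patch)
    return kept  # only these regions feed feature extraction and matching
```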

108 citations

Proceedings ArticleDOI
01 Dec 2016
TL;DR: A novel multi-feature evidence aggregation method for face spoofing detection that fuses evidence from features encoding both texture and motion properties in the face and the surrounding scene regions, providing robustness to different attacks.
Abstract: Biometric systems can be attacked in several ways, the most common being spoofing of the input sensor. Anti-spoofing is therefore one of the most essential safeguards against attacks on biometric systems. Face recognition is even more vulnerable because image capture is non-contact. Several anti-spoofing methods have been proposed in the literature for both contact and non-contact biometric modalities, often using video to study the temporal characteristics of a real vs. spoofed biometric signal. This paper presents a novel multi-feature evidence aggregation method for face spoofing detection. The proposed method fuses evidence from features encoding both texture and motion (liveness) properties in the face as well as the surrounding scene regions. The feature extraction algorithms are based on a configuration of local binary patterns and motion estimation using the histogram of oriented optical flow. Furthermore, multi-feature windowed videolet aggregation of these orthogonal features, coupled with support vector machine-based classification, provides robustness to different attacks. We demonstrate the efficacy of the proposed approach by evaluating it on three standard public databases: CASIA-FASD, 3DMAD and MSU-MFSD, with equal error rates of 3.14%, 0%, and 0%, respectively.
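A minimal sketch of the windowed "videolet" aggregation described above: slice a clip into fixed-length windows, pool a texture descriptor and a motion descriptor within each window, concatenate them, and score each window with a trained classifier, with voting over windows giving the clip-level decision. The feature extractors, the classifier, and the label convention are assumptions; this is not the authors' implementation.

```python
import numpy as np

def videolet_decision(frames, texture_fn, motion_fn, clf, window=20):
    """Classify a clip as real (1) or spoof (0) by voting over videolets.

    texture_fn(frame) and motion_fn(prev, next) return feature vectors;
    clf is a trained classifier (e.g. an sklearn SVC). All are assumed."""
    votes = []
    for start in range(0, len(frames) - window + 1, window):
        chunk = frames[start:start + window]
        # pool per-frame texture and per-pair motion features over the window
        tex = np.mean([texture_fn(f) for f in chunk], axis=0)
        mot = np.mean([motion_fn(a, b) for a, b in zip(chunk, chunk[1:])],
                      axis=0)
        feat = np.concatenate([tex, mot]).reshape(1, -1)
        votes.append(clf.predict(feat)[0])  # 1 = real, 0 = spoof (assumed)
    return int(np.mean(votes) >= 0.5)
```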

98 citations

Book ChapterDOI
25 Jun 2019
TL;DR: This work experiments with fine-tuning a pre-trained self-attention language model, namely Bidirectional Encoder Representations from Transformers (BERT), applying it to short answer grading, and shows that it produces superior results across multiple domains.
Abstract: Dialogue-based tutoring platforms have shown great promise in helping individual students improve mastery. Short answer grading is a crucial component of such platforms. However, generative short answer grading using the same platform for diverse disciplines and titles is a significant challenge due to data distribution variations across domains and the frequent occurrence of non-sentential answers. Recent NLP research has introduced novel deep learning architectures such as the Transformer, which relies solely on self-attention mechanisms. Pre-trained models based on the Transformer architecture have produced impressive results across a range of NLP tasks. In this work, we experiment with fine-tuning a pre-trained self-attention language model, namely Bidirectional Encoder Representations from Transformers (BERT), applying it to short answer grading, and show that it produces superior results across multiple domains. On the SemEval-2013 benchmark dataset, we report up to 10% absolute improvement in macro-average F1 over state-of-the-art results. On our two psychology-domain datasets, the fine-tuned model yields classification performance approaching human-agreement levels. Moreover, we study the effectiveness of fine-tuning as a function of the size of the task-specific labeled data, the number of training epochs, and its generalizability to cross-domain and joint-domain scenarios.
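A minimal sketch of fine-tuning BERT for short answer grading, framed as sentence-pair classification (reference answer vs. student answer) with the Hugging Face transformers library. The label set, the example pair, and the training loop shape are placeholders; the paper's exact setup is not reproduced here.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # e.g. correct / partial / incorrect

# one hypothetical (reference answer, student answer) pair with a gold label
reference = "Force equals mass times acceleration."
student = "F is m multiplied by a."
inputs = tokenizer(reference, student, return_tensors="pt", truncation=True)
labels = torch.tensor([0])  # placeholder gold label

# one fine-tuning step; in practice this runs over batches for a few epochs
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
optimizer.zero_grad()
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```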

68 citations


Cited by
Journal ArticleDOI
TL;DR: An efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA) that outperforms the state-of-the-art methods in spoof detection and highlights the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.
Abstract: Automatic face recognition is now widely used in applications ranging from deduplication of identity to authentication of mobile payment. This popularity of face recognition has raised concerns about face spoof attacks (also known as biometric sensor presentation attacks), where a photo or video of an authorized person’s face could be used to gain access to facilities or services. While a number of face spoof detection techniques have been proposed, their generalization ability has not been adequately addressed. We propose an efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA). Four different features (specular reflection, blurriness, chromatic moment, and color diversity) are extracted to form the IDA feature vector. An ensemble classifier, consisting of multiple SVM classifiers trained for different face spoof attacks (e.g., printed photo and replayed video), is used to distinguish between genuine (live) and spoof faces. The proposed approach is extended to multiframe face spoof detection in videos using a voting-based scheme. We also collect a face spoof database, MSU mobile face spoofing database (MSU MFSD), using two mobile devices (Google Nexus 5 and MacBook Air) with three types of spoof attacks (printed photo, replayed video with iPhone 5S, and replayed video with iPad Air). Experimental results on two public-domain face spoof databases (Idiap REPLAY-ATTACK and CASIA FASD), and the MSU MFSD database show that the proposed approach outperforms the state-of-the-art methods in spoof detection. Our results also highlight the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.
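A minimal sketch of two of the four IDA cues named in the abstract: blurriness, approximated here by variance of the Laplacian (a common proxy, not necessarily the paper's exact formulation), and colour diversity, approximated by the fraction of distinct quantized colours. The specular-reflection and chromatic-moment features, the ensemble of SVMs, and the voting scheme are omitted.

```python
import cv2
import numpy as np

def blurriness(bgr_face):
    """Variance of the Laplacian; lower values indicate a blurrier image,
    as expected from recaptured (spoof) faces."""
    gray = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def color_diversity(bgr_face, bins_per_channel=32):
    """Fraction of distinct quantized colours; display or print
    reproduction tends to reduce it."""
    quantized = (bgr_face // (256 // bins_per_channel)).reshape(-1, 3)
    return np.unique(quantized, axis=0).shape[0] / quantized.shape[0]
```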

716 citations

01 Jan 2005
TL;DR: A general technique called Bubbles is proposed to assign the credit for human categorization performance to specific visual information; it is illustrated on three face categorization tasks (gender, expressive or not, and identity).
Abstract: Every day, people flexibly perform different categorizations of common faces, objects and scenes. Intuition and scattered evidence suggest that these categorizations require the use of different visual information from the input. However, there is no unifying method, based on the categorization performance of subjects, that can isolate the information used. To this end, we developed Bubbles, a general technique that can assign the credit for human categorization performance to specific visual information. To illustrate the technique, we applied Bubbles to three categorization tasks (gender, expressive or not, and identity) on the same set of faces, with human and ideal observers, to compare the features they used.
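A minimal sketch of the Bubbles idea: reveal a stimulus only through a few randomly placed Gaussian apertures, record whether the observer still categorizes it correctly, and, over many trials, relate aperture locations to correct responses to find the diagnostic regions. Only the mask generation is shown; parameters are illustrative.

```python
import numpy as np

def bubbles_mask(height, width, n_bubbles=10, sigma=12.0, rng=None):
    """Sum of randomly placed Gaussian apertures, clipped to [0, 1]."""
    rng = rng or np.random.default_rng()
    ys, xs = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width))
    for cy, cx in zip(rng.integers(0, height, n_bubbles),
                      rng.integers(0, width, n_bubbles)):
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

# one trial's stimulus: the face is visible only through the apertures
# revealed = face_image * bubbles_mask(*face_image.shape)
```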

623 citations

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper argues for the importance of auxiliary supervision to guide learning toward discriminative and generalizable cues, and introduces a new face anti-spoofing database that covers a large range of illumination, subject, and pose variations.
Abstract: Face anti-spoofing is crucial for protecting face recognition systems from security breaches. Previous deep learning approaches formulate face anti-spoofing as a binary classification problem; many of them struggle to grasp adequate spoofing cues and generalize poorly. In this paper, we argue for the importance of auxiliary supervision to guide the learning toward discriminative and generalizable cues. A CNN-RNN model is learned to estimate the face depth with pixel-wise supervision and to estimate rPPG signals with sequence-wise supervision. The estimated depth and rPPG are fused to distinguish live vs. spoof faces. Further, we introduce a new face anti-spoofing database that covers a large range of illumination, subject, and pose variations. Experiments show that our model achieves state-of-the-art results on both intra- and cross-database testing.
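A minimal sketch of the auxiliary-supervision idea in PyTorch: instead of a binary live/spoof label, a convolutional network regresses a per-pixel depth map, with spoof faces supervised toward an all-zero (flat) target. The rPPG branch, the RNN, and the actual architecture are omitted; the tiny network below is an illustrative stand-in, not the paper's model.

```python
import torch
import torch.nn as nn

depth_net = nn.Sequential(            # toy stand-in for the paper's CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

def depth_loss(face_batch, gt_depth, is_live):
    """Pixel-wise MSE; spoof samples (is_live == 0) get a flat zero target.

    face_batch: (B, 3, H, W) images; gt_depth: (B, H, W) depth maps for
    live faces; is_live: (B,) float tensor of 1.0 / 0.0 labels."""
    pred = depth_net(face_batch).squeeze(1)        # (B, H, W)
    target = gt_depth * is_live.view(-1, 1, 1)     # zero out spoof maps
    return nn.functional.mse_loss(pred, target)
```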

502 citations

Journal ArticleDOI
TL;DR: This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis that exploits the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces.
Abstract: Research on non-intrusive software-based face spoofing detection schemes has been mainly focused on the analysis of the luminance information of the face images, hence discarding the chroma component, which can be very useful for discriminating fake faces from genuine ones. This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis. We exploit the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces. More specifically, the feature histograms are computed over each image band separately. Extensive experiments on the three most challenging benchmark data sets, namely, the CASIA face anti-spoofing database, the replay-attack database, and the MSU mobile face spoof database, showed excellent results compared with the state of the art. More importantly, unlike most of the methods proposed in the literature, our proposed approach is able to achieve stable performance across all the three benchmark data sets. The promising results of our cross-database evaluation suggest that the facial colour texture representation is more stable in unknown conditions compared with its gray-scale counterparts.
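A minimal sketch of the colour-texture idea: compute an LBP histogram on each band of a chroma-preserving colour space (YCbCr here; the paper also uses HSV) and concatenate the per-channel histograms into one descriptor. Parameter choices are illustrative, not the paper's exact configuration.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def color_lbp_descriptor(bgr_face, points=8, radius=1):
    """Concatenated uniform-LBP histograms, one per YCbCr channel."""
    ycrcb = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2YCrCb)
    n_bins = points + 2
    hists = []
    for c in range(3):  # one histogram per colour band
        lbp = local_binary_pattern(ycrcb[:, :, c], points, radius,
                                   method="uniform")
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins),
                               density=True)
        hists.append(hist)
    return np.concatenate(hists)
```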

449 citations