
Tejas I. Dhamecha

Researcher at IBM

Publications - 37
Citations - 1030

Tejas I. Dhamecha is an academic researcher at IBM. His research focuses on topics including facial recognition systems and linear discriminant analysis. He has an h-index of 12 and has co-authored 35 publications that have received 788 citations. His previous affiliations include the Indraprastha Institute of Information Technology.

Papers
Proceedings ArticleDOI

Computationally Efficient Face Spoofing Detection with Motion Magnification

TL;DR: A new approach for spoofing detection in face videos that uses Eulerian motion magnification to amplify facial micro-movements; it improves on state-of-the-art performance, with the HOOF descriptor in particular yielding a near-perfect half total error rate.
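As a rough illustration of the descriptor named above, the Python sketch below computes a HOOF (Histogram of Oriented Optical Flow) descriptor between consecutive motion-magnified frames; the Eulerian magnification step itself is omitted, and the function name and parameter values are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def hoof_descriptor(prev_gray, next_gray, n_bins=32):
    """Histogram of optical-flow orientations, weighted by flow magnitude."""
    # Dense optical flow between two consecutive (motion-magnified) frames.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # ang in radians
    # Bin flow directions, each pixel weighted by its flow magnitude.
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)  # L1-normalized descriptor
```

Descriptors from all frame pairs of a video can then be aggregated and passed to any off-the-shelf classifier for the spoof/genuine decision.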
Journal ArticleDOI

Recognizing disguised faces: human and machine evaluation.

TL;DR: An automated algorithm is developed to verify faces presented under disguise variations, using automatically localized feature descriptors that identify disguised face patches and account for this information to improve matching accuracy.
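A hedged sketch of patch-wise matching that excludes disguised patches might look as follows; the grid size, LBP parameters, and chi-square scoring are assumptions for illustration, and the patch-level disguise mask is taken as given rather than computed.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def patch_histograms(gray, grid=(4, 4), P=8, R=1):
    """LBP histogram for each cell of a grid over an aligned face image."""
    lbp = local_binary_pattern(gray, P, R, method='uniform')
    h, w = gray.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                       j * w // grid[1]:(j + 1) * w // grid[1]]
            # 'uniform' LBP with P neighbours yields values in [0, P + 1].
            hist, _ = np.histogram(cell, bins=P + 2, range=(0, P + 2),
                                   density=True)
            hists.append(hist)
    return np.array(hists)

def match_score(face_a, face_b, disguised_mask):
    """Chi-square similarity computed only over undisguised patches."""
    ha, hb = patch_histograms(face_a), patch_histograms(face_b)
    chi2 = ((ha - hb) ** 2 / (ha + hb + 1e-8)).sum(axis=1)
    return -chi2[~disguised_mask].mean()  # higher score = better match
```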
Proceedings ArticleDOI

Disguise detection and face recognition in visible and thermal spectrums

TL;DR: A framework, termed Aravrta, is proposed that classifies the local facial regions of both visible and thermal face images into biometric (regions without disguise) and non-biometric (regions with disguise) classes, improving performance over existing algorithms.
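The biometric vs. non-biometric patch classification could be prototyped along these lines; the descriptors, classifier choice, and toy data below are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))    # toy patch descriptors (visible/thermal)
y_train = rng.integers(0, 2, size=200)  # 1 = biometric, 0 = non-biometric

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
clf.fit(X_train, y_train)

# Only patches predicted as biometric are forwarded to the face matcher;
# non-biometric (disguised) patches are excluded from matching.
X_test = rng.normal(size=(16, 64))
biometric_mask = clf.predict(X_test).astype(bool)
```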
Proceedings ArticleDOI

Face anti-spoofing with multifeature videolet aggregation

TL;DR: A novel multi-feature evidence aggregation method for face spoofing detection that fuses evidence from features encoding both texture and motion properties of the face and the surrounding scene regions, providing robustness to different attacks.
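A minimal sketch of segment-level evidence aggregation, assuming texture and motion feature extractors and per-modality classifiers are supplied by the caller; the segment length and the weighted-sum fusion rule are illustrative choices, not the paper's.

```python
import numpy as np

def videolets(frames, length=25):
    """Split a frame sequence into non-overlapping fixed-length segments."""
    return [frames[i:i + length]
            for i in range(0, len(frames) - length + 1, length)]

def aggregate_spoof_score(frames, texture_clf, motion_clf,
                          texture_feat, motion_feat, w=0.5):
    """Fuse texture and motion evidence per videolet, then average."""
    scores = []
    for seg in videolets(frames):
        s_tex = texture_clf.predict_proba([texture_feat(seg)])[0, 1]
        s_mot = motion_clf.predict_proba([motion_feat(seg)])[0, 1]
        scores.append(w * s_tex + (1 - w) * s_mot)  # weighted-sum fusion
    return float(np.mean(scores))  # video-level spoof score
```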
Book ChapterDOI

Improving Short Answer Grading Using Transformer-Based Pre-training.

TL;DR: This work experiments with fine-tuning a pre-trained self-attention language model, namely Bidirectional Encoder Representations from Transformers (BERT), applying it to short answer grading, and shows that it produces superior results across multiple domains.
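A minimal sketch of such a fine-tuning setup, assuming the Hugging Face transformers API, a bert-base-uncased checkpoint, and a three-way grade label scheme; none of these specifics are taken from the chapter itself.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # e.g. correct / partial / incorrect

# Short answer grading framed as sentence-pair classification:
# (reference answer, student answer) -> grade label.
inputs = tokenizer(
    "Photosynthesis converts light energy into chemical energy.",  # reference
    "Plants turn sunlight into food.",                             # student
    return_tensors="pt", truncation=True, padding=True)

labels = torch.tensor([0])               # toy label for this pair
loss = model(**inputs, labels=labels).loss
loss.backward()                          # one illustrative gradient step
```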