Author

Maneet Singh

Bio: Maneet Singh is an academic researcher from the Indraprastha Institute of Information Technology. The author has contributed to research in topics including Facial recognition system and Autoencoder. The author has an h-index of 15 and has co-authored 51 publications receiving 605 citations.

Papers published on a yearly basis

Papers
Posted Content
TL;DR: A novel supervised autoencoder-based approach, Deep Class-Encoder, which uses class labels to learn a discriminative representation for a given sample by mapping the learned feature vector to its label.
Abstract: Soft biometric modalities have shown their utility in different applications, including significantly reducing the search space. This leads to improved recognition performance, reduced computation time, and faster processing of test samples. Some common soft biometric modalities are ethnicity, gender, age, hair color, iris color, the presence of facial hair or moles, and markers. This research focuses on performing ethnicity and gender classification on iris images. We present a novel supervised autoencoder-based approach, Deep Class-Encoder, which uses class labels to learn a discriminative representation for a given sample by mapping the learned feature vector to its label. The proposed model is evaluated on two datasets each for ethnicity and gender classification. The results obtained using the proposed Deep Class-Encoder demonstrate its effectiveness in comparison to existing approaches and state-of-the-art methods.
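
The abstract describes an autoencoder whose learned feature vector is additionally mapped to the sample's class label. As a rough, hedged illustration of that general idea (not the paper's exact formulation), the PyTorch sketch below adds a label-mapping head to an encoder-decoder; the class name, layer sizes, activations, and loss weight are illustrative assumptions.

```python
# Hypothetical sketch of a label-supervised autoencoder: the latent code must
# both reconstruct the input and map (linearly) to its class label.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassEncoder(nn.Module):
    def __init__(self, in_dim=1024, hid_dim=256, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hid_dim, in_dim)         # reconstruction branch
        self.label_map = nn.Linear(hid_dim, num_classes)  # maps latent code to its label

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.label_map(z)

def class_encoder_loss(model, x, y, alpha=0.1):
    # reconstruction error plus a weighted label-mapping (classification) error;
    # alpha is an assumed trade-off weight, not a value from the paper
    x_hat, logits = model(x)
    return F.mse_loss(x_hat, x) + alpha * F.cross_entropy(logits, y)
```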

24 citations

Journal ArticleDOI
TL;DR: The proposed algorithm utilizes a neural network and a random decision forest to encode age variations across different weight categories, improving the performance of face recognition under age variations.
Abstract: With the increase in age, there are changes in skeletal structure, muscle mass, and body fat. For recognizing faces with age variations, researchers have generally focused on the skeletal structure and muscle mass. However, the effect of changes in body fat has not been studied with respect to face recognition. In this paper, we incorporate weight information to improve the performance of face recognition with age variations. The proposed algorithm utilizes a neural network and a random decision forest to encode age variations across different weight categories. The results are reported on the WhoIsIt database, prepared by the authors, containing 1,109 images of 110 individuals with age and weight variations. The comparison with existing state-of-the-art algorithms and a commercial system on the WhoIsIt and FG-Net databases shows that the proposed algorithm significantly outperforms existing algorithms.
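
The abstract only states that a neural network and a random decision forest are combined to encode age variations across weight categories. One plausible reading, sketched below with scikit-learn, trains a separate forest on neural-network face descriptors for each weight category; the function names, the per-category routing, and the assumption of precomputed descriptors are illustrative, not the paper's method.

```python
# Hypothetical pipeline: face descriptors (assumed to come from a neural network)
# are grouped by coarse weight category, and a random decision forest is trained
# per category to recognize identities under age variations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_per_weight_category(features, identities, weight_labels):
    """features: (N, d) neural-net face descriptors; identities: (N,) subject ids;
    weight_labels: (N,) coarse weight category of each image."""
    forests = {}
    for cat in np.unique(weight_labels):
        idx = weight_labels == cat
        forest = RandomForestClassifier(n_estimators=100, random_state=0)
        forest.fit(features[idx], identities[idx])
        forests[cat] = forest
    return forests

def predict_identity(forests, feature, weight_category):
    # route a probe to the forest trained for its (estimated) weight category
    return forests[weight_category].predict(feature.reshape(1, -1))[0]
```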

24 citations

Proceedings ArticleDOI
01 Jul 2017
TL;DR: Experimental evaluation and analysis on the proposed dataset show the effectiveness of the transfer learning approach for performing cross-modality recognition.
Abstract: Matching facial sketches to digital face images has widespread application in law enforcement scenarios. Recent advancements in technology have led to the availability of sketch generation tools, minimizing the need for a sketch artist. While these sketches have aided manual authentication, automatically matching composite sketches with digital mugshot photos suffers from a large modality gap. This research addresses the task of matching a composite face sketch to digital images by proposing a transfer learning based evolutionary algorithm. A new feature descriptor, Histogram of Image Moments, is also presented for encoding features across modalities. Moreover, the IIITD Composite Face Sketch Database of 150 subjects is presented to address the limited availability of databases in this problem domain. Experimental evaluation and analysis on the proposed dataset show the effectiveness of the transfer learning approach for performing cross-modality recognition.
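
The abstract names a Histogram of Image Moments descriptor without defining it. As a hedged sketch of a patch-wise image-moment descriptor, the code below computes Hu moments on a grid of cells and concatenates them so that sketches and photos share a compact representation; the grid size, log-scaling, and OpenCV-based implementation are assumptions, not the paper's definition.

```python
# Hypothetical patch-wise image-moment descriptor: Hu moments per grid cell,
# log-scaled and concatenated into one feature vector.
import cv2
import numpy as np

def image_moment_descriptor(gray, grid=(8, 8)):
    # gray: 2D grayscale image (e.g., an aligned face sketch or photo)
    h, w = gray.shape
    ch, cw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = gray[i*ch:(i+1)*ch, j*cw:(j+1)*cw].astype(np.float32)
            hu = cv2.HuMoments(cv2.moments(cell)).flatten()
            # log-scale for numerical stability (common practice with Hu moments)
            feats.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-12))
    return np.concatenate(feats)
```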

24 citations

Proceedings ArticleDOI
01 Oct 2019
TL;DR: DirectCapsNet, as discussed by the authors, utilizes a combination of capsule and convolutional layers for learning an effective very low resolution (VLR) recognition model, and incorporates two novel loss functions, (i) an HR-anchor loss and (ii) a targeted reconstruction loss, to overcome the challenge of limited information content in VLR images.
Abstract: Very low resolution (VLR) image recognition corresponds to classifying images with a resolution of 16x16 pixels or less. Though it has widespread applicability when objects are captured at a very large stand-off distance (e.g., surveillance scenarios) or from wide-angle mobile cameras, it has received limited attention. This research presents a novel Dual Directed Capsule Network model, termed DirectCapsNet, for addressing VLR digit and face recognition. The proposed architecture utilizes a combination of capsule and convolutional layers for learning an effective VLR recognition model. The architecture also incorporates two novel loss functions, (i) a proposed HR-anchor loss and (ii) a proposed targeted reconstruction loss, to overcome the challenges of limited information content in VLR images. The proposed losses use high resolution images as auxiliary data during training to "direct" discriminative feature learning. Multiple experiments for VLR digit classification and VLR face recognition are performed, along with comparisons with state-of-the-art algorithms. The proposed DirectCapsNet consistently showcases state-of-the-art results; for example, on the UCCS face database, it shows over 95% face recognition accuracy when 16x16 images are matched with 80x80 images.
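
The abstract describes the two auxiliary losses only at a high level. The PyTorch-style sketch below illustrates the general idea of using paired high resolution (HR) images to direct VLR feature learning: an anchor term pulls VLR embeddings toward HR embeddings, and a reconstruction term decodes the VLR embedding toward the HR image. The function names, loss weights, and exact formulations are assumptions, and the capsule/convolutional network itself is omitted.

```python
# Hypothetical sketch of HR-directed training losses for a VLR recognition model.
import torch
import torch.nn.functional as F

def hr_anchor_loss(vlr_emb, hr_emb):
    # encourage VLR features to match those of the corresponding HR image
    return F.mse_loss(vlr_emb, hr_emb.detach())

def targeted_reconstruction_loss(decoder, vlr_emb, hr_image):
    # decode the VLR embedding and compare it with the HR image it should represent
    return F.mse_loss(decoder(vlr_emb), hr_image)

def total_loss(cls_loss, vlr_emb, hr_emb, decoder, hr_image, lam1=1.0, lam2=0.5):
    # classification loss plus the two HR-directed terms; lam1/lam2 are assumed weights
    return (cls_loss
            + lam1 * hr_anchor_loss(vlr_emb, hr_emb)
            + lam2 * targeted_reconstruction_loss(decoder, vlr_emb, hr_image))
```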

24 citations

Posted Content
TL;DR: This research presents a novel Dual Directed Capsule Network model, termed DirectCapsNet, for addressing VLR digit and face recognition; it utilizes a combination of capsule and convolutional layers for learning an effective VLR recognition model.

23 citations


Cited by

Journal ArticleDOI
TL;DR: This survey provides a thorough review of techniques for manipulating face images, including DeepFake methods, and of methods to detect such manipulations, with special attention to the latest generation of DeepFakes.

502 citations

Journal ArticleDOI
TL;DR: A comprehensive review of recent developments in deep face recognition can be found in this paper, covering broad topics on algorithm designs, databases, protocols, and application scenes, as well as the technical challenges and several promising directions.

353 citations

Journal ArticleDOI
TL;DR: Major deep learning concepts pertinent to face image analysis and face recognition are reviewed, and a concise overview is provided of studies on specific face recognition problems, such as handling variations in pose, age, illumination, and expression, as well as heterogeneous face matching.

312 citations