
Showing papers by "Maneet Singh published in 2017"


Proceedings ArticleDOI
01 Oct 2017
TL;DR: DeepTransformer as mentioned in this paper learns a transformation and mapping function between the features of two domains, which can be applied with any existing learned or hand-crafted feature and can be used for sketch-to-sketch matching.
Abstract: Face sketch to digital image matching is an important challenge of face recognition that involves matching across different domains. Current research efforts have primarily focused on extracting domain invariant representations or learning a mapping from one domain to the other. In this research, we propose a novel transform learning based approach termed as DeepTransformer, which learns a transformation and mapping function between the features of two domains. The proposed formulation is independent of the input information and can be applied with any existing learned or hand-crafted feature. Since the mapping function is directional in nature, we propose two variants of DeepTransformer: (i) semi-coupled and (ii) symmetrically-coupled deep transform learning. This research also uses a novel IIIT-D Composite Sketch with Age (CSA) variations database which contains sketch images of 150 subjects along with age-separated digital photos. The performance of the proposed models is evaluated on a novel application of sketch-to-sketch matching, along with sketch-to-digital photo matching. Experimental results demonstrate the robustness of the proposed models in comparison to existing state-of-the-art sketch matching algorithms and a commercial face recognition system.
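The coupled transform-learning idea in this abstract can be written schematically. As a hedged sketch (the symbols below are illustrative, not the paper's exact notation): given sketch features $X_1$ and photo features $X_2$, the semi-coupled variant seeks transforms $T_1, T_2$, coefficient matrices $Z_1, Z_2$, and a one-directional mapping $M$ between the coefficient spaces:

```latex
\min_{T_1, T_2, Z_1, Z_2, M}
\;\|T_1 X_1 - Z_1\|_F^2
+ \|T_2 X_2 - Z_2\|_F^2
+ \lambda \,\|Z_2 - M Z_1\|_F^2
```

Under this reading, the symmetrically-coupled variant would add a reverse mapping term of the form $\|Z_1 - M' Z_2\|_F^2$, so that features from either domain can be mapped into the other.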

25 citations


Posted Content
TL;DR: A novel supervised auto-encoder based approach, Deep Class-Encoder, which uses class labels to learn discriminative representation for the given sample by mapping the learned feature vector to its label.
Abstract: Soft biometric modalities have shown their utility in different applications including reducing the search space significantly. This leads to improved recognition performance, reduced computation time, and faster processing of test samples. Some common soft biometric modalities are ethnicity, gender, age, hair color, iris color, presence of facial hair or moles, and markers. This research focuses on performing ethnicity and gender classification on iris images. We present a novel supervised autoencoder based approach, Deep Class-Encoder, which uses class labels to learn discriminative representation for the given sample by mapping the learned feature vector to its label. The proposed model is evaluated on two datasets each for ethnicity and gender classification. The results obtained using the proposed Deep Class-Encoder demonstrate its effectiveness in comparison to existing approaches and state-of-the-art methods.
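The label-supervision idea described in the abstract can be illustrated as a loss with an added code-to-label mapping term. This is a minimal sketch assuming a single-layer encoder/decoder and one-hot labels; the actual Deep Class-Encoder is a deeper, more elaborate formulation, and all weight names here are hypothetical:

```python
import numpy as np

def class_encoder_loss(X, Y, We, Wd, Wc, lam=1.0):
    """Illustrative supervised-autoencoder loss: reconstruction error
    plus a term mapping the learned code to the one-hot class label."""
    H = np.tanh(X @ We)              # encoder: learned feature vectors
    X_hat = H @ Wd                   # decoder: reconstruction
    Y_hat = H @ Wc                   # linear map from code to label
    recon = np.mean((X - X_hat) ** 2)
    label = np.mean((Y - Y_hat) ** 2)
    return recon + lam * label
```

Setting `lam=0` recovers a plain autoencoder; increasing it trades reconstruction fidelity for class-discriminative codes.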

24 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: Experimental evaluation and analysis on the proposed dataset show the effectiveness of the transfer learning approach for performing cross-modality recognition.
Abstract: Matching facial sketches to digital face images has widespread application in law enforcement scenarios. Recent advancements in technology have led to the availability of sketch generation tools, minimizing the requirement of a sketch artist. While these sketches have aided manual authentication, automatically matching composite sketches with digital mugshot photos suffers from a high modality gap. This research aims to address the task of matching a composite face sketch image to digital images by proposing a transfer learning based evolutionary algorithm. A new feature descriptor, Histogram of Image Moments, has also been presented for encoding features across modalities. Moreover, the IIITD Composite Face Sketch Database of 150 subjects is presented to fill the gap due to the limited availability of databases in this problem domain. Experimental evaluation and analysis on the proposed dataset show the effectiveness of the transfer learning approach for performing cross-modality recognition.
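A moments-based patch descriptor of the kind the abstract names can be sketched as follows. This is a hedged illustration only: the paper's Histogram of Image Moments is not specified in the abstract, so the patch size, moment orders, and histogramming choices below are assumptions:

```python
import numpy as np

def histogram_of_image_moments(img, patch=8, bins=16):
    """Illustrative moments-based descriptor: compute low-order central
    moments per non-overlapping patch, then histogram the patch
    variances into a fixed-length, normalized feature vector."""
    h, w = img.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = img[i:i + patch, j:j + patch].astype(float)
            mu = p.mean()                     # 1st moment (patch mean)
            c2 = ((p - mu) ** 2).mean()       # 2nd central moment
            c3 = ((p - mu) ** 3).mean()       # 3rd central moment
            feats.append([mu, c2, c3])
    feats = np.array(feats)
    hist, _ = np.histogram(feats[:, 1], bins=bins)
    return hist / max(hist.sum(), 1)
```

Because moments summarize intensity statistics rather than exact edge patterns, descriptors of this family tend to be more stable across modalities (sketch vs. photo) than raw pixel comparisons.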

24 citations


Proceedings ArticleDOI
TL;DR: Deep Class-Encoder as mentioned in this paper uses class labels to learn discriminative representation for the given sample by mapping the learned feature vector to its label, which leads to improved recognition performance, reduced computation time, and faster processing of test samples.
Abstract: Soft biometric modalities have shown their utility in different applications including reducing the search space significantly. This leads to improved recognition performance, reduced computation time, and faster processing of test samples. Some common soft biometric modalities are ethnicity, gender, age, hair color, iris color, presence of facial hair or moles, and markers. This research focuses on performing ethnicity and gender classification on iris images. We present a novel supervised auto-encoder based approach, Deep Class-Encoder, which uses class labels to learn discriminative representation for the given sample by mapping the learned feature vector to its label. The proposed model is evaluated on two datasets each for ethnicity and gender classification. The results obtained using the proposed Deep Class-Encoder demonstrate its effectiveness in comparison to existing approaches and state-of-the-art methods.

20 citations


Proceedings ArticleDOI
TL;DR: In this article, the first skull-face image pair database, IdentifyMe, was introduced and a preliminary approach using the proposed semi-supervised formulation of transform learning was presented.
Abstract: Forensic application of automatically matching skull with face images is an important research area linking biometrics with practical applications in forensics. It is an opportunity for biometrics and face recognition researchers to help law enforcement and forensic experts in giving an identity to unidentified human skulls. It is an extremely challenging problem which is further exacerbated by the lack of any publicly available database related to this problem. This is the first research in this direction with a twofold contribution: (i) introducing the first of its kind skull-face image pair database, IdentifyMe, and (ii) presenting a preliminary approach using the proposed semi-supervised formulation of transform learning. The experimental results and comparison with existing algorithms showcase the challenging nature of the problem. We assert that the availability of the database will inspire researchers to build sophisticated skull-to-face matching algorithms.

6 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: This work addresses the challenging problem of gender classification in multi-spectral low resolution face images by proposing a robust Class Representative Autoencoder model, termed AutoGen.
Abstract: Gender is one of the most common attributes used to describe an individual. It is used in multiple domains such as human computer interaction, marketing, security, and demographic reports. Research has been performed to automate the task of gender recognition in constrained environments using face images; however, limited attention has been given to gender classification in unconstrained scenarios. This work attempts to address the challenging problem of gender classification in multi-spectral low resolution face images. We propose a robust Class Representative Autoencoder model, termed AutoGen, for this task. The proposed model aims to minimize the intra-class variations while maximizing the inter-class variations for the learned feature representations. Results on visible as well as near infrared spectrum data for different resolutions and multiple databases depict the efficacy of the proposed model. Comparative results with existing approaches and two commercial off-the-shelf systems further motivate the use of class representative features for classification.
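The intra-class/inter-class objective described in the abstract can be sketched on learned codes. This is a minimal illustration under assumed weightings, not AutoGen's exact formulation: pull each code toward its class mean, and push class means away from the global mean:

```python
import numpy as np

def class_representative_loss(H, labels, alpha=1.0, beta=0.5):
    """Illustrative class-representative objective on learned codes H:
    alpha * intra-class scatter minus beta * inter-class scatter."""
    classes = np.unique(labels)
    means = {c: H[labels == c].mean(axis=0) for c in classes}
    # intra-class: average squared distance of each code to its class mean
    intra = np.mean([np.sum((h - means[c]) ** 2)
                     for h, c in zip(H, labels)])
    # inter-class: average squared distance of class means to global mean
    mu = H.mean(axis=0)
    inter = np.mean([np.sum((means[c] - mu) ** 2) for c in classes])
    return alpha * intra - beta * inter
```

Minimizing a loss of this shape favors representations that are compact within a gender class and well separated between classes, which is the property the abstract attributes to AutoGen's features.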

6 citations


Posted Content
TL;DR: A novel transform learning based approach termed as DeepTransformer, which learns a transformation and mapping function between the features of two domains, and can be applied with any existing learned or hand-crafted feature.
Abstract: Face sketch to digital image matching is an important challenge of face recognition that involves matching across different domains. Current research efforts have primarily focused on extracting domain invariant representations or learning a mapping from one domain to the other. In this research, we propose a novel transform learning based approach termed as DeepTransformer, which learns a transformation and mapping function between the features of two domains. The proposed formulation is independent of the input information and can be applied with any existing learned or hand-crafted feature. Since the mapping function is directional in nature, we propose two variants of DeepTransformer: (i) semi-coupled and (ii) symmetrically-coupled deep transform learning. This research also uses a novel IIIT-D Composite Sketch with Age (CSA) variations database which contains sketch images of 150 subjects along with age-separated digital photos. The performance of the proposed models is evaluated on a novel application of sketch-to-sketch matching, along with sketch-to-digital photo matching. Experimental results demonstrate the robustness of the proposed models in comparison to existing state-of-the-art sketch matching algorithms and a commercial face recognition system.

5 citations


Posted Content
TL;DR: This work introduces the first of its kind skull-face image pair database, Identify Me, and presents a preliminary approach using the proposed semi-supervised formulation of transform learning to inspire researchers to build sophisticated skull-to-face matching algorithms.
Abstract: Forensic application of automatically matching skull with face images is an important research area linking biometrics with practical applications in forensics. It is an opportunity for biometrics and face recognition researchers to help the law enforcement and forensic experts in giving an identity to unidentified human skulls. It is an extremely challenging problem which is further exacerbated due to lack of any publicly available database related to this problem. This is the first research in this direction with a two-fold contribution: (i) introducing the first of its kind skull-face image pair database, IdentifyMe, and (ii) presenting a preliminary approach using the proposed semi-supervised formulation of transform learning. The experimental results and comparison with existing algorithms showcase the challenging nature of the problem. We assert that the availability of the database will inspire researchers to build sophisticated skull-to-face matching algorithms.

3 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: A novel two-level fMRI dictionary learning approach to predict whether the observed stimulus is genuine or an imposter using the brain activation data for selected regions is proposed.
Abstract: This paper focuses on decoding the process of face verification in the human brain using fMRI responses. 2400 fMRI responses are collected from different participants while they perform face verification on genuine and imposter stimuli face pairs. The first part of the paper analyzes the responses, covering both cognitive and fMRI neuro-imaging results. With an average verification accuracy of 64.79% by human participants, the results of the cognitive analysis depict that the performance of female participants is significantly higher than that of male participants with respect to imposter pairs. The results of the neuro-imaging analysis identify regions of the brain, such as the left fusiform gyrus, caudate nucleus, and superior frontal gyrus, that are activated when participants perform face verification tasks. The second part of the paper proposes a novel two-level fMRI dictionary learning approach to predict whether the observed stimulus is genuine or an imposter using the brain activation data for selected regions. A comparative analysis with existing machine learning techniques illustrates that the proposed approach yields at least 4.5% higher classification accuracy than other algorithms. It is envisioned that the result of this study is the first step in designing brain-inspired automatic face verification algorithms.
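Dictionary-based classification of the kind the abstract describes can be sketched in its simplest form: learn (or fix) one dictionary per class and assign a test response to the class whose dictionary reconstructs it with the smallest residual. This is a hedged single-level illustration using plain least-squares coding; the paper's two-level formulation and its region selection are more elaborate:

```python
import numpy as np

def classify_by_dictionary(x, dictionaries):
    """Code the test response x against each per-class dictionary
    (columns = atoms) via least squares, and return the class label
    with the smallest reconstruction residual."""
    residuals = {}
    for label, D in dictionaries.items():
        coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals[label] = np.linalg.norm(x - D @ coeffs)
    return min(residuals, key=residuals.get)
```

In this setting the two dictionaries would be learned from activation patterns recorded during genuine and imposter trials respectively, and a held-out response is labeled by whichever dictionary explains it better.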

1 citation