scispace - formally typeset

Maneet Singh

Researcher at Indraprastha Institute of Information Technology

Publications - 55
Citations - 861

Maneet Singh is an academic researcher from Indraprastha Institute of Information Technology. The author has contributed to research in topics: Facial recognition system & Deep learning. The author has an h-index of 15 and has co-authored 51 publications receiving 605 citations.

Papers
Posted Content

Learning A Shared Transform Model for Skull to Digital Face Image Matching.

TL;DR: A novel Shared Transform Model is proposed for learning discriminative representations; it learns robust features while reducing the intra-class variations between skulls and digital face images, and can assist law enforcement agencies by speeding up skull identification and reducing the manual workload.
Proceedings Article

Triplet Transform Learning for Automated Primate Face Recognition

TL;DR: A novel Triplet Transform Learning (TTL) model is proposed for learning discriminative representations of primate faces; it outperforms existing approaches and attains state-of-the-art performance on the primate database.
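At the core of a triplet-based model like TTL is the standard triplet margin objective: an anchor embedding is pulled toward a positive sample of the same identity and pushed away from a negative of a different identity. The sketch below shows only this generic loss in numpy; the paper's actual contribution (the learned transform) is not reproduced, and all variable names here are illustrative assumptions.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet margin loss: encourage the anchor-positive distance
    to be smaller than the anchor-negative distance by at least `margin`.
    (Illustrative sketch only; TTL couples this objective with a learned
    transform, which is not reproduced here.)"""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings: the loss is zero once the negative is farther away
# than the positive by at least the margin.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same identity
n = np.array([3.0, 0.0])   # different identity
loss = triplet_margin_loss(a, p, n)  # → 0.0
```

With a negative that sits too close to the anchor, the hinge activates and the loss becomes positive, which is what drives identities apart in embedding space.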
Proceedings Article

Region-specific fMRI dictionary for decoding face verification in humans

TL;DR: A novel two-level fMRI dictionary learning approach is proposed to predict whether an observed stimulus is genuine or an imposter, using brain activation data from selected regions.
Book Chapter

GroupMixNorm Layer for Learning Fair Models

TL;DR: The authors propose GroupMixNorm, a novel in-processing layer for mitigating bias in deep learning models, which probabilistically mixes group-level feature statistics of samples across different groups defined by the protected attribute.
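The mixing of group-level feature statistics described in the TL;DR can be sketched as follows: normalize each sample with its own group's mean and standard deviation, then re-scale it with statistics interpolated toward a randomly chosen partner group, blurring the protected-attribute signal carried by those statistics. This is a minimal numpy sketch of the idea only; the function name, the Beta-distributed mixing coefficient, and the pairing scheme are assumptions, not the paper's exact formulation.

```python
import numpy as np

def group_mix_norm(features, groups, alpha=0.5, rng=None):
    """Sketch of mixing group-level feature statistics.
    Each sample is standardized with its own group's (mean, std), then
    re-scaled with statistics interpolated toward a randomly paired group,
    so group membership is harder to read off the feature distribution.
    (Illustrative assumption, not the paper's exact layer.)"""
    rng = rng or np.random.default_rng(0)
    group_ids = np.unique(groups)
    stats = {g: (features[groups == g].mean(0),
                 features[groups == g].std(0) + 1e-6) for g in group_ids}
    out = np.empty_like(features)
    for i, (x, g) in enumerate(zip(features, groups)):
        mu, sd = stats[g]
        partner = rng.choice(group_ids)        # group to mix statistics with
        mu_p, sd_p = stats[partner]
        lam = rng.beta(alpha, alpha)           # assumed mixing coefficient
        mu_mix = lam * mu + (1 - lam) * mu_p
        sd_mix = lam * sd + (1 - lam) * sd_p
        out[i] = (x - mu) / sd * sd_mix + mu_mix
    return out
```

As an in-processing method, a layer like this would sit inside the network during training only, so the classifier never learns to rely on group-specific feature statistics.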
Journal Article

Detox Loss: Fairness Constraints for Learning With Imbalanced Data

TL;DR: The authors propose the Detox loss, a bias-invariant feature learning loss function for training unbiased models; it can be used to learn fairer deep learning classifiers and to mitigate bias in existing pre-trained networks, especially under the challenging constraint of training data that is imbalanced with respect to a protected attribute.