Author

Prateekshit Pandey

Bio: Prateekshit Pandey is an academic researcher from the University of Pennsylvania. The author has contributed to research in topics including behavior change and facial recognition systems. The author has an h-index of 5 and has co-authored 10 publications receiving 126 citations. Previous affiliations of Prateekshit Pandey include the Indraprastha Institute of Information Technology.

Papers
Proceedings ArticleDOI
TL;DR: A novel descriptor-based minutiae detection algorithm for latent fingerprints that shows promising results for latent fingerprint matching on the NIST SD-27 database.
Abstract: Latent fingerprint identification is of critical importance in criminal investigation. The FBI's Next Generation Identification program demands that latent fingerprint identification be performed in lights-out mode, with very little or no human intervention. However, the performance of automated latent fingerprint identification is limited by imprecise automated feature (minutiae) extraction, specifically due to noisy ridge patterns and the presence of background noise. In this paper, we propose a novel descriptor-based minutiae detection algorithm for latent fingerprints. Minutia and non-minutia descriptors are learnt from a large number of tenprint fingerprint patches using stacked denoising sparse autoencoders. Latent fingerprint minutiae extraction is then posed as a binary classification problem: classifying patches as minutia or non-minutia patches. Experiments performed on the NIST SD-27 database show promising results on latent fingerprint matching.
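The abstract frames latent minutiae extraction as binary classification of patches over descriptors learnt by denoising autoencoders. As a rough illustration only (not the authors' implementation; the patch size, noise level, and sparsity weight below are assumptions), the idea could be sketched in Python/PyTorch as follows:

# Hypothetical sketch: learn patch descriptors with a denoising autoencoder
# and classify patches as minutia vs. non-minutia (dimensions are assumed).
import torch
import torch.nn as nn

PATCH = 32 * 32  # assumed flattened grayscale patch size

class DenoisingAE(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(PATCH, hidden), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(hidden, PATCH), nn.Sigmoid())

    def forward(self, x):
        noisy = x + 0.1 * torch.randn_like(x)    # corrupt the input, reconstruct the clean patch
        code = self.enc(noisy)
        return self.dec(code), code

ae = DenoisingAE()
clf = nn.Linear(256, 2)                          # minutia vs. non-minutia logits
opt = torch.optim.Adam(list(ae.parameters()) + list(clf.parameters()), lr=1e-3)

def train_step(patches, labels):
    """patches: float tensor (N, 1024) in [0, 1]; labels: long tensor (N,), 1 = minutia."""
    recon, code = ae(patches)
    loss = (nn.functional.mse_loss(recon, patches)
            + nn.functional.cross_entropy(clf(code), labels)
            + 1e-4 * code.abs().mean())          # sparsity penalty on the code
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()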

50 citations

Journal ArticleDOI
TL;DR: In this paper, the authors proposed that focusing on values and activities that transcend the self can allow people to see that their self-worth is not tied to a specific behavior in question, and in turn become more receptive to subsequent, otherwise threatening health information.
Abstract: Self-transcendence refers to a shift in mindset from focusing on self-interests to the well-being of others. We offer an integrative neural model of self-transcendence in the context of persuasive messaging by examining the mechanisms of self-transcendence in promoting receptivity to health messages and behavior change. Specifically, we posited that focusing on values and activities that transcend the self can allow people to see that their self-worth is not tied to a specific behavior in question, and in turn become more receptive to subsequent, otherwise threatening health information. To test whether inducing self-transcendent mindsets before message delivery would help overcome defensiveness and increase receptivity, we used two priming tasks, affirmation and compassion, to elicit a transcendent mindset among 220 sedentary adults. As preregistered, those who completed a self-transcendence task before health message exposure, compared with controls, showed greater increases in objectively logged levels of physical activity throughout the following month. In the brain, self-transcendence tasks up-regulated activity in a region of the ventromedial prefrontal cortex, chosen for its role in positive valuation and reward processing. During subsequent health message exposure, self-transcendence priming was associated with increased activity in subregions of the ventromedial prefrontal cortex, implicated in self-related processing and positive valuation, which predicted later decreases in sedentary behavior. The present findings suggest that having a positive self-transcendent mindset can increase behavior change, in part by increasing neural receptivity to health messaging.

30 citations

Proceedings ArticleDOI
13 Jun 2016
TL;DR: An autoencoder-style mapping function (AutoScat) is proposed that learns to encode the ScatNet representation of a face image to reduce the computation time.
Abstract: Prolonged usage of illicit drugs alters the texture and geometry of a face and hence affects the performance of face recognition algorithms. This research makes a two-fold contribution toward advancing the state of the art in recognizing face images with variations caused by substance abuse: first, a scattering transform (ScatNet) based face recognition algorithm is proposed. The algorithm yields good results; however, it is very expensive in terms of computation time and space. Therefore, as the next contribution, an autoencoder-style mapping function (AutoScat) is proposed that learns to encode the ScatNet representation of a face image to reduce the computation time. The results are evaluated on the publicly available Illicit Drug Abuse Face database. They show that the ScatNet based face recognition algorithm outperforms two commercial matchers. Further, compared with ScatNet, AutoScat achieves a lower rank-1 accuracy but requires roughly 10^-3 times the computation and a feature space around 400 times smaller.
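To make the AutoScat idea concrete, the sketch below compresses a large ScatNet descriptor into a small code used for matching; the feature and code dimensions are illustrative assumptions, and the actual AutoScat architecture and training objective may differ:

# Hypothetical AutoScat-style sketch: compress ScatNet features (assumed to be
# precomputed D-dimensional vectors) into a compact code used for matching.
import torch
import torch.nn as nn

D_SCAT, D_CODE = 20000, 64                        # assumed feature / code sizes

class AutoScat(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(D_SCAT, D_CODE), nn.Tanh())
        self.dec = nn.Linear(D_CODE, D_SCAT)      # reconstruction target: the ScatNet feature

    def forward(self, x):
        code = self.enc(x)
        return self.dec(code), code

def match_scores(model, probe_feat, gallery_feats):
    """probe_feat: (1, D_SCAT); gallery_feats: (N, D_SCAT). Returns (N,) similarities."""
    with torch.no_grad():
        _, p = model(probe_feat)
        _, g = model(gallery_feats)
    return nn.functional.cosine_similarity(p, g)  # cosine match on the compact codes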

26 citations

Proceedings ArticleDOI
07 Mar 2016
TL;DR: The experimental results show the decreased performance of current face recognition algorithms on drug abuse face images, and a projective Dictionary learning based illicit Drug Abuse face Classification (DDAC) framework is proposed to effectively detect and separate faces affected by drug abuse from normal faces.
Abstract: Over the years, significant research has been undertaken to improve the performance of face recognition in the presence of covariates such as variations in pose, illumination, expression, aging, and use of disguises. This paper highlights the effect of illicit drug abuse on facial features. An Illicit Drug Abuse Face (IDAF) database of 105 subjects has been created to study the performance of two commercial face recognition systems and popular face recognition algorithms. The experimental results show the decreased performance of current face recognition algorithms on drug abuse face images. This paper also proposes a projective Dictionary learning based illicit Drug Abuse face Classification (DDAC) framework to effectively detect and separate faces affected by drug abuse from normal faces. This important pre-processing step can stimulate researchers to develop a new class of face recognition algorithms specifically designed to improve face recognition performance on faces affected by drug abuse. The proposed DDAC framework achieves the highest classification accuracy of 88.81% in detecting such faces on a combined database of illicit drug abuse and regular faces.
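As a simplified illustration of residual-based classification in the spirit of the DDAC framework (using scikit-learn's generic dictionary learning rather than the paper's projective formulation; atom counts and penalties are assumptions), a face descriptor can be assigned to the class whose dictionary reconstructs it best:

# Simplified, hypothetical sketch of per-class dictionary classification.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def fit_class_dictionaries(features_by_class, n_atoms=40):
    """features_by_class: dict label -> (N_c, D) array of face descriptors."""
    dicts = {}
    for label, X in features_by_class.items():
        dl = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=200)
        dicts[label] = dl.fit(X)
    return dicts

def classify(x, dicts):
    """Assign x (shape (D,)) to the class with the smallest reconstruction residual."""
    best_label, best_err = None, np.inf
    for label, dl in dicts.items():
        code = dl.transform(x[None, :])           # sparse code under this class's dictionary
        err = np.linalg.norm(x - code @ dl.components_)
        if err < best_err:
            best_label, best_err = label, err
    return best_label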

24 citations

02 Oct 2018
TL;DR: It is proposed that focusing on values and activities that transcend the self can allow people to see that their self-worth is not tied to a specific behavior in question, and in turn become more receptive to subsequent, otherwise threatening health information.
Abstract: Self-transcendence refers to a shift in mindset from focusing on self-interests to the well-being of others. We offer an integrative neural model of self-transcendence in the context of persuasive messaging by examining the mechanisms of self-transcendence in promoting receptivity to health messages and behavior change. Specifically, we posited that focusing on values and activities that transcend the self can allow people to see that their self-worth is not tied to a specific behavior in question, and in turn become more receptive to subsequent, otherwise threatening health information. To test whether inducing self-transcendent mindsets before message delivery would help overcome defensiveness and increase receptivity, we used two priming tasks, affirmation and compassion, to elicit a transcendent mindset among 220 sedentary adults. As preregistered, those who completed a self-transcendence task before health message exposure, compared with controls, showed greater increases in objectively logged levels of physical activity throughout the following month. In the brain, self-transcendence tasks up-regulated activity in a region of the ventromedial prefrontal cortex, chosen for its role in positive valuation and reward processing. During subsequent health message exposure, self-transcendence priming was associated with increased activity in subregions of the ventromedial prefrontal cortex, implicated in self-related processing and positive valuation, which predicted later decreases in sedentary behavior. The present findings suggest that having a positive self-transcendent mindset can increase behavior change, in part by increasing neural receptivity to health messaging.

16 citations


Cited by
Journal ArticleDOI
TL;DR: A novel cosaliency detection approach using deep learning models that introduces two concepts, intrasaliency prior transfer and deep intersaliency mining, to extract more intrinsic and general hidden patterns and discover the homogeneity of cosalient objects in terms of higher-level concepts.
Abstract: As an interesting and emerging topic, cosaliency detection aims at simultaneously extracting common salient objects in multiple related images. It differs from the conventional saliency detection paradigm in which saliency detection for each image is determined one by one independently without taking advantage of the homogeneity in the data pool of multiple related images. In this paper, we propose a novel cosaliency detection approach using deep learning models. Two new concepts, called intrasaliency prior transfer and deep intersaliency mining, are introduced and explored in the proposed work. For the intrasaliency prior transfer, we build a stacked denoising autoencoder (SDAE) to learn the saliency prior knowledge from auxiliary annotated data sets and then transfer the learned knowledge to estimate the intrasaliency for each image in cosaliency data sets. For the deep intersaliency mining, we formulate it by using the deep reconstruction residual obtained in the highest hidden layer of a self-trained SDAE. The obtained deep intersaliency can extract more intrinsic and general hidden patterns to discover the homogeneity of cosalient objects in terms of some higher level concepts. Finally, the cosaliency maps are generated by weighted integration of the proposed intrasaliency prior, deep intersaliency, and traditional shallow intersaliency. Comprehensive experiments over diverse publicly available benchmark data sets demonstrate consistent performance gains of the proposed method over the state-of-the-art cosaliency detection methods.
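The final fusion step described above, a weighted integration of the intrasaliency prior, deep intersaliency, and shallow intersaliency maps, could look roughly like the sketch below; the weights are placeholders rather than the paper's values:

# Minimal sketch of weighted cosaliency map fusion (weights are assumed).
import numpy as np

def normalize(m):
    m = m.astype(float)
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

def fuse_cosaliency(intra_prior, deep_inter, shallow_inter, weights=(0.4, 0.4, 0.2)):
    """Each input is an HxW saliency map for one image of the group."""
    maps = [normalize(m) for m in (intra_prior, deep_inter, shallow_inter)]
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalize(fused)                       # final cosaliency map in [0, 1]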

171 citations

Journal ArticleDOI
TL;DR: An automated latent fingerprint recognition algorithm that utilizes Convolutional Neural Networks (ConvNets) for ridge flow estimation and minutiae descriptor extraction, and extracts complementary templates (two minutiae templates and one texture template) to represent the latent.
Abstract: Latent fingerprints are one of the most important and widely used evidence in law enforcement and forensic agencies worldwide. Yet, NIST evaluations show that the performance of state-of-the-art latent recognition systems is far from satisfactory. An automated latent fingerprint recognition system with high accuracy is essential to compare latents found at crime scenes to a large collection of reference prints to generate a candidate list of possible mates. In this paper, we propose an automated latent fingerprint recognition algorithm that utilizes Convolutional Neural Networks (ConvNets) for ridge flow estimation and minutiae descriptor extraction, and extract complementary templates (two minutiae templates and one texture template) to represent the latent. The comparison scores between the latent and a reference print based on the three templates are fused to retrieve a short candidate list from the reference database. Experimental results show that the rank-1 identification accuracies (query latent is matched with its true mate in the reference database) are 64.7 percent for the NIST SD27 and 75.3 percent for the WVU latent databases, against a reference database of 100K rolled prints. These results are the best among published papers on latent recognition and competitive with the performance (66.7 and 70.8 percent rank-1 accuracies on NIST SD27 and WVU DB, respectively) of a leading COTS latent Automated Fingerprint Identification System (AFIS). By score-level (rank-level) fusion of our system with the commercial off-the-shelf (COTS) latent AFIS, the overall rank-1 identification performance can be improved from 64.7 and 75.3 to 73.3 percent (74.4 percent) and 76.6 percent (78.4 percent) on NIST SD27 and WVU latent databases, respectively.
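A hedged sketch of the score-level fusion step, combining the scores of the two minutiae templates and the texture template to rank reference prints, is shown below; the min-max normalization and the weights are illustrative assumptions rather than the paper's exact scheme:

# Hypothetical sketch of template score fusion and candidate-list retrieval.
import numpy as np

def minmax(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-8)

def fuse_and_rank(minutiae1, minutiae2, texture, weights=(0.4, 0.4, 0.2), top_k=10):
    """Each score array holds one latent's scores against every reference print."""
    fused = sum(w * minmax(s) for w, s in zip(weights, (minutiae1, minutiae2, texture)))
    candidates = np.argsort(-fused)[:top_k]       # indices of the best-matching prints
    return candidates, fused[candidates]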

139 citations

Journal ArticleDOI
TL;DR: Longitudinal analysis of two mugshot databases shows that despite decreasing genuine scores, 99% of subjects can still be recognized at 0.01% FAR up to approximately 6 years of elapsed time, and that age, sex, and race only marginally influence these trends.
Abstract: The two underlying premises of automatic face recognition are uniqueness and permanence. This paper investigates the permanence property by addressing the following: Does face recognition ability of state-of-the-art systems degrade with elapsed time between enrolled and query face images? If so, what is the rate of decline w.r.t. the elapsed time? While previous studies have reported degradations in accuracy, no formal statistical analysis of large-scale longitudinal data has been conducted. We conduct such an analysis on two mugshot databases, which are the largest facial aging databases studied to date in terms of number of subjects, images per subject, and elapsed times. Mixed-effects regression models are applied to genuine similarity scores from state-of-the-art COTS face matchers to quantify the population-mean rate of change in genuine scores over time, subject-specific variability, and the influence of age, sex, race, and face image quality. Longitudinal analysis shows that despite decreasing genuine scores, 99% of subjects can still be recognized at 0.01% FAR up to approximately 6 years elapsed time, and that age, sex, and race only marginally influence these trends. The methodology presented here should be periodically repeated to determine age-invariant properties of face recognition as state-of-the-art evolves to better address facial aging.
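The longitudinal analysis described above can be approximated with a standard linear mixed-effects model; the sketch below uses statsmodels with hypothetical column names and a deliberately simplified specification (the paper's models also include covariates such as race and face image quality and richer random-effect structures):

# Sketch of a mixed-effects model of genuine scores vs. elapsed time.
# Column names (subject, genuine_score, elapsed_years, age, sex) are assumed.
import pandas as pd
import statsmodels.formula.api as smf

def fit_longitudinal_model(df: pd.DataFrame):
    model = smf.mixedlm(
        "genuine_score ~ elapsed_years + age + C(sex)",
        data=df,
        groups=df["subject"],          # subject-specific random effects
        re_formula="~elapsed_years",   # random slope for elapsed time per subject
    )
    result = model.fit()
    return result                      # result.params holds the population-mean trend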

108 citations

Proceedings ArticleDOI
TL;DR: FingerNet combines domain knowledge and the representation ability of deep learning for minutiae extraction, and achieves state-of-the-art performance on the NIST SD27 latent database and the FVC 2004 slap database.
Abstract: Minutiae extraction is of critical importance in automated fingerprint recognition. Previous works on rolled/slap fingerprints failed on latent fingerprints due to noisy ridge patterns and complex background noise. In this paper, we propose a new way to design a deep convolutional network that combines domain knowledge and the representation ability of deep learning. For orientation estimation, segmentation, enhancement, and minutiae extraction, several typical traditional methods that perform well on rolled/slap fingerprints are transformed into convolutional forms and integrated as a unified plain network. We demonstrate that this pipeline is equivalent to a shallow network with fixed weights. The network is then expanded to enhance its representation ability, and the weights are released to learn complex background variation from data while preserving end-to-end differentiability. Experimental results on the NIST SD27 latent database and the FVC 2004 slap database demonstrate that the proposed algorithm outperforms the state-of-the-art minutiae extraction algorithms. Code is made publicly available at: https://github.com/felixTY/FingerNet.
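The core idea, initializing convolutional layers from classical fingerprint-processing filters and then releasing the weights for end-to-end learning, can be illustrated with the toy snippet below; the Gabor-like filter bank is a placeholder, not FingerNet's actual parameterization:

# Toy sketch: seed a conv layer with a hand-crafted oriented filter bank,
# then let the weights be trained end-to-end (filters here are placeholders).
import numpy as np
import torch
import torch.nn as nn

def gabor_bank(n_orient=8, ksize=9, sigma=2.0, lambd=4.0):
    """Build n_orient oriented filters as an (n_orient, 1, k, k) tensor."""
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    filters = []
    for i in range(n_orient):
        theta = np.pi * i / n_orient
        xr = xx * np.cos(theta) + yy * np.sin(theta)
        g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)
        filters.append(g)
    return torch.tensor(np.stack(filters)[:, None], dtype=torch.float32)

orient_conv = nn.Conv2d(1, 8, kernel_size=9, padding=4, bias=False)
with torch.no_grad():
    orient_conv.weight.copy_(gabor_bank())        # start from the classical filters
# orient_conv.weight requires gradients by default, so training "releases" the
# hand-crafted initialization to adapt to latent background variation.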

93 citations

Journal ArticleDOI
Huaping Liu, Yupei Wu, Fuchun Sun, Bin Fang, Di Guo
TL;DR: A novel projective dictionary learning framework for weakly paired multimodal data fusion is established by introducing a latent pairing matrix, which enables simultaneous dictionary learning and pairing matrix estimation and therefore improves the fusion effect.
Abstract: The ever-growing development of sensor technology has led to the use of multimodal sensors to develop robotics and automation systems. It is therefore highly desirable to develop methodologies capable of integrating information from multimodal sensors with the goal of improving the performance of surveillance, diagnosis, prediction, and so on. However, real multimodal data often suffer from significant weak-pairing characteristics, i.e., the full pairing between data samples may not be known, while the pairing of a group of samples from one modality to a group of samples in another modality is known. In this paper, we establish a novel projective dictionary learning framework for weakly paired multimodal data fusion. By introducing a latent pairing matrix, we realize simultaneous dictionary learning and pairing matrix estimation, and therefore improve the fusion effect. In addition, the kernelized version and the optimization algorithms are also addressed. Extensive experimental validations on existing data sets are performed to show the advantages of the proposed method. Note to Practitioners: In many industrial environments, we usually use multiple heterogeneous sensors, which provide multimodal information. Such multimodal data usually lead to two technical challenges. First, different sensors may provide different patterns of data. Second, the full-pairing information between modalities may not be known. In this paper, we develop a unified model to tackle such problems. The model is based on a projective dictionary learning method, which efficiently produces the representation vector for the original data in an explicit form. In addition, the latent pairing relation between samples can be learned automatically and used to improve the classification performance. Such a method can be flexibly used for multimodal fusion with full-pairing, partial-pairing, and weak-pairing cases.
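A highly simplified sketch of the latent-pairing idea follows: given projective codes for the samples of two modalities, a binary pairing matrix can be estimated by solving an assignment problem on cross-modal similarity (the paper's joint optimization over the dictionaries and the pairing matrix is considerably richer):

# Hypothetical sketch: estimate a binary latent pairing matrix between two
# modalities from the cosine similarity of their projective codes.
import numpy as np
from scipy.optimize import linear_sum_assignment

def estimate_pairing(codes_a, codes_b):
    """codes_a: (N, K), codes_b: (M, K) codes of the two modalities."""
    a = codes_a / (np.linalg.norm(codes_a, axis=1, keepdims=True) + 1e-8)
    b = codes_b / (np.linalg.norm(codes_b, axis=1, keepdims=True) + 1e-8)
    cost = -a @ b.T                               # negative similarity = assignment cost
    rows, cols = linear_sum_assignment(cost)
    pairing = np.zeros((codes_a.shape[0], codes_b.shape[0]))
    pairing[rows, cols] = 1.0                     # binary latent pairing matrix
    return pairing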

78 citations