
Showing papers by "Nalini K. Ratha published in 2014"


Journal ArticleDOI
TL;DR: A co-transfer learning framework, a cross-pollination of the transfer learning and co-training paradigms, is proposed and applied to cross-resolution face matching, enhancing the performance of cross-resolution face recognition.
Abstract: Face recognition algorithms are generally trained to match high-resolution images, and they perform well on test data of similar resolution. However, the performance of such systems degrades when low-resolution face images captured in unconstrained settings, such as surveillance video, are matched with high-resolution gallery images. The primary challenge here is to extract discriminating features from the limited biometric content of low-resolution images and match them to information-rich high-resolution face images. The problem of cross-resolution face matching is further aggravated when limited labeled positive data are available for training face recognition algorithms. In this paper, the problem of cross-resolution face matching is addressed, where low-resolution images are matched with a high-resolution gallery. A co-transfer learning framework is proposed, which is a cross-pollination of the transfer learning and co-training paradigms, and is applied to cross-resolution face matching. The transfer learning component transfers the knowledge learnt while matching high-resolution face images during training to the task of matching low-resolution probe images with the high-resolution gallery during testing. The co-training component facilitates this transfer of knowledge by assigning pseudo-labels to unlabeled probe instances in the target domain. The amalgamation of these two paradigms in the proposed ensemble framework enhances the performance of cross-resolution face recognition. Experiments on multiple face databases show the efficacy of the proposed algorithm in comparison with existing algorithms and a commercial system. In addition, several high-profile real-world cases are used to demonstrate the usefulness of the proposed approach in addressing these tough challenges.
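The abstract describes the mechanics at a high level; the sketch below illustrates only the co-training half of the idea, not the authors' implementation. Two generic matchers (SVMs with different kernels, standing in for the paper's ensemble) are trained on labeled genuine/impostor comparisons; each round, every matcher pseudo-labels the unlabeled probe comparisons it scores most confidently, and both matchers are retrained on the enlarged set. All names and parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def co_train(X_lab, y_lab, X_unl, rounds=3, per_round=20):
    """Each round, every matcher pseudo-labels its most confident unlabeled
    comparisons; both matchers are then retrained on the enlarged set."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = np.arange(len(X_unl))
    clfs = [SVC(kernel="linear", probability=True),
            SVC(kernel="rbf", probability=True)]
    for clf in clfs:
        clf.fit(X, y)
    for _ in range(rounds):
        if pool.size == 0:
            break
        picked = {}  # unlabeled index -> pseudo-label (genuine/impostor)
        for clf in clfs:
            proba = clf.predict_proba(X_unl[pool])
            conf = proba.max(axis=1)
            for i in np.argsort(conf)[::-1][:per_round]:
                picked[pool[i]] = int(clf.classes_[proba[i].argmax()])
        idx = np.fromiter(picked.keys(), dtype=int)
        pseudo = np.fromiter(picked.values(), dtype=int)
        X = np.vstack([X, X_unl[idx]])     # knowledge transfer via pseudo-labels
        y = np.concatenate([y, pseudo])
        pool = np.setdiff1d(pool, idx)
        for clf in clfs:
            clf.fit(X, y)
    return clfs
```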

68 citations


Proceedings ArticleDOI
TL;DR: A high-accuracy multimodal authentication system based on the fusion of several biometrics combined with a policy manager is presented, and a new biometric modality, chirography, based on users writing on multi-touch screens with their finger, is introduced.
Abstract: User authentication in the context of a secure transaction needs to be continuously evaluated for the risks associated with the transaction authorization. The situation becomes even more critical when there are regulatory compliance requirements. The need for such systems has grown dramatically with the introduction of smart mobile devices, which make it far easier for the user to complete such transactions quickly, but with far greater exposure to risk. Biometrics can play a significant role in addressing such problems, serving as a key indicator of user identity and thus reducing the risk of fraud. While unimodal biometric authentication systems are increasingly being adopted by mainstream mobile system manufacturers (e.g., fingerprint in iOS), we explore various opportunities for reducing risk in a multimodal biometric system. The multimodal system is based on the fusion of several biometrics combined with a policy manager. A new biometric modality, chirography, based on users writing on multi-touch screens with their finger, is introduced. Alongside chirography, we also use two other biometrics: face and voice. Our fusion strategy is based on inter-modality score-level fusion that takes into account a voice quality measure. The proposed system has been evaluated on an in-house database that reflects the latest smart mobile devices. On this database, we demonstrate a highly accurate multimodal authentication system, reaching an EER of 0.1% in an office environment and an EER of 0.5% in challenging noisy environments.
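As a rough illustration of inter-modality score-level fusion with a quality measure, the snippet below downweights the voice score when its quality estimate is low. The weights, normalization assumption, and discounting rule are choices made for this sketch; the abstract does not spell out the exact fusion rule.

```python
def fuse(face_s, voice_s, chiro_s, voice_quality, w=(0.4, 0.3, 0.3)):
    """Weighted-sum fusion of per-modality match scores (assumed already
    normalized to [0, 1]); the voice weight is scaled by a quality measure
    in [0, 1], so noisy audio contributes less to the fused score."""
    w_face, w_voice, w_chiro = w
    w_voice *= voice_quality               # discount low-quality voice samples
    total = w_face + w_voice + w_chiro
    return (w_face * face_s + w_voice * voice_s + w_chiro * chiro_s) / total

# Noisy environment: a low voice-quality estimate discounts the weak voice score.
print(fuse(face_s=0.92, voice_s=0.40, chiro_s=0.85, voice_quality=0.2))
```

The fused score would then be compared against a threshold chosen at the desired operating point, such as the equal error rate (EER).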

38 citations


Patent
04 Apr 2014
TL;DR: In this patent, a display device comprises a plurality of light emitting elements in a layer on a substrate, a plurality of microprisms positioned over the layer, a plurality of light detectors on the substrate, each light detector respectively corresponding to a light emitting element, and a display screen, wherein the light detectors are used to sense at least one property of an item in contact with the display screen.
Abstract: A display device comprises a plurality of light emitting elements in a layer on a substrate, a plurality of microprisms positioned over the layer, a plurality of light detectors on the substrate, each light detector respectively corresponding to a light emitting element of the plurality of light emitting elements, and a display screen, wherein the light detectors are used to sense at least one property of an item in contact with the display screen.

10 citations


Patent
05 Dec 2014
TL;DR: One or more processors generate a set of facial appearance parameters derived from a first facial image, then generate a graphics control vector based, at least in part, on that set of parameters.
Abstract: One or more processors generate a set of facial appearance parameters that are derived from a first facial image. One or more processors generate a graphics control vector based, at least in part, on the set of facial appearance parameters. One or more processors render a second facial image based on the graphics control vector. One or more processors compare the second facial image to the first image. One or more processors generate an adjusted vector by adjusting one or more parameters of the graphics control vector such that a degree of similarity between the second facial image and the first facial image is increased. The adjusted vector includes a biometric portion. One or more processors generate a first face representation based, at least in part, on the biometric portion of the adjusted vector.
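The claim describes an analysis-by-synthesis loop: render a face from a control vector, compare it to the input image, and adjust the vector to increase similarity. Below is a schematic of that loop under stated assumptions; the renderer, similarity measure, vector size, and finite-difference update rule are all placeholders, since the patent does not specify them.

```python
import numpy as np

def fit_control_vector(first_image, render, similarity,
                       dim=16, steps=100, eps=1e-3, lr=0.5):
    """Adjust a graphics control vector so the rendered (second) image grows
    more similar to the first image; numeric-gradient ascent, illustrative."""
    v = np.zeros(dim)                      # hypothetical control vector
    for _ in range(steps):
        base = similarity(render(v), first_image)
        grad = np.empty_like(v)
        for i in range(dim):               # finite-difference estimate
            dv = np.zeros(dim)
            dv[i] = eps
            grad[i] = (similarity(render(v + dv), first_image) - base) / eps
        v += lr * grad                     # step toward higher similarity
    return v  # a biometric portion of v would feed the face representation
```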

6 citations


Patent
30 Apr 2014
TL;DR: In this article, a method, system and computer program for categorizing heart diseases is presented, which is based on segmentation of the myocardium and the interior fibrous muscle.
Abstract: A method, system, and computer program for categorizing heart diseases are presented. An example method includes receiving a series of cardiac images of a heart, the cardiac images including a myocardium and interior fibrous muscles of the heart. Cardiac images are segmented into a myocardium segmentation showing an anatomical shape and a motion of the myocardium, and an interior fibrous muscles segmentation showing an anatomical shape and a motion of the interior fibrous muscles. The myocardium segmentation is converted into a regional characterization of the anatomical shape and motion of the myocardium. The interior fibrous muscles segmentation is converted into a regional characterization of the anatomical shape and motion of the interior fibrous muscles. Heart conditions are characterized based on the regional characterizations of the anatomical shape and the motion of the myocardium and the interior fibrous muscles.
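Read as a pipeline, the method chains segmentation, regional characterization, and classification. The sketch below fixes only that data flow; the segmentation model, regional descriptors, and classifier are placeholders for stages the patent leaves open.

```python
def categorize_heart_condition(image_series, segment, regionalize, classify):
    """Pipeline skeleton: segment each frame, summarize per-region shape and
    motion over the cardiac cycle, then classify the heart condition."""
    myo_masks = [segment(img, target="myocardium") for img in image_series]
    fib_masks = [segment(img, target="interior_fibrous") for img in image_series]
    myo_region = regionalize(myo_masks)   # regional shape + motion descriptors
    fib_region = regionalize(fib_masks)
    return classify(myo_region, fib_region)  # e.g., a disease-category label
```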

1 citation