Proceedings ArticleDOI

On matching latent to latent fingerprints

TL;DR: A comparative analysis of existing algorithms is presented for this application, fusion and context switching frameworks are proposed to improve identification performance, and a multi-latent fingerprint database is prepared.
Abstract: This research presents a forensics application of matching two latent fingerprints. In crime scene settings, it is often necessary to match multiple latent fingerprints. Unlike matching a latent against inked or live fingerprints, this problem is very challenging and requires careful analysis and attention. The contribution of this paper is threefold: (i) a comparative analysis of existing algorithms is presented for this application, (ii) fusion and context switching frameworks are proposed to improve the identification performance, and (iii) a multi-latent fingerprint database is prepared. The experiments highlight the need for improved feature extraction and processing methods and show substantial scope for improvement on this important research problem.


Citations
Journal ArticleDOI
TL;DR: The results showed that the software performed reliably up to a compression ratio of 30–40% of the raw images; any higher ratio negatively affected the accuracy of the system.

Abstract: Despite the large body of work on fingerprint identification systems, most of it has focused on specialized capture devices. Due to the high price of such devices, some researchers have turned to digital cameras as an alternative source of fingerprint images. However, such sources introduce new challenges related to image quality: most digital cameras compress captured images before storing them, leading to potential loss of information. This study addresses the need to determine the optimum fingerprint image compression ratio that preserves the identification system's high accuracy. The study is conducted on a large in-house dataset of raw images, so all fingerprint information is retained and the compression ratio can be determined accurately. The results showed that the software performed reliably up to a compression ratio of 30–40% of the raw images; any higher ratio negatively affected the accuracy of the system.

154 citations


Additional excerpts

  • ...IIIT-D Latent database [47] consists of images of all 10 fingers for 15 subjects....


Journal ArticleDOI
TL;DR: A new fingerprint matching algorithm which is especially designed for matching latents and uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints and measures similarity between fingerprints by considering both minutiae and orientation field information.
Abstract: Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is a routine procedure that is extremely important to forensics and law enforcement agencies. Latents are partial fingerprints that are usually smudgy, with small area and containing large distortion. Due to these characteristics, latents have a significantly smaller number of minutiae points compared to full (rolled or plain) fingerprints. The small number of minutiae and the noise characteristic of latents make it extremely difficult to automatically match latents to their mated full prints that are stored in law enforcement databases. Although a number of algorithms for matching full-to-full fingerprints have been published in the literature, they do not perform well on the latent-to-full matching problem. Further, they often rely on features that are not easy to extract from poor quality latents. In this paper, we propose a new fingerprint matching algorithm which is especially designed for matching latents. The proposed algorithm uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints and measures similarity between fingerprints by considering both minutiae and orientation field information. To be consistent with the common practice in latent matching (i.e., only minutiae are marked by latent examiners), the orientation field is reconstructed from minutiae. Since the proposed algorithm relies only on manually marked minutiae, it can be easily used in law enforcement applications. Experimental results on two different latent databases (NIST SD27 and WVU latent databases) show that the proposed algorithm outperforms two well optimized commercial fingerprint matchers. Further, a fusion of the proposed algorithm and commercial fingerprint matchers leads to improved matching accuracy.
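The orientation-field reconstruction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it interpolates a block-wise ridge orientation field from manually marked minutia directions using inverse-distance weighting, averaging in the doubled-angle domain because a ridge orientation is only defined modulo pi. The block size and weighting scheme are assumptions.

```python
import numpy as np

def reconstruct_orientation_field(minutiae, shape, block=16):
    """Interpolate a block-wise ridge orientation field from minutia
    directions (x, y, theta). Simplified stand-in for the paper's
    reconstruction step; averages are taken in the doubled-angle
    domain so that opposite minutia directions agree."""
    h, w = shape
    pts = np.array([(x, y) for (x, y, t) in minutiae], dtype=float)
    # fold minutia direction (mod 2*pi) to ridge orientation (mod pi)
    ori = np.array([t % np.pi for (x, y, t) in minutiae])
    cos2, sin2 = np.cos(2 * ori), np.sin(2 * ori)
    field = np.zeros((h // block, w // block))
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            cx, cy = (j + 0.5) * block, (i + 0.5) * block
            d2 = (pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2
            wgt = 1.0 / (d2 + 1e-6)          # inverse-distance weights
            field[i, j] = 0.5 * np.arctan2((wgt * sin2).sum(),
                                           (wgt * cos2).sum()) % np.pi
    return field
```

With only a handful of marked minutiae, such an interpolated field is coarse, which is one reason the similarity measure also keeps the minutiae themselves.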

119 citations

Journal ArticleDOI
TL;DR: This paper proposes the first method in the literature able to extract the coordinates of the pores from touch-based, touchless, and latent fingerprint images, and uses specifically designed and trained Convolutional Neural Networks to estimate and refine the centroid of each pore.
Abstract: Most fingerprint recognition systems use Level 1 characteristics (ridge flow, orientation, and frequency) and Level 2 features (minutiae points) to recognize individuals. Level 3 features (sweat pores, incipient ridges and ultra-thin characteristics of the ridges) are less frequently adopted because they can be extracted only from high resolution images, but they have the potential of improving all the steps of the biometric recognition process. In particular, sweat pores can be used for quality assessment, liveness detection, biometric matching in live applications, and matching of partial latent fingerprints in forensic applications. Currently, each type of fingerprint acquisition technique (touch-based, touchless, or latent) requires a different algorithm for pore extraction. In this paper, we propose the first method in the literature able to extract the coordinates of the pores from touch-based, touchless, and latent fingerprint images. Our method uses specifically designed and trained Convolutional Neural Networks (CNN) to estimate and refine the centroid of each pore. Results show that our method is feasible and achieved satisfactory accuracy for all the types of evaluated images, with a better performance with respect to the compared state-of-the-art methods.

81 citations


Cites background from "On matching latent to latent finger..."

  • ...Another dataset of latent fingerprint images with high resolution is the IIIT-D Latent Fingerprint Database [43]....


Proceedings ArticleDOI
TL;DR: A new fingerprint matching algorithm which is especially designed for matching latents and uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints and measures similarity between fingerprints by considering both minutiae and orientation field information.
Abstract: Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is extremely important to law enforcement agencies. Latents are usually partial fingerprints covering a small area; they contain nonlinear distortion and are often smudgy and blurred. Because of these characteristics, they have significantly fewer minutiae points (one of the most important features in fingerprint matching), and therefore it can be extremely difficult to automatically match latents to plain or rolled fingerprints stored in law enforcement databases. Our goal is to develop a latent matching algorithm that uses only minutiae information. The proposed approach consists of the following three modules: (i) align two sets of minutiae by using a descriptor-based Hough transform; (ii) establish the correspondences between minutiae; and (iii) compute a similarity score. Experimental results on NIST SD27 show that the proposed algorithm outperforms a commercial fingerprint matcher.
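The three modules can be sketched roughly as below. This is a simplified illustration, not the published algorithm: here every minutia pair votes with equal weight, whereas the descriptor-based Hough transform weights votes by local descriptor similarity, and the greedy pairing and score normalization are assumptions.

```python
import numpy as np

def hough_align(latent, full, trans_bin=10, ang_bin=np.pi / 18):
    """Module (i): vote for the (dx, dy, dtheta) that best aligns two
    minutiae sets. Each minutia is (x, y, theta). Simplified: every
    pair votes with weight 1; the paper weights by descriptor similarity."""
    votes = {}
    for (x1, y1, t1) in latent:
        for (x2, y2, t2) in full:
            dtheta = (t2 - t1) % (2 * np.pi)
            c, s = np.cos(dtheta), np.sin(dtheta)
            # translation mapping the rotated latent minutia onto the full one
            dx = x2 - (c * x1 - s * y1)
            dy = y2 - (s * x1 + c * y1)
            key = (round(dx / trans_bin), round(dy / trans_bin),
                   round(dtheta / ang_bin))
            votes[key] = votes.get(key, 0) + 1
    (bx, by, bt), _ = max(votes.items(), key=lambda kv: kv[1])
    return bx * trans_bin, by * trans_bin, bt * ang_bin

def match_score(latent, full, dist_thr=15.0, ang_thr=np.pi / 6):
    """Modules (ii) and (iii): align with the Hough estimate, greedily
    pair nearby minutiae, and return a normalized similarity score."""
    dx, dy, dtheta = hough_align(latent, full)
    c, s = np.cos(dtheta), np.sin(dtheta)
    aligned = [(c * x - s * y + dx, s * x + c * y + dy,
                (t + dtheta) % (2 * np.pi)) for (x, y, t) in latent]
    used, matched = set(), 0
    for (x1, y1, t1) in aligned:
        for j, (x2, y2, t2) in enumerate(full):
            if j in used:
                continue
            d_ang = abs((t1 - t2 + np.pi) % (2 * np.pi) - np.pi)
            if np.hypot(x1 - x2, y1 - y2) < dist_thr and d_ang < ang_thr:
                used.add(j)
                matched += 1
                break
    return matched / max(len(latent), 1)
```

The voting step makes the alignment robust to spurious minutiae: a wrong pair contributes a single stray vote, while the true transformation accumulates votes from every correctly mated pair.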

80 citations


Cites background from "On matching latent to latent finger..."

  • ...There have also been some studies on fusion of multiple matchers [21] or multiple latent prints [22]....


Journal ArticleDOI
TL;DR: The process of automatic latent fingerprint matching is divided into five definite stages, and the existing algorithms, limitations, and future research directions in each of the stages are discussed.
Abstract: Latent fingerprints have been used as evidence in courts of law for over 100 years. However, even today, a completely automated latent fingerprint system has not been achieved. Researchers have identified several important challenges in latent fingerprint recognition: 1) low information content; 2) presence of background noise and nonlinear ridge distortion; 3) need for an established scientific procedure for matching latent fingerprints; and 4) lack of publicly available latent fingerprint databases. The process of automatic latent fingerprint matching is divided into five definite stages, and this paper discusses the existing algorithms, limitations, and future research directions in each of the stages.

72 citations


Cites methods from "On matching latent to latent finger..."

  • ...The baseline accuracies are computed on two commonly used public latent fingerprint databases - NIST SD-27 [16] and IIIT-D latent fingerprint database [78]....


  • ...There are three publicly available latent fingerprint databases namely: NIST SD-27 [16] database, IIIT-D latent fingerprint database [78], and IIIT-D SLF database [80]....


References
Book
12 Aug 2008
TL;DR: This book explains the principles that make support vector machines (SVMs) a successful modelling and prediction tool for a variety of applications and provides a unique in-depth treatment of both fundamental and recent material on SVMs that so far has been scattered in the literature.
Abstract: This book explains the principles that make support vector machines (SVMs) a successful modelling and prediction tool for a variety of applications. The authors present the basic ideas of SVMs together with the latest developments and current research questions in a unified style. They identify three reasons for the success of SVMs: their ability to learn well with only a very small number of free parameters, their robustness against several types of model violations and outliers, and their computational efficiency compared to several other methods. Since their appearance in the early nineties, support vector machines and related kernel-based methods have been successfully applied in diverse fields of application such as bioinformatics, fraud detection, construction of insurance tariffs, direct marketing, and data and text mining. As a consequence, SVMs now play an important role in statistical machine learning and are used not only by statisticians, mathematicians, and computer scientists, but also by engineers and data analysts. The book provides a unique in-depth treatment of both fundamental and recent material on SVMs that so far has been scattered in the literature. The book can thus serve as both a basis for graduate courses and an introduction for statisticians, mathematicians, and computer scientists. It further provides a valuable reference for researchers working in the field. The book covers all important topics concerning support vector machines such as: loss functions and their role in the learning process; reproducing kernel Hilbert spaces and their properties; a thorough statistical analysis that uses both traditional uniform bounds and more advanced localized techniques based on Rademacher averages and Talagrand's inequality; a detailed treatment of classification and regression; a detailed robustness analysis; and a description of some of the most recent implementation techniques. 
To make the book self-contained, an extensive appendix is added which provides the reader with the necessary background from statistics, probability theory, functional analysis, convex analysis, and topology.

4,664 citations

Journal ArticleDOI
TL;DR: This issue's collection of essays should help familiarize readers with this interesting new racehorse in the Machine Learning stable, and give a practical guide and a new technique for implementing the algorithm efficiently.
Abstract: My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Scholkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that they can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

4,319 citations


"On matching latent to latent finger..." refers methods in this paper

  • ...Inspired by [18], an SVM-based [7] context switching framework is used to dynamically select one of the three fingerprint classifiers....


Book
10 Mar 2005
TL;DR: This unique reference work is an absolutely essential resource for all biometric security professionals, researchers, and systems administrators.
Abstract: A major new professional reference work on fingerprint security systems and technology from leading international researchers in the field. The handbook provides authoritative and comprehensive coverage of all major topics, concepts, and methods for fingerprint security systems. This unique reference work is an absolutely essential resource for all biometric security professionals, researchers, and systems administrators.

3,821 citations

Journal ArticleDOI
TL;DR: A filter-based fingerprint matching algorithm which uses a bank of Gabor filters to capture both local and global details in a fingerprint as a compact fixed length FingerCode and is able to achieve a verification accuracy which is only marginally inferior to the best results of minutiae-based algorithms published in the open literature.
Abstract: Biometrics-based verification, especially fingerprint-based identification, is receiving a lot of attention. There are two major shortcomings of the traditional approaches to fingerprint representation. For a considerable fraction of population, the representations based on explicit detection of complete ridge structures in the fingerprint are difficult to extract automatically. The widely used minutiae-based representation does not utilize a significant component of the rich discriminatory information available in the fingerprints. Local ridge structures cannot be completely characterized by minutiae. Further, minutiae-based matching has difficulty in quickly matching two fingerprint images containing a different number of unregistered minutiae points. The proposed filter-based algorithm uses a bank of Gabor filters to capture both local and global details in a fingerprint as a compact fixed length FingerCode. The fingerprint matching is based on the Euclidean distance between the two corresponding FingerCodes and hence is extremely fast. We are able to achieve a verification accuracy which is only marginally inferior to the best results of minutiae-based algorithms published in the open literature. Our system performs better than a state-of-the-art minutiae-based system when the performance requirement of the application system does not demand a very low false acceptance rate. Finally, we show that the matching performance can be improved by combining the decisions of the matchers based on complementary (minutiae-based and filter-based) fingerprint information.
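The FingerCode idea can be sketched as follows, under simplifying assumptions: a square grid of cells stands in for the circular sectors around the core point, the Gabor filter parameters are illustrative rather than the published ones, and convolution is done as a circular FFT product for brevity.

```python
import numpy as np

def gabor_kernel(theta, ksize=11, sigma=3.0, freq=0.12):
    """Even-symmetric Gabor filter tuned to orientation `theta` and an
    assumed ridge frequency `freq` (illustrative parameters)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(xr**2 + (-x * np.sin(theta) + y * np.cos(theta))**2)
                  / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def fingercode(img, n_orient=8, grid=4):
    """Fixed-length feature vector: average absolute deviation of each
    Gabor response over a grid of cells (the paper uses circular sectors
    around the core point; a square grid keeps the sketch short)."""
    feats = []
    for k in range(n_orient):
        kern = gabor_kernel(np.pi * k / n_orient)
        # FFT-based circular convolution of the image with the kernel
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                    np.fft.fft2(kern, img.shape)))
        h, w = img.shape
        for i in range(grid):
            for j in range(grid):
                cell = resp[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid]
                feats.append(np.mean(np.abs(cell - cell.mean())))
    return np.array(feats)

def fingercode_distance(img_a, img_b):
    """Matching reduces to a Euclidean distance between FingerCodes."""
    return float(np.linalg.norm(fingercode(img_a) - fingercode(img_b)))
```

Because the code is a fixed-length vector, matching is a single distance computation, which is what makes the filter-based approach so much faster than point-set matching on minutiae.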

1,207 citations

Journal ArticleDOI
TL;DR: Experiments on three multibiometric databases indicate that the proposed fusion framework achieves consistently high performance compared to commonly used score fusion techniques based on score transformation and classification.
Abstract: Multibiometric systems fuse information from different sources to compensate for the limitations in performance of individual matchers. We propose a framework for the optimal combination of match scores that is based on the likelihood ratio test. The distributions of genuine and impostor match scores are modeled as finite Gaussian mixture models. The proposed fusion approach is general in its ability to handle 1) discrete values in biometric match score distributions, 2) arbitrary scales and distributions of match scores, 3) correlation between the scores of multiple matchers, and 4) sample quality of multiple biometric sources. Experiments on three multibiometric databases indicate that the proposed fusion framework achieves consistently high performance compared to commonly used score fusion techniques based on score transformation and classification.
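A minimal sketch of the likelihood-ratio fusion rule follows. The mixture parameters below are hand-set purely for illustration; in the paper both mixtures are fitted to labelled training scores, and full covariances (rather than the diagonal ones assumed here) capture correlation between matchers.

```python
import numpy as np

def gmm_pdf(x, weights, means, covs):
    """Density of a finite Gaussian mixture at score vector x.
    Diagonal covariances are assumed for brevity; the paper fits
    general mixtures to the training scores."""
    x = np.asarray(x, dtype=float)
    p = 0.0
    for w, m, c in zip(weights, means, covs):
        m, c = np.asarray(m, float), np.asarray(c, float)
        norm = np.prod(1.0 / np.sqrt(2 * np.pi * c))
        p += w * norm * np.exp(-0.5 * np.sum((x - m) ** 2 / c))
    return p

def likelihood_ratio(x, genuine_gmm, impostor_gmm):
    """LR = p(x | genuine) / p(x | impostor); accept when LR exceeds a
    threshold chosen for the desired false accept rate."""
    return gmm_pdf(x, *genuine_gmm) / gmm_pdf(x, *impostor_gmm)

# Illustrative two-matcher parameters (made up for this sketch):
# each GMM is (weights, component means, diagonal covariances).
genuine_gmm = ([0.6, 0.4],
               [[0.8, 0.7], [0.9, 0.9]],
               [[0.01, 0.02], [0.01, 0.01]])
impostor_gmm = ([1.0], [[0.3, 0.2]], [[0.02, 0.03]])
```

By the Neyman-Pearson lemma, thresholding this ratio is the optimal decision rule when the two densities are known, which is why the quality of the density estimates drives the fusion performance.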

538 citations