Journal ArticleDOI

Adaptive fusion of biometric and biographic information for identity de-duplication

TL;DR: This paper proposes an adaptive sequential framework that automatically determines which subset of biometric traits and biographic information is adequate for de-duplicating a given query; the framework is evaluated on a virtual multi-biometric database of 27,000 subjects.
About: This article was published in Pattern Recognition Letters on 2016-12-01 and has received 15 citations to date. It focuses on the topics: Biometrics & Fingerprint (computing).
Citations
Proceedings ArticleDOI
01 Dec 2016
TL;DR: Two widely used unimodal biometric systems, keystroke dynamics and face recognition, are fused to create a stronger multi-biometric system for continuous authentication, and the experimental results confirm that a multi-factor authentication system gives better accuracy than a single-factor authentication system.
Abstract: Several application scenarios require the user to be authenticated not only at the time of logging in to a device, but continuously, such as a mobile device being used for an extended period of time, or examinees taking an online test. In this paper, two widely used unimodal biometric systems that can both easily be captured on modern computing devices, keystroke dynamics and face recognition, are fused to create a stronger multi-biometric system for continuous authentication. The matching score for the keystroke dynamics system is obtained using nearest neighbor classification (combined distance), and the face recognition system uses the EigenFace approach. Fusing the matching scores of these unimodal biometric systems at the score level improves accuracy. Scores obtained from each individual biometric system, on the CMU keystroke dynamics database and the ORL face database respectively, are normalized using min-max normalization before fusion. The sum, product, and weighted sum rules are used for fusion, and the experimental results confirm that a multi-factor authentication system gives better accuracy than a single-factor authentication system. The experiments also indicate that the weighted sum rule outperforms both the sum and product rules.

8 citations


Cites methods from "Adaptive fusion of biometric and biographic information for identity de-duplication"

  • ...An adaptive fusion strategy [23] may also be applied to determine the modalities that would be most useful for computation and fusion of matching scores....


Journal ArticleDOI
TL;DR: This scheme, based on the fuzzy commitment protocol, not only provides a flexible way to trade off complexity against security in fingerprint recognition by designing error-correcting codes with different parameters, but also offers a good balance between the genuine accept rate (GAR) and the false acceptance rate (FAR) with customized sector coding strategies.
Abstract: In parallel with the rapid development of cloud-assisted IoT, the corresponding security and privacy issues emerge as a challenge. Biometric recognition technologies are interesting and promising ways to reinforce traditional cryptographic and personal authentication systems for cloud-assisted IoT. However, biometric information, if compromised, cannot easily be canceled and substituted. In this paper, a fingerprint recognition scheme based on the fuzzy commitment protocol and a minutiae-based sector coding strategy is proposed for cloud-assisted IoT. In our approach, the minutiae of a fingerprint are classified into designed sectors and then encoded according to their features. Following the idea of fuzzy commitment, the key encryption process is accomplished using BCH codes and hash mappings. Our scheme not only provides a flexible way to trade off complexity against security in fingerprint recognition by designing error-correcting codes with different parameters, but also offers a good balance between the genuine accept rate (GAR) and the false acceptance rate (FAR) with customized sector coding strategies.

7 citations


Cites methods from "Adaptive fusion of biometric and biographic information for identity de-duplication"

  • ...Biometrics are widely used in identity authentication systems due to their uniqueness and stability [10], [11]....


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This research proposes a conceptual framework called CaCM (Context-aware Call Management) for mobile call management and implements a prototype system based on it. The prototype incorporates rich context factors, including time, location, event, social relations, environment, body position, and body movement, and leverages machine learning algorithms to build call management models for individual mobile phone users.
Abstract: When a user receives a phone call, the mobile phone will normally ring or vibrate immediately, regardless of whether the user is available to answer the call, which can be disruptive to ongoing tasks or the social situation. Mobile call management systems are a type of mobile application for coping with the problem of mobile interruption; they aim to reduce interruption through effective management of incoming phone calls and thereby improve user satisfaction. Many existing systems utilize only one or two types of user context (e.g., location) to determine the availability of the callee and make real-time decisions on how to handle the incoming call. In reality, however, mobile call management needs to take diverse contextual information about individual users into consideration, such as time, location, event, and social relations. The objective of this research is to propose a conceptual framework called CaCM (Context-aware Call Management) for mobile call management and to implement a prototype system based on CaCM that incorporates rich context factors, including time, location, event, social relations, environment, body position, and body movement, and leverages machine learning algorithms to build call management models for individual mobile phone users. An empirical evaluation via a field study shows promising results that demonstrate the effectiveness of the proposed approach.

6 citations


Cites methods from "Adaptive fusion of biometric and biographic information for identity de-duplication"

  • ...Logistic regression is a classification method [29] for predicting an outcome of a categorical variable (two levels of a binary dependent variable: “yes” (interruption) vs....


Journal ArticleDOI
TL;DR: This work proposes the use of a graph structure to model the relationship between the biometric records in a database and shows the benefits of such a graph in deducing biographic labels of incomplete records, i.e. records that may have missing biographic information.
Abstract: A biometric system uses the physical or behavioural attributes of a person, such as face, fingerprint, iris or voice, to recognise an individual. Many operational biometric systems store the biographic information of an individual, viz., name, gender, age and ethnicity, besides the biometric data itself. Thus, the biometric record pertaining to an individual consists of both biometric data and biographic data. We propose the use of a graph structure to model the relationship between the biometric records in a database. We show the benefits of such a graph in deducing biographic labels of incomplete records, i.e., records that may have missing biographic information. In particular, we use a label propagation scheme to deduce missing values for both binary-valued biographic attributes (e.g., gender) and multi-valued biographic attributes (e.g., age group). Experimental results using face-based biometric records consisting of name, age, gender and ethnicity convey the pros and cons of the proposed method.

5 citations


Cites background or methods from "Adaptive fusion of biometric and biographic information for identity de-duplication"

  • ...[29] that use synthetic datasets, our work utilizes real naturally occurring datasets....


  • ...[29] | Proprietary | Face, Fingerprint | Name, Father’s Name | Fusion with Biometric Score...


  • ...[29] also combine biometric and biographic matchers for de-duplication....


Journal ArticleDOI
07 Oct 2021-PeerJ
TL;DR: In this article, the authors focus on existing multibiometric systems that use hand-based modalities for the identification of individuals, discuss various open issues and challenges faced by researchers, and propose some future directions that can enhance the security of multibiometric templates.
Abstract: The traditional methods used for the identification of individuals, such as personal identification numbers (PINs) and identification tags, are vulnerable, as they are easily compromised by hackers. In this paper, we focus on existing multibiometric systems that use hand-based modalities for the identification of individuals. We cover the existing multibiometric systems in the context of various feature extraction schemes, along with an analysis of their performance using one of the standard performance measures for biometric systems. We then cover the literature on template protection, including various cancelable biometrics and biometric cryptosystems, and briefly comment on the methods used for multibiometric template protection. Finally, we discuss various open issues and challenges faced by researchers and propose some future directions that can enhance the security of multibiometric templates.

4 citations

References
Journal ArticleDOI
TL;DR: This paper presents an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database and covers similarity metrics that are commonly used to detect similar field entries.
Abstract: Often, in the real world, entities have two or more representations in databases. Duplicate records do not share a common key and/or they contain errors that make duplicate matching a difficult task. Errors are introduced as the result of transcription errors, incomplete information, lack of standard formats, or any combination of these factors. In this paper, we present a thorough analysis of the literature on duplicate record detection. We cover similarity metrics that are commonly used to detect similar field entries, and we present an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database. We also cover multiple techniques for improving the efficiency and scalability of approximate duplicate detection algorithms. We conclude with coverage of existing tools and with a brief discussion of the big open problems in the area.

1,778 citations

Journal ArticleDOI
TL;DR: A recursive approach, based on Kalman's work in linear dynamic filtering and prediction and derivable also from the work of Swerling (1959), is applied; it provides an example of many other possible uses of recursive techniques in nonlinear estimation and related areas.
Abstract: A method for estimating the probability of occurrence of an event from dichotomous or polychotomous data is developed, using a recursive approach. The method in the dichotomous case is applied to the data of a 10-year prospective study of coronary disease; other areas of application are briefly indicated. The purpose of this paper is to develop a method for estimating, from dichotomous (quantal) or polychotomous data, the probability of occurrence of an event as a function of a relatively large number of independent variables. A key feature of the method is a recursive approach based on Kalman's work (Kalman, 1960 and unpublished report) in linear dynamic filtering and prediction, derivable also from the work of Swerling (1959), which provides an example of many other possible uses of recursive techniques in nonlinear estimation and in related areas. The problem that motivated the investigation is a central one in the epidemiology of coronary heart disease, and it will be used to fix ideas and illustrate the method. Some indication of the range of applications will be given in the conclusion.

1,662 citations
