Journal ArticleDOI

A comprehensive overview of biometric fusion

TL;DR: The authors provide a comprehensive overview of the role of information fusion in biometrics, reviewing techniques that incorporate ancillary information into the biometric recognition pipeline.
About: This article was published in Information Fusion on 2019-12-01 and is currently open access. It has received 151 citations to date. The article focuses on the topic of biometrics.
Citations

Journal ArticleDOI
TL;DR: This survey provides a thorough review of techniques for manipulating face images, including DeepFake methods, and of methods for detecting such manipulations, with special attention to the latest generation of DeepFakes.

502 citations

Posted Content
TL;DR: This survey provides an overview of the face image quality assessment literature, which predominantly focuses on visible-wavelength face images; a trend towards deep-learning-based methods is observed, with notable conceptual differences among the recent approaches.
Abstract: The performance of face analysis and recognition systems depends on the quality of the acquired face data, which is influenced by numerous factors. Automatically assessing the quality of face data in terms of biometric utility can thus be useful to filter out low quality data. This survey provides an overview of the face quality assessment literature in the framework of face biometrics, with a focus on face recognition based on visible wavelength face images as opposed to e.g. depth or infrared quality assessment. A trend towards deep learning based methods is observed, including notable conceptual differences among the recent approaches. Besides image selection, face image quality assessment can also be used in a variety of other application scenarios, which are discussed herein. Open issues and challenges are pointed out, i.a. highlighting the importance of comparability for algorithm evaluations, and the challenge for future work to create deep learning approaches that are interpretable in addition to providing accurate utility predictions.

51 citations
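
The filtering use case highlighted in the survey, discarding samples whose predicted biometric utility is too low before they reach the recognition stage, reduces in its simplest form to thresholding a per-image quality score. The sketch below is a minimal illustration of that gating step; the `quality_score` callable is a hypothetical stand-in for any of the surveyed predictors, and the threshold is an illustrative operating point rather than a value from the survey.

```python
from pathlib import Path
from typing import Callable, Iterable, List

def filter_by_quality(
    image_paths: Iterable[Path],
    quality_score: Callable[[Path], float],  # hypothetical: any face-quality predictor
    threshold: float = 0.5,                  # illustrative operating point
) -> List[Path]:
    """Keep only images whose predicted biometric utility exceeds the threshold."""
    kept: List[Path] = []
    for path in image_paths:
        score = quality_score(path)          # higher score = more useful for recognition
        if score >= threshold:
            kept.append(path)
    return kept
```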

Posted Content
TL;DR: FaceQnet is a novel open-source face quality assessment tool, inspired and powered by deep learning technology, which assigns a scalar quality measure to facial images as a prediction of their recognition accuracy.
Abstract: "The output of a computerised system can only be as accurate as the information entered into it." This rather trivial statement is the basis behind one of the driving concepts in biometric recognition: biometric quality. Quality is nowadays widely regarded as the number one factor responsible for the good or bad performance of automated biometric systems. It refers to the ability of a biometric sample to be used for recognition purposes and produce consistent, accurate, and reliable results. Such a subjective term is objectively estimated by the so-called biometric quality metrics. These algorithms play nowadays a pivotal role in the correct functioning of systems, providing feedback to the users and working as invaluable audit tools. In spite of their unanimously accepted relevance, some of the most used and deployed biometric characteristics are lacking behind in the development of these methods. This is the case of face recognition. After a gentle introduction to the general topic of biometric quality and a review of past efforts in face quality metrics, in the present work, we address the need for better face quality metrics by developing FaceQnet. FaceQnet is a novel opensource face quality assessment tool, inspired and powered by deep learning technology, which assigns a scalar quality measure to facial images, as prediction of their recognition accuracy. Two versions of FaceQnet have been thoroughly evaluated both in this work and also independently by NIST, showing the soundness of the approach and its competitiveness with respect to current state-of-the-art metrics. Even though our work is presented here particularly in the framework of face biometrics, the proposed methodology for building a fully automated quality metric can be very useful and easily adapted to other artificial intelligence tasks.

44 citations
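
As a rough illustration of the idea of a learned scalar quality measure (not the authors' FaceQnet implementation), the sketch below attaches a small regression head to an arbitrary frozen face-embedding backbone. The `backbone` module, the embedding dimension, and the head architecture are assumptions for illustration.

```python
import torch
import torch.nn as nn

class QualityHead(nn.Module):
    """Maps a face image to a scalar quality score in [0, 1] via a frozen embedding network.

    Hypothetical sketch: FaceQnet builds on a specific recognition backbone and
    training protocol; here any embedding extractor can be plugged in as `backbone`.
    """

    def __init__(self, backbone: nn.Module, embedding_dim: int = 512):
        super().__init__()
        self.backbone = backbone              # assumed to output (batch, embedding_dim)
        for p in self.backbone.parameters():
            p.requires_grad = False           # keep the recognition features fixed
        self.regressor = nn.Sequential(
            nn.Linear(embedding_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Sigmoid(),                     # scalar quality in [0, 1]
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            embeddings = self.backbone(images)
        return self.regressor(embeddings).squeeze(-1)

# Training target, following the general idea of predicting recognition accuracy:
# a ground-truth quality derived from how well each image matches a high-quality
# reference of the same subject, regressed with e.g. nn.MSELoss().
```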

References
Journal ArticleDOI
TL;DR: The relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift are discussed.
Abstract: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much of the expensive data-labeling effort. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning, and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.

18,616 citations
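
The survey's central scenario, plentiful labelled data in a source domain but scarce data in the target domain, is commonly handled by reusing a source-trained feature extractor and adapting only a small head on the target task. The sketch below illustrates that pattern with placeholder layer sizes and randomly generated target data; it is not an example taken from the survey.

```python
import torch
import torch.nn as nn

# Minimal sketch of inductive transfer learning: reuse a feature extractor
# trained on a data-rich source task, then adapt only a small head on scarce
# target-domain data. Dimensions and data below are illustrative placeholders.

feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
source_head = nn.Linear(64, 10)            # head used on the (data-rich) source task
# ... assume feature_extractor + source_head were already trained on the source domain ...

for p in feature_extractor.parameters():   # freeze the transferred representation
    p.requires_grad = False

target_head = nn.Linear(64, 3)             # new head for the (data-poor) target task
optimizer = torch.optim.Adam(target_head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x_target = torch.randn(16, 32)             # stand-in target-domain batch
y_target = torch.randint(0, 3, (16,))

for _ in range(100):                       # fine-tune only the target head
    logits = target_head(feature_extractor(x_target))
    loss = criterion(logits, y_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```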

Proceedings ArticleDOI
07 Dec 2015
TL;DR: A novel deep learning framework for attribute prediction in the wild that cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently.
Abstract: Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.

6,273 citations
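
The cascade described above, in which one network localizes the face and a second predicts attributes from the localized region, can be sketched schematically as follows. The tiny networks, the crop helper, and the input size are placeholders; the paper's actual architectures, pre-training data, and joint fine-tuning procedure are not reproduced here.

```python
import torch
import torch.nn as nn

class LocalizerNet(nn.Module):
    """Predicts a normalized face box (cx, cy, w, h) from the full image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.box = nn.Sequential(nn.Linear(16, 4), nn.Sigmoid())

    def forward(self, images):
        return self.box(self.features(images))

class AttributeNet(nn.Module):
    """Predicts per-attribute probabilities from a face crop."""
    def __init__(self, num_attributes: int = 40):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, num_attributes)

    def forward(self, crops):
        return torch.sigmoid(self.head(self.features(crops)))

def crop_from_box(image, box, out_size=64):
    """Very rough crop helper (placeholder for a differentiable ROI operation)."""
    _, h, w = image.shape
    cx, cy, bw, bh = box.tolist()
    x0, y0 = int(max(cx - bw / 2, 0) * w), int(max(cy - bh / 2, 0) * h)
    x1, y1 = int(min(cx + bw / 2, 1) * w), int(min(cy + bh / 2, 1) * h)
    crop = image[:, y0:max(y1, y0 + 1), x0:max(x1, x0 + 1)]
    return nn.functional.interpolate(crop.unsqueeze(0), size=(out_size, out_size))[0]

localizer, attributes = LocalizerNet(), AttributeNet()
image = torch.rand(1, 3, 218, 178)                    # placeholder face image
box = localizer(image)[0]                             # stage 1: localize the face
crop = crop_from_box(image[0], box).unsqueeze(0)
probs = attributes(crop)                              # stage 2: (1, 40) attribute probabilities
```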

Proceedings ArticleDOI
27 Jun 2016
TL;DR: The authors propose the DeepFool algorithm to efficiently compute minimal perturbations that fool deep networks, and thus to reliably quantify the robustness of these classifiers.
Abstract: State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.

4,505 citations
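
For intuition, the sketch below implements the binary-classifier special case of the DeepFool idea: repeatedly linearize the decision boundary around the current point and take the smallest step that crosses it. The multi-class algorithm in the paper extends this by stepping towards the closest competing class; the overshoot constant and the iteration cap below are illustrative defaults, not the paper's settings.

```python
import torch

def deepfool_binary(model, x, max_iter=50, overshoot=0.02):
    """Minimal sketch of DeepFool restricted to a binary classifier.

    `model(x)` is assumed to return a single logit whose sign gives the class.
    At each step the boundary is linearized around the current point and the
    smallest step onto that hyperplane is taken.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    original_sign = torch.sign(model(x_adv)).item()
    r_total = torch.zeros_like(x)

    for _ in range(max_iter):
        logit = model(x_adv).squeeze()
        if torch.sign(logit).item() != original_sign:
            break                                     # prediction flipped: done
        grad, = torch.autograd.grad(logit, x_adv)
        # Step to the linearized boundary: r = -f(x) * grad / ||grad||^2
        r = -logit.item() * grad / (grad.norm() ** 2 + 1e-12)
        r_total = r_total + r
        x_adv = (x + (1 + overshoot) * r_total).detach().requires_grad_(True)

    return x_adv.detach(), r_total.detach()

# Example with a toy linear "classifier":
#   model = torch.nn.Linear(2, 1); x = torch.randn(1, 2)
#   x_adv, r = deepfool_binary(model, x)
```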

Book
10 Mar 2005
TL;DR: This unique reference work is an absolutely essential resource for all biometric security professionals, researchers, and systems administrators.
Abstract: A major new professional reference work on fingerprint security systems and technology from leading international researchers in the field. The Handbook provides authoritative and comprehensive coverage of all major topics, concepts, and methods for fingerprint security systems. This unique reference work is an absolutely essential resource for all biometric security professionals, researchers, and systems administrators.

3,821 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers and outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
Abstract: Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.

2,081 citations
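
The aggregation loop behind universal perturbations can be sketched compactly: iterate over the data and, whenever the current universal perturbation fails to change a prediction, add a per-image minimal perturbation and project back onto a norm ball. The `per_image_attack` routine, the L2 projection, and all hyperparameters below are assumptions for illustration rather than the paper's exact procedure.

```python
import torch

def universal_perturbation(model, images, per_image_attack,
                           xi=10.0, target_fool_rate=0.8, max_epochs=5):
    """Compact sketch of a universal-perturbation aggregation loop.

    `model(x)` returns class logits; `images` is a list of single-image tensors
    of shape (1, C, H, W); `per_image_attack(model, x)` is assumed to return a
    minimal perturbation that changes the prediction for one image (e.g. a
    multi-class variant of the DeepFool sketch above). `xi` caps the L2 norm
    of the universal perturbation.
    """
    def predict(x):
        with torch.no_grad():
            return model(x).argmax(dim=1)

    v = torch.zeros_like(images[0])
    for _ in range(max_epochs):
        for x in images:
            if predict(x + v) != predict(x):   # already fooled by current v
                continue
            v = v + per_image_attack(model, x + v)
            if v.norm() > xi:                  # project back onto the L2 ball of radius xi
                v = v * (xi / v.norm())
        fool_rate = sum(
            (predict(x + v) != predict(x)).item() for x in images
        ) / max(len(images), 1)
        if fool_rate >= target_fool_rate:
            break
    return v
```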