Journal ArticleDOI

Comparison and combination of ear and face images in appearance-based biometrics

TL;DR: Recognition performance is found not to differ significantly between the face and the ear (for example, 70.5 percent versus 71.6 percent in one experiment), and multimodal recognition using both the ear and the face yields a statistically significant improvement over either individual biometric.
Abstract: Researchers have suggested that the ear may have advantages over the face for biometric recognition. Our previous experiments with ear and face recognition, using the standard principal component analysis approach, showed lower recognition performance using ear images. We report results of similar experiments on larger data sets that are more rigorously controlled for relative quality of face and ear images. We find that recognition performance is not significantly different between the face and the ear, for example, 70.5 percent versus 71.6 percent, respectively, in one experiment. We also find that multimodal recognition using both the ear and face results in statistically significant improvement over either individual biometric, for example, 90.9 percent in the analogous experiment.
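The multimodal improvement reported in the abstract comes from combining ear and face matchers. As a rough illustration of how two appearance-based matchers can be combined at the score level, the sketch below uses min-max normalization and a weighted sum rule; the function names, weights, and toy scores are all hypothetical, and the paper does not prescribe this particular rule.

```python
# Hypothetical sketch of score-level fusion of two appearance-based
# matchers (ear and face). Min-max normalized sum is one common choice;
# the paper does not prescribe it.
import numpy as np

def min_max_normalize(scores):
    """Map raw match scores to [0, 1] so the two modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def fuse_scores(face_scores, ear_scores, w_face=0.5):
    """Weighted sum-rule fusion of normalized per-gallery-subject scores."""
    f = min_max_normalize(face_scores)
    e = min_max_normalize(ear_scores)
    return w_face * f + (1.0 - w_face) * e

# Toy example: similarity of one probe against a 4-subject gallery.
face = [0.62, 0.55, 0.90, 0.40]   # face matcher favors subject 2
ear  = [0.30, 0.80, 0.85, 0.20]   # ear matcher is more ambivalent
fused = fuse_scores(face, ear)
best = int(np.argmax(fused))      # fused decision: highest combined score
```

The sum rule tends to reinforce a subject that scores well in both modalities, which is one intuition behind the multimodal gain the abstract reports.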


Citations
Journal ArticleDOI
TL;DR: This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images.

1,069 citations


Cites methods from "Comparison and combination of ear a..."

  • ...For example, the use of 2D images of the face has the potential to provide data that might be used for iris recognition or ear recognition [15] as well....


Journal ArticleDOI
TL;DR: A discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has been provided.
Abstract: Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.

751 citations


Cites methods from "Comparison and combination of ear a..."

  • ...Face recognition is also being used in conjunction with other biometrics such as speech, iris, fingerprint, ear and gait recognition in order to enhance the recognition performance of these methods [8, 22-34]....


Proceedings ArticleDOI
28 Mar 2005
TL;DR: This work discusses fusion at the feature level in three different scenarios: (i) fusion of PCA and LDA coefficients of the face; (ii) fusion of LDA coefficients corresponding to the R, G, B channels of a face image; and (iii) fusion of face and hand modalities.
Abstract: Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated in several distinct levels, including the feature extraction level, match score level and decision level. While fusion at the match score and decision levels have been extensively studied in the literature, fusion at the feature level is a relatively understudied problem. In this paper we discuss fusion at the feature level in 3 different scenarios: (i) fusion of PCA and LDA coefficients of face; (ii) fusion of LDA coefficients corresponding to the R,G,B channels of a face image; (iii) fusion of face and hand modalities. Preliminary results are encouraging and help in highlighting the pros and cons of performing fusion at this level. The primary motivation of this work is to demonstrate the viability of such a fusion and to underscore the importance of pursuing further research in this direction.
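As an illustration of the feature-level fusion this abstract describes, the sketch below z-score normalizes two coefficient vectors (standing in for the PCA and LDA coefficients of a face) and concatenates them into one joint feature vector. All names and the choice of z-score normalization are assumptions for illustration; a real system would typically also reduce the dimensionality of the concatenated vector.

```python
# Illustrative sketch of feature-level fusion: normalize two feature
# vectors (e.g., PCA and LDA coefficients of the same face image) with
# training-set statistics, then concatenate them. Names and the z-score
# normalization are assumptions, not the paper's exact procedure.
import numpy as np

def z_norm(v, mean, std):
    """Z-score normalize a feature vector using training statistics."""
    return (np.asarray(v, float) - mean) / np.where(std > 0, std, 1.0)

def fuse_features(pca_coeffs, lda_coeffs, stats):
    """Concatenate normalized PCA and LDA coefficient vectors."""
    p = z_norm(pca_coeffs, *stats["pca"])
    l = z_norm(lda_coeffs, *stats["lda"])
    return np.concatenate([p, l])

# Toy statistics (zero mean, unit variance) and toy coefficients.
stats = {"pca": (np.zeros(3), np.ones(3)),
         "lda": (np.zeros(2), np.ones(2))}
fused = fuse_features([1.0, -2.0, 0.5], [0.3, 0.7], stats)
# fused is a 5-dimensional joint feature vector
```

Normalizing before concatenation matters because the two coefficient sets generally live on different scales; without it, one modality can dominate any distance computed on the joint vector.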

397 citations

Journal ArticleDOI
TL;DR: A complete system for ear biometrics, including automated segmentation of the ear in a profile view image and 3D shape matching for recognition is presented, achieving a rank-one recognition rate of 97.8 percent.
Abstract: Previous works have shown that the ear is a promising candidate for biometric identification. However, in prior work, the preprocessing of ear images has had manual steps and algorithms have not necessarily handled problems caused by hair and earrings. We present a complete system for ear biometrics, including automated segmentation of the ear in a profile view image and 3D shape matching for recognition. We evaluated this system with the largest experimental study to date in ear biometrics, achieving a rank-one recognition rate of 97.8 percent for an identification scenario and an equal error rate of 1.2 percent for a verification scenario on a database of 415 subjects and 1,386 total probes.
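The rank-one recognition rate and equal error rate quoted above can be computed from a matrix of match scores. A minimal sketch, with illustrative toy numbers rather than the paper's data:

```python
# Minimal sketch of the two metrics quoted above. Rank-one rate: the
# fraction of probes whose best-scoring gallery entry is the correct
# subject. EER: the operating point where false accept and false
# reject rates cross. All numbers here are illustrative.
import numpy as np

def rank_one_rate(score_matrix, true_ids):
    """score_matrix[i, j] = similarity of probe i to gallery subject j."""
    predictions = np.argmax(score_matrix, axis=1)
    return float(np.mean(predictions == np.asarray(true_ids)))

def equal_error_rate(genuine, impostor):
    """Sweep thresholds over all observed scores; return an EER estimate."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = 1.0
    for t in thresholds:
        frr = np.mean(np.asarray(genuine) < t)    # genuines rejected
        far = np.mean(np.asarray(impostor) >= t)  # impostors accepted
        best = min(best, max(frr, far))
    return float(best)

scores = np.array([[0.9, 0.2, 0.1],
                   [0.3, 0.8, 0.4],
                   [0.2, 0.6, 0.5]])   # probe 2 is misidentified
r1 = rank_one_rate(scores, [0, 1, 2])  # 2 of 3 probes correct
```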

376 citations


Cites methods from "Comparison and combination of ear a..."

  • ...In their work, a lighting compensation technique is introduced to normalize the color appearance....


  • ...Section 7 gives the summary and conclusions....


  • ...In another previous work, we compared recognition using 2D intensity images of the ear with recognition using 2D intensity images of the face and suggested that they are comparable in recognition power [6], [27]....



  • ...We evaluated this system with the largest experimental study to date in ear biometrics, achieving a rank-one recognition rate of 97.8 percent for an identification scenario and an equal error rate of 1.2 percent for a verification scenario on a database of 415 subjects and 1,386 total probes....


Journal ArticleDOI
TL;DR: An extensive review of biometric technology is presented here, focusing on mono-modal biometric systems along with their architecture and information fusion levels.

351 citations


Additional excerpts

  • ...Table 4: Summary of behavioral modalities:
    Voice. Features: spectrum, glottal pulse features, pitch, energy, duration, rhythm, temporal features, phones, idiolects, semantics, accent, pronunciation. Techniques: LPCC [204], MFCC [205], VQ [206-208], HMM [209], GMM [210, 211], DTW [212, 213], ANN [214], ICA [215], SVM [216, 217], ACO [195, 218]. Datasets: TIMIT, TIDIGIT, AURORA, YOHO.
    Keystroke dynamics. Features: keystroke duration, hold time, keystroke latency, speed, pressure, digraph latency. Techniques: nearest neighbor [219], SVM [220, 221], HMM [222], Manhattan distance [223], GMM [224], Euclidean distance [225], ANN [226-228], random forests [229], fuzzy logic [230], GA [231], mean & standard deviation [232], Bayesian & FLD [233], time-interval histogram [234]. Datasets: MySQL, GREYC.
    Gait. Features: full subject silhouette, strides, length, cadence, speed, singularity of silhouette shape. Techniques: PCA [235], LDA [236], k-nearest neighbor [237], SVM [238], DTW [239, 240], HMM [241-244], VHT [245, 246], Radon transform [247], LPP [248, 249], DLA [250], wavelets [251]. Datasets: CASIA, CMU Mobo, UMD, USF.
    Signature. Features: signature shape, pen position, pressure, pen direction, acceleration, length of strokes, tangential acceleration, curvature radius, azimuth. Techniques: DTW [252, 253], HMM [254, 255], ANN [256, 257], Bayesian [258], SVM [259, 260], fuzzy [261], EPs [262], PCA [263], regional correlation [264], NCA/PCA [265], DTW-VQ [266]. Datasets: MCYT, SignatureDB, SUSIG, GAVAB offline signature database....


  • ...Table 3: Summary of ocular region modalities:
    Iris. Features: color, shape, and iris texture (crypts, furrows, corona, freckles). Techniques: 2D Gabor filters [144, 154, 155], wavelets [156-159], LoG filter [160], DCT [161], ordinal measures [162, 163], ICA [164], PCA [165, 166], LDA [167], LBP [168, 169], WCPH [170], neural networks [171, 172], SVM [173], SIFT [168, 174], Adaboost [175], texton histogram [162], weight map [176], directionlets [177], GA [178]. Datasets: CASIA, UBIRIS, WVU, MMU1, MMU2, IIT Delhi.
    Retina. Features: vein bifurcations, area of optic disk or fovea. Techniques: principal bifurcation orientation (PBO) [140], DB-ICP [179], Gabor wavelet [180], SIFT [181], SFR [182]. Dataset: VARIA.
    Sclera. Features: vein bifurcations. Techniques: ANN [146], SURF [183], direct correlation [183], minutiae matching [183]. Datasets: (none listed)....


  • ...Table 2: Summary of facial region modalities:
    Face. Features: distance between eyes, mouth, side of nose, entire face image, corner points, contours, gender, goatee, roundness of face, edge maps, pixel intensity, local and global curvatures. Techniques: PCA [95, 96], LDA [97], self-organizing map & convolutional network [98], template matching [99], LEMs [100], EBGM [101], DCP [102], LBP [103], CSML [104], SVM [105], DBN [106, 107, 108], NMF [109], SIFT [110, 111], HMM [112], HOG-EBGM [113]. Datasets: FERET, AR faces, MIT, CVL, XM2VTS, Yale face, Yale face B, 3D RMA, CASIA, GavabDB.
    Ear. Features: shape, size, length, width & height of helix rim, triangular fossa, antihelix, concha, lobule, step edge magnitude, color, curvature, contours, edge information, shape indices, registered color and range image pair. Techniques: Voronoi distance graphs [114], LDA [115], force field transform [116, 117], GA [118], PCA [119, 120], active shape model [121], NMF [122], Gabor filters [123, 124, 125], ICA [126], wavelets [127], SIFT [128, 129], SURF [130], LBP [131, 132, 138], moment invariants [133], SVM [134], ICP [135], Mesh-PCA [136], local surface patch [137]. Datasets: XM2VTS, UND, UCR, USTB (Datasets 1-4), WPUT-DB, IIT Delhi, IIT Kanpur, ScFace, YSU, NCKU, UBEAR.
    Tongue print. Features: width, thickness, curvature of tongue contour, cracks, texture. Techniques: 2D Gabor filter [93, 139]. Datasets: (none listed)....


  • ...Table 1: Summary of hand region modalities:
    Fingerprint. Features: ridge flow, ridge pattern, singular points, ridge skeleton, ridge ending, ridge contours, ridge kernel, orientation field, island, spur, crossover, learned feature, sweat pores, dots & incipient ridges. Techniques: k-nearest neighbor [24], FFT [25], GA [26, 27], DTW [28], ACO [29], graph matching [30, 31], neural networks [32-34], SVM [34, 35], HMMs [36], Bayesian [37], Adaboost [6], fuzzy logic [38, 39], corner detection [40], decision trees [41]. Datasets: CASIA, Sfinge, FVC 2004 DB1, FVC 2006, NUERO technology.
    Palmprint. Features: ridges, singular points, minutiae points, principal lines, wrinkles, palm texture, mean, variance, moments, center of gravity & density, spatial dispersivity, L1-norm energy. Techniques: edge maps [42-46], PCA [47, 48], LDA [48-50], ICA [48, 51], DCT [52], Zernike moments [53], Hu invariant moments [54], mean [55, 56], HMM [21], directional line detector [55], wavelets [57-60], LBP [61], SVM [62]. Datasets: CASIA, PolyU, IIT Delhi.
    Hand geometry. Features: length & width of fingers, aspect ratio of finger or palm, length, thickness & area of hand, hand contour, hand coordinates and angles, Zernike moments, skin folds and crease pattern. Techniques: correlation coefficient [63], absolute distance [64, 65], Mahalanobis distance [66, 67], Euclidean distance [68], Bayes classifier [69], mean alignment error [70], Hamming distance [22], GMM [22, 71], L1 cosine distance [65], SVM [72]. Dataset: Bosphorus.
    Hand vein pattern. Features: vein bifurcation & ending. Techniques: adaptive thresholding [73], morphological gradient operator [74], PCA [75], LDA [76], FFT [72], feature point distance [73], vein triangulation and shape [20], SVM [77], SIFT [78], LBP [79], curvelet transform [80]. Datasets: (none listed).
    Finger knuckle print. Features: texture of lines, orientation, magnitude. Techniques: localized Radon transform [81], PCA [81], Gabor filters [82], BLPOC [17], LDA [82], OE-SIFT [83], phase congruency [18], ICA [82]. Dataset: PolyU (FKP) database....



References
Journal ArticleDOI
TL;DR: A near-real-time computer system that can locate and track a subject's head and then recognize the person by comparing characteristics of the face to those of known individuals; the approach is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
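The projection-and-compare procedure described in this abstract can be sketched compactly: subtract the mean face, compute principal components of the training set, project gallery and probe images into the resulting weight space, and match by nearest neighbor. The SVD route to the components and the tiny synthetic "images" below are my own simplifications, not the authors' implementation.

```python
# Compact sketch of the eigenface pipeline: mean subtraction, principal
# components via SVD, projection to weight space, nearest-neighbor match.
# Synthetic 64-pixel "images" stand in for real face images.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(10, 64))        # 10 training "images", 64 pixels

mean_face = train.mean(axis=0)
centered = train - mean_face
# Right singular vectors of the centered data are the eigenfaces.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:5]                      # keep the top 5 components

def project(image):
    """Represent an image as a weight vector over the eigenfaces."""
    return eigenfaces @ (image - mean_face)

gallery = np.array([project(img) for img in train])

def recognize(probe_image):
    """Nearest-neighbor match in eigenface weight space."""
    w = project(probe_image)
    dists = np.linalg.norm(gallery - w, axis=1)
    return int(np.argmin(dists))

# A slightly noisy copy of training image 3 should still match index 3.
probe = train[3] + rng.normal(scale=0.01, size=64)
match = recognize(probe)
```

This is the same PCA machinery the main article applies to both face and ear images.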

14,562 citations


Additional excerpts

  • ...Extensive work has been done on face recognition algorithms based on principal component analysis (PCA), popularly known as “eigenfaces” [5]....


Journal ArticleDOI
TL;DR: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems.
Abstract: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.

4,816 citations

Journal ArticleDOI
TL;DR: A prototype biometric system that integrates faces and fingerprints is developed; it overcomes the limitations of face recognition and fingerprint verification systems and operates in identification mode with an admissible response time.
Abstract: An automatic personal identification system based solely on fingerprints or faces is often not able to meet the system performance requirements. We have developed a prototype biometrics system which integrates faces and fingerprints. The system overcomes the limitations of face recognition systems as well as fingerprint verification systems. The integrated prototype system operates in the identification mode with an admissible response time. The identity established by the system is more reliable than the identity established by a face recognition system. In addition, the proposed decision fusion scheme enables performance improvement by integrating multiple cues with different confidence measures. Experimental results demonstrate that our system performs very well. It meets the response time as well as the accuracy requirements.
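The abstract describes fusing decisions from face and fingerprint matchers that carry different confidence measures. Below is a minimal sketch of one possible decision-level rule; the agreement/override logic and the 0.9 threshold are illustrative assumptions, not the paper's actual fusion scheme.

```python
# Hypothetical decision-level fusion: each matcher emits an identity and
# a confidence; accept when both agree, or when one matcher is very
# confident, otherwise reject. The 0.9 override threshold is illustrative.
def fuse_decisions(face_id, face_conf, finger_id, finger_conf,
                   override=0.9):
    """Return the fused identity, or None to reject the probe."""
    if face_id == finger_id:
        return face_id                   # agreement: accept
    if face_conf >= override:
        return face_id                   # one matcher is very confident
    if finger_conf >= override:
        return finger_id
    return None                          # unresolved disagreement: reject

# Agreement, confident override, and rejection cases:
a = fuse_decisions("alice", 0.7, "alice", 0.6)
b = fuse_decisions("alice", 0.5, "bob", 0.95)
c = fuse_decisions("alice", 0.5, "bob", 0.6)
```

Weighting each matcher's vote by its confidence is the general idea behind the "multiple cues with different confidence measures" the abstract mentions.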

651 citations

Proceedings ArticleDOI
11 Aug 2002
TL;DR: Applying the PCA approach to images of the face and the ear, using the same set of subjects, indicates that the face provides a more reliable biometric than the ear.
Abstract: Face recognition based on principal component analysis is a heavily researched topic in computer vision. The ear has been proposed as a biometric, with claimed advantages over the face. We have applied the PCA approach to images of the face and ear using the same set of subjects. Testing was done with three different gallery/probe combinations. For faces we have: 1) probes of the same day but a different expression, 2) probes of a different day but a similar expression, and 3) probes of a different day and a different expression. Analogously, for ears, we have: 1) probes of the same day but the other ear, 2) probes of a different day but the same ear, and 3) probes of a different day and the other ear. Results indicate that the face provides a more reliable biometric than the ear.
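Gallery/probe experiments like those described above are typically scored with a cumulative match characteristic (CMC) curve, whose first entry is the rank-one rate. A sketch with toy scores (not the paper's data):

```python
# CMC sketch: for each probe, find the rank of the true subject among
# sorted gallery scores; cmc[k] is the fraction of probes whose true
# match appears within the top k+1 ranks. Toy numbers are illustrative.
import numpy as np

def cmc_curve(score_matrix, true_ids):
    """score_matrix[i, j] = similarity of probe i to gallery subject j."""
    n_probes, n_gallery = score_matrix.shape
    ranks = []
    for i, true_id in enumerate(true_ids):
        order = np.argsort(-score_matrix[i])          # best score first
        ranks.append(int(np.where(order == true_id)[0][0]))
    ranks = np.asarray(ranks)
    return np.array([np.mean(ranks <= k) for k in range(n_gallery)])

scores = np.array([[0.9, 0.3, 0.2],
                   [0.4, 0.5, 0.6],    # true subject 1 ranked second
                   [0.1, 0.2, 0.7]])
cmc = cmc_curve(scores, [0, 1, 2])
# cmc[0] is the rank-one rate; the curve rises to 1.0 by the last rank
```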

186 citations


"Comparison and combination of ear a..." refers to results from this paper

  • ...The results reported here follow up on those reported in an earlier study [4]....


Journal ArticleDOI
TL;DR: A novel force field transformation has been developed in which the image is treated as an array of Gaussian attractors that act as the source of a force field; the approach shows promising results in automatic ear recognition.

164 citations