
Showing papers on "Three-dimensional face recognition published in 1995"


Proceedings ArticleDOI
20 Jun 1995
TL;DR: A compact parametrised model of facial appearance which takes into account all sources of variability and can be used for tasks such as image coding, person identification, pose recovery, gender recognition and expression recognition is described.
Abstract: Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression and lighting. We describe a compact parametrised model of facial appearance which takes into account all these sources of variability. The model represents both shape and grey-level appearance and is created by performing a statistical analysis over a training set of face images. A robust multi-resolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located and a set of shape and grey-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using less than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, pose recovery, gender recognition and expression recognition. The system performs well on all the tasks listed above.

208 citations
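The core idea of the entry above, performing a statistical analysis over a training set and then describing any face by a small parameter vector, can be sketched with plain PCA. The data below are synthetic stand-ins, not the paper's shape/grey-level training set, and the parameter count is illustrative:

```python
import numpy as np

def build_model(training_vectors, n_params):
    """PCA model: mean plus the principal modes of variation of the data."""
    X = np.asarray(training_vectors, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centred data; rows of Vt are the principal modes.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_params]

def encode(x, mean, modes):
    return modes @ (np.asarray(x, dtype=float) - mean)   # model parameters b

def decode(b, mean, modes):
    return mean + modes.T @ b                            # approximate reconstruction

# Toy example: 50 synthetic "face vectors" that truly live in a 3-D subspace.
rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 20))
data = rng.normal(size=(50, 3)) @ basis + 5.0
mean, modes = build_model(data, n_params=3)
x = data[0]
x_hat = decode(encode(x, mean, modes), mean, modes)
print(np.allclose(x, x_hat, atol=1e-8))   # three parameters reconstruct the sample
```

In the paper the same machinery is applied to concatenated shape and grey-level vectors, which is why fewer than 100 parameters suffice for a good approximation.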




01 Jan 1995
TL;DR: A unique face recognition system which considers information from both frontal and profile view images is presented, and the problem of identifying the entries of the database most similar to the target is considered.
Abstract: This paper presents a unique face recognition system which considers information from both frontal and profile view images. This system represents the first step toward the development of a face recognition solution for the intensity image domain based on a 3D context. In the current system we construct a 3D face-centered model from the two independent images. Geometric information is used for view normalization, and at the lowest level the comparison is based on general pattern matching techniques. We also discuss the use of geometric information to index the reference database to quickly eliminate impossible matches from further consideration. The system has been tested using subjects from the FERET program database and has shown excellent results. For example, we consider the problem of identifying the […] of the database which is most similar to the target; the correct match is included in this list […] of the time in the system's fully automated mode and […] of the time in the manually assisted mode. The International Workshop on Automatic Face and Gesture Recognition, Zurich, June.

58 citations


Proceedings ArticleDOI
31 Aug 1995
TL;DR: A preliminary study also confirms that a similar DBNN recognizer can effectively recognize palms, which could potentially offer a much more reliable biometric feature.
Abstract: This paper proposes a face/palm recognition system based on decision-based neural networks (DBNN). The face recognition system consists of three modules. First, the face detector finds the location of a human face in an image. The eye localizer determines the positions of both eyes in order to generate meaningful feature vectors. The facial region proposed contains eyebrows, eyes, and nose, but excludes the mouth. (Eye-glasses will be permissible.) Lastly, the third module is a face recognizer. The DBNN can be effectively applied to all three modules. It adopts a hierarchical network structure with nonlinear basis functions and a competitive credit-assignment scheme. The paper demonstrates its successful application to face recognition on both the public (FERET) and in-house (SCR) databases. In terms of speed, given the extracted features, the training phase for 100-200 persons would take less than one hour on a Sparc10. The whole recognition process (including eye localization, feature extraction, and classification using DBNN) may consume only a fraction of a second on a Sparc10. Experiments on three different databases all demonstrated high recognition accuracies. A preliminary study also confirms that a similar DBNN recognizer can effectively recognize palms, which could potentially offer a much more reliable biometric feature.

52 citations
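The "competitive credit-assignment" idea behind DBNN-style training can be illustrated with a minimal decision-based learning rule: class prototypes are updated only when the network's decision is wrong, reinforcing the true class and anti-reinforcing the wrong winner. This is a toy sketch under simplified assumptions (one prototype per class, distance-based discriminants), not Kung et al.'s actual network:

```python
import numpy as np

def discriminant(x, w):
    # Nonlinear basis: negative squared distance to a class prototype.
    return -np.sum((x - w) ** 2)

def train_dbnn(X, y, n_classes, lr=0.1, epochs=50):
    """Decision-based training: update only on mistakes -- pull the true
    class prototype toward the sample, push the wrong winner away."""
    rng = np.random.default_rng(1)
    W = rng.normal(size=(n_classes, X.shape[1]))
    for _ in range(epochs):
        for x, t in zip(X, y):
            k = int(np.argmax([discriminant(x, w) for w in W]))
            if k != t:                       # competitive credit assignment
                W[t] += lr * (x - W[t])      # reinforce the true class
                W[k] -= lr * (x - W[k])      # anti-reinforce the wrong winner
    return W

# Two well-separated 2-D clusters standing in for face feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
W = train_dbnn(X, y, n_classes=2)
preds = [int(np.argmax([discriminant(x, w) for w in W])) for x in X]
accuracy = np.mean(np.array(preds) == y)
print(accuracy)   # near-perfect separation on this toy data
```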


Proceedings Article
20 Aug 1995
TL;DR: This paper proposes a scheme for expression-invariant face recognition that employs a fixed set of these "natural" basis functions to generate multiscale iconic representations of human faces, exploiting the dimensionality-reducing properties of PCA.
Abstract: Recent work regarding the statistics of natural images has revealed that the dominant eigenvectors of arbitrary natural images closely approximate various oriented derivative-of-Gaussian functions; these functions have also been shown to provide the best fit to the receptive field profiles of cells in the primate striate cortex. We propose a scheme for expression-invariant face recognition that employs a fixed set of these "natural" basis functions to generate multiscale iconic representations of human faces. Using a fixed set of basis functions obviates the need for recomputing eigenvectors (a step that was necessary in some previous approaches employing principal component analysis (PCA) for recognition) while at the same time retaining the redundancy-reducing properties of PCA. A face is represented by a set of iconic representations automatically extracted from an input image. The description thus obtained is stored in a topographically-organized sparse distributed memory that is based on a model of human long-term memory first proposed by Kanerva. We describe experimental results for an implementation of the method on a pipeline image processor that is capable of achieving near real-time recognition by exploiting the processor's frame-rate convolution capability for indexing purposes. 1 Introduction The problem of object recognition has been a central subject in the field of computer vision. An especially interesting albeit difficult subproblem is that of recognizing human faces. In addition to the difficulties posed by changing viewing conditions, computational methods for face recognition have had to confront the fact that faces are complex non-rigid stimuli that defy easy geometric characterizations and form a dense cluster in the multidimensional space of input images. One of the most important issues in face recognition has therefore been the representation of faces. 
Early schemes for face recognition utilized geometrical representations; prominent features such as eyes, nose, mouth, and chin were detected, and geometrical models of faces, given by feature vectors whose dimensions denoted, for instance, the relative positions of the facial features, were used for the purposes of recognition [Bledsoe, 1966; Kanade, 1973]. Recently, researchers have reported successful results using photometric representations, i.e. representations that are computed directly from the intensity values of the input image. Some prominent examples include face representations based on biologically-motivated Gabor filter "jets" [Buhmann et al., 1990] and randomly placed zeroth-order Gaussian kernels [Edelman et al.]. This paper explores the use of an iconic representation of human faces that exploits the dimensionality-reducing properties of PCA. However, unlike previous approaches employing …

49 citations
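The fixed "natural" basis described above, oriented derivative-of-Gaussian filters at several scales applied without recomputing any eigenvectors, can be sketched as follows. The filter size, scales, and orientation count are illustrative choices, not the paper's:

```python
import numpy as np

def dog_filter(size, sigma, theta):
    """First-derivative-of-Gaussian filter oriented at angle theta."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    u = xx * np.cos(theta) + yy * np.sin(theta)     # coordinate along theta
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    f = -u * g                                      # derivative of the Gaussian along u
    return f / np.abs(f).sum()

def iconic_code(patch, sigmas=(1.0, 2.0, 4.0), n_orient=4):
    """Fixed multiscale basis: inner product of each oriented DoG with the patch."""
    thetas = [k * np.pi / n_orient for k in range(n_orient)]
    return np.array([(dog_filter(patch.shape[0], s, t) * patch).sum()
                     for s in sigmas for t in thetas])

# A vertical step edge responds most strongly to the derivative taken across it.
patch = np.zeros((15, 15))
patch[:, 8:] = 1.0
code = iconic_code(patch)
print(code.shape)   # (12,): 3 scales x 4 orientations
```

In the paper such response vectors, computed at frame rate on a pipeline image processor, form the iconic representation stored in the sparse distributed memory.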


Journal ArticleDOI
TL;DR: Face silhouettes instead of intensity images are used for this research, which results in a reduction in both space and processing time; the results show that the approach is robust, accurate and reasonably fast.
Abstract: Face detection is integral to any automatic face recognition system. The goal of this research is to develop a system that performs the task of human face detection automatically in a scene. A system to correctly locate and identify human faces would find several applications; some examples are criminal identification and authentication in secure systems. This work presents a new approach based on principal component analysis. Face silhouettes instead of intensity images are used for this research, which results in a reduction in both space and processing time. A set of basis face silhouettes is obtained using principal component analysis. These are then used with a Hough-like technique to detect faces. The results show that the approach is robust, accurate and reasonably fast.

37 citations


Journal ArticleDOI
TL;DR: This paper aims at personal identification from facial images, where the face, a typical soft object, is difficult to handle with conventional shape-feature-based detection and recognition; the face in a scene is located by coarse-to-fine processing using a multiresolution mosaic, and the result is applied to recognition.
Abstract: To realize fully automated face image recognition, there must be thorough processing from the detection of the face in a scene through to recognition. It is very difficult, however, to apply detection or recognition processes based on shape features, as in conventional methods, to the face, which is a typical soft object. This paper aims at personal identification from the facial image. The face in a scene is sought by coarse-to-fine processing using only grey-level data, and the result is applied to recognition. First, the human head is detected in the scene using a multiresolution mosaic. Then the central part of the face is detected within the head region using the mosaic, and the precise position is determined based on histograms for the eye and nose regions. The search algorithm was applied to 100 personal images derived from motion images. Detection and localization succeeded in 97 percent of the cases, the exceptions being faces with eyes hidden by hair, for example. When the result of successful detection is applied to recognition, a recognition rate of 99 percent is obtained. With this method, a facial image of any size at any position in a scene can be detected. Other features are that the background need not be uniform and color data are not required, which greatly relaxes past requirements on the input image.

25 citations
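The multiresolution mosaic used above for coarse-to-fine head detection is essentially a pyramid of block averages of the grey-level image. A minimal sketch, with placeholder block sizes rather than the paper's:

```python
import numpy as np

def mosaic(image, block):
    """Reduce a grey-level image to a mosaic of block averages."""
    h, w = image.shape
    h2, w2 = h - h % block, w - w % block           # crop to a multiple of block
    blocks = image[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return blocks.mean(axis=(1, 3))

def coarse_to_fine(image, blocks=(16, 8, 4)):
    """Multiresolution mosaics, coarse to fine, for a head search."""
    return [mosaic(image, b) for b in blocks]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
levels = coarse_to_fine(img)
print([lvl.shape for lvl in levels])   # [(4, 4), (8, 8), (16, 16)]
```

A detector would scan the coarsest mosaic for head-like blob patterns, then refine candidate positions at the finer levels.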


Journal ArticleDOI
TL;DR: A model of representation that can be useful for recognition of faces in a database is presented, and may be used to define the minimum image quality required for retrieval of facial records at different confidence levels.

24 citations


Proceedings ArticleDOI
Sun-Yuan Kung, M. Fang, S. P. Liou, M. Y. Chiu, Jin-Shiuh Taur
23 Oct 1995
TL;DR: The DBNN based face recognizer has yielded very high recognition accuracies based on experiments on the ARPA-FERET and SCR-IM databases and is superior to that of multilayer perceptron (MLP).
Abstract: This paper proposes a face recognition system based on decision-based neural networks (DBNN). The DBNN adopts a hierarchical network structure with nonlinear basis functions and a competitive credit-assignment scheme. The face recognition system consists of three modules. First, a face detector finds the location of a human face in an image. Then an eye localizer determines the positions of both eyes to help generate size-normalized, reoriented, and reduced-resolution feature vectors. (The facial region proposed contains eyebrows, eyes, and nose, but excludes the mouth. Eye-glasses will be permissible.) The last module is a face recognizer. The DBNN can be effectively applied to all three modules. The DBNN based face recognizer has yielded very high recognition accuracies in experiments on the ARPA-FERET and SCR-IM databases. In terms of processing speeds and recognition accuracies, the performance of DBNN is superior to that of the multilayer perceptron (MLP). The training phase for 100 persons would take around one hour, while the recognition phase (including eye localization, feature extraction, and classification using DBNN) consumes only a fraction of a second (on a Sparc10).

23 citations


Proceedings ArticleDOI
05 Jul 1995
TL;DR: An algorithm for automated face area segmentation and facial feature extraction from input images with unconstrained backgrounds is described; the system can be applied not only to media conversion but also to human-machine interfaces.
Abstract: In this paper, we describe an algorithm for automated face area segmentation and facial feature extraction from input images with unconstrained backgrounds. The extracted feature points around the eyes, mouth, nose and facial contours are used for modifying facial images. The modified images are stored in the frame memory, and a human speaking scene is generated by continually changing the frames according to input text or speech sound. When a speaking voice is input, the vowels are recognised and the corresponding frames are recalled. This system can be applied not only to media conversion but also to human-machine interfaces.

21 citations


Journal ArticleDOI
TL;DR: A method of applying n-tuple recognition techniques to handwritten OCR, which involves scanning an n-Tuple classifier over a chain-code of the image, is described, offering superior recognition accuracy, as demonstrated by results on three widely used data sets.
Abstract: A method of applying n-tuple recognition techniques to handwritten OCR, which involves scanning an n-tuple classifier over a chain-code of the image, is described. The traditional advantages of n-tuple recognition, i.e. training and recognition speed, are retained, while offering superior recognition accuracy, as demonstrated by results on three widely used data sets.
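The scheme above, scanning an n-tuple classifier along a chain code of the character image, can be sketched with per-class lookup tables of n-tuples. The chain codes and class names below are invented toy data; a real n-tuple implementation would sample fixed tuple positions and use bit-addressed RAM tables:

```python
from collections import defaultdict

def ngrams(chain, n):
    """All n-tuples seen while scanning along a chain code."""
    return [tuple(chain[i:i + n]) for i in range(len(chain) - n + 1)]

class NTupleClassifier:
    """Toy n-tuple scheme: one lookup table per class counting which
    n-tuples occur in that class's chain codes."""
    def __init__(self, n=2):
        self.n = n
        self.tables = defaultdict(lambda: defaultdict(int))

    def train(self, chain, label):
        for t in ngrams(chain, self.n):
            self.tables[label][t] += 1

    def classify(self, chain):
        def score(label):
            return sum(self.tables[label][t] for t in ngrams(chain, self.n))
        return max(self.tables, key=score)

clf = NTupleClassifier(n=2)
clf.train([0, 0, 2, 2, 0, 0, 2, 2], "square-ish")   # 8-direction chain codes
clf.train([0, 1, 2, 3, 4, 5, 6, 7], "circle-ish")
print(clf.classify([0, 0, 2, 2, 0, 2]))   # square-ish
```

Training and classification are both table lookups, which is the source of the speed advantage the abstract mentions.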

Journal ArticleDOI
TL;DR: An approach to the integration of off-line and on-line recognition of unconstrained handwritten characters is presented, achieved by adapting an on-line recognition algorithm to off-line recognition based on high-quality thinning algorithms.

Proceedings ArticleDOI
21 Apr 1995
TL;DR: This symmetry measurement is used to locate the center line of faces and, afterward, to decide whether the face view is frontal or not; under more constrained conditions it can be a powerful tool in facial feature extraction.
Abstract: This paper presents a symmetry measurement based on the correlation coefficient. This symmetry measurement is used to locate the center line of faces and, afterward, to decide whether the face view is frontal or not. A 483-face image database obtained from the U.S. Army was used to test the algorithm. Though the performance of the algorithm is limited to 87%, this is due to the wide range of variations present in the database used to test our algorithm. Under more constrained conditions, such as uniform illumination, this technique can be a powerful tool in facial feature extraction. As regards its computational requirements, though this algorithm is very expensive, three independent optimizations are presented, two of which are successfully implemented and tested. Keywords: symmetry, symmetry measurement, face detection, frontal view face detection. 1 INTRODUCTION. This paper presents an algorithm based on symmetry measurements, useful for extracting information about symmetric objects from images. The proposed technique was developed and tested in the context of face recognition, but it might find applications in other areas of computer vision in which symmetry is involved. The motivation for this work is the high amount of symmetry present in frontal view faces; the claim is that this information can be useful in the estimation of face orientation, as well as in the extraction of feature points. In a face recognition system based on template matching, estimating the orientation of the probe faces helps in discarding templates that do not need to be searched, reducing the computational requirements and the execution time. On the other hand, this algorithm, as a preprocessing step, provides information usually assumed as input data in facial feature extraction techniques.
In a more general context, symmetry measurements combined with some knowledge can be used to verify the presence of (symmetrical) objects in a given image. Much work has been previously done in this area, from the development of symmetry operators for the detection of interesting points to fast algorithms for the location of axes of symmetry in images. However, the problem has been stated as the location of the regions with the largest amount of symmetry to guide the feature extraction, assuming their presence. In the context of face recognition, we use a symmetry measurement to decide if the probe image is a frontal view; for each case, we also provide an estimation of the tilt angle and the location of the center line.
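The correlation-coefficient symmetry measurement can be sketched directly: mirror one half of the image about a candidate centre column and correlate it with the other half; the centre line is the column that maximizes the score. This is a simplified reading of the method (no tilt-angle estimation), on synthetic data:

```python
import numpy as np

def symmetry_score(image, col):
    """Correlation coefficient between the left half and the mirrored
    right half of the image about a candidate centre column."""
    h, w = image.shape
    half = min(col, w - col - 1)
    if half < 1:
        return -1.0
    left = image[:, col - half:col].ravel()
    right = image[:, col + 1:col + 1 + half][:, ::-1].ravel()
    l = left - left.mean()
    r = right - right.mean()
    denom = np.sqrt((l @ l) * (r @ r))
    return float(l @ r / denom) if denom > 0 else 0.0

def centre_line(image):
    """The centre line is the column with the highest symmetry score."""
    return max(range(image.shape[1]), key=lambda c: symmetry_score(image, c))

# A synthetic image that is mirror-symmetric about column 10.
rng = np.random.default_rng(3)
half_img = rng.normal(size=(20, 10))
img = np.concatenate([half_img, rng.normal(size=(20, 1)), half_img[:, ::-1]], axis=1)
print(centre_line(img))   # 10
```

Scanning every column is what makes the naive algorithm expensive, which is presumably why the paper devotes space to optimizations.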

Proceedings ArticleDOI
31 Aug 1995
TL;DR: This work presents a distribution-based modeling cum example-based learning approach for detecting human faces in cluttered scenes and shows how explicitly modeling the distribution of certain "facelike" nonface patterns can help improve classification results.
Abstract: We present a distribution-based modeling cum example-based learning approach for detecting human faces in cluttered scenes. The distribution-based model captures complex variations in human face patterns that cannot be adequately described by classical pictorial template-based matching techniques or geometric model-based pattern recognition schemes. We also show how explicitly modeling the distribution of certain "facelike" nonface patterns can help improve classification results.

Proceedings ArticleDOI
20 Mar 1995
TL;DR: A fuzzy-based recognition method of human front faces using the trapezoidal membership function that absorbs the variation of feature values of the same person is proposed.
Abstract: A fuzzy-based recognition method for human front faces using the trapezoidal membership function is proposed. In the preprocessing step, we extract the face part from the background image by tracking face boundaries, under the assumption that the face part is located in the center of a captured image with a homogeneous background. Then, based on a priori knowledge of human faces, we extract five normalized features. In the recognition step, we propose a fuzzy-based algorithm that employs a trapezoidal membership function to absorb the variation of feature values of the same person. Computer simulation results with 80 test images of 20 persons show that the proposed method yields a higher recognition rate than conventional ones.
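The trapezoidal membership function at the heart of the method is easy to state: full membership on a plateau of typical feature values, with linear shoulders that tolerate deviation, which is how it absorbs within-person variation. The feature ranges below are made-up placeholders, not the paper's five normalized features:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on the plateau [b, c],
    linear on the shoulders. The flat top absorbs within-person variation."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def match_score(features, model):
    """Average membership of the probe's features in one person's model."""
    return sum(trapezoid(x, *p) for x, p in zip(features, model)) / len(features)

# Hypothetical person model: plausible ranges for two normalized features.
person_model = [(30, 35, 45, 50), (10, 15, 25, 30)]
print(match_score([40, 20], person_model))            # 1.0: both on the plateaus
print(round(match_score([33, 20], person_model), 3))  # 0.8: one feature on a shoulder
```

Recognition then picks the enrolled person whose model gives the probe the highest score.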

Proceedings ArticleDOI
22 Oct 1995
TL;DR: The authors implemented a task-level pipelined multicomputer system (RV860-PIPE) on which the proposed face recognition algorithm runs, and showed that the 95% confidence interval for the recognition rate ranges from 91% to 97%, typically with 1 frame/sec throughput.
Abstract: This paper proposes an efficient face recognition scheme for achieving real-time performance. The scheme is characterized by color-based facial region detection and normalized template matching for face identification. For real-time recognition, the authors also implemented a task-level pipelined multicomputer system (RV860-PIPE) on which the proposed face recognition algorithm runs. In experiments with realistic face images, the proposed system shows that the 95% confidence interval for the recognition rate ranges from 91% to 97%, typically with 1 frame/sec throughput.
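Normalized template matching, the identification step described above, reduces to computing a normalized correlation between a probe image and each enrolled template and picking the best; the normalization makes the score insensitive to brightness and contrast changes. A sketch with toy 8x8 "templates" (the gallery names are hypothetical):

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation: invariant to brightness and contrast shifts."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def identify(probe, gallery):
    """Return the gallery identity whose template best matches the probe."""
    return max(gallery, key=lambda name: ncc(probe, gallery[name]))

rng = np.random.default_rng(4)
gallery = {"alice": rng.normal(size=(8, 8)), "bob": rng.normal(size=(8, 8))}
probe = 2.0 * gallery["bob"] + 1.0        # same face under a gain/offset change
print(identify(probe, gallery))           # bob: NCC is 1.0 despite the shift
```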

Proceedings ArticleDOI
18 Aug 1995
TL;DR: A set of approaches is proposed for extracting internal and external facial features in grey-level images containing single or multiple faces of various sizes at different locations, without any restriction on the background.
Abstract: The extraction of facial features is a fundamental and crucial step in most face detection and recognition systems. Here a set of approaches is proposed for extracting internal and external facial features in grey-level images containing single or multiple faces of various sizes at different locations, without any restriction on the background. These approaches are distinctive in several aspects: the use of some rarely exploited photometric properties as the basis for facial feature extraction, a novel metric based on radial symmetry of gradient orientations, an inhibitory mechanism for extracting internal facial features, and a simple mechanism for external ones.

Proceedings ArticleDOI
05 Jul 1995
TL;DR: The results show that a neural network can learn to recognize faces based on sonar, and demonstrate that sonar can be a very effective input for neural networks that perform pattern recognition tasks.
Abstract: Face recognition has been a challenging task in academic research and in industrial applications. Defined by many distinct features (and the relative spatial positions of those features), faces are very complex targets. Small inter-facial differences combined with high intra-facial variability make face recognition a particularly complex task. Until now, face recognition systems have relied on input from the visual domain. The neural network system reported in this paper recognizes faces based on sonar input. Research on echolocating bats has demonstrated that bats can use sonar to accurately perceive detailed descriptions of objects. Previous research has shown that a sonar neural network system can recognize simple 3D targets regardless of orientation (Dror et al., 1995). In the present study we examine the effectiveness of using sonar input for more complex target recognition tasks. We use sonar echoes from faces (recorded in a variety of facial expressions) to train a neural network to recognize faces regardless of facial expression. After training, we examine the network's ability to generalize and correctly recognize the faces based on echoes from novel facial expressions which were not included in the training set. The performance of the network on these novel echoes was 100% correct. To ensure that our results were not due to the specific faces we used, we replicated our results two more times using different faces. Again, performance was almost perfect: 99.6% and 100%. The results show that a neural network can learn to recognize faces based on sonar, and demonstrate that sonar can be a very effective input for neural networks that perform pattern recognition tasks. © 1995 SPIE, The International Society for Optical Engineering.

Book ChapterDOI
TL;DR: A new approach to capturing images suited to face recognition tasks such as facial expression extraction is described, where face-like parts in an input image are first extracted by comparing the skin color regions and hair regions detected in the image with several pre-defined face pattern models using a fuzzy pattern matching method.
Abstract: This paper describes a new approach to capturing images suited to face recognition tasks such as the extraction of facial expression. Face-like parts in an input image are first extracted by comparing the skin color regions and hair regions detected in the image with several pre-defined 2-dimensional face pattern models, using a fuzzy pattern matching method. The 3-dimensional pose of the extracted face relative to the camera is estimated using the areas and centers of gravity of the skin color region and the hair region of the face; this estimate is then used to guide an active camera, changing its viewpoint so as to capture an image in which the face appears in the desired pose.

01 Jan 1995
TL;DR: In this paper, an approach for the recognition of 3D objects of arbitrary shape using geometric 3D models and geometric matching has been proposed, which is an alternative to the classical segmentation and primitive extraction approach and provides a perspective to escape some of its difficulties to deal with free-form shapes.
Abstract: This paper investigates a new approach to the recognition of 3D objects of arbitrary shape. The proposed solution follows the principle of model-based recognition using geometric 3D models and geometric matching. It is an alternative to the classical segmentation and primitive extraction approach, and offers a perspective for escaping some of that approach's difficulties in dealing with free-form shapes. The heart of this new approach is a recently published iterative closest point matching algorithm, which is applied repeatedly from a number of initial configurations. We examine methods for obtaining successful matching. Our investigations refer to a recognition system used for the pose estimation of 3D industrial objects in automatic assembly, with objects obtained from range data. The recognition algorithm works directly on the 3D coordinates of the object's surface as measured by a range finder. This makes our system independent of assumptions on the object's geometry. Test and model objects are sets of 3D points to be compared with the iterative closest point matching algorithm. Substantially, we propose a set of rules to choose promising initial configurations for the iterative closest point matching; an appropriate quality measure which permits reliable decisions; and a method to represent the object surface in a way that improves computing time and matching quality. Examples demonstrate the feasibility of this approach to free-form recognition.
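The iterative closest point loop at the heart of this approach alternates nearest-neighbour pairing with a least-squares rigid alignment (solved here via SVD, the standard Kabsch construction). A 2-D toy sketch, not the published algorithm's full machinery (no initial-configuration rules or quality measure):

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q
    (Kabsch construction via SVD; 2-D version)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=10):
    """Iterative closest point: pair each point of P with its nearest point
    in Q, solve for the rigid motion, apply it, repeat."""
    for _ in range(iters):
        d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)   # brute-force pairing
        R, t = best_rigid(P, Q[d.argmin(axis=1)])
        P = P @ R.T + t
    return P

# 2-D toy: a mildly rotated and translated grid snaps back onto the original.
g = np.arange(5) - 2.0
Q = np.array([[x, y] for x in g for y in g])
a = 0.1
R0 = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
P = Q @ R0.T + np.array([0.1, -0.05])
aligned = icp(P, Q)
print(np.abs(aligned - Q).max() < 1e-8)   # True: the pose was recovered
```

Because ICP only converges locally, the paper's rules for choosing promising initial configurations are what make it usable for recognition rather than just refinement.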

01 Sep 1995
TL;DR: In all cases, observers were much better at recognizing a face from the side learned; past results were also extended to explore the consistency of face recognizability for individual faces across different views and view transfer conditions.
Abstract: In two experiments we examined the ability of human observers to recognize faces from novel viewpoints. Previous work has indicated that there are marked declines in recognition performance when observers learn a particular view of a face and are asked to recognize the face from a novel viewpoint. We replicate these findings and extend them in several ways. First, we replicate the well-known 3/4 view advantage for recognition and extend it to show that this advantage is stronger than would be expected simply due to the 3/4 view being the center of the learned views. In the second experiment, we found little evidence for advantageous transfer to a symmetric view of the other side of the face; in all cases, observers were much better at recognizing a face from the side learned. Third, we extended past results to explore the consistency of face recognizability for individual faces across different views and view transfer conditions. We found only a modest relationship between the recognizability of individual faces in the different view conditions. These data give insight into the organization of memory for faces and its stability across changes in viewpoint. Many thanks are due to Niko Troje and Isabelle Bülthoff for the stimulus creation and processing. Alice O'Toole gratefully acknowledges support by the Alexander von Humboldt Stiftung and the hospitality of the Max-Planck-Institut für biologische Kybernetik. Please direct all correspondence to A. J. O'Toole, School of Human Development, GR4.1, The University of Texas at Dallas, Richardson, TX 75083-0688, TEL: (214) 883-2486, Email: otoole@utdallas.edu. This document is available as /pub/mpi-memos/TR-21.ps.Z via anonymous ftp from ftp.mpik-tueb.mpg.de or from the World Wide Web, http://www.mpik-tueb.mpg.de/projects/TechReport/list.html.

Proceedings ArticleDOI
05 Jul 1995
TL;DR: This paper shows a method of computer human face recognition which is robust against various expressions, using a two-stage neural network: the first network compensates for the expressed emotion and the second performs the recognition.
Abstract: This paper shows a method of computer human face recognition which is robust against various expressions of human faces. The key points are: (1) a method to characterize human faces by a limited number of parameters; (2) use of a two-stage neural network, where the first network compensates for the expressed emotion and the second performs the recognition. The proposed method shows reasonable performance through several experiments.

Dissertation
01 Jan 1995
TL;DR: The goal of this thesis was to improve the recognition of faces by using color, starting from the limitations of the eigenface method as applied to grey-scale images and correcting for illumination differences through the use of color.
Abstract: The machine recognition of faces is useful for many commercial and law enforcement applications; two typical examples are security systems and mug-shot matching. A real-time method which has been developed in the last few years is the eigenface method of recognition. The eigenface method uses the first few principal components (the eigenfaces) of the database images to characterize the known faces. Images are classified by their weights; the weights are found by projecting each image onto the eigenfaces. The goal of this thesis was to improve the recognition of faces by using color. We started by looking at the limitations of the eigenface method as applied to grey-scale images. Next, color ratios, chromaticities and color-band-normalized images were used. Images were compared using both the eigenface method and a direct picture-to-picture comparison. Last but not least, a method was developed using color to correct for illumination direction when there are gross differences in illumination between two images. For similar images, i.e. images in which there was little variation in head size, orientation or illumination, the eigenface method with grey-scale images performed very well, with a recognition rate of 95%. Of the color representations that were tried, only color-band-normalized images performed as well as the eigenface method on grey-scale images. When there was a gross change in the illumination, the performance of the eigenface method declined to a recognition rate of 73%. Correcting for the illumination differences through the use of color allowed reliable recognition independent of illumination.
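The eigenface pipeline the thesis builds on, computing principal components of the gallery, representing each image by its projection weights, and classifying by nearest weight vector, can be sketched as follows. The 4x4 "faces" and names are toy placeholders, not the thesis's data:

```python
import numpy as np

def train_eigenfaces(images, k):
    """Mean face plus the first k eigenfaces of the gallery images."""
    X = np.asarray(images, dtype=float).reshape(len(images), -1)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(image, mean, faces):
    """Weight vector: the projection of an image onto the eigenfaces."""
    return faces @ (np.ravel(image) - mean)

# Toy gallery: three "faces" as 4x4 grey-scale patterns.
rng = np.random.default_rng(6)
gallery = {name: rng.normal(size=(4, 4)) for name in ("ann", "ben", "cho")}
mean, faces = train_eigenfaces(list(gallery.values()), k=2)
gallery_w = {n: project(im, mean, faces) for n, im in gallery.items()}

probe = gallery["ben"] + rng.normal(scale=0.05, size=(4, 4))   # noisy re-capture
pw = project(probe, mean, faces)
match = min(gallery_w, key=lambda n: np.linalg.norm(pw - gallery_w[n]))
print(match)   # ben
```

The thesis's contribution sits upstream of this pipeline: feeding it color-band-normalized or illumination-corrected images instead of raw grey-scale ones.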

01 Jan 1995
TL;DR: A method to characterize human faces by a limited number of parameters is shown, which is robust against various expressions of human faces; a two-stage neural network is used, the first stage compensating for the emotion and the second performing the recognition.
Abstract: This paper shows a method of computer human face recognition which is robust against various expressions of human faces. The key points are: (1) a method to characterize human faces by a limited number of parameters; (2) use of a two-stage neural network, where the first network compensates for the emotion and the second performs the recognition. The proposed method shows reasonable performance through several experiments.


Proceedings ArticleDOI
07 Jun 1995
TL;DR: A new facial feature point extraction method for an image synthesizing system that works successfully with images of more than 1000 Japanese and 500 Americans is reported.
Abstract: This paper reports a new facial feature point extraction method for an image synthesizing system. The method works successfully with images of more than 1000 Japanese and 500 American subjects. A sample application is reported.

Proceedings ArticleDOI
13 Oct 1995
TL;DR: A simple method to overcome the problem of character recognition by matched filtering without the use of a liquid gate when the object to be recognized is on a transparency.
Abstract: Character recognition by matched filtering is almost impossible to accomplish without the use of a liquid gate when the object to be recognized is on a transparency. This paper suggests a simple method to overcome this problem. A practical application to fingerprint recognition is discussed.