Showing papers in "Pattern Recognition in 2004"
TL;DR: A framework to handle semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels, is presented and appears to generalize to other classification problems of the same nature.
Abstract: In classic pattern recognition problems, classes are mutually exclusive by definition. Classification errors occur when the classes overlap in the feature space. We examine a different situation, occurring when the classes are, by definition, not mutually exclusive. Such problems arise in semantic scene and document classification and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature.
2,161 citations
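The evaluation setup described above lends itself to example-based multi-label metrics. Below is a hedged Python sketch of Jaccard-style per-example accuracy, precision, and recall for multi-label predictions; it is illustrative only and not necessarily the exact metrics introduced in the paper, and the scene label sets are invented for the example.

```python
import numpy as np

def example_based_scores(true_sets, pred_sets):
    """Per-example multi-label accuracy (Jaccard), precision, and recall,
    averaged over all examples (a common convention, assumed here)."""
    acc, prec, rec = [], [], []
    for y, z in zip(true_sets, pred_sets):
        y, z = set(y), set(z)
        acc.append(len(y & z) / len(y | z) if y | z else 1.0)
        prec.append(len(y & z) / len(z) if z else 1.0)
        rec.append(len(y & z) / len(y) if y else 1.0)
    return float(np.mean(acc)), float(np.mean(prec)), float(np.mean(rec))

# Hypothetical scene labels: a field scene with a mountain, a beach, an urban street.
truth = [{"field", "mountain"}, {"beach"}, {"urban", "street"}]
preds = [{"field"}, {"beach", "sunset"}, {"urban", "street"}]
print(example_based_scores(truth, preds))   # (accuracy, precision, recall)
```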
TL;DR: This paper identifies some promising techniques for image retrieval according to standard principles and examines implementation procedures for each technique and discusses its advantages and disadvantages.
Abstract: More and more images have been generated in digital form around the world. There is a growing interest in finding images in large collections or from remote databases. In order to find an image, the image has to be described or represented by certain features. Shape is an important visual feature of an image. Searching for images using shape features has attracted much attention. There are many shape representation and description techniques in the literature. In this paper, we classify and review these important techniques. We examine implementation procedures for each technique and discuss its advantages and disadvantages. Some recent research results are also included and discussed in this paper. Finally, we identify some promising techniques for image retrieval according to standard principles.
1,910 citations
TL;DR: By applying an optimal pixel adjustment process to the stego-image obtained by the simple LSB substitution method, the image quality of the stego-image can be greatly improved with low extra computational complexity.
Abstract: In this paper, a data hiding scheme by simple LSB substitution is proposed. By applying an optimal pixel adjustment process to the stego-image obtained by the simple LSB substitution method, the image quality of the stego-image can be greatly improved with low extra computational complexity. The worst case mean-square-error between the stego-image and the cover-image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover-image. The obtained results also show a significant improvement with respect to a previous work.
1,586 citations
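As a rough illustration of the scheme described above, here is a minimal NumPy sketch of k-bit LSB substitution followed by an optimal-pixel-adjustment step that flips the (k+1)-th bit whenever doing so reduces the embedding error. The cover image and payload are random stand-ins, and the adjustment rule is the commonly stated formulation, assumed here rather than taken verbatim from the paper.

```python
import numpy as np

def embed_lsb_opap(cover, secret_bits_per_pixel, rng=None):
    """Embed k random secret bits per pixel by LSB substitution,
    then apply an optimal pixel adjustment step (illustrative sketch)."""
    k = secret_bits_per_pixel
    rng = np.random.default_rng(rng)
    cover = cover.astype(np.int32)
    secret = rng.integers(0, 2 ** k, size=cover.shape)     # hypothetical payload

    # Simple LSB substitution: replace the k least significant bits.
    stego = (cover >> k << k) | secret

    # Optimal pixel adjustment: flip the (k+1)-th bit when that shrinks the error.
    err = stego - cover
    stego = np.where((err > 2 ** (k - 1)) & (stego >= 2 ** k), stego - 2 ** k, stego)
    stego = np.where((err < -(2 ** (k - 1))) & (stego < 256 - 2 ** k), stego + 2 ** k, stego)
    return stego.astype(np.uint8), secret

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64))
stego, payload = embed_lsb_opap(cover, secret_bits_per_pixel=3, rng=1)
print("MSE:", float(np.mean((stego.astype(float) - cover) ** 2)))
# The k LSBs are untouched by the adjustment, so the payload is still recoverable.
assert np.array_equal(stego & 0b111, payload)
```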
TL;DR: A new and definitive classification of patterns for structured light sensors is presented; the technique, based on projecting a light pattern and viewing the illuminated scene from one or more points of view, is used for recovering the surface of objects.
Abstract: Coded structured light is considered one of the most reliable techniques for recovering the surface of objects. This technique is based on projecting a light pattern and viewing the illuminated scene from one or more points of view. Since the pattern is coded, correspondences between image points and points of the projected pattern can be easily found. The decoded points can be triangulated and 3D information is obtained. We present an overview of the existing techniques, as well as a new and definitive classification of patterns for structured light sensors. We have implemented a set of representative techniques in this field and present some comparative results. The advantages and constraints of the different patterns are also discussed.
1,283 citations
TL;DR: This tutorial performs a synthesis between the multiscale-decomposition-based image approach, the ARSIS concept, and a multisensor scheme based on wavelet decomposition, i.e. a multiresolution image fusion approach.
Abstract: The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a new image which is more suitable for human and machine perception or further image-processing tasks such as segmentation, feature extraction and object recognition. Different fusion methods have been proposed in the literature, including multiresolution analysis. This paper is an image fusion tutorial based on wavelet decomposition, i.e. a multiresolution image fusion approach. We can fuse images with the same or different resolution levels, e.g. from range sensing, visual CCD, infrared, thermal or medical imaging. The tutorial performs a synthesis between the multiscale-decomposition-based image approach (Proc. IEEE 87 (8) (1999) 1315), the ARSIS concept (Photogramm. Eng. Remote Sensing 66 (1) (2000) 49) and a multisensor scheme (Graphical Models Image Process. 57 (3) (1995) 235). Some image fusion examples illustrate the proposed fusion approach. A comparative analysis is carried out against classical existing strategies, including those of multiresolution.
1,187 citations
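A toy sketch of the wavelet-based fusion idea, using PyWavelets: decompose both source images, average the approximation band, keep the larger-magnitude detail coefficients, and reconstruct. The averaging and maximum-selection rules are common defaults assumed here for illustration, not necessarily the combination rules advocated in the tutorial.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_wavelet(img_a, img_b, wavelet="db2", levels=3):
    """Toy multiresolution fusion: average the approximation coefficients and
    keep the detail coefficient with the larger magnitude at every position."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)

    fused = [(ca[0] + cb[0]) / 2.0]                       # approximation band
    for da, db in zip(ca[1:], cb[1:]):                    # detail bands per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
a = rng.random((128, 128))                                # stand-ins for two registered images
b = rng.random((128, 128))
print(fuse_wavelet(a, b).shape)
```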
TL;DR: A large number of techniques to address the problem of text information extraction are classified and reviewed, benchmark data and performance evaluation are discussed, and promising directions for future research are pointed out.
Abstract: Text data present in images and video contain useful information for automatic annotation, indexing, and structuring of images. Extraction of this information involves detection, localization, tracking, extraction, enhancement, and recognition of the text from a given image. However, variations of text due to differences in size, style, orientation, and alignment, as well as low image contrast and complex background make the problem of automatic text extraction extremely challenging. While comprehensive surveys of related problems such as face detection, document analysis, and image & video indexing can be found, the problem of text information extraction is not well surveyed. A large number of techniques have been proposed to address this problem, and the purpose of this paper is to classify and review these algorithms, discuss benchmark data and performance evaluation, and to point out promising directions for future research.
927 citations
TL;DR: This paper proposes a novel two-factor authenticator based on iterated inner products between a tokenised pseudo-random number and the user-specific fingerprint feature, which is generated from an integrated wavelet and Fourier–Mellin transform, and hence produces a set of user-specific compact codes coined BioHashing.
Abstract: Human authentication is the security task whose job is to limit access to physical locations or computer networks only to those with authorisation. This is done by equipping authorised users with passwords or tokens, or by using their biometrics. Unfortunately, the first two suffer a lack of security as they are easily forgotten or stolen; even biometrics suffers from some inherent limitations and specific security threats. A more practical approach is to combine two or more authentication factors to reap benefits in security, convenience, or both. This paper proposes a novel two-factor authenticator based on iterated inner products between a tokenised pseudo-random number and the user-specific fingerprint feature, which is generated from an integrated wavelet and Fourier–Mellin transform, and hence produces a set of user-specific compact codes coined BioHashing. BioHashing is highly tolerant of data capture offsets, with the same user fingerprint data resulting in highly correlated bitstrings. Moreover, there is no deterministic way to get the user-specific code without having both the token with random data and the user fingerprint feature. This would protect us, for instance, against biometric fabrication, since changing the user-specific credential is as simple as changing the token containing the random data. BioHashing has significant functional advantages over sole biometrics, i.e. a zero equal error rate point and a clean separation of the genuine and imposter populations, thereby allowing elimination of false accept rates without suffering from an increased occurrence of false reject rates.
765 citations
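A minimal sketch of the BioHashing idea: a token seed generates orthonormal pseudo-random vectors, and inner products with the biometric feature vector are thresholded into bits. The feature vector below is a random stand-in for the wavelet/Fourier–Mellin fingerprint feature, and the QR orthonormalisation and zero threshold are assumptions for illustration rather than the paper's exact procedure.

```python
import numpy as np

def biohash(feature, token_seed, n_bits=64, threshold=0.0):
    """Toy BioHash-style code: project the biometric feature onto
    token-seeded orthonormal pseudo-random vectors and binarise."""
    rng = np.random.default_rng(token_seed)               # the user's token seeds the RNG
    basis, _ = np.linalg.qr(rng.standard_normal((feature.size, n_bits)))
    projections = basis.T @ feature                       # iterated inner products
    return (projections > threshold).astype(np.uint8)

feature = np.random.default_rng(42).standard_normal(256)  # stand-in biometric feature vector
code_a = biohash(feature, token_seed=7)
code_b = biohash(feature + 0.05 * np.random.default_rng(1).standard_normal(256), token_seed=7)
print("bit agreement, same user and token:", np.mean(code_a == code_b))
print("bit agreement, different token:    ", np.mean(code_a == biohash(feature, token_seed=8)))
```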
TL;DR: A cluster validity index and its fuzzification is described, which can provide a measure of goodness of clustering on different partitions of a data set, and results demonstrating the superiority of the PBM-index in appropriately determining the number of clusters are provided.
Abstract: In this article, a cluster validity index and its fuzzification is described, which can provide a measure of goodness of clustering on different partitions of a data set. The maximum value of this index, called the PBM-index, across the hierarchy provides the best partitioning. The index is defined as a product of three factors, maximization of which ensures the formation of a small number of compact clusters with large separation between at least two clusters. We have used both the k-means and the expectation maximization algorithms as underlying crisp clustering techniques. For fuzzy clustering, we have utilized the well-known fuzzy c-means algorithm. Results demonstrating the superiority of the PBM-index in appropriately determining the number of clusters, as compared to three other well-known measures, the Davies–Bouldin index, Dunn's index and the Xie–Beni index, are provided for several artificial and real-life data sets.
710 citations
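For reference, here is a sketch of the crisp PBM-style index in its commonly quoted form, PBM(K) = ((1/K) · (E1/EK) · DK)^2, where E1 is the scatter about the grand mean, EK the within-cluster scatter, and DK the largest distance between cluster centres; the constants and the fuzzified variant discussed in the paper may differ. The example runs scikit-learn's k-means on synthetic data with three well-separated clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

def pbm_index(X, labels, centers):
    """Crisp PBM-style validity index (commonly quoted form); larger is better."""
    grand_mean = X.mean(axis=0)
    e1 = np.linalg.norm(X - grand_mean, axis=1).sum()          # scatter with one cluster
    ek = sum(np.linalg.norm(X[labels == j] - c, axis=1).sum()
             for j, c in enumerate(centers))                   # within-cluster scatter
    dk = max(np.linalg.norm(a - b) for a in centers for b in centers)  # max centre separation
    k = len(centers)
    return ((1.0 / k) * (e1 / ek) * dk) ** 2

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in ((0, 0), (3, 0), (0, 3))])
for k in (2, 3, 4, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(pbm_index(X, km.labels_, km.cluster_centers_), 2))   # should peak at k = 3
```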
TL;DR: A review of the OCR work done on Indian language scripts and the scope of future work and further steps needed for Indian script OCR development is presented.
Abstract: Intensive research has been done on optical character recognition (OCR) and a large number of articles have been published on this topic during the last few decades. Many commercial OCR systems are now available in the market, but most of these systems work for Roman, Chinese, Japanese and Arabic characters. There is not a sufficient amount of work on Indian language character recognition, although there are 12 major scripts in India. In this paper, we present a review of the OCR work done on Indian language scripts. The review is organized into 5 sections. Sections 1 and 2 cover the introduction and the properties of Indian scripts. In Section 3, we discuss different methodologies in OCR development as well as research work done on Indian script recognition. In Section 4, we discuss the scope of future work and further steps needed for Indian script OCR development. In Section 5 we conclude the paper.
592 citations
TL;DR: In this paper, the authors implemented the matrix multiplication of a neural network to enhance the time performance of a text detection system using an ATI RADEON 9700 PRO board, which produced a 20-fold performance enhancement.
Abstract: A graphics processing unit (GPU) is used to accelerate an artificial neural network. It implements the matrix multiplication of a neural network to enhance the time performance of a text detection system. Preliminary results produced a 20-fold performance enhancement using an ATI RADEON 9700 PRO board. The parallelism of a GPU is fully utilized by accumulating many input feature vectors and weight vectors and then converting the many inner-product operations into one matrix operation. Further research areas include benchmarking the performance with various hardware and GPU-aware learning algorithms.
421 citations
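The core trick, stacking many inner products into one matrix multiplication, is easy to see on the CPU with NumPy, which hands the single large product to an optimized BLAS, analogous to what the GPU does for the text detector:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((1024, 256))   # many input feature vectors, one per row
weights = rng.standard_normal((256, 30))      # one layer's weight vectors, one per column

# One inner product at a time (what a naive CPU loop would do).
slow = np.array([[f @ weights[:, j] for j in range(weights.shape[1])] for f in features])

# The same work expressed as a single matrix multiplication, which the GPU
# (or any BLAS backend) executes far more efficiently.
fast = features @ weights

assert np.allclose(slow, fast)
print(fast.shape)   # (1024, 30): all inner products computed in one call
```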
TL;DR: The generalized projection function (GPF) is defined and it is shown that IPF, VPF, and HPF are all effective in eye detection, with HPF performing better than VPF, and VPF better than IPF.
Abstract: In this paper, the generalized projection function (GPF) is defined. Both the integral projection function (IPF) and the variance projection function (VPF) can be viewed as special cases of GPF. Another special case of GPF, i.e. the hybrid projection function (HPF), is developed through experimentally determining the optimal parameters of GPF. Experiments on three face databases show that IPF, VPF, and HPF are all effective in eye detection. Nevertheless, HPF is better than VPF, while VPF is better than IPF. Moreover, IPF is found to be more effective on occidental than on oriental faces, and VPF is more effective on oriental than on occidental faces. Analysis of the detections shows that this effect may be owed to the shadow of the noses and eyeholes of different races of people.
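A small sketch of the vertical projection functions involved: the integral projection (column mean), the variance projection (column variance), and a hybrid that mixes the two after normalisation. The convex-combination form and the mixing weight of 0.6 are assumptions used for illustration; the paper determines the hybrid's parameters experimentally from the generalized projection function.

```python
import numpy as np

def projection_functions(image, alpha=0.6):
    """Vertical projections over image columns: IPF = mean intensity,
    VPF = intensity variance, and an assumed hybrid mixing the two."""
    ipf = image.mean(axis=0)
    vpf = image.var(axis=0)
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)
    hpf = (1.0 - alpha) * norm(ipf) + alpha * norm(vpf)
    return ipf, vpf, hpf

img = np.random.default_rng(0).integers(0, 256, size=(64, 96)).astype(float)
ipf, vpf, hpf = projection_functions(img)
# Extremes of the projection curves are the landmarks used to locate eye boundaries.
print("hybrid projection extremes at columns:", int(np.argmin(hpf)), int(np.argmax(hpf)))
```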
TL;DR: Results show that both gaits are potential biometrics, with running being more potent than walking; a phase-weighted Fourier description gait signature is derived by automated non-invasive means.
Abstract: Gait enjoys advantages over other biometrics in that it can be perceived from a distance and is difficult to disguise. Current approaches are mostly statistical and concentrate on walking only. By analysing leg motion we show how we can recognise people not only by the walking gait, but also by the running gait. This is achieved by either of two new modelling approaches which employ coupled oscillators and the biomechanics of human locomotion as the underlying concepts. These models give a plausible method for data reduction by providing estimates of the inclination of the thigh and of the leg from the image data. Both approaches derive a phase-weighted Fourier description gait signature by automated non-invasive means. One approach is completely automated whereas the other requires specification of a single parameter to distinguish between walking and running. Results show that both gaits are potential biometrics, with running being more potent. By its basis in evidence gathering, this new technique can tolerate noise and low resolution.
TL;DR: An innovative watermarking scheme based on genetic algorithms (GA) in the transform domain is proposed; it is robust against watermarking attacks while also improving watermarked image quality with GA.
Abstract: An innovative watermarking scheme based on genetic algorithms (GA) in the transform domain is proposed. It is robust against watermarking attacks, which are commonly employed in the literature. In addition, the watermarked image quality is also considered. In this paper, we employ GA to optimize these two fundamentally conflicting requirements. Watermarking with GA is easy to implement. We also examine the effectiveness of our scheme by checking the fitness function in GA, which includes factors related to both robustness and invisibility. Simulation results show both the robustness under attacks and the improvement in watermarked image quality with GA.
TL;DR: It is argued that color and texture are separate phenomena that can, or even should, be treated individually.
Abstract: Current approaches to color texture analysis can be roughly divided into two categories: methods that process color and texture information separately, and those that consider color and texture a joint phenomenon. In this paper, both approaches are empirically evaluated with a large set of natural color textures. The classification performance of color indexing methods is compared to gray-scale and color texture methods, and to combined color and texture methods, in static and varying illumination conditions. Based on the results, we argue that color and texture are separate phenomena that can, or even should, be treated individually.
TL;DR: A semi-automatic contour extraction method is used to address the problem of fuzzy tooth contours caused by the poor image quality, using the contours of the teeth as the feature for matching.
Abstract: Forensic dentistry involves the identification of people based on their dental records, mainly available as radiograph images. Our goal is to automate this process using image processing and pattern recognition techniques. Given a postmortem radiograph, we search a database of antemortem radiographs in order to retrieve the closest match with respect to some salient features. In this paper, we use the contours of the teeth as the feature for matching. A semi-automatic contour extraction method is used to address the problem of fuzzy tooth contours caused by the poor image quality. The proposed method involves three stages: radiograph segmentation, pixel classification and contour matching. A probabilistic model is used to describe the distribution of object pixels in the image. Results of retrievals on a database of over 100 images are encouraging.
TL;DR: A comparison of normalization functions shows that moment-based functions outperform the dimension-based ones and that the aspect ratio mapping is influential, while the comparison of feature vectors shows that the improved feature extraction strategies outperform their baseline counterparts.
Abstract: The performance evaluation of various techniques is important for selecting the correct options in developing character recognition systems. In our previous works, we proposed aspect ratio adaptive normalization (ARAN) and evaluated the performance of state-of-the-art feature extraction and classification techniques. This time, we propose some improved normalization functions and direction feature extraction strategies and compare their performance with existing techniques. We compare ten normalization functions (seven based on dimensions and three based on moments) and eight feature vectors on three distinct data sources. The normalization functions and feature vectors are combined to produce eighty classification accuracies for each dataset. The comparison of normalization functions shows that moment-based functions outperform the dimension-based ones and that the aspect ratio mapping is influential. The comparison of feature vectors shows that the improved feature extraction strategies outperform their baseline counterparts. The gradient feature from the gray-scale image mostly yields the best performance, and the improved NCFE (normalization-cooperated feature extraction) features also perform well. The combined effects of normalization, feature extraction, and classification have yielded very high accuracies on well-known datasets.
TL;DR: A new method for localizing and recognizing text in complex images and videos is presented; it shows good performance when integrated in a sports video annotation system and a video indexing system within the framework of two European projects.
Abstract: Text embedded in images and videos represents a rich source of information for content-based indexing and retrieval applications. In this paper, we present a new method for localizing and recognizing text in complex images and videos. Text localization is performed in a two step approach that combines the speed of a focusing step with the strength of a machine learning based text verification step. The experiments conducted show that the support vector machine is more appropriate for the verification task than the more commonly used neural networks. To perform text recognition on the localized regions, we propose a new multi-hypotheses method. Assuming different models of the text image, several segmentation hypotheses are produced. They are processed by an optical character recognition (OCR) system, and the result is selected from the generated strings according to a confidence value computed using language modeling and OCR statistics. Experiments show that this approach leads to much better results than the conventional method that tries to improve the individual segmentation algorithm. The whole system has been tested on several hours of videos and showed good performance when integrated in a sports video annotation system and a video indexing system within the framework of two European projects.
TL;DR: Integrative Co-occurrence matrices are introduced as novel features for color texture classification and the existence of intensity independent pure color patterns is demonstrated.
Abstract: Integrative Co-occurrence matrices are introduced as novel features for color texture classification. The extended Co-occurrence notation allows the comparison between integrative and parallel color texture concepts. The information profit of the new matrices is shown quantitatively using the Kolmogorov distance and by extensive classification experiments on two datasets. Applying them to the RGB and the LUV color space the combined color and intensity textures are studied and the existence of intensity independent pure color patterns is demonstrated. The results are compared with two baselines: gray-scale texture analysis and color histogram analysis. The novel features improve the classification results up to 20% and 32% for the first and second baseline, respectively.
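A compact sketch of the integrative (cross-channel) idea: a co-occurrence matrix built from pairs of quantised values taken from two different colour channels at a fixed displacement. The quantisation level and displacement are arbitrary choices for illustration, and Haralick-style statistics would normally be computed from the resulting matrix afterwards.

```python
import numpy as np

def cross_cooccurrence(chan_a, chan_b, dx=1, dy=0, levels=16):
    """Cross-channel co-occurrence sketch: joint histogram of quantised values
    of channel A at (x, y) and channel B at (x + dx, y + dy)."""
    qa = (chan_a.astype(np.float64) * levels / 256).astype(int).clip(0, levels - 1)
    qb = (chan_b.astype(np.float64) * levels / 256).astype(int).clip(0, levels - 1)
    h, w = qa.shape
    src = qa[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = qb[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    mat = np.zeros((levels, levels))
    np.add.at(mat, (src.ravel(), dst.ravel()), 1)
    return mat / mat.sum()

rgb = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3))   # stand-in colour texture
m_rg = cross_cooccurrence(rgb[..., 0], rgb[..., 1])                 # R-G pairs
print(m_rg.shape, float(m_rg.sum()))
```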
TL;DR: It is argued that feature selection is an important problem in object detection and demonstrated that genetic algorithms (GAs) provide a simple, general, and powerful framework for selecting good subsets of features, leading to improved detection rates.
Abstract: Past work on object detection has emphasized the issues of feature extraction and classification, however, relatively less attention has been given to the critical issue of feature selection. The main trend in feature extraction has been representing the data in a lower dimensional space, for example, using principal component analysis (PCA). Without using an effective scheme to select an appropriate set of features in this space, however, these methods rely mostly on powerful classification algorithms to deal with redundant and irrelevant features. In this paper, we argue that feature selection is an important problem in object detection and demonstrate that genetic algorithms (GAs) provide a simple, general, and powerful framework for selecting good subsets of features, leading to improved detection rates. As a case study, we have considered PCA for feature extraction and support vector machines (SVMs) for classification. The goal is searching the PCA space using GAs to select a subset of eigenvectors encoding important information about the target concept of interest. This is in contrast to traditional methods selecting some percentage of the top eigenvectors to represent the target concept, independently of the classification task. We have tested the proposed framework on two challenging applications: vehicle detection and face detection. Our experimental results illustrate significant performance improvements in both cases.
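A rough, self-contained sketch of the idea on scikit-learn's digits data: project onto a PCA space, then let a small genetic algorithm choose which eigenvectors feed an SVM, scoring each bit-string by cross-validated accuracy. The population size, number of generations, and GA operators here are toy settings chosen for a quick demo, not the configuration used in the paper.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X, y = X[:600], y[:600]                                    # keep the demo quick
Z = PCA(n_components=30, random_state=0).fit_transform(X)  # PCA feature space

def fitness(mask):
    """Cross-validated SVM accuracy using only the selected eigenvectors."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(gamma="scale"), Z[:, mask], y, cv=3).mean()

rng = np.random.default_rng(0)
pop = rng.random((10, Z.shape[1])) < 0.5                   # random bit-string population
for _ in range(4):                                         # a few GA generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-5:]]                 # truncation selection
    cuts = rng.integers(1, Z.shape[1], size=5)
    children = np.array([np.concatenate([parents[i % 5][:c], parents[(i + 1) % 5][c:]])
                         for i, c in enumerate(cuts)])     # one-point crossover
    children ^= rng.random(children.shape) < 0.02          # bit-flip mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("eigenvectors kept:", int(best.sum()), "CV accuracy:", round(fitness(best), 3))
```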
TL;DR: An attempt is made to capture shape information of the iris by analyzing local intensity variations of an iris image: a set of one-dimensional intensity signals is constructed, and features that reflect to a large extent their various spatial modes are used as distinguishing features.
Abstract: As an emerging biometric for human identification, iris recognition has received increasing attention in recent years. This paper makes an attempt to reflect shape information of the iris by analyzing local intensity variations of an iris image. In our framework, a set of one-dimensional (1D) intensity signals is constructed to contain the most important local variations of the original 2D iris image. Gaussian-Hermite moments of such intensity signals reflect to a large extent their various spatial modes and are used as distinguishing features. A resulting high-dimensional feature vector is mapped into a low-dimensional subspace using Fisher linear discriminant, and then the nearest center classifier based on a cosine similarity measure is adopted for classification. Extensive experimental results show that the proposed method is effective and encouraging.
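A hedged sketch of local Gaussian-Hermite moments of a 1D signal, assuming the standard form in which the order-n moment is the correlation of the signal with H_n(t/sigma) * exp(-t^2 / (2 sigma^2)); the signal below is synthetic and stands in for one intensity signal taken from an unwrapped iris image.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def gaussian_hermite_moments(signal, orders=(1, 2, 3), sigma=3.0, radius=9):
    """Local Gaussian-Hermite moments of a 1D signal (assumed standard form)."""
    t = np.arange(-radius, radius + 1, dtype=float)
    feats = []
    for n in orders:
        coeffs = np.zeros(n + 1)
        coeffs[n] = 1.0                                    # select the Hermite polynomial H_n
        kernel = hermval(t / sigma, coeffs) * np.exp(-t ** 2 / (2 * sigma ** 2))
        feats.append(np.convolve(signal, kernel[::-1], mode="same"))
    return np.stack(feats)                                 # one row of local moments per order

signal = (np.sin(np.linspace(0, 8 * np.pi, 256))
          + 0.1 * np.random.default_rng(0).standard_normal(256))
print(gaussian_hermite_moments(signal).shape)              # (3, 256): orders 1..3 at every position
```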
TL;DR: In this paper, a novel algorithm of incremental principal component analysis (PCA) is presented, which is computationally efficient for large-scale problems as well as adaptable to reflect the variable state of a dynamic system.
Abstract: Principal Component Analysis (PCA) has been of great interest in computer vision and pattern recognition. In particular, incrementally learning a PCA model, which is computationally efficient for large-scale problems as well as adaptable to reflect the variable state of a dynamic system, is an attractive research topic with numerous applications such as adaptive background modelling and active object recognition. In addition, the conventional PCA, in the sense of least mean squared error minimisation, is susceptible to outlying measurements. To address these two important issues, we present a novel algorithm of incremental PCA, and then extend it to robust PCA. Compared with the previous studies on robust PCA, our algorithm is computationally more efficient. We demonstrate the performance of these algorithms with experimental results on dynamic background modelling and multi-view face modelling.
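The paper derives its own incremental and robust update rules; purely as a stand-in for the incremental part, the sketch below shows subspace learning from a simulated stream using scikit-learn's IncrementalPCA, which is updated batch by batch instead of being refit from scratch.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Stand-in for a video stream arriving batch by batch: a 5-D subspace plus noise.
rng = np.random.default_rng(0)
basis = rng.standard_normal((5, 100))
stream = rng.standard_normal((2000, 5)) @ basis + 0.05 * rng.standard_normal((2000, 100))

ipca = IncrementalPCA(n_components=5, batch_size=200)
for batch in np.array_split(stream, 10):                   # update the model as data arrives
    ipca.partial_fit(batch)

print("explained variance ratio:", np.round(ipca.explained_variance_ratio_, 3))
```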
TL;DR: A new thresholding technique based on two-dimensional Renyi's entropy is presented, which extends a method due to Sahoo et al. (1997) and includes a previously proposed global thresholding method due to Abutaleb (Pattern Recognition 47 (1989) 22).
Abstract: In this paper, we present a new thresholding technique based on two-dimensional Renyi's entropy. The two-dimensional Renyi's entropy was obtained from the two-dimensional histogram which was determined by using the gray value of the pixels and the local average gray value of the pixels. This new method extends a method due to Sahoo et al. (Pattern Recognition 30 (1997) 71) and includes a previously proposed global thresholding method due to Abutaleb (Pattern Recognition 47 (1989) 22). Further, our method extends a global thresholding method due to Chang et al. (IEEE Trans. Image Process. 4 (1995) 370) to the two-dimensional setting. The effectiveness of the proposed method is demonstrated by using examples from the real-world and synthetic images.
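A simplified sketch of the 2D entropic thresholding idea: build the joint histogram of (gray value, 3x3 local average), then search for the pair (s, t) that maximises the sum of the Renyi entropies of the two diagonal quadrants (background and foreground), ignoring the off-diagonal quadrants as is usual in 2D thresholding. The 64-bin quantisation, the entropy order alpha, and the quadrant convention are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def renyi_2d_threshold(image, alpha=0.7, bins=64):
    """Pick a gray-level threshold by maximising the sum of Renyi entropies
    of the background and foreground quadrants of the 2D histogram."""
    g = (image.astype(float) * bins / 256).astype(int).clip(0, bins - 1)
    a = (uniform_filter(image.astype(float), size=3) * bins / 256).astype(int).clip(0, bins - 1)
    hist = np.zeros((bins, bins))
    np.add.at(hist, (g.ravel(), a.ravel()), 1)
    p = hist / hist.sum()

    def renyi(block):
        w = block.sum()
        if w <= 0:
            return -np.inf
        q = block[block > 0] / w
        return np.log((q ** alpha).sum()) / (1.0 - alpha)

    best_score, best_st = -np.inf, (1, 1)
    for s in range(1, bins - 1):
        for t in range(1, bins - 1):
            score = renyi(p[:s, :t]) + renyi(p[s:, t:])    # background + foreground quadrants
            if score > best_score:
                best_score, best_st = score, (s, t)
    return best_st[0] * 256 // bins                        # back to the 0..255 range

rng = np.random.default_rng(0)
img = np.where(rng.random((128, 128)) < 0.4,
               rng.normal(60, 10, (128, 128)),
               rng.normal(180, 10, (128, 128))).clip(0, 255)   # synthetic bimodal image
print("estimated threshold:", renyi_2d_threshold(img))
```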
TL;DR: A new approach is developed, which allows the use of the k-means-type paradigm to efficiently cluster large data sets by using weighted dissimilarity measures for objects.
Abstract: One of the main problems in cluster analysis is the weighting of attributes so as to discover structures that may be present. By using weighted dissimilarity measures for objects, a new approach is developed, which allows the use of the k-means-type paradigm to efficiently cluster large data sets. The optimization algorithm is presented and the effectiveness of the algorithm is demonstrated with both synthetic and real data sets.
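A compact sketch of the weighted k-means idea: assignments use a weighted squared distance, and after each iteration every attribute weight shrinks as that attribute's within-cluster dispersion grows. The specific update rule (w_l proportional to the inverse of sum_t (D_l / D_t)^(1/(beta-1))) and beta = 2 are assumed from a common formulation and may differ from the paper's optimisation.

```python
import numpy as np

def weighted_kmeans(X, k, beta=2.0, iters=20, seed=0):
    """k-means with attribute weights updated from within-cluster dispersions."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    w = np.full(X.shape[1], 1.0 / X.shape[1])
    for _ in range(iters):
        d2 = (((X[:, None, :] - centers[None, :, :]) ** 2) * w ** beta).sum(axis=2)
        labels = d2.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
        disp = sum(((X[labels == j] - centers[j]) ** 2).sum(axis=0)
                   for j in range(k)) + 1e-12              # per-attribute dispersion D_l
        w = 1.0 / ((disp[:, None] / disp[None, :]) ** (1.0 / (beta - 1))).sum(axis=1)
    return labels, centers, w

rng = np.random.default_rng(1)
informative = np.vstack([rng.normal(m, 0.2, size=(100, 2)) for m in ((0, 0), (2, 2))])
noise = rng.normal(0, 3.0, size=(200, 1))                  # irrelevant, high-variance attribute
labels, _, w = weighted_kmeans(np.hstack([informative, noise]), k=2)
print("attribute weights:", np.round(w, 3))                # the noise column gets the smallest weight
```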
TL;DR: A new algorithm for determining the number of clusters in a given data set and a new validity index for measuring the "goodness" of clustering are presented.
Abstract: Clustering is an important research topic that has practical applications in many fields. It has been demonstrated that fuzzy clustering, using algorithms such as the fuzzy C-means (FCM), has clear advantages over crisp and probabilistic clustering methods. Like most clustering algorithms, however, FCM and its derivatives need the number of clusters in the given data set as one of their initializing parameters. The main goal of this paper is to develop an effective fuzzy algorithm for automatically determining the number of clusters. After a brief review of the relevant literature, we present a new algorithm for determining the number of clusters in a given data set and a new validity index for measuring the "goodness" of clustering. Experimental results and comparisons are given to illustrate the performance of the new algorithm.
TL;DR: The main characteristics of the proposed methods are image encryption, a first-stage compression based on frame differences, encryption of video whose compression error can be bounded pixelwise by a user-specified value, a very large number of encryption keys, and the ability to encrypt large blocks of any digital data.
Abstract: This paper presents a new method for image and video encryption and a first-stage lossy video compression based on frame differences before the encryption. The encryption methods are based on the SCAN methodology, which is a formal language-based two-dimensional spatial accessing methodology that can generate a very large number of scanning paths or space filling curves. The image encryption is performed by SCAN-based permutation of pixels and a substitution rule which together form an iterated product cipher. The video encryption is performed by first lossy compressing adjacent frame differences and then encrypting the compressed frame differences. The main characteristics of the proposed methods are image encryption, a first-stage compression based on frame differences, encryption of video whose compression error can be bounded pixelwise by a user-specified value, a very large number of encryption keys, and the ability to encrypt large blocks of any digital data. Results from the use of the methods proposed here are also provided.
TL;DR: New algorithms that perform clustering and feature weighting simultaneously and in an unsupervised manner are introduced and can be used in the subsequent steps of a learning system to improve its learning behavior.
Abstract: In this paper, we introduce new algorithms that perform clustering and feature weighting simultaneously and in an unsupervised manner. The proposed algorithms are computationally and implementationally simple, and learn a different set of feature weights for each identified cluster. The cluster dependent feature weights offer two advantages. First, they guide the clustering process to partition the data set into more meaningful clusters. Second, they can be used in the subsequent steps of a learning system to improve its learning behavior. An extension of the algorithm to deal with an unknown number of clusters is also proposed. The extension is based on competitive agglomeration, whereby the number of clusters is over-specified, and adjacent clusters are allowed to compete for data points in a manner that causes clusters which lose in the competition to gradually become depleted and vanish. We illustrate the performance of the proposed approach by using it to segment color images, and to build a nearest prototype classifier.
TL;DR: This paper presents a framework for selecting informative features with fuzzy-rough sets and demonstrates its application to complex systems monitoring.
Abstract: Q. Shen and R. Jensen, 'Selecting Informative Features with Fuzzy-Rough Sets and its Application for Complex Systems Monitoring,' Pattern Recognition, vol. 37, no. 7, pp. 1351-1363, 2004.
TL;DR: A new cluster validity index is proposed that determines the optimal partition and optimal number of clusters for fuzzy partitions obtained from the fuzzy c-means algorithm using an overlap measure and a separation measure between clusters.
Abstract: A new cluster validity index is proposed that determines the optimal partition and optimal number of clusters for fuzzy partitions obtained from the fuzzy c-means algorithm. The proposed validity index exploits an overlap measure and a separation measure between clusters. The overlap measure, which indicates the degree of overlap between fuzzy clusters, is obtained by computing an inter-cluster overlap. The separation measure, which indicates the isolation distance between fuzzy clusters, is obtained by computing a distance between fuzzy clusters. A good fuzzy partition is expected to have a low degree of overlap and a larger separation distance. Testing of the proposed index and nine previously formulated indexes on well-known data sets showed the superior effectiveness and reliability of the proposed index in comparison to other indexes.
TL;DR: Matching results on a database of 50 different fingers, with 200 impressions per finger, indicate that a systematic template selection procedure as presented here results in better performance than random template selection.
Abstract: A biometric authentication system operates by acquiring biometric data from a user and comparing it against the template data stored in a database in order to identify a person or to verify a claimed identity. Most systems store multiple templates per user in order to account for variations observed in a person's biometric data. In this paper we propose two methods to perform automatic template selection where the goal is to select prototype fingerprint templates for a finger from a given set of fingerprint impressions. The first method, called DEND, employs a clustering strategy to choose a template set that best represents the intra-class variations, while the second method, called MDIST, selects templates that exhibit maximum similarity with the rest of the impressions. Matching results on a database of 50 different fingers, with 200 impressions per finger, indicate that a systematic template selection procedure as presented here results in better performance than random template selection. The proposed methods have also been utilized to perform automatic template update. Experimental results underscore the importance of these techniques.
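A tiny sketch of the MDIST-style selection rule: from a matrix of pairwise match scores among one finger's impressions, keep the impressions with the highest average similarity to the rest. The score matrix below is random stand-in data, and DEND's clustering-based selection is not shown.

```python
import numpy as np

def select_templates_mdist(similarity, n_templates=2):
    """MDIST-style selection sketch: keep the impressions whose average
    similarity to the remaining impressions of the same finger is highest."""
    sim = np.array(similarity, dtype=float)
    np.fill_diagonal(sim, np.nan)                 # ignore self-similarity
    avg = np.nanmean(sim, axis=1)
    return np.argsort(avg)[::-1][:n_templates]

# Stand-in pairwise match scores among 6 impressions of one finger.
rng = np.random.default_rng(0)
scores = rng.uniform(0.3, 0.9, size=(6, 6))
scores = (scores + scores.T) / 2                  # make the score matrix symmetric
print("selected template indices:", select_templates_mdist(scores))
```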
TL;DR: A novel algorithm for the automatic classification of low-resolution palmprints is proposed, using a set of directional line detectors to extract the principal lines of the palm in terms of their characteristics and their definitions in two steps.
Abstract: This paper proposes a novel algorithm for the automatic classification of low-resolution palmprints. First the principal lines of the palm are defined using their position and thickness. Then a set of directional line detectors is devised. After that we use these directional line detectors to extract the principal lines in terms of their characteristics and their definitions in two steps: the potential beginnings ("line initials") of the principal lines are extracted and then, based on these line initials, a recursive process is applied to extract the principal lines in their entirety. Finally palmprints are classified into six categories according to the number of the principal lines and the number of their intersections. The proportions of these six categories (1-6) in our database containing 13,800 samples are 0.36%, 1.23%, 2.83%, 11.81%, 78.12% and 5.65%, respectively. The proposed algorithm has been shown to classify palmprints with an accuracy of 96.03%.