
Showing papers in "Pattern Recognition in 2005"


Journal ArticleDOI
TL;DR: This paper surveys and summarizes previous works that investigated the clustering of time series data in various application domains, including general-purpose clustering algorithms commonly used in time series clustering studies.
Abstract: Time series clustering has been shown effective in providing useful information in various domains. There seems to be an increased interest in time series clustering as part of the effort in temporal data mining research. To provide an overview, this paper surveys and summarizes previous works that investigated the clustering of time series data in various application domains. The basics of time series clustering are presented, including general-purpose clustering algorithms commonly used in time series clustering studies, the criteria for evaluating the performance of the clustering results, and the measures to determine the similarity/dissimilarity between two time series being compared, whether in the form of raw data, extracted features, or model parameters. Past research is organized into three groups depending upon whether it works directly with the raw data (in either the time or frequency domain), indirectly with features extracted from the raw data, or indirectly with models built from the raw data. The uniqueness and limitations of previous research are discussed and several possible topics for future research are identified. Moreover, the areas to which time series clustering has been applied are also summarized, including the sources of data used. It is hoped that this review will serve as a stepping stone for those interested in advancing this area of research.
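
As a concrete illustration of the raw-data similarity measures the survey covers, the sketch below computes a dynamic time warping distance between two series. It is a minimal quadratic-time implementation written for this summary (no warping-window constraint), not code from the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Minimal dynamic time warping distance between two raw time series,
    one of the similarity measures commonly used when clustering works
    directly on the raw data."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 60)
print(dtw_distance(np.sin(t), np.sin(t + 0.5)),   # similar, phase-shifted series
      dtw_distance(np.sin(t), np.cos(3 * t)))     # dissimilar series
```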

2,336 citations


Journal ArticleDOI
TL;DR: A study of the performance of different normalization techniques and fusion rules in the context of a multimodal biometric system based on the face, fingerprint and hand-geometry traits of a user found that applying the min-max, z-score, and tanh normalization schemes followed by a simple sum-of-scores fusion method results in better recognition performance than other methods.
Abstract: Multimodal biometric systems consolidate the evidence presented by multiple biometric sources and typically provide better recognition performance compared to systems based on a single biometric modality. Although information fusion in a multimodal system can be performed at various levels, integration at the matching score level is the most common approach due to the ease in accessing and combining the scores generated by different matchers. Since the matching scores output by the various modalities are heterogeneous, score normalization is needed to transform these scores into a common domain, prior to combining them. In this paper, we have studied the performance of different normalization techniques and fusion rules in the context of a multimodal biometric system based on the face, fingerprint and hand-geometry traits of a user. Experiments conducted on a database of 100 users indicate that the application of min-max, z-score, and tanh normalization schemes followed by a simple sum of scores fusion method results in better recognition performance compared to other methods. However, experiments also reveal that the min-max and z-score normalization techniques are sensitive to outliers in the data, highlighting the need for a robust and efficient normalization procedure like the tanh normalization. It was also observed that multimodal systems utilizing user-specific weights perform better compared to systems that assign the same set of weights to the multiple biometric traits of all users.
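
The normalization and fusion steps discussed above are easy to sketch. The NumPy snippet below is illustrative only: the toy scores are invented, and the tanh estimator here uses the plain mean and standard deviation rather than the robust Hampel estimators usually paired with it.

```python
import numpy as np

def min_max(scores):
    """Map scores linearly onto [0, 1] using the observed extremes."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score(scores):
    """Centre on the mean and scale by the standard deviation."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def tanh_norm(scores):
    """Tanh estimator; mean/std stand in for the robust Hampel estimators."""
    s = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)

# Toy raw scores from three matchers (face, fingerprint, hand geometry)
# over three authentication attempts.
raw = {"face": np.array([72.0, 55.0, 80.0]),
       "finger": np.array([0.31, 0.12, 0.44]),
       "hand": np.array([410.0, 390.0, 455.0])}

# Normalize each matcher's scores separately, then fuse per attempt
# with the simple sum-of-scores rule.
fused = sum(min_max(v) for v in raw.values())
print(fused)
```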

2,021 citations


Journal ArticleDOI
TL;DR: A hybrid learning algorithm is proposed which uses the differential evolutionary algorithm to select the input weights and Moore-Penrose (MP) generalized inverse to analytically determine the output weights.
Abstract: Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004], a novel learning algorithm much faster than the traditional gradient-based learning algorithms, was proposed recently for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed which uses the differential evolutionary algorithm to select the input weights and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach is able to achieve good generalization performance with much more compact networks.
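
A minimal sketch of the ELM core described above, assuming a tanh activation and toy regression data; the differential-evolution search over input weights that the paper adds is deliberately omitted, so this shows only the random-weight / pseudoinverse part.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, n_hidden=20):
    """Single-hidden-layer ELM: random input weights, analytic output weights
    via the Moore-Penrose pseudoinverse. The DE search over input weights that
    the paper proposes is not reproduced here."""
    W = rng.standard_normal((n_hidden, X.shape[1]))   # input weights
    b = rng.standard_normal(n_hidden)                 # hidden biases
    H = np.tanh(X @ W.T + b)                          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                      # MP generalized inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W.T + b) @ beta

# Toy regression: learn y = sin(x) from noisy samples.
X = np.linspace(0, np.pi, 50).reshape(-1, 1)
T = np.sin(X) + 0.05 * rng.standard_normal(X.shape)
W, b, beta = elm_fit(X, T, n_hidden=15)
print(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```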

734 citations


Journal ArticleDOI
TL;DR: The tests show that the extracted features are independent of sensor location, invariant to the individual's state of anxiety, and unique to an individual.
Abstract: The electrocardiogram (ECG also called EKG) trace expresses cardiac features that are unique to an individual. The ECG processing followed a logical series of experiments with quantifiable metrics. Data filters were designed based upon the observed noise sources. Fiducial points were identified on the filtered data and extracted digitally for each heartbeat. From the fiducial points, stable features were computed that characterize the uniqueness of an individual. The tests show that the extracted features are independent of sensor location, invariant to the individual's state of anxiety, and unique to an individual.

652 citations


Journal ArticleDOI
TL;DR: Experimental results on the Concordia University CENPARMI database of handwritten Arabic numerals and the Yale face database show that the recognition rate is far higher than that of algorithms adopting a single feature or existing fusion algorithms.
Abstract: A new method of feature extraction, based on feature fusion, is proposed in this paper according to the idea of canonical correlation analysis (CCA). First, the theoretical framework for using CCA in pattern recognition is described and discussed. The process can be explained as follows: extract two groups of feature vectors from the same pattern; establish the correlation criterion function between the two groups of feature vectors; and extract their canonical correlation features to form an effective discriminant vector for recognition. Then, the problem of finding the canonical projection vectors is solved when the two total scatter matrices are singular, so that the method fits the case of high-dimensional, small-sample-size data; in this sense, the applicable range of CCA is extended. Finally, the inherent essence of this method as used in recognition is analyzed further in theory. Experimental results on the Concordia University CENPARMI database of handwritten Arabic numerals and the Yale face database show that the recognition rate is far higher than that of algorithms adopting a single feature or existing fusion algorithms.
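
The fusion idea can be sketched with scikit-learn's CCA on synthetic data. The data, the number of components, and the two combination strategies shown are assumptions for illustration; the paper's handling of singular scatter matrices is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)

# Two feature groups extracted from the same patterns (hypothetical data).
n_samples = 200
X = rng.standard_normal((n_samples, 30))
Y = X[:, :10] @ rng.standard_normal((10, 20)) + 0.1 * rng.standard_normal((n_samples, 20))

# Canonical correlation analysis finds paired projections with maximal correlation.
cca = CCA(n_components=5)
Xc, Yc = cca.fit_transform(X, Y)

# Two common ways to fuse the canonical variates into one discriminant vector:
fused_concat = np.hstack([Xc, Yc])   # serial combination
fused_sum = Xc + Yc                  # parallel (summation) combination
print(fused_concat.shape, fused_sum.shape)
```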

469 citations


Journal ArticleDOI
TL;DR: The use of a combined neural network model to guide model selection for classification of electrocardiogram (ECG) beats achieved accuracy rates higher than those of a stand-alone neural network model.
Abstract: This paper illustrates the use of a combined neural network model to guide model selection for classification of electrocardiogram (ECG) beats. The ECG signals were decomposed into time-frequency representations using the discrete wavelet transform, and statistical features were calculated to depict their distribution. The first-level networks were implemented for ECG beat classification using the statistical features as inputs. To improve diagnostic accuracy, the second-level networks were trained using the outputs of the first-level networks as input data. Four types of ECG beats (normal beat, congestive heart failure beat, ventricular tachyarrhythmia beat, atrial fibrillation beat) obtained from the Physiobank database were classified with an accuracy of 96.94% by the combined neural network. The combined neural network model achieved accuracy rates higher than those of the stand-alone neural network model.
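
A hedged sketch of the feature extraction stage, using PyWavelets on a synthetic beat. The wavelet, decomposition depth, and choice of statistics are assumptions, and the two-level network stage is not shown.

```python
import numpy as np
import pywt

def dwt_statistical_features(beat, wavelet="db2", level=4):
    """Decompose one beat with the discrete wavelet transform and describe
    each subband by a few summary statistics, in the spirit of the paper's
    feature extraction stage (wavelet choice and statistics are assumptions)."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    feats = []
    for c in coeffs:                       # approximation + detail subbands
        feats.extend([np.mean(np.abs(c)),  # average magnitude
                      np.std(c),           # spread
                      np.max(np.abs(c))])  # peak value
    return np.array(feats)

# Toy beat: a noisy sine segment standing in for a real ECG beat.
rng = np.random.default_rng(2)
beat = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
print(dwt_statistical_features(beat).shape)
```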

329 citations


Journal ArticleDOI
TL;DR: 2DLDA provides a sequentially optimal image compression mechanism, compacting the discriminant information into the upper-left corner of the image, and suggests a feature selection strategy to select the most discriminative features from that corner.
Abstract: This paper develops a new image feature extraction and recognition method coined two-dimensional linear discriminant analysis (2DLDA). 2DLDA provides a sequentially optimal image compression mechanism, compacting the discriminant information into the upper-left corner of the image. Also, 2DLDA suggests a feature selection strategy to select the most discriminative features from that corner. 2DLDA is tested and evaluated using the AT&T face database. The experimental results show that 2DLDA is more effective and computationally more efficient than current LDA algorithms for face feature extraction and recognition.
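
A rough, one-sided sketch of a 2DLDA-style projection on toy image matrices; the paper's sequential row/column compression and corner-based feature selection are not reproduced, and all data and dimensions below are invented.

```python
import numpy as np

def two_d_lda(images, labels, n_components=5):
    """One-sided 2DLDA sketch: build image-level between/within-class scatter
    matrices and project each image matrix onto the leading discriminant
    directions."""
    images = np.asarray(images, dtype=float)          # shape (N, h, w)
    classes = np.unique(labels)
    global_mean = images.mean(axis=0)
    w = global_mean.shape[1]
    Sb = np.zeros((w, w))
    Sw = np.zeros((w, w))
    for c in classes:
        Ac = images[labels == c]
        mc = Ac.mean(axis=0)
        d = mc - global_mean
        Sb += len(Ac) * d.T @ d                        # between-class image scatter
        for A in Ac:
            e = A - mc
            Sw += e.T @ e                              # within-class image scatter
    # Discriminant directions: eigenvectors of Sw^{-1} Sb with largest eigenvalues.
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)[:n_components]
    W = evecs[:, order].real
    return np.stack([A @ W for A in images])           # (N, h, n_components)

# Toy data: 3 classes of random 20x15 "images".
rng = np.random.default_rng(3)
labels = np.repeat([0, 1, 2], 10)
images = rng.standard_normal((30, 20, 15)) + labels[:, None, None]
print(two_d_lda(images, labels).shape)
```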

309 citations


Journal ArticleDOI
TL;DR: A new thresholding technique that processes thresholds as type II fuzzy sets is introduced, along with a new measure of ultrafuzziness; experimental results using laser cladding images are provided.
Abstract: Image thresholding is a necessary task in some image processing applications. However, due to disturbing factors, e.g. non-uniform illumination, or inherent image vagueness, the result of image thresholding is not always satisfactory. In recent years, various researchers have introduced new thresholding techniques based on fuzzy set theory to overcome this problem. Regarding images as fuzzy sets (or subsets), different fuzzy thresholding techniques have been developed to remove the grayness ambiguity/vagueness during the task of threshold selection. In this paper, a new thresholding technique is introduced which processes thresholds as type II fuzzy sets. A new measure of ultrafuzziness is also introduced and experimental results using laser cladding images are provided.
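
A hedged sketch of threshold selection with a type II fuzzy set and an ultrafuzziness measure: a type I membership function is swept across the gray levels, blurred into an interval, and the position of maximum ultrafuzziness is taken as the threshold. The triangular membership, its bandwidth, and the synthetic bimodal image are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def ultrafuzziness_threshold(image, alpha=2.0, bandwidth=128):
    """Select a threshold by sweeping a type I membership across gray levels,
    widening it into the type II footprint [mu**alpha, mu**(1/alpha)], and
    maximising the resulting ultrafuzziness."""
    gray = np.asarray(image, dtype=float).ravel()
    levels = np.arange(256)
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    hist = hist / hist.sum()
    best_t, best_uf = 0, -1.0
    for t in range(256):
        mu = np.clip(1.0 - np.abs(levels - t) / bandwidth, 0.0, 1.0)  # type I membership
        upper, lower = mu ** (1.0 / alpha), mu ** alpha               # type II footprint
        uf = (hist * (upper - lower)).sum()                           # ultrafuzziness
        if uf > best_uf:
            best_t, best_uf = t, uf
    return best_t

# Synthetic bimodal image: the selected threshold should fall between the modes.
rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(70, 10, 5000), rng.normal(180, 15, 5000)]).clip(0, 255)
print(ultrafuzziness_threshold(img))
```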

297 citations


Journal ArticleDOI
TL;DR: This method is efficient, as it only uses simple operations such as parity checks and comparisons between average intensities, and effective, because detection is based on a hierarchical structure that ensures the accuracy of tamper localization.
Abstract: In this paper, we present an efficient and effective digital watermarking method for image tamper detection and recovery. Our method is efficient as it only uses simple operations such as parity checks and comparisons between average intensities. It is effective because the detection is based on a hierarchical structure so that the accuracy of tamper localization can be ensured. That is, if a tampered block is not detected in level-1 inspection, it will be detected in level-2 or level-3 inspection with a probability of nearly 1. Our method is also very storage effective, as it only requires a secret key and a public chaotic mixing algorithm to recover a tampered image. The experimental results demonstrate that the precision of tamper detection and localization is 99.6% and 100% after level-2 and level-3 inspection, respectively. The tamper recovery rate is better than 93% for an image that is less than half tampered. As compared with the method in Celik et al. [IEEE Trans. Image Process. 11(6) (2002) 585], our method is not only as simple and as effective in tamper detection and localization, it also provides the capability of tamper recovery at the cost of about 5 dB in watermarked-image quality.

278 citations


Journal ArticleDOI
TL;DR: The goal of this paper is to introduce a number of papers related to grammatical inference, some of which are essential and should constitute a common background to research in the area, whereas others specialize in particular problems or techniques but can be of great help on specific tasks.
Abstract: The field of grammatical inference (also known as grammar induction) is transversal to a number of research areas including machine learning, formal language theory, syntactic and structural pattern recognition, computational linguistics, computational biology and speech recognition. There is no uniform literature on the subject and one can find many papers with original definitions or points of view. This makes research in this subject very hard, mainly for a beginner or someone who does not wish to become a specialist but simply to find the most suitable ideas for their own research activity. The goal of this paper is to introduce a number of papers related to grammatical inference. Some of these papers are essential and should constitute a common background to research in the area, whereas others specialize in particular problems or techniques but can be of great help on specific tasks.

275 citations


Journal ArticleDOI
TL;DR: This work presents an approach that uses localized secondary features derived from relative minutiae information that is directly applicable to existing databases and balances the tradeoffs between maximizing the number of matches and minimizing total feature distance between query and reference fingerprints.
Abstract: Matching incomplete or partial fingerprints continues to be an important challenge today, despite the advances made in fingerprint identification techniques. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint has made this problem important from a commercial perspective, there is also considerable interest in processing partial and latent fingerprints obtained at crime scenes. When the partial print does not include structures such as core and delta, common matching methods based on alignment of singular structures fail. We present an approach that uses localized secondary features derived from relative minutiae information. A flow network-based matching technique is introduced to obtain one-to-one correspondence of secondary features. Our method balances the tradeoffs between maximizing the number of matches and minimizing total feature distance between query and reference fingerprints. A two-hidden-layer fully connected neural network is trained to generate the final similarity score based on minutiae matched in the overlapping areas. Since the minutia-based fingerprint representation is an ANSI-NIST standard [American National Standards Institute, New York, 1993], our approach has the advantage of being directly applicable to existing databases. We present results of testing on FVC2002's DB1 and DB2 databases.

Journal ArticleDOI
TL;DR: An efficient two-dimensional-to-three-dimensional integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination and the synthesized virtual faces significantly improve the accuracy of face recognition with changing PIE.
Abstract: Face recognition with variant pose, illumination and expression (PIE) is a challenging problem. In this paper, we propose an analysis-by-synthesis framework for face recognition with variant PIE. First, an efficient two-dimensional (2D)-to-three-dimensional (3D) integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related work, this framework has following advantages: (1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and (3) compared with other 3D reconstruction approaches, our proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. The extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with changing PIE.

Journal ArticleDOI
TL;DR: The method of wavelet preprocessed golden image subtraction (WGIS) has been developed for defect detection on patterned fabric or repetitive patterned texture and it can be concluded that the WGIS method provides the best detection result.
Abstract: The wavelet transform (WT) has been developed over 20 years and successfully applied in defect detection on plain (unpatterned) fabric. This paper is on the use of the wavelet transform to develop an automated visual inspection method for defect detection on patterned fabric. A method called direct thresholding (DT) based on WT detailed subimages has been developed. The golden image subtraction method (GIS) is also introduced. GIS is an efficient and fast method, which can segment out the defective regions on patterned fabric effectively. In this paper, the method of wavelet preprocessed golden image subtraction (WGIS) has been developed for defect detection on patterned fabric or repetitive patterned texture. This paper also presents a comparison of the three methods. It can be concluded that the WGIS method provides the best detection result. The overall detection success rate is 96.7% with 30 defect-free images and 30 defective patterned images for one common kind of patterned Jacquard fabric.

Journal ArticleDOI
TL;DR: This paper proposes a new colour invariant image representation based on an existing grey-scale image enhancement technique, histogram equalisation, and applies the method to an image indexing application, showing that it outperforms all previous invariant representations.
Abstract: Colour can potentially provide useful information for a variety of computer vision tasks such as image segmentation, image retrieval, object recognition and tracking. However, for it to be helpful in practice, colour must relate directly to the intrinsic properties of the imaged objects and be independent of imaging conditions such as scene illumination and the imaging device. To this end many invariant colour representations have been proposed in the literature. Unfortunately, recent work (Second Workshop on Content-based Multimedia Indexing) has shown that none of them provides good enough practical performance. In this paper we propose a new colour invariant image representation based on an existing grey-scale image enhancement technique: histogram equalisation. We show that, provided the rank ordering of sensor responses is preserved across a change in imaging conditions (lighting or device), histogram equalisation of each channel of a colour image renders it invariant to these conditions. We set out theoretical conditions under which rank ordering of sensor responses is preserved and we present empirical evidence which demonstrates that rank ordering is maintained in practice for a wide range of illuminants and imaging devices. Finally, we apply the method to an image indexing application and show that the method outperforms all previous invariant representations, giving close to perfect illumination invariance and very good performance across a change in device.
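
The rank-ordering argument can be demonstrated directly: the sketch below equalises each channel by normalised rank and checks invariance under a monotonic per-channel change, using synthetic data. It is a toy illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import rankdata

def equalize_channel(channel):
    """Rank-based histogram equalisation of one channel: each pixel is replaced
    by its normalised rank, i.e. the value of the empirical CDF at that pixel."""
    flat = channel.ravel()
    ranks = rankdata(flat, method="average")
    return ((ranks - 1) / (flat.size - 1)).reshape(channel.shape)

def invariant_representation(rgb):
    """Equalise each channel independently; the result depends only on the rank
    ordering of the sensor responses, so it is unchanged by any monotonic
    per-channel change of illumination or device response."""
    return np.stack([equalize_channel(rgb[..., c]) for c in range(3)], axis=-1)

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
relit = 0.5 * img ** 1.3 + 7.0          # a rank-preserving change of imaging conditions
print(np.allclose(invariant_representation(img), invariant_representation(relit)))
```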

Journal ArticleDOI
TL;DR: Experiments on the ORL and the UMIST face databases show that the new scheme outperforms the PCA and the conventional PCA+FLD schemes, not only in its computational efficiency, but also in its performance for the task of face recognition.
Abstract: This paper presents a new scheme of face image feature extraction, namely, the two-dimensional Fisher linear discriminant. Experiments on the ORL and the UMIST face databases show that the new scheme outperforms the PCA and the conventional PCA+FLD schemes, not only in its computational efficiency, but also in its performance for the task of face recognition.

Journal ArticleDOI
TL;DR: A new secret sharing scheme capable of protecting image data coded with B bits per pixel is introduced and analyzed in this paper, which allows for cost-effective cryptographic image processing of B-bit images over the Internet.
Abstract: A new secret sharing scheme capable of protecting image data coded with B bits per pixel is introduced and analyzed in this paper. The proposed input-agnostic encryption solution generates B-bit shares by combining bit-level decomposition/stacking with a {k,n}-threshold sharing strategy. Perfect reconstruction is achieved by performing decryption through simple logical operations in the decomposed bit-levels without the need for any postprocessing operations. The framework allows for cost-effective cryptographic image processing of B-bit images over the Internet.
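
A simplified sketch of the bit-level sharing idea, restricted to the (n, n) case with XOR so that perfect reconstruction is easy to verify; the paper's {k, n}-threshold construction is more general and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

def make_shares(image, n=3):
    """Simplified (n, n) bit-level sharing: n-1 random B-bit shares plus one
    share chosen so that all shares XOR back to the secret image. This only
    illustrates the bit-level decomposition / perfect-reconstruction idea."""
    image = np.asarray(image, dtype=np.uint8)
    shares = [rng.integers(0, 256, size=image.shape, dtype=np.uint8) for _ in range(n - 1)]
    last = image.copy()
    for s in shares:
        last ^= s                      # XOR acts on every bit plane at once
    shares.append(last)
    return shares

def reconstruct(shares):
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s                       # simple logical operations, no post-processing
    return out

secret = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
shares = make_shares(secret, n=3)
print(np.array_equal(reconstruct(shares), secret))
```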

Journal ArticleDOI
TL;DR: The experiments conducted on a bi-class problem show that the proposed methodology can adequately choose the SVM hyper-parameters using the empirical error criterion and it turns out that the criterion produces a less complex model with fewer support vectors.
Abstract: The proposed approach aims to optimize the kernel parameters and to efficiently reduce the number of support vectors, so that the generalization error can be reduced drastically. The proposed methodology suggests the use of a new model selection criterion based on the estimation of the probability of error of the SVM classifier. For comparison, we considered two more model selection criteria: GACV ('Generalized Approximate Cross-Validation') and the VC ('Vapnik-Chervonenkis') dimension. These criteria are algebraic estimates of upper bounds on the expected error. For the former, we also propose a new minimization scheme. The experiments conducted on a bi-class problem show that we can adequately choose the SVM hyper-parameters using the empirical error criterion. Moreover, it turns out that the criterion produces a less complex model with fewer support vectors. For multi-class data, the optimization strategy is adapted to the one-against-one data partitioning. The approach is then evaluated on images of handwritten digits from the USPS database.
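
As a rough stand-in for the model selection loop, the sketch below tunes the RBF kernel width and C with a cross-validated error estimate on a bi-class toy problem; the paper's empirical-error criterion, GACV, and VC-dimension bounds are not implemented.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hedged stand-in for the paper's criterion: a cross-validated error estimate
# is used to pick the RBF kernel width (gamma) and the regularisation constant C.
X, y = load_digits(n_class=2, return_X_y=True)        # a bi-class toy problem
grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
search.fit(X, y)
best = search.best_estimator_
print(search.best_params_, "support vectors:", best.n_support_.sum())
```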

Journal ArticleDOI
TL;DR: The proposed scheme is shown to significantly outperform the fusion approach that does not consider quality signals; a relative improvement of approximately 20% is obtained on the publicly available MCYT bimodal database.
Abstract: A novel score-level fusion strategy based on quality measures for multimodal biometric authentication is presented. In the proposed method, the fusion function is adapted every time an authentication claim is made, based on the estimated quality of the sensed biometric signals at that time. Experimental results combining written signatures and quality-labelled fingerprints are reported. The proposed scheme is shown to significantly outperform the corresponding fusion approach that does not take quality signals into account. In particular, a relative improvement of approximately 20% is obtained on the publicly available MCYT bimodal database.
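
One hedged way to picture quality-dependent fusion is a per-claim quality-weighted sum of normalised scores, as sketched below; the weighting rule and the numbers are illustrative assumptions, not the paper's fusion function.

```python
import numpy as np

def quality_weighted_fusion(scores, qualities):
    """Illustrative adaptation of the fusion function to signal quality: each
    matcher's (already normalised) score is weighted by the relative quality
    of its sample for this particular claim."""
    scores = np.asarray(scores, dtype=float)
    q = np.asarray(qualities, dtype=float)
    w = q / q.sum()          # weights re-estimated at every authentication claim
    return float(w @ scores)

# Signature matcher score with a good-quality sample, fingerprint score from a poor one.
print(quality_weighted_fusion(scores=[0.82, 0.35], qualities=[0.9, 0.3]))
```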

Journal ArticleDOI
TL;DR: This paper characterizes a user's identity through the simultaneous use of three major palmprint representations, achieving better performance than any single representation, and proposes a product-of-sum rule that achieves better performance than other fixed combination rules.
Abstract: Although several palmprint representations have been proposed for personal authentication, there is little agreement on which palmprint representation provides the best basis for reliable authentication. In this paper, we characterize a user's identity through the simultaneous use of three major palmprint representations and achieve better performance than any single representation. This paper also investigates the comparative performance of Gabor-, line- and appearance-based palmprint representations and their score- and decision-level fusion. The combination of various representations may not always lead to higher performance, as the features from the same image may be correlated. Therefore, we also propose a product-of-sum rule, which achieves better performance than other fixed combination rules. Our experimental results on a database of 100 users show a 34.56% improvement in performance (equal error rate) as compared to the case when features from a single palmprint representation are employed. The proposed usage of multiple palmprint representations, especially in a peg-free and non-contact imaging setup, achieves promising results and demonstrates its usefulness.
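
A minimal sketch of a product-of-sum combination of matcher scores; how the Gabor, line, and appearance scores are grouped and normalised in the paper is not reproduced, so the grouping and values below are assumptions.

```python
import numpy as np

def product_of_sum(score_groups):
    """Illustrative product-of-sum combination: scores are summed within each
    group and the group sums are then multiplied."""
    return float(np.prod([np.sum(g) for g in score_groups]))

# Hypothetical normalised matching scores from three palmprint representations.
gabor, line, appearance = [0.71, 0.68], [0.60], [0.75, 0.73]
print(product_of_sum([gabor, line, appearance]))
```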

Journal ArticleDOI
TL;DR: A novel shape similarity retrieval algorithm that can be used to match and recognize 2D objects using a new multi-resolution polygonal shape descriptor that is invariant to scale, rotation and translation.
Abstract: In this paper, we present a novel shape similarity retrieval algorithm that can be used to match and recognize 2D objects. The matching process uses a new multi-resolution polygonal shape descriptor that is invariant to scale, rotation and translation. The shape descriptor equally segments the contour of any shape, regardless of its complexity, and captures three features around its center, including the distance and slope relative to the center. All parameters are normalized relative to their maximum values. The shape matching algorithm applies the descriptor by linearly scanning a stored set of shapes and measuring similarity using elastic comparisons of shape segments. Similarity is measured with the sum-of-differences distance. The multi-resolution segmentation provides flexibility for applications with different time and space requirements while maintaining high accuracy, and the elastic matching adds an advantage when matching partially occluded shapes. We applied our algorithms on several test databases including the MPEG-7 shape core experiment and achieved the highest result reported, with a score of 84.33% for the MPEG-7 Part B similarity test.
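
A sketch of an equal-arc-length polygonal descriptor in the spirit of the one described above, on a toy contour; it captures only the distance and slope features and normalises them by their maxima, so it is an illustration rather than the paper's full descriptor or matcher.

```python
import numpy as np

def polygonal_descriptor(contour, n_segments=64):
    """Resample a closed contour into n_segments equally spaced points and
    describe each point by its distance and slope relative to the shape centre,
    normalised by the maxima (one of the paper's three features is omitted)."""
    contour = np.asarray(contour, dtype=float)
    closed = np.vstack([contour, contour[:1]])
    seg = np.diff(closed, axis=0)
    arclen = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    t = np.linspace(0, arclen[-1], n_segments, endpoint=False)
    x = np.interp(t, arclen, closed[:, 0])
    y = np.interp(t, arclen, closed[:, 1])
    cx, cy = x.mean(), y.mean()                       # shape centre
    dist = np.hypot(x - cx, y - cy)
    slope = np.arctan2(y - cy, x - cx)
    return dist / dist.max(), slope / np.abs(slope).max()

# Toy contour: an ellipse; scaling it leaves the normalised distance profile unchanged.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ellipse = np.c_[3 * np.cos(theta), 2 * np.sin(theta)]
d1, _ = polygonal_descriptor(ellipse)
d2, _ = polygonal_descriptor(5 * ellipse)
print(np.allclose(d1, d2))
```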

Journal ArticleDOI
TL;DR: It is shown that the use of an average deformation model leads to an improvement in the alignment between impressions originating from the same finger.
Abstract: The process of automatic fingerprint matching is affected by the nonlinear deformation introduced in the image during fingerprint sensing. Given several template impressions of a finger, we estimate the ''average'' deformation of each template impression by comparing it with the rest of the impressions of that finger. The average deformation is developed using the thin plate spline (TPS) model and is based on minutia point correspondences between pairs of fingerprint impressions. The estimated average deformation is utilized to pre-distort the minutiae points in the template image before matching it with the minutiae points in the query image. We show that the use of an average deformation model leads to a better alignment between the template and query minutiae points. An index of deformation is proposed for choosing the deformation model with the least variability arising from a set of template impressions corresponding to a finger. Our experimental data consists of 1600 fingerprints corresponding to 50 different fingers collected over a period of 2 weeks. It is shown that the average deformation model leads to an improvement in the alignment between impressions originating from the same finger.

Journal ArticleDOI
TL;DR: This article presents an overview of Probabilistic Automata (PA) and discrete Hidden Markov Models (HMMs), and aims at clarifying the links between them, and presents several learning models, which formalize the problem of PA induction or, equivalently, theproblem of HMM topology induction and parameter estimation.
Abstract: This article presents an overview of Probabilistic Automata (PA) and discrete Hidden Markov Models (HMMs), and aims at clarifying the links between them. The first part of this work concentrates on probability distributions generated by these models. Necessary and sufficient conditions for an automaton to define a probabilistic language are detailed. It is proved that probabilistic deterministic automata (PDFA) form a proper subclass of probabilistic non-deterministic automata (PNFA). Two families of equivalent models are described next. On one hand, HMMs and PNFA with no final probabilities generate distributions over complete finite prefix-free sets. On the other hand, HMMs with final probabilities and probabilistic automata generate distributions over strings of finite length. The second part of this article presents several learning models, which formalize the problem of PA induction or, equivalently, the problem of HMM topology induction and parameter estimation. These learning models include the PAC and identification with probability 1 frameworks. Links with Bayesian learning are also discussed. The last part of this article presents an overview of induction algorithms for PA or HMMs using state merging, state splitting, parameter pruning and error-correcting techniques.

Journal ArticleDOI
TL;DR: Experiments show that the proposed method for assisting in human identification using dental radiographs is effective for dental image classification and teeth segmentation, provides good results for separating each tooth into crown and root, and provides a good tool for human identification.
Abstract: This paper presents a system for assisting in human identification using dental radiographs. The goal of the system is to archive antemortem (AM) dental images and enable content-based retrieval of AM images that have similar teeth shapes to a given postmortem (PM) dental image. During archiving, the system classifies the dental images to bitewing, periapical, and panoramic views. It then segments the teeth and the bones in the bitewing images, separates each tooth into the crown and the root, and stores the contours of the teeth in the database. During retrieval, the proposed system retrieves from the AM database the images with the most similar teeth to the PM image based on Hausdorff distance measure between the teeth contours. Experiments on a small database show that our method is effective for dental image classification and teeth segmentation, provides good results for separating each tooth into crown and root, and provides a good tool for human identification.
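
The retrieval measure is easy to sketch with SciPy's directed Hausdorff distance; the toy contours below are invented and assumed to be pre-aligned.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def contour_distance(pm_contour, am_contour):
    """Symmetric Hausdorff distance between two tooth contours, the matching
    measure used at retrieval time (contours are assumed to be already aligned
    point sets of (x, y) coordinates)."""
    d_forward, _, _ = directed_hausdorff(pm_contour, am_contour)
    d_backward, _, _ = directed_hausdorff(am_contour, pm_contour)
    return max(d_forward, d_backward)

# Toy contours: the same outline with a small shift vs. a different shape.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
tooth = np.c_[np.cos(theta), 1.5 * np.sin(theta)]
same = tooth + 0.02
other = np.c_[np.cos(theta), 0.7 * np.sin(theta)]
print(contour_distance(tooth, same), contour_distance(tooth, other))
```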

Journal ArticleDOI
TL;DR: A system for automating that process by identifying people from dental X-ray images by retrieves the best matches from an antemortem (AM) database and developed a new method for teeth separation based on integral projection.
Abstract: Forensic odontology is the branch of forensics that deals with human identification based on dental features. In this paper, we present a system for automating that process by identifying people from dental X-ray images. Given a dental image of a postmortem (PM), the proposed system retrieves the best matches from an antemortem (AM) database. The system automatically segments dental X-ray images into individual teeth and extracts the contour of each tooth. Features are extracted from each tooth and are used for retrieval. We developed a new method for teeth separation based on integral projection. We also developed a new method for representing and matching teeth contours using signature vectors obtained at salient points on the contours of the teeth. During retrieval, the AM radiographs that have signatures closer to the PM are found and presented to the user. Matching scores are generated based on the distance between the signature vectors of AM and PM teeth. Experimental results on a small database of dental radiographs are encouraging.

Journal ArticleDOI
TL;DR: This work proposes a novel adaptive approach for character segmentation and feature vector extraction from seriously degraded images that can detect fragmented, overlapped, or connected characters and adaptively apply one of three algorithms without manual fine-tuning.
Abstract: This work proposes a novel adaptive approach for character segmentation and feature vector extraction from seriously degraded images. An algorithm based on the histogram automatically detects fragments and merges these fragments before segmenting the fragmented characters. A morphological thickening algorithm automatically locates reference lines for separating overlapped characters. A morphological thinning algorithm and a segmentation cost calculation automatically determine the baseline for segmenting connected characters. In short, our approach can detect fragmented, overlapped, or connected characters and adaptively apply one of three algorithms without manual fine-tuning. Seriously degraded images, such as license plate images taken from the real world, are used in the experiments to evaluate the robustness, flexibility and effectiveness of our approach. The feature vectors output by the system retain useful information more accurately for use as input data in an automatic pattern recognition system.
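
The histogram (projection) stage can be sketched as below on a toy binary plate image; the fragment merging and the morphological handling of overlapped and connected characters are not reproduced.

```python
import numpy as np

def segment_by_projection(binary_image, min_width=3):
    """Locate character boundaries from the vertical projection (column sums)
    of a binarised plate image: runs of non-empty columns become candidate
    character segments."""
    projection = binary_image.sum(axis=0)              # ink pixels per column
    is_ink = projection > 0
    segments, start = [], None
    for col, ink in enumerate(is_ink):
        if ink and start is None:
            start = col                                # character begins
        elif not ink and start is not None:
            if col - start >= min_width:
                segments.append((start, col))          # character ends
            start = None
    if start is not None:
        segments.append((start, len(is_ink)))
    return segments

# Toy "plate": three blocks of ink separated by blank columns.
img = np.zeros((10, 30), dtype=int)
img[:, 2:7] = img[:, 10:16] = img[:, 20:27] = 1
print(segment_by_projection(img))
```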

Journal ArticleDOI
TL;DR: A novel approach to the problem of automatic off-line signature verification and forgery detection based on fuzzy modeling that employs the Takagi-Sugeno (TS) model is proposed; the TS model with multiple rules is found to be better than the TS model with a single rule for detecting three types of forgeries.
Abstract: Automatic signature verification is a well-established and active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted with a box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to account for possible variations due to handwriting styles and moods. The membership functions constitute weights in the TS model. Optimizing the output of the TS model with respect to the structural parameters yields the solution for the parameters. We derive two TS models: one with a rule for each input feature (multiple rules) and one with a single rule for all input features. We find that the TS model with multiple rules is better than the TS model with a single rule for detecting three types of forgeries (random, skilled and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches, using the TS model with multiple rules for improved performance.

Journal ArticleDOI
TL;DR: The comprehensive experiments completed on ORL, Yale, and CNU (Chungbuk National University) face databases show improved classification rates and reduced sensitivity to variations between face images caused by changes in illumination and viewing directions.
Abstract: In this study, we are concerned with face recognition using the fuzzy fisherface approach and its fuzzy-set-based augmentation. The well-known fisherface method is relatively insensitive to substantial variations in light direction, face pose, and facial expression. This is accomplished by using both principal component analysis and Fisher's linear discriminant analysis. What makes most face recognition methods (including the fisherface approach) similar is the assumption that each face is equally typical of (relevant to) its corresponding class (category). We propose to incorporate a gradual level of class assignment, regarded as a membership grade, with the anticipation that such discrimination helps improve classification results. More specifically, when operating on feature vectors resulting from the PCA transformation, we complete a fuzzy K-nearest neighbor class assignment that produces the corresponding degrees of class membership. The comprehensive experiments completed on the ORL, Yale, and CNU (Chungbuk National University) face databases show improved classification rates and reduced sensitivity to variations between face images caused by changes in illumination and viewing directions. The performance is compared vis-a-vis other commonly used methods, such as eigenface and fisherface.
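
The fuzzy class-assignment step can be sketched with a fuzzy K-nearest-neighbor membership computation; the crisp initialisation of neighbour labels and the toy data are simplifying assumptions.

```python
import numpy as np

def fuzzy_knn_memberships(train_x, train_y, x, k=5, m=2.0):
    """Grade how typical a (PCA-projected) sample is of each class: membership
    grades are inverse-distance-weighted votes of the k nearest training
    samples, with the usual fuzzy weighting exponent 2/(m-1)."""
    d = np.linalg.norm(train_x - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + 1e-12)
    classes = np.unique(train_y)
    u = np.array([w[train_y[nn] == c].sum() for c in classes])
    return classes, u / u.sum()                        # membership grades sum to 1

rng = np.random.default_rng(7)
train_x = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
train_y = np.repeat([0, 1], 20)
print(fuzzy_knn_memberships(train_x, train_y, x=np.array([2.5, 2.5, 2.5])))
```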

Journal ArticleDOI
TL;DR: A novel illumination compensation algorithm, which can compensate for the uneven illuminations on human faces and reconstruct face images in normal lighting conditions and eliminate the influence of uneven illumination while retaining the shape information about a human face is proposed.
Abstract: This paper proposes a novel illumination compensation algorithm, which can compensate for the uneven illuminations on human faces and reconstruct face images in normal lighting conditions. A simple yet effective local contrast enhancement method, namely block-based histogram equalization (BHE), is first proposed. The resulting image processed using BHE is then compared with the original face image processed using histogram equalization (HE) to estimate the category of its light source. In our scheme, we divide the light source for a human face into 65 categories. Based on the category identified, a corresponding lighting compensation model is used to reconstruct an image that will visually be under normal illumination. In order to eliminate the influence of uneven illumination while retaining the shape information about a human face, a 2D face shape model is used. Experimental results show that, with the use of principal component analysis for face recognition, the recognition rate can be improved by 53.3% to 62.6% when our proposed algorithm for lighting compensation is used.
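
A hedged sketch of the block-based histogram equalisation (BHE) step on synthetic data; the block size and boundary handling are assumptions, and the light-source classification and 2D shape model stages are not shown.

```python
import numpy as np

def block_histogram_equalization(gray, block=16):
    """Split the image into small blocks and histogram-equalise each block
    independently, which enhances local contrast under uneven illumination."""
    gray = np.asarray(gray, dtype=float)
    out = np.zeros_like(gray)
    h, w = gray.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = gray[r:r + block, c:c + block]
            hist, edges = np.histogram(tile, bins=256, range=(0, 256))
            cdf = hist.cumsum() / tile.size
            out[r:r + block, c:c + block] = np.interp(tile, edges[:-1], cdf * 255.0)
    return out

rng = np.random.default_rng(8)
face = rng.integers(0, 256, size=(64, 64)).astype(float)
face[:, :32] *= 0.4                                    # simulate side lighting
print(block_histogram_equalization(face).shape)
```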

Journal ArticleDOI
TL;DR: Experimental results on classifying data of a surface inspection task and data sets from the UCI repository show that Bayesian network classifiers are competitive with selective k-NN classifiers concerning classification accuracy.
Abstract: In this paper Bayesian network classifiers are compared to the k-nearest neighbor (k-NN) classifier, which is based on a subset of features. This subset is established by means of sequential feature selection methods. Experimental results on classifying data of a surface inspection task and data sets from the UCI repository show that Bayesian network classifiers are competitive with selective k-NN classifiers concerning classification accuracy. The k-NN classifier performs well in the case where the number of samples for learning the parameters of the Bayesian network is small. Bayesian network classifiers outperform selective k-NN methods in terms of memory requirements and computational demands. This paper demonstrates the strength of Bayesian networks for classification.
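
The selective k-NN baseline can be sketched with scikit-learn's sequential forward selection; since the surface-inspection data are not public, a UCI-style dataset stands in, and the Bayesian network classifiers themselves are not implemented here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Selective k-NN: sequential forward feature selection wrapped around a k-NN
# classifier, evaluated by cross-validated accuracy.
X, y = load_breast_cancer(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5)
selector = SequentialFeatureSelector(knn, n_features_to_select=8,
                                     direction="forward", cv=5)
selector.fit(X, y)
X_sel = selector.transform(X)
print("selected features:", selector.get_support().nonzero()[0])
print("cv accuracy:", cross_val_score(knn, X_sel, y, cv=5).mean())
```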

Journal ArticleDOI
TL;DR: A new method using Gabor filters for character recognition in gray-scale images is proposed and has excellent performance on both low-quality machine-printed character recognition and cursive handwritten character recognition.
Abstract: A new method using Gabor filters for character recognition in gray-scale images is proposed in this paper. Features are extracted directly from gray-scale character images by Gabor filters which are specially designed from statistical information of character structures. An adaptive sigmoid function is applied to the outputs of the Gabor filters to achieve better performance on low-quality images. In order to enhance the discriminability of the extracted features, the positive and the negative real parts of the outputs from the Gabor filters are used separately to construct histogram features. Experiments show that the proposed method has excellent performance on both low-quality machine-printed character recognition and cursive handwritten character recognition.
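
A rough sketch of the feature construction: Gabor responses are computed at a few orientations and the positive and negative real parts are histogrammed separately. The filter parameters are illustrative rather than the statistically designed ones in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=8.0):
    """Real part of a Gabor filter at one orientation and wavelength."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def gabor_histogram_features(gray, n_orientations=4, n_bins=8):
    """Histogram the positive and the negative real parts of each Gabor
    response separately and concatenate the histograms into one feature vector."""
    feats = []
    for k in range(n_orientations):
        resp = convolve2d(gray, gabor_kernel(theta=k * np.pi / n_orientations),
                          mode="same", boundary="symm")
        pos, neg = np.clip(resp, 0, None), np.clip(-resp, 0, None)
        feats.append(np.histogram(pos, bins=n_bins)[0])
        feats.append(np.histogram(neg, bins=n_bins)[0])
    return np.concatenate(feats)

rng = np.random.default_rng(9)
char_img = rng.random((32, 32))            # stands in for a gray-scale character image
print(gabor_histogram_features(char_img).shape)
```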