Proceedings ArticleDOI

Multi scale feature extraction and enhancement using SVD towards secure Face Recognition system

TL;DR: Singular Value Decomposition is used to deal with surrounding illumination, and wavelets are employed to aid the KPCA in capturing multi-scale features, thereby making the system robust to pose and illumination variation.
Abstract: Biometric devices provide a secure mechanism for gaining access. One such biometric feature is the face, and the system implemented here is a face recognition system. The classical face recognition system is implemented with Principal Component Analysis (PCA) and is successful. PCA is a linear method that extracts features in a lower-dimensional space and is severely affected by pose and surrounding illumination variation. To implement an effective face recognition system, pose variation must be considered; this problem is well addressed with Kernel PCA (nonlinear PCA). KPCA extracts features in a higher-dimensional space, thereby making the system robust to pose variation. Illumination variation, however, depends on the capture range of the front-end device and its surroundings and is not handled by KPCA. In this work Singular Value Decomposition (SVD) is used to deal with surrounding illumination, and wavelets are employed to aid the KPCA in capturing multi-scale features, thereby making the system robust to both pose and illumination variation. To show its performance, the proposed method is tested on the YaleB and ORL databases. The results demonstrate the impact of the method and are compared with PCA and KPCA.
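The abstract outlines a pipeline: SVD-based illumination equalization, wavelet decomposition for multi-scale reduction, and KPCA feature extraction. The sketch below is a hedged, minimal reading of that pipeline in Python, assuming grayscale NumPy images, PyWavelets, and scikit-learn; the function names, component counts, and the 1-NN matcher are illustrative assumptions, not the authors' exact implementation. The SVD illumination step itself is sketched under the SVE references further below.

```python
import numpy as np
import pywt
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

def multiscale_vector(img, wavelet="haar", level=2):
    """Keep only the low-frequency (LL) subband after `level` wavelet decompositions,
    reducing each 128x128 face to a compact multi-scale representation."""
    approx = img.astype(float)
    for _ in range(level):
        approx, _details = pywt.dwt2(approx, wavelet)
    return approx.ravel()

# train_imgs: cropped 128x128 faces, assumed already illumination-equalized via SVD
# (see the singular value equalization sketch under references [10]-[11] below).
def build_matcher(train_imgs, labels, n_components=60):
    """KPCA features on the wavelet-reduced faces, matched with a 1-NN classifier."""
    X = np.stack([multiscale_vector(im) for im in train_imgs])
    kpca = KernelPCA(n_components=n_components, kernel="poly", degree=2).fit(X)
    knn = KNeighborsClassifier(n_neighbors=1).fit(kpca.transform(X), labels)
    return kpca, knn
```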
Citations
Journal ArticleDOI
01 Jan 2013
TL;DR: Performance analysis of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) for face recognition was carried out on various current PCA and LDA based face recognition algorithms using standard public databases.
Abstract: Analysing the face recognition rate of various current face recognition algorithms is absolutely critical in developing new robust algorithms. In this paper we report a performance analysis of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) for face recognition. This analysis was carried out on various current PCA- and LDA-based face recognition algorithms using standard public databases. Among the PCA algorithms analysed, manual face localization on the ORL and SHEFFIELD databases with 100 components gives the best face recognition rate of 100%; the next best was a 99.70% recognition rate using PCA-based Immune Networks (PCA-IN) on the ORL database. Among the LDA algorithms analysed, Illumination Adaptive Linear Discriminant Analysis (IALDA) gives the best face recognition rate of 98.9% on the CMU PIE database; the next best was 98.125% using Fuzzy Fisherface through a genetic algorithm on the ORL database. The evaluation parameter for the study is the face recognition rate on various standard public databases. The remainder of the paper is organized as follows: Section II provides a brief overview of PCA, Section III presents the PCA algorithms analysed, Section IV provides a brief overview of LDA, Section V presents the LDA algorithms analysed, Section VI presents the performance analysis of the various PCA- and LDA-based algorithms, and finally Section VII draws the conclusion.
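As a concrete illustration of how such a recognition rate is measured, here is a minimal, hedged PCA (Eigenfaces) baseline with a nearest-neighbour classifier on the ORL/Olivetti faces, using scikit-learn. The 100-component setting mirrors the figure quoted above; the train/test split and the classifier are assumptions for illustration only, not the surveyed algorithms.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()                    # the ORL face database (40 subjects x 10 images)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.3, stratify=faces.target, random_state=0)

pca = PCA(n_components=100, whiten=True).fit(X_train)    # 100 eigenface components
knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)

rate = knn.score(pca.transform(X_test), y_test)           # face recognition rate
print(f"PCA (Eigenfaces) recognition rate: {rate:.2%}")
```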

41 citations


Cites methods from "Multi scale feature extraction and ..."

  • ...Kernel PCA (KPCA) [3] is a method widely used....

    [...]

Journal ArticleDOI
TL;DR: In this work, an NVIDIA GeForce GTX 1050 Ti graphics processing unit (GPU) is used to offload the segmentation process and part of the Laws texture feature calculations in a stratified squamous epithelium biopsy image classifier (SSE-BIC) from the CPU, enabling parallel processing; the results showed that the parallel implementation is about 13.04 times faster than the serial CPU implementation.
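The paper's CUDA kernels are not reproduced here; the following hedged CPU sketch (NumPy/SciPy) only shows what a Laws texture-energy computation looks like, with the inner convolution loop being the part a GPU implementation would parallelize. The mask set and window size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# 1-D Laws vectors; their outer products form the 5x5 texture masks
L5 = np.array([1, 4, 6, 4, 1], float)    # level
E5 = np.array([-1, -2, 0, 2, 1], float)  # edge
S5 = np.array([-1, 0, 2, 0, -1], float)  # spot
R5 = np.array([1, -4, 6, -4, 1], float)  # ripple

def laws_texture_energy(img, window=15):
    """Convolve the image with each 2-D Laws mask and smooth the absolute
    response into texture-energy maps (this loop is the GPU-friendly part)."""
    maps = []
    for a in (L5, E5, S5, R5):
        for b in (L5, E5, S5, R5):
            response = convolve(img.astype(float), np.outer(a, b), mode="reflect")
            maps.append(uniform_filter(np.abs(response), size=window))
    return np.stack(maps)                 # shape: (16, H, W)
```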

3 citations

Dissertation
01 Jan 2014
TL;DR: The proposed associative learning model can produce high learning performance in terms of combining heterogeneous data (face–speech) and opens possibilities to expand RL in the field of biometric authentication.
Abstract: We propose an associative learning model that can integrate facial images with speech signals to target a subject in a reinforcement learning (RL) paradigm. Through this approach, the rules of learning involve associating paired stimuli (stimulus–stimulus, i.e., face–speech), also known as predictor–choice pairs. Prior to a learning simulation, we extract the features of the biometrics used in the study. For facial features, we experiment with two approaches: principal component analysis (PCA)-based Eigenfaces and singular value decomposition (SVD). For speech features, we use wavelet packet decomposition (WPD). The experiments show that the PCA-based Eigenfaces feature extraction approach produces better results than SVD. We implement the proposed learning model using the Spike-Timing-Dependent Plasticity (STDP) algorithm, which depends on the time and rate of pre-post synaptic spikes. The key contribution of our study is the implementation of learning rules via STDP and firing rate in spatiotemporal neural networks based on the Izhikevich spiking model. We implement learning for response group association by following reward-modulated STDP in terms of RL, wherein the firing rate of the response groups determines the reward that will be given. We perform a number of experiments that use existing face samples from the Olivetti Research Laboratory (ORL) dataset and speech samples from TIDigits. After several experiments and simulations are performed to recognize a subject, the results show that the proposed learning model can associate the predictor (face) with the choice (speech) at optimum performance rates of 77.26% and 82.66% for training and testing, respectively. We also perform learning using real data; that is, an experiment is conducted on a sample of face–speech data collected in a manner similar to that of the initial data. The performance results are 79.11% and 77.33% for training and testing, respectively. Based on these results, the proposed learning model can produce high learning performance in terms of combining heterogeneous data (face–speech). This finding opens possibilities to expand RL in the field of biometric authentication.
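The spiking-network learning itself is too involved for a short snippet, but the two face feature extraction options the abstract compares (PCA-based Eigenfaces versus per-image SVD) can be sketched as below. This is a hedged illustration with assumed dimensions, not the dissertation's exact code.

```python
import numpy as np

def eigenface_features(train_faces, face, n_components=40):
    """Project a flattened face onto the leading eigenfaces of the training set."""
    X = train_faces.reshape(len(train_faces), -1).astype(float)
    mean = X.mean(axis=0)
    # Eigenfaces are the leading left singular vectors of the centred data matrix
    U, _, _ = np.linalg.svd((X - mean).T, full_matrices=False)
    return U[:, :n_components].T @ (face.ravel() - mean)

def svd_features(face, k=20):
    """Use the k largest singular values of the face image as a compact descriptor."""
    return np.linalg.svd(face.astype(float), compute_uv=False)[:k]
```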

1 citation

Journal Article
TL;DR: This paper presents a robust method of face recognition using Gabor feature extraction, kernel PCA and a K-NN classifier, which uses the ORL database.
Abstract: Face recognition is always a hot topic in research. In this paper, we present a robust method of face recognition using Gabor feature extraction, kernel PCA and a K-NN classifier. Gabor features are calculated for each face image, their polynomial kernel function is computed, and the result is applied directly to the K-NN classifier. The effectiveness of the proposed method is demonstrated by experimental results on a large number of test images, which show a good recognition rate. The proposed method uses the ORL database. Keywords: Gabor filter, Kernel Principal Component Analysis, K-NN Classifier, ORL Dataset, Polynomial Kernel Function, Cos Distance.
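A hedged sketch of the described chain (Gabor filter bank, polynomial-kernel PCA, K-NN with a cosine distance) using scikit-image and scikit-learn; the filter frequencies, orientations, and component counts are assumptions, since the paper does not list them here.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

def gabor_vector(img, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Flattened magnitude responses of a small Gabor filter bank."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, imag = gabor(img, frequency=f, theta=theta)
            feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)

# X_train / X_test: stacked gabor_vector(...) rows; y_train: ORL subject labels (loading assumed)
# kpca = KernelPCA(n_components=50, kernel="poly", degree=2).fit(X_train)
# knn = KNeighborsClassifier(n_neighbors=1, metric="cosine").fit(kpca.transform(X_train), y_train)
# predicted_ids = knn.predict(kpca.transform(X_test))
```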
References
Journal ArticleDOI
TL;DR: A generative appearance-based method for recognizing human faces under variation in lighting and viewpoint that exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images.
Abstract: We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.
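Only the subspace approximation step lends itself to a short sketch: given several images of one face in fixed pose under different lighting directions, the leading singular vectors span a low-dimensional subspace approximating its illumination cone, and a test image is assigned the identity whose subspace is closest. This is a hedged simplification; the shape/albedo reconstruction and pose sampling of the paper are not reproduced.

```python
import numpy as np

def illumination_basis(images, k=9):
    """Leading k left singular vectors of one subject's images (fixed pose, varying lighting)."""
    X = np.stack([im.ravel().astype(float) for im in images], axis=1)   # (pixels, n_images)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def distance_to_subspace(test_img, basis):
    """Residual after projecting a test image onto the subspace approximating the cone."""
    x = test_img.ravel().astype(float)
    return np.linalg.norm(x - basis @ (basis.T @ x))

# Recognition: build one basis per (subject, pose) and pick the smallest residual.
```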

5,027 citations


"Multi scale feature extraction and ..." refers methods in this paper

  • ...In the next step, the wavelet decomposition is applied to the entire database to reduce memory occupation and, prior to that, all the images are cropped [16] from uneven size to 128x128....

    [...]

Journal ArticleDOI
TL;DR: Through adopting a polynomial kernel, the principal components can be computed within the space spanned by high-order correlations of input pixels making up a facial image, thereby producing a good performance.
Abstract: Kernel principal component analysis (PCA) was previously proposed as a nonlinear extension of PCA. The basic idea is to first map the input space into a feature space via a nonlinear mapping and then compute the principal components in that feature space. This article adopts kernel PCA as a mechanism for extracting facial features. By adopting a polynomial kernel, the principal components can be computed within the space spanned by high-order correlations of input pixels making up a facial image, thereby producing good performance.
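A minimal sketch of the mechanism with a polynomial kernel: instead of diagonalizing the pixel covariance matrix, kernel PCA diagonalizes the centred kernel matrix, so the components live in the space spanned by high-order correlations of the input pixels. The kernel parameters below are illustrative, not the article's settings.

```python
import numpy as np

def kernel_pca_poly(X, n_components=40, degree=2):
    """Kernel PCA on rows of X (flattened face images) with a polynomial kernel."""
    K = (X @ X.T + 1.0) ** degree                          # polynomial kernel matrix
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n     # centring in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    top = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, top] / np.sqrt(np.maximum(eigvals[top], 1e-12))
    return Kc @ alphas                                     # projections of the training samples
```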

520 citations

Journal ArticleDOI
TL;DR: A new satellite image contrast enhancement technique based on the discrete wavelet transform (DWT) and singular value decomposition is proposed; it reconstructs the enhanced image by applying the inverse DWT.
Abstract: In this letter, a new satellite image contrast enhancement technique based on the discrete wavelet transform (DWT) and singular value decomposition is proposed. The technique decomposes the input image into four frequency subbands using the DWT, estimates the singular value matrix of the low-low (LL) subband image, and then reconstructs the enhanced image by applying the inverse DWT. The technique is compared with conventional image equalization techniques such as standard general histogram equalization and local histogram equalization, as well as state-of-the-art techniques such as brightness preserving dynamic histogram equalization and singular value equalization. The experimental results show the superiority of the proposed method over conventional and state-of-the-art techniques.
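A hedged sketch of the described steps using PyWavelets and NumPy: decompose with the DWT, rescale the singular values of the LL subband against a synthetic reference (the correction-factor idea from the SVE works cited below), and reconstruct with the inverse DWT. The Gaussian reference model and the scaling rule are simplifying assumptions, not the letter's exact formulation.

```python
import numpy as np
import pywt

def dwt_svd_enhance(img, wavelet="db1"):
    """DWT + SVD contrast enhancement of a grayscale image in [0, 255]."""
    LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), wavelet)
    U, s, Vt = np.linalg.svd(LL, full_matrices=False)
    # Synthetic mid-gray reference with well-spread intensities (assumed Gaussian)
    ref = np.clip(np.random.normal(0.5, 0.1, LL.shape), 0, 1) * 255
    xi = np.linalg.svd(ref, compute_uv=False)[0] / s[0]    # correction factor
    LL_enh = U @ np.diag(xi * s) @ Vt
    return np.clip(pywt.idwt2((LL_enh, (LH, HL, HH)), wavelet), 0, 255)
```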

310 citations


"Multi scale feature extraction and ..." refers methods in this paper

  • ...In this paper the illumination problem is addressed using SVD [8] with wavelets [7] integrated into the KPCA to extract multi-scale features towards a secure face recognition system....

    [...]

  • ...For fast and better face recognition from the face database, the training and testing images are enhanced by the SVD method [8]....

    [...]

Journal ArticleDOI
TL;DR: It is shown that the SVs contain little useful information for face recognition and that the most important information is encoded in the two orthogonal matrices of the SVD; a new method based on this finding is proposed.
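One way to act on that finding, sketched here as an assumption rather than the paper's actual method: build the face descriptor from the leading columns of U and rows of V^T and discard the singular values entirely.

```python
import numpy as np

def orthogonal_factor_features(img, k=20):
    """Descriptor built from the SVD's orthogonal factors only; the singular values are dropped."""
    U, _, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    return np.concatenate([U[:, :k].ravel(), Vt[:k, :].ravel()])
```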

135 citations


"Multi scale feature extraction and ..." refers background in this paper

  • ...A contains the intensity information [12] of a given image....

    [...]

Proceedings ArticleDOI
16 Dec 2008
TL;DR: A novel image equalization technique based on singular value decomposition (SVD) is proposed and compared with the standard grayscale histogram equalization (GHE) method; the visual and quantitative results suggest that the proposed SVE method clearly outperforms GHE.
Abstract: In this paper, a novel image equalization technique which is based on singular value decomposition (SVD) is proposed. The singular value matrix represents the intensity information of the given image and any change on the singular values change the intensity of the input image. The proposed technique converts the image into the SVD domain and after normalizing the singular value matrix it reconstructs the image in the spatial domain by using the updated singular value matrix. The technique is called the singular value equalization (SVE) and compared with the standard grayscale histogram equalization (GHE) method. The visual and quantitative results suggest that the proposed SVE method clearly outperforms the GHE method.
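A hedged sketch of SVE as the abstract describes it: move to the SVD domain, rescale the singular value matrix against a reference, and reconstruct in the spatial domain. The synthetic Gaussian reference and the max-singular-value ratio are common choices but are assumptions here, not guaranteed to match the paper's normalization.

```python
import numpy as np

def singular_value_equalization(img):
    """SVE: rescale the image's singular values so its intensity matches a synthetic
    mid-gray reference image, then reconstruct in the spatial domain."""
    A = img.astype(float)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    ref = np.clip(np.random.normal(0.5, 0.1, A.shape), 0, 1) * 255
    xi = np.linalg.svd(ref, compute_uv=False)[0] / s[0]   # ratio of the largest singular values
    return np.clip(U @ np.diag(xi * s) @ Vt, 0, 255).astype(np.uint8)

# GHE baseline the paper compares against, for reference:
#   from skimage import exposure
#   ghe = (exposure.equalize_hist(img) * 255).astype(np.uint8)
```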

124 citations


"Multi scale feature extraction and ..." refers methods in this paper

  • ...In [10-11] a method is proposed to deal with illumination using SVD....

    [...]

  • ...The singular-value-based image equalization (SVE) technique [10], [11] is based on equalizing the singular value matrix obtained by singular value decomposition (SVD)....

    [...]

  • ...where M is the number of input data (the centering method in F can be found in [10] and [13])....

    [...]