Book Chapter

Face recognition based on kernelized extreme learning machine

TL;DR: Simulation results show that the kernelized ELM outperforms LS-SVM in both recognition accuracy and training speed.
Abstract: The original extreme learning machine (ELM), based on least-squares solutions, is an efficient learning algorithm for "generalized" single-hidden-layer feedforward networks (SLFNs), whose hidden nodes need not be neuron-like. A recent development [1] shows that ELM can be implemented with kernels. Kernelized ELM can be seen as a variant of the conventional LS-SVM without the output bias b. In this paper, the performance of LS-SVM and kernelized ELM is compared on a benchmark face recognition dataset. Simulation results show that the kernelized ELM outperforms LS-SVM in both recognition accuracy and training speed.
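The kernelized ELM the abstract describes reduces training to a single ridge-regularized linear system over the kernel matrix, with no output bias b. A minimal NumPy sketch follows; the function names, the RBF kernel choice, and the default C and gamma values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_train(X, y, C=100.0, gamma=1.0):
    """Closed-form kernelized ELM: solve (I/C + Omega) alpha = T."""
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float)      # one-hot targets
    omega = rbf_kernel(X, X, gamma)                         # Omega_ij = K(x_i, x_j)
    alpha = np.linalg.solve(np.eye(len(X)) / C + omega, T)  # note: no bias b, unlike LS-SVM
    return X, alpha, classes, gamma

def kelm_predict(model, Xnew):
    """Class of the largest output among the one-hot columns."""
    X, alpha, classes, gamma = model
    scores = rbf_kernel(Xnew, X, gamma) @ alpha
    return classes[scores.argmax(axis=1)]
```

Training cost is one N x N solve, which is why the paper can compare training speed directly against LS-SVM on the same kernel.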
Citations
Journal Article
TL;DR: A weighted ELM that can deal with data with an imbalanced class distribution while maintaining the good performance of unweighted ELM on well-balanced data, and that generalizes to cost-sensitive learning.

627 citations


Cites background from "Face recognition based on kernelize..."

  • ...Unweighted extreme learning machine (ELM) with non-kernel or kernel hidden nodes has been demonstrated on various datasets, including face recognition [17,18], protein series [19], etc....


Journal Article
TL;DR: Zhang et al. used an ELM regression model to map polymetallic prospectivity of the Lalingzaohuo district in Qinghai Province, China, using a quad-core 1.8 GHz laptop computer.

95 citations

Journal Article
TL;DR: Experimental results show that the proposed EKM-EELM algorithm achieves higher classification rates than several traditional image classification methods.

74 citations


Cites background from "Face recognition based on kernelize..."

  • ...As a particular open topic in image processing, image classification by learning algorithms has attracted a great deal of research due to its promising applications (see [7,8,17,19,22,23,28,29,32,31])....


  • ...[22], Zong and Huang [31,32] have applied ELM to face recognition to improve the accuracy rate....


Proceedings Article
01 Jan 2017
TL;DR: This work presents an efficient contour-aware segmentation approach based on a fully convolutional network, and for classification uses an extreme learning machine on CNN features extracted from each segmented cell.
Abstract: Recent advancement in genomics technologies has opened a new realm for early detection of diseases, showing potential to overcome the drawbacks of manual detection technologies. In this work, we present an efficient contour-aware segmentation approach based on a fully convolutional network, while for classification we use an extreme learning machine on CNN features extracted from each segmented cell. We evaluate segmentation and classification performance on a publicly available dataset. Experiments were conducted on 64,000 blood cells, with the dataset divided into 80% for training and 20% for testing. Segmentation results are compared with manual segmentation; the proposed approach achieved 98.12% and 98.16% for RBC and WBC respectively, whereas classification accuracy on the publicly available dataset is 94.71% for RBC and its abnormalities detection and 98.68% for WBC.

68 citations


Cites methods from "Face recognition based on kernelize..."

  • ...Several authors have used ELM as a classifier for different image processing tasks and reported promising results [23] [12] [27] [11]....


Journal Article
TL;DR: This paper proposes and evaluates a fast classifier, the extreme learning machine (ELM), to classify individual and combined finger movements in amputees and non-amputees, and shows that the most accurate ELM variant is the radial basis function ELM (RBF-ELM).

53 citations

References
Journal Article
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
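The soft-margin idea described here, maximizing the separating margin while penalizing training errors on non-separable data, is conventionally written in the same notation the citing paper uses for its quoted constraint:

```latex
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \;
\frac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{i=1}^{N} \xi_i
\quad \text{s.t.} \quad
t_i\bigl(\mathbf{w}\cdot\phi(\mathbf{x}_i) + b\bigr) \ge 1 - \xi_i,
\qquad \xi_i \ge 0 .
```

Support vectors are the training points for which the margin constraint is active, i.e. those with $t_i(\mathbf{w}\cdot\phi(\mathbf{x}_i)+b) = 1$.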

37,861 citations


"Face recognition based on kernelize..." refers background in this paper

  • ...Vectors xi for which ti(w · φ(xi) + b) = 1 are termed support vectors [12]....


  • ...The essence of SVM [12] is to maximize the separating margin of the two classes in the feature space and to minimize the training error, which is equivalent to:...


  • ...Support Vector Machine [12] and its variants [13,14] have demonstrated good performance on classification tasks....


Book
16 Jul 1998
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Abstract: From the Publisher: This book represents the most comprehensive treatment available of neural networks from an engineering perspective. Thorough, well-organized, and completely up to date, it examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks. Written in a concise and fluid manner, by a foremost engineering textbook author, to make the material more accessible, this book is ideal for professional engineers and graduate students entering this exciting field. Computer experiments, problems, worked examples, a bibliography, photographs, and illustrations reinforce key concepts.

29,130 citations

Journal Article
01 Jan 1988, Nature
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; the hidden units thereby come to represent important features of the task domain.
Abstract: We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.
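The procedure the abstract describes, repeatedly adjusting weights to reduce the squared difference between actual and desired outputs, can be illustrated on the XOR task, a classic example where hidden units must learn useful internal features. The network size, seed, and learning rate below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Tiny 2-layer sigmoid network trained by back-propagation on XOR.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])          # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 0.5
for _ in range(2000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    losses.append(0.5 * ((Y - T) ** 2).sum())
    # Backward pass: propagate the error derivative through each layer
    dY = (Y - T) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    # Repeatedly adjust the weights to reduce the squared error
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
```

After training, the hidden activations H play the role of the learned internal features the abstract refers to.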

23,814 citations

Journal Article
TL;DR: A near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals, and that is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
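The project-and-compare pipeline this abstract describes (project a face onto the eigenfaces, then compare the resulting weight vectors) can be sketched in a few lines of NumPy. The function names and the plain nearest-neighbor matching rule are illustrative assumptions for this sketch:

```python
import numpy as np

def eigenfaces_fit(faces, k):
    """faces: (N, D) flattened face images; keep the top-k eigenfaces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of Vt are the eigenvectors (eigenfaces) of the data covariance matrix
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:k]
    weights = centered @ components.T   # each face summarized by k weights
    return mean, components, weights

def eigenfaces_match(mean, components, weights, labels, probe):
    """Recognize a probe face by nearest neighbor in eigenface weight space."""
    w = (probe - mean) @ components.T
    dists = np.linalg.norm(weights - w, axis=1)
    return labels[dists.argmin()]
```

Recognition then only compares k-dimensional weight vectors rather than full images, which is what makes the near-real-time performance the abstract mentions feasible.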

14,562 citations


"Face recognition based on kernelize..." refers methods in this paper

  • ...In the literature of face recognition, classifiers such as Nearest Neighbor [3,6,8] and Support Vector Machine (SVM) [9,10,11] have been mainly used....


  • ...In PCA, the eigendecomposition of the covariance matrix of the data is computed....


  • ...After PCA projection, the dimension of the image is reduced to 45, 75 or 105 when 3, 5 or 7 images per person are used for training, respectively....


  • ...Among various dimensionality reduction methods [3,4,5], two classic approaches are principal component analysis [6] and linear discriminant analysis [7], which have been widely used in pattern recognition tasks....


  • ...Principal component analysis (PCA) [6] and linear discriminant analysis (LDA) [7] are two representative approaches....


Journal Article
TL;DR: A face recognition algorithm which is insensitive to large variation in lighting direction and facial expression is developed, based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variations in lighting and facial expressions.
Abstract: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
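Fisher's linear discriminant, on which the Fisherface projection is built, chooses the direction that maximizes between-class scatter relative to within-class scatter:

```latex
J(\mathbf{w}) = \frac{\mathbf{w}^{\top} S_B\, \mathbf{w}}{\mathbf{w}^{\top} S_W\, \mathbf{w}}
```

where $S_B$ and $S_W$ are the between-class and within-class scatter matrices; this criterion is what yields the well-separated low-dimensional classes the abstract describes, in contrast to PCA, which maximizes total variance regardless of class labels.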

11,674 citations