Proceedings Article

Locality Preserving Projections

09 Dec 2003 · Vol. 16, pp. 153–160
TL;DR: Locality Preserving Projections are linear projective maps obtained by solving a variational problem that optimally preserves the neighborhood structure of the data set; they are computed by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold.
Abstract: Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high dimensional data sets.
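The abstract's recipe is concrete enough to sketch: build an adjacency graph over the data, weight its edges, and solve a generalized eigenvalue problem whose bottom eigenvectors give the linear map. Below is a minimal Python sketch of that recipe, not the authors' code; the k-nearest-neighbor graph, heat-kernel weights, and the small regularizer are illustrative assumptions.

```python
# Minimal LPP sketch: k-NN affinity graph with heat-kernel weights, then the
# generalized eigenproblem  X^T L X a = lambda X^T D X a  (rows of X are
# samples here). Eigenvectors with the smallest eigenvalues form the map.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, k=5, t=1.0):
    """X: (n_samples, n_features). Returns a (n_features, n_components) map."""
    n = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]       # k nearest neighbors
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = np.exp(-D2[i, idx[i]] / t)  # heat-kernel weights
    W = np.maximum(W, W.T)                         # symmetrize the graph
    D = np.diag(W.sum(axis=1))                     # degree matrix
    L = D - W                                      # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])    # regularized for stability
    _, vecs = eigh(A, B)                           # ascending eigenvalues
    return vecs[:, :n_components]

# Because the map is linear, it is defined everywhere in the ambient space:
# Y_new = X_new @ lpp(X_train)
```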


Citations
Journal ArticleDOI
TL;DR: Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
Abstract: We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA) which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
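As a hedged illustration of the pipeline this abstract describes (the PCA dimension and the classifier are not stated there, so they are assumptions), a Laplacianface-style recognizer can be read as PCA, then LPP in the PCA subspace, then nearest-neighbor matching; lpp() is the hypothetical helper from the sketch under the LPP abstract above.

```python
# Sketch of a Laplacianface-style pipeline: PCA first (so the LPP matrices
# are nonsingular), LPP in the PCA subspace, then 1-NN in the embedding.
import numpy as np

def laplacianfaces(X_train, y_train, X_test, n_pca=100, n_lpp=30):
    mean = X_train.mean(axis=0)
    Xc = X_train - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    P = Vt[:n_pca].T                                   # (n_features, n_pca)
    A = lpp(Xc @ P, n_components=n_lpp)                # LPP in PCA subspace
    Z_train = Xc @ P @ A
    Z_test = (X_test - mean) @ P @ A
    d = ((Z_test[:, None, :] - Z_train[None, :, :]) ** 2).sum(-1)
    return y_train[d.argmin(axis=1)]                   # nearest training face
```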

3,314 citations


Cites background or methods from "Locality Preserving Projections"

  • ...Index Terms—Face recognition, principal component analysis, linear discriminant analysis, locality preserving projections, face manifold, subspace learning....


  • ...The objective of a dimensionality-reducing mapping is to unfold the sheet and to make its low-dimensional structure explicit....


  • ...Linear combinations are particularly attractive because they are simple to compute and analytically tractable....


  • ...We call SW the within-class scatter matrix and SB the between-class scatter matrix....


01 Jan 2009
TL;DR: The results of the experiments reveal that nonlinear techniques perform well on selected artificial tasks, but that this strong performance does not necessarily extend to real-world tasks.
Abstract: In recent years, a variety of nonlinear dimensionality reduction techniques have been proposed that aim to address the limitations of traditional techniques such as PCA and classical scaling. The paper presents a review and systematic comparison of these techniques. The performances of the nonlinear techniques are investigated on artificial and natural tasks. The results of the experiments reveal that nonlinear techniques perform well on selected artificial tasks, but that this strong performance does not necessarily extend to real-world tasks. The paper explains these results by identifying weaknesses of current nonlinear techniques, and suggests how the performance of nonlinear dimensionality reduction techniques may be improved.

2,141 citations


Cites methods from "Locality Preserving Projections"

  • ...In addition, variants of Laplacian Eigenmaps may be applied to supervised or semi-supervised learning problems [33, 11]....


  • ...The Laplacian Eigenmaps algorithm first constructs a neighborhood graph G in which every datapoint xi is connected to its k nearest neighbors....


  • ...A linear variant of Laplacian Eigenmaps is presented in [59]....


  • ...Laplacian Eigenmaps and Hessian LLE are also intimately related: they only differ in the type of differential operator they define on the data manifold....


  • ...Third, the spectral techniques Kernel PCA, Isomap, LLE, and Laplacian Eigenmaps can all be viewed upon as special cases of the more general problem of learning eigenfunctions [14, 57]....


Journal ArticleDOI
TL;DR: In GNMF, an affinity graph is constructed to encode the geometrical information and a matrix factorization that respects the graph structure is sought; the empirical study shows encouraging results for the proposed algorithm in comparison to state-of-the-art algorithms on real-world problems.
Abstract: Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation, which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.
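The abstract does not spell out the objective or the updates; a standard GNMF formulation (assumed here) minimizes ||X - U V^T||_F^2 + lam * Tr(V^T L V) over nonnegative factors, where L = D - W is the Laplacian of the affinity graph, using multiplicative updates. A sketch under that assumption:

```python
# GNMF sketch under the assumed objective ||X - U V^T||_F^2 + lam Tr(V^T L V).
# The affinity graph W and the weight lam are inputs, not taken from the text.
import numpy as np

def gnmf(X, W, rank=10, lam=1.0, n_iter=200, eps=1e-9):
    """X: (m, n) nonnegative data; W: (n, n) nonnegative affinity matrix."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, rank))
    V = rng.random((n, rank))
    D = np.diag(W.sum(axis=1))                 # degree matrix of the graph
    for _ in range(n_iter):                    # multiplicative updates
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V                                # X is approximated by U @ V.T
```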

1,870 citations


Cites methods from "Locality Preserving Projections"

  • ...This assumption is usually referred to as local invariance assumption [1], [19], [7], which plays an essential role in the development of various kinds of algorithms including dimensionality reduction algorithms [1] and semi-supervised learning algorithms [2], [46], [45]....


Proceedings Article
05 Dec 2005
TL;DR: This paper proposes a "filter" method for feature selection which is independent of any learning algorithm, based on the observation that, in many real world classification problems, data from the same class are often close to each other.
Abstract: In supervised learning scenarios, feature selection has been studied widely in the literature. Selecting features in unsupervised learning scenarios is a much harder problem, due to the absence of class labels that would guide the search for relevant information. Almost all previous unsupervised feature selection methods are "wrapper" techniques that require a learning algorithm to evaluate the candidate feature subsets. In this paper, we propose a "filter" method for feature selection which is independent of any learning algorithm. Our method can be performed in either supervised or unsupervised fashion. The proposed method is based on the observation that, in many real-world classification problems, data from the same class are often close to each other. The importance of a feature is evaluated by its power of locality preserving, or Laplacian Score. We compare our method with data variance (unsupervised) and Fisher score (supervised) on two data sets. Experimental results demonstrate the effectiveness and efficiency of our algorithm.
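The score itself is not given in this excerpt; the usual definition (assumed below) builds a nearest-neighbor graph on the data, centers each feature with respect to the degree matrix, and takes the ratio of the Laplacian to the degree quadratic form, with smaller scores meaning stronger locality preservation.

```python
# Laplacian Score sketch: for feature f, score = (f~^T L f~) / (f~^T D f~),
# where f~ is f centered with respect to the degree matrix D. The graph W
# is assumed to be built beforehand (e.g., k-NN with heat-kernel weights).
import numpy as np

def laplacian_score(F, W):
    """F: (n_samples, n_features); W: (n, n) affinity. One score per feature."""
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W
    scores = np.empty(F.shape[1])
    for r in range(F.shape[1]):
        f = F[:, r]
        f_t = f - (f @ d) / d.sum()            # D-weighted centering
        scores[r] = (f_t @ L @ f_t) / max(f_t @ D @ f_t, 1e-12)
    return scores                              # rank features: smallest first
```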

1,817 citations


Cites methods from "Locality Preserving Projections"

  • ...Laplacian Score (LS) is fundamentally based on Laplacian Eigenmaps [1] and Locality Preserving Projection [3]....


Proceedings ArticleDOI
17 Oct 2005
TL;DR: This paper proposes a novel subspace learning algorithm called neighborhood preserving embedding (NPE), which aims at preserving the local neighborhood structure on the data manifold and is less sensitive to outliers than principal component analysis (PCA).
Abstract: Recently there has been a lot of interest in geometrically motivated approaches to data analysis in high dimensional spaces. We consider the case where data is drawn from sampling a probability distribution that has support on or near a submanifold of Euclidean space. In this paper, we propose a novel subspace learning algorithm called neighborhood preserving embedding (NPE). Different from principal component analysis (PCA) which aims at preserving the global Euclidean structure, NPE aims at preserving the local neighborhood structure on the data manifold. Therefore, NPE is less sensitive to outliers than PCA. Also, compared to recently proposed manifold learning algorithms such as Isomap and locally linear embedding, NPE is defined everywhere, rather than only on the training data points. Furthermore, NPE may be conducted in the original space or in the reproducing kernel Hilbert space into which data points are mapped. This gives rise to kernel NPE. Several experiments on face databases demonstrate the effectiveness of our algorithm.
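A sketch of NPE as this abstract describes it (neighborhood size and regularizer are illustrative assumptions): LLE-style reconstruction weights define M = (I - W)^T (I - W), and a generalized eigenproblem yields a linear map, which is why NPE is defined everywhere rather than only on training points.

```python
# NPE sketch: local reconstruction weights as in LLE, then the generalized
# eigenproblem  X^T M X a = lambda X^T X a  (rows of X are samples).
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def npe(X, n_components=2, k=5, reg=1e-3):
    n = X.shape[0]
    nbrs = np.argsort(cdist(X, X), axis=1)[:, 1:k + 1]
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                          # local patch
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)             # regularized Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()                    # reconstruction weights
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    A = X.T @ M @ X
    B = X.T @ X + 1e-9 * np.eye(X.shape[1])
    _, vecs = eigh(A, B)                               # ascending eigenvalues
    return vecs[:, :n_components]                      # linear map, as in LPP
```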

1,555 citations


Cites background or methods from "Locality Preserving Projections"

  • ...Connections to Locality Preserving Projection: Locality Preserving Projection (LPP) is a recently proposed linear dimensionality reduction algorithm [5]....


  • ...Thus, the algorithms preserving local structure, such as NPE and LPP, outperform the algorithms preserving global structure such as PCA and LDA....


  • ...LPP is a linear approximation to Laplacian Eigenmaps [2]....


  • ...NPE shares some similar properties with the Locality Preserving Projection (LPP) algorithm [5]....


  • ...As we described previously, both NPE and LPP [5] try to linearly approximate the eigenfunctions of the Laplace Beltrami operator L on the manifold....


References
Book
16 Jul 1998
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Abstract: From the Publisher: This book represents the most comprehensive treatment available of neural networks from an engineering perspective. Thorough, well-organized, and completely up to date, it examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks. Written in a concise and fluid manner, by a foremost engineering textbook author, to make the material more accessible, this book is ideal for professional engineers and graduate students entering this exciting field. Computer experiments, problems, worked examples, a bibliography, photographs, and illustrations reinforce key concepts.

29,130 citations


"Locality Preserving Projections" refers background in this paper

  • ...Neural network based approaches [32] set up a nonlinear optimization problem whose solution is typically obtained by gradient descent, which is only guaranteed to produce a local optimum....


Journal ArticleDOI
22 Dec 2000 · Science
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and learns the global structure of nonlinear manifolds.
Abstract: Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
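As a rough sketch of the two steps the abstract describes (solve for local reconstruction weights, then find coordinates that preserve them); the neighborhood size and regularizer are illustrative choices:

```python
# LLE sketch: weights that best reconstruct each point from its k neighbors,
# then the bottom eigenvectors of M = (I - W)^T (I - W), skipping the
# constant eigenvector, give the embedding.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lle(X, n_components=2, k=8, reg=1e-3):
    n = X.shape[0]
    nbrs = np.argsort(cdist(X, X), axis=1)[:, 1:k + 1]
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)      # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()             # weights sum to one
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    _, vecs = eigh(M)                           # ascending eigenvalues
    return vecs[:, 1:n_components + 1]          # skip the constant eigenvector
```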

15,106 citations


"Locality Preserving Projections" refers background or methods in this paper

  • ...Our approach also has a major advantage over recent nonparametric techniques for global nonlinear dimensionality reduction such as [2][5][6]; its eigenproblem has a size which scales as the dimensionality of the data points rather than the number of data points....


  • ...Recall that nonlinear dimensionality reduction techniques like ISOMAP[6], LLE[5], Laplacian eigenmaps[2] are defined only on the training data points and it is unclear how to evaluate the map for new test points....


  • ...The face image data set used here is the same as that used in [5]....


Journal ArticleDOI
TL;DR: A near-real-time computer system that can locate and track a subject's head and then recognize the person by comparing characteristics of the face to those of known individuals; the system is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
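A minimal sketch of the scheme the abstract outlines: the principal components of the training faces are the "eigenfaces", each face is summarized by its projection weights, and recognition compares weight vectors (the component count and nearest-neighbor matcher below are illustrative assumptions).

```python
# Eigenface sketch: PCA on vectorized face images, then matching by
# comparing projection weights against the stored training weights.
import numpy as np

def eigenfaces(X_train, y_train, X_test, n_components=50):
    mean = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    E = Vt[:n_components]                        # eigenfaces (principal axes)
    W_train = (X_train - mean) @ E.T             # weight vector per face
    W_test = (X_test - mean) @ E.T
    d = ((W_test[:, None, :] - W_train[None, :, :]) ** 2).sum(-1)
    return y_train[d.argmin(axis=1)]             # label of the closest face
```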

14,562 citations


"Locality Preserving Projections" refers methods in this paper

  • ...PCA and LDA are the two most widely used subspace learning techniques for face recognition [1][7]....


Journal ArticleDOI
22 Dec 2000 · Science
TL;DR: An approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set and efficiently computes a globally optimal solution, and is guaranteed to converge asymptotically to the true structure.
Abstract: Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs (30,000 auditory nerve fibers or 10^6 optic nerve fibers) a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.
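A compact sketch of the approach the abstract outlines: approximate geodesic distances by shortest paths through a k-nearest-neighbor graph, then recover low-dimensional coordinates with classical MDS (k is an illustrative choice, and the neighborhood graph is assumed to be connected).

```python
# Isomap sketch: k-NN graph -> all-pairs shortest paths (geodesic estimates)
# -> classical MDS on the resulting distance matrix.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def isomap(X, n_components=2, k=6):
    n = X.shape[0]
    D = cdist(X, X)
    G = np.full((n, n), np.inf)                  # inf marks "no edge"
    idx = np.argsort(D, axis=1)[:, 1:k + 1]
    for i in range(n):
        G[i, idx[i]] = D[i, idx[i]]              # keep only k-NN edges
    DG = shortest_path(G, method="D", directed=False)  # geodesic distances
    # classical MDS: double-center the squared distances, then eigendecompose
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (DG ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))
```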

13,652 citations


"Locality Preserving Projections" refers background or methods in this paper

  • ...Our approach also has a major advantage over recent nonparametric techniques for global nonlinear dimensionality reduction such as [2][5][6]; its eigenproblem has a size which scales as the dimensionality of the data points rather than the number of data points....


  • ...Recall that nonlinear dimensionality reduction techniques like ISOMAP[6], LLE[5], Laplacian eigenmaps[2] are defined only on the training data points and it is unclear how to evaluate the map for new test points....


01 Jan 1998

12,940 citations


"Locality Preserving Projections" refers methods in this paper

  • ...An experiment was conducted with the Multiple Features Database [3]....
