Book Chapter DOI

Multiple Projections Learning for Dimensional Reduction

TLDR
In this article, the authors propose relaxed sparse locality preserving projection (RSLPP), which introduces two different projection matrices to better accomplish the two tasks of locality preserving and dimension reduction.
Abstract
Locality Preserving Projection (LPP) is a dimensionality reduction method that has been widely used in various fields. Traditional LPP uses a single projection matrix both to reduce the dimension and to preserve the locality structure of the data, so a single matrix may not handle these two tasks well at the same time. Therefore, in this paper we propose relaxed sparse locality preserving projection (RSLPP), which introduces two different projection matrices to better accomplish the two tasks. The additional projection matrix gives the original projection matrix more freedom to select the appropriate features for preserving the local structure of the data. Experimental results on two data sets demonstrate the effectiveness of the method.
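For context, the single-matrix LPP baseline that RSLPP extends can be sketched in a few lines of NumPy: build a nearest-neighbor graph with heat-kernel weights, form the graph Laplacian, and solve the resulting generalized eigenproblem for the projection matrix. This is a minimal illustrative sketch of classical LPP (function name, parameters, and the small regularization term are assumptions for numerical stability, not from the chapter):

```python
import numpy as np

def lpp(X, n_components=2, n_neighbors=5, t=1.0, reg=1e-6):
    """Classical single-matrix LPP sketch.
    X: (n_samples, n_features) data matrix.
    Returns a (n_features, n_components) projection matrix."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # k-nearest-neighbor adjacency with heat-kernel weights
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(sq[i])[1:n_neighbors + 1]  # skip self
        W[i, idx] = np.exp(-sq[i, idx] / t)
    W = np.maximum(W, W.T)            # symmetrize the graph
    D = np.diag(W.sum(axis=1))        # degree matrix
    L = D - W                         # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + reg * np.eye(X.shape[1])  # regularized for invertibility
    # Eigenvectors with the smallest eigenvalues of the pencil (A, B)
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)
    return vecs[:, order[:n_components]].real
```

Usage: `Y = X @ lpp(X, n_components=2)` embeds the samples in two dimensions while approximately preserving neighborhood structure. RSLPP's point is that this one matrix must simultaneously serve feature selection and locality preservation, which the proposed second projection matrix relaxes.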


References
Proceedings Article

Locality Preserving Projections

TL;DR: LPP consists of linear projective maps obtained by solving a variational problem that optimally preserves the neighborhood structure of the data set; the maps are the optimal linear approximations to the eigenfunctions of the Laplace–Beltrami operator on the manifold.
Journal Article DOI

A collaborative framework for 3D alignment and classification of heterogeneous subvolumes in cryo-electron tomography

TL;DR: Using the nuclear norm-based, collaborative alignment method presented here, the genetic identity of each virus particle in the mixture can be assigned based solely on structural information derived from single envelope glycoproteins displayed on the virus surface.
Proceedings Article DOI

Graph embedding: a general framework for dimensionality reduction

TL;DR: A new supervised algorithm, Marginal Fisher Analysis (MFA), is proposed, for dimensionality reduction by designing two graphs that characterize the intra-class compactness and inter-class separability, respectively.
Journal Article DOI

Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation

TL;DR: This paper addresses unsupervised domain transfer learning, in which no labels are available in the target domain, via the inexact augmented Lagrange multiplier method; by modeling the noise with a sparse matrix, it can avoid potentially negative transfer and is thus more robust to different types of noise.
Journal Article DOI

Outlier-Robust PCA: The High-Dimensional Case

TL;DR: This work proposes a high-dimensional robust principal component analysis algorithm that is efficient, robust to contaminated points, and easily kernelizable, and achieves maximal robustness.