About: Subspace topology is a research topic. Over its lifetime, 17940 publications have been published within this topic, receiving 323816 citations.
TL;DR: Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
Abstract: We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA), which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
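The core of LPP is a generalized eigenproblem built from a nearest-neighbour graph over the training samples. The following is a minimal sketch of that step on synthetic data (the function name `lpp`, the heat-kernel bandwidth `t`, and the ridge term are illustrative choices, not the paper's exact implementation):

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0):
    """Minimal locality preserving projections sketch.
    X: (n_samples, n_features). Returns a (n_features, n_components)
    projection matrix whose columns solve the generalized eigenproblem
    X^T L X a = lambda X^T D X a for the smallest eigenvalues."""
    n = X.shape[0]
    # Pairwise squared distances
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # k-nearest-neighbour graph with heat-kernel weights
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(sq[i])[1:k + 1]          # skip the point itself
        S[i, idx] = np.exp(-sq[i, idx] / t)
    S = np.maximum(S, S.T)                        # symmetrise the graph
    D = np.diag(S.sum(axis=1))
    L = D - S                                     # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])   # small ridge for stability
    vals, vecs = eigh(A, B)                       # ascending eigenvalues
    return vecs[:, :n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))   # synthetic stand-in for vectorised face images
W = lpp(X, n_components=2)
Y = X @ W                      # low-dimensional embedding of the samples
```

In the Laplacianface setting, each row of `X` would be a vectorised face image (typically after a PCA preprocessing step to make `B` well conditioned).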
01 Dec 1993 - Proteins
TL;DR: Analysis of extended molecular dynamics simulations of lysozyme in vacuo and in aqueous solution reveals that it is possible to separate the configurational space into two subspaces: an "essential" subspace containing only a few degrees of freedom, and the remaining space, in which the motion has a narrow Gaussian distribution and which can be considered as "physically constrained."
Abstract: Analysis of extended molecular dynamics (MD) simulations of lysozyme in vacuo and in aqueous solution reveals that it is possible to separate the configurational space into two subspaces: (1) an "essential" subspace containing only a few degrees of freedom in which anharmonic motion occurs that comprises most of the positional fluctuations; and (2) the remaining space in which the motion has a narrow Gaussian distribution and which can be considered as "physically constrained." If overall translation and rotation are eliminated, the two spaces can be constructed by a simple linear transformation in Cartesian coordinate space, which remains valid over several hundred picoseconds. The transformation follows from the covariance matrix of the positional deviations. The essential degrees of freedom seem to describe motions which are relevant for the function of the protein, while the physically constrained subspace merely describes irrelevant local fluctuations. The near-constraint behavior of the latter subspace allows the separation of equations of motion and promises the possibility of investigating independently the essential space and performing dynamic simulations only in this reduced space.
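The linear transformation described above is obtained by diagonalising the covariance matrix of positional deviations; the eigenvectors with the largest eigenvalues span the essential subspace. A minimal sketch on a synthetic trajectory (not real MD data; the two planted "modes" are an assumption made so the separation is visible):

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_coords = 500, 12           # e.g. 4 atoms x 3 Cartesian coordinates

# Synthetic trajectory: two large-amplitude collective modes plus
# small isotropic Gaussian noise standing in for the constrained motions.
modes = rng.normal(size=(2, n_coords))
traj = rng.normal(size=(n_frames, 2)) @ (5.0 * modes)
traj += 0.1 * rng.normal(size=(n_frames, n_coords))

dev = traj - traj.mean(axis=0)         # positional deviations from the mean
C = dev.T @ dev / n_frames             # covariance matrix of the deviations
vals, vecs = np.linalg.eigh(C)         # eigenvalues in ascending order
vals, vecs = vals[::-1], vecs[:, ::-1] # descending: essential modes first
frac = vals[:2].sum() / vals.sum()     # variance captured by the top 2 modes
```

In a real application, overall translation and rotation would first be removed by fitting each frame to a reference structure, as the abstract notes.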
TL;DR: It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: when the data are clean, LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, it is proved that under certain conditions LRR can exactly recover the row space of the original data.
Abstract: In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest-rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: when the data are clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outliers as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these results further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.
22 Oct 2011
TL;DR: This book focuses on the theory, implementation and applications of subspace identification algorithms for linear time-invariant finite-dimensional dynamical systems, which allow for a fast, straightforward and accurate determination of linear multivariable models from measured input-output data.
Abstract: Subspace Identification for Linear Systems focuses on the theory, implementation and applications of subspace identification algorithms for linear time-invariant finite-dimensional dynamical systems. These algorithms allow for a fast, straightforward and accurate determination of linear multivariable models from measured input-output data. The theory of subspace identification algorithms is presented in detail. Several chapters are devoted to deterministic, stochastic and combined deterministic-stochastic subspace identification algorithms. For each case, the geometric properties are stated in a main 'subspace' theorem. Relations to existing algorithms and literature are explored, as are the interconnections between different subspace algorithms. The subspace identification theory is linked to the theory of frequency-weighted model reduction, which leads to new interpretations and insights. The implementation of subspace identification algorithms is discussed in terms of the robust and computationally efficient RQ and singular value decompositions, which are well-established algorithms from numerical linear algebra. The algorithms are implemented in combination with a whole set of classical identification algorithms, processing and validation tools in Xmath's ISID, a commercially available graphical user interface toolbox. The basic subspace algorithms in the book are also implemented in a set of MATLAB® files accompanying the book. An application of ISID to an industrial glass tube manufacturing process is presented in detail, illustrating the power and user-friendliness of the subspace identification algorithms and of their implementation in ISID. The identified model allows for an optimal control of the process, leading to a significant enhancement of the production quality. The applicability of subspace identification algorithms in industry is further illustrated with the application of the MATLAB® files to ten practical problems.
Since all necessary data and MATLAB® files are included, the reader can easily step through these applications, and thus gain more insight into the algorithms. Subspace Identification for Linear Systems is an important reference for all researchers in system theory, control theory, signal processing, automation, mechatronics, chemical, electrical, mechanical and aeronautical engineering.
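The geometric idea behind these algorithms can be illustrated in a few lines: for a noise-free autonomous LTI system, a block-Hankel matrix of outputs has rank equal to the system order, and its column space is the extended observability subspace, which an SVD exposes. This toy sketch (a hypothetical second-order system, not the book's full deterministic-stochastic algorithm) shows only that order-revealing step:

```python
import numpy as np

# Hypothetical stable 2nd-order autonomous system x_{t+1} = A x_t, y_t = C x_t
A = np.array([[0.9, 0.2], [-0.1, 0.8]])
C = np.array([[1.0, 0.0]])
x = np.array([1.0, -1.0])

y = []
for _ in range(40):            # simulate the noise-free output sequence
    y.append((C @ x)[0])
    x = A @ x
y = np.array(y)

# Block-Hankel matrix of outputs: H[i, j] = y[i + j]
rows, cols = 6, 30
H = np.array([[y[i + j] for j in range(cols)] for i in range(rows)])

s = np.linalg.svd(H, compute_uv=False)
order = int((s > 1e-8 * s[0]).sum())   # numerical rank = system order
```

The algorithms in the book build on this rank structure but work with measured input-output data and noise, using RQ factorisations and oblique projections before the SVD step.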
22 Sep 2009 - arXiv: Numerical Analysis
Abstract: Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis.
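The "sample a subspace, then compress" recipe described above can be sketched in a few lines: multiply the matrix by a random Gaussian test matrix, orthonormalise the result to get a basis for the sampled range, and take a small deterministic SVD of the compressed matrix. A minimal version without power iterations (the oversampling parameter `p = 5` is a common rule of thumb, not a prescription from the paper):

```python
import numpy as np

def randomized_svd(A, k, p=5, rng=None):
    """Basic randomized range finder + deterministic SVD.
    Returns a rank-k approximate SVD (U, s, Vt) of A."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    Omega = rng.normal(size=(n, k + p))   # random Gaussian test matrix
    Y = A @ Omega                         # sample the range of A
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for the sample
    B = Q.T @ A                           # compress A to the small subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                            # lift back to the original space
    return U[:, :k], s[:k], Vt[:k]

# Exactly rank-3 matrix: the randomized factorization recovers it to
# machine precision, since the sampled subspace captures the full range.
rng = np.random.default_rng(3)
A = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 80))
U, s, Vt = randomized_svd(A, k=3, rng=rng)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt)
```

For matrices with slowly decaying spectra, the survey's refinements (power iterations on `A @ A.T`, structured test matrices) tighten the error at modest extra cost.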