# Manifold

About: Manifold is a research topic. Over the lifetime, 18786 publications have been published within this topic receiving 362855 citations.

##### Papers


TL;DR: The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance.

Abstract: UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical, scalable algorithm that applies to real-world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.

5,390 citations

09 Dec 2003

TL;DR: Locality Preserving Projections (LPP) are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set, obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold.

Abstract: Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high dimensional data sets.

4,318 citations
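
The procedure the abstract outlines, building a neighborhood graph and solving a generalized eigenproblem for the projection directions, can be sketched in a few lines of numpy/scipy. This is a minimal illustration under simplifying assumptions (brute-force k-NN graph, heat-kernel weights, a small ridge term for numerical stability), not the authors' reference implementation:

```python
# Minimal Locality Preserving Projections (LPP) sketch.
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0, ridge=1e-6):
    n, d = X.shape
    # Pairwise squared distances (brute force).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # k nearest neighbors (excluding self) define the adjacency graph.
    idx = np.argsort(sq, axis=1)[:, 1:k + 1]
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = np.exp(-sq[rows, idx.ravel()] / t)
    W = np.maximum(W, W.T)            # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                         # graph Laplacian
    # Generalized eigenproblem  X^T L X a = lambda X^T D X a.
    A = X.T @ L @ X
    B = X.T @ D @ X + ridge * np.eye(d)
    vals, vecs = eigh(A, B)           # eigenvalues in ascending order
    # Smallest eigenvalues give the locality-preserving directions.
    P = vecs[:, :n_components]
    return X @ P, P

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
Y, P = lpp(X, n_components=2)
print(Y.shape, P.shape)  # (100, 2) (6, 2)
```

Because the result is a linear map `P`, any new point `x` can be embedded as `x @ P`, which is the property the abstract highlights: LPP is defined everywhere in the ambient space, not just on the training points.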

TL;DR: A survey article on the area of global analysis defined by differentiable dynamical systems or equivalently the action (differentiable) of a Lie group G on a manifold M is presented in this paper.

Abstract: This is a survey article on the area of global analysis defined by differentiable dynamical systems or, equivalently, the differentiable action of a Lie group G on a manifold M. An action is a homomorphism G→Diff(M) such that the induced map G×M→M is differentiable. Here Diff(M) is the group of all diffeomorphisms of M, and a diffeomorphism is a differentiable map with a differentiable inverse. Everything will be discussed here from the C∞ or Cr point of view. All manifolds, maps, etc. will be differentiable (Cr, 1 ≦ r ≦ ∞) unless stated otherwise.

2,954 citations

01 Jun 1971

TL;DR: Foundations of Differentiable Manifolds and Lie Groups, as discussed by the authors, provides a clear, detailed, and careful development of the basic facts of manifold theory and Lie groups, including differentiable manifolds, tensors, and differential forms.

Abstract: Foundations of Differentiable Manifolds and Lie Groups gives a clear, detailed, and careful development of the basic facts of manifold theory and Lie groups. It covers differentiable manifolds, tensors and differential forms, Lie groups and homogeneous spaces, and integration on manifolds; in addition, it provides a proof of the de Rham theorem via sheaf cohomology theory and develops the local theory of elliptic operators, culminating in a proof of the Hodge theorem. Those interested in any of the diverse areas of mathematics requiring the notion of a differentiable manifold will find this beginning graduate-level text extremely useful.

1,992 citations

19 Jul 2004

TL;DR: This paper proposes a novel method for solving single-image super-resolution problems: given a low-resolution image as input, it recovers the high-resolution counterpart using a set of training examples, inspired by recent manifold learning methods.

Abstract: In this paper, we propose a novel method for solving single-image super-resolution problems. Given a low-resolution image as input, we recover its high-resolution counterpart using a set of training examples. While this formulation resembles other learning-based methods for super-resolution, our method has been inspired by recent manifold learning methods, particularly locally linear embedding (LLE). Specifically, small image patches in the low- and high-resolution images form manifolds with similar local geometry in two distinct feature spaces. As in LLE, local geometry is characterized by how a feature vector corresponding to a patch can be reconstructed by its neighbors in the feature space. Besides using the training image pairs to estimate the high-resolution embedding, we also enforce local compatibility and smoothness constraints between patches in the target high-resolution image through overlapping. Experiments show that our method is very flexible and gives good empirical results.

1,951 citations
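
The core LLE-style step the abstract describes, computing neighbor-reconstruction weights for a low-resolution patch and transferring them to the corresponding high-resolution patches, can be sketched as follows. The function name, the regularization scheme, and the tiny synthetic "training set" are illustrative assumptions, not details from the paper:

```python
# LLE-style patch reconstruction for example-based super-resolution.
import numpy as np

def reconstruction_weights(x, neighbors, reg=1e-3):
    """Weights w (summing to 1) that best reconstruct patch x
    as a combination of its k nearest low-resolution neighbors."""
    diff = neighbors - x                  # k x d displacement vectors
    G = diff @ diff.T                     # local Gram matrix
    # Regularize in case G is singular (k > d or coplanar neighbors).
    G += reg * np.trace(G) * np.eye(len(neighbors))
    w = np.linalg.solve(G, np.ones(len(neighbors)))
    return w / w.sum()                    # enforce sum-to-one constraint

rng = np.random.default_rng(1)
low_train = rng.normal(size=(50, 9))      # 3x3 low-res training patches
high_train = rng.normal(size=(50, 81))    # matching 9x9 high-res patches

x = rng.normal(size=9)                    # input low-res patch
k = 5
idx = np.argsort(((low_train - x) ** 2).sum(axis=1))[:k]
w = reconstruction_weights(x, low_train[idx])
high_patch = w @ high_train[idx]          # same weights, high-res space
print(high_patch.shape, round(w.sum(), 6))  # (81,) 1.0
```

The key assumption, stated in the abstract, is that the low- and high-resolution patch manifolds share local geometry, so weights estimated in the low-resolution feature space remain valid when applied to the high-resolution neighbors.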