Open Access · Journal Article

Sparse PCA: Optimal rates and adaptive estimation

T. Tony Cai, +2 more
01 Dec 2013 · Vol. 41, Iss. 6, pp. 3074–3110
TLDR
In this paper, the authors consider both minimax and adaptive estimation of the principal subspace in the high-dimensional setting. They establish optimal rates of convergence for estimating the subspace that are sharp with respect to all the parameters, thereby providing a complete characterization of the difficulty of the estimation problem in terms of the convergence rate.
Abstract
Principal component analysis (PCA) is one of the most commonly used statistical procedures, with a wide range of applications. This paper considers both minimax and adaptive estimation of the principal subspace in the high-dimensional setting. Under mild technical conditions, we first establish the optimal rates of convergence for estimating the principal subspace, which are sharp with respect to all the parameters, thus providing a complete characterization of the difficulty of the estimation problem in terms of the convergence rate. The lower bound is obtained by calculating the local metric entropy and an application of Fano's lemma. The rate-optimal estimator is constructed using aggregation, which, however, might not be computationally feasible. We then introduce an adaptive procedure for estimating the principal subspace which is fully data-driven and can be computed efficiently. It is shown that the estimator attains the optimal rates of convergence simultaneously over a large collection of the parameter spaces. A key idea in our construction is a reduction scheme which reduces the sparse PCA problem to a high-dimensional multivariate regression problem. This method is potentially also useful for other related problems.
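For orientation, here is a minimal LaTeX sketch of the standard setup behind such rates, assuming the row-sparse spiked model and projection-distance loss commonly used in this literature (the paper's exact parameter space and normalizations may differ):

```latex
% X_1, ..., X_n i.i.d. N(0, \Sigma) in R^p; the leading r-dimensional
% principal subspace span(V), with orthonormal V in R^{p x r}, is
% row-sparse: at most k rows of V are nonzero.
%
% Loss for a subspace estimate \hat{V}: squared projection distance,
\[
  L(\hat V, V) \;=\; \bigl\| \hat V \hat V^{\top} - V V^{\top} \bigr\|_F^2
  \;=\; 2\,\bigl\| \sin \Theta(\hat V, V) \bigr\|_F^2 ,
\]
% where \Theta(\hat V, V) is the diagonal matrix of principal angles
% between the two subspaces.
```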



Citations
Journal Article

A useful variant of the Davis–Kahan theorem for statisticians

TL;DR: The authors present a variant of the Davis–Kahan theorem that relies only on a population eigenvalue separation condition, making it more natural and convenient for direct application in statistical contexts, and in many cases improving on the usual bound.
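For reference, a sketch of the rank-one case of such a bound, in the form in which it is usually quoted (the exact constant should be taken from the published statement):

```latex
% \Sigma, \hat\Sigma symmetric p x p with eigenvalues
% \lambda_1 \ge ... \ge \lambda_p and leading eigenvectors v, \hat v.
% The variant needs only the population eigengap \lambda_1 - \lambda_2:
\[
  \sin \Theta(\hat v, v)
  \;\le\;
  \frac{2\,\| \hat\Sigma - \Sigma \|_{\mathrm{op}}}{\lambda_1 - \lambda_2},
\]
% whereas the classical Davis--Kahan theorem separates an eigenvalue of
% \Sigma from an eigenvalue of \hat\Sigma, which is awkward when
% \hat\Sigma is a random sample covariance.
```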
Posted Content

Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees

TL;DR: This work provides a simple set of conditions under which projected gradient descent, given a suitable initialization, converges geometrically to a statistically useful solution of the factorized optimization problem with rank constraints.
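As a concrete illustration of the factorized approach, here is a minimal numpy sketch of projected gradient descent for a rank-r PSD matrix denoising problem; the objective, spectral initialization, step size, and Frobenius-ball projection are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def projected_gradient_lowrank(Y, r, radius, step=0.01, iters=500):
    """Estimate a rank-r PSD matrix M* = U* U*^T from a noisy symmetric
    observation Y by gradient descent on the factor U, projecting U back
    onto a Frobenius-norm ball of the given radius after each step."""
    # Spectral initialization: top-r eigenpairs of Y (a common choice).
    vals, vecs = np.linalg.eigh(Y)
    idx = np.argsort(vals)[::-1][:r]
    U = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
    for _ in range(iters):
        # Gradient of f(U) = 0.25 * ||Y - U U^T||_F^2 is (U U^T - Y) U.
        grad = (U @ U.T - Y) @ U
        U = U - step * grad
        # Projection step: pull U back into the Frobenius ball.
        norm = np.linalg.norm(U)
        if norm > radius:
            U = U * (radius / norm)
    return U @ U.T
```

The Frobenius-ball projection is only a stand-in; the cited guarantees cover more general constraint sets and concrete statistical models.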
Journal Article

Truncated power method for sparse eigenvalue problems

TL;DR: The authors propose a truncated power method that approximately solves the nonconvex optimization problem underlying the sparse eigenvalue problem: extracting the dominant (largest) sparse eigenvector with at most k nonzero components.
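The method is short enough to sketch directly. A minimal numpy version of power iteration with hard truncation to the k largest-magnitude coordinates (the variable names, starting vector, and convergence check are illustrative, not taken from the paper):

```python
import numpy as np

def truncated_power_method(A, k, x0=None, iters=200, tol=1e-8):
    """Approximate the dominant k-sparse eigenvector of a symmetric
    matrix A (assumed to have a dominant positive eigenvalue, e.g. a
    covariance matrix): alternate a power step with hard truncation
    keeping only the k largest-magnitude entries, then renormalize."""
    p = A.shape[0]
    x = np.ones(p) / np.sqrt(p) if x0 is None else x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = A @ x                            # power step
        # Hard truncation: zero out all but the k largest |entries|.
        support = np.argsort(np.abs(y))[-k:]
        x_new = np.zeros(p)
        x_new[support] = y[support]
        nrm = np.linalg.norm(x_new)
        if nrm == 0.0:
            break                            # A @ x vanished; stop
        x_new /= nrm                         # back to the unit sphere
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x
```

The uniform starting vector above is just a simple default; warm starts (for instance from the largest diagonal entries of A) are a common refinement.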
Journal Article

An overview of the estimation of large covariance and precision matrices

TL;DR: The authors provide a selective review of several recent developments in the estimation of large covariance and precision matrices, focusing on two general approaches: a rank-based method and a factor-model-based method.
Proceedings Article

Complexity Theoretic Lower Bounds for Sparse Principal Component Detection

TL;DR: The performance of a test is measured by the smallest signal strength it can detect. A computationally efficient method based on semidefinite programming is proposed, and it is proved that the statistical performance of this test cannot be strictly improved by any computationally efficient method.
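A hedged sketch of the kind of semidefinite relaxation such a test can be built on, written with cvxpy; the constraint set and the use of the optimal value as a test statistic follow the standard l1-relaxation of k-sparse PCA, and may differ in detail from the paper's formulation:

```python
import cvxpy as cp
import numpy as np

def sdp_sparse_pc_statistic(S, k):
    """Semidefinite relaxation of the k-sparse largest-eigenvalue
    problem for a sample covariance S: maximize <S, Z> over PSD Z with
    trace 1 and an elementwise l1 bound. The l1 bound relaxes the
    sparsity of the rank-one matrix x x^T, since a unit vector x with
    at most k nonzero entries satisfies ||x x^T||_1 = ||x||_1^2 <= k."""
    p = S.shape[0]
    Z = cp.Variable((p, p), PSD=True)
    constraints = [cp.trace(Z) == 1, cp.sum(cp.abs(Z)) <= k]
    prob = cp.Problem(cp.Maximize(cp.trace(S @ Z)), constraints)
    prob.solve()
    # Compare the optimal value against a threshold to detect a sparse PC.
    return prob.value
```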
References
Book Chapter

Sparse Principal Component Analysis with Missing Observations

TL;DR: The first information-theoretic lower bound for the sparse PCA problem with missing observations is established, and a BIC-type estimator is studied that requires neither prior knowledge of the sparsity of the unknown first principal component nor imputation of the missing observations.
Journal Article

Sparse Variable PCA Using Geodesic Steepest Descent

TL;DR: A new sparse-variable PCA (svPCA) is proposed that is based on a statistical model, giving access to a range of modeling and inferential tools, including a novel form of the Bayesian information criterion (BIC) for tuning-parameter selection.