Open Access Dissertation

Optimization Methods for Sparse Approximation

Yong Zhang
TLDR
This thesis proposes a novel augmented Lagrangian method for solving the l1-norm relaxations of l0 minimization problems, applies it to a proposed formulation of sparse principal component analysis (PCA), and establishes convergence results for both the inner and outer methods.
Abstract
Over the last two decades, numerous applications have arisen in which sparse solutions are sought. Mathematically, these applications can be formulated as l0 minimization problems. In this thesis, we first propose a novel augmented Lagrangian (AL) method for solving the l1-norm relaxations of the original l0 minimization problems and apply it to our proposed formulation of sparse principal component analysis (PCA). We next propose penalty decomposition (PD) methods for solving the original l0 minimization problems, in which a sequence of penalty subproblems is solved by a block coordinate descent (BCD) method. For the AL method, we show that under some regularity assumptions, it converges to a stationary point. Additionally, we propose two nonmonotone gradient methods for solving the AL subproblems and establish their global and local convergence. Moreover, we apply the AL method to our proposed formulation of sparse PCA and compare our approach with several existing methods on synthetic, Pitprops, and gene expression data. The computational results demonstrate that the sparse principal components (PCs) produced by our approach substantially outperform those of other methods in terms of total explained variance, correlation of PCs, and orthogonality of loading vectors. For the PD methods, under some suitable assumptions, we establish convergence results for both the inner (BCD) and outer (PD) iterations. We test the performance of our PD methods by applying them to sparse logistic regression, sparse inverse covariance selection, and compressed sensing problems. The computational results demonstrate that when solutions of the same cardinality are sought, our approach applied to the l0-based models generally achieves better solution quality and/or speed than existing approaches applied to the corresponding l1-based models.
Finally, we adapt the PD method to solve our proposed wavelet frame based image …
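The penalty decomposition idea described in the abstract can be illustrated with a generic sketch (not the thesis's actual algorithm or code): for the sparse least-squares instance min ||Ax − b||² s.t. ||x||₀ ≤ k, split the variable into a pair (x, y) coupled by a quadratic penalty, and apply block coordinate descent — an unconstrained quadratic x-step and a hard-thresholding y-step — while gradually increasing the penalty parameter. All parameter values below are illustrative.

```python
import numpy as np

def pd_sparse_lsq(A, b, k, rho0=1.0, sigma=2.0, outer=30, inner=50, tol=1e-10):
    """Penalty decomposition sketch for min ||Ax - b||^2 s.t. ||x||_0 <= k.

    Introduces a copy y of x with penalty rho*||x - y||^2 and alternates
    (block coordinate descent):
      x-step: unconstrained quadratic minimization (closed form);
      y-step: hard-threshold x to its k largest-magnitude entries.
    The penalty parameter rho is increased by sigma after each outer pass.
    """
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)
    rho = rho0
    AtA = A.T @ A
    Atb = A.T @ b
    eye = np.eye(n)
    for _ in range(outer):
        for _ in range(inner):
            # x-step: argmin_x ||Ax - b||^2 + rho*||x - y||^2
            x_new = np.linalg.solve(AtA + rho * eye, Atb + rho * y)
            # y-step: keep only the k largest-magnitude entries of x
            y_new = np.zeros(n)
            idx = np.argsort(np.abs(x_new))[-k:]
            y_new[idx] = x_new[idx]
            done = np.linalg.norm(x_new - x) + np.linalg.norm(y_new - y) < tol
            x, y = x_new, y_new
            if done:
                break
        rho *= sigma  # tighten the coupling between x and y
    return y  # exactly k-sparse by construction
```

On well-conditioned noiseless data the k-sparse iterate typically recovers the true support; for ill-conditioned or noisy data the outcome depends on the penalty schedule.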


Citations

Singular Value Decomposition for Genome-Wide Expression Data Processing and Modeling

TL;DR: Using singular value decomposition to transform genome-wide expression data from genes × arrays space to the reduced, diagonalized "eigengenes" × "eigenarrays" space gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or of similar cellular state and biological phenotype.
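The genes × arrays decomposition in the TL;DR can be sketched with NumPy on synthetic data (the matrix and its dimensions below are illustrative, not from the paper): the SVD factors the expression matrix into eigenarrays, singular values, and eigengenes, and the squared singular values give the fraction of expression variance captured by each mode.

```python
import numpy as np

# Synthetic genes-by-arrays expression matrix (illustrative data).
rng = np.random.default_rng(0)
n_genes, n_arrays = 100, 6
expression = rng.normal(size=(n_genes, n_arrays))

# expression = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(expression, full_matrices=False)
eigengenes = Vt    # each row: an "eigengene", a pattern across arrays
eigenarrays = U    # each column: an "eigenarray", a pattern across genes

# Fraction of overall expression variance captured by each mode.
variance_fraction = s**2 / np.sum(s**2)

# Low-rank reconstruction keeping only the two dominant modes.
approx = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
```

Sorting genes and arrays by their projections onto the dominant eigengenes/eigenarrays is what yields the "global picture" of co-regulated groups described above.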
Journal ArticleDOI

Multi-Tissue Decomposition of Diffusion MRI Signals via ℓ0 Sparse-Group Estimation

TL;DR: An algorithm for solving the ℓ0 sparse-group estimation problem is presented and applied to the tissue signal separation problem in diffusion MRI; results confirm that the method yields estimates of FODFs and volume fractions of tissue compartments with improved robustness and accuracy.
References
Journal ArticleDOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models, called the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
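The lasso as stated in the TL;DR is a constrained problem; for a matching penalty weight it has an equivalent penalized form, 0.5·||Ax − b||² + λ·||x||₁, which the sketch below minimizes by proximal gradient (ISTA) with soft-thresholding. This is a generic illustration, not code from the cited paper; parameter values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, b, lam, iters=2000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The soft-thresholding step is what drives small coefficients exactly to zero, producing the variable selection effect the lasso is known for.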
Book

Matrix computations

Gene H. Golub
Journal ArticleDOI

Pattern Recognition and Machine Learning

Radford M. Neal
01 Aug 2007
TL;DR: This book covers a broad range of topics for regular factorial designs, presents all of the material in a very mathematical fashion, and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.
Journal ArticleDOI

Nonlinear total variation based noise removal algorithms

TL;DR: In this article, a constrained-optimization numerical algorithm for removing noise from images is presented, in which the total variation of the image is minimized subject to constraints involving the statistics of the noise.
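A minimal sketch of the total-variation idea, on a 1-D signal rather than an image, and using plain gradient descent on a smoothed TV surrogate instead of the paper's constrained PDE-based scheme (all parameter values are illustrative assumptions):

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-2, step=0.05, iters=1000):
    """Gradient-descent sketch of ROF-style total variation denoising (1-D).

    Minimizes 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps),
    where the sqrt term is a differentiable surrogate for |u[i+1]-u[i]|.
    The step size must satisfy step < 2 / (1 + 4*lam/sqrt(eps)) for stability.
    """
    u = np.asarray(f, dtype=float).copy()
    for _ in range(iters):
        d = np.diff(u)
        w = d / np.sqrt(d**2 + eps)  # derivative of the smoothed |.|
        # Gradient of the TV term: w[j-1] - w[j] at interior points.
        grad_tv = np.concatenate(([-w[0]], -np.diff(w), [w[-1]]))
        u -= step * ((u - f) + lam * grad_tv)
    return u
```

Because the penalty grows only linearly in the jump size, large edges are shrunk slightly but preserved, while small oscillations (noise) are flattened — the key property of TV over quadratic smoothing.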
Journal ArticleDOI

Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information

TL;DR: In this paper, the authors consider the model problem of reconstructing an object from incomplete frequency samples and show that, with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.