Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: This work considers bipartite matching algorithms for computing permutations of a sparse matrix so that the diagonal of the permuted matrix has entries of large absolute value, along with scaling techniques that further increase the relative values of the diagonal entries.
Abstract: We consider bipartite matching algorithms for computing permutations of a sparse matrix so that the diagonal of the permuted matrix has entries of large absolute value. We discuss various strategies for this and consider their implementation as computer codes. We also consider scaling techniques to further increase the relative values of the diagonal entries. Numerical experiments show the effect of the reorderings and the scaling on the solution of sparse equations by a direct method and by preconditioned iterative techniques.

280 citations
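
The weighted bipartite matching codes the paper studies (e.g., MC64 from HSL) are not available in SciPy, but the structural variant of the same idea is. Below is a minimal sketch, assuming SciPy, that permutes the columns of a sparse matrix so its diagonal becomes zero-free; the example matrix is illustrative, and the paper's algorithms additionally maximize the absolute values of the diagonal entries and compute scalings.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

# A sparse matrix whose diagonal contains zeros.
A = csr_matrix(np.array([[0.0, 3.0, 0.0],
                         [2.0, 0.0, 0.0],
                         [0.0, 0.0, 5.0]]))

# Match each row to a column holding a nonzero entry;
# perm[i] is the column matched to row i.
perm = maximum_bipartite_matching(A, perm_type='column')

# Permuting the columns puts the matched entries on the diagonal.
A_perm = A[:, perm]
print(A_perm.toarray())  # diagonal is now zero-free: 3, 2, 5
```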

Posted Content
TL;DR: In this article, rank-aware algorithms for sparse multiple measurement vector (MMV) problems were proposed and compared with rank-blind algorithms, such as SOMP and mixed norm minimization techniques.
Abstract: In this paper we revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of the single-channel sparse recovery problem, which lies at the heart of the emerging field of compressed sensing. However, the sparse approximation problem has origins that include links to the field of array signal processing, where we find the inspiration for a new family of MMV algorithms based on the MUSIC algorithm. We highlight the role of the rank of the coefficient matrix X in determining the difficulty of the recovery problem. We derive the necessary and sufficient conditions for the uniqueness of the sparse MMV solution, which indicate that the larger the rank of X, the less sparse X needs to be to ensure uniqueness. We also show that the larger the rank of X, the less the computational effort required to solve the MMV problem through a combinatorial search. In the second part of the paper we consider practical suboptimal algorithms for solving the sparse MMV problem. We examine the rank awareness of popular algorithms such as SOMP and mixed norm minimization techniques and show them to be rank blind in terms of worst-case analysis. We then consider a family of greedy algorithms that are rank aware. The simplest such algorithm is a discrete version of MUSIC and is guaranteed to recover the sparse vectors in the full-rank MMV case under mild conditions. We extend this idea to develop a rank-aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case and also provides guaranteed recovery in the full-rank multi-measurement case. Numerical simulations demonstrate that the rank-aware algorithms are significantly better than existing algorithms in dealing with multiple measurements.

280 citations
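
As a point of reference for the algorithms the paper analyzes, here is a minimal NumPy sketch of SOMP, one of the rank-blind baselines it examines: at each step the atom whose correlations with the residual, summed across channels, are largest is selected, and all selected atoms are then jointly re-fit by least squares. The function name and the toy problem sizes are illustrative, not from the paper.

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: recover a row-sparse X with Y ~ A @ X.

    A: (m, n) dictionary; Y: (m, L) measurement vectors;
    k: number of atoms (nonzero rows of X) to select.
    """
    support, R = [], Y.copy()
    for _ in range(k):
        # Score each atom by the l2 norm of its correlations
        # across all L channels (a mixed-norm criterion).
        scores = np.linalg.norm(A.T @ R, axis=1)
        scores[support] = 0.0  # do not reselect atoms
        support.append(int(np.argmax(scores)))
        # Jointly re-fit the selected atoms and update the residual.
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[support] = X_s
    return X

# Toy example: 5 nonzero rows, 20 channels, Gaussian dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
X_true = np.zeros((100, 20))
X_true[rng.choice(100, 5, replace=False)] = rng.standard_normal((5, 20))
X_hat = somp(A, A @ X_true, k=5)
print(np.max(np.abs(X_hat - X_true)))  # ~1e-14 when the support is found
```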

Proceedings ArticleDOI
23 Jun 2013
TL;DR: In this paper, ideas from signal processing and Augmented Lagrange Methods are combined to produce a fast convolutional sparse coding algorithm with globally optimal subproblems and super-linear convergence.
Abstract: Sparse coding has become an increasingly popular method in learning and vision for a variety of classification, reconstruction and coding tasks. The canonical approach intrinsically assumes independence between observations during learning. For many natural signals, however, sparse coding is applied to sub-elements (i.e., patches) of the signal, where such an assumption is invalid. Convolutional sparse coding explicitly models local interactions through the convolution operator; however, the resulting optimization problem is considerably more complex than traditional sparse coding. In this paper, we draw upon ideas from signal processing and Augmented Lagrange Methods (ALMs) to produce a fast algorithm with globally optimal subproblems and super-linear convergence.

277 citations
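
The key trick behind the speed of such methods is that circular convolution is diagonalized by the FFT, so the convolutional subproblems become pointwise operations in the Fourier domain. The paper's algorithm is an ADMM scheme; the sketch below is instead a hypothetical ISTA-style solver for 1-D signals that only illustrates this Fourier-domain step, with all names and sizes illustrative.

```python
import numpy as np

def conv_sparse_code_ista(x, D, lam=0.1, n_iter=200):
    """Convolutional sparse coding of a 1-D signal x with filters D.

    Minimizes 0.5*||x - sum_k d_k (*) z_k||^2 + lam * sum_k ||z_k||_1,
    where (*) is circular convolution, via ISTA; every convolution is
    a pointwise product in the Fourier domain.
    """
    n, K = len(x), D.shape[0]
    Df = np.fft.rfft(D, n=n, axis=1)   # zero-padded filters, frequency domain
    xf = np.fft.rfft(x)
    # Step size 1/L, where L bounds the curvature of the quadratic term:
    # per frequency, the largest eigenvalue is sum_k |d_k(w)|^2.
    step = 1.0 / np.max(np.sum(np.abs(Df) ** 2, axis=0))
    Z = np.zeros((K, n))
    for _ in range(n_iter):
        Zf = np.fft.rfft(Z, axis=1)
        rf = np.sum(Df * Zf, axis=0) - xf          # residual, per frequency
        grad = np.fft.irfft(np.conj(Df) * rf, n=n, axis=1)
        Z = Z - step * grad
        Z = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # soft threshold
    return Z

# Tiny demo: a signal built from shifted copies of two of the filters.
rng = np.random.default_rng(1)
D = rng.standard_normal((3, 11))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = np.roll(np.pad(D[0], (0, 117)), 20) + 0.5 * np.roll(np.pad(D[1], (0, 117)), 64)
Z = conv_sparse_code_ista(x, D, lam=0.05)
```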

Proceedings ArticleDOI
22 Mar 2006
TL;DR: In this paper, the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements were shown; unlike earlier guarantees, which involve huge constants despite the very good practical performance of the algorithms, these hold with reasonable constants.
Abstract: This paper proves the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements. We consider Fourier measurements (a random sample of frequencies of f) and random Gaussian measurements. The reconstruction method that has recently gained momentum in sparse approximation theory is to relax this highly non-convex problem to a convex problem and then solve it as a linear program. What the best guarantees are for the reconstruction problem to be equivalent to its convex relaxation is an open question. Recent work shows that the number of measurements k(r,n) needed to exactly reconstruct any r-sparse signal f of length n from its linear measurements with convex relaxation is usually O(r polylog(n)). However, known guarantees involve huge constants, in spite of the very good performance of the algorithms in practice. In an attempt to reconcile theory with practice, we prove the first guarantees for universal measurements (i.e., which work for all sparse functions) with reasonable constants. For Gaussian measurements, k(r,n) ≲ 11.7 r [1.5 + log(n/r)], which is optimal up to constants. For Fourier measurements, we prove the best known bound k(r,n) = O(r log(n) · log²(r) · log(r log n)), which is optimal within the log log n and log³ r factors. Our arguments are based on the techniques of geometric functional analysis and probability in Banach spaces.

276 citations
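
The convex relaxation referred to here is basis pursuit: minimize ||x||_1 subject to Ax = b. Splitting x = u - v with u, v >= 0 turns this into a linear program, as the following sketch shows, assuming SciPy's linprog; the toy problem sizes are illustrative and unrelated to the paper's bounds.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. A x = b as a linear program.

    With x = u - v and u, v >= 0, ||x||_1 = sum(u) + sum(v).
    """
    m, n = A.shape
    c = np.ones(2 * n)             # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])      # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy check: a 3-sparse signal from 25 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((25, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))  # near zero when recovery succeeds
```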

Journal ArticleDOI
TL;DR: This paper proposes a class of nonconvex penalty functions that maintains the convexity of the least squares cost function to be minimized and avoids the systematic underestimation characteristic of L1 norm regularization.
Abstract: Sparse approximate solutions to linear equations are classically obtained via L1 norm regularized least squares, but this method often underestimates the true solution. As an alternative to the L1 norm, this paper proposes a class of nonconvex penalty functions that maintains the convexity of the least squares cost function to be minimized and avoids the systematic underestimation characteristic of L1 norm regularization. The proposed penalty function is a multivariate generalization of the minimax-concave penalty. It is defined in terms of a new multivariate generalization of the Huber function, which in turn is defined via infimal convolution. The proposed sparse-regularized least squares cost function can be minimized by proximal algorithms comprising simple computations.

276 citations
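
For intuition, the scalar minimax-concave penalty that the paper generalizes has a closed-form proximal operator, often called firm thresholding. Unlike the soft threshold (the prox of the L1 norm), it passes large coefficients through unchanged, which is precisely the underestimation bias the abstract refers to. A minimal sketch follows; parameter names are illustrative, and this is the scalar penalty, not the paper's multivariate generalization.

```python
import numpy as np

def firm_threshold(t, lam, gamma=2.0):
    """Proximal operator of the scalar minimax-concave penalty (gamma > 1).

    |t| <= lam                -> 0
    lam < |t| <= gamma * lam  -> sign(t) * gamma * (|t| - lam) / (gamma - 1)
    |t| > gamma * lam         -> t  (no shrinkage of large coefficients)
    """
    t = np.asarray(t, dtype=float)
    mid = np.sign(t) * gamma * (np.abs(t) - lam) / (gamma - 1.0)
    return np.where(np.abs(t) <= lam, 0.0,
                    np.where(np.abs(t) <= gamma * lam, mid, t))

t = np.linspace(-3.0, 3.0, 7)
print(firm_threshold(t, lam=1.0))                  # [-3, -2, 0, 0, 0, 2, 3]
print(np.sign(t) * np.maximum(np.abs(t) - 1, 0))   # soft threshold shrinks: [-2, -1, ...]
```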


Network Information
Related Topics (5)

Feature extraction: 111.8K papers, 2.1M citations, 93% related
Image segmentation: 79.6K papers, 1.8M citations, 92% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371