Compressed sensing

About: Compressed sensing is a research topic. Over its lifetime, 16,412 publications have been published on this topic, receiving 358,493 citations. The topic is also known as: compressive sensing and compressive sampling.


Papers
Proceedings Article (DOI)
23 Jun 2008
TL;DR: It is shown that a small set of randomly chosen raw patches from training images of similar statistical nature to the input image generally serves as a good dictionary, in the sense that the computed representation is sparse and the recovered high-resolution image is competitive or even superior in quality to images produced by other SR methods.
Abstract: This paper addresses the problem of generating a super-resolution (SR) image from a single low-resolution input image. We approach this problem from the perspective of compressed sensing. The low-resolution image is viewed as a downsampled version of a high-resolution image, whose patches are assumed to have a sparse representation with respect to an over-complete dictionary of prototype signal-atoms. The principle of compressed sensing ensures that under mild conditions, the sparse representation can be correctly recovered from the downsampled signal. We will demonstrate the effectiveness of sparsity as a prior for regularizing the otherwise ill-posed super-resolution problem. We further show that a small set of randomly chosen raw patches from training images of similar statistical nature to the input image generally serves as a good dictionary, in the sense that the computed representation is sparse and the recovered high-resolution image is competitive or even superior in quality to images produced by other SR methods.

1,546 citations
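
A minimal sketch of the patch-wise sparse-coding step described above (not the authors' implementation): a low-resolution patch is coded over a low-resolution dictionary with an l1-regularized fit, and the coupled high-resolution dictionary synthesizes the output patch. The dictionary sizes, Lasso penalty, and random data below are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Illustrative sizes: 5x5 low-res patches, 10x10 high-res patches, 512 atoms.
n_atoms, lo_dim, hi_dim = 512, 25, 100
D_lo = rng.standard_normal((lo_dim, n_atoms))   # low-res dictionary (e.g., raw training patches)
D_hi = rng.standard_normal((hi_dim, n_atoms))   # coupled high-res dictionary
D_lo /= np.linalg.norm(D_lo, axis=0)            # unit-norm atoms

y_lo = rng.standard_normal(lo_dim)              # placeholder for a vectorized low-res input patch

# l1-regularized sparse coding of the low-res patch over D_lo
# (Lasso as a stand-in for the basis-pursuit step described in the paper).
coder = Lasso(alpha=0.05, max_iter=10_000, fit_intercept=False)
coder.fit(D_lo, y_lo)
code = coder.coef_

x_hi = D_hi @ code                              # synthesized high-res patch
print("nonzeros in the sparse code:", np.count_nonzero(code))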

Proceedings Article
01 Mar 2008
TL;DR: This paper overviews the recent work on compressive sensing, a new approach to data acquisition in which analog signals are digitized for processing not via uniform sampling but via measurements using more general, even random, test functions.
Abstract: This paper overviews the recent work on compressive sensing, a new approach to data acquisition in which analog signals are digitized for processing not via uniform sampling but via measurements using more general, even random, test functions. In stark contrast with conventional wisdom, the new theory asserts that one can combine "low-rate sampling" with digital computational power for efficient and accurate signal acquisition. Compressive sensing systems directly translate analog data into a compressed digital form; all we need to do is "decompress" the measured data through an optimization on a digital computer. The implications of compressive sensing are promising for many applications and enable the design of new kinds of analog-to-digital converters, cameras, and imaging systems.

1,537 citations
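
As a minimal sketch of the acquisition-plus-"decompression" pipeline described above: a sparse signal is measured with a random Gaussian matrix and recovered by basis pursuit, written here as a linear program. The problem sizes and the LP reformulation are illustrative choices, not taken from the paper.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                           # signal length, measurements, sparsity (illustrative)

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random "test functions" as measurement rows
y = A @ x_true                                 # compressed measurements

# Basis pursuit min ||x||_1 s.t. Ax = y as an LP: x = p - q with p, q >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))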

Journal Article (DOI)
TL;DR: The authors propose simple and extremely efficient methods, based on Bregman iterative regularization, for solving the basis pursuit problem used in compressed sensing; the methods give a very accurate solution after solving only a small number of instances of an unconstrained subproblem.
Abstract: We propose simple and extremely efficient methods for solving the basis pursuit problem $\min\{\|u\|_1 : Au = f, u\in\mathbb{R}^n\},$ which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem $\min_{u\in\mathbb{R}^n} \mu\|u\|_1+\frac{1}{2}\|Au-f^k\|_2^2$ for given matrix $A$ and vector $f^k$. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving $A$ and $A^\top$ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.

1,510 citations
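
A rough sketch of the outer Bregman loop described above, with a plain iterative soft-thresholding (ISTA) solver standing in for the fixed-point continuation solver used in the paper. The penalty mu, iteration counts, and problem sizes are illustrative.

import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, f, mu, n_iter=500):
    # Solve min_u mu*||u||_1 + 0.5*||A u - f||_2^2 by iterative soft-thresholding.
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size <= 1 / ||A^T A||
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = soft_threshold(u - t * A.T @ (A @ u - f), t * mu)
    return u

def bregman_basis_pursuit(A, f, mu=1.0, n_outer=6):
    # Bregman iteration: add the residual back to the data and re-solve the subproblem.
    f_k = np.zeros_like(f)
    u = np.zeros(A.shape[1])
    for _ in range(n_outer):
        f_k = f + (f_k - A @ u)
        u = ista(A, f_k, mu)
    return u

rng = np.random.default_rng(2)
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
f = A @ x_true

x_hat = bregman_basis_pursuit(A, f)
print("recovery error:", np.linalg.norm(x_hat - x_true))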

Journal Article (DOI)
TL;DR: This work analyzes the behavior of $\ell_1$-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern of a vector $\beta^*$ based on observations contaminated by noise, and establishes precise conditions on the problem dimension $p$, the number $k$ of nonzero elements in $\beta^*$, and the number of observations $n$.
Abstract: The problem of consistently estimating the sparsity pattern of a vector $\beta^* \in \mathbb{R}^p$ based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of $\ell_1$-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish precise conditions on the problem dimension $p$, the number $k$ of nonzero elements in $\beta^*$, and the number of observations $n$ that are necessary and sufficient for sparsity pattern recovery using the Lasso. We first analyze the case of observations made using deterministic design matrices and sub-Gaussian additive noise, and provide sufficient conditions for support recovery and $\ell_\infty$-error bounds, as well as results showing the necessity of incoherence and bounds on the minimum value. We then turn to the case of random designs, in which each row of the design is drawn from an $N(0, \Sigma)$ ensemble. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we compute explicit values of thresholds $0 < \theta_\ell(\Sigma) \le \theta_u(\Sigma) < +\infty$ with the following property: for any $\delta > 0$, if $n > 2(\theta_u + \delta)\, k \log(p - k)$, then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for $n < 2(\theta_\ell - \delta)\, k \log(p - k)$ the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble ($\Sigma = I_{p \times p}$), we show that $\theta_\ell = \theta_u = 1$, so that the threshold $n = 2k \log(p - k)$ is sharp.

1,438 citations
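
As a back-of-the-envelope illustration of the $2\theta k \log(p-k)$ sample-size threshold ($\theta = 1$ for the uniform Gaussian ensemble), the sketch below computes the threshold for illustrative p and k and runs an off-the-shelf Lasso well above and well below it, counting how many of the k largest estimated coefficients land on the true support. The penalty, noise level, and success criterion are assumptions, not the paper's.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
p, k = 512, 10
n_threshold = 2 * k * np.log(p - k)            # ~ 124 observations for theta = 1
print("threshold n ~", round(n_threshold))

def correct_top_k(n, sigma=0.1):
    # Draw a standard Gaussian design, fit the Lasso, and count how many of the
    # k largest |coefficients| fall on the true support {0, ..., k-1}.
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:k] = 1.0
    y = X @ beta + sigma * rng.standard_normal(n)
    coef = Lasso(alpha=0.05, max_iter=10_000).fit(X, y).coef_
    top_k = np.argsort(np.abs(coef))[-k:]
    return np.intersect1d(top_k, np.arange(k)).size

print("correct at n = 2x threshold:", correct_top_k(int(2 * n_threshold)), "of", k)
print("correct at n = 0.3x threshold:", correct_top_k(int(0.3 * n_threshold)), "of", k)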

Journal Article (DOI)
TL;DR: Stagewise Orthogonal Matching Pursuit (StOMP) successively transforms the signal into a negligible residual; numerical examples show that StOMP rapidly and reliably finds sparse solutions in compressed sensing, decoding of error-correcting codes, and overcomplete representation.
Abstract: Finding the sparsest solution to underdetermined systems of linear equations $y = \Phi x$ is NP-hard in general. We show here that for systems with "typical"/"random" $\Phi$, a good approximation to the sparsest solution is obtained by applying a fixed number of standard operations from linear algebra. Our proposal, Stagewise Orthogonal Matching Pursuit (StOMP), successively transforms the signal into a negligible residual. Starting with initial residual $r_0 = y$, at the $s$-th stage it forms the "matched filter" $\Phi^T r_{s-1}$, identifies all coordinates with amplitudes exceeding a specially chosen threshold, solves a least-squares problem using the selected coordinates, and subtracts the least-squares fit, producing a new residual. After a fixed number of stages (e.g., 10), it stops. In contrast to Orthogonal Matching Pursuit (OMP), many coefficients can enter the model at each stage in StOMP, while only one enters per stage in OMP; and StOMP takes a fixed number of stages (e.g., 10), while OMP can take many (e.g., $n$). We give both theoretical and empirical support for the large-system effectiveness of StOMP. We give numerical examples showing that StOMP rapidly and reliably finds sparse solutions in compressed sensing, decoding of error-correcting codes, and overcomplete representation.

1,416 citations
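
A sketch of the StOMP stages as described in the abstract: matched filter, thresholding, least squares on the merged support, residual update, repeated for a fixed number of stages. The threshold rule (a multiple of ||r||/sqrt(n)) and all parameters below are illustrative choices, not the paper's exact settings.

import numpy as np

def stomp(Phi, y, n_stages=10, t=2.5):
    # Stagewise Orthogonal Matching Pursuit: at each stage, select all columns
    # whose matched-filter output exceeds a threshold, then refit by least squares.
    n = Phi.shape[0]
    support = np.array([], dtype=int)
    x = np.zeros(Phi.shape[1])
    r = y.copy()                                    # initial residual r_0 = y
    for _ in range(n_stages):
        c = Phi.T @ r                               # matched filter
        thresh = t * np.linalg.norm(r) / np.sqrt(n) # threshold ~ formal noise level
        new = np.flatnonzero(np.abs(c) > thresh)
        support = np.union1d(support, new).astype(int)
        if support.size == 0:
            break
        sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = np.zeros(Phi.shape[1])
        x[support] = sol
        r = y - Phi[:, support] @ sol               # subtract the least-squares fit
    return x

rng = np.random.default_rng(4)
n, p, k = 128, 400, 12
Phi = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true

x_hat = stomp(Phi, y)
print("recovery error:", np.linalg.norm(x_hat - x_true))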


Network Information
Related Topics (5)

Topic                           Papers     Citations   Related
Convolutional neural network    74.7K      2M          88%
Feature extraction              111.8K     2.1M        88%
Deep learning                   79.8K      2.1M        88%
Image processing                229.9K     3.5M        87%
Optimization problem            96.4K      2.1M        84%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    611
2022    1,348
2021    823
2020    1,114
2019    1,387