
Showing papers by "Joel A. Tropp published in 2006"


Journal ArticleDOI
TL;DR: A method called convex relaxation attempts to recover the ideal sparse signal by solving a convex program, an optimization that can be completed in polynomial time with standard scientific software.
Abstract: This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis.
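
A minimal sketch of the convex relaxation idea described above, assuming a generic random dictionary and using the cvxpy modeling package: recover a sparse coefficient vector from noisy observations by minimizing the l_1 norm subject to a data-fidelity constraint. The dictionary, noise level, and tolerance below are illustrative and not taken from the paper.

```python
# Hedged sketch of l_1 convex relaxation for sparse recovery from noisy data.
# Dictionary, noise level, and tolerance eps are illustrative, not from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, m = 60, 200, 5                              # measurements, dictionary size, sparsity
Phi = rng.standard_normal((n, d)) / np.sqrt(n)    # columns play the role of elementary signals

x_true = np.zeros(d)
x_true[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
noise = 0.01 * rng.standard_normal(n)
s = Phi @ x_true + noise                          # observed noisy linear combination

# Convex relaxation: minimize the l_1 norm subject to a data-fidelity constraint.
eps = 1.1 * np.linalg.norm(noise)                 # in practice, an assumed noise bound
x = cp.Variable(d)
problem = cp.Problem(cp.Minimize(cp.norm1(x)),
                     [cp.norm(Phi @ x - s, 2) <= eps])
problem.solve()

print("true support:     ", sorted(np.flatnonzero(x_true)))
print("recovered support:", sorted(np.flatnonzero(np.abs(x.value) > 1e-3)))
```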

1,536 citations


Journal ArticleDOI
TL;DR: This paper proposes a greedy pursuit algorithm, called simultaneous orthogonal matching pursuit (S-OMP), for simultaneous sparse approximation, and presents some numerical experiments that demonstrate how a sparse model for the input signals can be identified more reliably given several input signals.
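
The greedy selection rule behind S-OMP can be sketched in a few lines of NumPy. The sketch below, with made-up problem sizes and a fixed iteration count as the stopping rule, selects at each step the atom whose aggregate correlation with all residuals is largest and then refits every signal on the chosen atoms; it illustrates the idea rather than reproducing the paper's exact algorithmic details.

```python
# Hedged sketch of simultaneous orthogonal matching pursuit (S-OMP):
# greedily pick the atom most correlated, in aggregate, with all residuals,
# then refit all signals on the selected atoms. Sizes and stopping rule are illustrative.
import numpy as np

def somp(Phi, S, num_atoms):
    """Phi: (n, d) dictionary; S: (n, K) signals sharing a sparse support."""
    residual = S.copy()
    support = []
    for _ in range(num_atoms):
        # Aggregate correlation of each atom with every residual column.
        scores = np.sum(np.abs(Phi.T @ residual), axis=1)
        scores[support] = -np.inf                 # do not reselect atoms
        support.append(int(np.argmax(scores)))
        # Orthogonal projection: least-squares refit of all signals on the chosen atoms.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], S, rcond=None)
        residual = S - Phi[:, support] @ coeffs
    return support, coeffs

rng = np.random.default_rng(1)
n, d, K, m = 50, 120, 4, 3
Phi = rng.standard_normal((n, d)); Phi /= np.linalg.norm(Phi, axis=0)
true_support = rng.choice(d, m, replace=False)
S = Phi[:, true_support] @ rng.standard_normal((m, K)) + 0.01 * rng.standard_normal((n, K))
support, _ = somp(Phi, S, num_atoms=m)
print(sorted(true_support), sorted(support))
```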

1,422 citations


Journal ArticleDOI
TL;DR: This paper presents theoretical and numerical results for a greedy pursuit algorithm, called simultaneous orthogonal matching pursuit, and develops conditions under which convex relaxation computes good solutions to simultaneous sparse approximation problems.
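
One common convex program for simultaneous sparse approximation penalizes the sum of the l_2 norms of the rows of the coefficient matrix, so that the same few atoms are active for every signal. The sketch below, with illustrative sizes and regularization weight, solves such a program with cvxpy; it is not necessarily the exact relaxation analyzed in the paper.

```python
# Hedged sketch of a convex relaxation for simultaneous sparse approximation:
# penalize the sum of l_2 norms of coefficient rows (one common choice;
# not necessarily the exact program analyzed in the paper). Sizes are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, d, K = 50, 120, 4
Phi = rng.standard_normal((n, d)); Phi /= np.linalg.norm(Phi, axis=0)
S = rng.standard_normal((n, K))          # stack of input signals to approximate

X = cp.Variable((d, K))
row_norms = cp.norm(X, 2, axis=1)        # one l_2 norm per dictionary atom
lam = 0.5                                # assumed regularization weight
objective = cp.Minimize(0.5 * cp.sum_squares(Phi @ X - S) + lam * cp.sum(row_norms))
cp.Problem(objective).solve()

shared_support = np.flatnonzero(np.linalg.norm(X.value, axis=1) > 1e-3)
print("atoms selected for all signals:", shared_support)
```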

857 citations



Proceedings ArticleDOI
14 May 2006
TL;DR: A new technique for efficiently acquiring and reconstructing signals, based on convolution with a fixed FIR filter having random taps; the method is sufficiently generic to summarize many types of compressible signals and generalizes to streaming and continuous-time signals.
Abstract: We propose and study a new technique for efficiently acquiring and reconstructing signals based on convolution with a fixed FIR filter having random taps. The method is designed for sparse and compressible signals, i.e., ones that are well approximated by a short linear combination of vectors from an orthonormal basis. Signal reconstruction involves a non-linear Orthogonal Matching Pursuit algorithm that we implement efficiently by exploiting the nonadaptive, time-invariant structure of the measurement process. While simpler and more efficient than other random acquisition techniques like Compressed Sensing, random filtering is sufficiently generic to summarize many types of compressible signals and generalizes to streaming and continuous-time signals. Extensive numerical experiments demonstrate its efficacy for acquiring and reconstructing signals sparse in the time, frequency, and wavelet domains, as well as piecewise smooth signals and Poisson processes.
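
The acquisition step described above can be sketched directly: convolve the signal with a short FIR filter whose taps are random and keep a downsampled subset of the outputs. The sketch below uses an illustrative filter length and decimation factor and omits the structured OMP reconstruction.

```python
# Hedged sketch of the acquisition step only: convolve the signal with a fixed FIR
# filter whose taps are random, then downsample the output. Filter length and
# downsampling factor are illustrative; the structured OMP reconstruction is omitted.
import numpy as np

rng = np.random.default_rng(3)
d, taps, keep_every = 512, 32, 8            # signal length, filter length, decimation
x = np.zeros(d)
x[rng.choice(d, 10, replace=False)] = rng.standard_normal(10)   # sparse-in-time signal

h = rng.choice([-1.0, 1.0], size=taps)      # fixed FIR filter with random +/-1 taps
full_output = np.convolve(x, h, mode="full")
measurements = full_output[::keep_every]    # nonadaptive, time-invariant measurements

print(f"{d} samples compressed to {measurements.size} filtered measurements")
```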

322 citations


Proceedings ArticleDOI
14 May 2006
TL;DR: A sufficient condition under which the general IT algorithm exactly recovers a sparse signal is presented, in which the cumulative coherence function naturally arises; this extends previous results concerning the orthogonal matching pursuit and basis pursuit algorithms to IT algorithms.
Abstract: The well-known shrinkage technique is still relevant for contemporary signal processing problems over redundant dictionaries. We present theoretical and empirical analyses for two iterative algorithms for sparse approximation that use shrinkage. The GENERAL IT algorithm amounts to a Landweber iteration with nonlinear shrinkage at each iteration step. The BLOCK IT algorithm arises in morphological components analysis. A sufficient condition for which General IT exactly recovers a sparse signal is presented, in which the cumulative coherence function naturally arises. This analysis extends previous results concerning the Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP) algorithms to IT algorithms.
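
The general IT iteration, a Landweber step on the residual followed by soft shrinkage, fits in a few lines of NumPy. The step size, threshold, and iteration count in the sketch below are illustrative choices rather than the settings analyzed in the paper.

```python
# Hedged sketch of the general iterative-thresholding idea: a Landweber (gradient)
# step on the residual followed by soft shrinkage. Step size, threshold, and
# iteration count are illustrative choices, not the settings analyzed in the paper.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def general_it(Phi, s, threshold, step, iters=500):
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + step * Phi.T @ (s - Phi @ x), threshold)
    return x

rng = np.random.default_rng(4)
n, d = 60, 200
Phi = rng.standard_normal((n, d)) / np.sqrt(n)            # redundant dictionary
x_true = np.zeros(d)
x_true[rng.choice(d, 5, replace=False)] = 1.0
s = Phi @ x_true

step = 1.0 / np.linalg.norm(Phi, 2) ** 2                  # keep the Landweber step contractive
x_hat = general_it(Phi, s, threshold=0.01, step=step)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```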

155 citations


Posted Content
TL;DR: A new method for recovering m-sparse signals that is simultaneously uniform and quick is developed, and vectors of support m in dimension d can be linearly embedded into O(m log^2 d) dimensions with polylogarithmic distortion.
Abstract: This paper develops a new method for recovering m-sparse signals that is simultaneously uniform and quick. We present a reconstruction algorithm whose run time, O(m log^2(m) log^2(d)), is sublinear in the length d of the signal. The reconstruction error is within a logarithmic factor (in m) of the optimal m-term approximation error in l_1. In particular, the algorithm recovers m-sparse signals perfectly and noisy signals are recovered with polylogarithmic distortion. Our algorithm makes O(m log^2 (d)) measurements, which is within a logarithmic factor of optimal. We also present a small-space implementation of the algorithm. These sketching techniques and the corresponding reconstruction algorithms provide an algorithmic dimension reduction in the l_1 norm. In particular, vectors of support m in dimension d can be linearly embedded into O(m log^2 d) dimensions with polylogarithmic distortion. We can reconstruct a vector from its low-dimensional sketch in time O(m log^2(m) log^2(d)). Furthermore, this reconstruction is stable and robust under small perturbations.
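
Stated symbolically, the guarantees quoted above take roughly the following form, with constants suppressed; here $\hat{x}$ denotes the reconstruction and $x_m$ the best m-term approximation of $x$.

```latex
% Rough symbolic summary of the stated guarantees (constants suppressed);
% \hat{x} is the reconstruction and x_m the best m-term approximation of x.
\[
  \|x - \hat{x}\|_1 \;\lesssim\; \log(m)\,\|x - x_m\|_1,
  \qquad
  \text{measurements} = O(m \log^2 d),
  \qquad
  \text{run time} = O\bigl(m \log^2(m) \log^2(d)\bigr).
\]
```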

134 citations


Proceedings Article
19 Aug 2006
TL;DR: In this article, the authors present a reconstruction algorithm whose run time, O(m log(m) log(d)), is sublinear in the length d of the signal; the reconstruction error is within a logarithmic factor (in m) of the optimal m-term approximation error in the l_1 norm.
Abstract: Using a number of different algorithms, we can recover approximately a sparse signal with limited noise, i.e., a vector of length d with at least d−m zeros or near-zeros, using little more than m log(d) nonadaptive linear measurements rather than the d measurements needed to recover an arbitrary signal of length d. We focus on two important properties of such algorithms:
• Uniformity. A single measurement matrix should work simultaneously for all signals.
• Computational efficiency. The time to recover such an m-sparse signal should be close to the obvious lower bound, m log(d/m).
This paper develops a new method for recovering m-sparse signals that is simultaneously uniform and quick. We present a reconstruction algorithm whose run time, O(m log(m) log(d)), is sublinear in the length d of the signal. The reconstruction error is within a logarithmic factor (in m) of the optimal m-term approximation error in l_1. In particular, the algorithm recovers m-sparse signals perfectly and noisy signals are recovered with polylogarithmic distortion. Our algorithm makes O(m log(d)) measurements, which is within a logarithmic factor of optimal. We also present a small-space implementation of the algorithm. These sketching techniques and the corresponding reconstruction algorithms provide an algorithmic dimension reduction in the l_1 norm. In particular, vectors of support m in dimension d can be linearly embedded into O(m log d) dimensions with polylogarithmic distortion. We can reconstruct a vector from its low-dimensional sketch in time O(m log(m) log(d)). Furthermore, this reconstruction is stable and robust under small perturbations.

111 citations


Patent
25 Oct 2006
TL;DR: In this paper, the authors demonstrate and reduce to practice methods to extract information directly from an analog or digital signal based on altering our notion of sampling to replace uniform time samples with more general linear functionals.
Abstract: A typical data acquisition system takes periodic samples of a signal, image, or other data, often at the so-called Nyquist/Shannon sampling rate of two times the data bandwidth in order to ensure that no information is lost. In applications involving wideband signals, the Nyquist/Shannon sampling rate is very high, even though the signals may have a simple underlying structure. Recent developments in mathematics and signal processing have uncovered a solution to this Nyquist/Shannon sampling rate bottleneck for signals that are sparse or compressible in some representation. We demonstrate and reduce to practice methods to extract information directly from an analog or digital signal based on altering our notion of sampling to replace uniform time samples with more general linear functionals. One embodiment of our invention is a low-rate analog-to-information converter that can replace the high-rate analog-to-digital converter in certain applications involving wideband signals. Another embodiment is an encoding scheme for wideband discrete-time signals that condenses their information content.
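
The core idea, replacing uniform time samples with a much smaller number of general linear functionals of the signal, can be illustrated with a toy computation. The random sign measurement matrix and the sizes below are illustrative only and do not describe the patented converter architecture.

```python
# Toy illustration of the core idea: instead of d uniform time samples, record a
# much smaller number of general linear functionals <phi_k, x> of the signal.
# The random +/-1 measurement matrix and the sizes here are illustrative only,
# not the patented converter architecture.
import numpy as np

rng = np.random.default_rng(5)
d, n = 1024, 80                                   # Nyquist-rate length vs. measurements kept
t = np.arange(d)
x = np.cos(2 * np.pi * 37 * t / d) + 0.5 * np.cos(2 * np.pi * 181 * t / d)  # simple tone mix

uniform_samples = x                               # conventional acquisition: all d samples
Phi = rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(d)
linear_functionals = Phi @ x                      # "analog-to-information": n << d inner products

print(f"conventional: {uniform_samples.size} samples, generalized: {linear_functionals.size} measurements")
```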

111 citations


Posted Content
TL;DR: In this paper, the authors present a new proof of an important result due to Bourgain and Tzafriri that provides a partial solution to the Kadison-Singer problem.
Abstract: This note presents a new proof of an important result due to Bourgain and Tzafriri that provides a partial solution to the Kadison--Singer problem. The result shows that every unit-norm matrix whose entries are relatively small in comparison with its dimension can be paved by a partition of constant size. That is, the coordinates can be partitioned into a constant number of blocks so that the restriction of the matrix to each block of coordinates has norm less than one half. The original proof of Bourgain and Tzafriri involves a long, delicate calculation. The new proof relies on the systematic use of symmetrization and (noncommutative) Khintchine inequalities to estimate the norms of some random matrices.
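
In symbols, the paving statement reads approximately as follows (a paraphrase; the exact entry bound and constants are omitted), where $P_\sigma$ denotes the coordinate projection onto a block $\sigma$ and the number of blocks $k$ depends only on the entry bound, not on the dimension $n$.

```latex
% Paraphrase of the paving statement; exact entry bound and constants omitted.
% A is an n x n matrix with norm at most one and uniformly small entries.
\[
  \{1,\dots,n\} = \sigma_1 \cup \cdots \cup \sigma_k
  \quad\text{with}\quad
  \|P_{\sigma_j} A\, P_{\sigma_j}\| \le \tfrac{1}{2}
  \quad\text{for } j = 1,\dots,k,
  \quad k \text{ independent of } n.
\]
```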

33 citations


Proceedings ArticleDOI
22 Mar 2006
TL;DR: Experiments show that random filtering is effective at acquiring sparse and compressible signals and has the potential for implementation in analog hardware, and so it may have a role to play in new types of analog/digital converters.
Abstract: This paper discusses random filtering, a recently proposed method for directly acquiring a compressed version of a digital signal. The technique is based on convolution of the signal with a fixed FIR filter having random taps, followed by downsampling. Experiments show that random filtering is effective at acquiring sparse and compressible signals. This process has the potential for implementation in analog hardware, and so it may have a role to play in new types of analog/digital converters.

Proceedings ArticleDOI
14 May 2006
TL;DR: This paper presents a specific row-action method and provides extensive empirical evidence that it is an effective technique for signal reconstruction and offers several advantages over interior-point methods, including minimal storage and computational requirements, scalability, and robustness.
Abstract: Compressed Sensing uses a small number of random, linear measurements to acquire a sparse signal. Nonlinear algorithms, such as l_1 minimization, are used to reconstruct the signal from the measured data. This paper proposes row-action methods as a computational approach to solving the l_1 optimization problem. This paper presents a specific row-action method and provides extensive empirical evidence that it is an effective technique for signal reconstruction. This approach offers several advantages over interior-point methods, including minimal storage and computational requirements, scalability, and robustness.
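
The abstract does not spell out the specific row-action method, so the sketch below only illustrates the general row-action computational pattern it refers to, namely visiting one measurement row at a time with minimal working storage, using the classical Kaczmarz update for the constraints Phi x = s. It is not the algorithm proposed in the paper.

```python
# Generic illustration of the row-action computational pattern (classical Kaczmarz
# updates, one measurement row at a time, O(d) working storage). This is NOT the
# specific method proposed in the paper, whose details are not given in the abstract.
import numpy as np

rng = np.random.default_rng(6)
n, d = 80, 256
Phi = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[rng.choice(d, 8, replace=False)] = rng.standard_normal(8)
s = Phi @ x_true                               # consistent measurements

x = np.zeros(d)
for sweep in range(50):
    for i in range(n):                         # visit one row (one measurement) at a time
        row = Phi[i]
        x += (s[i] - row @ x) / (row @ row) * row
print("residual after row-action sweeps:", np.linalg.norm(Phi @ x - s))
```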

Proceedings ArticleDOI
04 May 2006
TL;DR: A new method, called Chaining Pursuit, is developed for sketching both m-sparse and compressible signals with O(m polylog d) nonadaptive linear measurements and reconstructing them with an error proportional to the optimal m-term approximation error.
Abstract: It has recently been observed that sparse and compressible signals can be sketched using very few nonadaptive linear measurements in comparison with the length of the signal. This sketch can be viewed as an embedding of an entire class of compressible signals into a low-dimensional space. In particular, d-dimensional signals with m nonzero entries (m-sparse signals) can be embedded in O(m log d) dimensions. To date, most algorithms for approximating or reconstructing the signal from the sketch, such as the linear programming approach proposed by Candes-Tao and Donoho, require time polynomial in the signal length. This paper develops a new method, called Chaining Pursuit, for sketching both m-sparse and compressible signals with O(m polylog d) nonadaptive linear measurements. The algorithm can reconstruct the original signal in time O(m polylog d) with an error proportional to the optimal m-term approximation error. In particular, m-sparse signals are recovered perfectly and compressible signals are recovered with polylogarithmic distortion. Moreover, the algorithm can operate in small space O(m polylog d), so it is appropriate for streaming data.
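
Collected in one place, the resource bounds quoted above for Chaining Pursuit read as follows; the polylogarithmic factors are left unspecified, as in the abstract.

```latex
% Resource bounds for Chaining Pursuit as quoted in the abstract
% (polylogarithmic factors and the error norm are not specified here).
\[
  \text{measurements} = O(m\,\mathrm{polylog}\,d),
  \qquad
  \text{reconstruction time} = O(m\,\mathrm{polylog}\,d),
  \qquad
  \text{working space} = O(m\,\mathrm{polylog}\,d).
\]
```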