Journal ArticleDOI

Compressed Sensing Off the Grid

TL;DR: This paper investigates the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples and proposes an atomic norm minimization approach to exactly recover the unobserved samples and identify the unknown frequencies.
Abstract: This paper investigates the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples. Unlike previous work in compressed sensing, the frequencies are not assumed to lie on a grid, but can assume any values in the normalized frequency domain [0, 1]. An atomic norm minimization approach is proposed to exactly recover the unobserved samples and identify the unknown frequencies, which is then reformulated as an exact semidefinite program. Even with this continuous dictionary, it is shown that O(s log s log n) random samples are sufficient to guarantee exact frequency localization with high probability, provided the frequencies are well separated. Extensive numerical experiments are performed to illustrate the effectiveness of the proposed method.
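The observation model in the abstract is easy to reproduce numerically. The sketch below (plain NumPy; all sizes, frequencies, and coefficients are made up for demonstration) builds a mixture of s complex sinusoids, observes a random subset of the n regularly spaced samples, and checks that an oracle least-squares fit with the true frequencies recovers the signal. It illustrates the setting only; it is not the paper's atomic norm semidefinite program.

```python
import numpy as np

# Illustrative only: the observation model from the abstract, plus an oracle
# least-squares fit that assumes the true frequencies are known.
rng = np.random.default_rng(0)
n, s = 64, 3
freqs = np.array([0.10, 0.35, 0.71])               # well separated in [0, 1)
coeffs = rng.standard_normal(s) + 1j * rng.standard_normal(s)

j = np.arange(n)
atoms = np.exp(2j * np.pi * j[:, None] * freqs[None, :])   # n x s Vandermonde
x = atoms @ coeffs                                 # full regularly spaced samples

obs = rng.choice(n, size=20, replace=False)        # random subset of samples
y = x[obs]

# Oracle: with the frequencies known, least squares on the observed rows
# recovers the coefficients exactly (noiseless, full-column-rank submatrix).
c_hat = np.linalg.lstsq(atoms[obs, :], y, rcond=None)[0]
x_hat = atoms @ c_hat
print(np.max(np.abs(x_hat - x)))
```

The hard part, which the paper solves, is doing this without knowing the frequencies at all.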
Citations
Journal ArticleDOI
TL;DR: The spectral CS (SCS) recovery framework for arbitrary frequency-sparse signals is introduced and it is demonstrated that SCS significantly outperforms current state-of-the-art CS algorithms based on the DFT while providing provable bounds on the number of measurements required for stable recovery.

452 citations


Cites background from "Compressed Sensing Off the Grid"

  • ...A very recent contribution in the direction of optimization-based approaches shows promising results [73]....


Journal ArticleDOI
Junho Lee, Gye-Tae Gil, Yong Hoon Lee
TL;DR: An efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency beamformers with large antenna arrays followed by a baseband MIMO processor is proposed.
Abstract: We propose an efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency (RF) beamformers with large antenna arrays followed by a baseband MIMO processor. A sparse signal recovery problem exploiting the sparse nature of mm-wave channels is formulated for channel estimation based on the parametric channel model with quantized angles of departures/arrivals (AoDs/AoAs), called the angle grids. The problem is solved by the orthogonal matching pursuit (OMP) algorithm employing a redundant dictionary consisting of array response vectors with finely quantized angle grids. We suggest the use of non-uniformly quantized angle grids and show that such grids reduce the coherence of the redundant dictionary. The lower and upper bounds of the sum-of-squared errors of the proposed OMP-based estimator are derived analytically: the lower bound is derived by considering the oracle estimator that assumes the knowledge of AoDs/AoAs, and the upper bound is derived based on the results of the OMP performance guarantees. The design of training vectors (or sensing matrix) is particularly important in hybrid MIMO systems, because the RF beamformer prevents the use of independent and identically distributed random training vectors, which are popular in compressed sensing. We design training vectors so that the total coherence of the equivalent sensing matrix is minimized for a given RF beamforming matrix, which is assumed to be unitary. It is observed that the estimation accuracy can be improved significantly by randomly permuting the columns of the RF beamforming matrix. The simulation results demonstrate the advantage of the proposed OMP with a redundant dictionary over the existing methods such as the least squares method and the OMP based on the virtual channel model.
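The recovery step in the abstract is standard orthogonal matching pursuit. A minimal generic OMP sketch, using a random Gaussian dictionary rather than the paper's array-response dictionary with quantized angle grids (sizes and sparsity here are arbitrary):

```python
import numpy as np

# Minimal generic OMP: greedily pick the dictionary column most correlated
# with the residual, re-fit all selected columns by least squares, repeat.
def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(A.conj().T @ residual)
        corr[support] = 0.0                        # never re-pick a column
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=A.dtype)
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)                     # unit-norm atoms
x0 = np.zeros(80)
x0[[5, 23, 61]] = [1.5, -2.0, 1.0]                 # 3-sparse ground truth
y = A @ x0
x_hat, found = omp(A, y, k=3)
print(found)
```

The paper's contribution lies elsewhere: choosing non-uniform angle grids to lower dictionary coherence, and designing training vectors compatible with the RF beamformer.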

447 citations

Journal ArticleDOI
TL;DR: A novel virtual array interpolation-based algorithm is proposed for coprime array direction-of-arrival (DOA) estimation; under the Hermitian positive semi-definite Toeplitz condition, an atomic norm minimization problem with respect to the equivalent virtual measurement vector is formulated.
Abstract: Coprime arrays can achieve an increased number of degrees of freedom by deriving the equivalent signals of a virtual array. However, most existing methods fail to utilize all information received by the coprime array due to the non-uniformity of the derived virtual array, resulting in an inevitable estimation performance loss. To address this issue, we propose a novel virtual array interpolation-based algorithm for coprime array direction-of-arrival (DOA) estimation in this paper. The idea of array interpolation is employed to construct a virtual uniform linear array such that all virtual sensors in the non-uniform virtual array can be utilized, based on which the atomic norm of the second-order virtual array signals is defined. By investigating the properties of virtual domain atomic norm, it is proved that the covariance matrix of the interpolated virtual array is related to the virtual measurements under the Hermitian positive semi-definite Toeplitz condition. Accordingly, an atomic norm minimization problem with respect to the equivalent virtual measurement vector is formulated to reconstruct the interpolated virtual array covariance matrix in a gridless manner, where the reconstructed covariance matrix enables off-grid DOA estimation. Simulation results demonstrate the performance advantages of the proposed DOA estimation algorithm for coprime arrays.
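The covariance structure invoked above can be checked numerically: for a uniform linear array, D diag(c) D^H with a Vandermonde D and nonnegative powers c_k is Hermitian, positive semi-definite, and Toeplitz. A small NumPy check with arbitrary example angles and powers:

```python
import numpy as np

# Numerical check: R = D diag(c) D^H with Vandermonde steering matrix D and
# c_k >= 0 is Hermitian, PSD, and Toeplitz. Angles/powers are arbitrary.
L = 8
thetas = np.deg2rad([-20.0, 10.0, 45.0])
c = np.array([1.0, 2.5, 0.7])                       # nonnegative source powers
l = np.arange(L)
D = np.exp(1j * np.pi * np.outer(l, np.sin(thetas)))  # L x K Vandermonde
R = D @ np.diag(c) @ D.conj().T

assert np.allclose(R, R.conj().T)                   # Hermitian
assert np.min(np.linalg.eigvalsh(R)) > -1e-8        # positive semi-definite
for k in range(L):                                  # Toeplitz: constant diagonals
    d = np.diag(R, k)
    assert np.allclose(d, d[0])
print("structure verified")
```

This is exactly the structure that lets the interpolated virtual covariance matrix be reconstructed gridlessly.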

394 citations


Cites background from "Compressed Sensing Off the Grid"

  • ...We would like to emphasize the striking differences between the virtual domain atoms and those in the physical domain [32]–[34]....


  • ...where D ∈ C^(L×K) is a Vandermonde matrix, and C = diag(c_k) with c_k ≥ 0 [32]....


  • ...Different from the atomic norm of the physical array received signals [32]–[34], the derived equivalent virtual array signals belong to the second-order statistics, which means that only the real-valued signal power is available rather than the complex-valued signal waveforms....


Journal ArticleDOI
TL;DR: This tutorial-style overview highlights the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees and reviews two contrasting approaches: two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and global landscape analysis and initialization-free algorithms.
Abstract: Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, and robust principal component analysis. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
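One of the canonical examples above, factorizing a low-rank PSD matrix by plain gradient descent, fits in a few lines. A toy sketch (the step size, iteration count, and initialization scale are ad hoc choices for this size, not the tuned two-stage procedures the overview analyzes):

```python
import numpy as np

# Toy instance of "simple iterative methods work": gradient descent on the
# factored objective f(X) = ||X X^T - M||_F^2 / 4 from a small random init.
rng = np.random.default_rng(2)
n, r = 20, 2
X_true = rng.standard_normal((n, r))
M = X_true @ X_true.T                              # rank-r PSD ground truth

X = 0.1 * rng.standard_normal((n, r))              # random initialization
eta = 0.005
for _ in range(5000):
    X -= eta * (X @ X.T - M) @ X                   # gradient of f at X
err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
print(err)
```

Despite the nonconvexity, this converges here: for this particular problem it is known that the factored objective has no spurious local minima, which is the kind of landscape result the overview surveys.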

369 citations

Journal ArticleDOI
TL;DR: This paper develops a novel algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion that does not require prior knowledge of the model order to recover a spectrally sparse signal from a small random subset of its n time domain samples.
Abstract: This paper explores the problem of spectral compressed sensing, which aims to recover a spectrally sparse signal from a small random subset of its n time domain samples. The signal of interest is assumed to be a superposition of r multidimensional complex sinusoids, while the underlying frequencies can assume any continuous values in the normalized frequency domain. Conventional compressed sensing paradigms suffer from the basis mismatch issue when imposing a discrete dictionary on the Fourier representation. To address this issue, we develop a novel algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion that does not require prior knowledge of the model order. The algorithm starts by arranging the data into a low-rank enhanced form exhibiting multifold Hankel structure, and then attempts recovery via nuclear norm minimization. Under mild incoherence conditions, EMaC allows perfect recovery as soon as the number of samples exceeds the order of r log^4(n), and is stable against bounded noise. Even if a constant portion of samples are corrupted with arbitrary magnitude, EMaC still allows exact recovery, provided that the sample complexity exceeds the order of r^2 log^3(n). Along the way, our results demonstrate the power of convex relaxation in completing a low-rank multifold Hankel or Toeplitz matrix from minimal observed entries. The performance of our algorithm and its applicability to super resolution are further validated by numerical experiments.
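The "enhanced form" rests on a classical fact: time samples of r complex sinusoids arranged into a Hankel matrix have rank exactly r. A one-dimensional NumPy check (the paper's construction is the multifold generalization; frequencies and sizes below are arbitrary):

```python
import numpy as np

# Samples of r complex sinusoids, arranged into a Hankel matrix, give
# (numerical) rank exactly r.
n, r = 32, 3
freqs = np.array([0.12, 0.40, 0.83])
t = np.arange(n)
x = np.exp(2j * np.pi * np.outer(t, freqs)).sum(axis=1)   # unit amplitudes

p = n // 2                                          # pencil parameter
H = np.array([x[i:i + n - p + 1] for i in range(p)])      # p x (n-p+1) Hankel
sv = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(sv > 1e-8 * sv[0]))
print(rank)   # 3
```

EMaC exploits this by completing the partially observed Hankel matrix under a nuclear norm objective, so the model order r never has to be supplied.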

253 citations

References
Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1.

18,609 citations

Journal ArticleDOI
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ_1 minimization problem.
Abstract: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ)δ(t − τ) obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ_1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(−M)) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.
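The ℓ_1 minimization in the abstract can be approximated without an LP solver by iterative soft thresholding (ISTA); that solver substitution, and the penalty, step size, and iteration count, are ad hoc choices for illustration. With a small penalty the minimizer is slightly biased, so the check only asks whether the spike locations are identified:

```python
import numpy as np

# l1 spike recovery from partial Fourier data via ISTA. Rows of the unitary
# DFT matrix are orthonormal, so a gradient step size of 1 is safe.
rng = np.random.default_rng(3)
N = 64
support = [10, 25, 50]
f = np.zeros(N)
f[support] = [2.0, -1.5, 1.0]                       # spike amplitudes

omega = rng.choice(N, size=40, replace=False)       # observed frequencies
t = np.arange(N)
A = np.exp(-2j * np.pi * np.outer(omega, t) / N) / np.sqrt(N)
y = A @ f

def soft(z, thr):
    mag = np.maximum(np.abs(z), 1e-30)              # complex soft threshold
    return z * np.maximum(1.0 - thr / mag, 0.0)

x = np.zeros(N, dtype=complex)
for _ in range(2000):
    x = soft(x - A.conj().T @ (A @ x - y), 0.01)

found = sorted(int(i) for i in np.argsort(np.abs(x))[-3:])
print(found)
```

With |T| = 3 spikes and |Ω| = 40 of N = 64 frequencies, the sparsity condition in the abstract is comfortably met, which is why the support is found.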

14,587 citations

Journal ArticleDOI
TL;DR: The theory of compressive sampling, also known as compressed sensing or CS, is surveyed, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
Abstract: Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.

9,686 citations


"Compressed Sensing Off the Grid" refers to methods in this paper

  • ...Keywords: Atomic norm, basis mismatch, compressed sensing, continuous dictionary, line spectral estimation, nuclear norm relaxation, Prony’s method, sparsity....


Journal ArticleDOI
TL;DR: It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.
Abstract: We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m ≥ C n^(1.2) r log n for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
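A deliberately simpler relative of the paper's convex program is iterative low-rank imputation ("hard impute"): alternately truncate to rank r and re-impose the observed entries. The paper's nuclear norm program needs no rank input; the sizes, rank, and sampling rate below are toy choices:

```python
import numpy as np

# Matrix completion by alternating projection: best rank-r approximation,
# then restore the observed entries. A simple stand-in for nuclear norm
# minimization on an easy toy instance.
rng = np.random.default_rng(4)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.6                    # ~60% of entries observed

X = np.where(mask, M, 0.0)                         # missing entries start at 0
for _ in range(500):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]                # best rank-r approximation
    X = np.where(mask, M, X)                       # enforce observed entries

err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(err)
```

With 60% of a rank-2 20×20 matrix observed, far more entries are available than degrees of freedom, so the iteration fills in the rest accurately.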

5,274 citations

Journal ArticleDOI
TL;DR: It is shown that performance profiles combine the best features of other tools for performance evaluation to create a single tool for benchmarking and comparing optimization software.
Abstract: We propose performance profiles — distribution functions for a performance metric — as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
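The profile described above is a one-liner: for solver s on problem p with cost t[p, s], form the ratio r[p, s] = t[p, s] / min over solvers of t[p, ·], and report the fraction of problems with r[p, s] ≤ τ. A sketch on made-up timing data:

```python
import numpy as np

# Performance profile: rho_s(tau) = fraction of problems solver s solves
# within a factor tau of the best solver. Timings below are invented.
t = np.array([[1.0, 2.0],      # rows: problems, columns: two solvers
              [3.0, 1.5],
              [2.0, 2.0],
              [4.0, 1.0]])
ratios = t / t.min(axis=1, keepdims=True)

def profile(tau):
    return (ratios <= tau).mean(axis=0)            # one value per solver

print(profile(1.0))   # fraction of problems each solver wins (ties count)
print(profile(2.0))
```

Reading rho_s(1) off the profile gives each solver's win rate, while the curve's height at large τ shows robustness, which is why [20] is a natural tool for the paper's algorithm comparisons.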

3,729 citations


"Compressed Sensing Off the Grid" refers to background in this paper

  • ...The performance profile proposed in [20] visually presents the performance of a set of algorithms under a variety of experimental conditions....
