Open Access · Journal Article · DOI

One-Bit Compressed Sensing by Linear Programming

Yaniv Plan, Roman Vershynin
01 Aug 2013 · Vol. 66, Iss. 8, pp. 1275-1297
TLDR
In this paper, the authors give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R^n from the signs of O(s log²(n/s)) random linear measurements of x.
Abstract
We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R^n from the signs of O(s log²(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that, with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.

1. Introduction

Compressed sensing is a modern paradigm of data acquisition which is having an impact on several disciplines; see [21]. The scientist has access to a measurement vector v ∈ R^m obtained as

(1.1)  v = Ax,

where A is a given m × n measurement matrix and x ∈ R^n is an unknown signal that one needs to recover from v. One would like to take m ≪ n, rendering A non-invertible; the key ingredient to successful recovery of x is to take into account its assumed structure: sparsity. Thus one assumes that x has at most s nonzero entries, although the support pattern is unknown.

The strongest known results are for random measurement matrices A. In particular, if A has i.i.d. Gaussian entries, then we may take m = O(s log(n/s)) and still recover x exactly with high probability [10, 7]; see [26] for an overview. Furthermore, this recovery may be achieved in polynomial time by solving the convex minimization program

(1.2)  min ‖x′‖₁ subject to Ax′ = v.

Stability results are also available when noise is added to the problem [9, 8, 3, 27].

However, while the focus of compressed sensing is signal recovery with minimal information, the classical set-up (1.1), (1.2) assumes infinite bit precision of the measurements. This disaccord raises an important question: how many bits per measurement (i.e., per coordinate of v) are sufficient for tractable and accurate sparse recovery? This paper shows that one bit per measurement is enough. There are many applications where such severe quantization may be inherent or preferred, such as analog-to-digital conversion [20, 18], binomial regression in statistical modeling, and threshold group testing [12], to name a few.

1.1. Main results. This paper demonstrates that a simple modification of the convex program (1.2) is able to accurately estimate x from the extremely quantized measurement vector

y = sign(Ax).
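For concreteness, the following is a minimal Python sketch of recovery from one-bit measurements by a linear program of the kind described above. The specific constraint set used here (sign consistency sign(Ax′) = y together with the scale-fixing normalization Σᵢ yᵢ⟨aᵢ, x′⟩ = m) and all names and parameter values are illustrative assumptions for this sketch, not a verbatim transcription of the paper's program.

# Sketch: one-bit compressed sensing recovery via a linear program.
# Assumed formulation:  min ||x'||_1  s.t.  sign(Ax') = y  and
# sum_i y_i <a_i, x'> = m, solved with the standard splitting x' = u - w.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Problem sizes (illustrative): n-dimensional signal, s-sparse, m measurements.
n, s, m = 100, 5, 400

# An s-sparse unit-norm signal x.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# Gaussian measurement matrix and one-bit measurements y = sign(Ax).
A = rng.standard_normal((m, n))
y = np.sign(A @ x)

# Linear program over z = [u; w] with x' = u - w and u, w >= 0:
#   minimize   1^T u + 1^T w              (equals ||x'||_1 at the optimum)
#   subject to -y_i <a_i, x'> <= 0        (sign consistency with y)
#              sum_i y_i <a_i, x'> = m    (fixes the otherwise-free scale)
YA = y[:, None] * A                        # rows are y_i * a_i
c = np.ones(2 * n)
A_ub = np.hstack([-YA, YA])                # encodes -y_i <a_i, u - w> <= 0
b_ub = np.zeros(m)
A_eq = np.hstack([YA.sum(axis=0), -YA.sum(axis=0)])[None, :]
b_eq = np.array([float(m)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
assert res.success
x_hat = res.x[:n] - res.x[n:]
x_hat /= np.linalg.norm(x_hat)             # signs carry no magnitude information

print("recovery error:", np.linalg.norm(x - x_hat))

Note that one-bit measurements can only determine the direction of x, never its magnitude, which is why both the true signal and the estimate are normalized to the unit sphere before comparison.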



Citations
Journal ArticleDOI

Quantized Compressed Sensing by Rectified Linear Units

TL;DR: In this paper, a convex program based on rectified linear units (ReLUs) is proposed for two different quantization schemes, namely one-bit and uniform multi-bit quantization.
Posted Content

Deep One-bit Compressive Autoencoding

TL;DR: This paper considers the design of a one-bit compressive autoencoder and proposes a novel hybrid model-based and data-driven methodology that allows for learning the latent parameters of an iterative optimization algorithm specifically designed for the problem of one-bit sparse signal recovery.
Journal ArticleDOI

Robust 1-Bit Compressed Sensing via Hinge Loss Minimization

TL;DR: In this article, the authors considered the problem of estimating a structured high-dimensional signal from noisy 1-bit Gaussian measurements using a simple convex program that uses the hinge loss function as the data-fidelity term.
Posted Content

The recovery of complex sparse signals from few phaseless measurements.

TL;DR: In this paper, the authors study the stable recovery of complex sparse signals from as few phaseless measurements as possible and show that one can recover complex signals from complex Gaussian random quadratic measurements with high probability.
Proceedings ArticleDOI

Compressive spectrum estimation using quantized measurements

TL;DR: It is shown that under the Gaussian measurement model, the signal can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the number of frequencies and n is the signal dimension.
References
Journal ArticleDOI

Stable signal recovery from incomplete and inaccurate measurements

TL;DR: In this paper, the authors considered the problem of recovering a vector x ∈ R^m from incomplete and contaminated observations y = Ax + e, where e is an error term.
Journal ArticleDOI

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Journal ArticleDOI

Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming

TL;DR: This algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.
Journal ArticleDOI

The Dantzig selector: Statistical estimation when p is much larger than n

TL;DR: In many important statistical applications, the number of variables or parameters p is much larger than the total number of observations n; the authors show that it is nonetheless possible to estimate β reliably based on the noisy data y.
Book ChapterDOI

Introduction to the non-asymptotic analysis of random matrices.

TL;DR: This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing.