Open Access Journal Article

One-Bit Compressed Sensing by Linear Programming

Yaniv Plan, Roman Vershynin
- 01 Aug 2013
- Communications on Pure and Applied Mathematics, Vol. 66, Iss. 8, pp. 1275-1297
TLDR
In this paper, the authors give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R^n from the signs of O(s log²(n/s)) random linear measurements of x.
Abstract
We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R^n from the signs of O(s log²(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.

(arXiv:1109.4299v5 [cs.IT], 16 Mar 2012)

1. Introduction

Compressed sensing is a modern paradigm of data acquisition which is having an impact on several disciplines; see [21]. The scientist has access to a measurement vector v ∈ R^m obtained as

v = Ax,   (1.1)

where A is a given m × n measurement matrix and x ∈ R^n is an unknown signal that one needs to recover from v. One would like to take m ≪ n, rendering A non-invertible; the key ingredient to successful recovery of x is to take into account its assumed structure, namely sparsity. Thus one assumes that x has at most s nonzero entries, although the support pattern is unknown. The strongest known results are for random measurement matrices A. In particular, if A has i.i.d. Gaussian entries, then we may take m = O(s log(n/s)) and still recover x exactly with high probability [10, 7]; see [26] for an overview. Furthermore, this recovery may be achieved in polynomial time by solving the convex minimization program

min ‖x′‖₁ subject to Ax′ = v.   (1.2)

Stability results are also available when noise is added to the problem [9, 8, 3, 27].

However, while the focus of compressed sensing is signal recovery with minimal information, the classical set-up (1.1), (1.2) assumes infinite bit precision of the measurements. This disaccord raises an important question: how many bits per measurement (i.e. per coordinate of v) are sufficient for tractable and accurate sparse recovery? This paper shows that one bit per measurement is enough. There are many applications where such severe quantization may be inherent or preferred: analog-to-digital conversion [20, 18], binomial regression in statistical modeling, and threshold group testing [12], to name a few.

1.1. Main results. This paper demonstrates that a simple modification of the convex program (1.2) is able to accurately estimate x from the extremely quantized measurement vector

y = sign(Ax).

Date: September 19, 2011.
2000 Mathematics Subject Classification: 94A12; 60D05; 90C25.
Y.P. is supported by an NSF Postdoctoral Research Fellowship under award No. 1103909. R.V. is supported by NSF grants DMS 0918623 and 1001829.
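To make the one-bit recovery concrete, below is a minimal numerical sketch in Python. The text above only says the recovery is "a simple linear program"; the specific constraints used here (sign consistency yᵢ⟨aᵢ, x′⟩ ≥ 0 together with the normalization Σᵢ yᵢ⟨aᵢ, x′⟩ = m) are an assumed formulation consistent with that description, not a quotation of the paper's program, and the scipy-based solver, problem sizes, and variable names are all illustrative.

```python
# Sketch of one-bit compressed sensing via linear programming.
# Assumed LP (hypothetical formulation, see lead-in):
#   minimize ||x'||_1  subject to  y_i <a_i, x'> >= 0  and  sum_i y_i <a_i, x'> = m.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 100, 800, 5              # ambient dimension, measurements, sparsity

# Draw an s-sparse signal x on the unit sphere.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))    # Gaussian measurement matrix
y = np.sign(A @ x)                 # one-bit measurements: signs only

# Standard-form LP with the split x' = u - v, u, v >= 0, so ||x'||_1 = sum(u) + sum(v).
YA = y[:, None] * A                # rows are y_i * a_i
c = np.ones(2 * n)                 # l1 objective
A_ub = np.hstack([-YA, YA])        # -y_i <a_i, u - v> <= 0  (sign consistency)
b_ub = np.zeros(m)
A_eq = np.hstack([YA.sum(axis=0), -YA.sum(axis=0)])[None, :]
b_eq = np.array([float(m)])        # sum_i y_i <a_i, x'> = m  (fixes the scale)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (2 * n), method="highs")
assert res.success, res.message

x_hat = res.x[:n] - res.x[n:]
x_hat /= np.linalg.norm(x_hat)     # signs determine x only up to positive scale
print("recovery error:", np.linalg.norm(x - x_hat))
```

Since sign(Ax) is invariant under positive rescaling of x, one-bit measurements can determine x only up to magnitude; this is why the LP needs an explicit normalization constraint and why both the true signal and the LP output are normalized before comparison.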



Citations
Posted Content

A Simple Analysis for Exp-concave Empirical Minimization with Arbitrary Convex Regularizer

TL;DR: A simple analysis establishes fast rates for stochastic composite optimization over a finite-dimensional bounded convex set with exp-concave loss functions and an arbitrary convex regularizer, yielding a fast rate for empirical risk minimization.
Journal Article

One-Bit Radar Imaging Via Adaptive Binary Iterative Hard Thresholding

TL;DR: In this article, the authors propose an adaptive binary iterative hard thresholding (adaptive-BIHT) algorithm for one-bit radar imaging, which provides superior imaging performance with suppressed artifacts compared with the conventional BIHT method.
Journal Article

Approximate Message Passing With Parameter Estimation for Heavily Quantized Measurements

TL;DR: In this article, the authors take advantage of the approximate message passing (AMP) framework, chosen for its high computational efficiency and state-of-the-art performance, to recover signals and estimate unknown parameters from heavily quantized measurements.
Journal Article

AdaBoost and robust one-bit compressed sensing

TL;DR: In this article, the authors consider binary classification in robust one-bit compressed sensing with adversarial errors and derive prediction error bounds through its relation to the max ℓ₁-margin classifier.
Journal Article

Error bounds of block sparse signal recovery based on q-ratio block constrained minimal singular values

TL;DR: In this paper, the q-ratio block constrained minimal singular value (BCMSV) is introduced as a new measure of the measurement matrix in compressive sensing of block-sparse/compressible signals, and an algorithm for computing this measure is presented.
References
Journal Article

Stable signal recovery from incomplete and inaccurate measurements

TL;DR: In this paper, the authors considered the problem of recovering a vector x ∈ R^m from incomplete and contaminated observations y = Ax + e, where e is an error term.
Journal Article

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Journal Article

Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming

TL;DR: This algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.
Journal Article

The Dantzig selector: Statistical estimation when p is much larger than n

TL;DR: In many important statistical applications, the number of variables or parameters p is much larger than the total number of observations n; the authors show that it is nevertheless possible to estimate β reliably based on the noisy data y.
Book Chapter

Introduction to the non-asymptotic analysis of random matrices.

TL;DR: This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing.