Open Access · Journal Article · DOI

One-Bit Compressed Sensing by Linear Programming

Yaniv Plan, Roman Vershynin
01 Aug 2013 · Vol. 66, Iss. 8, pp. 1275-1297 · arXiv:1109.4299v5 [cs.IT]
TL;DR: In this paper, the authors give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R^n from the signs of O(s log^2(n/s)) random linear measurements of x.
Abstract
We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R^n from the signs of O(s log^2(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.

1. Introduction

Compressed sensing is a modern paradigm of data acquisition which is having an impact on several disciplines; see [21]. The scientist has access to a measurement vector v ∈ R^m obtained as

(1.1) v = Ax,

where A is a given m × n measurement matrix and x ∈ R^n is an unknown signal that one needs to recover from v. One would like to take m ≪ n, rendering A non-invertible; the key ingredient to successful recovery of x is to take into account its assumed structure: sparsity. Thus one assumes that x has at most s nonzero entries, although the support pattern is unknown.

The strongest known results are for random measurement matrices A. In particular, if A has i.i.d. Gaussian entries, then we may take m = O(s log(n/s)) and still recover x exactly with high probability [10, 7]; see [26] for an overview. Furthermore, this recovery may be achieved in polynomial time by solving the convex minimization program

(1.2) min ‖x′‖₁ subject to Ax′ = v.

Stability results are also available when noise is added to the problem [9, 8, 3, 27].

However, while the focus of compressed sensing is signal recovery with minimal information, the classical set-up (1.1), (1.2) assumes infinite bit precision of the measurements. This disaccord raises an important question: how many bits per measurement (i.e. per coordinate of v) are sufficient for tractable and accurate sparse recovery? This paper shows that one bit per measurement is enough. There are many applications where such severe quantization may be inherent or preferred: analog-to-digital conversion [20, 18], binomial regression in statistical modeling, and threshold group testing [12], to name a few.

1.1. Main results. This paper demonstrates that a simple modification of the convex program (1.2) is able to accurately estimate x from the extremely quantized measurement vector

y = sign(Ax).

Date: September 19, 2011. 2000 Mathematics Subject Classification: 94A12; 60D05; 90C25. Y.P. is supported by an NSF Postdoctoral Research Fellowship under award No. 1103909. R.V. is supported by NSF grants DMS 0918623 and 1001829.
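To make the linear-programming recovery concrete, here is a minimal numerical sketch of a one-bit recovery program of the kind the paper describes: minimize ‖x′‖₁ subject to the sign constraints y_i⟨a_i, x′⟩ ≥ 0 and a linear normalization Σ_i y_i⟨a_i, x′⟩ = m that replaces the scale information lost in the signs. The problem sizes, the SciPy solver, and the final rescaling below are illustrative assumptions for this sketch, not the authors' reference implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sizes (hypothetical, not from the paper).
n, s, m = 200, 5, 1000
rng = np.random.default_rng(0)

# s-sparse unit-norm signal; one-bit measurements carry no scale
# information, so only the direction x / ||x||_2 can be recovered.
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))   # i.i.d. Gaussian measurement matrix
y = np.sign(A @ x)                # one bit per measurement (zero w.p. 0)

# LP: min ||x'||_1  s.t.  y_i <a_i, x'> >= 0 for all i,
#                         sum_i y_i <a_i, x'> = m.
# Split x' = u - w with u, w >= 0 so ||x'||_1 becomes the linear
# objective 1'(u + w).
D = y[:, None] * A                # rows D_i = y_i * a_i; need D x' >= 0
c = np.ones(2 * n)
A_ub = np.hstack([-D, D])         # -D u + D w <= 0  <=>  D (u - w) >= 0
b_ub = np.zeros(m)
A_eq = np.concatenate([D.sum(axis=0), -D.sum(axis=0)])[None, :]
b_eq = np.array([float(m)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
x_hat /= np.linalg.norm(x_hat)    # compare directions only
print("L2 error of direction estimate:", np.linalg.norm(x_hat - x))
```

With m on the order of s log^2(n/s) one-bit measurements, the direction estimate x_hat should be close to x; the equality constraint pins down the scale that the signs alone cannot determine.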


Citations
Posted Content

A Max-Norm Constrained Minimization Approach to 1-Bit Matrix Completion

TL;DR: In this paper, the problem of noisy 1-bit matrix completion under a general non-uniform sampling distribution is considered, using the max-norm as a convex relaxation for the rank.
Journal Article · DOI

High-dimensional estimation with geometric constraints

TL;DR: A general model is considered in which it is only assumed that each observation y_i may depend on a_i only through ⟨a_i, x⟩, which leads to the intriguing conclusion that in the high-noise regime, an unknown non-linearity in the observations does not significantly reduce one's ability to determine the signal, even when the non-linearity may be non-invertible.
Posted Content

Quantized Iterative Hard Thresholding: Bridging 1-bit and High-Resolution Quantized Compressed Sensing

TL;DR: It is shown that reconstructing a sparse signal from quantized compressive measurements can be achieved in a unified formalism regardless of the (scalar) quantization resolution, from 1-bit to high-resolution quantization.
Journal Article · DOI

Unknown Sparsity in Compressed Sensing: Denoising and Inference

TL;DR: In this paper, a new deconvolution-based method for estimating unknown sparsity is proposed, which has wider applicability and sharper theoretical guarantees than the original sparsity measure.
Journal Article · DOI

Online Censoring for Large-Scale Regressions with Application to Streaming Big Data

TL;DR: This work introduces means of identifying and omitting less informative observations in an online and data-adaptive fashion; joint censoring and estimation is put forth to solve large-scale linear regressions in a centralized setup.
References
Journal Article · DOI

Stable signal recovery from incomplete and inaccurate measurements

TL;DR: In this paper, the authors consider the problem of recovering a vector x ∈ R^m from incomplete and contaminated observations y = Ax + e, where e is an error term.
Journal Article · DOI

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Journal Article · DOI

Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming

TL;DR: This algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.
Journal Article · DOI

The Dantzig selector: Statistical estimation when p is much larger than n

TL;DR: In many important statistical applications, the number of variables or parameters p is much larger than the total number of observations n; it is nevertheless possible to estimate β reliably based on the noisy data y.
Book Chapter · DOI

Introduction to the non-asymptotic analysis of random matrices.

TL;DR: This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing.