Open Access · Journal Article

One-Bit Compressed Sensing by Linear Programming

Yaniv Plan, Roman Vershynin
01 Aug 2013 · Vol. 66, Iss. 8, pp. 1275-1297
TLDR
In this paper, the authors give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R^n from the signs of O(s log^2(n/s)) random linear measurements of x.
Abstract
arXiv:1109.4299v5 [cs.IT], 16 Mar 2012

Abstract. We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R^n from the signs of O(s log^2(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.

1. Introduction

Compressed sensing is a modern paradigm of data acquisition which is having an impact on several disciplines; see [21]. The scientist has access to a measurement vector v ∈ R^m obtained as

    v = Ax,    (1.1)

where A is a given m × n measurement matrix and x ∈ R^n is an unknown signal that one needs to recover from v. One would like to take m ≪ n, rendering A non-invertible; the key ingredient to successful recovery of x is to take into account its assumed structure: sparsity. Thus one assumes that x has at most s nonzero entries, although the support pattern is unknown.

The strongest known results are for random measurement matrices A. In particular, if A has i.i.d. Gaussian entries, then we may take m = O(s log(n/s)) and still recover x exactly with high probability [10, 7]; see [26] for an overview. Furthermore, this recovery may be achieved in polynomial time by solving the convex minimization program

    min ‖x′‖_1 subject to Ax′ = v.    (1.2)

Stability results are also available when noise is added to the problem [9, 8, 3, 27].

However, while the focus of compressed sensing is signal recovery with minimal information, the classical set-up (1.1), (1.2) assumes infinite bit precision of the measurements. This disaccord raises an important question: how many bits per measurement (i.e. per coordinate of v) are sufficient for tractable and accurate sparse recovery? This paper shows that one bit per measurement is enough. There are many applications where such severe quantization may be inherent or preferred: analog-to-digital conversion [20, 18], binomial regression in statistical modeling, and threshold group testing [12], to name a few.

1.1. Main results. This paper demonstrates that a simple modification of the convex program (1.2) is able to accurately estimate x from the extremely quantized measurement vector y = sign(Ax).

Date: September 19, 2011.
2000 Mathematics Subject Classification: 94A12; 60D05; 90C25.
Y.P. is supported by an NSF Postdoctoral Research Fellowship under award No. 1103909. R.V. is supported by NSF grants DMS 0918623 and 1001829.
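To make the recovery program concrete, here is a minimal sketch in Python of ℓ1 minimization under one-bit sign constraints, in the spirit of the modification of (1.2) described above. The particular scale-fixing constraint (Σ_i y_i⟨a_i, x′⟩ = m, which rules out the trivial solution x′ = 0), the SciPy solver, and all names and sizes are this sketch's assumptions, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def one_bit_recover(A, y):
    """Sketch: recover the direction of a sparse x from y = sign(A @ x).

    Solves the linear program
        min ||x'||_1  s.t.  y_i * <a_i, x'> >= 0 for all i,
                            sum_i y_i * <a_i, x'> = m,
    using variables z = [x; t] with |x_j| <= t_j, so ||x||_1 = sum(t).
    """
    m, n = A.shape
    Ay = y[:, None] * A                              # rows of A scaled by the signs y_i

    c = np.concatenate([np.zeros(n), np.ones(n)])    # objective: minimize sum(t)

    I = np.eye(n)
    Z = np.zeros((m, n))
    # Inequalities A_ub @ z <= 0:
    #   x - t <= 0 and -x - t <= 0  encode |x_j| <= t_j;
    #   -y_i * <a_i, x> <= 0        encodes sign consistency with y.
    A_ub = np.block([[I, -I], [-I, -I], [-Ay, Z]])
    b_ub = np.zeros(2 * n + m)

    # Equality sum_i y_i * <a_i, x> = m fixes the scale (rules out x = 0).
    A_eq = np.concatenate([Ay.sum(axis=0), np.zeros(n)])[None, :]
    b_eq = np.array([float(m)])

    bounds = [(None, None)] * n + [(0, None)] * n    # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    assert res.success, res.message
    x_hat = res.x[:n]
    return x_hat / np.linalg.norm(x_hat)

# Hypothetical usage on synthetic data:
rng = np.random.default_rng(0)
n, m, s = 128, 600, 5
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))
x_hat = one_bit_recover(A, np.sign(A @ x))
print("recovery error:", np.linalg.norm(x - x_hat))
```

Because one-bit measurements erase all scale information, only the direction of x is identifiable; the sketch therefore returns a unit-norm estimate.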



Citations
Proceedings Article

One-Bit compressive sampling with time-varying thresholds for multiple sinusoids

TL;DR: A novel one-bit RELAX algorithm is developed for multi-tone parameter estimation and is shown to have excellent performance via a numerical example.
Journal Article

Sparse signal recovery from one-bit quantized data: An iterative reweighted algorithm

TL;DR: A log-sum penalty function, also referred to as the Gaussian entropy, is employed to encourage sparsity in the algorithm development, and the logistic function is introduced to quantify the consistency between the measured one-bit quantized data and the reconstructed signal.
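As a rough illustration of the approach this summary describes (not the authors' algorithm), one can majorize the log-sum penalty by a reweighted quadratic and take gradient steps on a logistic one-bit consistency loss. All hyperparameters, names, and the specific majorization below are illustrative assumptions.

```python
import numpy as np

def reweighted_one_bit(A, y, lam=0.05, eps=1e-3, lr=0.05, outer=30, inner=50):
    """Sketch: minimize logistic one-bit consistency loss plus a log-sum
    (Gaussian-entropy) sparsity penalty by iterative reweighting.

    Outer loop: majorize sum_j log(x_j^2 + eps) at the current iterate by
    the weighted quadratic sum_j w_j * x_j^2 with w_j = 1 / (x_j^2 + eps).
    Inner loop: gradient steps on logistic loss + weighted ridge penalty.
    """
    m, n = A.shape
    x = 0.01 * np.ones(n)                            # small nonzero start
    for _ in range(outer):
        w = 1.0 / (x ** 2 + eps)                     # reweighting step
        for _ in range(inner):
            margins = np.clip(y * (A @ x), -30.0, 30.0)
            sigma = 1.0 / (1.0 + np.exp(margins))    # = sigmoid(-margin)
            grad = -(A * (sigma * y)[:, None]).sum(axis=0) + 2.0 * lam * w * x
            x = x - lr * grad
    return x / max(np.linalg.norm(x), 1e-12)         # one-bit data fixes only direction
```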
Journal Article

Fast Algorithms for Demixing Sparse Signals From Nonlinear Observations

TL;DR: This paper provides fast algorithms, with rigorous theoretical analysis, for recovering the constituents of a pair of sparse signals from noisy, nonlinear observations of their superposition, and derives (nearly) tight upper bounds on the sample complexity of the proposed algorithms.
Journal Article

Sparse Proteomics Analysis – a compressed sensing-based approach for feature selection and classification of high-dimensional proteomics mass spectrometry data

TL;DR: This work presents a new algorithm, Sparse Proteomics Analysis (SPA), based on the theory of compressed sensing, that extracts a minimal discriminating set of features from mass spectrometry data sets, and shows that its performance is competitive with standard algorithms for analyzing proteomics data.
Journal Article

Model-Based Deep Learning for One-Bit Compressive Sensing

TL;DR: This work develops hybrid model-based deep learning architectures, based on the deep unfolding methodology, that can adaptively learn the proper quantization thresholds, paving the way for amplitude recovery in one-bit compressive sensing.
References
Journal Article

Stable signal recovery from incomplete and inaccurate measurements

TL;DR: In this paper, the authors considered the problem of recovering a vector x ∈ R^m from incomplete and contaminated observations y = Ax + e, where e is an error term.
Journal Article

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
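The "simple linear program" in this summary is classically ℓ1 minimization (basis pursuit), i.e. the program (1.2) quoted in the abstract above. Below is a minimal sketch using the same |x_j| ≤ t_j reformulation as the one-bit example earlier; the SciPy usage and names are this sketch's assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, v):
    """Sketch: min ||x||_1 subject to A @ x = v, posed as a linear program
    over z = [x; t] with |x_j| <= t_j, so ||x||_1 = sum(t)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])    # minimize sum(t)
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])             # x - t <= 0, -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])          # keep measurements exactly
    bounds = [(None, None)] * n + [(0, None)] * n    # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=v, bounds=bounds)
    return res.x[:n]
```

Unlike the one-bit program, the measurements are retained at full precision here, so both the direction and the scale of x are recovered.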
Journal Article

Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming

TL;DR: This algorithm represents the first substantial progress in approximating MAX CUT in nearly twenty years, and the first use of semidefinite programming in the design of approximation algorithms.
Journal Article

The Dantzig selector: Statistical estimation when p is much larger than n

TL;DR: In many important statistical applications, the number of variables or parameters p is much larger than the total number of observations n; the authors show that it is nevertheless possible to estimate β reliably from the noisy data y.
Book Chapter

Introduction to the non-asymptotic analysis of random matrices.

TL;DR: This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing.