Author

Martin J. Strauss

Other affiliations: AT&T Labs, Rutgers University, Alcatel-Lucent
Bio: Martin J. Strauss is an academic researcher from the University of Michigan. The author has contributed to research in topics: Approximation algorithm & Sparse approximation. The author has an h-index of 46 and has co-authored 113 publications receiving 10,513 citations. Previous affiliations of Martin J. Strauss include AT&T Labs & Rutgers University.


Papers
Book Chapter
31 May 1998
TL;DR: A definition of protocol divertibility is given that applies to arbitrary 2-party protocols and is compatible with Okamoto and Ohta's definition for interactive zero-knowledge proofs; a sufficiency criterion for divertibility is proposed that, surprisingly, generalizes to cover several protocols not normally associated with divertibility.
Abstract: First, we introduce the notion of divertibility as a protocol property as opposed to the existing notion as a language property (see Okamoto, Ohta [OO90]). We give a definition of protocol divertibility that applies to arbitrary 2-party protocols and is compatible with Okamoto and Ohta's definition in the case of interactive zero-knowledge proofs. Other important examples falling under the new definition are blind signature protocols. We propose a sufficiency criterion for divertibility that is satisfied by many existing protocols and which, surprisingly, generalizes to cover several protocols not normally associated with divertibility (e.g., Diffie-Hellman key exchange). Next, we introduce atomic proxy cryptography, in which an atomic proxy function, in conjunction with a public proxy key, converts ciphertexts (messages or signatures) for one key into ciphertexts for another. Proxy keys, once generated, may be made public and proxy functions applied in untrusted environments. We present atomic proxy functions for discrete-log-based encryption, identification, and signature schemes. It is not clear whether atomic proxy functions exist in general for all public-key cryptosystems. Finally, we discuss the relationship between divertibility and proxy cryptography.
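
As a concrete illustration, here is a minimal Python sketch of the discrete-log (ElGamal-style) atomic proxy encryption the abstract describes: a public proxy key b/a mod q re-keys ciphertexts under key a into ciphertexts under key b. The toy group parameters and function names are illustrative assumptions, not the paper's notation, and the sizes are far too small to be secure.

```python
# Toy sketch of an ElGamal-style atomic proxy transform in the spirit of
# the paper. Group parameters are insecure toy values for illustration.
import random

q = 1019                    # prime order of the subgroup (toy size)
p = 2 * q + 1               # safe prime 2039, so Z_p* has a subgroup of order q
g = 4                       # 4 = 2^2 is a quadratic residue, hence has order q

def keygen():
    a = random.randrange(1, q)
    return a, pow(g, a, p)  # secret a, public g^a

def encrypt(pub_a, m):
    # ciphertext (m * g^r, (g^a)^r); decryptable by the holder of a
    r = random.randrange(1, q)
    return (m * pow(g, r, p)) % p, pow(pub_a, r, p)

def decrypt(a, ct):
    c1, c2 = ct
    gr = pow(c2, pow(a, -1, q), p)      # c2^(1/a) = g^r
    return (c1 * pow(gr, -1, p)) % p

def proxy_key(a, b):
    # pi = b/a mod q; once published, anyone can re-key ciphertexts a -> b
    return (b * pow(a, -1, q)) % q

def proxy_transform(pi, ct):
    c1, c2 = ct
    return c1, pow(c2, pi, p)           # (g^{ar})^{b/a} = g^{br}

a, pub_a = keygen()
b, pub_b = keygen()
ct_a = encrypt(pub_a, 42)
ct_b = proxy_transform(proxy_key(a, b), ct_a)
assert decrypt(b, ct_b) == 42           # Bob decrypts Alice's re-keyed ciphertext
```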

1,533 citations

Journal Article
TL;DR: This paper proposes a greedy pursuit algorithm, called simultaneous orthogonal matching pursuit (S-OMP), for simultaneous sparse approximation, and presents numerical experiments demonstrating that a sparse model for the input signals can be identified more reliably when several input signals are available.
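
A compact NumPy sketch of the S-OMP idea from the TL;DR: at each step, select the dictionary atom with the largest summed absolute correlation across all residuals, then re-fit every signal on the chosen atoms by least squares. The interface and names here are assumptions for illustration.

```python
import numpy as np

def somp(Phi, S, k):
    """Phi: (d, N) dictionary with unit-norm columns; S: (d, K) signals;
    k: number of atoms to select. Returns (support, coefficients)."""
    residual = S.copy()
    support = []
    for _ in range(k):
        # score each atom by the l1 norm of its correlations across signals
        scores = np.abs(Phi.T @ residual).sum(axis=1)
        scores[support] = -np.inf           # never reselect a chosen atom
        support.append(int(np.argmax(scores)))
        # orthogonal projection of every signal onto the chosen atoms
        coef, *_ = np.linalg.lstsq(Phi[:, support], S, rcond=None)
        residual = S - Phi[:, support] @ coef
    return support, coef

# toy check: two signals sharing the same 3-atom support
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)
idx = [5, 80, 200]
S = Phi[:, idx] @ rng.standard_normal((3, 2))
sup, _ = somp(Phi, S, 3)
print(sorted(sup), "vs true", idx)
```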

1,422 citations

Proceedings Article
11 Sep 2001
TL;DR: This work presents general "sketch" based methods for capturing various linear projections of the data and uses them to provide pointwise and range-sum estimation of data streams.
Abstract: We present techniques for computing small space representations of massive data streams. These are inspired by traditional wavelet-based approximations that consist of specific linear projections of the underlying data. We present general "sketch" based methods for capturing various linear projections of the data and use them to provide pointwise and range-sum estimation of data streams. These methods use small amounts of space and per-item time while streaming through the data, and provide accurate representations, as our experiments with real data streams show.
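
A toy Python rendition of the sketching idea: maintain a few random ±1 linear projections of the stream and answer point queries by a median of correlations. The SHA-256-based sign function is a simplified stand-in for the small-space pseudorandom generators a real implementation would use, and the class interface is my own.

```python
import hashlib
import statistics

def sign(seed, i):
    # deterministic pseudo-random +/-1 for coordinate i under this seed
    h = hashlib.sha256(f"{seed}:{i}".encode()).digest()
    return 1 if h[0] & 1 else -1

class StreamSketch:
    def __init__(self, copies=21):
        self.seeds = range(copies)
        self.z = [0.0] * copies          # one linear projection per seed

    def update(self, i, delta):          # stream item: coordinate i changes by delta
        for s in self.seeds:
            self.z[s] += delta * sign(s, i)

    def point_estimate(self, i):
        # each copy z[s] * sign(s, i) is an unbiased estimate of x[i];
        # the median over independent copies controls the variance
        return statistics.median(self.z[s] * sign(s, i) for s in self.seeds)

sk = StreamSketch()
for i, d in [(3, 5), (7, 2), (3, 1), (9, 4)]:
    sk.update(i, d)
print(sk.point_estimate(3))              # close to the true value x[3] = 6
```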

451 citations

Proceedings Article
01 Sep 2008
TL;DR: In this article, the notion of the Restricted Isometry Property was extended from the Euclidean ℓ2 norm to the Manhattan ℓ1 norm, which is essentially equivalent to the combinatorial notion of expansion of the sparse graph underlying the measurement matrix.
Abstract: There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach utilizes geometric properties of the measurement matrix Φ. A notable example is the Restricted Isometry Property, which states that the mapping Φ preserves the Euclidean norm of sparse signals; it is known that random dense matrices satisfy this constraint with high probability. On the other hand, the combinatorial approach utilizes sparse matrices, interpreted as adjacency matrices of sparse (possibly random) graphs, and uses combinatorial techniques to recover an approximation to the signal. In this paper we present a unification of these two approaches. To this end, we extend the notion of the Restricted Isometry Property from the Euclidean ℓ2 norm to the Manhattan ℓ1 norm. Then we show that this new ℓ1-based property is essentially equivalent to the combinatorial notion of expansion of the sparse graph underlying the measurement matrix. At the same time we show that the new property suffices to guarantee correctness of both geometric and combinatorial recovery algorithms. As a result, we obtain new measurement matrix constructions and algorithms for signal recovery which, compared to previous algorithms, are superior in either the number of measurements or computational efficiency of decoders.
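
A small numerical illustration of the RIP-1 viewpoint, with my own toy construction and sizes far smaller than the theory requires: the adjacency matrix of a random d-left-regular bipartite graph, scaled by 1/d, approximately preserves the ℓ1 norm of sparse vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 1000, 200, 8          # signal length, measurements, left degree

Phi = np.zeros((m, n))
for j in range(n):              # each left vertex (column) picks d right neighbors
    Phi[rng.choice(m, size=d, replace=False), j] = 1.0

for k in (5, 10, 20):           # k-sparse test vectors
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    # RIP-1: ||Phi x||_1 is within a (1 - 2 eps) factor of d ||x||_1
    ratio = np.abs(Phi @ x).sum() / (d * np.abs(x).sum())
    print(f"k={k:2d}: ||Phi x||_1 / (d ||x||_1) = {ratio:.3f}")  # close to 1
```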

445 citations

Proceedings Article
11 Jun 2007
TL;DR: This paper exhibits a randomized measurement ensemble and a signal reconstruction algorithm that satisfy four requirements; the recovery algorithm is a fully polynomial approximation scheme with respect to its error bound.
Abstract: Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and process it to extract the information. The new approach acquires a small number of nonadaptive linear measurements of the signal and uses sophisticated algorithms to determine its information content. Emerging technologies can compute these general linear measurements of a signal at unit cost per measurement. This paper exhibits a randomized measurement ensemble and a signal reconstruction algorithm that satisfy four requirements:

1. The measurement ensemble succeeds for all signals, with high probability over the random choices in its construction.
2. The number of measurements of the signal is optimal, except for a factor polylogarithmic in the signal length.
3. The running time of the algorithm is polynomial in the amount of information in the signal and polylogarithmic in the signal length.
4. The recovery algorithm offers the strongest possible type of error guarantee. Moreover, it is a fully polynomial approximation scheme with respect to this type of error bound.

Emerging applications demand this level of performance. Yet no other algorithm in the literature simultaneously achieves all four of these desiderata.
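
A hedged toy of the measurement model the paper targets. Plain OMP stands in for the paper's recovery algorithm (which this is emphatically not); the point is only to show roughly k log n nonadaptive random measurements and the shape of the error guarantee, assuming the mixed-norm form ‖x − x̂‖2 ≲ ‖x − x_k‖1 / √k.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 2048, 10
m = int(4 * k * np.log(n))                  # a few times k log n measurements

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x += 0.01 * rng.standard_normal(n)          # compressible: sparse plus a small tail

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                                 # nonadaptive linear measurements

support, residual = [], y.copy()            # stand-in greedy decoder (plain OMP)
for _ in range(k):
    scores = np.abs(Phi.T @ residual)
    scores[support] = -np.inf
    support.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef
xhat = np.zeros(n)
xhat[support] = coef

top = np.argsort(np.abs(x))[-k:]            # best k-term approximation of x
best_k = np.zeros(n)
best_k[top] = x[top]
err = np.linalg.norm(x - xhat)
scale = np.linalg.norm(x - best_k, 1) / np.sqrt(k)
print(f"recovery error {err:.3f} vs ||x - x_k||_1 / sqrt(k) = {scale:.3f}")
```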

337 citations


Cited by
Journal Article


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1.
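
Basis Pursuit, the linear program named in the abstract, in its standard split-variable LP form: minimize ‖α‖1 subject to Φα = y. Solver choice and problem sizes below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, n_coef, k = 40, 128, 4
Phi = rng.standard_normal((m, n_coef))
alpha_true = np.zeros(n_coef)
alpha_true[rng.choice(n_coef, k, replace=False)] = rng.standard_normal(k)
y = Phi @ alpha_true

# split alpha = u - v with u, v >= 0, so ||alpha||_1 = sum(u) + sum(v)
c = np.ones(2 * n_coef)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
assert res.success
alpha_hat = res.x[:n_coef] - res.x[n_coef:]
print(np.max(np.abs(alpha_hat - alpha_true)))   # ~0: exact recovery
```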

18,609 citations

Journal Article
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that, with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to an ℓ1 minimization problem.
Abstract: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ) δ(t − τ), obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(−M)) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.
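
The spike-recovery problem from the abstract as a small ℓ1 program: observe Fourier coefficients on a random frequency set Ω and minimize ‖g‖1 subject to matching them. Splitting real and imaginary parts to obtain a real LP, and the toy sizes, are my implementation choices rather than the paper's.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
N, spikes, n_freq = 128, 3, 30
f = np.zeros(N)
f[rng.choice(N, spikes, replace=False)] = rng.standard_normal(spikes)

omega = rng.choice(N, n_freq, replace=False)        # observed frequencies
F = np.exp(-2j * np.pi * np.outer(omega, np.arange(N)) / N)
A = np.vstack([F.real, F.imag])                     # real equality system
b = A @ f                                           # Re and Im of the Fourier samples

c = np.ones(2 * N)                                  # g = u - v with u, v >= 0
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
assert res.success
g = res.x[:N] - res.x[N:]
print(np.max(np.abs(g - f)))                        # ~0: exact recovery
```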

14,587 citations

Journal Article
TL;DR: This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
Abstract: Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.
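
A small demonstration of the Shannon/Nyquist rule the survey starts from, with illustrative parameters of my choosing: a 5 Hz sinusoid sampled above the Nyquist rate (at least 10 Hz here) is identified correctly, while sampling at 6 Hz aliases it to 1 Hz.

```python
import numpy as np

def dominant_freq(fs, f0=5.0, duration=4.0):
    # sample a f0-Hz sinusoid at rate fs and return the strongest FFT bin
    t = np.arange(0, duration, 1 / fs)
    x = np.sin(2 * np.pi * f0 * t)
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spectrum)]

print(dominant_freq(fs=50))   # 5.0  (well above the Nyquist rate)
print(dominant_freq(fs=6))    # 1.0  (aliased: |5 - 6| = 1)
```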

9,686 citations

Journal Article
TL;DR: It is demonstrated theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
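
The OMP procedure the abstract describes, in a short NumPy sketch: greedy atom selection against the residual, followed by a least-squares re-fit on the selected support, using on the order of m ln d random Gaussian measurements. Parameter choices and constants are illustrative.

```python
import numpy as np

def omp(Phi, y, m):
    support, residual = [], y.copy()
    for _ in range(m):
        scores = np.abs(Phi.T @ residual)      # correlate atoms with the residual
        scores[support] = -np.inf              # never reselect an atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef  # orthogonal re-projection
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
d, m = 1024, 8
n_meas = int(4 * m * np.log(d))                # O(m ln d) measurements
Phi = rng.standard_normal((n_meas, d)) / np.sqrt(n_meas)
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
xhat = omp(Phi, Phi @ x, m)                    # recover from y = Phi x
print(np.linalg.norm(x - xhat))                # ~0: exact recovery
```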

8,604 citations