David L. Donoho

Researcher at Stanford University

Publications: 273
Citations: 115802

David L. Donoho is an academic researcher at Stanford University. He has contributed to research on topics including wavelets and compressed sensing, has an h-index of 110, and has co-authored 271 publications receiving 108027 citations. Previous affiliations of David L. Donoho include the University of California, Berkeley and Western Geophysical.

Papers
Journal Article (DOI)

Cosmological non-Gaussian signature detection: comparing performance of different statistical tests

TL;DR: Two models for transform-domain coefficients are considered: a power-law model, which seems suited to the wavelet coefficients of simulated cosmic strings, and a sparse mixture model, which seems suited to the curvelet coefficients of filamentary structure.
Journal Article (DOI)

Asymptotic minimaxity of False Discovery Rate thresholding for sparse exponential data

TL;DR: In this article, the authors apply FDR thresholding to a non-Gaussian vector whose coordinates X_i, i = 1, ..., n, are independent exponentials with individual means, and study minimax estimation over parameter spaces defined by constraints on the per-coordinate p-norm.
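The entry above builds on FDR thresholding. As an illustrative sketch only, the generic Benjamini-Hochberg step-up rule that underlies FDR thresholding can be written as follows; the paper's exponential-means setting and minimax analysis are not reproduced, and the function name `fdr_threshold` is ours:

```python
import numpy as np

def fdr_threshold(pvals, q=0.1):
    """Benjamini-Hochberg step-up rule: return a boolean mask of discoveries.

    Keeps the coordinates whose sorted p-values satisfy p_(k) <= q * k / n
    up to the largest such k. Illustrative sketch only.
    """
    n = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    # Find the largest k with p_(k) <= q * k / n.
    below = sorted_p <= q * (np.arange(1, n + 1) / n)
    mask = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # index of the last passing p-value
        mask[order[: k + 1]] = True       # all smaller p-values are kept too
    return mask
```

In the estimation setting of the paper, the same rule is used to pick a data-dependent threshold for the coordinates themselves rather than to test hypotheses.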
Proceedings Article (DOI)

Solution of ℓ1 Minimization Problems by LARS/Homotopy Methods

TL;DR: This work considers ℓ1 minimization using LARS, Lasso, and homotopy methods and finds that whenever the number k of nonzeros in the sparsest solution is less than d/(2 log n), LARS/homotopy obtains the sparsest solution in k steps, each of complexity O(d^2).
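The claim above can be exercised numerically. A minimal sketch, assuming scikit-learn is available: run the LARS/lasso homotopy path down to a zero penalty and read off the final coefficient vector, which in the noiseless sparse regime described (k nonzeros with k < d/(2 log n)) is expected to coincide with the sparsest solution. The helper name `sparsest_via_lars` is ours:

```python
import numpy as np
from sklearn.linear_model import lars_path  # assumes scikit-learn is installed

def sparsest_via_lars(A, y):
    """Return the endpoint of the LARS/lasso homotopy path (penalty -> 0).

    Sketch only: for a noiseless underdetermined system y = A @ x0 with a
    sufficiently sparse x0, the path is expected to terminate at x0.
    """
    _, _, coefs = lars_path(A, y, method="lasso")
    return coefs[:, -1]

# Hypothetical test problem: d = 50 equations, n = 100 unknowns, k = 3 nonzeros.
rng = np.random.default_rng(0)
d, n = 50, 100
x0 = np.zeros(n)
x0[[5, 17, 42]] = [1.5, -2.0, 1.0]
A = rng.standard_normal((d, n))
y = A @ x0
xhat = sparsest_via_lars(A, y)
```

Here k = 3 is well below d/(2 log n) ≈ 5.4, so recovery of `x0` is expected.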
Journal Article (DOI)

On Lebesgue-type inequalities for greedy approximation

TL;DR: In this paper, the efficiency of greedy algorithms with respect to redundant dictionaries in Hilbert spaces was studied and upper estimates for the errors of the pure greedy algorithm and the orthogonal greedy algorithm in terms of the best m-term approximations were obtained.
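For concreteness, the orthogonal greedy algorithm studied above (known as orthogonal matching pursuit in the finite-dimensional case) can be sketched for a dictionary given as the columns of a matrix D. This is an illustration of the general algorithm, not code from the paper; the function name `omp` is ours:

```python
import numpy as np

def omp(D, f, m):
    """Orthogonal greedy algorithm (OMP) w.r.t. a dictionary D.

    D: matrix whose unit-norm columns are the dictionary atoms.
    f: target vector; m: number of greedy steps (m-term approximation).
    Each step picks the atom best correlated with the residual, then
    re-fits f on the span of all selected atoms (the "orthogonal" part).
    """
    residual = f.copy()
    support = []
    for _ in range(m):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-correlated atom
        if j not in support:
            support.append(j)
        # Orthogonal projection of f onto the selected atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], f, rcond=None)
        residual = f - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x, residual
```

The pure greedy algorithm differs only in that each step subtracts the single new atom's projection instead of re-fitting on the whole selected set; the paper's Lebesgue-type inequalities bound how far either algorithm's error can be from the best m-term approximation.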
Posted Content

Neural Proximal Gradient Descent for Compressive Imaging

TL;DR: This work develops a proximal map based on residual networks with recurrent blocks that works well with real images; it significantly outperforms conventional non-recurrent deep ResNets by 2 dB SNR and outperforms state-of-the-art wavelet-based compressed-sensing methods by 4 dB SNR.
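The neural proximal map above replaces the hand-crafted proximal step of classical proximal gradient descent. A minimal sketch of the classical scheme (ISTA) for the ℓ1-regularized least-squares problem 0.5·||Ax − y||² + λ||x||₁, where the proximal map is simply soft thresholding, shows the iteration structure the learned variant builds on; the function names are ours:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal map of t * ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam=0.1, step=None, iters=1000):
    """Classical proximal gradient descent (ISTA).

    Alternates a gradient step on the data-fit term with the proximal
    (soft-thresholding) step; the paper replaces the latter with a
    learned, ResNet-based map.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

As a usage illustration, on a hypothetical compressed-sensing problem with a random 40x80 matrix and a 2-sparse signal, the largest recovered coefficients land on the true support.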