
Showing papers by "Iain M. Johnstone published in 2013"


Journal ArticleDOI
TL;DR: In this article, the authors present a formula characterizing the allowed undersampling of generalized sparse objects under approximate message passing (AMP) algorithms for compressed sensing, which are generalized here to employ denoising operators beyond the traditional scalar soft-thresholding denoiser.
Abstract: Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to approximate message passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization (the firm shrinkage nonlinearity and the minimax nonlinearity) and also nonscalar denoisers (block thresholding, monotone regression, and total variation minimization). Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x_0 according to y = Ax_0. Here, A is an n×N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x_0 by AMP is given by δ = M(ε|Denoiser), where M(ε|Denoiser) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally tuned denoiser in the directly observed problem y = x + z. In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem. We prove that this formula follows from state evolution and present numerical results validating it in a wide range of settings. The above formula generates numerous new insights, both in the scalar and in the nonscalar cases.
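The formula identifies the AMP phase transition with the minimax MSE of the denoiser in the direct problem y = x + z. As a concrete illustration of the algorithm side, below is a minimal Python sketch of AMP with the scalar soft-thresholding denoiser. It is not the authors' code: the function names, the threshold multiplier alpha, the column normalization of A by 1/sqrt(n), and the example dimensions are assumptions made for this sketch.

import numpy as np

def soft_threshold(x, t):
    # Scalar soft-thresholding denoiser: eta(x; t) = sign(x) * max(|x| - t, 0)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp_soft_threshold(A, y, alpha=1.4, n_iter=50):
    # Minimal AMP iteration: x <- eta(x + A'z; alpha * tau), where tau is the
    # effective noise level estimated from the residual and z carries the
    # Onsager correction term.
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        tau = alpha * np.sqrt(np.mean(z ** 2))        # estimated noise level of the pseudo-data
        pseudo_data = x + A.T @ z                      # behaves like the direct problem y = x + noise
        x_new = soft_threshold(pseudo_data, tau)
        onsager = (np.count_nonzero(x_new) / n) * z    # mean derivative of eta times previous residual
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(0)
N, n, k = 1000, 500, 50                                # delta = n/N = 0.5, epsilon = k/N = 0.05
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)           # Gaussian matrix, columns roughly unit norm
y = A @ x0
x_hat = amp_soft_threshold(A, y)
print("relative reconstruction error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))

Because (ε, δ) = (0.05, 0.5) lies well inside the success region for soft thresholding, the reconstruction error here should be small; raising the sparsity fraction above the curve δ = M(ε|Denoiser) would make the same iteration fail.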

207 citations


Journal ArticleDOI
01 Sep 2013 - Cytokine
TL;DR: In the context of infection, there was decreased expression of CD3 and increased expression of PD-1 and CD69 on infected cells, correlating with increased phosphorylation of proteins in the TCR pathway, similar to changes observed during T cell activation; infected naive T cells appeared to be transitioning to a memory-like phenotype.

1 citation


Posted Content
TL;DR: In this article, the linear inverse problem of estimating an unknown signal from noisy measurements on a linear operator admitting a wavelet-vaguelette decomposition (WVD) is formulated in the Gaussian sequence model, and estimation based on complexity-penalized regression on a level-by-level basis is proposed.
Abstract: (Authors: Johnstone, Iain M.; Paul, Debashis) We consider the linear inverse problem of estimating an unknown signal $f$ from noisy measurements on $Kf$, where the linear operator $K$ admits a wavelet-vaguelette decomposition (WVD). We formulate the problem in the Gaussian sequence model and propose estimation based on complexity penalized regression on a level-by-level basis. We adopt squared error loss and show that the estimator achieves exact rate-adaptive optimality as $f$ varies over a wide range of Besov function classes.
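As a concrete illustration of complexity-penalized regression applied level by level, here is a small Python sketch. It is only an assumption-laden sketch, not the paper's estimator: the 2k(1 + log(n/k))-type penalty, the constant gamma, the per-level noise scale sigma, and the function name are placeholders chosen for illustration.

import numpy as np

def penalized_level_estimate(y_level, sigma, gamma=2.0):
    # Complexity-penalized regression for one resolution level: choose the model
    # size k minimizing RSS(k) + gamma * sigma^2 * k * (1 + log(n / k)), where
    # RSS(k) is the energy of the n - k smallest coefficients; keep the k largest
    # coefficients and zero out the rest.
    n = y_level.size
    order = np.argsort(np.abs(y_level))[::-1]          # indices by decreasing magnitude
    sorted_sq = y_level[order] ** 2
    rss = np.concatenate(([sorted_sq.sum()], sorted_sq.sum() - np.cumsum(sorted_sq)))
    ks = np.arange(n + 1)
    pen = gamma * sigma ** 2 * ks * (1.0 + np.log(n / np.maximum(ks, 1)))
    k_best = int(np.argmin(rss + pen))
    est = np.zeros_like(y_level)
    est[order[:k_best]] = y_level[order[:k_best]]
    return est

# One level with a few planted large coefficients; in the WVD setting the noise
# scale sigma would vary from level to level with the quasi-singular values of K.
rng = np.random.default_rng(1)
y = rng.standard_normal(256)
y[:5] += 8.0
print(np.nonzero(penalized_level_estimate(y, sigma=1.0))[0])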

1 citation