
Showing papers by Iain M. Johnstone published in 1992


Journal ArticleDOI
TL;DR: In this paper, it was shown that near-blackness is required both for signal-to-noise enhancements and for superresolution, and that minimum ℓ1-norm reconstruction may exploit near-blackness to an even greater extent.
Abstract: Maximum entropy (ME) inversion is a non-linear inversion technique for inverse problems where the object to be recovered is known to be positive. It has been applied in areas ranging from radio astronomy to various forms of spectroscopy, sometimes with dramatic success. In some cases, ME has attained an order of magnitude finer resolution and/or an order of magnitude smaller noise level than that obtainable by standard linear methods. The dramatic successes all seem to occur in cases where the object to be recovered is 'nearly black': essentially zero in the vast majority of samples. We show that near-blackness is required, both for signal-to-noise enhancements and for superresolution. However, other methods, in particular minimum ℓ1-norm reconstruction, may exploit near-blackness to an even greater extent.

392 citations
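For a nonnegative object, the minimum ℓ1-norm reconstruction discussed above reduces to a linear program, since ‖x‖₁ = Σᵢ xᵢ when x ≥ 0. The following is a minimal sketch of that idea via scipy.optimize.linprog; the random measurement matrix, dimensions, and sparsity level are illustrative assumptions rather than the paper's setup.

```python
# Minimal sketch (not the paper's experiment): minimum l1-norm recovery of a
# nonnegative, "nearly black" object x from noiseless measurements y = A x.
# Since x >= 0 implies ||x||_1 = sum(x), the problem is a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k = 64, 3                            # signal length; number of nonzero spikes
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

A = rng.normal(size=(32, n))            # stand-in measurement/blurring operator
y = A @ x_true

# minimize 1'x  subject to  A x = y,  x >= 0
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
print("max reconstruction error:", np.abs(res.x - x_true).max())
```

In the nearly black regime (k much smaller than n) this program typically recovers x exactly, which is the sense in which ℓ1 reconstruction exploits near-blackness.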


Journal ArticleDOI
TL;DR: In this article, nonparametric information bounds are defined for the smoothing parameter h₀, which minimizes the squared error of a kernel or smoothing spline estimator, and asymptotically efficient estimators of h₀ are presented.
Abstract: A striking feature of curve estimation is that the smoothing parameter h₀, which minimizes the squared error of a kernel or smoothing spline estimator, is very difficult to estimate. This is manifest both in slow rates of convergence and in high variability of standard methods such as cross-validation. We quantify this difficulty by describing nonparametric information bounds and exhibit asymptotically efficient estimators of h₀ that attain the bounds. The efficient estimators are substantially less variable than cross-validation (and other current procedures), and simulations suggest that they may offer improvements at moderate sample sizes, at least in terms of minimizing the squared error.

54 citations
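For context on the variability the abstract quantifies, here is a minimal sketch of the standard baseline: selecting the bandwidth h of a Nadaraya-Watson kernel regression estimator by leave-one-out cross-validation. The test function, noise level, and bandwidth grid are illustrative assumptions; repeating this over many noise draws shows how variable the CV-selected h is.

```python
# Minimal sketch of bandwidth selection by leave-one-out cross-validation
# (the highly variable baseline; not the paper's efficient estimator of h_0).
import numpy as np

def loo_cv_score(x, y, h):
    """Leave-one-out CV score for a Gaussian-kernel Nadaraya-Watson fit."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)                 # drop each point from its own fit
    fit = w @ y / w.sum(axis=1)
    return np.mean((y - fit) ** 2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(4 * np.pi * x) + 0.3 * rng.normal(size=200)

hs = np.linspace(0.005, 0.10, 40)
h_cv = hs[int(np.argmin([loo_cv_score(x, y, h) for h in hs]))]
print("CV-selected bandwidth:", h_cv)
```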


01 Jan 1992
TL;DR: In this article, a framework is proposed in which estimators constructed explicitly from wavelet coefficients prove strictly more efficient than the usual estimators (kernels, orthogonal series, ...).
Abstract: We propose a framework in which estimators constructed explicitly from wavelet coefficients prove strictly more efficient than the usual estimators (kernels, orthogonal series, ...). This framework relies heavily on the fact that regularity constraints of Besov type translate, in terms of wavelet coefficients, into a simple geometric form in sequence spaces.

36 citations
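A concrete instance of an estimator acting coordinatewise on wavelet coefficients is soft thresholding. The sketch below uses the PyWavelets package with the universal threshold σ√(2 log n), one standard choice; the test signal, noise level, wavelet, and decomposition depth are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch: denoising by soft thresholding of wavelet coefficients.
import numpy as np
import pywt

rng = np.random.default_rng(2)
n = 1024
t = np.linspace(0.0, 1.0, n)
signal = np.where(t < 0.3, 0.0, 1.0) + np.sin(6 * np.pi * t)  # jump + smooth part
noisy = signal + 0.2 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=6)
thresh = 0.2 * np.sqrt(2 * np.log(n))        # universal threshold, sigma = 0.2
denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
estimate = pywt.waverec(denoised, "db4")[:n]
print("RMSE:", np.sqrt(np.mean((estimate - signal) ** 2)))
```

Thresholding acts on each coefficient separately, the kind of simple coordinatewise geometry in sequence space that the Besov framework exploits.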


Journal ArticleDOI
TL;DR: In this paper, the authors give analytical and numerical results for small intervals when p = 1, show that if T is rectangularly convex at 0 there exist linear estimators with risk at most 1.26ρ(T), and bound ρ(T) below via the principal eigenvalue of the Laplace operator on the polydisc transform of T, with explicit forms for T a simplex or a hyperrectangle.
Abstract: […] and minimax risk ρ(T), we give analytical and numerical results for small intervals when p = 1. Usually, however, approximations are needed. If T is "rectangularly convex" at 0, there exist linear estimators with risk at most 1.26ρ(T). For general T, ρ(T) ≥ p²/(p + λ(Ω)), where λ(Ω) is the principal eigenvalue of the Laplace operator on the polydisc transform Ω = Ω(T), a domain in 2p-dimensional space. The bound is asymptotically sharp: ρ(mT) = p − λ(Ω)/m + o(m⁻¹) as m → ∞. Explicit forms are given for T a simplex or a hyperrectangle. We explore the curious parallel of the results for T with those for a Gaussian vector of double the dimension lying in Ω.

28 citations
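For readability, the two risk statements from the abstract in display form (a LaTeX rendering; the notation λ(Ω) and the m⁻¹ error rate follow the reconstruction above):

```latex
% Lower bound and its asymptotic sharpness, with \lambda(\Omega) the principal
% eigenvalue of the Laplace operator on the polydisc transform \Omega = \Omega(T).
\[
  \rho(T) \;\ge\; \frac{p^{2}}{p + \lambda(\Omega)},
  \qquad
  \rho(mT) \;=\; p - \frac{\lambda(\Omega)}{m} + o\!\left(m^{-1}\right)
  \quad (m \to \infty).
\]
```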