Open Access · Report · DOI

Comparison and anti-concentration bounds for maxima of Gaussian random vectors

TLDR
In this paper, the authors give explicit comparisons of expectations of smooth functions and distribution functions of maxima of Gaussian random vectors without any restriction on the covariance matrices, and establish an anti-concentration inequality for the maximum of a Gaussian random vector that yields a useful upper bound on the Lévy concentration function of the Gaussian maximum.
Abstract
Slepian and Sudakov–Fernique type inequalities, which compare expectations of maxima of Gaussian random vectors under certain restrictions on the covariance matrices, play an important role in probability theory, especially in empirical process and extreme value theories. Here we give explicit comparisons of expectations of smooth functions and distribution functions of maxima of Gaussian random vectors without any restriction on the covariance matrices. We also establish an anti-concentration inequality for the maximum of a Gaussian random vector, which yields a useful upper bound on the Lévy concentration function of the Gaussian maximum. The bound is dimension-free and applies to vectors with arbitrary covariance matrices. This anti-concentration inequality plays a crucial role in establishing bounds on the Kolmogorov distance between maxima of Gaussian random vectors. These results have immediate applications in mathematical statistics. As an application, we establish a conditional multiplier central limit theorem for maxima of sums of independent random vectors, where the dimension of the vectors is possibly much larger than the sample size.
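For orientation, the two central objects can be written down explicitly. The following is a minimal sketch in standard notation; the constant in the second display and the exact variance normalization are not reproduced from the paper, so it should be read as the form of the bound rather than its precise statement. For a centered Gaussian vector X = (X_1, ..., X_p) with maximum M = max_{1 ≤ j ≤ p} X_j, the Lévy concentration function and the Kolmogorov distance are

\[
\mathcal{L}(M,\varepsilon) = \sup_{x \in \mathbb{R}} \Pr\bigl(\lvert M - x \rvert \le \varepsilon\bigr),
\qquad
\rho(M,\widetilde{M}) = \sup_{x \in \mathbb{R}} \bigl\lvert \Pr(M \le x) - \Pr(\widetilde{M} \le x) \bigr\rvert,
\]

and the anti-concentration inequality controls \(\mathcal{L}(M,\varepsilon)\) through \(\mathbb{E}[M]\) rather than the dimension p (hence dimension-free); schematically, under a unit-variance normalization of the coordinates,

\[
\mathcal{L}(M,\varepsilon) \;\lesssim\; \varepsilon \bigl(\mathbb{E}[M] + 1\bigr).
\]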



Citations
Report · DOI

Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors

TL;DR: It is demonstrated how Gaussian approximations and the multiplier bootstrap can be used for modern high-dimensional estimation, multiple hypothesis testing, and adaptive specification testing.
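The following is a minimal numerical sketch of the multiplier bootstrap for the max-of-sums statistic described in this entry, assuming i.i.d. standard normal multipliers; the function name, the centering step, and the parameter choices are illustrative, not the paper's code.

import numpy as np

def multiplier_bootstrap_max(X, n_boot=1000, rng=None):
    """Multiplier bootstrap draws for T = max_j n^{-1/2} sum_i X_ij."""
    rng = np.random.default_rng(rng)
    n, _ = X.shape
    Xc = X - X.mean(axis=0)           # demean so draws are conditionally centered
    stats = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.standard_normal(n)    # multiplier weights e_1, ..., e_n ~ N(0, 1)
        stats[b] = (e @ Xc).max() / np.sqrt(n)
    return stats

# Usage sketch: a 90% critical value for the observed max statistic.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))   # n = 200 observations, p = 500 coordinates
T = X.sum(axis=0).max() / np.sqrt(200)
crit = np.quantile(multiplier_bootstrap_max(X, rng=1), 0.90)
print(f"T = {T:.3f}, bootstrap 90% critical value = {crit:.3f}")

Each bootstrap draw reweights the demeaned rows of the data by independent N(0, 1) multipliers and recomputes the max statistic conditionally on the data; quantiles of the resulting draws serve as critical values.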
Report · DOI

Gaussian approximation of suprema of empirical processes

TL;DR: An abstract approximation theorem applicable to a wide variety of problems, primarily in statistics, is proved; the bound in the main approximation theorem is non-asymptotic, and the theorem does not require uniform boundedness of the class of functions.

Supplement to "Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors"

TL;DR: The supplement shows how Gaussian approximations and the multiplier bootstrap can be used for modern high-dimensional estimation, multiple hypothesis testing, and adaptive specification testing, and it contains nonasymptotic bounds on approximation errors.
Posted Content

Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning

TL;DR: It is proved that self-distillation can also be viewed as implicitly combining ensembling and knowledge distillation to improve test accuracy, and the analysis sheds light on how ensembles work in deep learning in a way that is completely different from traditional theory.
Journal Article · DOI

Uniform post-selection inference for least absolute deviation regression and other Z-estimation problems

TL;DR: In this paper, Neyman's orthogonal score test is applied to a high-dimensional sparse median regression model with homoscedastic errors, and uniformly valid confidence regions for the regression coefficients are developed.
References
Book

Weak Convergence and Empirical Processes: With Applications to Statistics

TL;DR: In this book, the authors define the ball sigma-field and the measurability of suprema, and show that it is possible to obtain convergence both almost surely and in probability.
Book · DOI

Weak Convergence and Empirical Processes

TL;DR: This chapter discusses weak, almost uniform, and in-probability convergence, focusing on the part of the Donsker property concerned with uniformity and metrization.
Journal Article · DOI

The Dantzig selector: Statistical estimation when p is much larger than n

TL;DR: In many important statistical applications, the number of variables or parameters p is much larger than the total number of observations n; the authors show that it is nevertheless possible to estimate β reliably from the noisy data y.
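For reference, the Dantzig selector in its standard formulation, with design matrix X, response y, and tuning parameter λ (the paper's exact calibration of λ is not reproduced here):

\[
\hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^p} \lVert \beta \rVert_1
\quad \text{subject to} \quad
\lVert X^{\top}(y - X\beta) \rVert_\infty \le \lambda.
\]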
Journal Article · DOI

Estimation of the Mean of a Multivariate Normal Distribution

Charles Stein, 01 Nov 1981
TL;DR: In this article, an unbiased estimate of risk is obtained for an arbitrary estimate, and certain special classes of estimates are then discussed, such as smoothing by using moving averages and trimmed analogs of the James-Stein estimate.
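The unbiased estimate of risk referred to here is Stein's formula; in its standard statement (not a quotation from the article), for X ~ N_p(θ, σ²I) and an estimator θ̂(X) = X + g(X) with g weakly differentiable,

\[
\mathbb{E}\,\lVert \hat{\theta}(X) - \theta \rVert^2
= p\,\sigma^2 + \mathbb{E}\Bigl[\, 2\sigma^2 \sum_{j=1}^{p} \frac{\partial g_j}{\partial x_j}(X) + \lVert g(X) \rVert^2 \Bigr].
\]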
Book

Statistics for High-Dimensional Data: Methods, Theory and Applications

TL;DR: This book presents a detailed account of recently developed approaches, including the Lasso and versions of it for various models, boosting methods, undirected graphical modeling, and procedures controlling false positive selections.
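As a pointer to the book's central estimator, the Lasso in its standard Lagrangian form (schematic; the book's own notation and scaling may differ):

\[
\hat{\beta}_{\text{lasso}} \;=\; \arg\min_{\beta \in \mathbb{R}^p}
\Bigl\{ \tfrac{1}{2n} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1 \Bigr\}.
\]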