Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to a threshold test on a sum of observations, that each such test has an associated index $\rho$, and that for large samples a sample of size $n$ with the first of two tests will give about the same probabilities of error as a sample of size $en$ with the second, where $e = \log \rho_1/\log \rho_2$.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
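As a quick numeric illustration of the paper's central fact, the sketch below (Python with NumPy/SciPy assumed; the Bernoulli distribution and every parameter value are illustrative choices, not from the paper) computes $m = \min_t E\lbrack e^{t(X-a)}\rbrack$ for $X \sim \mathrm{Bernoulli}(p)$ and checks that $P(S_n \leqq na)^{1/n}$ approaches $m$ as $n$ grows.

```python
# Minimal sketch: P(S_n <= na) behaves roughly like m^n, where m is the
# minimum of the moment generating function of X - a. Assumptions: X is
# Bernoulli(p) with threshold a < p, so the event is a large deviation.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

p, a = 0.5, 0.3   # hypothetical choices; any a < p works

# m = min_t E[exp(t(X - a))] = min_t exp(-t a) * (1 - p + p e^t)
mgf = lambda t: np.exp(-t * a) * (1 - p + p * np.exp(t))
m = minimize_scalar(mgf, bounds=(-50, 0), method="bounded").fun

for n in (10, 100, 1000, 10000):
    tail = binom.cdf(np.floor(n * a), n, p)   # exact P(S_n <= na)
    print(n, tail ** (1 / n))                 # approaches m (slowly: the
print("m =", m)                               # prefactor is polynomial in n)
```

Given two tests whose indices $\rho_1$ and $\rho_2$ are computed this way, $e = \log \rho_1/\log \rho_2$ is the sample-size ratio at which the tests deliver comparable error probabilities.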
Citations
Book
01 Jan 1999
TL;DR: In this book, uniform central limit theorems are developed from Donsker's theorem through metric entropy and inequalities, Gaussian processes, and Vapnik-Cervonenkis combinatorics, including treatments of the two-sample case, the bootstrap, and confidence sets.
Abstract: Preface 1. Introduction: Donsker's theorem, metric entropy and inequalities 2. Gaussian measures and processes sample continuity 3. Foundations of uniform central limit theorems: Donsker classes 4. Vapnik-Cervonenkis combinatorics 5. Measurability 6. Limit theorems for Vapnik-Cervonenkis and related classes 7. Metric entropy, with inclusion and bracketing 8. Approximation of functions and sets 9. Sums in general Banach spaces and invariance principles 10. Universal and uniform central limit theorems 11. The two-sample case, the bootstrap, and confidence sets 12. Classes of sets or functions too large for central limit theorems Appendices Subject index Author index Index of notation.
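To give a concrete feel for the Donsker-type limits the book opens with, here is a minimal simulation sketch (Python with NumPy/SciPy assumed; the sample size and replication count are arbitrary): the scaled Kolmogorov-Smirnov statistic of uniform samples should match the distribution of the supremum of a Brownian bridge.

```python
# Sketch of Donsker's theorem for the uniform empirical process:
# sqrt(n) * sup_x |F_n(x) - x| converges in law to the Kolmogorov
# distribution (the sup of a Brownian bridge).
import numpy as np
from scipy.stats import kstwobign

rng = np.random.default_rng(0)
n, reps = 500, 2000
stats = np.empty(reps)
for i in range(reps):
    u = np.sort(rng.uniform(size=n))
    grid = np.arange(1, n + 1) / n
    # sup |F_n - F| for the uniform CDF F(x) = x
    d = np.maximum(grid - u, u - (grid - 1 / n)).max()
    stats[i] = np.sqrt(n) * d

# Compare an empirical quantile with the Brownian-bridge limit
print(np.quantile(stats, 0.95), kstwobign.ppf(0.95))
```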

697 citations

Journal ArticleDOI
TL;DR: In this article, a uniform asymptotic series is derived for the probability distribution of the sum of a large number of independent random variables; in contrast to Edgeworth-type series it is accurate throughout its entire domain, and the derivation rests on the fact that the major components of the distribution are determined by a saddle point and a singularity at the origin.
Abstract: In the present paper a uniform asymptotic series is derived for the probability distribution of the sum of a large number of independent random variables. In contrast to the usual Edgeworth-type series, the uniform series gives good accuracy throughout its entire domain. Our derivation uses the fact that the major components of the distribution are determined by a saddle point and a singularity at the origin. The analogous series for the probability density, due to Daniels, depends only on the saddle point. Two illustrative examples are presented that show excellent agreement with the exact distributions.
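A minimal sketch of a saddlepoint tail approximation of this kind, in the Lugannani-Rice form, for a sum of standard exponential variables where the exact tail is a Gamma tail (Python with SciPy assumed; the exponential example and all numerical choices are illustrative assumptions, not taken from the paper):

```python
# Saddlepoint tail approximation (Lugannani-Rice form) for
# S_n = sum of n iid Exp(1) variables; the exact tail is a Gamma tail.
import numpy as np
from scipy.stats import norm, gamma

def lr_tail(x, n):
    """Approximate P(S_n >= x) for S_n = sum of n Exp(1) variables."""
    # Per-variable CGF K(t) = -log(1 - t); saddlepoint solves n K'(t) = x.
    t = 1.0 - n / x                       # from K'(t) = 1/(1 - t) = x/n
    K, K2 = -np.log(1.0 - t), 1.0 / (1.0 - t) ** 2
    w = np.sign(t) * np.sqrt(2.0 * (t * x - n * K))
    u = t * np.sqrt(n * K2)
    return norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

n, x = 10, 20.0                           # right tail: x > E[S_n] = n
print(lr_tail(x, n), gamma.sf(x, n))      # approximation vs exact Gamma tail
```

With these values the approximation and the exact tail agree to about three significant figures, illustrating the claimed accuracy away from the center of the distribution.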

696 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...Roberts [9] has used Uo to deal with communication problems and it appears implicitly in the Chernoff bound [4]....


Proceedings ArticleDOI
11 May 1981
TL;DR: This paper shows that there exists an N-processor realistic computer that can simulate arbitrary idealistic N-processor parallel computations with only a factor of O(log N) loss of runtime efficiency, by isolating and solving a combinatorial routing problem that lies at the heart of the simulation question.
Abstract: In this paper we isolate a combinatorial problem that, we believe, lies at the heart of this question and provide some encouragingly positive solutions to it. We show that there exists an N-processor realistic computer that can simulate arbitrary idealistic N-processor parallel computations with only a factor of O(log N) loss of runtime efficiency. The main innovation is an O(log N) time randomized routing algorithm. Previous approaches were based on sorting or permutation networks, and implied loss factors of order at least (log N)^2.
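A toy simulation conveys the routing idea (an illustrative sketch in Python, not the paper's algorithm verbatim; all parameters are arbitrary): each packet on a d-dimensional hypercube is first sent to a uniformly random intermediate node and then to its destination, both legs by bit-fixing, which keeps the worst edge congestion around O(d) = O(log N) with high probability.

```python
# Two-phase randomized routing sketch on a d-dimensional hypercube:
# phase 1 routes each packet to a random intermediate node, phase 2 to
# its destination, both by bit-fixing; report the max edge congestion.
import random
from collections import Counter

def bit_fix_path(src, dst, d):
    """Edges of the bit-fixing route from src to dst (fix bits low to high)."""
    edges, cur = [], src
    for i in range(d):
        if (cur ^ dst) & (1 << i):
            nxt = cur ^ (1 << i)
            edges.append((cur, nxt))
            cur = nxt
    return edges

def max_congestion(d, seed=0):
    rng = random.Random(seed)
    N = 1 << d
    dests = list(range(N))
    rng.shuffle(dests)                  # a random permutation of destinations
    load = Counter()
    for src in range(N):
        mid = rng.randrange(N)          # random intermediate node (the trick)
        for e in bit_fix_path(src, mid, d) + bit_fix_path(mid, dests[src], d):
            load[e] += 1
    return max(load.values())

for d in (6, 8, 10):
    print(d, max_congestion(d))         # grows roughly linearly in d
```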

694 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...Proof The first inequality is due to Chernoff [4]....


Book
29 May 2015
TL;DR: Matrix concentration inequalities, as discussed by the authors, are a flexible, easy-to-use, and powerful family of results for studying random matrices, with applications throughout theoretical, applied, and computational mathematics.
Abstract: Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that achieve all of these goals. This monograph offers an invitation to the field of matrix concentration inequalities. It begins with some history of random matrix theory; it describes a flexible model for random matrices that is suitable for many problems; and it discusses the most important matrix concentration results. To demonstrate the value of these techniques, the presentation includes examples drawn from statistics, machine learning, optimization, combinatorics, algorithms, scientific computing, and beyond.
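As a small taste of the results the monograph surveys, the sketch below (Python/NumPy assumed; the dimensions and trial counts are arbitrary illustrative choices) compares the average spectral norm of a random matrix Rademacher series with the sqrt(2 sigma^2 log 2d) scale that matrix concentration bounds of this kind predict.

```python
# Sketch: matrix Rademacher series Z = sum_i eps_i * A_i with fixed
# symmetric A_i. Matrix concentration bounds of the kind surveyed in the
# monograph give E||Z|| <= sqrt(2 * sigma^2 * log(2d)), where
# sigma^2 = ||sum_i A_i^2||.
import numpy as np

rng = np.random.default_rng(1)
d, k = 30, 200
A = rng.normal(size=(k, d, d))
A = (A + A.transpose(0, 2, 1)) / 2             # symmetrize each A_i

sigma2 = np.linalg.norm(np.einsum('kij,kjl->il', A, A), 2)
bound = np.sqrt(2 * sigma2 * np.log(2 * d))

norms = []
for _ in range(200):
    eps = rng.choice([-1.0, 1.0], size=k)      # random signs
    Z = np.einsum('k,kij->ij', eps, A)
    norms.append(np.linalg.norm(Z, 2))         # spectral norm of the series

print(np.mean(norms), bound)                   # empirical mean vs. the bound
```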

690 citations

Journal ArticleDOI
TL;DR: A random graph process in which vertices are added to the graph one at a time and joined to a fixed number m of earlier vertices, where each earlier vertex is chosen with probability proportional to its degree is considered.
Abstract: We consider a random graph process in which vertices are added to the graph one at a time and joined to a fixed number m of earlier vertices, where each earlier vertex is chosen with probability proportional to its degree. This process was introduced by Barabasi and Albert [3], as a simple model of the growth of real-world graphs such as the world-wide web. Computer experiments presented by Barabasi, Albert and Jeong [1,5] and heuristic arguments given by Newman, Strogatz and Watts [23] suggest that after n steps the resulting graph should have diameter approximately log n. We show that while this holds for m=1, for m≥2 the diameter is asymptotically log n/log log n.
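The growth process itself is easy to sketch (pure Python; the parameters are illustrative, and multi-edge handling is simplified relative to the precise model analyzed in the paper): each arriving vertex picks m endpoints uniformly from a list in which every vertex appears once per incident edge, which is exactly choice with probability proportional to degree.

```python
# Sketch of the preferential-attachment process: vertex v joins m earlier
# vertices, each chosen with probability ~ its degree, via the standard
# repeated-endpoints list (uniform sampling there = degree-biased choice).
import random

def ba_graph(n, m, seed=0):
    rng = random.Random(seed)
    targets = list(range(m))    # the first new vertex connects to 0..m-1
    repeated = []               # each vertex appears once per incident edge
    edges = []
    for v in range(m, n):
        for t in set(targets):  # dedupe: a vertex may get < m edges
            edges.append((v, t))
            repeated.extend([v, t])
        # degree-proportional choice of m endpoints for the next vertex
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

g = ba_graph(1000, 2)
print(len(g))                   # ~ m*(n - m) edges (fewer when targets repeat)
```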

652 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...The main probabilistic tool we shall use in the rest of the paper is the following lemma given by Janson [17], which may be deduced from the Chernoff bounds [15]....

