Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to a sum-of-observations test of this form, and that for large samples a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
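The key fact — that $P(S_n \leqq na)$ behaves roughly like $m^n$, where $m$ is the minimum of the moment generating function of $X - a$ — can be checked numerically. Below is a minimal sketch (not from the paper; the Bernoulli example, the grid search, and all function names are illustrative assumptions) for $X \sim \mathrm{Bernoulli}(p)$ with $a < p$, where the lower-tail minimization runs over $t \leq 0$:

```python
import math

def chernoff_index(p, a, t_min=-10.0, grid=10001):
    """Minimize the MGF of X - a over t in [t_min, 0] for X ~ Bernoulli(p).

    For the lower tail P(S_n <= na) with a < p, the optimal t is negative.
    A dense grid search stands in for a proper 1-D minimizer.
    """
    best = 1.0
    for i in range(grid):
        t = t_min * i / (grid - 1)
        # E[exp(t(X - a))] = exp(-t a) * (p e^t + (1 - p))
        mgf = math.exp(-t * a) * (p * math.exp(t) + (1 - p))
        best = min(best, mgf)
    return best

n, p, a = 100, 0.5, 0.3
m = chernoff_index(p, a)
bound = m ** n                      # Chernoff bound on P(S_n <= na)
# Exact binomial tail P(Bin(n, p) <= n*a) for comparison
exact = sum(math.comb(n, k) for k in range(int(n * a) + 1)) / 2 ** n
print(m, bound, exact)              # bound is a valid upper bound on exact
```

For the Bernoulli case the minimum has the closed form $m = (p/a)^a \big((1-p)/(1-a)\big)^{1-a} = e^{-D(a \| p)}$, the relative-entropy exponent, which the grid search reproduces to high accuracy.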
Citations
Journal ArticleDOI
Edith Cohen
TL;DR: This work presents an O(m) time randomized (Monte Carlo) algorithm that estimates, with small relative error, the sizes of all reachability sets and the transitive closure.

448 citations


Additional excerpts

  • ...Applying Chernoff 's bound [4] we obtain...


01 Jan 1957
TL;DR: This paper investigates upper and lower bounds on the error probability for a modified decoding procedure in which the receiver lists L messages, rather than one, after reception. For large L, the average over all codes is almost as good as the best code.
Abstract: Shannon's fundamental coding theorem for noisy channels states that such a channel has a capacity C, and that for any transmission rate R less than C it is possible for the receiver to use a received sequence of n symbols to select one of the $2^{nR}$ possible transmitted sequences, with an error probability Pe which can be made arbitrarily small by increasing n, keeping R and C fixed. Recently, upper and lower bounds have been found for the best obtainable Pe as a function of C, R and n. This paper investigates this relationship for a modified decoding procedure, in which the receiver lists L messages, rather than one, after reception. In this case, for given C and R, it is possible to choose L large enough so that the ratio of upper and lower bounds to the error probability is arbitrarily near 1 for all large n. This implies that for large L the average of all codes is almost as good as the best code, and in fact that almost all codes are almost as good as the best code.

440 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...Shannon introduced the author to the work of Chernoff and Cramer (7,8), which makes the bound of Appendix B possible....


  • ...A result due to Chernoff (7), which may also be derived from Cramer (8), provides bounds on the tails of distributions of sums of random variables in terms of the moment-generating function of the parent distribution....


Book
25 Nov 2010
TL;DR: This book introduces the reader to a recent theory in Computer Vision that yields elementary techniques for analysing digital images. These techniques are inspired by, and are a mathematical formalization of, Gestalt theory, which had never before been formalized.
Abstract: This book introduces the reader to a recent theory in Computer Vision yielding elementary techniques to analyse digital images. These techniques are inspired by, and are a mathematical formalization of, Gestalt theory. Gestalt theory, a rigorous realm of vision psychology developed between 1923 and 1975, had never been formalized. From the mathematical viewpoint the closest field to it is stochastic geometry, involving basic probability and statistics in the context of image analysis. The authors maintain a public software package, MegaWave, containing implementations of most of the image analysis techniques developed in the book. The book is intended for researchers and engineers. It is mathematically self-contained and requires only basic notions of probability and calculus.

435 citations

Journal ArticleDOI
TL;DR: It is proved that if the complexity class co-NP is contained in IP[k] for some constant k, then the polynomial-time hierarchy collapses to the second level, and that if the Graph Isomorphism problem is NP-complete, then this hierarchy collapses.

434 citations

Proceedings ArticleDOI
21 Oct 1985
TL;DR: A bottom-up algorithm to handle trees which has two major advantages over the top-down approach: the control structure is straightforward and easier to implement, facilitating new algorithms that use fewer processors and less time; and problems for which it was too difficult or too complicated to find polylog parallel algorithms are now easy.
Abstract: Trees play a fundamental role in many computations, both for sequential and parallel problems. The classic paradigm applied to generate parallel algorithms in the presence of trees has been divide-and-conquer: finding a 1/3 - 2/3 separator and recursively solving the two subproblems. A now classic example is Brent's work on parallel evaluation of arithmetic expressions. This top-down approach has several complications, one of which is finding the separators. We define dynamic expression evaluation as the task of evaluating the expression with no free preprocessing. If we apply Brent's method, finding the separators seems to add a factor of log n to the running time. We give a bottom-up algorithm to handle trees; that is, all modifications to the tree are done locally. This bottom-up approach, which we call CONTRACT, has two major advantages over the top-down approach: (1) the control structure is straightforward and easier to implement, facilitating new algorithms using fewer processors and less time; and (2) problems for which it was too difficult or too complicated to find polylog parallel algorithms are now easy.
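The local, bottom-up idea can be illustrated with a small sequential sketch (the node layout and names are illustrative assumptions, not the paper's CONTRACT procedure): in each round, every internal node whose children are both leaves is folded in place into a leaf, mimicking one parallel "rake" round. Note that the full algorithm also compresses chains; raking alone needs as many rounds as the tree's depth on unbalanced trees.

```python
import operator

OPS = {'+': operator.add, '*': operator.mul}

def contract_eval(nodes):
    """Evaluate an expression tree by repeated local contraction.

    nodes: dict id -> ('leaf', value) or (op, left_id, right_id).
    Each while-iteration simulates one parallel round: every internal
    node with two leaf children is folded into a leaf, locally.
    """
    nodes = dict(nodes)
    while True:
        ready = [nid for nid, node in nodes.items()
                 if node[0] in OPS
                 and nodes[node[1]][0] == 'leaf'
                 and nodes[node[2]][0] == 'leaf']
        if not ready:
            break
        for nid in ready:
            op, left, right = nodes[nid]
            nodes[nid] = ('leaf', OPS[op](nodes[left][1], nodes[right][1]))
            del nodes[left], nodes[right]
    (root,) = nodes.values()   # only the contracted root remains
    return root[1]

# (1 + 2) * (3 + 4): both '+' nodes rake in round 1, '*' in round 2
tree = {0: ('*', 1, 2), 1: ('+', 3, 4), 2: ('+', 5, 6),
        3: ('leaf', 1), 4: ('leaf', 2), 5: ('leaf', 3), 6: ('leaf', 4)}
print(contract_eval(tree))  # → 21
```

On a balanced tree this takes O(log n) rounds; a left-deep chain degrades to O(n) rounds, which is exactly the gap the paper's compress step closes.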

433 citations
