Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to this form, and that for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
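The index $\rho$ described in the abstract can be computed numerically for concrete distributions. The following is a minimal sketch, not from the paper: the Bernoulli example, the ternary search, and all parameter values are illustrative choices. It finds $m = \min_t E[e^{t(X-a)}]$ (the minimum of the moment generating function of $X - a$) and checks that the exact lower tail $P(S_n \leq na)$ is bounded by $m^n$:

```python
import math

def chernoff_m(p, a, t_lo=-20.0, t_hi=0.0, iters=200):
    # m = min_t E[exp(t*(X - a))] for X ~ Bernoulli(p).  For a < p the
    # minimizer lies at t < 0, so we ternary-search over [t_lo, 0];
    # the MGF is convex in t, which makes ternary search valid.
    def mgf(t):
        return math.exp(-t * a) * ((1 - p) + p * math.exp(t))
    lo, hi = t_lo, t_hi
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if mgf(m1) < mgf(m2):
            hi = m2
        else:
            lo = m1
    return mgf((lo + hi) / 2)

def lower_tail(n, p, a):
    # Exact P(S_n <= n*a) for S_n ~ Binomial(n, p).
    k_max = math.floor(n * a)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_max + 1))

p, a, n = 0.5, 0.3, 200
m = chernoff_m(p, a)
exact = lower_tail(n, p, a)
print(m, exact, m**n)   # the exact tail never exceeds m**n
```

The bound $m^n$ is loose by a polynomial factor, but its exponential rate $\log m$ matches the true tail, which is exactly what makes $\rho$ a useful efficiency index.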
Citations
Journal ArticleDOI
TL;DR: The notion of embeddings into voting rules is presented: functions that receive an agent's utility function and return the agent's vote. It is established that very low distortion can be obtained using randomized embeddings, especially when the number of agents is large compared to the number of alternatives.

106 citations


Additional excerpts

  • (Chernoff 1952) $\Pr[X \geq (1 + \delta)\mu] \leq \left(\frac{e}{1 + \delta}\right)^{(1+\delta)\mu}$

Proceedings ArticleDOI
09 Oct 1990
TL;DR: The results show that plurality voting is the most powerful of these techniques and is, in fact, optimal for a certain class of probability distributions.
Abstract: The problem of voting is studied for both the exact and inexact cases. Optimal solutions based on explicit computation of conditional probabilities are given. The most commonly used strategies, i.e. majority, median, and plurality, are compared quantitatively. The results show that plurality voting is the most powerful of these techniques and is, in fact, optimal for a certain class of probability distributions. An efficient method of implementing a generalized plurality voter when nonfaulty processes can produce differing answers is also given.
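A generalized plurality voter groups equal answers and returns a member of the largest group. A minimal sketch of that idea (the tie-breaking rule, first occurrence wins, is our assumption, not the paper's):

```python
from collections import Counter

def plurality_vote(answers):
    """Generalized plurality voter: return the most frequent answer.

    `answers` is a list of (possibly differing) outputs from redundant
    processes.  Ties are broken in favor of the answer that appears
    first in the list -- an illustrative choice only.
    """
    counts = Counter(answers)
    best = max(counts.values())
    for a in answers:          # first answer that reaches the top count
        if counts[a] == best:
            return a

print(plurality_vote([3, 5, 3, 7, 3]))  # -> 3
print(plurality_vote([1, 2, 2, 1]))     # -> 1 (tie, first occurrence)
```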

105 citations

Journal ArticleDOI
TL;DR: A Monte-Carlo algorithm is described that produces a "good in the relative sense" estimate of the permanent of A and has running time ${\operatorname{poly}}(n)2^{n/2}$, where ${\operatorname{poly}}(n)$ denotes a function that grows polynomially with n.
Abstract: Let A be an $n \times n$ matrix with 0-1 valued entries, and let ${\operatorname{per}}(A)$ be the permanent of A. This paper describes a Monte-Carlo algorithm that produces a “good in the relative sense” estimate of ${\operatorname{per}}(A)$ and has running time ${\operatorname{poly}}(n)2^{{n / 2}} $, where ${\operatorname{poly}}(n)$ denotes a function that grows polynomially with n.
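For context on why a ${\operatorname{poly}}(n)2^{n/2}$ estimator is an improvement: the fastest known exact method, Ryser's formula, takes time exponential in $n$. A minimal implementation of the exact formula (this is standard background, not the paper's randomized estimator):

```python
def permanent_ryser(a):
    # Ryser's formula: per(A) = (-1)^n * sum over nonempty column
    # subsets S of (-1)^|S| * prod_i (sum_{j in S} a[i][j]).
    # This straightforward version runs in O(2^n * n^2) time.
    n = len(a)
    total = 0
    for mask in range(1, 1 << n):          # each mask encodes a subset S
        prod = 1
        for i in range(n):
            prod *= sum(a[i][j] for j in range(n) if mask >> j & 1)
        bits = bin(mask).count("1")
        total += (-1) ** bits * prod
    return (-1) ** n * total

# The permanent of the all-ones 3x3 matrix is 3! = 6.
print(permanent_ryser([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # -> 6
```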

105 citations

Book ChapterDOI
Luc Devroye
01 Jan 1998
TL;DR: A partial overview of some results from the rich theory of branching processes is given and their use in the probabilistic analysis of algorithms and data structures is illustrated.
Abstract: We give a partial overview of some results from the rich theory of branching processes and illustrate their use in the probabilistic analysis of algorithms and data structures. The branching processes we discuss include the Galton-Watson process, the branching random walk, the Crump-Mode-Jagers process, and conditional branching processes. The applications include the analysis of the height of random binary search trees, random m-ary search trees, quadtrees, union-find trees, uniform random recursive trees and plane-oriented recursive trees. All these trees have heights that grow logarithmically in the size of the tree. A different behavior is observed for the combinatorial models of trees, where one considers the uniform distribution over all trees in a certain family of trees. In many cases, such trees are distributed like trees in a Galton-Watson process conditioned on the tree size. This fact allows us to review Cayley trees (random labeled free trees), random binary trees, random unary-binary trees, random oriented plane trees, and indeed many other species of uniform trees. We also review a combinatorial optimization problem first suggested by Karp and Pearl. The analysis there is particularly beautiful and shows the flexibility of even the simplest branching processes.
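The logarithmic height of random binary search trees mentioned in the abstract is easy to observe empirically. A minimal sketch (key range and seed are illustrative choices), measuring the height of a BST built from a random permutation:

```python
import math
import random

def random_bst_height(n, seed=0):
    """Height (in edges) of a BST built from a random permutation of n keys."""
    rng = random.Random(seed)
    keys = list(range(n))
    rng.shuffle(keys)
    left, right = {}, {}           # child pointers, keyed by parent value
    root = keys[0]
    height = 0
    for key in keys[1:]:
        node, depth = root, 0
        while True:
            child = left if key < node else right
            if node in child:      # descend toward the insertion point
                node, depth = child[node], depth + 1
            else:
                child[node] = key
                height = max(height, depth + 1)
                break
    return height

n = 10_000
h = random_bst_height(n)
print(h, 4.311 * math.log(n))  # height concentrates near c* ln n, c* ~ 4.311
```

The observed height sits between the information-theoretic minimum $\log_2 n$ and the asymptotic $c^* \ln n$ with $c^* \approx 4.311$, the constant the branching-process analysis delivers.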

104 citations

Book ChapterDOI
01 Sep 2010
TL;DR: In this article, the authors give a combinatorial proof of the Chernoff-Hoeffding concentration bound, which says that the sum of independent {0, 1}-valued random variables is highly concentrated around the expected value.
Abstract: We give a combinatorial proof of the Chernoff-Hoeffding concentration bound [9,16], which says that the sum of independent {0, 1}-valued random variables is highly concentrated around the expected value. Unlike the standard proofs, our proof does not use the method of higher moments, but rather uses a simple and intuitive counting argument. In addition, our proof is constructive in the following sense: if the sum of the given random variables is not concentrated around the expectation, then we can efficiently find (with high probability) a subset of the random variables that are statistically dependent. As simple corollaries, we also get the concentration bounds for [0, 1]-valued random variables and Azuma's inequality for martingales [4]. We interpret the Chernoff-Hoeffding bound as a statement about Direct Product Theorems. Informally, a Direct Product Theorem says that the complexity of solving all k instances of a hard problem increases exponentially with k; a Threshold Direct Product Theorem says that it is exponentially hard in k to solve even a significant fraction of the given k instances of a hard problem. We show the equivalence between optimal Direct Product Theorems and optimal Threshold Direct Product Theorems. As an application of this connection, we get the Chernoff bound for expander walks [12] from the (simpler to prove) hitting property [2], as well as an optimal (in a certain range of parameters) Threshold Direct Product Theorem for weakly verifiable puzzles from the optimal Direct Product Theorem [8]. We also get a simple constructive proof of Unger's result [38] saying that XOR Lemmas imply Threshold Direct Product Theorems.

104 citations
