Open Access · Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

Herman Chernoff
01 Dec 1952 · Vol. 23, Iss. 4, pp. 493-507
TLDR
In this paper, it is shown that each test of the form "reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k$" (to which the likelihood ratio test for fixed sample size can be reduced) has an associated index $\rho$, and that for large samples a sample of size $n$ with one such test gives about the same probabilities of error as a sample of size $en$ with another, where $e = \log \rho_1/\log \rho_2$.
Abstract
In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests, $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
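As a concrete illustration of the bound underlying the index, the following minimal sketch (a hypothetical example, not from the paper; it assumes Python with NumPy and SciPy and takes $X$ to be Bernoulli with success probability $p$) computes $m = \min_t E[e^{t(X - a)}]$ numerically and checks that $\frac{1}{n} \log P(S_n \leqq na)$ approaches $\log m$ as $n$ grows.

    # Hypothetical illustration, not from the paper: X ~ Bernoulli(p) with a
    # threshold a < p, so P(S_n <= n a) is a left-tail large-deviation probability.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import binom

    p, a = 0.6, 0.4

    def mgf_shifted(t):
        # Moment generating function of X - a:  E[exp(t (X - a))]
        return np.exp(-t * a) * ((1.0 - p) + p * np.exp(t))

    # m = min over t of E[exp(t (X - a))]; the abstract's key fact is P(S_n <= n a) ~ m^n.
    m = minimize_scalar(mgf_shifted, bounds=(-50.0, 50.0), method="bounded").fun

    for n in (50, 200, 800):
        tail = binom.cdf(round(n * a), n, p)  # exact P(S_n <= n a) for Bernoulli X
        print(f"n={n:4d}  (1/n) log P = {np.log(tail) / n:+.5f}   log m = {np.log(m):+.5f}")

For this Bernoulli example, $\log m$ equals minus the Kullback-Leibler divergence between the Bernoulli($a$) and Bernoulli($p$) distributions, roughly $-0.081$, and the printed ratios move toward that value as $n$ increases.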


Citations
Journal ArticleDOI

Modulating robustness in control design: Principles and algorithms

TL;DR: In this paper, the authors describe quantitative tools to guide the user's choice toward a suitable compromise when the cost is probabilistic, i.e., when a cost value can only be guaranteed with a certain probability.
Book ChapterDOI

Competitive Auctions for Multiple Digital Goods

TL;DR: This paper presents auctions that are competitive for multiple items (e.g., concurrent broadcast of several movies); these auctions are more sophisticated than in the single-item case and require solving an interesting optimization problem.
Journal ArticleDOI

How Generalizable Is Your Experiment? An Index for Comparing Experimental Samples and Populations

TL;DR: Although a large-scale experiment can provide an estimate of the average causal impact for a program, the sample of sites included in the experiment is often not drawn randomly from the inference population.
Book

Scalable Algorithms for Data and Network Analysis

TL;DR: This tutorial surveys a family of algorithmic techniques for the design of provably good scalable algorithms and illustrates their use on a few basic problems that are fundamental in network analysis, particularly the identification of significant nodes and coherent clusters/communities in social and information networks.