Open Access · Journal Article · DOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the sum of Observations

Herman Chernoff
01 Dec 1952 · Vol. 23, Iss. 4, pp. 493-507
TLDR
In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to a test based on a sum of observations, and that each such test has an associated index $\rho$: for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test, where $e = \log \rho_1/\log \rho_2$.
Abstract
In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular, the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests, $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$, where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
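The key fact in the abstract — that $P(S_n \leqq na)$ behaves roughly like $m^n$, with $m$ the minimum of the moment generating function of $X - a$ — can be checked numerically. The sketch below is illustrative and not from the paper: it takes $X$ to be Bernoulli$(p)$ (so the exact tail is a binomial sum), finds $m = \min_t e^{-ta}\,E[e^{tX}]$ by a simple grid search, and compares the bound $m^n$ with the exact tail probability. The function name `chernoff_index`, the grid-search minimization, and the parameter values are all assumptions made for this example.

```python
import math

def chernoff_index(mgf, a, t_lo=-20.0, t_hi=0.0, steps=20000):
    """Numerically minimize the MGF of X - a over t:
    m = min_t e^{-ta} E[e^{tX}].
    For the lower tail P(S_n <= na) with a < E[X], the minimizing t is <= 0,
    so a grid over [t_lo, 0] suffices for this illustration."""
    best = float("inf")
    for i in range(steps + 1):
        t = t_lo + (t_hi - t_lo) * i / steps
        best = min(best, math.exp(-t * a) * mgf(t))
    return best

# Illustrative choice: X ~ Bernoulli(p), whose MGF is (1-p) + p e^t.
p, a, n = 0.5, 0.3, 100
m = chernoff_index(lambda t: (1 - p) + p * math.exp(t), a)

# Exact lower-tail probability P(S_n <= na) from the binomial distribution.
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(int(n * a) + 1))

print(f"index m        = {m:.6f}")
print(f"bound m^n      = {m**n:.3e}")
print(f"exact tail     = {exact:.3e}")
```

For Bernoulli$(p)$ the minimization has the closed form $m = (p/a)^a\,((1-p)/(1-a))^{1-a}$, and the printed values show $m^n$ bounding the exact tail from above while decaying at the same exponential rate, which is exactly the sense in which $P(S_n \leqq na)$ "behaves roughly like $m^n$."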


Citations
Posted Content

What Can We Learn Privately?

TL;DR: In this paper, it was shown that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model.
Journal Article · DOI

A quartet of semigroups for model specification, robustness, prices of risk, and model detection

TL;DR: In this article, the authors use a statistical theory of detection to quantify how much model misspecification the decision maker should fear, given his historical data record, and establish a tight link between the market price of uncertainty and a bound on the error in statistically discriminating between an approximating and a worst case model.
Journal Article · DOI

The Efficiency of Some Nonparametric Competitors of the t-Test

TL;DR: In this article, the authors show that the Pitman efficiency of the Kruskal-Wallis test never falls below 0.864, and that the same result holds for the location parameter of a single symmetric distribution.