Open Access · Journal Article · DOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the sum of Observations

Herman Chernoff
01 Dec 1952 · Vol. 23, Iss. 4, pp. 493-507
TLDR
In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to this form, and that each such test has an associated index $\rho$; for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test, where $e = \log \rho_1/\log \rho_2$.
Abstract
In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
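The central tail estimate — that $P(S_n \leqq na)$ behaves roughly like $m^n$, where $m$ is the minimum value of the moment generating function of $X - a$ — can be checked numerically. Below is a minimal sketch for the special case where $X$ is Bernoulli($p$) (a concrete distribution chosen here for illustration; the function names are not from the paper):

```python
import math

def chernoff_index(p, a, grid=10000):
    """Numerically minimize the MGF of X - a over t for X ~ Bernoulli(p):
    m = min_t E[exp(t(X - a))] = min_t exp(-t*a) * (p*exp(t) + 1 - p)."""
    best = float("inf")
    for i in range(grid):
        t = -10.0 + 20.0 * i / grid  # crude grid search over t in [-10, 10)
        val = math.exp(-t * a) * (p * math.exp(t) + 1 - p)
        best = min(best, val)
    return best

def exact_lower_tail(n, p, a):
    """Exact P(S_n <= n*a) for S_n ~ Binomial(n, p)."""
    k_max = math.floor(n * a)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_max + 1))

p, a, n = 0.5, 0.3, 100
m = chernoff_index(p, a)
bound = m**n                    # Chernoff estimate of the tail
exact = exact_lower_tail(n, p, a)

# For Bernoulli(p) the minimizing t has a closed form, giving
# m = (p/a)^a * ((1-p)/(1-a))^(1-a); this checks the grid search.
closed = (p / a)**a * ((1 - p) / (1 - a))**(1 - a)
```

Here $m^n$ is a valid upper bound on the exact tail, and $\frac{1}{n}\log P(S_n \leqq na)$ approaches $\log m$ as $n$ grows, which is the sense in which the probability "behaves roughly like" $m^n$.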


Citations
Book

A Study of Statistical Zero-Knowledge Proofs

TL;DR: This thesis is a detailed investigation of statistical zero-knowledge proofs, which are zero-knowledge proofs in which the condition that the verifier "learns nothing" is interpreted in a strong statistical sense.
Journal ArticleDOI

An analysis of reduced error pruning

TL;DR: This paper clarifies the different variants of the Reduced Error Pruning algorithm, brings new insight to its algorithmic properties, analyses the algorithm with less imposed assumptions than before, and includes the previously overlooked empty subtrees to the analysis.
Journal ArticleDOI

An asymptotic property of model selection criteria

TL;DR: It is shown that the optimal rate of convergence is simultaneously achieved for log-densities in Sobolev spaces $W_2^s(U)$ without knowing the smoothness parameter $s$ and norm parameter $U$ in advance.
Journal ArticleDOI

Efficient Estimates and Optimum Inference Procedures in Large Samples

TL;DR: In this article, various orders of efficiency are defined depending on degrees of closeness, and the properties of estimates satisfying these criteria are studied. It is found that, under some conditions, the maximum likelihood estimate has optimum properties that distinguish it from all other large-sample estimates when used as a substitute for the whole sample in drawing inferences about unknown parameters.
Book ChapterDOI

New Developments in Generalized Information Measures

TL;DR: This chapter discusses new developments in generalized information measures and presents the arithmetic-geometric mean divergence measure and its unified $(r, s)$-generalizations.