Open Access · Journal Article

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

Herman Chernoff
01 Dec 1952
Vol. 23, Iss. 4, pp. 493-507
TLDR
In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to the form "reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k$", and that each such test has an associated index $\rho$: for large samples, a sample of size $n$ with one test gives about the same probabilities of error as a sample of size $en$ with a second test, where $e = \log \rho_1/\log \rho_2$.
Abstract
In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
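
To make the abstract's key quantity concrete: the index is obtained by minimizing the moment generating function of $X - a$, and $P(S_n \leqq na)$ then decays roughly like $m^n$. The sketch below is a minimal numerical illustration, not code from the paper: it assumes $X \sim \mathrm{Bernoulli}(p)$ (the distribution, the parameter values, and the function names chernoff_index and tail_probability are all illustrative choices) and compares $m^n$ against a Monte Carlo estimate of the tail probability.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_index(p, a):
    """m = min over t of E[exp(t(X - a))] for X ~ Bernoulli(p).

    For Bernoulli(p), E[exp(t(X - a))] = exp(-t*a) * (1 - p + p*exp(t)).
    """
    mgf = lambda t: np.exp(-t * a) * (1.0 - p + p * np.exp(t))
    # The mgf is convex in t, so a bounded scalar minimization suffices.
    res = minimize_scalar(mgf, bounds=(-50.0, 50.0), method="bounded")
    return res.fun

def tail_probability(p, a, n, trials=1_000_000, seed=0):
    """Monte Carlo estimate of P(S_n <= n*a), S_n a sum of n Bernoulli(p) draws."""
    rng = np.random.default_rng(seed)
    s = rng.binomial(n, p, size=trials)  # each draw is one realization of S_n
    return float(np.mean(s <= n * a))

if __name__ == "__main__":
    p, a, n = 0.5, 0.3, 100          # illustrative values, not from the paper
    m = chernoff_index(p, a)
    print(f"index m                  = {m:.4f}")
    print(f"rate-style estimate m^n  = {m**n:.3e}")
    print(f"Monte Carlo P(S_n<=na)   = {tail_probability(p, a, n):.3e}")
```

The two printed tail numbers agree only in their exponential rate, which is the sense of "behaves roughly like" in the abstract; for two competing test statistics one would compute their indices $\rho_1$ and $\rho_2$ in the same way and take $e = \log \rho_1/\log \rho_2$ as the relative efficiency.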


Citations

Quantization for decentralized hypothesis testing under communication constraints

TL;DR: The following problem is addressed: given that the peripheral encoders satisfying the capacity constraints are scalar quantizers, how should they be designed so that the central test performed on their output indices is most powerful?

Randomized Strategies for Probabilistic Solutions of Uncertain Feasibility and Optimization Problems

TL;DR: A randomized algorithm is proposed that provides a probabilistic solution, circumventing the potential conservatism of previously derived bounds, and it is proved that the required sample size is inversely proportional to the accuracy for fixed confidence.

Remote preparation of quantum states

TL;DR: The paper includes an extensive discussion of the results, including the impact of the choice of model on the resources, the topic of obliviousness, and an application to private quantum channels and quantum data hiding.

Decentralized Detection With Censoring Sensors

TL;DR: The uncertainty in the distribution of the observations typically encountered in practice is addressed by determining the optimal sensor decision rules and fusion rule for three formulations: a robust formulation, generalized likelihood ratio tests, and a locally optimum formulation.

EL inference for partially identified models: Large deviations optimality and bootstrap validity

TL;DR: A canonical large deviations criterion for optimality is considered, and inference based on the empirical likelihood ratio statistic is shown to be optimal; a new empirical likelihood bootstrap is introduced that provides a valid resampling method for moment inequality models and overcomes the implementation challenges arising from non-pivotal limit distributions.