Open Access · Journal Article · DOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the sum of Observations

Herman Chernoff
Published 01 Dec 1952 · Vol. 23, Iss. 4, pp. 493-507
TLDR
In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to this form, and that for large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test.
Abstract
In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
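The abstract's central fact, that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum of the moment generating function of $X - a$, can be checked numerically. The following sketch (an illustration of the idea, not code from the paper) takes $X$ to be Bernoulli with parameter $p$, grid-minimizes the moment generating function of $X - a$, and compares $m$ against the $n$-th root of the exact binomial lower-tail probability; the specific values $p = 0.3$, $a = 0.2$, $n = 1000$ are arbitrary choices for the demonstration.

```python
import math

def chernoff_index(a, p, grid=200001, t_lo=-5.0, t_hi=0.0):
    """Grid-minimize the MGF of X - a for X ~ Bernoulli(p).

    Returns m = min_t E[exp(t(X - a))] = min_t exp(-t*a) * ((1-p) + p*exp(t)).
    For the lower-tail probability P(S_n <= n*a) with a < p,
    the minimizing t is negative, so the search runs over t <= 0.
    """
    best = float("inf")
    for i in range(grid):
        t = t_lo + (t_hi - t_lo) * i / (grid - 1)
        best = min(best, math.exp(-t * a) * ((1 - p) + p * math.exp(t)))
    return best

def binom_lower_tail(n, p, k_max):
    """Exact P(Binomial(n, p) <= k_max), with each term built in the log domain."""
    total = 0.0
    for k in range(k_max + 1):
        log_term = (math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1)
                    + k * math.log(p) + (n - k) * math.log(1 - p))
        total += math.exp(log_term)
    return total

p, a, n = 0.3, 0.2, 1000
m = chernoff_index(a, p)
# In the Bernoulli case m has the closed form exp(-KL(a || p)).
kl = a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))
tail = binom_lower_tail(n, p, int(n * a))
# m, exp(-kl), and tail**(1/n) nearly agree; tail**(1/n) sits just
# below m, consistent with the bound P(S_n <= n*a) <= m**n.
print(m, math.exp(-kl), tail ** (1 / n))
```

The gap between $m$ and the $n$-th root of the exact tail probability shrinks as $n$ grows, which is the sense in which $P(S_n \leqq na)$ "behaves roughly like" $m^n$: the agreement is on the exponential scale, up to a subexponential prefactor.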


Citations
Book ChapterDOI

On attribute efficient and non-adaptive learning of parities and DNF expressions

TL;DR: This article shows that attribute-efficient learning of parity functions with respect to the uniform distribution is equivalent to decoding high-rate random linear codes from a low number of errors, a long-standing open problem in coding theory.
Journal ArticleDOI

Measures of trajectory ensemble disparity in nonequilibrium statistical dynamics

TL;DR: This paper reviews the statistical and physical significance of several such measures, in particular the relative entropy (dissipation), Jeffreys divergence (hysteresis), Jensen-Shannon divergence (time-asymmetry), Chernoff divergence (work cumulant generating function), and Rényi divergence.
Journal ArticleDOI

Neighborhood-based uncertainty generation in social networks

TL;DR: A framework is introduced to transform an uncertain network into a deterministic weighted network, where the edge weights are measured by a Jaccard-like index, and a novel sampling scheme is proposed that enables the development of efficient algorithms for measuring uncertainty in networks.
Proceedings ArticleDOI

Efficient randomized algorithms for the repeated median line estimator

TL;DR: This paper presents the best known theoretical algorithm and a practical subquadratic algorithm for computing a 50% breakdown-point line estimator, the Siegel (repeated median) line estimator, including an $O(n \log n)$ randomized expected-time algorithm, where $n$ is the number of given points.
Journal ArticleDOI

Finite-key security analysis for quantum key distribution with leaky sources

TL;DR: This work provides a finite-key security analysis for QKD which is valid against arbitrary information leakage from the state preparation process of the legitimate users, and evaluates the security of a leaky decoy-state BB84 protocol with biased basis choice.