Open Access Journal Article

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

Herman Chernoff
01 Dec 1952 - The Annals of Mathematical Statistics, Vol. 23, Iss. 4, pp. 493-507
TL;DR
In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to a test that rejects $H_0$ when $S_n = \sum^n_{j=1} X_j \leq k$, and that each such test has an associated index $\rho$: for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second, where $e = \log \rho_1/\log \rho_2$.
Abstract
In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
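The key fact behind the index $\rho$, that $P(S_n \leqq na)$ behaves roughly like $m^n$ with $m$ the minimum of the moment generating function of $X - a$, is easy to check numerically. Below is a minimal sketch (our construction, not from the paper), assuming Python with NumPy and SciPy and taking $X$ to be Bernoulli($p$) for concreteness:

    # Numerical check (not from the paper): P(S_n <= n*a) should decay like m^n,
    # where m = min_t E[exp(t*(X - a))].  X ~ Bernoulli(p) is our choice here.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import binom

    p, a = 0.5, 0.3  # E[X] = p; we study the lower-tail event S_n <= n*a with a < p

    # MGF of X - a for Bernoulli(p): E[exp(t*(X - a))] = (1 - p + p*e^t) * e^(-t*a)
    mgf = lambda t: (1 - p + p * np.exp(t)) * np.exp(-t * a)
    m = minimize_scalar(mgf, bounds=(-50, 50), method="bounded").fun

    for n in (10, 100, 1000, 5000):
        k = round(n * a)  # equals floor(n*a) here, since n*a is an integer for these n
        tail = binom.cdf(k, n, p)  # exact P(S_n <= n*a)
        print(f"n={n:5d}  (1/n) log P = {np.log(tail) / n:+.5f}   log m = {np.log(m):+.5f}")

The printed per-observation exponent $(1/n)\log P$ approaches $\log m \approx -0.0823$ as $n$ grows, which is exactly the $m^n$ behavior the paper exploits.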


Citations
Proceedings Article

COOLCAT: an entropy-based algorithm for categorical clustering

TL;DR: The connection between clustering categorical data and entropy is explored: clusters of similar points have lower entropy than clusters of dissimilar ones. Building on this, an incremental heuristic algorithm, COOLCAT, is presented that efficiently clusters large data sets of records with categorical attributes, as well as data streams.
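COOLCAT's entropy criterion is concrete enough to sketch. The toy step below (our construction, not the authors' implementation; records are plain tuples) assigns each incoming record to the cluster whose size-weighted entropy, computed attribute by attribute, increases least:

    # Toy sketch of COOLCAT's incremental step (not the authors' code):
    # put each record in the cluster that raises the expected entropy the least.
    from collections import Counter
    from math import log

    def cluster_entropy(records):
        """Sum of per-attribute empirical entropies (attributes treated independently)."""
        if not records:
            return 0.0
        n = len(records)
        h = 0.0
        for j in range(len(records[0])):
            for count in Counter(r[j] for r in records).values():
                h -= (count / n) * log(count / n)
        return h

    def assign(record, clusters):
        """Append `record` to the cluster whose size-weighted entropy grows least."""
        def increase(c):
            return (len(c) + 1) * cluster_entropy(c + [record]) - len(c) * cluster_entropy(c)
        clusters[min(range(len(clusters)), key=lambda i: increase(clusters[i]))].append(record)

    clusters = [[("red", "round")], [("blue", "square")]]  # two seed clusters
    for rec in [("red", "oval"), ("blue", "round"), ("blue", "square")]:
        assign(rec, clusters)
    print(clusters)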

Compressive Sensing with structured random matrices

Holger Rauhut
TL;DR: These notes give a mathematical introduction to compressive sensing, focusing on recovery via ℓ1-minimization and structured random matrices, and on techniques for proving probabilistic estimates for condition numbers of structured random matrices.
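As a minimal illustration of the ℓ1 recovery problem these notes study (our sketch, not from the notes; it uses a plain Gaussian matrix rather than a structured one for brevity, and poses basis pursuit as a linear program via the split x = u - v):

    # Minimal basis-pursuit sketch: recover a sparse x from m < n random
    # measurements b = A x by solving  min ||x||_1  s.t.  A x = b,
    # rewritten as an LP over x = u - v with u, v >= 0.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m, s = 100, 40, 5  # ambient dimension, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    b = A @ x_true

    # LP: minimize 1^T u + 1^T v  subject to  A u - A v = b,  u, v >= 0
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    x_hat = res.x[:n] - res.x[n:]
    print("recovery error:", np.linalg.norm(x_hat - x_true))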
Journal Article

A lower bound for radio broadcast

TL;DR: This paper proves the existence of a family of radius-2 networks on n vertices for which any broadcast schedule requires at least Ω((log n / log log n)^2) rounds of transmissions.
Journal Article

A general lower bound on the number of examples needed for learning

TL;DR: In this paper, a lower bound of Ω((1/ε)ln(1/δ) + VCdim(C)/ε) was shown for the number of examples needed for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters.
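To get a feel for the bound's magnitude, a quick plug-in with illustrative values (the values are ours, and the suppressed constant means only the order of growth is meaningful):

    # Evaluate the lower bound Omega((1/eps)*ln(1/delta) + VCdim(C)/eps) for
    # sample values; the hidden constant is dropped, so read these as orders only.
    from math import log

    def lower_bound(eps, delta, vcdim):
        return (1 / eps) * log(1 / delta) + vcdim / eps

    for eps in (0.1, 0.01):
        print(f"eps={eps}: at least ~{lower_bound(eps, 0.05, 10):,.0f} examples (up to a constant)")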
Journal Article

A trade-off between space and efficiency for routing tables

TL;DR: It is proved that any routing scheme for general n-vertex networks that achieves a stretch factor k ≥ 1 must use a total of Ω(n^(1+1/(2k+4))) bits of routing information, establishing a trade-off between the efficiency of a routing scheme and its space requirements.