A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations
TLDR
In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to this form, and that for large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test.
Abstract:
In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
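The key fact above, that $P(S_n \leqq na)$ behaves roughly like $m^n$ with $m = \min_t E[e^{t(X-a)}]$, can be checked numerically. The sketch below (a minimal illustration, with $X \sim \mathrm{Bernoulli}(0.5)$, threshold $a = 0.3$, and $n = 200$ chosen purely for the example) minimizes the moment generating function of $X - a$ over a grid of $t$ values and compares the resulting bound $m^n$ against the exact binomial lower-tail probability:

```python
import math

def mgf_min_bernoulli(p, a):
    """Minimize m(t) = E[exp(t(X - a))] for X ~ Bernoulli(p) over a
    grid of t values (a simple sketch; only t <= 0 matters for the
    lower tail P(S_n <= n*a) when a is below the mean)."""
    ts = (i / 1000.0 for i in range(-8000, 1))
    return min(math.exp(-t * a) * ((1 - p) + p * math.exp(t)) for t in ts)

p, a, n = 0.5, 0.3, 200
m = mgf_min_bernoulli(p, a)
bound = m ** n

# Exact lower-tail probability P(S_n <= n*a) for comparison;
# with p = 0.5 every outcome of S_n has weight 0.5**n.
k = math.floor(n * a)
exact = sum(math.comb(n, j) for j in range(k + 1)) * 0.5 ** n

print(f"m = {m:.4f}, bound m^n = {bound:.3e}, exact = {exact:.3e}")
```

Running this shows $m \approx 0.921$, so $m^n$ is already tiny at $n = 200$, and the exact tail probability sits below the bound while sharing its exponential decay rate, which is exactly the sense in which $P(S_n \leqq na)$ "behaves roughly like" $m^n$.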
Citations
Journal Article
Deterministic approximations of probability inequalities
TL;DR: A simple general framework for deriving explicit deterministic approximations of probability inequalities of the formP(ξ⩾a) ⩽ α is presented and approximate deterministic surrogates for these problems are provided.
Journal Article
Learning Boolean formulas
TL;DR: It is proved that monomials cannot be efficiently learned from negative examples alone, even if the negative examples are uniformly distributed.
Proceedings Article
Two prover protocols: low error at affordable rates
Uriel Feige, Joe Kilian, et al.
TL;DR: An upper bound on the number of parallel repetitions that suffice in order to reduce the error of miss-match proof systems from p to � is introduced.
Book Chapter
Commitment Capacity of Discrete Memoryless Channels
TL;DR: In this paper, the problem of characterising the optimal rate at which a discrete memoryless channel can be used for bit commitment was investigated, and it was shown that the answer is very intuitive: it is the maximum equivocation of the channel (after removing trivial redundancy), even when unlimited noiseless bidirectional side communication is allowed.
Journal Article
Quantum limits on postselected, probabilistic quantum metrology
TL;DR: In this paper, it was shown that probabilistic metrology can never improve quantum limits on estimation of a single parameter, both on average and asymptotically in number of trials, if performance is judged relative to mean-square estimation error.