Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to a threshold test on the sum of observations, and that for large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test, where $e = \log \rho_1/\log \rho_2$.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
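Since the key quantity is concrete, a short numeric sketch may help: it minimizes the moment generating function of $X - a$ over $t$ to obtain $m$, then compares $m^n$ with the exact tail $P(S_n \leqq na)$. The Bernoulli distribution, the threshold $a = 0.25$, and the helper name `chernoff_m` are illustrative assumptions, not choices made in the paper.

```python
# Minimal sketch of the paper's central quantity: P(S_n <= n*a) behaves
# roughly like m**n, where m is the minimum over t of E[exp(t*(X - a))].
# The discrete distribution and threshold below are illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

def chernoff_m(support, probs, a):
    """Minimize the MGF of X - a over t for a discrete X (convex in t)."""
    x = np.asarray(support, dtype=float)
    p = np.asarray(probs, dtype=float)
    mgf = lambda t: np.sum(p * np.exp(t * (x - a)))
    return minimize_scalar(mgf).fun

# X takes values 0 and 1 with probability 1/2 each; threshold a = 0.25.
m = chernoff_m([0.0, 1.0], [0.5, 0.5], a=0.25)

# Compare m**n with the exact tail P(S_n <= n*a) for Binomial(n, 1/2):
for n in (50, 200, 800):
    exact = binom.cdf(np.floor(n * 0.25), n, 0.5)
    print(n, exact, m**n)   # both decay at the same exponential rate
```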
Citations
01 Jan 2008
TL;DR: This tutorial discusses a family of valid inequalities for integer programming formulations of a special but large class of chance-constrained problems; these inequalities have demonstrated significant computational advantages.
Abstract: Various applications in reliability and risk management give rise to optimization problems with constraints involving random parameters, which are required to be satisfied with a prespecified probability threshold. There are two main difficulties with such chance-constrained problems. First, checking feasibility of a given candidate solution exactly is, in general, impossible because this requires evaluating quantiles of random functions. Second, the feasible region induced by chance constraints is, in general, nonconvex, leading to severe optimization challenges. In this tutorial, we discuss an approach based on solving approximating problems using Monte Carlo samples of the random data. This scheme can be used to yield both feasible solutions and statistical optimality bounds with high confidence using modest sample sizes. The approximating problem is itself a chance-constrained problem, albeit with a finite distribution of modest support, and is an NP-hard combinatorial optimization problem. We adopt integer-programming-based methods for its solution. In particular, we discuss a family of valid inequalities for integer programming formulations of a special but large class of chance-constrained problems that have demonstrated significant computational advantages.
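As a rough illustration of the sampled approximation described above, the sketch below replaces a chance constraint $P(g(x, \xi) \leq 0) \geq 1 - \epsilon$ with its empirical counterpart over $N$ Monte Carlo scenarios. The linear constraint function, the scenario distribution, $\epsilon$, and the helper name `empirical_feasible` are hypothetical stand-ins; the integer-programming machinery the tutorial develops for optimizing over such samples is not shown.

```python
# Sketch of the sample approximation of a chance constraint: accept a
# candidate x if g(x, xi) <= 0 holds in at least a (1 - eps) fraction of
# N Monte Carlo scenarios.  All concrete choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def empirical_feasible(x, scenarios, eps):
    """Empirical version of P(g(x, xi) <= 0) >= 1 - eps over sampled scenarios."""
    g = x @ scenarios.T - 1.0          # toy constraint: x . xi <= 1
    return np.mean(g <= 0.0) >= 1.0 - eps

scenarios = rng.normal(loc=1.0, scale=0.3, size=(10_000, 2))  # N draws of xi
print(empirical_feasible(np.array([0.4, 0.4]), scenarios, eps=0.05))
```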

137 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • Recall that by the Chernoff inequality [9], for $k > Np$, $B(k; q, N) \geq 1 - \exp\{-N(k/N - q)^2/(2q)\}$.
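For context, the sketch below illustrates the Chernoff technique this inequality instantiates, using the textbook KL-divergence form of the bound for a binomial upper tail. The KL form and all parameters are my illustrative choices for comparison, not the specific constant quoted by the citing paper.

```python
# Classical Chernoff bound for a Binomial(N, q) count X: for k/N > q,
# P(X >= k) <= exp(-N * D(k/N || q)), where D is the Bernoulli KL divergence.
# Parameters below are illustrative.
import numpy as np
from scipy.stats import binom

def kl_bernoulli(p, q):
    """KL divergence D(p || q) between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

N, q, k = 1000, 0.30, 380            # k/N = 0.38 > q = 0.30
exact_tail = binom.sf(k - 1, N, q)   # P(X >= k), exact
chernoff = np.exp(-N * kl_bernoulli(k / N, q))
print(exact_tail, chernoff)          # exact tail sits below the Chernoff bound
```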

01 Jan 2016
TL;DR: A powerful theory concerning the existence and value of $\lim_{n\to\infty} a_n(J)$, developed by Lanford for the case when $V$ is finite-dimensional and $X_1$ is bounded, is both expounded and extended to the general case in this paper.
Abstract: Let $X_1, X_2, \cdots$ be a sequence of i.i.d. random vectors taking values in a space $V$, let $\bar{X}_n = (X_1 + \cdots + X_n)/n$, and for $J \subset V$ let $a_n(J) = n^{-1} \log P(\bar{X}_n \in J)$. A powerful theory concerning the existence and value of $\lim_{n\to\infty} a_n(J)$ has been developed by Lanford for the case when $V$ is finite-dimensional and $X_1$ is bounded. The present paper is both an exposition of Lanford's theory and an extension of it to the general case. A number of examples are considered; these include the cases when $X_1$ is a Brownian motion or Brownian bridge on the real line, and the case when $\bar{X}_n$ is the empirical distribution function based on the first $n$ values in an i.i.d. sequence of random variables (the Sanov problem).
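In the simplest one-dimensional Cramér setting the limit can be seen numerically. The sketch below takes $X_1$ standard normal and $J = \lbrack a, \infty)$, where $\bar{X}_n \sim N(0, 1/n)$ makes $a_n(J)$ exactly computable and the limit equals $-a^2/2$; these concrete choices are illustrative, not the paper's general setting.

```python
# Illustration of a_n(J) = n^{-1} log P(Xbar_n in J) converging to the rate
# -a**2/2 for standard normal summands and J = [a, infinity).
import numpy as np
from scipy.stats import norm

a = 1.0
for n in (10, 100, 1000, 10_000):
    # Xbar_n ~ N(0, 1/n), so P(Xbar_n >= a) = P(Z >= a * sqrt(n)).
    log_p = norm.logsf(a * np.sqrt(n))
    print(n, log_p / n)              # approaches -a**2/2 = -0.5
```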

137 citations

Journal ArticleDOI
TL;DR: A physically meaningful distinguishability measure and its corresponding metric in the space of states are defined, and the latter is shown to coincide with the Wigner-Yanase metric.
Abstract: Hypothesis testing is a fundamental issue in statistical inference and has been a crucial element in the development of information sciences. The Chernoff bound gives the minimal Bayesian error probability when discriminating two hypotheses given a large number of observations. Recently the combined work of Audenaert et al. [Phys. Rev. Lett. 98, 160501 (2007)] and Nussbaum and Szkola [e-print arXiv:quant-ph/0607216] has proved the quantum analog of this bound, which applies when the hypotheses correspond to two quantum states. Based on this quantum Chernoff bound, we define a physically meaningful distinguishability measure and its corresponding metric in the space of states; the latter is shown to coincide with the Wigner-Yanase metric. Along the same lines, we define a second, more easily implementable, distinguishability measure based on the error probability of discrimination when the same local measurement is performed on every copy. We study some general properties of these measures, including the probability distribution of density matrices, defined via the volume element induced by the metric. It is shown that the Bures and the local-measurement based metrics are always proportional. Finally, we illustrate their use in the paradigmatic cases of qubits and Gaussian infinite-dimensional states.
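As a sketch of the quantity underlying this distinguishability measure, the code below computes the quantum Chernoff quantity $Q(\rho, \sigma) = \min_{0 \leq s \leq 1} \mathrm{Tr}(\rho^s \sigma^{1-s})$ for two single-qubit density matrices. The two states and the helper names are arbitrary illustrative choices.

```python
# Quantum Chernoff quantity Q(rho, sigma) = min_{s in [0,1]} Tr(rho^s sigma^(1-s))
# for two qubit density matrices, via eigendecomposition.  States are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def mat_power(rho, s):
    """Fractional power of a positive semidefinite matrix."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)        # guard tiny negative eigenvalues
    return (v * w**s) @ v.conj().T

def quantum_chernoff(rho, sigma):
    f = lambda s: np.trace(mat_power(rho, s) @ mat_power(sigma, 1 - s)).real
    return minimize_scalar(f, bounds=(0.0, 1.0), method="bounded").fun

rho   = np.array([[0.9, 0.0], [0.0, 0.1]])          # diagonal qubit state
sigma = 0.5 * np.array([[1.0, 0.8], [0.8, 1.0]])    # coherent qubit state
print(quantum_chernoff(rho, sigma))  # in (0, 1]; smaller = more distinguishable
```

The minimal Bayesian error probability over $n$ copies then decays as $Q^n$ up to a prefactor, mirroring the classical $m^n$ behavior in the original paper.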

136 citations

Journal ArticleDOI
TL;DR: In this paper, the asymptotic normality of the likelihood ratio goodness-of-fit statistic is demonstrated for testing the fit of log-linear models with closed form maximum likelihood estimates in sparse contingency tables.
Abstract: The asymptotic normality of the likelihood ratio goodness-of-fit statistic is demonstrated for testing the fit of log-linear models with closed form maximum likelihood estimates in sparse contingency tables. Unlike the traditional chi-squared theory, the number of categories in the table increases as the sample size increases, but not all of the expected frequencies are required to become large. Some results of a small Monte Carlo study are presented. The traditional chi-squared approximation is reasonably accurate for the Pearson statistic for many sparse tables, but cases are presented for which it fails. The normal approximation can be much more accurate than the chi-squared approximation for the likelihood ratio statistic, but the bias of estimated moments is a potential problem for very sparse tables.
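The two statistics contrasted above are easy to compute side by side; the sketch below evaluates the Pearson $X^2$ and likelihood ratio $G^2$ statistics for a small sparse two-way table under the independence model. The table is invented for illustration and is not data from the paper.

```python
# Pearson X^2 and likelihood ratio G^2 for a sparse two-way contingency
# table under the independence log-linear model.  Table is illustrative.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[5, 1, 0, 2],
                  [1, 0, 3, 1],
                  [2, 1, 1, 0]])    # many small and zero cells: "sparse"

x2, _, dof, _ = chi2_contingency(table, correction=False)
g2, _, _, _ = chi2_contingency(table, correction=False,
                               lambda_="log-likelihood")
print(f"Pearson X^2 = {x2:.2f}, G^2 = {g2:.2f}, dof = {dof}")
```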

134 citations

Journal ArticleDOI
TL;DR: In this paper, it was shown that the $k$th largest spacing $K_n$ induced by the order statistics of $n - 1$ independent uniform random variables on $\lbrack 0, 1 \rbrack$ satisfies $\limsup (nK_n - \log n)/(2 \log_2 n) = 1/k$ almost surely, where $\log_j$ is the $j$ times iterated logarithm.
Abstract: Let $X_1, X_2, \cdots$ be a sequence of independent uniformly distributed random variables on $\lbrack 0, 1\rbrack$, and let $K_n$ be the $k$th largest spacing induced by the order statistics of $X_1, \cdots, X_{n - 1}$. We show that $\limsup (nK_n - \log n)/(2 \log_2 n) = 1/k \quad\text{almost surely},$ and $\liminf (nK_n - \log n + \log_3 n) = c \quad\text{almost surely},$ where $-\log 2 \leq c \leq 0$, and $\log_j$ is the $j$ times iterated logarithm.
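A quick simulation can illustrate the scaling in these results, though a single run cannot, of course, exhibit an almost-sure limsup over the whole sequence. The sketch below draws $n - 1$ uniform points, extracts the $k$th largest spacing $K_n$, and prints the normalized statistic $(nK_n - \log n)/(2 \log_2 n)$; the choices of $k$, $n$, seed, and the helper name are arbitrary.

```python
# Normalized k-th largest spacing among the n spacings induced by n-1
# i.i.d. Uniform[0,1] points.  The theorem says the limsup over n of the
# printed quantity is 1/k almost surely; one run only shows the scaling.
import numpy as np

rng = np.random.default_rng(1)

def kth_largest_spacing(n, k):
    pts = np.sort(rng.random(n - 1))
    spacings = np.diff(np.concatenate(([0.0], pts, [1.0])))  # n spacings
    return np.sort(spacings)[-k]

k = 2
for n in (10**4, 10**5, 10**6):
    Kn = kth_largest_spacing(n, k)
    print(n, (n * Kn - np.log(n)) / (2 * np.log(np.log(n))))
```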

134 citations
