Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to this form, and that for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
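The quantity behind the index can be sketched numerically: minimize the moment generating function of $X - a$ over $t$ to obtain $m$, the base of the $m^n$ bound on $P(S_n \leqq na)$. A minimal sketch follows, assuming a Bernoulli $X$ as an illustrative choice (the distribution, parameter values, and function names are not from the paper); for the Bernoulli case the minimum has the closed form $m = e^{-KL(a\|p)}$, which the grid search can be checked against.

```python
import numpy as np

def chernoff_m(mgf, a, t_grid):
    # m = min_t E[exp(t * (X - a))]: the base of the m^n bound on P(S_n <= n a).
    return min(mgf(t) * np.exp(-t * a) for t in t_grid)

# Illustrative choice (not from the paper): X ~ Bernoulli(p), threshold a < p,
# so the lower-tail minimizer lies at some t <= 0.
p, a = 0.5, 0.3
mgf = lambda t: (1.0 - p) + p * np.exp(t)            # E[exp(t X)] for Bernoulli(p)
m = chernoff_m(mgf, a, np.linspace(-10.0, 0.0, 2001))

# Closed form for the Bernoulli case: m = exp(-KL(a || p)).
kl = a * np.log(a / p) + (1.0 - a) * np.log((1.0 - a) / (1.0 - p))
```

Two tests with indices $\rho_1$ and $\rho_2$ then compare through $e = \log \rho_1/\log \rho_2$, computable by running `chernoff_m` once per test.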
Citations
Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of discriminating between two different states of a finite quantum system in the setting of large numbers of copies, and find a closed form expression for the asymptotic exponential rate at which the error probability tends to zero.
Abstract: We consider the problem of discriminating between two different states of a finite quantum system in the setting of large numbers of copies, and find a closed form expression for the asymptotic exponential rate at which the error probability tends to zero. This leads to the identification of the quantum generalisation of the classical Chernoff distance, which is the corresponding quantity in classical symmetric hypothesis testing. The proof relies on two new techniques introduced by the authors, which are also well suited to tackle the corresponding problem in asymmetric hypothesis testing, yielding the quantum generalisation of the classical Hoeffding bound. This has been done by Hayashi and Nagaoka for the special case where the states have full support. The goal of this paper is to present the proofs of these results in a unified way and in full generality, allowing hypothesis states with different supports. From the quantum Hoeffding bound, we then easily derive quantum Stein’s Lemma and quantum Sanov’s theorem. We give an in-depth treatment of the properties of the quantum Chernoff distance, and argue that it is a natural distance measure on the set of density operators, with a clear operational meaning.

230 citations
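The quantum Chernoff distance identified in the abstract above is $-\log \min_{0 \le s \le 1} \mathrm{Tr}\,\rho^s \sigma^{1-s}$; for full-support states it can be evaluated by eigendecomposition and a grid search over $s$. The sketch below assumes two illustrative diagonal qubit states (not taken from the paper); degenerate supports, which the paper treats in full generality, are outside this sketch.

```python
import numpy as np

def mat_pow(rho, s):
    # Fractional power of a positive semidefinite matrix via eigendecomposition.
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)
    return (v * w**s) @ v.conj().T

def quantum_chernoff_distance(rho, sigma, n_grid=501):
    # xi_QCB = -log min_{0<=s<=1} Tr(rho^s sigma^(1-s)); full-support sketch only.
    ss = np.linspace(0.0, 1.0, n_grid)
    q = min(np.trace(mat_pow(rho, s) @ mat_pow(sigma, 1.0 - s)).real for s in ss)
    return -np.log(q)

# Two illustrative (assumed) full-rank qubit states.
rho   = np.array([[0.9, 0.0], [0.0, 0.1]])
sigma = np.array([[0.6, 0.0], [0.0, 0.4]])
xi = quantum_chernoff_distance(rho, sigma)
```

For commuting states, as here, the quantity reduces to the classical Chernoff distance between the eigenvalue distributions.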

Journal ArticleDOI
TL;DR: This paper provides an annotated bibliography for investigations based on or related to divergence measures for statistical data processing and inference problems.

229 citations

Proceedings ArticleDOI
01 Feb 1989
TL;DR: It is proved that learning Boolean formulae, finite automata, and constant-depth threshold circuits (simplified neural nets) is computationally as difficult as the quadratic residue problem, inverting the RSA function, and factoring Blum integers.

227 citations

Journal ArticleDOI
TL;DR: A general class of Bayesian lower bounds on moments of the error in parameter estimation is formulated, and it is shown that the Cramér-Rao, the Bhattacharyya, the Bobrovsky-Zakai, and the Weiss-Weinstein lower bounds are special cases in the class.
Abstract: A general class of Bayesian lower bounds on moments of the error in parameter estimation is formulated, and it is shown that the Cramér-Rao, the Bhattacharyya, the Bobrovsky-Zakai, and the Weiss-Weinstein lower bounds are special cases in the class. The bounds can be applied to the estimation of vector parameters and any given function of the parameters. The extension of these bounds to multiple parameters is discussed.

223 citations
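The simplest member of the class named in the abstract above, the Cramér-Rao bound, can be checked by Monte Carlo: for $n$ samples of $N(\theta, \sigma^2)$ the bound on the mean-squared error is $\sigma^2/n$, attained by the sample mean. The Gaussian setup and parameter values below are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, trials = 2.0, 50, 20000
theta = 1.0

# Cramer-Rao bound for estimating the mean of N(theta, sigma^2) from n samples.
crb = sigma**2 / n

# The sample mean is an efficient estimator: its MSE attains the bound.
est = rng.normal(theta, sigma, size=(trials, n)).mean(axis=1)
mse = np.mean((est - theta)**2)
```

The Bayesian bounds of the paper tighten or generalize this baseline; this sketch only verifies the classical special case.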

Journal ArticleDOI
TL;DR: In this paper, the authors study how decision-makers' concerns about robustness affect prices and quantities in a stochastic growth model and show that the dynamic evolution of the risk-return trade-off is dominated by movements in the growth state probabilities.
Abstract: We study how decision-makers’ concerns about robustness affect prices and quantities in a stochastic growth model. In the model economy, growth rates in technology are altered by infrequent large shocks and continuous small shocks. An investor observes movements in the technology level but cannot perfectly distinguish their sources. Instead the investor solves a signal extraction problem. We depart from most of the macroeconomics and finance literature by presuming that the investor treats the specification of technology evolution as an approximation. To promote a decision rule that is robust to model misspecification, an investor acts as if a malevolent player threatens to perturb the actual data-generating process relative to his approximating model. We study how a concern about robustness alters asset prices. We show that the dynamic evolution of the risk-return trade-off is dominated by movements in the growth-state probabilities and that the evolution of the dividend-price ratio is driven primarily by the capital-technology ratio. This article shows how decision-makers’ concerns about model misspecification can affect prices and quantities in a dynamic economy. We use the familiar stochastic growth model of Brock and Mirman (1972) and Merton (1975) as a laboratory. Technology is specified as a continuous-time hidden Markov model (HMM), inducing investors to make inferences about the growth rate. They form their opinions about the growth rate from current and past observations of technology that are clouded by concurrently evolving small shocks modeled as Brownian motions. We show how investors’ desire to make their decision rules robust to misspecification of the evo

221 citations
