Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to a test based on the sum of observations, that each such test has an associated index $\rho$, and that for large samples a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test, where $e = \log \rho_1/\log \rho_2$.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
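
The bound behind the index $\rho$ is easy to check numerically. Below is a minimal sketch (my own illustration, not from the paper; it assumes numpy and scipy are available, and X ~ Bernoulli(p) with threshold a < p is just an example) comparing the exact tail $P(S_n \leqq na)$ with $m^n$:

```python
# Illustrative check (not from the paper) of Chernoff's bound
# P(S_n <= n*a) <= m^n, where m is the minimum value assumed by the
# moment generating function of X - a, here for X ~ Bernoulli(p).
import numpy as np
from scipy.stats import binom

p, a, n = 0.5, 0.3, 200             # example values, chosen arbitrarily

t = np.linspace(-20.0, 0.0, 20001)  # minimize the MGF over t <= 0
mgf = np.exp(-t * a) * (1 - p + p * np.exp(t))
m = mgf.min()

exact = binom.cdf(np.floor(n * a), n, p)  # exact P(S_n <= n*a)
print(f"bound m^n = {m**n:.3e}, exact tail = {exact:.3e}")
# Both decay at the same exponential rate: log(exact)/n -> log(m).
```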
Citations
Proceedings ArticleDOI
10 Jul 2016
TL;DR: The theory of optimally weighted ensemble estimation is generalized to derive two estimators that achieve the parametric rate when the densities are sufficiently smooth, and an empirical estimator of Rényi-α divergence is proposed that outperforms the standard kernel density plug-in estimator, especially in higher dimensions.
Abstract: Recent work has focused on the problem of non-parametric estimation of divergence functionals. Many existing approaches are restrictive in their assumptions on the density support or require difficult calculations at the support boundary which must be known a priori. We derive the MSE convergence rate of a leave-one-out kernel density plug-in divergence functional estimator for general bounded density support sets where knowledge of the support boundary is not required. We generalize the theory of optimally weighted ensemble estimation to derive two estimators that achieve the parametric rate when the densities are sufficiently smooth. The asymptotic distribution of these estimators and tuning parameter selection guidelines are provided. Based on the theory, we propose an empirical estimator of Rényi-α divergence that outperforms the standard kernel density plug-in estimator, especially in higher dimensions.
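
As a point of reference for what is being improved upon, here is a hedged one-dimensional sketch of the baseline leave-one-out kernel density plug-in estimator (the function names, fixed bandwidth h, and Gaussian toy check are my own; the paper's ensemble-weighted estimators are not reproduced here):

```python
# Hedged sketch of the baseline plug-in estimator of the Renyi-alpha
# divergence D_alpha(p||q) = log(E_p[(p/q)^(alpha-1)]) / (alpha - 1),
# using leave-one-out Gaussian kernel density estimates in 1-D.
import numpy as np

def gauss_kde(queries, data, h):
    """Gaussian kernel density estimate of `data` evaluated at `queries`."""
    diffs = (queries[:, None] - data[None, :]) / h
    return np.exp(-0.5 * diffs**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def renyi_divergence_plugin(x, y, alpha=0.8, h=0.3):
    n = len(x)
    diffs = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * diffs**2) / (h * np.sqrt(2 * np.pi))
    p_hat = (k.sum(axis=1) - k.diagonal()) / (n - 1)  # leave-one-out at x_i
    q_hat = gauss_kde(x, y, h)
    return np.log(np.mean((p_hat / q_hat) ** (alpha - 1))) / (alpha - 1)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 2000)  # samples from p = N(0, 1)
y = rng.normal(0.5, 1.0, 2000)  # samples from q = N(0.5, 1)
# Closed form for equal-variance Gaussians: alpha * (mu_p - mu_q)^2 / 2 = 0.1
print(renyi_divergence_plugin(x, y))
```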

38 citations

01 Jan 2013

38 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...Let us define the following probabilistic distance measures D : S × S → R between X and Y as follows: • Chernoff Distance (CD) [Chernoff, 1952] with 0 < λ < 1 ∈ R:...

    [...]
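
The quoted Chernoff distance reduces to a one-dimensional minimization over λ, so it is straightforward to evaluate numerically. A minimal sketch for discrete distributions (the function name and grid minimization are my own, not from the cited work):

```python
# Illustrative sketch: Chernoff distance between two discrete
# distributions, CD(P, Q) = -log min over 0 < lam < 1 of
# sum_x P(x)^lam * Q(x)^(1 - lam), minimized on a grid.
import numpy as np

def chernoff_distance(p, q, grid=10001):
    lam = np.linspace(1e-6, 1 - 1e-6, grid)[:, None]
    coeff = (p[None, :] ** lam * q[None, :] ** (1 - lam)).sum(axis=1)
    return -np.log(coeff.min())  # log is monotone, so min passes through

p = np.array([0.6, 0.4])
q = np.array([0.1, 0.9])
print(chernoff_distance(p, q))  # 0 iff p == q; grows as they separate
```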

Proceedings ArticleDOI
01 Dec 1983
TL;DR: A randomized algorithm is given that sorts on an N node network with constant valence in O(log N) time and, for some constant k and all large enough α, terminates within kα log N time with probability at least 1 − N^−α.
Abstract: We give a randomized algorithm that sorts on an N node network with constant valence in O(log N) time. More particularly, the algorithm sorts N items on an N node cube-connected cycles graph, and for some constant k, for all large enough α, it terminates within kα log N time with probability at least 1 − N^−α.

38 citations

Journal ArticleDOI
TL;DR: A polynomial time algorithm is presented for the class of single controller stochastic games, providing the agent with near-optimal return.

38 citations

Proceedings ArticleDOI
10 Jul 2016
TL;DR: This work focuses on two types of asymptotic recovery guarantees: weak recovery, where the expected number of classification errors is o(K), and exact recovery, where the probability of classifying all indices correctly converges to one.
Abstract: We study the problem of recovering a hidden community of cardinality K from an n × n symmetric data matrix A, where for distinct indices i, j, A_ij ∼ P if i, j both belong to the community and A_ij ∼ Q otherwise, for two known probability distributions P and Q depending on n. We focus on two types of asymptotic recovery guarantees as n → ∞: (1) weak recovery: the expected number of classification errors is o(K); (2) exact recovery: the probability of classifying all indices correctly converges to one. Under mild assumptions on P and Q, and allowing the community size to scale sublinearly with n, we derive a set of sufficient conditions and a set of necessary conditions for recovery, which are asymptotically tight with sharp constants. The results hold in particular for the Gaussian case (P = N(µ, 1) and Q = N(0, 1)), and for the case of bounded log-likelihood ratio, including the Bernoulli case (P = Bern(p) and Q = Bern(q)) whenever p/q and (1 − p)/(1 − q) are bounded away from zero and infinity. An important algorithmic implication is that, whenever exact recovery is information-theoretically possible, any algorithm that provides weak recovery when the community size is concentrated near K can be upgraded to achieve exact recovery in linear additional time by a simple voting procedure.
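
The "simple voting procedure" is only named in the abstract; the following schematic sketch (hypothetical, with my own names; the paper's actual clean-up step involves details such as sample splitting that are omitted here) conveys the idea of upgrading a weak-recovery estimate:

```python
# Schematic sketch only (not the paper's exact procedure): refine a
# rough community estimate by voting. Each index sums the entrywise
# log-likelihood ratios of its data against the current estimate K0,
# and the K highest scorers form the refined community.
import numpy as np

def vote_refine(A, K0_mask, K, llr):
    """A: n x n data matrix; K0_mask: boolean mask from weak recovery;
    llr: entrywise log(dP/dQ); returns the refined index set."""
    scores = llr(A[:, K0_mask]).sum(axis=1)  # each index's vote total
    return np.argsort(scores)[-K:]           # keep the K best scores

# Gaussian case P = N(mu, 1), Q = N(0, 1): the entrywise log-likelihood
# ratio is mu*a - mu**2/2, e.g.
# refined = vote_refine(A, K0_mask, K, llr=lambda a: mu * a - mu**2 / 2)
```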

38 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...by the Chernoff index between P and Q [17]:...

    [...]
