Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: It is shown that the likelihood ratio test for fixed sample size can be reduced to a threshold test on a sum of observations, and that for large samples a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
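To make the index concrete, here is a minimal numerical sketch (mine, not the paper's) in Python with SciPy: it computes $\rho = \min_t E[e^{t(X - a)}]$ for a Bernoulli chance variable; the function name chernoff_rho and the Bernoulli example are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_rho(mgf, a):
    """rho = min over t of E[exp(t*(X - a))], given the MGF of X.

    P(S_n <= n*a) then behaves roughly like rho**n, which is the
    index the paper associates with the threshold test.
    """
    # E[exp(t*(X - a))] = mgf(t) * exp(-t*a); minimize over t
    res = minimize_scalar(lambda t: mgf(t) * np.exp(-t * a),
                          bounds=(-50, 50), method="bounded")
    return res.fun

# Illustrative example: X ~ Bernoulli(p), threshold a below the mean p
p, a = 0.5, 0.25
rho = chernoff_rho(lambda t: 1 - p + p * np.exp(t), a)
print(rho)  # matches the closed form (p/a)**a * ((1-p)/(1-a))**(1-a) ~ 0.877
```

Given two such indices $\rho_1$ and $\rho_2$, the abstract's efficiency $e = \log \rho_1/\log \rho_2$ is the ratio of sample sizes that gives about the same probabilities of error.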
Citations
Journal ArticleDOI
TL;DR: A bit-error probability analysis of direct-detection optical receivers employing avalanche photodiodes is presented, and the conjugate distribution is developed, which can be used to obtain numerically efficient Monte Carlo estimates of the bit-error probability via the importance sampling method.
Abstract: A bit-error probability analysis of direct-detection optical receivers employing avalanche photodiodes is presented. An asymptotic analysis for large signal intensities provides some useful insight into the balance between the Poisson statistics, the avalanche gain statistics, and the Gaussian thermal noise. The conjugate distribution is developed by applying the large-deviation exponential twisting formula. It is demonstrated that this conjugate distribution can be used to obtain numerically efficient Monte Carlo estimates of the bit-error probability via the importance sampling method.
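As a toy illustration of the exponential-twisting idea (a Gaussian stand-in chosen for simplicity, not the paper's avalanche-photodiode model), the Python sketch below samples from the conjugate (tilted) distribution and reweights by the likelihood ratio to estimate a rare tail probability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate the rare event P(S_n >= n*a) for S_n a sum of n i.i.d. N(0, 1)
# variables by importance sampling from the exponentially twisted law.
n, a = 100, 0.3
t = a                        # twisting parameter chosen so psi'(t) = a
psi = 0.5 * t ** 2           # cumulant generating function of N(0, 1) at t

trials = 100_000
X = rng.normal(t, 1.0, size=(trials, n))  # twisting shifts the mean to t
S = X.sum(axis=1)
weights = np.exp(-t * S + n * psi)        # likelihood ratio back to N(0, 1)
print(np.mean((S >= n * a) * weights))    # ~1.35e-3, i.e. P(Z >= 3)
```

Sampling from the original distribution would waste almost every trial; under the twisted law the event S_n >= n*a is typical, which is what makes the Monte Carlo estimate numerically efficient.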

35 citations

Journal ArticleDOI
TL;DR: Latin American research on the "Big Data" problem is still incipient, but there is a significant body of recent work in Pattern Recognition and related fields that addresses the problem indirectly.

35 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...The work proposes an alternative to the Fisher Discriminant (FD) [59] method, aimed at maximizing the Chernoff bound [36], which is not limited to a single dimension as FD....

    [...]

Journal ArticleDOI
TL;DR: A highly efficient method of evaluating probabilities of detection and the generalized $Q$-function to within an absolute accuracy $\epsilon$ is presented.
Abstract: A highly efficient method of evaluating probabilities of detection and the generalized $Q$-function to within an absolute accuracy $\epsilon$ is presented. Many unnecessary computations are avoided when it can be determined by Chernoff bounds that the desired function is within $\epsilon$ of 0 or 1. When this is not the case, efficient algorithms for the necessary computations are also provided.
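A rough Python sketch of the screening idea, under an illustrative construction of my own rather than the paper's algorithm: Chernoff-bound both tails of the underlying noncentral chi-square variable and skip the exact computation whenever a bound already places the generalized $Q$-function within $\epsilon$ of 0 or 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import ncx2

def marcum_q_screened(M, a, b, eps=1e-9):
    """Q_M(a, b) = P(Y >= b**2) with Y ~ noncentral chi-square,
    2*M degrees of freedom, noncentrality a**2."""
    def chernoff(sign):
        # min over s in (0, 1/2) of E[exp(sign*s*(Y - b**2))],
        # using the noncentral chi-square MGF
        def bound(s):
            t = sign * s
            return np.exp(a**2 * t / (1 - 2*t) - t * b**2) / (1 - 2*t)**M
        return minimize_scalar(bound, bounds=(1e-9, 0.499),
                               method="bounded").fun

    if chernoff(+1) < eps:   # upper tail negligible: Q is within eps of 0
        return 0.0
    if chernoff(-1) < eps:   # lower tail negligible: Q is within eps of 1
        return 1.0
    return ncx2.sf(b**2, df=2*M, nc=a**2)  # only now compute exactly

print(marcum_q_screened(1, 2.0, 20.0))  # screened: returns 0.0 immediately
print(marcum_q_screened(1, 2.0, 1.0))   # not screened: exact evaluation
```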

35 citations

Journal ArticleDOI
TL;DR: It is shown, without using the Regularity and Blow-up Lemmas, that n0 := 2 × 10^8 suffices as the order threshold for Pósa's Conjecture, where the earlier proof required a very large unspecified constant n0.
Abstract: In 1962 Pósa conjectured that every graph G on n vertices with minimum degree $\delta(G) \ge \frac{2}{3}n$ contains the square of a Hamiltonian cycle. In 1996 Fan and Kierstead proved the path version of Pósa's Conjecture. They also proved that it would suffice to show that G contains the square of a cycle of length greater than $\frac{2}{3}n$. Still in 1996, Komlós, Sárközy, and Szemerédi proved Pósa's Conjecture, using the Regularity and Blow-up Lemmas, for graphs of order $n \ge n_0$, where $n_0$ is a very large constant. Here we show, without using these lemmas, that $n_0 := 2 \times 10^8$ is sufficient. We are motivated by the recent work of Levitt, Sárközy and Szemerédi, but our methods are based on techniques that were available in the 90's. © 2011 Wiley Periodicals, Inc. Random Struct. Alg., 2011

35 citations

Journal ArticleDOI
TL;DR: The central result is that, under general conditions, the statistics of the generalization error of the GEM machine obtained with the ideal GEM algorithm is universal, in the sense that it remains the same, independently of the (unknown) mechanism that generates the data.
Abstract: We introduce a general-purpose classifier that we call the Guaranteed Error Machine, or GEM, and two learning algorithms used for its training: a real GEM algorithm and an ideal GEM algorithm. The real GEM algorithm is for use in real applications, while the ideal GEM algorithm is introduced as a theoretical tool. Unlike most learning machines, GEM has a ternary-valued output: besides 0 and 1, it can return an unknown label, expressing doubt. Our central result is that, under general conditions, the statistics of the generalization error of the GEM machine obtained with the ideal GEM algorithm is universal, in the sense that it remains the same independently of the (unknown) mechanism that generates the data. As a consequence, the user can select a desired level of generalization error and the learning machine is automatically adjusted to meet this desired level, with no knowledge of the data generation mechanism required in this process; the adjustment is achieved by modulating the size of the region where the machine returns the unknown label. The key point is that no conservatism is present in this process, because the statistics of the generalization error is known. We further show that the generalization error of the machine obtained with the real algorithm is never larger than that of the machine obtained with the ideal algorithm. Thus, the generalization error computed for the latter can rigorously be used as a bound for the former and, moreover, provably provides tight evaluations in typical cases.

35 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...k such that μ^N{PE(ŷ_N) > ε} is a rare event with probability no more than a given δ, one can resort to the Chernoff bound for the Beta tail, see (Chernoff 1952) for the original reference or e.g. (Vidyasagar 1997), yielding the following corollary to Theorem 1: Corollary 1 Under the...

    [...]

  • ...On the other hand, the Chernoff bound (Chernoff 1952 or Vidyasagar 1997) says that the tail for z > ε of a Beta(k, N − k) distribution is no more than exp(−[(N−1)ε + (1−k)]² / (2(N−1)ε)), leading to the conclusion that δ ≥ "tail for z > ε of a Beta(k, N − k)" ≥ μ^N{PE(ŷ_N) > ε}.... (a numerical check of this bound appears after these excerpts)

    [...]
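As flagged in the second excerpt, the Beta tail admits a Chernoff bound; here is a quick numerical check in Python, with N, k, and ε chosen only for illustration:

```python
import numpy as np
from scipy.stats import beta

# Check that the quoted Chernoff bound dominates the exact Beta(k, N - k)
# tail for z > eps (the bound applies when k - 1 <= (N - 1)*eps).
N, k, eps = 1000, 10, 0.05
exact = beta.sf(eps, k, N - k)
bound = np.exp(-((N - 1) * eps + (1 - k)) ** 2 / (2 * (N - 1) * eps))
print(exact, bound)  # exact tail ~1e-12, bound ~5e-8: the bound holds
```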
