Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to this form and that, for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
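The index is easy to compute in concrete cases. Below is a minimal numerical sketch, not from the paper, for Bernoulli observations: it finds $m = \min_t E[e^{t(X-a)}]$ by grid search and compares $m^n$ with the exact left-tail probability. The values of p, a, and n are arbitrary illustrative choices.

```python
import math

# Sketch of the paper's key quantity for X ~ Bernoulli(p) and a threshold
# a < p: P(S_n <= n*a) behaves roughly like m**n, where m is the minimum
# of the moment generating function of X - a. Parameters are illustrative.

p, a, n = 0.5, 0.3, 200

def mgf_shifted(t):
    # E[exp(t * (X - a))] for X ~ Bernoulli(p)
    return math.exp(-t * a) * ((1 - p) + p * math.exp(t))

# Minimize the MGF by a coarse grid search over t (the minimizer is
# negative here since a < p = E[X]).
ts = [i / 1000.0 for i in range(-5000, 1)]
m = min(mgf_shifted(t) for t in ts)

# Exact left-tail probability P(S_n <= n*a) from the binomial distribution.
k = math.floor(n * a)
exact = sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

print(f"Chernoff approximation m^n = {m**n:.3e}")
print(f"Exact P(S_n <= {k})        = {exact:.3e}")
# Both decay at the same exponential rate; the ratio grows only
# subexponentially (roughly a sqrt(n) factor), which is the sense in
# which P(S_n <= n*a) "behaves roughly like m^n".
```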
Citations
Journal ArticleDOI
TL;DR: The second-order information loss is calculated for Fisher-efficient estimators and is decomposed into the sum of two non-negative terms: one given by the exponential curvature of the statistical model and the other by the mixture curvature of the estimator.
Abstract: The differential-geometrical framework is given for analyzing statistical problems related to multi-parameter families of distributions. The dualistic structures of the exponential families and curved exponential families are elucidated from the geometrical viewpoint. The duality connected by the Legendre transformation is thus extended to include two kinds of affine connections and two kinds of curvatures. The second-order information loss is calculated for Fisher-efficient estimators, and is decomposed into the sum of two non-negative terms. One is related to the exponential curvature of the statistical model and the other is related to the mixture curvature of the estimator. Only the latter term depends on the estimator, and vanishes for the maximum-likelihood estimator. A set of statistics which recover the second-order information loss are given. The second-order efficiency also is obtained. The differential geometry of the function space of distributions is discussed.
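The decomposition described above has a compact schematic form. The notation below (ΔI for the loss, γ for the two curvatures) is chosen for this note and is not necessarily Amari's own; it is a sketch of the statement, not a reproduction of it.

```latex
% Second-order information loss of a first-order (Fisher-)efficient
% estimator, split into two non-negative curvature terms:
%   (gamma_M^{(e)})^2 : squared exponential curvature of the model M
%   (gamma_A^{(m)})^2 : squared mixture curvature of the ancillary
%                       family A attached to the estimator
\Delta I \;=\; \bigl(\gamma_{M}^{(e)}\bigr)^{2} + \bigl(\gamma_{A}^{(m)}\bigr)^{2},
\qquad \gamma_{A}^{(m)} = 0 \ \text{for the maximum-likelihood estimator.}
```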

383 citations

Journal ArticleDOI
TL;DR: It is shown that allocating an equal number of subtasks to each processor all at once has good efficiency, as a consequence of a rather general theorem showing how some consequences of the central limit theorem hold even when one cannot prove that the central limit theorem applies.
Abstract: When using MIMD (multiple instruction, multiple data) parallel computers, one is often confronted with solving a task composed of many independent subtasks where it is necessary to synchronize the processors after all the subtasks have been completed. This paper studies how the subtasks should be allocated to the processors in order to minimize the expected time it takes to finish all the subtasks (sometimes called the makespan). We assume that the running times of the subtasks are independent, identically distributed, increasing failure rate random variables, and that assigning one or more subtasks to a processor entails some overhead, or communication time, that is independent of the number of subtasks allocated. Our analyses, which use ideas from renewal theory, reliability theory, order statistics, and the theory of large deviations, are valid for a wide class of distributions. We show that allocating an equal number of subtasks to each processor all at once has good efficiency. This appears as a consequence of a rather general theorem which shows how some consequences of the central limit theorem hold even when we cannot prove that the central limit theorem applies.
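A toy simulation makes the setting concrete. Everything below (the function name, Weibull running times as one example of an increasing-failure-rate distribution, and all parameter values) is an illustrative assumption, not the paper's model or result.

```python
import random

# Simulate the allocation scheme discussed above: N independent subtasks
# split equally among P processors, each processor paying a fixed
# assignment overhead c, with synchronization at the end (the makespan
# is the finish time of the slowest processor).

def makespan_equal_split(N=1000, P=10, c=0.5, scale=1.0, shape=2.0, trials=500):
    per_proc = N // P  # assume P divides N, for simplicity
    total = 0.0
    for _ in range(trials):
        finish = []
        for _ in range(P):
            # Weibull(shape >= 1) subtask times are increasing failure rate.
            work = sum(random.weibullvariate(scale, shape) for _ in range(per_proc))
            finish.append(c + work)   # one overhead per processor
        total += max(finish)          # wait for the slowest processor
    return total / trials

print(f"estimated expected makespan: {makespan_equal_split():.2f}")
```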

382 citations

Proceedings ArticleDOI
01 Jan 1993
TL;DR: The limited-independence result implies that a reduced amount of randomness, from weaker sources, suffices for randomized algorithms whose analyses use the CH bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routing.
Abstract: Chernoff-Hoeffding bounds are fundamental tools used in bounding the tail probabilities of the sums of bounded and independent random variables. We present a simple technique which gives slightly better bounds than these and which, more importantly, requires only limited independence among the random variables, thereby importing a variety of standard results to the case of limited independence for free. Additional methods are also presented, and the aggregate results are very sharp and provide a better understanding of the proof techniques behind these bounds. They also yield improved bounds for various tail probability distributions and enable improved approximation algorithms for jobshop scheduling. The "limited independence" result implies that weaker sources of randomness are sufficient for randomized algorithms whose analyses use the Chernoff-Hoeffding bounds; further, it leads to algorithms that require a reduced amount of randomness for any analysis which uses the Chernoff-Hoeffding bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routing.
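For readers unfamiliar with the baseline, here is a quick empirical check of the standard Hoeffding form of the CH bound for [0,1]-valued variables. It does not implement the paper's limited-independence refinement, and the parameters are illustrative.

```python
import math
import random

# Check the standard Chernoff-Hoeffding (Hoeffding-form) bound that the
# paper refines: for independent X_i in [0,1] with mean mu,
#   P(S_n >= n*(mu + eps)) <= exp(-2*n*eps**2).

n, mu, eps, trials = 500, 0.5, 0.1, 20000
bound = math.exp(-2 * n * eps**2)

hits = sum(
    1 for _ in range(trials)
    if sum(random.random() for _ in range(n)) >= n * (mu + eps)  # uniform[0,1]
)

print(f"empirical tail frequency: {hits / trials:.2e}")
print(f"Chernoff-Hoeffding bound: {bound:.2e}")
# The empirical frequency typically falls far below the bound here: the
# bound decays exponentially but is not tight for uniform summands.
```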

372 citations

Book ChapterDOI
TL;DR: An alternative approach to multifractals is presented, extending and streamlining the original approach in Mandelbrot (1974); it involves the passage from geometric objects characterized primarily by one number, namely a fractal dimension, to geometric objects characterized by a function.
Abstract: This text is addressed to both the beginner and the seasoned professional, geology being used as the main but not the sole illustration. The goal is to present an alternative approach to multifractals, extending and streamlining the original approach in Mandelbrot (1974). The generalization from fractal sets to multifractal measures involves the passage from geometric objects that are characterized primarily by one number, namely a fractal dimension, to geometric objects that are characterized primarily by a function. The best is to choose the function ρ(α), which is a limit probability distribution that has been plotted suitably, on double logarithmic scales. The quantity α is called the Hölder exponent. In terms of the alternative function f(α) used in the approach of Frisch-Parisi and of Halsey et al., one has ρ(α) = f(α) − E for measures supported by the Euclidean space of dimension E. When f(α) ≥ 0, f(α) is a fractal dimension. However, one may have f(α) < 0, and 1 is shown to be a critical dimension for the cuts. An "enhanced multifractal diagram" is drawn, including f(α), a function called τ(q), and D_q.
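The f(α) machinery is easiest to see on the simplest multiplicative measure. The sketch below computes τ(q), α(q) = dτ/dq, and f(α) = qα − τ(q) for the binomial measure, following the Frisch-Parisi/Halsey et al. convention the abstract mentions; the weights m0, m1 are arbitrary illustrative choices.

```python
import math

# f(alpha) for the binomial multiplicative measure on [0,1] with weights
# m0 + m1 = 1: the partition function sum of mu_i^q over the 2^n cells of
# size 2^-n scales like (2^-n)^tau(q).

m0, m1 = 0.25, 0.75

def tau(q):
    return -math.log2(m0**q + m1**q)

def f_alpha(q, h=1e-6):
    alpha = (tau(q + h) - tau(q - h)) / (2 * h)  # alpha = d(tau)/dq
    return alpha, q * alpha - tau(q)             # Legendre transform

for q in (-2.0, 0.0, 1.0, 2.0):
    alpha, f = f_alpha(q)
    print(f"q = {q:+.1f}:  alpha = {alpha:.4f},  f(alpha) = {f:.4f}")
# f attains its maximum (here 1, the dimension of the support) at q = 0,
# and f(alpha) = alpha at q = 1, the measure's information dimension.
```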

366 citations
