Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to a threshold test on a sum of independent observations, and that, for large samples, a sample of size $n$ with the first of two such tests gives about the same probabilities of error as a sample of size $en$ with the second, where $e = \log \rho_1/\log \rho_2$ is the ratio of the tests' indices.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
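To make the index construction concrete, here is a minimal Python sketch (an illustration, not code from the paper): it computes $m = \inf_t E[e^{t(X - a)}]$ for a Bernoulli chance variable and compares the resulting bound $m^n$ on $P(S_n \leqq na)$ with a Monte Carlo estimate. The parameter values, the use of SciPy's bounded scalar minimizer, and the bracketing interval are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative sketch (not from the paper): bound P(S_n <= n*a) by m^n,
# where m = inf_t E[exp(t*(X - a))] and X ~ Bernoulli(p).
p, a, n = 0.5, 0.4, 200          # arbitrary illustrative parameters (a < E[X])

def mgf_shifted(t, p=p, a=a):
    """E[exp(t*(X - a))] for X ~ Bernoulli(p)."""
    return (1 - p) * np.exp(-t * a) + p * np.exp(t * (1 - a))

# For a lower-tail event (a < E[X]) the infimum is attained at some t < 0;
# the bracketing interval [-50, 0] is an assumption that covers it here.
res = minimize_scalar(mgf_shifted, bounds=(-50.0, 0.0), method="bounded")
m = res.fun
chernoff_bound = m ** n

# Monte Carlo estimate of the true tail probability, for comparison.
rng = np.random.default_rng(0)
S = rng.binomial(n, p, size=200_000)
empirical = np.mean(S <= n * a)

print(f"m = {m:.4f}, bound m^n = {chernoff_bound:.3e}, empirical = {empirical:.3e}")
```

Under the abstract's efficiency measure, one would compute the indices $\rho_1$ and $\rho_2$ of two competing tests in the same way and form $e = \log \rho_1/\log \rho_2$.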
Citations
Proceedings ArticleDOI
Beat Gfeller, Elias Vicari
12 Aug 2007
TL;DR: The algorithm shows that, for computing a MIS, randomization is a viable alternative to distance information, and its round complexity is close to optimal.
Abstract: The efficient distributed construction of a maximal independent set (MIS) of a graph is of fundamental importance. We study the problem in the class of Growth-Bounded Graphs, which includes for example the well-known Unit Disk Graphs. In contrast to the fastest (time-optimal) existing approach [11], we assume that no geometric information (e.g., distances in the graph's embedding) is given. Instead, nodes employ randomization for their decisions. Our algorithm computes a MIS in O(log log n · log* n) rounds with very high probability for graphs with bounded growth, where n denotes the number of nodes in the graph. In view of Linial's Ω(log* n) lower bound for computing a MIS in ring networks [12], which was extended to randomized algorithms independently by Naor [18] and Linial [13], our solution is close to optimal. In a nutshell, our algorithm shows that for computing a MIS, randomization is a viable alternative to distance information.
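For readers unfamiliar with randomized MIS construction, the sketch below is a generic Luby-style randomized MIS in Python; it only illustrates the idea that nodes can decide using random priorities instead of distance information, and it is not the specialized O(log log n · log* n)-round algorithm for growth-bounded graphs analyzed in the paper. The adjacency representation and the sequential simulation of rounds are assumptions of the sketch.

```python
import random

def luby_mis(adj, seed=0):
    """Luby-style randomized MIS (simplified, sequential simulation):
    each round, every live node draws a random priority; local minima
    join the MIS and their neighbors drop out.  `adj`: node -> set of neighbors."""
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        priority = {v: rng.random() for v in live}
        # A live node joins the MIS if its priority beats all its live neighbors.
        winners = {v for v in live
                   if all(priority[v] < priority[u] for u in adj[v] if u in live)}
        mis |= winners
        # Winners and their neighbors leave the computation.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & live
        live -= removed
    return mis

# Example: a 6-cycle; the result is a maximal independent set of it.
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(luby_mis(cycle))
```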

80 citations


Additional excerpts

  • ...Using the Chernoff bound [4] P[X ≥ (1 + δ)E[X]] ≤ e^(−E[X]δ²/3) for 0 < δ ≤ 1, with δ = 1 and d ≥ 9k(2) ln(2) n, we obtain...

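As a quick numerical sanity check of the bound quoted in the excerpt above (the binomial parameters below are arbitrary illustrative choices, not values from the cited analysis), a short Python simulation:

```python
import numpy as np

# Illustrative check of P[X >= (1 + delta) * E[X]] <= exp(-E[X] * delta^2 / 3)
# for X a sum of independent Bernoulli trials; the parameters are assumptions.
n_trials, p, delta = 100, 0.05, 1.0   # delta = 1, as in the quoted excerpt
mean = n_trials * p                    # E[X] = 5

rng = np.random.default_rng(1)
X = rng.binomial(n_trials, p, size=1_000_000)
empirical = np.mean(X >= (1 + delta) * mean)
bound = np.exp(-mean * delta**2 / 3)

print(f"empirical tail = {empirical:.3e}, Chernoff bound = {bound:.3e}")
```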

Journal ArticleDOI
TL;DR: The methods presented in this paper can be utilized to estimate the error caused by using a finite pulse train approximation when the system performance is evaluated by simulation techniques.
Abstract: Simple upper and lower bounds on the distribution function of the sum of two random variables are presented in terms of the marginal distribution functions of the variables. These bounds are then used to obtain upper and lower bounds to the error probability of a coherent digital system in the presence of intersymbol interference and additive gaussian noise. The bounds are expressed in terms of the error probability obtained with a finite pulse train, and the bounds to the marginal distribution function of the residual pulse train. Since the difference between the upper and lower bounds can be shown to be a monotonically decreasing function of the number of pulses in the finite pulse train, the bounds can be used to compute the error probability of the system with arbitrarily small error. Also when the system performance is evaluated by simulation techniques, the methods presented in our paper can be utilized to estimate the error caused by using a finite pulse train approximation.

80 citations

Proceedings ArticleDOI
29 May 1995
TL;DR: This paper shows that within O(Δ/α) steps, the algorithm reduces the maximum difference in tokens between any two nodes to at most O((d² log n)/α), where Δ is the global imbalance in tokens, α is the edge expansion, and n is the number of nodes in the network.
Abstract: This paper presents an analysis of the following load balancing algorithm. At each step, each node in a network examines the number of tokens at each of its neighbors and sends a token to each neighbor with at least 2d + 1 fewer tokens, where d is the maximum degree of any node in the network. We show that within O(Δ/α) steps, the algorithm reduces the maximum difference in tokens between any two nodes to at most O((d² log n)/α), where Δ is the global imbalance in tokens (i.e., the maximum difference between the number of tokens at any node initially and the average number of tokens), n is the number of nodes in the network, and α is the edge expansion of the network. The time bound is tight in the sense that for any graph with edge expansion α, and for any value Δ, there exists an initial distribution of tokens with imbalance Δ for which the time to reduce the imbalance to even Δ/2 is at least Ω(Δ/α). The bound on the final imbalance is tight in the sense that there exists a class of networks that can be locally balanced everywhere (i.e., the maximum difference in tokens between any two neighbors is at most 2d), while the global imbalance remains Ω((d² log n)/α). Furthermore, we show that upon reaching a state with a global imbalance of O((d² log n)/α), the time for this algorithm to locally balance the network can be as large as Ω(n^(1/2)). We extend our analysis to a variant of this algorithm for dynamic and asynchronous networks. We also present tight bounds for a randomized algorithm in which each node sends at most one token in each step.
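To make the local rule concrete, the following Python sketch simulates one synchronous step of the token-balancing rule described in the abstract (each node sends one token to every neighbor holding at least 2d + 1 fewer tokens). The graph representation, the synchronous round loop, and the example instance are assumptions of the sketch, not details taken from the paper.

```python
def balance_step(adj, tokens, d):
    """One synchronous step: each node sends one token to every neighbor
    that currently holds at least 2d + 1 fewer tokens than it does.
    `adj`: node -> list of neighbors; `tokens`: node -> token count."""
    sent = {v: 0 for v in tokens}
    received = {v: 0 for v in tokens}
    for v in adj:
        for u in adj[v]:
            if tokens[v] - tokens[u] >= 2 * d + 1:
                sent[v] += 1
                received[u] += 1
    # A sender always has enough tokens: with nonnegative counts the rule fires
    # only if it holds at least 2d + 1 tokens, and it has at most d neighbors.
    return {v: tokens[v] - sent[v] + received[v] for v in tokens}

# Example: a path on 5 nodes (maximum degree d = 2) with a skewed initial load.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
tokens = {0: 40, 1: 0, 2: 0, 3: 0, 4: 0}
for _ in range(20):
    tokens = balance_step(adj, tokens, d=2)
print(tokens)  # the initial skew spreads out over successive steps
```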

80 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...Therefore, using standard Chernoff bounds [10], we can show that in T₀ = 8aT⌈(log Δ₀ + log n)⌉ steps, Φ_{T₀} > 1 with probability at most O(1/Δ₀^a + 1/n^a) for any constant a > 0....


Proceedings ArticleDOI
01 Apr 1990
TL;DR: In contrast to the "reconfiguration" approach, in which faults are identified and isolated in real time, this work adopts a new approach to the paradigm of devising algorithms that work despite unreliable information, without singling out the faulty information.
Abstract: Fault-tolerance is an important consideration in large systems. Broadly, there are two approaches to coping with faults. The first is the "reconfiguration" approach [3, 9], in which faults are identified and isolated in real time. This is done concurrently with computation, and is often a significant overhead. A second, different approach is to devise algorithms that work despite unreliable information, without singling out the faulty information. This latter approach has been the focus of much recent work [6, 7, 10, 11, 12, 16]. Here we adopt a new approach to this latter paradigm.

80 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present an analysis of the limit behavior of the k(n)-th iterates of positive linear approximation operators Ln, as n and k tend to infinity.

79 citations
