Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952 - Annals of Mathematical Statistics (Institute of Mathematical Statistics) - Vol. 23, Iss. 4, pp. 493-507
TL;DR: In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to a test based on the sum of observations, and that, for large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
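As a rough numerical illustration (not part of the paper's text): the sketch below estimates $m$, the minimum of the moment generating function of $X - a$, by a grid search for a Bernoulli example and compares $m^n$ with a Monte Carlo estimate of $P(S_n \leqq na)$. The parameter values p, a, and n are arbitrary choices for the illustration.

```python
import numpy as np

# Illustrative sketch only: compare the Chernoff-type quantity m**n with a
# simulated tail probability for X ~ Bernoulli(p). The values of p, a and n
# are made up for the example; they do not come from the paper.
rng = np.random.default_rng(0)
p, a, n = 0.5, 0.4, 200

# m = min over t of E[exp(t * (X - a))] for Bernoulli(p), found by grid search.
ts = np.linspace(-10.0, 10.0, 20001)
mgf = (1 - p) * np.exp(-ts * a) + p * np.exp(ts * (1 - a))
m = mgf.min()

# Monte Carlo estimate of P(S_n <= n * a) under the same distribution.
trials = 200_000
S = rng.binomial(n, p, size=trials)
tail = (S <= n * a).mean()

print(f"m^n          = {m**n:.3e}")   # bound of roughly the right exponential order
print(f"P(S_n <= na) ~ {tail:.3e}")   # simulated tail probability
```

Here $m^n$ overestimates the simulated tail, but both decay at the same exponential rate, which is the kind of behavior the index $\rho$ measures.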
Citations
Journal Article
TL;DR: In this article, the authors studied the robustness of the randomized broadcasting algorithm against random node failures and showed that if the informed nodes are allowed to fail in some step with probability 1 - p, then the broadcasting time increases by a factor of at most 6/p.
Abstract: One of the most frequently studied problems in the context of information dissemination in communication networks is the broadcasting problem. In this paper, we study the following randomized broadcasting protocol. At some time t a piece of information r is placed at one of the nodes of a graph. In the succeeding steps, each informed node chooses one neighbor, independently and uniformly at random, and informs this neighbor by sending a copy of r to it. In this work, we develop tight bounds on the runtime of the algorithm described above and analyze its robustness. First, it is shown that on Δ-regular graphs this algorithm requires at least log_{2-1/Δ} N + log_{(Δ/(Δ-1))^Δ} N - o(log N) rounds to inform all N nodes. For general graphs, we prove a slightly weaker lower bound and improve the upper bound of Feige et al. [8] to (1+o(1)) N ln N, which implies that the star K_{1,N-1} is the worst-case graph. Furthermore, we determine the worst-case ratio between the runtime of a fastest deterministic algorithm and that of the randomized one. This paper also contains an investigation of the robustness of this broadcasting algorithm against random node failures. We show that if the informed nodes are allowed to fail in some step with probability 1 - p, then the broadcasting time increases by a factor of at most 6/p. Finally, the previous result is applied to state some asymptotically optimal upper bounds for the runtime of randomized broadcasting in Cartesian products of graphs and to determine the performance of agent-based broadcasting [6] in graphs with good expansion properties.
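A minimal simulation sketch of the push protocol described above, under assumed details (synchronous rounds, a connected graph, broadcast finishing once every node is informed); the complete graph and its size are arbitrary example choices, not taken from the paper.

```python
import random

def push_broadcast_rounds(adj, start):
    """Number of rounds until every node of the graph given by adjacency
    lists `adj` is informed, starting from node `start`."""
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        # Each informed node picks one neighbor independently and uniformly
        # at random and sends it a copy of the information.
        pushes = {random.choice(adj[v]) for v in informed}
        informed |= pushes
        rounds += 1
    return rounds

# Example run on the complete graph with N nodes, where the runtime
# concentrates around roughly log2(N) + ln(N) rounds.
N = 1024
adj = [[u for u in range(N) if u != v] for v in range(N)]
print(push_broadcast_rounds(adj, start=0))
```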

51 citations

Proceedings ArticleDOI
01 Jun 1992
TL;DR: This paper presents randomized algorithms for k-k routing, k-k sorting, and cut-through routing on the mesh-connected processor array; these algorithms have optimal queue length, namely k + o(k).
Abstract: In this paper we present randomized algorithms for k-k routing, k-k sorting, and cut-through routing on the mesh-connected processor array. In these three problems, each processor is assumed to contain k packets at the beginning, and k packets are destined for each processor node, with k ≥ 1. We give two different algorithms for k-k routing that run in kn/2 + o(kn) and (k/2)n + o(kn) routing steps, respectively. We also show that k-k sorting can be accomplished within (k/2)n + n + o(kn) steps and that cut-through routing can be done in kn/2 + n/k + (3/2)n + o(kn) steps. The stated resource bounds hold with high probability and for any k ≥ 8. The best known previous algorithms take almost twice as many routing steps in each case. For k ≤ 8 we derive new bounds which come close to the optimum. kn/2 is a known lower bound for all three problems (the bisection bound); hence, our algorithms are very nearly optimal. All the above-mentioned algorithms have optimal queue length, namely k + o(k). These algorithms also extend to higher-dimensional meshes. The achieved improvements are made possible by novel algorithmic and analytical techniques.
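To make the bisection bound mentioned above concrete (an illustration, not part of the paper): in the worst case all k·n²/2 packets starting on one half of an n × n mesh may be destined for the other half, while only n links cross the bisection, so at least kn/2 steps are required. A tiny sketch with made-up values:

```python
# Numerical illustration of the bisection lower bound kn/2; the mesh side n
# and the per-node packet count k are arbitrary example values.
n, k = 32, 8
packets_that_may_cross = k * n * n // 2   # all packets on one half, worst case
links_across_bisection = n                # wires crossing the middle of the mesh
print(packets_that_may_cross / links_across_bisection)   # 128.0 == k * n / 2
```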

51 citations

Journal ArticleDOI
TL;DR: It is proved that solutions to the graph p-Laplace equation are approximately Hölder continuous with high probability; the proof uses the viscosity solution machinery and the maximum principle on a graph.
Abstract: We study the game-theoretic p-Laplacian for semi-supervised learning on graphs, and show that it is well-posed in the limit of finite labeled data and infinite unlabeled data. In particular, we show that the continuum limit of graph-based semi-supervised learning with the game-theoretic p-Laplacian is a weighted version of the continuous p-Laplace equation. We also prove that solutions to the graph p-Laplace equation are approximately Hölder continuous with high probability. Our proof uses the viscosity solution machinery and the maximum principle on a graph.
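The paper works with the game-theoretic p-Laplacian; purely as background intuition, the sketch below solves the classical p = 2 special case of graph-based semi-supervised learning (a harmonic extension with the ordinary graph Laplacian) on a small synthetic data set. The data, Gaussian edge weights, and labels are invented for the illustration.

```python
import numpy as np

# p = 2 toy example of graph-based semi-supervised learning: extend two known
# labels harmonically over a weighted graph built from random 2-D points.
rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 2))                                   # data points
W = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1) / 0.02)
np.fill_diagonal(W, 0.0)                                        # Gaussian edge weights

labeled = np.array([0, 1])          # indices of the labeled points
y = np.array([0.0, 1.0])            # their label values
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

L = np.diag(W.sum(axis=1)) - W      # (unnormalized) graph Laplacian

# Harmonic extension: solve L_uu * u_u = -L_ul * y for the unlabeled nodes.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
u = np.zeros(len(X))
u[labeled] = y
u[unlabeled] = np.linalg.solve(L_uu, -L_ul @ y)

# By the maximum principle on the graph, the extended labels stay in [0, 1].
print(u.min(), u.max())
```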

51 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...Controlling the number of neighbors on a graph can be accomplished with a Chernoff bound [46]....


Proceedings ArticleDOI
01 May 2000
TL;DR: This work adds a new back-propagation component to McCreight's algorithm and gives a high-probability hashing scheme for large degrees, yielding the first randomized linear-time algorithm for constructing suffix trees for parameterized strings.
Abstract: We consider suffix tree construction for situations with missing suffix links. Two examples of such situations are suffix trees for parameterized strings and suffix trees for two-dimensional arrays. These trees also have the property that the node degrees may be large. We add a new back-propagation component to McCreight's algorithm and also give a high probability hashing scheme for large degrees. We show that these two features enable construction of suffix trees for general situations with missing suffix links in O(n) time, with high probability. This gives the first randomized linear time algorithm for constructing suffix trees for parameterized strings.

51 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...To show the above claims on the number of rounds in each group, we will need the following property, obtained using the Chernoff bound [2]....


Patent
27 Oct 2005
TL;DR: In this patent, a randomized auction mechanism is used to determine both the number of goods that are sold and the selling price; the mechanism automatically adapts to the bid distribution to yield revenue that is competitive with what could be obtained if the vendor were able to determine the optimal fixed price for the goods.
Abstract: Systems and methods are provided for pricing, selling, and/or otherwise distributing electronic content using auction mechanisms. A randomized auction mechanism is used to determine both the number of goods that are sold and the selling price. The auction mechanism automatically adapts to the bid distribution to yield revenue that is competitive with that which could be obtained if the vendor were able to determine the optimal fixed price for the goods. In one embodiment a set of bids is randomly or quasi-randomly partitioned into two or more groups. An optimal threshold is determined for each group, and this threshold is then used to select winning bids from one or more of the other groups. In another embodiment, each bid is compared to a competing bid that is randomly or quasi-randomly selected from the set of bids. If the bid is less than the randomly-selected competing bid, the bid is rejected. Otherwise, the bid is accepted and the bidder buys the auctioned item at the price of the randomly-selected bid.
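A toy sketch of the two-group embodiment described above: bids are randomly split into two groups, a revenue-maximizing fixed price is computed within each group, and each price is then offered to the other group's bidders. The interpretation of "optimal threshold" as the revenue-maximizing fixed price, the tie handling, and the bid values are assumptions made for the illustration.

```python
import random

def optimal_fixed_price(bids):
    """Fixed price maximizing revenue = price * (number of bids >= price)."""
    return max(bids, key=lambda price: price * sum(b >= price for b in bids))

def random_partition_auction(bids):
    """Two-group random-partition auction: each group's optimal fixed price
    is used as the selling price offered to the other group's bidders."""
    bids = list(bids)
    random.shuffle(bids)
    g1, g2 = bids[: len(bids) // 2], bids[len(bids) // 2 :]
    p1, p2 = optimal_fixed_price(g1), optimal_fixed_price(g2)
    winners = [(b, p2) for b in g1 if b >= p2] + [(b, p1) for b in g2 if b >= p1]
    revenue = sum(price for _, price in winners)
    return winners, revenue

bids = [round(random.uniform(1.0, 10.0), 2) for _ in range(20)]
print(random_partition_auction(bids))
```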

51 citations
