Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it is shown that each test of the form "reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k$" (a form to which the fixed-sample-size likelihood ratio test can be reduced) has an associated index $\rho$, and that for large samples a sample of size $n$ with one such test gives about the same probabilities of error as a sample of size $en$ with another, where $e = \log \rho_1/\log \rho_2$.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
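To make the approximation $P(S_n \leqq na) \approx m^n$ concrete, here is a minimal numerical sketch (not taken from the paper) that assumes $X$ is Bernoulli with hypothetical parameters p, a, and n chosen only for illustration; it compares the exact binomial tail with $\log m$ on a per-observation log scale.

    # Illustrative sketch (assumption: X ~ Bernoulli(p); p, a, n are arbitrary choices,
    # not values from the paper). Checks that P(S_n <= n*a) decays roughly like m^n,
    # where m = min_t E[exp(t*(X - a))] is the minimum value of the moment generating
    # function of X - a.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import binom

    p, a, n = 0.5, 0.3, 200

    def mgf_shifted(t):
        # E[exp(t*(X - a))] for X ~ Bernoulli(p)
        return (1 - p) * np.exp(-t * a) + p * np.exp(t * (1 - a))

    m = minimize_scalar(mgf_shifted, bounds=(-50, 50), method="bounded").fun

    exact_tail = binom.cdf(np.floor(n * a), n, p)   # P(S_n <= n*a)
    print(f"log m                    = {np.log(m):+.4f}")
    print(f"(1/n) log P(S_n <= n a)  = {np.log(exact_tail) / n:+.4f}")

For large n the two printed numbers should be close, illustrating that $m$ governs the exponential rate of the tail probability.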
Citations
Journal ArticleDOI
TL;DR: An improvement of the multilevel bucket shortest path algorithm of Denardo and Fox is presented, and it is proved that if the input arc lengths come from a natural probability distribution, the new algorithm runs in linear average time while the original algorithm does not.
Abstract: We present an improvement of the multilevel bucket shortest path algorithm of Denardo and Fox [Oper. Res., 27 (1979), pp. 161-186] and justify this improvement both theoretically and experimentally. We prove that if the input arc lengths come from a natural probability distribution, the new algorithm runs in linear average time while the original algorithm does not. We also describe an implementation of the new algorithm. Our experimental data suggests that the new algorithm is preferable to the original one in practice. Furthermore, for integral arc lengths that fit into a word of today's computers, the performance is close to that of breadth-first search, suggesting limitations on further practical improvements.

45 citations

Journal ArticleDOI
TL;DR: The pessimistic estimator technique shows an O(mn)-time implementation of the conditional probability method on the RAM model of computation in the case of m large deviation events associated with m unweighted sums of n independent Bernoulli trials.
Abstract: Raghavan's paper on derandomized approximation algorithms for 0–1 packing integer programs raised two challenging problems [11]: 1. Are there more examples of NP-hard combinatorial optimization problems for which derandomization yields constant-factor approximations in polynomial time? 2. The pessimistic estimator technique shows an O(mn)-time implementation of the conditional probability method on the RAM model of computation in the case of m large deviation events associated with m unweighted sums of n independent Bernoulli trials. Is there a fast algorithm also in the case of sums of Bernoulli trials with rational weights?

45 citations

Proceedings ArticleDOI
22 Oct 1990
TL;DR: A natural k-round tournament over $n = 2^k$ players is analyzed, and it is demonstrated that the tournament possesses a surprisingly strong ranking property.
Abstract: A natural k-round tournament over $n = 2^k$ players is analyzed, and it is demonstrated that the tournament possesses a surprisingly strong ranking property. The ranking property of this tournament is exploited by being used as a building block for efficient parallel sorting algorithms under a variety of different models of computation. Three important applications are provided. First, a sorting circuit of depth 7.44 log n, which sorts all but a superpolynomially small fraction of the n! possible input permutations, is defined. Secondly, a randomized sorting algorithm that runs in O(log n) word steps with very high probability is given for the hypercube and related parallel computers (the butterfly, cube-connected cycles, and shuffle-exchange). Thirdly, a randomized algorithm that runs in O(m + log n)-bit steps with very high probability is given for sorting n O(m)-bit records on an n log n-node butterfly.

45 citations

Journal ArticleDOI
TL;DR: In this paper, the authors used a joint likelihood technique to study the γ-ray spectra of a sample of nearby clusters, searching for a signal induced by relativistic cosmic rays (CRs) through hadronic interactions in the intracluster medium.
Abstract: Galaxy clusters are the most massive bound systems known in the Universe and are believed to have formed through large-scale structure formation. They host relativistic cosmic-ray (CR) populations and are gravitationally bound by large amounts of Dark Matter (DM), both providing conditions in which high-energy gamma rays may be produced, either via CR interactions with the intracluster medium or through the annihilation or decay of DM particles. Prior to the launch of the Fermi satellite, predictions were optimistic that these sources would be established as γ-ray-bright objects by observations through its prime instrument, the Large Area Telescope (LAT). Yet, despite numerous efforts, even a single firm cluster detection is still pending. This thesis presents a number of studies based on data taken by the LAT over its now seven-year mission, aiming to discover these γ rays. Using a joint likelihood technique, we study the γ-ray spectra of a sample of nearby clusters, searching for a CR-induced signal due to hadronic interactions in the intracluster medium. While we find excesses in some individual targets, we attribute none to the cluster. Hence, we constrain the maximum injection efficiency of hadrons being accelerated in structure formation shocks and the fraction of CR-to-thermal pressure. We also perform a refined search targeting the Coma cluster specifically, owing to the large variety of existing observations of it in other wavebands. In the latter case we find weak indications of an excess, which, however, falls below the detection threshold. Because the cluster emission we consider is inherently extended, we need to take into account the imperfect modeling of the foreground emission, which may be particularly difficult, as is the case with the Virgo cluster. Here, we assess the systematics associated with the foreground uncertainties and derive limits based on an improved background model of the region. For the first time, we derive limits on the γ-ray flux from CR and DM interactions that take into account the dynamical state of the system. For DM we also include the contribution from substructure. The DM domain is further explored by searching for line-like features, as arise from the annihilation of DM into two photons, in a large sample of clusters, including Virgo and Coma. Finding no evidence for γ-ray lines, we derive limits on the DM annihilation cross section that are roughly a factor of 10 (100) above those derived from observations of the Galactic Center, assuming an optimistic (conservative) scenario for the boost due to DM substructure.

45 citations

Journal ArticleDOI
TL;DR: This paper presents a novel load balancing algorithm in which each participating peer uses partial knowledge of the system to estimate the probability distributions of the capacities of peers and the loads of virtual servers, resulting in an imperfect view of the system state.
Abstract: With the notion of virtual servers, peers participating in a heterogeneous, structured peer-to-peer (P2P) network may host different numbers of virtual servers, and by migrating virtual servers, peers can balance their loads in proportion to their capacities. The existing decentralized load balancing algorithms designed for heterogeneous, structured P2P networks either explicitly construct auxiliary networks to manipulate global information or implicitly demand that the P2P substrates be organized in a hierarchical fashion. Without relying on any auxiliary networks, and independent of the geometry of the P2P substrates, we present, in this paper, a novel load balancing algorithm in which each participating peer uses partial knowledge of the system to estimate the probability distributions of the capacities of peers and the loads of virtual servers, resulting in imperfect knowledge of the system state. With this imperfect system state, peers can compute their expected loads and reallocate their loads in parallel. Through extensive simulations, we compare our proposal to prior load balancing algorithms.

44 citations


Additional excerpts

  • ...by Markov’s inequality, we have the Chernoff bound [23] as...

    [...]
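
The excerpt above invokes the standard Markov-inequality step behind the Chernoff bound. A minimal sketch of that step in the notation of the abstract above (written for the upper tail; the paper's lower-tail statement $P(S_n \leqq na)$ takes $t \leqq 0$): for any $t \geqq 0$ and independent, identically distributed observations $X_1, \cdots, X_n$,

$$P(S_n \geqq na) = P\big(e^{tS_n} \geqq e^{tna}\big) \leqq e^{-tna}\, E\big[e^{tS_n}\big] = \big(E\big[e^{t(X - a)}\big]\big)^n,$$

and minimizing the right-hand side over $t \geqq 0$ yields a bound of the form $m^n$, where $m$ is the minimum value of the moment generating function of $X - a$ (the minimizing $t$ is nonnegative when $a \geqq EX$).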

References