Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to this form and that, for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
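
To make the index concrete, here is a minimal numerical sketch (an illustration of the stated result, not code from the paper). For X ~ Bernoulli(p) and a threshold a < p, minimizing the moment generating function of X - a gives the closed form m = (p/a)^a ((1-p)/(1-a))^(1-a); the helper log_binom_cdf and the parameter values below are assumptions made for the demonstration.

```python
import math

# Minimal numerical sketch (illustration only, not code from the paper):
# for X ~ Bernoulli(p) and a threshold a < p, Chernoff's index is
#   m = min_t E[exp(t * (X - a))] = (p/a)^a * ((1-p)/(1-a))^(1-a),
# and P(S_n <= n*a)^(1/n) should approach m as n grows.

def log_binom_cdf(n, p, k):
    """log P(Binomial(n, p) <= k), accumulated in log space for stability."""
    logs = [math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
            + i * math.log(p) + (n - i) * math.log(1 - p)
            for i in range(k + 1)]
    top = max(logs)
    return top + math.log(sum(math.exp(x - top) for x in logs))

p, a = 0.5, 0.3                  # illustrative hypothesis mean and threshold
m = (p / a) ** a * ((1 - p) / (1 - a)) ** (1 - a)
for n in (100, 1_000, 10_000):
    root = math.exp(log_binom_cdf(n, p, int(n * a)) / n)
    print(f"n={n}: P(S_n <= na)^(1/n) = {root:.4f}   index m = {m:.4f}")
```

Since P(S_n <= na) <= m^n holds for every n, the printed roots approach m (about 0.921 at these parameters) from below.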
Citations
Book
01 Jun 1994
TL;DR: This paper identifies adverse source-destination traffic patterns and proposes a scheme for obtaining relief by means of randomized routing of packets on simple extensions of the well-known omega networks.
Abstract: Parallel communication algorithms and networks are central to large-scale parallel computing and also to data communications. This paper identifies adverse source-destination traffic patterns and proposes a scheme for obtaining relief by means of randomized routing of packets on simple extensions of the well-known omega networks. Valiant and Aleliunas have demonstrated randomized algorithms, for a certain context which we call nonrenewal, that complete the communication task in time O(log N) with overwhelming probability, where N is the number of sources and destinations. Our scheme has advantages because it uses switches of fixed degree, requires no scheduling, and, for the nonrenewal context, is as good in proven performance. The main advantage of our scheme comes when we consider the renewal context, in which packets are generated at the sources continually and asynchronously. Our algorithm extends naturally from the nonrenewal context. In the analysis of the renewal context we first explicitly identify the maximum traffic intensities in the internal links of the extended omega networks over all source-destination traffic specifications that satisfy loose bounds. Second, the benefits of randomization on the stability of the network are identified. Third, exact results for certain restricted models of sources and transmission, and approximate analytic results for quite general models, are derived for the mean delays. These results show that, in the stable regime, the maximum mean time from source to destination is asymptotically proportional to log N. Numerical results are presented.
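
The randomization step is easy to see in a toy simulation. The sketch below is my construction, using the standard self-routing property of plain omega networks rather than the paper's extended networks; it compares the worst link load of the adverse bit-reversal permutation routed directly against Valiant-style two-phase routing through uniformly random intermediate nodes. The network size and helper names are illustrative.

```python
import random

# Toy simulation (not the paper's exact scheme): worst link load in an
# N-input omega network for a direct adverse permutation vs. two-phase
# randomized routing (source -> random intermediate -> destination).

def congestion(pairs, n):
    """Max number of source-destination paths sharing any link. After stage i,
    a packet from s to d sits at the node whose address is the low n-i bits
    of s followed by the high i bits of d (omega-network self-routing)."""
    N = 1 << n
    load = {}
    for s, d in pairs:
        for i in range(1, n + 1):
            node = ((s << i) | (d >> (n - i))) & (N - 1)
            load[(i, node)] = load.get((i, node), 0) + 1
    return max(load.values())

random.seed(0)
n = 10
N = 1 << n
rev = [int(format(s, f"0{n}b")[::-1], 2) for s in range(N)]  # bit reversal

direct = congestion([(s, rev[s]) for s in range(N)], n)
inter = [random.randrange(N) for _ in range(N)]              # random hops
phase1 = congestion([(s, inter[s]) for s in range(N)], n)
phase2 = congestion([(inter[s], rev[s]) for s in range(N)], n)
print("direct:", direct, "randomized phases:", phase1, phase2)
```

With n = 10, the bit-reversal pattern piles about 2^(n/2) = 32 packets onto one middle-stage link, while each randomized phase stays near the balls-into-bins maximum of a few packets per link.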

47 citations

Journal ArticleDOI
01 Dec 1985
Abstract: An overview of the "probabilistic method" in Combinatorial Analysis and Graph Theory is given.
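
As a single worked instance of the method (the classic argument due to Erdős, chosen by me as an illustration, not taken from this survey): color the edges of K_n uniformly at random; if C(n, k) 2^(1 - C(k, 2)) < 1, the expected number of monochromatic K_k is below one, so some coloring has none and R(k, k) > n, even though no coloring is exhibited. A small script can find the largest n this union bound certifies:

```python
import math

# Probabilistic-method illustration (Erdos's Ramsey lower bound): if
# C(n, k) * 2^(1 - C(k, 2)) < 1, a random 2-coloring of K_n has, in
# expectation, fewer than one monochromatic K_k, hence R(k, k) > n.

def certified_lower_bound(k):
    """Largest n for which the union bound certifies R(k, k) > n."""
    n = k
    while math.comb(n + 1, k) * 2.0 ** (1 - math.comb(k, 2)) < 1:
        n += 1
    return n

for k in range(3, 11):
    print(k, certified_lower_bound(k))
```

For k = 10 the certified bound is already around n = 100, obtained purely by counting, with no coloring constructed.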

47 citations

Journal ArticleDOI
TL;DR: Starting from a paradigmatic fault tolerance scheme that systematically adds redundancy to a discrete-time dynamic system so as to tolerate transient faults in both the state-transition and error-correcting mechanisms, the paper obtains an efficient way of providing fault tolerance to k identical unreliable LFSMs that operate in parallel on distinct input sequences.
Abstract: This paper discusses fault tolerance in discrete-time dynamic systems, such as finite-state controllers or computer simulations, with a focus on the use of coding techniques to efficiently provide fault tolerance to linear finite-state machines (LFSMs). Unlike traditional fault tolerance schemes, which rely heavily, particularly for dynamic systems operating over extended time horizons, on the assumption that the error-correcting mechanism is fault free, we are interested in the case when all components of the implementation are fault prone. The paper starts with a paradigmatic fault tolerance scheme that systematically adds redundancy into a discrete-time dynamic system in a way that achieves tolerance to transient faults in both the state transition and the error-correcting mechanisms. By combining this methodology with low-complexity error-correcting coding, we then obtain an efficient way of providing fault tolerance to k identical unreliable LFSMs that operate in parallel on distinct input sequences. The overall construction requires only a constant amount of redundant hardware per machine (but sufficiently large k) to achieve an arbitrarily small probability of overall failure for any prespecified (finite) time interval, leading in this way to a lower bound on the computational capacity of unreliable LFSMs.
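
As a much-simplified illustration of the redundancy idea (a toy repetition-code stand-in, not the paper's coded construction, and with the voter assumed fault free, which is precisely the assumption the paper works to remove): three copies of an LFSM over GF(2) suffer independent transient bit flips, and a bitwise majority vote restores the shared state after every step. The matrix A, fault rate, and run length are all illustrative.

```python
import random

# Toy repetition-code sketch (not the paper's construction; the majority
# voter here is assumed fault free). Three replicas of an LFSM over GF(2)
# suffer independent transient bit flips; a bitwise majority vote restores
# the shared state after every step.

A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]      # illustrative transition matrix

def step(x, u, fault_prob):
    """One LFSM update x <- A x + u (mod 2), then transient bit-flip faults."""
    y = [(sum(A[i][j] & x[j] for j in range(3)) + u) % 2 for i in range(3)]
    return [b ^ (random.random() < fault_prob) for b in y]

def majority(states):
    """Bitwise majority vote across the three replicas."""
    return [int(sum(col) >= 2) for col in zip(*states)]

random.seed(1)
inputs = [random.randint(0, 1) for _ in range(10_000)]

ref = [0, 0, 1]                            # fault-free reference run
for u in inputs:
    ref = step(ref, u, 0.0)

copies = [[0, 0, 1] for _ in range(3)]     # fault-prone replicas + voting
for u in inputs:
    copies = [step(x, u, 1e-4) for x in copies]
    copies = [majority(copies)] * 3        # correct errors every step
print("agrees with fault-free reference:", copies[0] == ref)
```

Because a fresh vote runs every step, single-replica errors never accumulate; a failure requires two replicas to flip the same state bit in the same step.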

47 citations

Journal ArticleDOI
TL;DR: A non-trivial analysis of a simple randomized polynomial-time approximation algorithm for this problem, which is “equivalent” to the set multicover problem, yields an expected approximation ratio E[r(a, k)] that is an increasing function of a/k.

47 citations

Proceedings Article
01 Jan 2004
TL;DR: It is shown that a non-trivial analysis of a simple randomized polynomial-time approximation algorithm for this problem yields an expected approximation ratio E[r(a, k)] that is an increasing function of a/k.
Abstract: In this paper we investigate the computational complexity of a combinatorial problem that arises in the reverse engineering of protein and gene networks. Our contributions are as follows:

- We abstract a combinatorial version of the problem and observe that it is equivalent to the set multicover problem when the coverage factor k is a function of the number of elements n of the universe. An important special case for our application is the case in which k = n - 1.
- We observe that the standard greedy algorithm produces an approximation ratio of Ω(log n) even if k is large, i.e., k = n - c for some constant c > 0.
- Let 1 < a < n denote the maximum number of elements in any given set in our set multicover problem. We then show that a non-trivial analysis of a simple randomized polynomial-time approximation algorithm for this problem yields an expected approximation ratio E[r(a, k)] that is an increasing function of a/k. The behavior of E[r(a, k)] is roughly as follows: it is about ln(a/k) when a/k is at least about e^2 ≈ 7.39, and for smaller values of a/k it decreases towards 2 exponentially with increasing k, with lim_{a/k → 0} E[r(a, k)] < 2. Our randomized algorithm is a cascade of a deterministic and a randomized rounding step, parameterized by a quantity β, followed by a greedy solution for the remaining problem.
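
For reference, the greedy baseline the second bullet refers to is easy to state in code. The sketch below is an illustrative implementation of standard greedy set multicover, not the paper's randomized rounding algorithm: every element must be covered by at least k distinct chosen sets, and greedy repeatedly takes the unused set covering the most still-deficient elements. The example instance is my own.

```python
# Illustrative greedy baseline for set multicover (the paper's contribution
# is the randomized rounding algorithm and its analysis, not reproduced
# here). Each element must be covered by at least k distinct chosen sets.

def greedy_multicover(universe, sets, k):
    need = {e: k for e in universe}          # remaining coverage per element
    unused = list(range(len(sets)))
    chosen = []
    while any(v > 0 for v in need.values()):
        if not unused:
            raise ValueError("instance is infeasible")
        best = max(unused, key=lambda i: sum(need[e] > 0 for e in sets[i]))
        if sum(need[e] > 0 for e in sets[best]) == 0:
            raise ValueError("instance is infeasible")
        unused.remove(best)
        chosen.append(best)
        for e in sets[best]:
            if need[e] > 0:
                need[e] -= 1
    return chosen

universe = range(6)
sets = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}, {1, 3, 5}, {0, 3}, {1, 4}, {2, 5}]
print(greedy_multicover(universe, sets, 2))  # indices of the chosen sets
```

The quoted Ω(log n) bound says this baseline stays logarithmic even for large k, which is what motivates the randomized rounding approach.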

47 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...For tight analysis, we found that the standard Chernoff bounds such as in [1, 3, 12, 18] were not always sufficient and hence we had to devise more appropriate bounds for certain parameter ranges....


  • ...By using standard Chernoff’s bound [1, 3, 12], we have...

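The quoted passages refer to the standard multiplicative Chernoff bound. As a quick numerical look at why such bounds can be loose in some parameter ranges (my own comparison, unrelated to the sharper bounds the citing paper devises), the following compares the bound with the exact binomial upper tail:

```python
import math

# Compare the standard multiplicative Chernoff bound
#   P(S >= (1 + d) * mu) <= (e^d / (1 + d)^(1 + d))^mu,  S ~ Binomial(n, p),
# with the exact binomial upper tail, for mu = n * p.

def exact_upper_tail(n, p, t):
    """P(Binomial(n, p) >= t), summed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(t, n + 1))

n, p = 200, 0.05                 # illustrative parameters
mu = n * p
for d in (0.5, 1.0, 2.0, 4.0):
    t = math.ceil((1 + d) * mu)
    bound = (math.exp(d) / (1 + d) ** (1 + d)) ** mu
    print(f"d={d}: exact={exact_upper_tail(n, p, t):.3e}  bound={bound:.3e}")
```

At these parameters the bound overshoots the exact tail by a factor that grows with d, which is the kind of gap that motivates range-specific bounds.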
