Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to a test that rejects when the sum of the observations falls below a threshold, and that for large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
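To see the approximation $P(S_n \leqq na) \approx m^n$ in action, here is a minimal numerical sketch (not from the paper): $X$ is taken Bernoulli with $P(X = 1) = 0.7$ and $a = 0.5$, both arbitrary illustrative choices, and the exact tail is compared with $m^n$, where $m$ is the minimum of the moment generating function of $X - a$; SciPy is assumed for the minimization.

```python
# Minimal numerical sketch (not from the paper): X ~ Bernoulli(p); p, a, n
# are arbitrary illustrative choices; SciPy is assumed for the minimization.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

p, a, n = 0.7, 0.5, 200

def mgf(t):
    # E[exp(t(X - a))] for Bernoulli X
    return np.exp(-t * a) * (1 - p + p * np.exp(t))

m = minimize_scalar(mgf, bounds=(-50, 50), method="bounded").fun  # m = min_t mgf(t)

exact = binom.cdf(int(n * a), n, p)          # P(S_n <= na), computable exactly here
print(f"exact tail      = {exact:.3e}")
print(f"Chernoff m^n    = {m**n:.3e}")
print(f"rates: {np.log(exact) / n:+.4f} (exact)  vs  {np.log(m):+.4f} (log m)")
```

The two rates agree up to a correction of order $\log n / n$, which is the polynomial prefactor the "behaves roughly like $m^n$" statement absorbs.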
Citations
Journal ArticleDOI
TL;DR: While the method of types is suitable primarily for discrete memoryless models, its extensions to certain models with memory are also discussed, and a wide selection of further applications is surveyed.
Abstract: The method of types is one of the key technical tools in Shannon theory, and this tool is valuable also in other fields. In this paper, some key applications are presented in sufficient detail enabling an interested nonspecialist to gain a working knowledge of the method, and a wide selection of further applications are surveyed. These range from hypothesis testing and large deviations theory through error exponents for discrete memoryless channels and capacity of arbitrarily varying channels to multiuser problems. While the method of types is suitable primarily for discrete memoryless models, its extensions to certain models with memory are also discussed.
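The hypothesis-testing application of the method of types (and the Chernoff-based statement quoted below) rests on the basic type-class estimate $P^n(T_Q) \approx e^{-n D(Q\|P)}$. The following is a minimal sketch of that estimate, with a 3-letter alphabet and distributions $P$, $Q$ chosen only for illustration:

```python
# Sketch of the type-class estimate P^n(T_Q) ≈ exp(-n D(Q||P)); the 3-letter
# alphabet and the distributions P, Q are arbitrary illustrative choices.
import numpy as np
from scipy.special import gammaln

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.2, 0.4, 0.4])      # a type realizable at each n used below
D = np.sum(Q * np.log(Q / P))      # KL divergence D(Q || P)

for n in (30, 300, 3000):
    k = np.rint(Q * n).astype(int)   # symbol counts of a sequence of type Q
    # log |T_Q| (multinomial coefficient) + log-probability of one such sequence
    log_prob = gammaln(n + 1) - gammaln(k + 1).sum() + (k * np.log(P)).sum()
    print(f"n={n:5d}   -log P^n(T_Q)/n = {-log_prob / n:.4f}   D(Q||P) = {D:.4f}")
```

The printed rate converges to $D(Q\|P)$ as $n$ grows, with the gap shrinking like $\log n / n$.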

473 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...Chernoff [20]: if a simple null-hypothesis is to be tested against a simple alternative, with an arbitrarily fixed upper bound on the type 1 error probability, the type 2 error probability can be made to decrease with exponential rate but not better....


Proceedings ArticleDOI
Miklós Ajtai
23 May 1998
TL;DR: In this paper, it was shown that the shortest vector problem in lattices with the $L_2$ norm is NP-hard for randomized reductions, and that finding a vector which is longer than the shortest non-zero vector by no more than a factor of $1 + 2^{-n^\epsilon}$ (with respect to the $L_2$ norm) is also NP-hard for randomized reductions, the corresponding decision problem being NP-complete for randomized reductions.
Abstract: We show that the shortest vector problem in lattices with the $L_2$ norm is NP-hard for randomized reductions. Moreover we also show that there is an absolute constant $\epsilon > 0$ so that to find a vector which is longer than the shortest non-zero vector by no more than a factor of $1 + 2^{-n^\epsilon}$ (with respect to the $L_2$ norm) is also NP-hard for randomized reductions. The corresponding decision problem is NP-complete for randomized reductions. 1. Introduction. A lattice in $R^n$ is the set of all integer linear combinations of $n$ fixed linearly independent vectors. The question of finding the shortest non-zero vector in a lattice with respect to the $L_\infty$ norm was proved to be NP-hard by van Emde Boas. However the corresponding problem for the $L_2$ norm (or any other $L_p$ norm for $1 \leq p < \infty$) remained unsolved. Van Emde Boas conjectured almost twenty years ago (cf. [vEB]) that the $L_2$ shortest vector problem for lattices in $Z^n$ is NP-hard and the corresponding decision problem is NP-complete. The $\alpha$-approximate version of the problem is the following: find a non-zero vector $v$ in the lattice $L$ so that its length is at most $\alpha\|v_0\|$, where $v_0$ is a shortest non-zero vector of the lattice. It has been shown by J. Lagarias, H. W. Lenstra and C. P. Schnorr (cf. [LLS]) that if the $\alpha$-approximate problem is NP-hard for any $\alpha > n^{1.5}$ (where $n$ is the dimension of the lattice) then NP = co-NP. According to a recent result of O. Goldreich and S. Goldwasser it is unlikely that the $\alpha$-approximation problem is NP-hard even for $\alpha = \sqrt{n}$, since this problem is in NP ∩ coAM (see [GG]). In this paper we show that the shortest vector problem is NP-hard for randomized reductions. That is, there is a probabilistic Turing machine which in polynomial time reduces any problem in NP to instances of the shortest vector problem. In other words this probabilistic Turing machine can solve in polynomial time any problem in NP, provided that it can use an oracle which returns the solution of the shortest vector problem if an instance of it is presented (by giving a basis of the corresponding lattice). We prove the same result about the $1 + 2^{-n^\epsilon}$-approximate problem where $\epsilon > 0$ is a sufficiently small absolute constant and $n$ is the dimension of the lattice. (Recently J.-Y. Cai and A. Nerurkar have …
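To make the definitions in the abstract concrete, here is a toy sketch (in no way the paper's reduction): it finds a shortest non-zero vector of a small, arbitrarily chosen 2-dimensional lattice by brute-force enumeration, which is exponential-time and viable only in tiny dimension, and reports the approximation factor $\alpha$ achieved by a basis vector.

```python
# Toy illustration of the shortest-vector and alpha-approximate notions above:
# exhaustive search over small coefficient vectors (exponential time, tiny
# dimension only); the basis B is an arbitrary example, not from the paper.
import itertools
import numpy as np

B = np.array([[4.0, 1.0], [2.0, 3.0]])       # rows are the basis vectors

def shortest_vector(B, bound=10):
    """Shortest non-zero lattice vector in the L2 norm, by enumerating all
    coefficient vectors in [-bound, bound]^n (this bound suffices for B above)."""
    n = B.shape[0]
    best, best_len = None, np.inf
    for c in itertools.product(range(-bound, bound + 1), repeat=n):
        if any(c):                           # skip the zero vector
            v = np.array(c) @ B
            if np.linalg.norm(v) < best_len:
                best, best_len = v, np.linalg.norm(v)
    return best, best_len

v0, l0 = shortest_vector(B)
print("shortest non-zero vector:", v0, " length:", round(l0, 3))   # ±(2, -2)
# A non-zero lattice vector v solves the alpha-approximate problem iff
# ||v|| <= alpha * ||v0||; e.g. the basis vector b1 = (4, 1) achieves:
print("alpha achieved by b1:", round(np.linalg.norm(B[0]) / l0, 3))
```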

471 citations

Journal ArticleDOI
TL;DR: In this article, the authors review the quantum fidelity approach to quantum phase transitions in a pedagogical manner, and relate all established but scattered results on the leading term of the fidelity into a systematic theoretical framework, which might provide an alternative paradigm for understanding quantum critical phenomena.
Abstract: We review the quantum fidelity approach to quantum phase transitions in a pedagogical manner. We try to relate all established but scattered results on the leading term of the fidelity into a systematic theoretical framework, which might provide an alternative paradigm for understanding quantum critical phenomena. The definition of the fidelity and the scaling behavior of its leading term, as well as their explicit applications to the one-dimensional transverse-field Ising model and the Lipkin–Meshkov–Glick model, are introduced at the graduate-student level. We also survey other types of fidelity approach, such as the fidelity per site, reduced fidelity, thermal-state fidelity, and operator fidelity, as well as relevant works on the fidelity approach to quantum phase transitions occurring in various many-body systems.
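As a concrete instance of the fidelity approach reviewed here, the following sketch (not from the review; the chain length, field values, and step $dg$ are arbitrary small-scale choices) computes the ground-state fidelity $F(g, g+dg) = |\langle\psi_0(g)|\psi_0(g+dg)\rangle|$ for a short transverse-field Ising chain by exact diagonalization; $F$ should dip near the critical point $g = 1$.

```python
# Exact-diagonalization sketch (not from the review) of the ground-state
# fidelity F(g, g+dg) for the transverse-field Ising chain
# H = -sum_i sz_i sz_{i+1} - g sum_i sx_i with periodic boundaries.
# Chain length N and step dg are arbitrary small-scale choices; NumPy only.
import numpy as np

N, dg = 8, 0.05
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator at site i of the N-site chain."""
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, op if j == i else I2)
    return out

def ground_state(g):
    H = np.zeros((2**N, 2**N))
    for i in range(N):
        H -= site_op(sz, i) @ site_op(sz, (i + 1) % N)   # Ising coupling
        H -= g * site_op(sx, i)                          # transverse field
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                                    # lowest-energy state

for g in (0.6, 0.8, 1.0, 1.2, 1.4):
    F = abs(ground_state(g) @ ground_state(g + dg))
    print(f"g = {g:.1f}   F(g, g+dg) = {F:.6f}")   # F dips near the critical g = 1
```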

456 citations

Proceedings ArticleDOI
24 May 1994
TL;DR: A taxonomy of different cache invalidation strategies is proposed, and it is determined that for units which are often disconnected (sleepers) the best cache invalidation strategy is based on signatures previously used for efficient file comparison, while for units which are connected most of the time (workaholics) the best cache invalidation strategy is based on the periodic broadcast of changed data items.
Abstract: In the mobile wireless computing environment of the future, a large number of users equipped with low-powered palmtop machines will query databases over wireless communication channels. Palmtop-based units will often be disconnected for prolonged periods of time due to battery power saving measures; palmtops will also frequently relocate between different cells and connect to different data servers at different times. Caching of frequently accessed data items will be an important technique for reducing contention on the narrow-bandwidth wireless channel. However, cache invalidation strategies will be severely affected by the disconnection and mobility of the clients: the server may no longer know which clients are currently residing under its cell and which of them are currently on. We propose a taxonomy of different cache invalidation strategies and study the impact of clients' disconnection times on their performance. We determine that for units which are often disconnected (sleepers) the best cache invalidation strategy is based on signatures previously used for efficient file comparison. On the other hand, for units which are connected most of the time (workaholics), the best cache invalidation strategy is based on the periodic broadcast of changed data items.
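For illustration, here is a minimal sketch in the spirit of the periodic invalidation-report strategies the abstract describes (every class name and parameter below is hypothetical, not the paper's): the server broadcasts the identifiers of items changed within a window $w$; a client disconnected longer than $w$ must drop its entire cache, which is exactly why a signature-based scheme wins for sleepers.

```python
# Illustrative sketch of a periodic invalidation-report scheme in the spirit of
# the strategies discussed above; every name and parameter here is hypothetical.
import time

BROADCAST_WINDOW = 10.0   # w: seconds of update history covered by one report

class Server:
    def __init__(self):
        self.update_log = []                    # (timestamp, item_id) pairs

    def write(self, item_id):
        self.update_log.append((time.time(), item_id))

    def invalidation_report(self):
        """Broadcast: ids of items updated within the last w seconds."""
        now = time.time()
        return now, {i for t, i in self.update_log if now - t <= BROADCAST_WINDOW}

class Client:
    def __init__(self):
        self.cache = {}                         # item_id -> cached value
        self.last_heard = time.time()

    def on_report(self, report_time, changed_ids):
        if report_time - self.last_heard > BROADCAST_WINDOW:
            self.cache.clear()                  # slept past the report horizon
        else:
            for item in changed_ids:
                self.cache.pop(item, None)      # selective invalidation
        self.last_heard = report_time

server, client = Server(), Client()
client.cache["x"] = 42
server.write("x")
client.on_report(*server.invalidation_report())
assert "x" not in client.cache                  # stale entry was invalidated
```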

454 citations

Journal ArticleDOI
TL;DR: A randomized linear-time algorithm to find a minimum spanning tree in a connected graph with edge weights is presented; the computational model is a unit-cost random-access machine with the restriction that the only operations allowed on edge weights are binary comparisons.
Abstract: We present a randomized linear-time algorithm to find a minimum spanning tree in a connected graph with edge weights. The algorithm uses random sampling in combination with a recently discovered linear-time algorithm for verifying a minimum spanning tree. Our computational model is a unit-cost random-access machine with the restriction that the only operations allowed on edge weights are binary comparisons.
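The random-sampling step can be illustrated with a short sketch (illustration only, not the paper's linear-time algorithm: Kruskal and a naive path-maximum query stand in for its Borůvka steps and linear-time verifier): sample half the edges, build a minimum spanning forest $F$ of the sample, discard every $F$-heavy edge (one heavier than every edge on the $F$-path between its endpoints, and hence excluded from the minimum spanning tree by the cycle property), and check that the surviving edges still yield the same tree.

```python
# Sketch of the random-sampling filter behind the algorithm (illustration only:
# Kruskal and a naive path-maximum query stand in for the paper's linear-time
# components; graph size and density are arbitrary).
import random

def kruskal(n, edges):
    """Minimum spanning forest via Kruskal; edges are (weight, u, v) triples."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    forest = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((w, u, v))
    return forest

def path_max(n, forest, u, v):
    """Heaviest edge weight on the u-v path in the forest (inf if no path)."""
    adj = {i: [] for i in range(n)}
    for w, a, b in forest:
        adj[a].append((b, w))
        adj[b].append((a, w))
    stack, seen = [(u, 0.0)], {u}
    while stack:
        x, mx = stack.pop()
        if x == v:
            return mx
        for y, w in adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append((y, max(mx, w)))
    return float("inf")

random.seed(1)
n = 60
edges = [(random.random(), u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < 0.2]                     # random weighted graph

sample = [e for e in edges if random.random() < 0.5]   # 1. flip a coin per edge
F = kruskal(n, sample)                                 # 2. MSF of the sample
light = [(w, u, v) for w, u, v in edges                # 3. drop F-heavy edges:
         if w <= path_max(n, F, u, v)]                 #    they can't be in the MST
print(f"{len(edges)} edges, {len(light)} remain after the F-heavy filter")
assert kruskal(n, light) == kruskal(n, edges)          # MST is unchanged
```

The sampling lemma guarantees that few $F$-light edges survive in expectation, which is what makes the real algorithm run in linear time once the path-maximum checks are done with the linear-time verifier.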

450 citations
