Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to this form, and that, for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
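To make the tail approximation in the abstract concrete, here is a minimal numerical sketch (my own illustration, not from the paper) for Bernoulli observations: it minimizes the moment generating function of $X - a$ over $t$ and, for this distribution, checks the result against the Kullback-Leibler closed form that the index is known to take.

```python
import math

# Chernoff's result: P(S_n <= n*a) behaves roughly like m**n, where
# m = min_t E[exp(t*(X - a))] is the minimum of the moment generating
# function of X - a.  Example for X ~ Bernoulli(q) with a < q.

def mgf_shifted(t, q, a):
    """E[exp(t*(X - a))] for X ~ Bernoulli(q)."""
    return math.exp(-t * a) * ((1 - q) + q * math.exp(t))

def chernoff_index(q, a, steps=20000):
    """Minimize the shifted MGF over a grid of t in [-10, 0] (lower tail)."""
    return min(mgf_shifted(-10 + 10 * i / steps, q, a) for i in range(steps + 1))

def kl(a, q):
    """KL divergence between Bernoulli(a) and Bernoulli(q)."""
    return a * math.log(a / q) + (1 - a) * math.log((1 - a) / (1 - q))

q, a = 0.5, 0.3
m = chernoff_index(q, a)
# For Bernoulli, the index has the closed form m = exp(-KL(a || q)),
# so P(S_n <= 0.3 n) under q = 0.5 decays like exp(-n * KL(a || q)).
print(m, math.exp(-kl(a, q)))  # both ≈ 0.921
```

With two such tests, the relative efficiency $e = \log \rho_1 / \log \rho_2$ from the abstract is just the ratio of the logarithms of the two indices computed this way.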
Citations
Journal ArticleDOI
TL;DR: The sensitivity limit for one-lane DNA sequencing based on laser-induced fluorescence is derived by statistical means, and a closed-form expression is given for the sequencing error probability as a function of the signal strength and of the differences in the fluorescence properties of the labels.
Abstract: The sensitivity limit for one-lane DNA sequencing based on laser-induced fluorescence is derived by statistical means. A closed-form expression for the sequencing error probability as a function of the signal strength and of the differences in the fluorescence properties of the labels is given. Fluctuations of the signal caused by photodestruction of the molecules cannot be neglected. The expected sequencing error probability is expressed in terms of the number of molecules and the physical parameters of the measurement. This equation can be used in many ways to design and optimize sequencing experiments.

52 citations

Journal ArticleDOI
TL;DR: The Cramér-Rao inequality provides, under certain regularity conditions, a lower bound for the variance of an estimator as discussed by the authors; the results are used to define discrimination efficiency and estimation efficiency at a point in parameter space.
Abstract: The Cramér-Rao inequality provides, under certain regularity conditions, a lower bound for the variance of an estimator [7], [15]. Various generalizations, extensions and improvements in the bound have been made, by Barankin [1], [2], Bhattacharyya [3], Chapman and Robbins [5], Fraser and Guttman [11], Kiefer [12], and Wolfowitz [16], among others. Further considerations of certain inequality properties of a measure of information, discussed by Kullback and Leibler [14], yield a greater lower bound for the information measure (formula (4.11)), and lead to a result which may be considered a generalization of the Cramér-Rao inequality, the latter following as a special case. The results are used to define discrimination efficiency and estimation efficiency at a point in parameter space.
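As a quick illustration of the bound being generalized above (my own example, not from the paper): for $X \sim \mathrm{Bernoulli}(p)$ the Fisher information per observation is $I(p) = 1/(p(1-p))$, so any unbiased estimator from $n$ i.i.d. samples has variance at least $p(1-p)/n$, and the sample mean attains it.

```python
import random
import statistics

# Cramér-Rao sketch for X ~ Bernoulli(p): Fisher information per
# observation is I(p) = 1/(p(1-p)), so an unbiased estimator from n
# samples has variance >= p(1-p)/n.  The sample mean is unbiased and
# attains this bound; we check that empirically.

random.seed(0)
p, n, trials = 0.3, 200, 5000
estimates = [sum(random.random() < p for _ in range(n)) / n
             for _ in range(trials)]

crlb = p * (1 - p) / n                    # 0.21 / 200 = 0.00105
emp_var = statistics.variance(estimates)  # empirical variance of the MLE
```

The empirical variance comes out close to the Cramér-Rao lower bound, as the theory predicts for this efficient estimator.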

52 citations

Proceedings Article
01 Jan 2004
TL;DR: A protocol, Broadcast Authentication Streams (BAS), is introduced that overcomes the obstacles to authenticating broadcast packet communications and provides a simple, efficient scheme based on a new technique called selective verification; the protocol is analyzed theoretically, experimentally, and architecturally.
Abstract: Authenticating broadcast packet communications poses a challenge that cannot be addressed efficiently with public key signatures on each packet, or securely with the use of a pre-distributed shared secret key, or practically with unicast tunnels. Unreliability is an intrinsic problem: many broadcast protocols assume that some information will be lost, making it problematic to amortize the cost of a single public key signature across multiple packets. Forward Error Correction (FEC) can compensate for loss of packets, but denial of service risks prevent the naive use of both public keys and FEC in authentication. In this paper we introduce a protocol, Broadcast Authentication Streams (BAS), that overcomes these barriers and provides a simple and efficient scheme for authenticating broadcast packet communications based on a new technique called selective verification. We analyze BAS theoretically, experimentally, and architecturally.
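A rough toy model of the selective-verification idea (my own sketch; the parameters k and p and the flooding model are assumptions, not the BAS specification): the sender repeats a signed packet k times, and the receiver verifies each incoming packet only with probability p. A legitimate packet is then verified with probability $1 - (1-p)^k$, while an attacker flooding n junk packets costs the receiver only about $pn$ signature checks.

```python
import random

# Toy model of selective verification (illustrative assumptions only):
#   - sender repeats each signed packet k times,
#   - receiver verifies each incoming packet with probability p.
# Legitimate traffic: verified with probability 1 - (1-p)**k.
# Flooding attack: n junk packets cost only about p*n verifications.

random.seed(1)
p, k, n_junk, trials = 0.2, 20, 10_000, 2_000

verified = sum(any(random.random() < p for _ in range(k))
               for _ in range(trials))
prob_verified = verified / trials      # near 1 - 0.8**20 ≈ 0.988

junk_checks = sum(random.random() < p for _ in range(n_junk))
work_per_junk = junk_checks / n_junk   # near p = 0.2
```

The Chernoff-style tail bounds cited below are what make the "with high probability" half of this trade-off precise.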

52 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...Chernoff’s bound [2] now provides crisp estimates for the tail probability,...

    [...]

Journal ArticleDOI
TL;DR: This work focuses on the case when it is enough for the verifier to know that the answer is close to correct, and develops an approximate PCP model; approximate PCPs are constructed for several optimization problems in which the verifier's total running time is significantly less than the size of the input.
Abstract: We investigate the question of when a verifier, with the aid of a proof, can reliably compute a function faster than it can without the proof. The proof system model that we use is based on a variant of the Probabilistically Checkable Proofs (PCP) model, in which a verifier can ascertain the correctness of the proof by looking at very few locations in the proof. However, known results in the PCP model require that the verifier spend time linear in the size of the input in order to determine where to query the proof. In this work, we focus on the case when it is enough for the verifier to know that the answer is close to correct, and develop an approximate PCP model. We construct approximate PCPs for several optimization problems, in which the total running time of the verifier is significantly less than the size of the input. For example, we give polylogarithmic time approximate PCPs for showing the existence of a large cut, or a large matching in a graph, and a small bin packing. In the process, we develop a set of tools for use in constructing these proof systems.

52 citations

Journal ArticleDOI
TL;DR: This work uses statistical detection theory in a continuous-time environment to provide a new perspective on calibrating a concern about robustness or an aversion to ambiguity when the decision maker repeatedly confronts uncertainty about state transition dynamics and about a prior distribution over unobserved states or parameters.

52 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...The innovation to the continuation value is $(B'\lambda + G'H) \cdot dW_t$, and the drift is $\eta_t = \lambda \cdot (AX_t) + H \cdot (DX_t + F)$....

    [...]

  • ...We could induce time dependence by initializing the state covariance matrix away from its invariant limit, but this induces only a smooth alteration in uncertainty prices that does not depend on the data realizations....

    [...]

References