
Showing papers on "Sequential probability ratio test published in 1977"


Journal ArticleDOI
TL;DR: The structure of a sensor failure detection and identification system designed for the NASA F-8 DFBW aircraft is outlined, and preliminary simulation results indicate good behavior of the analytic decision statistic, based on the sequential probability ratio test.
Abstract: In this paper, we outline the structure of a sensor failure detection and identification system designed for the NASA F-8 DFBW aircraft. The system is for use in a dual-redundant environment, and it takes maximal advantage of all functional relationships among the sensed variables. The identification logic uses the sequential probability ratio, which provides a useful on-line measure of confidence in the various forms of analytic redundancy. Preliminary simulation results indicate good behavior of the analytic decision statistic, based on the sequential probability ratio test.

261 citations
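
As a rough, self-contained illustration of the kind of decision statistic mentioned in the abstract above (not the authors' implementation), the following sketch runs Wald's SPRT on a sensor residual assumed to be Gaussian: mean zero when the sensor is healthy, mean mu1 after a failure. The residual model, parameter names, and error rates are assumptions made for the sketch.

```python
import numpy as np

def sprt_failure_detector(residuals, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald SPRT on a sensor residual: H0 (no failure, mean 0) vs H1 (failure, mean mu1).

    Hypothetical illustration; the thresholds use Wald's approximations
    A ~ (1 - beta) / alpha and B ~ beta / (1 - alpha).
    """
    upper = np.log((1 - beta) / alpha)   # cross above: declare a failure
    lower = np.log(beta / (1 - alpha))   # cross below: declare the sensor healthy
    llr = 0.0
    for k, r in enumerate(residuals, start=1):
        # log-likelihood-ratio increment for N(mu1, sigma^2) vs N(0, sigma^2)
        llr += (mu1 * r - 0.5 * mu1 ** 2) / sigma ** 2
        if llr >= upper:
            return "failure", k
        if llr <= lower:
            return "no failure", k
    return "undecided", len(residuals)
```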


Journal ArticleDOI
TL;DR: In this article, the authors show that combinations of one-sided sequential probability ratio tests (SPRTs) are nearly optimal for decision problems involving finitely many possible distributions, minimizing expected sample sizes to within o(1) asymptotically subject to error probability constraints.
Abstract: Combinations of one-sided sequential probability ratio tests (SPRT's) are shown to be "nearly optimal" for problems involving a finite number of possible underlying distributions. Subject to error probability constraints, expected sample sizes (or weighted averages of them) are minimized to within o(1) asymptotically. For sequential decision problems, simple explicit procedures are proposed which "do exactly what a Bayes solution would do" with probability approaching one as the cost per observation, c, goes to zero. Exact computations for a binomial testing problem show that efficiencies of about 97% are obtained in some "small-sample" cases.

80 citations
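
A minimal sketch of the idea of combining one-sided SPRTs for finitely many candidate distributions, here Bernoulli parameters: sampling stops the first time one candidate's cumulative log-likelihood leads every other candidate by a fixed margin. The margin and stopping rule below are illustrative stand-ins, not the paper's asymptotically optimal constants.

```python
import numpy as np

def finite_hypothesis_sprt(xs, ps, margin):
    """Accept candidate p_i the first time its cumulative log-likelihood exceeds
    that of every other candidate by `margin` (roughly log(1 / error probability)).

    `xs` is a sequence of 0/1 observations; `ps` lists the candidate Bernoulli
    parameters.  Illustrative sketch only.
    """
    ps = np.asarray(ps, dtype=float)
    loglik = np.zeros(len(ps))
    for n, x in enumerate(xs, start=1):
        loglik += x * np.log(ps) + (1 - x) * np.log(1 - ps)
        for i in range(len(ps)):
            if np.all(loglik[i] - np.delete(loglik, i) >= margin):
                return i, n          # index of the accepted hypothesis, sample size
    return None, len(xs)             # no decision within the data supplied
```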


Journal ArticleDOI
TL;DR: It is shown here that a properly truncated SPRT eliminates the occasionally very large sample sizes that are an undesirable feature of the sequential probability ratio test; truncating the SPRT at the sample size needed for the corresponding fixed-sample-size (FSS) test serves as the remedy.

37 citations
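
A minimal sketch of the remedy described above, for the simple case of a normal mean with known variance: Wald's SPRT is run, but sampling is cut off at the sample size of the corresponding fixed-sample-size test, with the sign of the log-likelihood ratio deciding at truncation. Parameter names and error rates are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import norm

def truncated_sprt(xs, mu0, mu1, sigma, alpha=0.05, beta=0.10):
    """SPRT for H0: mean mu0 vs H1: mean mu1 (sigma known), truncated at the
    sample size of the corresponding fixed-sample-size (FSS) test."""
    # sample size of the corresponding one-sided FSS test
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    n_fss = int(np.ceil(((z_a + z_b) * sigma / abs(mu1 - mu0)) ** 2))

    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for k, x in enumerate(xs[:n_fss], start=1):
        # log-likelihood-ratio increment for N(mu1, sigma^2) vs N(mu0, sigma^2)
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    # truncation: decide by the sign of the log-likelihood ratio
    return ("accept H1" if llr > 0 else "accept H0"), min(len(xs), n_fss)
```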


Journal ArticleDOI
TL;DR: Both analytic and simulation results show that the proposed treatment allocation rule is an improvement over rules previously proposed, and the methodology contained herein can be used to construct near-optimal rules in other testing contexts.
Abstract: The problem of comparing two medical treatments with respect to survival is considered. Treatment outcome is assumed to follow an exponential distribution. The ratio of expected survivals associated with the two treatments is the clinical parameter of interest. A nuisance parameter is present, but it is removed by an invariance reduction, and a sequential probability ratio test is applied to the invariant likelihood ratio. A class of data-dependent treatment assignment rules is identified over which the probability of correct treatment selection at the termination of the trial is approximately constant. A cost function, the weighted sum of the total number of patients in the trial and the number assigned to the inferior treatment, is introduced, and a treatment allocation rule conjectured to minimize the expected cost is constructed. Both analytic and simulation results show that it is an improvement over rules previously proposed. The methodology contained herein can be used to construct near-optimal rules in other testing contexts.

27 citations
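
A hedged sketch of the invariance idea for paired exponential survival data (simplified: equal, paired allocation rather than the paper's adaptive rule; rho here denotes the ratio of the two hazard rates): W_i = X_i / (X_i + Y_i) has a density that depends only on rho, so Wald's SPRT can be applied to the W_i without knowing the common scale.

```python
import numpy as np

def paired_exponential_sprt(xs, ys, rho0, rho1, alpha=0.05, beta=0.05):
    """SPRT on a scale-invariant statistic for paired exponential survival times.

    With X ~ Exp(rate a), Y ~ Exp(rate b) and rho = a / b, the statistic
    W = X / (X + Y) has density rho / (rho*w + 1 - w)^2, free of the nuisance
    scale.  Sketch only; not the paper's adaptive allocation design.
    """
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))

    def logpdf(w, rho):
        return np.log(rho) - 2.0 * np.log(rho * w + (1.0 - w))

    llr = 0.0
    for k, (x, y) in enumerate(zip(xs, ys), start=1):
        w = x / (x + y)
        llr += logpdf(w, rho1) - logpdf(w, rho0)
        if llr >= upper:
            return "accept rho1", k
        if llr <= lower:
            return "accept rho0", k
    return "undecided", min(len(xs), len(ys))
```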


Journal ArticleDOI
TL;DR: Using a sequential probability ratio test (SPRT), the performance of optimum quantizers is compared to that of systems with unquantized data, and the relation between the resulting asymptotic relative efficiencies and those of fixed-sample-size detectors is noted.
Abstract: The quantization of the observed data for sequential signal detection is studied. The criteria used are the minimizations of the average sample number under the hypothesis, the average sample number under the alternative, and the maximum average sample number. Numerical results show that the performance is not very sensitive to different criteria. Using a sequential probability ratio test (SPRT), the performance of optimum quantizers is compared to that of systems with unquantized data. The asymptotic relative efficiencies of the quantizer SPRTs with respect to the SPRT for unquantized data are derived for symmetric noise densities. The relation between these asymptotic relative efficiencies and those of fixed-sample-size detectors is noted.

18 citations
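
A minimal sketch of a quantizer-SPRT for a Gaussian shift model N(theta, 1): observations are mapped to quantization cells defined by fixed breakpoints, and the SPRT runs on the induced cell probabilities. The breakpoints are taken as given here; the optimal breakpoint selection studied in the paper is not performed.

```python
import numpy as np
from scipy.stats import norm

def quantizer_sprt(xs, thresholds, theta0, theta1, alpha=0.05, beta=0.05):
    """SPRT on quantized observations under a N(theta, 1) shift model.

    `thresholds` is a sorted list of breakpoints defining the quantization
    cells.  Sketch only; the paper's optimization of the breakpoints (and of
    the ASN criteria) is not reproduced.
    """
    edges = np.concatenate(([-np.inf], thresholds, [np.inf]))
    p0 = np.diff(norm.cdf(edges - theta0))   # cell probabilities under H0
    p1 = np.diff(norm.cdf(edges - theta1))   # cell probabilities under H1

    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for k, x in enumerate(xs, start=1):
        cell = np.searchsorted(thresholds, x)   # index of the quantization cell
        llr += np.log(p1[cell]) - np.log(p0[cell])
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "undecided", len(xs)
```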


Journal ArticleDOI
TL;DR: In this paper, conditions are given under which a sequence of log-likelihood-ratio processes for sequential sampling converges weakly to a Wiener process with drift, the drift depending on which hypothesis, in a suitable neighborhood of a null hypothesis, prevails.
Abstract: Material in Chapter VI of Hajek and Sidak's book is extended to a sequential analysis setting: conditions are given under which a sequence of log-likelihood-ratio processes (log-likelihood-ratios for sequential sampling, represented as jump processes in continuous time) converges weakly to a Wiener process with drift, the drift parameter depending on which hypothesis, in a suitable neighborhood of a null hypothesis, prevails. Conditions for convergence of other "test statistic" processes, related to likelihood ratios, are also given. Asymptotic sequential tests can thereby be constructed. Some "two-sample problem" examples are treated.

17 citations
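
For orientation only, a standard form of this kind of limit (a generic local-asymptotics sketch, not the paper's exact conditions or statement): writing ℓ_n(t) for the log-likelihood ratio of θ_n = θ_0 + h/√n against θ_0 based on the first ⌊nt⌋ observations,

```latex
\ell_n(t) \;\Rightarrow\; h\sqrt{I(\theta_0)}\,W(t) \;-\; \tfrac{1}{2} h^2 I(\theta_0)\, t
  \quad \text{under } \theta_0 ,
\qquad
\ell_n(t) \;\Rightarrow\; h\sqrt{I(\theta_0)}\,W(t) \;+\; \tfrac{1}{2} h^2 I(\theta_0)\, t
  \quad \text{under } \theta_n ,
```

where W is a standard Wiener process and I(θ_0) is the Fisher information; asymptotic sequential tests can then be calibrated from the exit probabilities of a Wiener process with drift.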


Journal ArticleDOI
TL;DR: In this article, a condition on the moment-generating functions of the random walk model derived by Link and Heath (1975) is discussed; the condition is necessary and sufficient for the model to yield the sequential probability ratio test of Wald (1947) and the reaction-time model proposed by Stone (1960).

11 citations
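
As a hedged aside on why moment-generating functions enter (the exact necessary-and-sufficient condition in the paper may be stated differently): an increment Z = Z(X) of a random walk driven by observations X with density f_0 is a log-likelihood-ratio increment, i.e. Z = log(f_1(X)/f_0(X)) for some density f_1, precisely when

```latex
\mathbb{E}_{f_0}\!\left[ e^{Z} \right] \;=\; \int e^{Z(x)}\, f_0(x)\, dx \;=\; 1 ,
```

since one may then take f_1 = e^{Z} f_0; random walks of exactly this type reproduce Wald's SPRT statistic.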


Journal ArticleDOI
TL;DR: In this article, an optimum partial sequential procedure for testing a null hypothesis concerning the binomial parameter with a two-sided alternative hypothesis is described, and formulas for its operating characteristic and average sample number functions are derived.
Abstract: An optimum partial sequential procedure for testing a null hypothesis concerning the binomial parameter with a two-sided alternative hypothesis is described. Formulas for its operating characteristic and average sample number functions are derived. By approximating an Armitage procedure by a special case of this partial procedure, approximate values can be obtained for his operating characteristic and average sample number functions.

10 citations
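
The operating characteristic (OC) and average sample number (ASN) functions in the paper are derived analytically for the partial sequential procedure; the sketch below instead estimates the OC and ASN of a plain Wald SPRT of p0 versus p1 by Monte Carlo, just to make the two quantities concrete. All parameter names and defaults are assumptions for the sketch.

```python
import numpy as np

def sprt_oc_asn(p, p0, p1, alpha=0.05, beta=0.05,
                n_sims=10_000, n_max=100_000, seed=None):
    """Monte-Carlo estimates of OC(p) = P(accept H0 | p) and ASN(p) = E[N | p]
    for a plain binomial SPRT of p0 vs p1 (not the paper's partial procedure)."""
    rng = np.random.default_rng(seed)
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    step_success = np.log(p1 / p0)               # LLR increment for a success
    step_failure = np.log((1 - p1) / (1 - p0))   # LLR increment for a failure

    accept_h0, sizes = 0, []
    for _ in range(n_sims):
        llr, n = 0.0, 0
        while lower < llr < upper and n < n_max:
            llr += step_success if rng.random() < p else step_failure
            n += 1
        accept_h0 += llr <= lower
        sizes.append(n)
    return accept_h0 / n_sims, float(np.mean(sizes))

# e.g. sprt_oc_asn(0.5, p0=0.4, p1=0.6) estimates the OC and ASN midway
# between the two hypotheses.
```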


Journal ArticleDOI
TL;DR: In this article, a method is presented for the sequential analysis of experiments involving two treatments to which response is dichotomous, based upon a generalization of Bartlett's (1946) procedure for using the maximum likelihood estimate of a nuisance parameter in a sequential probability ratio test (SPRT).
Abstract: A method is presented for the sequential analysis of experiments involving two treatments to which response is dichotomous. Composite hypotheses about the difference in success probabilities are tested, and covariate information is utilized in the analysis. The method is based upon a generalization of Bartlett's (1946) procedure for using the maximum likelihood estimate of a nuisance parameter in a Sequential Probability Ratio Test (SPRT). Treatment assignment rules studied include pure randomization, randomized blocks, and an adaptive rule which tends to assign the superior treatment to the majority of subjects. It is shown that the use of covariate information can result in important reductions in the expected sample size for specified error probabilities, and that the use of covariate information is essential for the elimination of bias when adaptive assignment rules are employed. Designs of the type presented are easily generated, as the termination criterion is the same as for a Wald SPRT of simple hypotheses.

9 citations
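
A minimal sketch of the Bartlett-style idea of substituting the maximum likelihood estimate of the nuisance parameter into the sequential likelihood ratio, phrased here in terms of a log-odds-ratio parameter psi with the control log-odds lam as nuisance. The parametrization, function names, and bounds are assumptions; the paper's exact generalization (and its covariate adjustment) is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def loglik(psi, lam, data):
    """Log-likelihood for dichotomous responses under two treatments.

    `data` is a list of (treatment, response) pairs with both entries in {0, 1};
    treatment 0 has success log-odds lam, treatment 1 has log-odds lam + psi.
    """
    ll = 0.0
    for t, y in data:
        p = 1.0 / (1.0 + np.exp(-(lam + psi * t)))
        ll += y * np.log(p) + (1 - y) * np.log(1 - p)
    return ll

def profile_loglik(psi, data):
    """Maximize the log-likelihood over the nuisance log-odds lam for fixed psi."""
    res = minimize_scalar(lambda lam: -loglik(psi, lam, data),
                          bounds=(-10.0, 10.0), method="bounded")
    return -res.fun

def bartlett_sprt_statistic(psi0, psi1, data):
    """Bartlett-style log statistic: profile out the nuisance under each
    hypothesized log-odds ratio and take the difference.  After each new
    subject, compare with Wald's boundaries log((1-beta)/alpha) and
    log(beta/(1-alpha)).  Sketch only."""
    return profile_loglik(psi1, data) - profile_loglik(psi0, data)
```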


Journal ArticleDOI
TL;DR: A class of algorithms for detecting abnormally short-holding-time trunks has been developed that utilizes individual trunk data available in EADAS/ICUR; the focus is on the modeling and analysis aspects of the problem, with only slight attention paid to the various trade-offs and real-world constraints encountered in implementing the algorithms.
Abstract: A class of algorithms for detecting abnormally short-holding-time trunks has been developed that utilizes individual trunk data available in EADAS/ICUR (Engineering and Administrative Data Acquisition System/Individual Circuit Usage Recorder). This data consists of a two-dimensional statistic that compresses the raw trunk measurements, the state of the trunk (busy or idle) sampled every 100 or 200 seconds, into a manageable form. Because this data is essentially a sufficient statistic for the stochastic process used to model the (unobservable) trunk state measurements, one of the algorithms developed is Wald's sequential probability ratio test. Two of the algorithms developed have been implemented in ICAN (Individual Circuit Analysis Program) and are currently being used to test trunks associated with No. 1 crossbar, No. 5 crossbar, crossbar tandem (1XB, 5XB, XBT), and step-by-step switching machines. The focus in this paper, however, is on the modeling and analysis aspects of the problem, and only slight attention is paid to the various trade-offs and real-world constraints encountered in implementing the algorithms.

4 citations
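
A deliberately simplified sketch of how an SPRT can flag short-holding-time trunks from periodic busy/idle scans. The model below is an assumption made for the sketch, not the paper's: holding times are exponential with mean theta, and a trunk found busy at one scan is still busy dt seconds later with probability exp(-dt/theta), ignoring new call arrivals in between.

```python
import numpy as np

def short_holding_time_sprt(still_busy, dt, theta_normal, theta_short,
                            alpha=0.01, beta=0.01):
    """SPRT between a normal and an abnormally short mean holding time.

    `still_busy` is the sequence of 0/1 indicators "trunk found busy on the
    previous scan is still busy on this scan".  Simplified illustrative model.
    """
    q0 = np.exp(-dt / theta_normal)   # P(still busy) under the normal mean
    q1 = np.exp(-dt / theta_short)    # P(still busy) under the short mean
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for k, b in enumerate(still_busy, start=1):
        llr += np.log(q1 / q0) if b else np.log((1 - q1) / (1 - q0))
        if llr >= upper:
            return "flag trunk", k
        if llr <= lower:
            return "trunk normal", k
    return "undecided", len(still_busy)
```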


Book ChapterDOI
01 Jan 1977
TL;DR: In this article, the authors consider the stopping time N of a sequential probability ratio test based on ranks for the following problem: X and Y are real-valued, independent random variables with distribution functions F and G, respectively, and the hypothesis G = F is tested sequentially against the Lehmann alternative G = F^A for some given A ≠ 1.
Abstract: This chapter discusses obstructive distributions in a sequential rank-order test based on Lehmann alternatives. Savage and Sethuraman considered the stopping time N of a sequential probability ratio test based on ranks for the following problem: X and Y are real-valued, independent random variables with distribution functions F and G, respectively, and the hypothesis G = F is to be tested sequentially against the Lehmann alternative G = F^A for some given A ≠ 1. To this end, independent observations (X_1, Y_1), (X_2, Y_2), … on (X, Y) are taken, and at each sampling stage all information in the sample is discarded except the ranks of the Y's among the X's and Y's. This reduces the composite hypotheses to simple ones. Exponential boundedness of N is a desirable property; in the contrary case, when N is not exponentially bounded under a distribution P, that P is called obstructive. Complete or partial results on exponential boundedness of N and on obstructive distributions have been obtained in several other testing situations.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the stopping time is always exponentially bounded when the null or alternative hypothesis holds, except in a trivial instance, under conditions which include invariant sequential probability ratio tests.
Abstract: It is shown, under conditions which include invariant sequential probability ratio tests, that the stopping time is always exponentially bounded when the null or alternative hypothesis holds, except in a trivial instance.