
Showing papers on "Sequential probability ratio test published in 1994"


Journal ArticleDOI
TL;DR: The sequential testing of more than two hypotheses has important applications in direct-sequence spread spectrum signal acquisition, multiple-resolution-element radar, and other areas and it is argued that the MSPRT approximates the much more complicated optimal test when error probabilities are small and expected stopping times are large.
Abstract: The sequential testing of more than two hypotheses has important applications in direct-sequence spread spectrum signal acquisition, multiple-resolution-element radar, and other areas. A useful sequential test, which we term the MSPRT, is studied in this paper. The test is shown to be a generalization of the sequential probability ratio test. Under Bayesian assumptions, it is argued that the MSPRT approximates the much more complicated optimal test when error probabilities are small and expected stopping times are large. Bounds on error probabilities are derived, and asymptotic expressions for the stopping time and error probabilities are given. A design procedure is presented for determining the parameters of the MSPRT. Two examples involving Gaussian densities are included, and comparisons are made between simulation results and asymptotic expressions. Comparisons with Bayesian fixed-sample-size tests are also made; on average, the MSPRT requires one-half to one-third as many samples.

296 citations
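Several of the entries on this page build on Wald's classical two-hypothesis SPRT, of which the MSPRT above is the multihypothesis generalization. As background, here is a minimal sketch of the classical test for a Gaussian mean shift; this is an illustrative reconstruction, not code from any paper listed, and the function name and parameters are ours.

```python
import math

def sprt_gaussian(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.

    Returns ('H0' or 'H1', samples used), or ('undecided', n) if the
    data run out before either boundary is crossed.
    """
    # Wald's approximate log-boundaries from the target error rates
    # (alpha = type I, beta = type II); overshoot is ignored.
    a = math.log(beta / (1 - alpha))      # accept H0 when LLR <= a
    b = math.log((1 - beta) / alpha)      # accept H1 when LLR >= b
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Log-likelihood-ratio increment for one Gaussian observation.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr <= a:
            return 'H0', n
        if llr >= b:
            return 'H1', n
    return 'undecided', len(samples)

# A run of observations near mu1 = 1 crosses the upper boundary quickly.
print(sprt_gaussian([1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4],
                    mu0=0.0, mu1=1.0, sigma=1.0))  # → ('H1', 8)
```

The sample size is a random variable here, which is exactly the sequential flavor the fixed-sample comparisons in these abstracts are measured against.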


Journal ArticleDOI
TL;DR: Neyman's smooth test, as discussed by the authors, is a well-known goodness-of-fit procedure for testing uniformity; it can be viewed as a compromise between omnibus test procedures, with generally low power in all directions, and procedures whose power is focused in the direction of a specific alternative.
Abstract: Neyman's smooth test for testing uniformity is a recognized goodness-of-fit procedure. As stated by LaRiccia, the test can be viewed as a compromise between omnibus test procedures, with generally low power in all directions, and procedures whose power is focused in the direction of a specific alternative. The basic idea behind this test is to embed the null density into, say, a k-dimensional exponential family and then to construct an asymptotically optimal test for the parametric testing problem. The resulting procedure is Neyman's test with k components. The most difficult problem related to using this test is the choice of k. Recommendations in the statistical literature are sometimes confusing. Some authors advocate a small number of components, whereas others show that in some situations a larger number of components is profitable. All existing suggestions on how to select k in fact exploit some preliminary knowledge about a possible alternative. In this article, a new data-driven met...

219 citations


Journal ArticleDOI
TL;DR: A conceptual generalization of Wald's sequential probability ratio test (SPRT) to a decentralized environment is presented, along with performance comparisons against the centralized scheme and other decentralized schemes.
Abstract: A conceptual generalization of Wald's sequential probability ratio test (SPRT) to a decentralized environment is presented. The local tests are chosen to be SPRTs, which quantize the temporally independent identically distributed (IID) observations into three levels for transmission to the fusion detector. Spatial independence among the local observations is assumed. The fusion test is also an SPRT. An analysis for the decentralized scheme is developed. Performance comparisons with the centralized and other decentralized schemes are presented.

67 citations


Journal ArticleDOI
TL;DR: The proposed multiple multistage hypothesis test tracking (MMHTT) algorithm extends tracks formed from sequentially detected target trajectory segments using a multiple hypothesis tracking strategy, and does not require a probabilistic target maneuver model.
Abstract: Sequential hypothesis testing is investigated for multiframe detection and tracking of low-observable maneuvering point-source targets in a digital image sequence. The proposed multiple multistage hypothesis test tracking (MMHTT) algorithm extends tracks formed from sequentially detected target trajectory segments using a multiple hypothesis tracking strategy. The MMHTT algorithm does not require a probabilistic target maneuver model. Computational efficiency is achieved by using a truncated sequential probability ratio test (SPRT) to prune a dense tree of candidate target trajectories and score the detected trajectory segments. An analytical performance evaluation is presented and confirmed by experimental results from an optical satellite tracking application.

65 citations


Journal ArticleDOI
TL;DR: The detection and diagnosis of changes in stationary dynamical systems via statistical methods, together with the use of input design to improve detection performance, are discussed, and the design techniques are extended to the general multiple-hypotheses case.
Abstract: The detection and diagnosis of changes in stationary dynamical systems via statistical methods and using input design to improve detection performance are discussed. A cumulative sum test to detect a change towards one of several hypotheses is obtained by exploiting connections with the sequential probability ratio test. For input design, the objectives are taken to be to decrease the detection time and, at the same time, to ensure a tolerable false alarm rate. Both off-line auxiliary inputs and on-line generation of the input signal by a linear output feedback are considered. The problem is first introduced for the two-hypotheses case and then the design techniques are extended to the general multiple-hypotheses case.

63 citations


Journal ArticleDOI
TL;DR: It is concluded that a random sequence is an excellent model for a PN sequence, and that significant degradation in performance can be expected if the test design is based on the zero sequence model rather than on the random sequence model.
Abstract: The use of a sequential probability ratio test (SPRT) for the acquisition of pseudonoise (PN) sequences in chip-synchronous direct-sequence spread-spectrum (DS/SS) systems is considered. The out-of-phase sequence is modeled as a random sequence and the probabilities of error and expected sample sizes for the corresponding test are derived. A different (and very commonly used) test is obtained if the out-of-phase sequence is modeled as a zero sequence. The probabilities of error and the expected sample sizes of both SPRTs are compared, and it is shown that the latter test has a significantly larger probability of type I error. Numerical evaluation of the performance of both tests applied to a PN sequence of period 2^10 − 1 gives results in agreement with the analytical results. We conclude that a random sequence is an excellent model for a PN sequence, and that significant degradation in performance can be expected if the test design is based on the zero sequence model rather than on the random sequence model.

42 citations


Journal ArticleDOI
TL;DR: The boundaries approach can now be used with a wide variety of test statistics, including those appropriate to the analysis of survival data, and can take various forms, although the use of straight lines still eases the underlying mathematical theory while at least approximating to the requirements of the majority of clinical trials.
Abstract: The earliest formal sequential procedure, the sequential probability ratio test, involved the plotting of certain test statistics and comparison with straight line parallel boundaries. The boundaries approach can now be used with a wide variety of test statistics, including those appropriate to the analysis of survival data. The boundaries can take various forms, although the use of straight lines still eases the underlying mathematical theory while at least approximating to the requirements of the majority of clinical trials. The implementation of sequential methods needs to be made flexibly and sensitively, with each clinical trial meriting an individualized approach.

21 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm was developed for determining exact values of the Operating Characteristic and Average Sample Number functions for SPRT when observations are drawn from a discrete, Koopman-Darmois density with positive probability on the nonnegative integers.
Abstract: An algorithm is developed for determining exact values of the Operating Characteristic and Average Sample Number functions for Wald's Sequential Probability Ratio Test (SPRT) when observations are drawn from a discrete, Koopman-Darmois density with positive probability on the nonnegative integers. The sample size distribution and percentiles of sample size are also considered. An example is given for the negative binomial distribution.

16 citations


Journal ArticleDOI
TL;DR: An interactive, menu-driven computer program is described to help investigators design and analyze phase II cancer clinical trials with a group sequential method, namely the sequential probability ratio test or the triangular test.

11 citations


01 Jan 1994
TL;DR: In this article, the Neyman-Pearson hypothesis test results in a decision (choice of action) justified not by any assessment of sample evidence, but by the pre-specified frequencies with which that procedure generates errors of the two possible types.
Abstract: A classical, Neyman-Pearson hypothesis test results in a decision (choice of action) justified not by any assessment of sample evidence, but by the pre-specified frequencies with which that procedure generates errors of the two possible types. By applying such a test in auditing, the hypothesis tested is accepted or rejected without the auditor having to consider whether the data observed confirms (in any degree), or disconfirms, that hypothesis. In contrast with the classical framework, the Bayesian approach is to evaluate the probability of the hypothesis tested conditional on the data observed, and then to make a decision on the basis of that revised probability. Decisions are thus evidence-based rather than rule-based. So as to compare the classical and Bayesian programs, a familiar test example is considered, and hypothetical data, which, on a classical view, marginally reject the auditee's stated account balance, are re-interpreted from a Bayesian, evidential perspective. The results of this comparison reveal that classical hypothesis tests in auditing do not have a consistent (from test-to-test) evidential basis, and, in Bayesian terms, are therefore "incoherent". Also, contrary to intuitive expectations, marginal rejection is found to imply evidence in favor of the auditee's stated balance. Asymptotically, an account balance which is rejected only marginally in a classical hypothesis test has an "objective" (not-dependent-on-prior) posterior probability arbitrarily close to one.

11 citations


Proceedings ArticleDOI
27 Jun 1994
TL;DR: In this article, the composite hypothesis test with an unknown parameter whose distribution is unknown is considered, and it is shown that the optimum test is the average likelihood ratio test with uniform f_θ.
Abstract: The composite hypothesis test for the case of an unknown distribution for an unknown parameter is considered, and it is shown that the optimum test is the average likelihood ratio test with uniform f_θ; it is equivalent to the uniformly most powerful (UMP) test, if there is a UMP test for the problem, and it performs essentially better than the generalised likelihood ratio test.

Journal ArticleDOI
TL;DR: In this article, the problem of testing which of two normally distributed treatments has the larger mean, when the tested populations incorporate a covariate, was considered, and an optimal allocation that minimizes, in a continuous-time setting, the expected sampling costs was derived.
Abstract: We consider the problem of testing which of two normally distributed treatments has the larger mean, when the tested populations incorporate a covariate. From the class of procedures using the invariant sequential probability ratio test we derive an optimal allocation that minimizes, in a continuous-time setting, the expected sampling costs. Simulations show that this procedure reduces the number of observations from the costlier treatment and categories while maintaining an overall sample size close to that of the “pairwise” procedure. A randomized trial example is given.

Book ChapterDOI
01 Jan 1994
TL;DR: The theory of sequential analysis was initiated by Wald during the 1940’s in response to problems of sampling inspection and more recent developments in sequential change-point detection and sequential clinical trials are described.
Abstract: The theory of sequential analysis was initiated by Wald during the 1940’s in response to problems of sampling inspection. Wald’s contributions are reviewed, and more recent developments in sequential change-point detection and sequential clinical trials are described. Particular attention is devoted to the application of random walk and renewal theory to improve Wald’s error probability and expected sample size approximations. The changing focus occasioned by the stimulus of new applications is discussed.
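The Wald approximations this chapter reviews are simple enough to state concretely: ignoring the excess over the boundaries, the stopping thresholds and expected sample numbers follow directly from the target error rates and the mean log-likelihood-ratio increment under each hypothesis. A sketch with our own function names (the renewal-theoretic corrections discussed above refine exactly these quantities):

```python
import math

def wald_boundaries(alpha, beta):
    """Wald's approximate SPRT log-boundaries for target error rates
    alpha (type I) and beta (type II); overshoot is ignored."""
    a = math.log(beta / (1 - alpha))   # stop and accept H0 when LLR <= a
    b = math.log((1 - beta) / alpha)   # stop and accept H1 when LLR >= b
    return a, b

def wald_asn(alpha, beta, drift0, drift1):
    """Wald's approximate expected sample numbers (E0[N], E1[N]), where
    drift_i = E_i[per-sample log-likelihood-ratio increment] and a
    well-posed test has drift0 < 0 < drift1."""
    a, b = wald_boundaries(alpha, beta)
    n0 = ((1 - alpha) * a + alpha * b) / drift0
    n1 = (beta * a + (1 - beta) * b) / drift1
    return n0, n1

# Example: Gaussian mean shift mu0 = 0 vs mu1 = 1 with sigma = 1, so the
# drifts are -0.5 and +0.5 (the Kullback-Leibler divergences).
print(wald_asn(0.01, 0.01, -0.5, 0.5))
```

For these symmetric error rates both expected sample numbers come out just above nine observations, versus roughly twice that for a fixed-sample test of comparable error rates, which is the kind of saving the review describes.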

Journal ArticleDOI
TL;DR: In this paper, approximate probabilities of error for Armitage's (1947) test are derived and a method of adjusting the error rates used to establish the decision boundaries in order to attain the nominal error rates is developed.
Abstract: Abraham Wald developed the Sequential Probability Ratio Test in the 1940's to perform simple vs. simple hypothesis tests that would control both Type I and Type II error rates. Some applications require a test of three hypotheses. In addition, to perform a simple vs. composite two-sided test, a three-hypothesis test with all hypotheses simple has been suggested. Methods have been proposed that will test three hypotheses sequentially. They range widely in simplicity and accuracy. In this paper, approximate probabilities of error for Armitage's (1947) test are derived. A method of adjusting the error rates used to establish the decision boundaries in order to attain the nominal error rates is developed. The procedure is compared to existing ones by Monte Carlo simulation.

Journal ArticleDOI
TL;DR: In this paper, the excess over the boundaries used in the test is approximated as a simple function of the parameter to be tested by using the condition of the test statistic immediately before the stopping time in normal and exponential cases.
Abstract: Since Wald developed the sequential probability ratio test, many studies have been done to approximate the characteristics of the test. One of the major efforts among them is to approximate the excess over the boundaries used in the test. In this paper the excess is approximated as a simple function of the parameter to be tested, by using the condition of the test statistic immediately before the stopping time, in the normal and exponential cases. The estimated excess performs well in estimating the operating characteristic function, the average sample number, and the probability mass function of the sample number. It also makes it possible to determine boundary values that give error probabilities close to the desired ones.

Journal ArticleDOI
TL;DR: In this article, the monotone likelihood ratio property of the gamma distribution is used to detect the change point in a reliability growth model, and Wald's Sequential Probability Ratio Test (SPRT) is applied to regulate the reliability of the gamma failure model.

01 Jan 1994
TL;DR: The variable-sample-size sequential probability ratio test is applied to the problem of sequential testing of a Gaussian mean and finds an optimal procedure that maximizes the expected net gain of sampling.
Abstract: Sequential sampling schemes have traditionally used ad hoc rules for sample size. The variable-sample-size sequential probability ratio test (VPRT), developed by Cressie and Morgan ( Proc. 4th Purdue Symp. on Decision Theory and Related Topics , IV Vol. 2, Springer, New York (1988), 107–118), generalizes the classical one-at-a-time and group-sequential procedures to an optimal procedure that maximizes the expected net gain of sampling, conditional on the accumulated observations on the stochastic process. In this paper, we apply the VPRT to the problem of sequential testing of a Gaussian mean.

Journal ArticleDOI
TL;DR: In this article, a repeated significance test on regression coefficients in a linear regression model is proposed for the sequential comparison of two medical treatments whose effectiveness is influenced by prognostic factors.
Abstract: In clinical trials we often need a sequential testing procedure for a difference between two medical treatments whose effectiveness is influenced by prognostic factors. This article considers a repeated significance test on regression coefficients in a linear regression model. We first derive approximations for the overall significance level and power of the test and compare our test with a fixed-sample test. We then discuss applications of these results to the sequential comparison of two treatments, and also discuss the effect of allocation rules on the behavior of the test statistics.


Journal ArticleDOI
01 Oct 1994
TL;DR: In this article, the authors studied the sequential properties of the stopping rule and the sequential estimator of q(ϑ1, ϑ2) under the assumption that the sample is type II censored.
Abstract: This paper deals with the sequential estimation of q(ϑ1, ϑ2) when the underlying density function is of the form f(x) = q(ϑ1, ϑ2)h(x), where ϑ1 and ϑ2 are unknown truncation parameters. We study the sequential properties of the stopping rule and the sequential estimator of q(ϑ1, ϑ2). In this study we assume that the sample is type II censored.

Journal ArticleDOI
TL;DR: This article considers the main decision-making methods proposed in the literature for Phase II studies in oncology; the value of using group sequential methods, and especially the Triangular Test, is confirmed by a comparative study of the statistical properties of the different methods.

Proceedings ArticleDOI
27 Jun 1994
TL;DR: In this article, a sequential multihypothesis test designed for use with nonuniform decision costs is presented and its performance is characterized, asymptotic efficiency of the test is shown, and the use of sequential multi-hypothesis tests in hybrid serial search schemes is discussed.
Abstract: A sequential multihypothesis test designed for use with nonuniform decision costs is presented and its performance is characterized. Asymptotic efficiency of the test is shown, and the use of sequential multihypothesis tests in hybrid serial search schemes is discussed. A generalisation of the M-ary sequential probability ratio test is considered for the decision making problem.

Journal ArticleDOI
TL;DR: The variable-sample-size sequential probability ratio test (VPRT) as mentioned in this paper generalizes the classical one-at-a-time and group-sequential procedures to an optimal procedure that maximizes the expected net gain of sampling.

Journal ArticleDOI
TL;DR: In this article, a sequential probability ratio test is developed to control the ratio of the expected values of the "up" and "down" times of equipment when these times are gamma random variables.