
Showing papers on "Sequential probability ratio test published in 1990"


Journal ArticleDOI
TL;DR: In this article, an ordering of the sample space based on the maximum likelihood estimate of the mean of a normal distribution with known variance is investigated, which results in estimates which compare favourably with estimates computed from orderings investigated by Tsiatis, Rosner & Mehta (1984) and Chang & O'Brien (1986) for a variety of group sequential designs.
Abstract: SUMMARY Parameter estimation techniques which fail to adjust for the interim analyses of group sequential test designs will introduce bias in much the same way that the repeated use of single sample hypothesis testing causes inflation of the type I error rate. Methods based on the duality of hypothesis testing and interval estimation require definition of an ordering for the outcome space for the test statistic. In this paper, estimation following a group sequential hypothesis test for the mean of a normal distribution with known variance is investigated. A proposed ordering of the sample space based on the maximum likelihood estimate of the mean is found to result in estimates which compare favourably with estimates computed from orderings investigated by Tsiatis, Rosner & Mehta (1984) and Chang & O'Brien (1986) for a variety of group sequential designs. The proposed ordering is then adapted for use when the sizes of the groups accrued between analyses are random.

197 citations


Journal ArticleDOI
TL;DR: The group sequential methods proposed by Jones and Whitehead, namely the triangular test and the discrete SPRT, were applied to the comparison of p with p0, with H0 and H1 expressed in terms of the log odds-ratio statistic log[p(1 - p0)/p0(1 - p)].
Abstract: Phase II cancer clinical trials are primarily designed to determine whether the response rate p to the treatment under study is greater than a specified value p0, that is to test the null hypothesis H0: p less than or equal to p0 against an alternative hypothesis H1: p greater than p0 specified by p = p1. As an alternative to the single and multistage procedures and to Wald's continuous sequential probability ratio test (SPRT), we applied the group sequential methods proposed by Jones and Whitehead, namely the triangular test (TT) and the discrete SPRT, to the comparison of p with p0, and we expressed H0 and H1 in terms of the log odds-ratio statistic log [p(1 - p0)/p0(1 - p)]. A simulation study showed that both the TT and the discrete SPRT had type I error and power close to the nominal values, and they compared favourably with multistage methods in terms of the average sample size.
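The fully sequential building block behind these designs can be sketched in a few lines. The code below is a minimal, illustrative Wald SPRT for a stream of binary responses, testing H0: p = p0 against H1: p = p1 with the classical boundary approximations A = (1 - β)/α and B = β/(1 - α); it is a sketch of the continuous SPRT the paper uses as a baseline, not the grouped triangular test itself.

```python
import math

def binomial_sprt(responses, p0, p1, alpha=0.05, beta=0.20):
    """Wald's SPRT on a stream of 0/1 responses for H0: p = p0 vs
    H1: p = p1. Returns (decision, number of observations used).
    Illustrative sketch; boundaries use Wald's approximations."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0                              # cumulative log-likelihood ratio
    for n, x in enumerate(responses, start=1):
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(responses)

# Ten consecutive responders cross the upper boundary after 3 observations:
decision, n = binomial_sprt([1] * 10, p0=0.1, p1=0.3)  # → ('accept H1', 3)
```

The boundaries guarantee the error rates only approximately (overshoot makes the actual α and β somewhat smaller than nominal), which is precisely why grouped corrections such as the triangular test are preferred in practice.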

44 citations


Journal ArticleDOI
TL;DR: The double triangular test, as mentioned in this paper, is a sequential procedure for testing the null hypothesis that a parameter is zero against the two-sided alternative that it is non-zero. It is devised to produce small samples when the parameter is large in magnitude, whether positive or negative, and also when it is close to zero; in intermediate situations larger samples are likely to be necessary.
Abstract: The double triangular test is a sequential procedure for testing the null hypothesis that a parameter is zero against the two-sided alternative that it is non-zero. The test is devised to produce small samples if the parameter is large in magnitude, whether positive or negative, and also to produce small samples if the parameter is close to zero. In intermediate situations larger samples are likely to be necessary. In this paper some new results concerning the theory of the double triangular test are presented. These are then used to explore in detail its properties, and how data collected using the test may be analysed. In particular, significance levels and point and interval estimates are considered.

20 citations


Journal ArticleDOI
TL;DR: In this article, three extant methods of adapting the length of computer-based mastery tests are described and compared: 1) the sequential probability ratio test (SPRT), 2) Bayesian use of the beta distribution, and 3) adaptive mastery testing based on item response theory (IRT).
Abstract: Three extant methods of adapting the length of computer-based mastery tests are described and compared: 1) the sequential probability ratio test (SPRT), 2) Bayesian use of the beta distribution, and 3) adaptive mastery testing based on item response theory (IRT). The utility of the SPRT has been empirically demonstrated by Frick [1]. Research has also demonstrated the effectiveness of use of the beta function in the Minnesota Adaptive Instructional System by Tennyson et al. [2]. Considerably more empirical research has been conducted on IRT-based approaches [3]. No empirical studies were found in which these three approaches have been directly compared. As a first step, computer simulations were undertaken to compare the accuracy and efficiency of these approaches in making mastery and nonmastery decisions. Results indicated that the IRT-based approach was more accurate when simulated examinee ability levels were clustered near the cut-off. On the other hand, when ability levels were more widely dispersed...
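Of the three approaches compared, the Bayesian beta-distribution method is the simplest to sketch. The snippet below is a hypothetical illustration, not Tennyson's Minnesota system: starting from a Beta(a, b) prior on the examinee's proficiency θ, each scored item updates the posterior, and testing stops once the posterior probability that θ exceeds the mastery cutoff (or falls below it) reaches a confidence threshold. The cutoff and confidence values are arbitrary example parameters. For integer shape parameters the beta CDF can be computed exactly from a binomial sum, so no external library is needed.

```python
from math import comb

def beta_cdf_int(c, a, b):
    """P(theta <= c) for Beta(a, b) with integer a, b >= 1, via the
    identity I_c(a, b) = P(Binomial(a + b - 1, c) >= a)."""
    n = a + b - 1
    return sum(comb(n, j) * c**j * (1 - c)**(n - j) for j in range(a, n + 1))

def mastery_decision(items, cutoff=0.7, conf=0.9, a=1, b=1):
    """Sequential Bayesian mastery test with a Beta(a, b) prior: after each
    0/1 item, declare mastery when P(theta > cutoff) >= conf, nonmastery
    when P(theta < cutoff) >= conf. Illustrative sketch only."""
    for n, x in enumerate(items, start=1):
        a, b = a + x, b + (1 - x)          # conjugate posterior update
        p_above = 1 - beta_cdf_int(cutoff, a, b)
        if p_above >= conf:
            return "mastery", n
        if 1 - p_above >= conf:
            return "nonmastery", n
    return "continue", len(items)

# A run of all-correct answers reaches a mastery decision after 6 items:
print(mastery_decision([1] * 10))  # → ('mastery', 6)
```

Unlike the SPRT, which needs two fixed hypothesis points, this approach adapts its stopping behaviour directly to the posterior around the cutoff, at the cost of a prior assumption.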

18 citations


Journal ArticleDOI
TL;DR: In this paper, the sequential probability ratio test was used to discriminate between two one-sided hypotheses and the maximum sample number was shown to occur when μ is approximately equal to the geometric mean of μo and μ1.
Abstract: Given an inverse Gaussian distribution I(μ, a²μ) with known coefficient of variation a, the hypothesis H0: μ = μ0 is tested against H1: μ = μ1 using the sequential probability ratio test. The maximum of the expected sample number is shown to occur when μ is approximately equal to the geometric mean of μ0 and μ1, and it is shown that this maximum value depends on μ0 and μ1 only through their ratio. It is observed that the test can be used to discriminate between two one-sided hypotheses.

9 citations


Journal ArticleDOI
TL;DR: Sequential sampling plans based on the negative binomial distribution were developed for egg mass density of spruce budworm, Choristoneura fumiferana (Clemens), in Michigan's Upper Peninsula, to give forest pest managers the flexibility to select a plan that best meets management objectives.
Abstract: Sequential sampling plans based on the negative binomial distribution were developed for egg mass density of spruce budworm, Choristoneura fumiferana (Clemens), in Michigan's Upper Peninsula. Plans developed were modifications of Wald's sequential probability ratio test (SPRT) based on Monte Carlo simulation. Parameters of the negative binomial distribution were estimated by the maximum likelihood method. Sampling models developed classify egg mass populations into low and high categories using the number of egg masses on whole branch samples of balsam fir, Abies balsamifera (L.) Voss. Plans are presented for three pairs of population density hypotheses to give forest pest managers the flexibility to select a plan that best meets management objectives. Monte Carlo estimates indicated that Wald's average sample number (ASN) and operating characteristic (OC) equations underestimated the actual ASN values, and the actual alpha and beta error probabilities were smaller than the values prescribed by Wald's equations. Errors were sufficiently large to have serious economic implications during implementation of plans developed from Wald's approximate procedure. The sequential plans developed by modifying the decision boundaries of Wald's SPRT using the Monte Carlo procedure had error probabilities, OC functions, and ASN values that were approximately equal to those desired.
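For negative binomial counts with a common dispersion parameter k, Wald's SPRT reduces to two parallel straight lines in the (sample number, cumulative count) plane, which is the classical form of such sequential sampling plans. The sketch below derives those unadjusted Wald lines from the low- and high-density means m0 and m1; it is a textbook illustration, not the Monte Carlo-adjusted boundaries developed in the paper, and the example parameter values are arbitrary.

```python
import math

def nb_sprt_lines(m0, m1, k, alpha=0.1, beta=0.1):
    """Wald SPRT decision lines for negative binomial counts with common
    dispersion k. Classify 'high' when the cumulative count T >= h1 + s*n,
    'low' when T <= h0 + s*n, else keep sampling. Unadjusted Wald sketch."""
    lnR = math.log(m1 * (k + m0) / (m0 * (k + m1)))  # per-count LLR weight
    s = k * math.log((k + m1) / (k + m0)) / lnR      # common slope
    h1 = math.log((1 - beta) / alpha) / lnR          # upper intercept
    h0 = math.log(beta / (1 - alpha)) / lnR          # lower intercept
    return h0, h1, s

def classify(counts, m0, m1, k, alpha=0.1, beta=0.1):
    """Run the sequential classification over successive sample counts."""
    h0, h1, s = nb_sprt_lines(m0, m1, k, alpha, beta)
    total = 0
    for n, x in enumerate(counts, start=1):
        total += x
        if total >= h1 + s * n:
            return "high", n
        if total <= h0 + s * n:
            return "low", n
    return "continue", len(counts)

# Example with m0=1, m1=4, k=2: five empty branches classify as low density.
print(classify([0] * 5, m0=1, m1=4, k=2))  # → ('low', 2)
```

The paper's finding that these Wald lines misstate the actual ASN and error rates for negative binomial data is exactly why the published plans shift the intercepts by simulation before field use.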

6 citations


Journal ArticleDOI
TL;DR: In this paper, a method for decomposing many sequential probability ratio tests into smaller independent components called "modules" is presented, and it is shown how complex cases can be analysed numerically using this method.
Abstract: Summary This paper gives a method for decomposing many sequential probability ratio tests into smaller independent components called "modules". A function of some characteristics of modules can be used to determine the asymptotically most efficient of a set of statistical tests in which α, the probability of type I error, equals β, the probability of type II error. The same test is seen also to give the asymptotically most efficient of the corresponding set of tests in which α is not equal to β. The "module" method is used to give an explanation for the super-efficiency of the play-the-winner and play-the-loser rules in two-sample binomial sampling. An example showing how complex cases can be analysed numerically using this method is also given.

5 citations


Journal ArticleDOI
TL;DR: In this paper, the authors apply probability ratio tests to the problem of fault detection and diagnosis in a chemical plant, where the alternative hypotheses correspond to level sensor bias, leakage and valve malfunction in a stock tank and fouling in a heat exchanger.

1 citation