
Showing papers on "Sequential probability ratio test published in 1981"


Journal ArticleDOI
TL;DR: In this article, it is shown that the optimum property of Wald's SPRT for testing simple hypotheses based on i.i.d. observations can be extended to invariant SPRTs like the sequential $t$-test, the Savage-Sethuraman sequential rank-order test, etc.
Abstract: It is well known that Wald's SPRT for testing simple hypotheses based on i.i.d. observations minimizes the expected sample size both under the null and under the alternative hypotheses among all tests with the same or smaller error probabilities and with finite expected sample sizes under the two hypotheses. In this paper it is shown that this optimum property can be extended, at least asymptotically as the error probabilities tend to 0, to invariant SPRTs like the sequential $t$-test, the Savage-Sethuraman sequential rank-order test, etc. In fact, not only do these invariant SPRTs asymptotically minimize the expected sample size, but they also asymptotically minimize all the moments of the sample size distribution among all invariant tests with the same or smaller error probabilities. Modifications of these invariant SPRTs to asymptotically minimize the moments of the sample size at an intermediate parameter are also considered.

111 citations
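The optimality result above concerns Wald's classical SPRT, whose mechanics are easy to sketch: accumulate the log-likelihood ratio observation by observation and stop at the first crossing of Wald's approximate thresholds. The sketch below is illustrative, not taken from the paper; the function names are assumptions.

```python
import math

def sprt(samples, llr, alpha=0.05, beta=0.05):
    """Wald's SPRT for simple H0 vs H1 on i.i.d. observations.

    llr(x) is the per-observation log-likelihood ratio
    log f1(x)/f0(x).  Returns ("H0" or "H1" or None, n), where n is
    the number of observations used.  Thresholds use Wald's standard
    approximations A = (1-beta)/alpha, B = beta/(1-alpha)."""
    upper = math.log((1 - beta) / alpha)    # cross above: accept H1
    lower = math.log(beta / (1 - alpha))    # cross below: accept H0
    s = 0.0
    for n, x in enumerate(samples, 1):
        s += llr(x)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return None, len(samples)  # data exhausted before a decision
```

For example, testing N(0, 1) against N(1, 1) gives llr(x) = x - 0.5, so a stream of observations near 1 drives the sum to the upper boundary quickly.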


Journal ArticleDOI
TL;DR: In this article, the problem of testing statistical hypotheses in nonlinear regression models with inequality constraints on the parameters is considered, and it is shown that the distributions of the Kuhn-Tucker, the likelihood ratio and the Wald test statistics converge to the same mixture of chi-square distributions under the null hypothesis.

38 citations


Journal ArticleDOI
TL;DR: The use of composite null and alternative hypotheses in sequential clinical trials is explored, and it is shown that the SPRT and the Bayes formulations using Bayes odds ratios are equivalent in terms of the weighted likelihood ratio.
Abstract: Sequential methods have become increasingly important for the monitoring of patient safety during clinical trials. However, the typical Wald sequential probability ratio test (SPRT), which compares two simple hypotheses, often presents anomalies which can be attributed to an inadequate representation of the parameter space. The use of composite null and alternative hypotheses in sequential clinical trials is explored and the resulting sequential rules are examined. It is shown that the SPRT and the Bayes formulations using Bayes odds ratios are equivalent in terms of the weighted likelihood ratio (WLR). The WLR is obtained for normal variates when the null hypothesis restricts the mean to (i) an interval and (ii) a point, in each case with complementary alternatives, as well as the one-sided formulation with a half-open interval. Applications to clinical trials include large-sample procedures, the comparative binomial trial and the comparison of survival distributions. Illustrative sequential boundaries are presented and the features of these different formulations are compared and discussed. Mixed sequential rules are considered within the framework for ethical stopping rules proposed by Meier (1979, Clinical Pharmacology and Therapeutics 25, 633--640).

16 citations



Journal ArticleDOI
TL;DR: For given (small) α and β, a sequential confidence set that covers the true parameter point with probability at least 1 - α and covers one or more specified false parameter points with probability at most β can be generated by a family of sequential tests.
Abstract: For given (small) α and β, a sequential confidence set that covers the true parameter point with probability at least 1 - α and one or more specified false parameter points with probability at most β can be generated by a family of sequential tests. Several situations are described where this approach would be a natural one. The following example is studied in some detail: obtain an upper (1 - α)-confidence interval for a normal mean μ (variance known) with β-protection at μ - δ(μ), where δ(·) is not bounded away from 0, so that a truly sequential procedure is mandatory. Some numerical results are presented for intervals generated by (1) sequential probability ratio tests (SPRT's) and (2) generalized sequential probability ratio tests (GSPRT's). These results indicate the superiority of the GSPRT-generated intervals over the SPRT-generated ones if expected sample size is taken as the performance criterion.

12 citations
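The idea of generating a confidence set from a family of sequential tests can be sketched as follows: each candidate parameter value runs its own SPRT against the incoming data, and the confidence set at any stage is the set of values not yet rejected. This is only an illustrative sketch, not the paper's procedure; the function name, the single upper boundary, and the normal-mean example below are assumptions.

```python
import math

def sprt_confidence_set(samples, thetas, llr, alpha=0.05, beta=0.05):
    """Confidence set generated by a family of SPRTs.

    For each candidate value t in thetas, llr(x, t) is the
    per-observation log-likelihood ratio of evidence against t.
    A value is dropped once its accumulated score crosses the
    Wald upper boundary; the survivors form the confidence set."""
    upper = math.log((1 - beta) / alpha)
    scores = {t: 0.0 for t in thetas}
    alive = set(thetas)
    for x in samples:
        for t in list(alive):
            scores[t] += llr(x, t)
            if scores[t] >= upper:   # strong evidence against t
                alive.discard(t)
    return alive
```

For a normal mean with unit variance, llr(x, t) = (x - t) - 0.5 tests N(t, 1) against N(t+1, 1), so candidate values far below the data are rejected quickly while plausible values survive.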



Proceedings ArticleDOI
01 Jan 1981
TL;DR: In this article, the authors present an algorithm to detect and isolate the first failure of any one of twelve duplex control sensor signals being monitored using like-signal differences for fault detection while relying upon analytic redundancy relationships among unlike quantities.
Abstract: This paper reviews the formulation and flight test results of an algorithm to detect and isolate the first failure of any one of twelve duplex control sensor signals being monitored. The technique uses like-signal differences for fault detection while relying upon analytic redundancy relationships among unlike quantities to isolate the faulty sensor. The fault isolation logic utilizes the modified sequential probability ratio test, which explicitly accommodates the inevitable irreducible low frequency errors present in the analytic redundancy residuals. In addition, the algorithm uses sensor output self-test, which takes advantage of the duplex sensor structure by immediately removing a highly erratic sensor from control calculations and analytic redundancy relationships while awaiting a definitive fault isolation decision via analytic redundancy. This study represents a proof-of-concept demonstration of a methodology that can be applied to duplex or higher flight control sensor configurations and, in addition, can monitor the health of one simplex signal per analytic redundancy relationship.

9 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of sequentially testing composite, contiguous hypotheses where the risk function is a linear combination of the probability of error in the terminal decision and the expected sample size.
Abstract: Consider the problem of sequentially testing composite, contiguous hypotheses where the risk function is a linear combination of the probability of error in the terminal decision and the expected sample size. Assume that the common boundary of the closures of the null and the alternative hypothesis is compact. Observations are independent and identically distributed. We study properties of Bayes tests. One property is the exponential boundedness of the stopping time. Another property is continuity of the risk functions. The continuity property is used to establish complete class theorems as opposed to the essentially complete class theorems in Brown, Cohen and Strawderman.

6 citations


Journal ArticleDOI
A. Irle1
TL;DR: For a continuous time stochastic process with distribution P ϑ depending on a one-dimensional parameter ϑ the problem of sequentially testing ϑ = 0 against ϑ > 0 is treated in this paper.

6 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of sequentially testing a null hypothesis vs an alternative hypothesis when the risk function is a linear combination of probability of error in the terminal decision and expected sample size (i.e., constant cost per observation).
Abstract: Consider the problem of sequentially testing a null hypothesis vs an alternative hypothesis when the risk function is a linear combination of probability of error in the terminal decision and expected sample size (i.e., constant cost per observation). Assume that the parameter space is the union of null and alternative, the parameter space is convex, the intersection of null and alternative is empty, and the common boundary of the closures of null and alternative is nonempty and compact. Assume further that observations are drawn from a $p$-dimensional exponential family with an open $p$-dimensional parameter space. Sufficient conditions for Bayes tests to have bounded stopping times are given.

6 citations


Journal ArticleDOI
TL;DR: Results show that the ranking size m need not be very large in order that the performance of the proposed test be almost as good as sequential rank tests which require ranking of all the data.
Abstract: A class of nonparametric sequential tests is considered for testing a symmetric density under the hypothesis against a one-sided shift alternative. The test statistic at each observation is the sum of intermediate statistics obtained from the ranks within the most recent m observations, where m is a fixed ranking size. Excessive ranking of the data can be avoided with a proper choice of m, so that real-time implementation of the sequential rank test is feasible. Approximate expressions for the power and average sample number functions are given. Comparison with existing nonparametric tests is studied. Results show that the ranking size m need not be very large in order for the performance of the proposed test to be almost as good as that of sequential rank tests which require ranking of all the data.

Journal ArticleDOI
TL;DR: This paper presents a new diagnostic theory for the design of automated medical questioning equipment that tabulates the answers to questions into a form easily understood by physicians, enumerates data on doubtful diseases, and indicates pertinent medical examinations, and so may come to the aid of patients and physicians.
Abstract: A medical interview is a very important part of medical treatment since it is conducted when a patient is first admitted to a hospital and treatment is decided afterwards. However, these interviews are not always carried out in sufficient detail because physicians have very heavy work-loads. The development of automated medical questioning equipment which tabulates the answers to questions into a form easily understood by physicians, which enumerates data on doubtful diseases and which indicates pertinent medical examinations may come to the aid of patients and physicians. This paper presents a new diagnostic theory for the design of automated medical questioning equipment. Diagnostic theories can be classified into batch and sequential theories; the authors have investigated the sequential one, because decisions are made using minimal data. The techniques supporting this theory are multi-class recognition systems based on independently designed dual-class recognition systems and Wald's Sequential Probability Ratio Test. To discuss the properties inherent in the present theory, classification of three pattern classes was made. These were normal, hypertension and myocardial infarction classes of patients. The mean error probability of classification was found to be 3.08%.

Journal ArticleDOI
B.K. Ghosh1, S. Keith Lee1
TL;DR: In this article, the authors consider the problem of testing the equality of proportions in several multinomial populations and propose a repeated significance test, which is as powerful as the likelihood ratio test but requires fewer populations for sampling purposes.
Abstract: We consider the problem of testing the equality of proportions in several multinomial populations. The standard likelihood ratio procedure is modified to construct a repeated significance test. This test is as powerful as the likelihood ratio test, but it requires fewer populations for sampling purposes.

01 Aug 1981
TL;DR: The results of the study showed that the three-parameter logistic based procedure had higher decision consistency than the one-parameter based procedure when classifications were repeated after one week.
Abstract: This report describes a study comparing the classification results obtained from a one-parameter and three-parameter logistic based tailored testing procedure used in conjunction with Wald's sequential probability ratio test (SPRT). Eighty-eight college students were classified into four grade categories using achievement test results obtained from tailored testing procedures based on maximum information item selection and maximum likelihood ability estimation. Tests were terminated using the SPRT procedure. The results of the study showed that the three-parameter logistic based procedure had higher decision consistency than the one-parameter based procedure when classifications were repeated after one week. Both procedures required fewer items for classification into grade categories than a traditional test over the same material. The three-parameter procedure required the fewest items of all, using an average of 12 to 13 items to assign a grade. (Author)
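The combination of a logistic item response model with Wald's SPRT can be sketched as follows, assuming the one-parameter (Rasch) model: after each scored item the log-likelihood ratio of a "high" versus a "low" ability level is updated, and testing stops when a Wald boundary is crossed. The function and parameter names are illustrative, not taken from the report.

```python
import math

def rasch_sprt(responses, difficulties, theta0, theta1,
               alpha=0.05, beta=0.05):
    """Illustrative mastery SPRT under the Rasch model.

    responses are 0/1 item scores, difficulties the matching item
    parameters.  Accumulates the log-likelihood ratio of ability
    theta1 vs theta0 and stops at Wald's boundaries.  Returns
    ("above"/"below"/"undecided", number of items administered)."""
    def p(theta, b):
        # Rasch probability of a correct response
        return 1.0 / (1.0 + math.exp(-(theta - b)))
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    s = 0.0
    for n, (u, b) in enumerate(zip(responses, difficulties), 1):
        p1, p0 = p(theta1, b), p(theta0, b)
        s += math.log(p1 / p0) if u else math.log((1 - p1) / (1 - p0))
        if s >= upper:
            return "above", n
        if s <= lower:
            return "below", n
    return "undecided", len(responses)
```

With items of difficulty 0 and ability levels ±1, each correct answer contributes exactly log(σ(1)/σ(-1)) = 1 to the score, so a short run of consistent responses reaches a boundary after only a few items, matching the report's observation that the SPRT terminates with far fewer items than a fixed-length test.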

01 Mar 1981
TL;DR: In this article, the authors present an algorithm to detect and isolate the first failure of any one of twelve duplex control sensors being monitored, using like sensor output differences for fault detection while relying upon analytic redundancy relationships among unlike quantities to isolate the faulty sensor.
Abstract: The formulation and flight test results of an algorithm to detect and isolate the first failure of any one of twelve duplex control sensors being monitored are described. The technique uses like sensor output differences for fault detection while relying upon analytic redundancy relationships among unlike quantities to isolate the faulty sensor. The fault isolation logic utilizes the modified sequential probability ratio test, which explicitly accommodates the inevitable irreducible low frequency errors present in the analytic redundancy residuals. In addition, the algorithm uses sensor output self-test, which takes advantage of the duplex sensor structure by immediately removing a highly erratic sensor from control calculations and analytic redundancy relationships while awaiting a definitive fault isolation decision via analytic redundancy.

Proceedings Article
01 Jan 1981
TL;DR: The implementation of the variable dwell time algorithm as a sequential probability ratio test is developed and the performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.
Abstract: Pseudo noise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm, including a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.

Journal ArticleDOI
S. Tantaratana1
01 Dec 1981
TL;DR: In this paper, exact expressions for the power function and the average sample number function of a truncated nonparametric sequential test are obtained for absorbing boundaries and the probability of reaching a given position at each stage of a random walk.
Abstract: Explicit expressions for the probability of reaching a given position at each stage of a random walk are derived for the case of absorbing boundaries. Utilizing these results, exact expressions for the power function and the average sample number function of a truncated nonparametric sequential test are obtained. Some numerical examples are also given.
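The reaching probabilities described in this abstract can be computed exactly by forward recursion over the stages of the walk. A minimal sketch for a ±1 random walk with absorbing barriers (illustrative only; the paper derives closed-form expressions and handles general truncated sequential tests):

```python
def walk_probs(steps, p, lower, upper):
    """Exact position distribution of a +/-1 random walk with
    absorbing barriers at `lower` and `upper`.

    p is the probability of a +1 step.  Returns a list of dicts
    {position: probability}, one per stage; probability mass that
    hits a barrier stays there (absorption)."""
    dist = {0: 1.0}
    out = [dist]
    for _ in range(steps):
        nxt = {}
        for pos, pr in dist.items():
            if pos <= lower or pos >= upper:      # absorbed: stays put
                nxt[pos] = nxt.get(pos, 0.0) + pr
            else:                                 # still in play
                nxt[pos + 1] = nxt.get(pos + 1, 0.0) + pr * p
                nxt[pos - 1] = nxt.get(pos - 1, 0.0) + pr * (1 - p)
        dist = nxt
        out.append(dist)
    return out
```

Summing the mass absorbed at each barrier over the stages gives the power function of the corresponding truncated sequential test, and the stage-by-stage absorbed mass gives the stopping-time distribution from which the average sample number follows.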

Proceedings ArticleDOI
01 Apr 1981
TL;DR: The relative efficiency of a sequential detector is compared with its fixed sample size detector counterpart for signal levels which may be inconsistent with the assumed alternative.
Abstract: It is known that the sequential probability ratio test (SPRT) minimizes the average detection time among all tests for fixed type I and type II errors under the hypothesis and alternative. Often, in practical detection problems, the signal level is unknown during the decision interval. In this paper, the relative efficiency of a sequential detector is compared with its fixed sample size detector counterpart for signal levels which may be inconsistent with the assumed alternative. The results are applicable to detectors called partition detectors, which are designed based on knowledge of a set of quantiles and related functions from the unknown noise field, and guarantee distribution-free performance under the hypothesis. For the class of sequential detectors that have boundaries which are linear functions of the sample size, the operating characteristic function and average sample number (ASN) are derived.