
Showing papers on "Sequential probability ratio test published in 1997"


Journal ArticleDOI
TL;DR: An adaptive neural fuzzy inference system modeling technique is introduced for sensor and associated instrument channel calibration validation and sensor fault detection, and a statistical decision technique known as the sequential probability ratio test is used to detect sensor anomalies.
Abstract: An adaptive neural fuzzy inference system modeling technique is introduced for sensor and associated instrument channel calibration validation. This method uses an inferential-modeling technique after a genetic algorithm search is used to empirically determine the appropriate combinations of input variables to optimally model each signal to be monitored. These variables are used as input to a fuzzy inference system that is trained to estimate the monitored signals. The estimates are compared with the actual signals, and a statistical decision technique known as the sequential probability ratio test is used to detect sensor anomalies. The sensor fault detection system is demonstrated using data supplied from Florida Power Corporation's Crystal River Unit 3 nuclear power generating station.
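The residual-monitoring step described above can be illustrated with a minimal Wald SPRT applied to the difference between a model estimate and the measured signal. This is a generic sketch assuming zero-mean Gaussian residuals with known standard deviation and a hypothesized fault offset; the function name and parameter values are illustrative and not taken from the paper.

```python
import math

def sprt_residual_monitor(residuals, sigma=1.0, fault_mean=2.0,
                          alpha=0.01, beta=0.01):
    """Wald SPRT on a stream of residuals (model estimate - measurement).

    H0: residuals ~ N(0, sigma^2)          (sensor healthy)
    H1: residuals ~ N(fault_mean, sigma^2) (sensor drifted)
    Returns ('H0' or 'H1', samples used), or (None, n) if undecided.
    """
    upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, r in enumerate(residuals, start=1):
        # log-likelihood-ratio increment for a Gaussian mean-shift alternative
        llr += (fault_mean / sigma**2) * (r - fault_mean / 2.0)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return None, len(residuals)
```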

64 citations


01 Jan 1997
TL;DR: In this paper, a unified treatment of a variety of optimal stopping problems in sequential testing theory is given by introducing a general class of loss functions and prior distributions.
Abstract: After a brief survey of a variety of optimal stopping problems in sequential testing theory, we give a unified treatment of these problems by introducing a general class of loss functions and prior distributions. In the context of a one-parameter exponential family, this unified treatment leads to relatively simple sequential tests involving generalized likelihood ratio statistics or mixture likelihood ratio statistics. The latter have been used by Robbins in his development of power-one tests, whose optimality properties are also discussed in this connection. Probability theory began with efforts to calculate the odds and to develop strategies in games of chance. Optimal stopping problems arose naturally in this context, determining when one should stop playing a sequence of games to maximize one's expected fortune. A systematic theory of optimal stopping emerged with the seminal papers of Wald and Wolfowitz (1948) and Arrow, Blackwell and Girshick (1949) on the optimality of the sequential probability ratio test (SPRT). The monographs by Chow, Robbins and Siegmund (1971), Chernoff (1972) and Shiryayev (1978) provide comprehensive treatments of optimal stopping theory, which has subsequently developed into an important branch of stochastic control theory. The subject of sequential hypothesis testing has also developed far beyond its original setting of a simple null versus a simple alternative hypothesis assumed by the SPRT. Although it is not difficult to formulate optimal stopping problems associated with optimal tests of composite hypotheses, these optimal stopping problems no longer have explicit solutions that are easily interpretable as in the case of the SPRT. Moreover, numerical solutions of the optimal stopping problems require precise specification of prior distributions, loss functions for wrong decisions and sampling costs, which may be difficult to come up with in practice. In Sections 2 and 3 we develop an asymptotic approach to solve approximately a general class of optimal stopping problems associated with sequential tests of composite hypotheses. The asymptotic solutions provide natural
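For reference, the simple-vs-simple SPRT that anchors this discussion continues sampling while the likelihood ratio stays between two constants, with Wald's classical threshold approximations; the mixture (power-one) variant integrates the likelihood ratio over a prior. These are textbook formulas, stated here as background rather than results of this paper.

```latex
\[
\Lambda_n=\prod_{i=1}^{n}\frac{f_{\theta_1}(X_i)}{f_{\theta_0}(X_i)},\qquad
\text{continue while } B<\Lambda_n<A,\qquad
A\approx\frac{1-\beta}{\alpha},\;\; B\approx\frac{\beta}{1-\alpha};
\]
\[
\bar\Lambda_n=\int\prod_{i=1}^{n}\frac{f_{\theta}(X_i)}{f_{\theta_0}(X_i)}\,d\pi(\theta)
\quad\text{(mixture likelihood ratio used in power-one tests).}
\]
```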

42 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the new classification algorithm based on the sequential probability ratio test (SPRT) has several important merits, including ease of error rate control, lower computational complexity and lower decision delay.

24 citations


Journal ArticleDOI
TL;DR: A modified Wald's test is proposed that is valid in small to moderate samples and maintains good power; the performance of McNemar's test, Wald's test, the large-sample likelihood ratio test, and an exact test of the equality of correlated binomial proportions is also evaluated.
Abstract: With measurements taken on subjects over time, on matched pairs of subjects or on clusters of subjects, the data often contain pairs of correlated dichotomous responses. McNemar's test is perhaps the best known test to compare two correlated binomial proportions. The salient feature of McNemar's test is that we compute the variance of the contrast estimator under the restriction that the null hypothesis is true. Wald's test, on the other hand, does not require that restriction. As a consequence, Wald's statistic is always greater in magnitude than McNemar's statistic when the marginal proportions are unequal, but there is a problem with the validity of both McNemar's test and Wald's test with small to moderate samples. There have been various modifications suggested for McNemar's test to improve its performance. We propose a modified Wald's test that is valid in small to moderate samples and maintains good power. We also evaluate the performance of McNemar's test and Wald's test with and without modifications to enhance validity as well as the performance of the large sample likelihood ratio test and an exact test of the equality of correlated binomial proportions. In a smaller study, we compare the behaviour of a test based on the James-Stein estimator of the common odds ratio proposed by Liang and Zeger to McNemar's test and Wald's test.
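The contrast between the two statistics can be made concrete with paired binary data summarized by the discordant counts b and c out of n pairs. The sketch below computes McNemar's statistic (variance estimated under the null) and the unrestricted Wald statistic; it is a generic illustration of the standard forms, not the paper's modified test.

```python
def mcnemar_statistic(b, c):
    """McNemar's chi-square: variance of (p1 - p2) estimated under H0."""
    return (b - c) ** 2 / (b + c)

def wald_statistic(b, c, n):
    """Wald chi-square for correlated proportions, variance unrestricted.

    p1 - p2 is estimated by (b - c)/n; its variance estimate is
    [b + c - (b - c)^2 / n] / n^2, which gives the statistic below.
    """
    return (b - c) ** 2 / (b + c - (b - c) ** 2 / n)

# Example: 100 matched pairs, 20 discordant one way and 10 the other.
print(mcnemar_statistic(20, 10))    # 3.33...
print(wald_statistic(20, 10, 100))  # 3.45..., >= McNemar whenever b != c
```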

21 citations



Journal ArticleDOI
TL;DR: It is shown that the neural-network sequential detector can closely approximate the optimal SPRT with similar performance, and has the additional advantage that it is a nonparametric detector that does not require probability density functions.
Abstract: This paper proposes a novel neural-network method for sequential detection. We first examine the optimal parametric sequential probability ratio test (SPRT) and make a simple equivalent transformation of the SPRT that makes it suitable for neural-network architectures. We then discuss how neural networks can learn the SPRT decision functions from observation data and labels. Conventional supervised learning algorithms have difficulties handling the variable-length observation sequences, but a reinforcement learning algorithm, the temporal difference (TD) learning algorithm, works ideally in training the neural network. The entire neural network is composed of context units followed by a feedforward neural network. The context units are necessary to store dynamic information that is needed to make good decisions. For an appropriate neural-network architecture, trained with independent and identically distributed (iid) observations by the TD learning algorithm, we show that the neural-network sequential detector can closely approximate the optimal SPRT with similar performance. The neural-network sequential detector has the additional advantage that it is a nonparametric detector that does not require probability density functions. Simulations on iid Gaussian data show that the neural network and the SPRT have similar performance.
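The temporal-difference rule mentioned above is, in its generic TD(0) form, an incremental update of a value estimate toward a bootstrapped target; the learning rate η and discount γ are generic symbols, and the paper's exact targets and architecture are not reproduced here.

```latex
\[
V(s_t)\;\leftarrow\;V(s_t)+\eta\,\bigl[r_{t+1}+\gamma\,V(s_{t+1})-V(s_t)\bigr].
\]
```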

19 citations


Proceedings ArticleDOI
10 Dec 1997
TL;DR: This paper uses Wald's approximation, with a modification regarding the threshold overshoot, to predict the performance of the test, namely the average run length (ARL) between false alarms, T, and the ARL to detection, D, and shows that T is asymptotically exponential in D, as in the i.i.d. case.
Abstract: Page's test is optimal in quickly detecting distributional changes among independent observations. In this paper we propose a similar procedure for the quickest detection of dependent signals which can be conveniently modeled as hidden Markov models. Considering Page's test as a repeated sequential probability ratio test, we use Wald's approximation, with a modification regarding the threshold overshoot, to predict the performance of the test, namely the average run length (ARL) between false alarms, T. Using the asymptotic convergence property of the test statistic, we are also able to predict the ARL to detection, D. The analysis shows that T is asymptotically exponential in D, as in the i.i.d. case. The results are supported by numerical examples.
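Page's test viewed as a repeated SPRT reduces to the familiar CUSUM recursion: accumulate log-likelihood-ratio increments, reset at zero, and alarm when a threshold h is crossed. The sketch below uses a Gaussian mean-shift increment as a stand-in for the HMM-based statistic analysed in the paper; the observation model and parameter values are assumptions for illustration only.

```python
def page_cusum(observations, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0):
    """Page's CUSUM test for a mean shift from mu0 to mu1 in Gaussian noise.

    Returns the index at which an alarm is raised, or None if no alarm.
    """
    s = 0.0
    for n, x in enumerate(observations, start=1):
        # log-likelihood-ratio increment of N(mu1, sigma^2) vs N(mu0, sigma^2)
        llr = ((mu1 - mu0) / sigma**2) * (x - (mu0 + mu1) / 2.0)
        s = max(0.0, s + llr)   # repeated SPRT: restart whenever s hits 0
        if s >= h:
            return n
    return None
```

Under the usual Wald-style approximations, the false-alarm ARL grows roughly exponentially in the threshold h while the detection delay grows roughly linearly in h, which is the T-versus-D relationship discussed in the abstract.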

19 citations


Journal ArticleDOI
TL;DR: In this paper, a modified Wald test was applied to regressions with MA(1) disturbances, and the Monte Carlo results show that the modified Wald tests always have monotonically increasing power functions, in contrast to the traditional Wald test.

17 citations


Journal ArticleDOI
TL;DR: In this article, the joint distribution of the sequence of estimates of the parameter vector θ in a normal general linear model when data accumulate over a series of analyses is derived, even when observations are correlated.
Abstract: We derive the joint distribution of the sequence of estimates of the parameter vector θ in a normal general linear model when data accumulate over a series of analyses. This sequence of estimates has a remarkably simple covariance structure, even when observations are correlated, allowing standard group sequential tests to be applied in very general settings. If the observations' variances and covariances depend on an unknown scale factor σ2, the joint distribution of the sequence of estimates of (θ, σ2) has a simple form, again even in the case of correlated observations. From these results, we establish a general treatment of group sequential t-, χ2- and F-tests.
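The "remarkably simple covariance structure" referred to here is, in the standard group-sequential setting, the independent-increments property of the successive estimates, usually quoted in the following form (stated here as background, not as a derivation from this paper):

```latex
\[
\operatorname{Cov}\!\bigl(\hat\theta_{k_1},\hat\theta_{k_2}\bigr)=\operatorname{Var}\!\bigl(\hat\theta_{k_2}\bigr)
\qquad\text{for analyses } k_1\le k_2 ,
\]
```

so that the increments of the corresponding score statistics are independent and standard group sequential boundaries apply.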

17 citations


Journal ArticleDOI
TL;DR: Exact, numerically stable solutions to certain delay-differential equations (DDEs) are developed here, and their asymptotic properties are explored.
Abstract: The problem of performance computation for sequential tests between Poisson processes is considered. The average sample numbers and error probabilities of the sequential probability ratio test (SPRT) between two homogeneous Poisson processes are known to solve certain delay-differential equations (DDEs). Exact, numerically stable solutions to these DDEs are developed here, and their asymptotic properties are explored. These solutions are seen to be superior to earlier solutions of Dvoretzky, Kiefer, and Wolfowitz (1953), which suffer from severe numerical instability in some ranges of parameters of interest in applications. The application of these results is illustrated in the problem of performance approximation for the cumulative sum (CUSUM) quickest detection procedure.
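For two homogeneous Poisson processes with rates λ0 and λ1 observed continuously, the log-likelihood ratio at time t depends only on the event count N_t, and the SPRT runs this process to its first exit from an interval; this standard form is the setting in which the delay-differential equations arise (the equations themselves are not reproduced here).

```latex
\[
\ell_t=N_t\log\frac{\lambda_1}{\lambda_0}-(\lambda_1-\lambda_0)\,t,
\qquad
T=\inf\{t:\ \ell_t\notin(\log B,\ \log A)\}.
\]
```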

17 citations


Journal ArticleDOI
TL;DR: The likelihood ratio test performs well even when the sample size is moderate and has the best performance among all the methods studied, whereas the score test does not seem to control the nominal significance level.
Abstract: We are often faced with the statistical problem of evaluating the effect of a treatment in the extreme of a population. This requires taking measurements on truncated random variables and, hence, it becomes necessary to take proper account of the effect of regression toward the mean. The usual statistical procedures are inappropriate for testing treatment effect in the presence of regression toward the mean. Likelihood ratio and score tests based on truncated distributions should provide valid statistical inferences in these situations. We conducted simulation studies to investigate the properties of these methods and found that the likelihood ratio test performs well even when the sample size is moderate, whereas the score test does not seem to control the nominal significance level. We compared the likelihood ratio test to a regression-based t-test, assuming the mean of the baseline distribution to be known, and found the likelihood ratio test more powerful. In the case where the baseline mean is unknown, we also investigated Wald's test and compared it with the likelihood ratio test and score test with respect to validity and power using simulation. Wald's test and the score test do not control the nominal significance level unless the sample size is extremely large. Overall, the likelihood ratio test has the best performance among all the methods studied. The proposed likelihood ratio test is illustrated using an example of a cholesterol study.

Journal ArticleDOI
TL;DR: In this article, the authors relax some of the conditions and show that there are sequential procedures that strictly dominate the sequential probability ratio test in all three senses, and that decision-makers are better served by looking for sequential procedures which possess the first two types of optimality.
Abstract: Wald and Wolfowitz (1948) have shown that the Sequential Probability Ratio Test (SPRT) for deciding between two simple hypotheses is, under very restrictive conditions, optimal in three attractive senses. First, it can be a Bayes-optimal rule. Second, of all level α tests having the same power, the test with the smallest joint-expected number of observations is the SPRT, where this expectation is taken jointly with respect to both data and prior over the two hypotheses. Third, the level α test needing the fewest conditional-expected number of observations is the SPRT, where this expectation is now taken with respect to the data conditional on either hypothesis being true. Principal among the strong restrictions is that sampling can proceed only in a one-at-a-time manner. In this paper, we relax some of the conditions and show that there are sequential procedures that strictly dominate the SPRT in all three senses. We conclude that the third type of optimality occurs rarely and that decision-makers are better served by looking for sequential procedures that possess the first two types of optimality. By relaxing the one-at-a-time sampling restriction, we obtain optimal (in the first two senses) variable-sample-size sequential probability ratio tests.

Journal ArticleDOI
TL;DR: In this article, a sensitivity analysis to shape parameter mis-specification is recommended before any specific test is implemented, and it is doubtful that the shape parameter may be estimated with enough precision to successfully implement these procedures.

Journal ArticleDOI
TL;DR: A method is presented, the minimax method, that can be used to select an SPRT which is optimal in testing the null hypothesis θ = θ0 against the composite alternative hypothesis θ ≠ θ0 for three monitoring systems, namely a system consisting of one sampling location with known mean and variance.
Abstract: Data provided by an environmental monitoring system are sampled successively. We propose to analyse such data by means of the sequential probability ratio test (SPRT) which is especially designed to analyse data which are sampled consecutively. We present a method, the minimax method, that can be used to select an SPRT which is optimal in testing the null hypothesis θ = θ0 against the composite alternative hypothesis θ ≠ θ0 for three monitoring systems, namely a system consisting of one sampling location with known mean and variance, a system consisting of one sampling location with unknown mean and variance and a system consisting of two sampling locations with unknown mean and covariance matrix. The latter test is applied to field data of the mallard. © 1997 by John Wiley & Sons, Ltd.

Proceedings ArticleDOI
17 Dec 1997
TL;DR: This work compares two hybrid active acquisition schemes, both using sequential tests at the testing stage; the system and channel models are presented, and the sequential decision rules and numerical results are given.
Abstract: Direct sequence spread spectrum (DS/SS) technology has been proven very useful in various mobile communication and global positioning systems. In order to exploit the advantages of a DS/SS signal the receiver must first be able to synchronize the local pseudonoise (PN) code with the received PN code. This is usually done in two steps: acquisition and tracking. First, the acquisition process coarsely aligns the two PN codes to within a fraction of a chip duration. The tracking circuit then takes over and performs fine adjustment until the desired accuracy is achieved. Hybrid acquisition schemes have been shown to provide a useful tradeoff between the low complexity of serial schemes and the high speed of parallel schemes. We compare two hybrid active schemes, both using sequential tests at the testing stage. The first scheme uses a test based on the M-ary sequential probability ratio test (MSPRT), while the second scheme uses a test based on the sequential probability ratio test (SPRT) that utilizes M SPRTs operating independently. The latter scheme can be viewed as a form of pipelining. Both coherent and noncoherent acquisition are considered. The system and channel models are presented, and the sequential decision rules and numerical results are given.
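A minimal picture of the sequential-test stage is a per-phase SPRT on noncoherent correlator energies: a candidate code phase is dismissed quickly when the accumulated log-likelihood ratio drops below the lower threshold, and acquisition is declared when it exceeds the upper one. The exponential model for the energies and all parameter values below are simplifying assumptions for illustration, not the models used in the paper.

```python
import math

def sprt_phase_test(energies, mu0=1.0, mu1=3.0, alpha=1e-3, beta=1e-2):
    """SPRT on correlator energies for one candidate PN code phase.

    Assumed model: energy ~ Exponential(mean mu0) when misaligned (H0)
    and Exponential(mean mu1) when aligned (H1).
    Returns 'acquire', 'dismiss', or 'undecided'.
    """
    upper = math.log((1.0 - beta) / alpha)
    lower = math.log(beta / (1.0 - alpha))
    llr = 0.0
    for z in energies:
        # log f1(z) - log f0(z) for the two exponential densities
        llr += math.log(mu0 / mu1) + z * (1.0 / mu0 - 1.0 / mu1)
        if llr <= lower:
            return "dismiss"   # move the serial search to the next phase
        if llr >= upper:
            return "acquire"   # hand over to the tracking loop
    return "undecided"
```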

Journal ArticleDOI
Aiyi Liu1
TL;DR: In this paper, the maximum likelihood estimate (MLE) of the drift of a Brownian motion following a symmetric sequential probability ratio test (SPRT) was shown to be asymptotically efficient when the boundary of the SPRT tends to infinity.
Abstract: Bias and variance are evaluated explicitly for the maximum likelihood estimate (MLE) of the drift of a Brownian motion following a symmetric sequential probability ratio test (SPRT). The MLE is shown to be asymptotically efficient when the boundary of the SPRT tends to infinity.
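For a Brownian motion W_t with drift θ and unit variance observed up to the SPRT stopping time T, the log-likelihood is θ W_T − θ²T/2, so the estimator whose bias and variance are evaluated is simply the following (a standard fact, stated here for context; the paper's contribution is the explicit bias and variance after the symmetric SPRT):

```latex
\[
\hat\theta_{\mathrm{MLE}}=\frac{W_T}{T},
\qquad\text{since the log-likelihood is }\ \theta\,W_T-\tfrac12\,\theta^{2}T .
\]
```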

Journal ArticleDOI
TL;DR: In this paper, a double specification for sequential tests based on the sequential probability ratio test, with risk levels α and β, an estimated centre, and a given tolerance t of roundness, is applied to determine the quality of roundness.
Abstract: Computer vision systems are the most versatile non-contact inspection systems. However, 100% scanning of discrete industrial workpieces for roundness inspection is time-consuming. As a result, the determination of the sample size for roundness inspection using computer vision inspection systems is a critical problem. The relationship between a (1−α) degree of confidence and a given estimated range of S²ERS, based on the chi-square distribution at different degrees of freedom, has been used to determine the sample size for estimating the centre of a circle. Sequential test methods reduce the number of samples required. A double specification for sequential tests based on the sequential probability ratio test, the risk levels α and β, the estimated centre, and a given tolerance t of roundness have been applied to determine the quality of roundness. The average number of sample pixels (ANSP) required to determine the quality of roundness has also been derived.

Journal ArticleDOI
TL;DR: In this paper, a sequential probability ratio test (SPRT), originally developed for testing against a simple hypothesis, is adapted to test against a minimal relevant trend (a composite hypothesis).
Abstract: Environmental monitoring data are collected successively in time. Therefore, it is natural to consider analyzing the data sequentially. We propose the use of a sequential probability ratio test (SPRT), developed to test against a simple hypothesis, to test against a minimal relevant trend θ1 (i.e., a composite hypothesis). Three refinements are introduced: the boundaries An and Bn are made funnel shaped, θ1 is replaced by θ if the norm of θ exceeds the norm of θ1, and observation is stopped as soon as a prescribed accuracy is attained. Simulation studies show that these refinements generalize the SPRT for testing against a composite hypothesis and improve the performance in terms of power and expected sample size of the test. The robustness of the adjusted SPRT against spatial and serial correlation is studied. We demonstrate that the test is robust against serial correlation between -.5 and +.5 and spatial correlation between -.5 and +.2. The use of the SPRT is illustrated with field data on three Tern species: the Sandwich Tern, the Arctic Tern, and the Common Tern.

01 Jan 1997
TL;DR: Automatic quality monitoring in robotised GMA welding using a repeated sequential probability ratio test method to improve quality monitoring and reduce uncertainty in the design and quality measurements.
Abstract: Automatic quality monitoring in robotised GMA welding using a repeated sequential probability ratio test method

Journal ArticleDOI
TL;DR: The Bayesian method for belief updating proposed in Racz (1996) is examined, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT).

01 Jan 1997
TL;DR: This paper concentrates on sequential inference, for the case of simple hypotheses and for the case of simple hypotheses with a nuisance parameter, proceeding from Wald's sequential probability ratio test (SPRT) and Cox's maximum likelihood SPRT for the two hypothesis cases above.
Abstract: In many clinical experiments there is a conflict between ethical demands to provide the best possible medical care for the patients and the statistician's desire to obtain an efficient experiment. Play-the-winner allocations are a group of designs that, during the experiment, tend to place more patients on the treatment that seems to be better. Using a randomized play-the-winner allocation and making a suitable inference for the design is one suggestion for performing a reasonable experiment under the above-mentioned considerations. In this paper we concentrate on sequential inference, for the case of simple hypotheses and for the case of simple hypotheses with a nuisance parameter. The response to treatment is assumed to be dichotomous. We proceed from Wald's sequential probability ratio test, SPRT, and Cox's maximum likelihood SPRT, for the two hypothesis cases above.
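The randomized play-the-winner rule referred to above can be sketched as an urn scheme: draw a ball to choose the treatment, then add a ball favouring the treatment that just succeeded (or the other one after a failure). The RPW(1, 1) variant below is a common textbook form, given as an assumed illustration rather than the exact design studied in the paper.

```python
import random

def rpw_allocation(outcome_fn, n_patients, u=1, beta=1):
    """Randomized play-the-winner RPW(u, beta) urn allocation.

    outcome_fn(treatment) -> True for success, False for failure.
    Starts with u balls per treatment; adds beta balls of the successful
    treatment after a success, beta balls of the other after a failure.
    """
    urn = {"A": u, "B": u}
    assignments = []
    for _ in range(n_patients):
        total = urn["A"] + urn["B"]
        treatment = "A" if random.random() < urn["A"] / total else "B"
        success = outcome_fn(treatment)
        rewarded = treatment if success else ("B" if treatment == "A" else "A")
        urn[rewarded] += beta
        assignments.append((treatment, success))
    return assignments

# Example: treatment A succeeds 70% of the time, B 40% of the time.
sim = rpw_allocation(lambda t: random.random() < (0.7 if t == "A" else 0.4), 100)
```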

Proceedings ArticleDOI
29 Jun 1997
TL;DR: The design of sequential detection tests is considered for serial search problems in which more than one candidate hypothesis can be tested simultaneously and results show that the binary tests outperform the MSPRT in most cases.
Abstract: The design of sequential detection tests is considered for serial search problems in which more than one candidate hypothesis can be tested simultaneously. The use of the M-ary sequential probability ratio test (MSPRT) and multiple parallel binary sequential probability ratio tests are compared. Theoretical and numerical results show that the binary tests outperform the MSPRT in most cases.
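One standard (Bayesian) form of the M-ary SPRT, given here as background rather than as the exact rule analysed in the paper, tracks the posterior probabilities of the M hypotheses and stops as soon as one of them is sufficiently dominant:

```latex
\[
p_j(n)=\frac{\pi_j\prod_{i=1}^{n}f_j(X_i)}{\sum_{k=1}^{M}\pi_k\prod_{i=1}^{n}f_k(X_i)},
\qquad
N=\min\Bigl\{n:\ \max_j p_j(n)\ge\frac{1}{1+c}\Bigr\},
\]
```

accepting the hypothesis attaining the maximum at time N; per-hypothesis thresholds can be used when the error requirements differ across hypotheses.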

Proceedings ArticleDOI
29 Jun 1997
TL;DR: The performance of a modified code acquisition scheme, based on a two-stage sequential probability ratio test (SPRT), is studied; its hardware and computational load can be similar to that of the conventional single-stage acquisition scheme.
Abstract: The performance of a modified code acquisition scheme, based on two-stage sequential probability ratio test (SPRT), is studied. The first stage (i.e., testing stage) is used to test the sum of M (M>1) local PN codes with different phases. This stage can reject M code phases once for each non-synchronisation declaration, and thus reduce the code acquisition (search) time. Also, the hardware or computation load can be similar to the conventional single-stage acquisition scheme.

Journal ArticleDOI
TL;DR: In this article, a table of critical values for the multivariate Wald test is introduced so that the Wald test can be used to test hypotheses in the multivariate analysis of variance (MANOVA) model even with small samples.
Abstract: The Wald test can be applied to test all standard hypotheses in the univariate and multivariate ANOVA models, linear and log-linear multinomial models, linear and nonlinear regression models, and many other models by making minor changes to the test statistic. Because of its simplicity and generality, the Wald test has great practical and pedagogical appeal. However, in a number of cases including the multivariate analysis of variance model (MANOVA), the Wald test can only be used with very large samples because its exact distribution is unknown and approximations are highly inaccurate in the multivariate case. The Bartlett-Nanda-Pillai Trace (BNPT), Lawley-Hotelling Trace (LHT), Wilks' Lambda (WL), and Roy's Maximum Root (RMR) also can be used to test hypotheses in the MANOVA model. But unlike the multivariate Wald test, tables of critical values have been developed for these statistics so that they can be used in small samples. A table of critical values for the multivariate Wald test is introduced here that ...
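The "minor changes to the test statistic" amount to plugging the relevant estimator and its estimated covariance into the general quadratic form below, which under the null hypothesis is asymptotically chi-square with q degrees of freedom (q = number of restrictions). This is the textbook Wald statistic, stated here for context rather than the paper's MANOVA-specific version.

```latex
\[
W=\bigl(R\hat\theta-r\bigr)^{\top}\bigl[R\,\widehat{\operatorname{Var}}(\hat\theta)\,R^{\top}\bigr]^{-1}\bigl(R\hat\theta-r\bigr)
\;\xrightarrow{\;d\;}\;\chi^2_{q}
\quad\text{under } H_0:\ R\theta=r .
\]
```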

01 Jan 1997
TL;DR: In this article, the problem of automatically monitoring weld quality when welding with Gas Metal Arc (GMA) in short-circuiting mode is dealt with, using a simple statistical change detection algorithm, the repeated sequential probability ratio test (SPRT).
Abstract: This paper deals with the problem of automatically monitoring the weld quality when welding with Gas Metal Arc (GMA) in short circuiting mode. Experiments with two different types of T-joints are performed in order to provoke optimal and non-optimal welding conditions. During the experiments, voltage and current are measured from the welding process. A simple statistical change detection algorithm for the weld quality, the repeated Sequential Probability Ratio Test (SPRT), is used. The algorithm can equivalently be viewed as a cumulative sum (CUSUM)-type test. The test statistic is based upon the fluctuations of amplitude in the weld voltage. It is shown that the fluctuations of the weld voltage amplitude decrease when the welding process is not operating under optimal conditions. The results obtained from the experiments indicate that it is possible to detect changes in the weld quality automatically and on-line.

Book ChapterDOI
E. Torgersen
01 Jan 1997
TL;DR: In this article, the authors consider the problem of sequentially testing the null hypothesis θ = 0 against the alternative θ = 1 on the basis of i.i.d. potentially observable variables.
Abstract: Consider the problem of testing sequentially the null hypothesis "θ = 0" against the alternative "θ = 1" on the basis of i.i.d. potentially observable variables X1, X2,…. Let N be a stopping rule admitting a test based on (X1,…, XN) having probabilities of errors α0 and α1. Then the Hellinger transform of (X1,…, XN) is at most equal to that of (X1,…, XN*), where N* is the stopping rule of a sequential probability ratio test having the same probabilities of errors. In particular, the Hellinger distance between the distributions of (X1,…, XN) under θ = 0 and θ = 1 is at least equal to the same distance for (X1,…, XN*). This remains so if the Hellinger distance is replaced by the statistical distance, provided the number 1 is not outside the stopping bounds.
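For reference, the Hellinger transform of order α of two probability measures P0 and P1, and the corresponding Hellinger distance (in one common normalization), are defined as follows; these are standard definitions, not results specific to this chapter.

```latex
\[
H_\alpha(P_0,P_1)=\int (dP_0)^{\alpha}(dP_1)^{1-\alpha},
\qquad
d_H^{2}(P_0,P_1)=\tfrac12\int\bigl(\sqrt{dP_0}-\sqrt{dP_1}\bigr)^{2}=1-H_{1/2}(P_0,P_1).
\]
```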

Journal ArticleDOI
TL;DR: For clinical trials with interim analyses, methodologies and software exist to calculate boundaries for comparing binomial, normal, and survival data from two treatment groups, as mentioned in this paper.
Abstract: For clinical trials with interim analyses, there have been methodologies and software to calculate boundaries for comparing binomial, normal, and survival data from two treatment groups. Jennison & Turnbull (1991) extended the Pocock (1977) and O'Brien-Fleming (1979) boundaries to t-tests, χ2-tests and F-tests for comparing normal data from several treatment groups. This paper demonstrates that the above boundaries can be applied to a wide variety of test statistics based on general parametric settings. We show that asymptotically the χ2 boundaries as well as the corresponding nominal significance levels calculated by Jennison & Turnbull can be applied to interim analyses based on the score test, the Wald test, and the likelihood ratio test for general parametric models. Based on the results of this paper, currently available software in group sequential testing can be used to calculate the nominal significance levels (or boundaries) for group sequential testing based on logistic regression, ANOVA, and ot...

Proceedings ArticleDOI
01 Jan 1997
TL;DR: In this article, the authors consider hypothesis testing with a finite memory when the likelihood ratio is semi-bounded and show that a 2-state memory can achieve correct convergence with probability one under one hypothesis but only in probability under the other hypothesis.
Abstract: We consider hypothesis testing with a finite memory when the likelihood ratio is semi-bounded. It is shown that a 2-state memory can achieve correct convergence with probability one under one hypothesis but only in probability under the other hypothesis.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss group sequential procedures for comparing two treatments based on multivariate observations in clinical trials and propose a group sequential χ2 statistic in order to carry out a repeated significance test for the hypothesis of no difference between two population mean vectors.
Abstract: In this study we discuss group sequential procedures for comparing two treatments based on multivariate observations in clinical trials. We suppose that the response vector on each of the two treatments has a multivariate normal distribution with unknown covariance matrix. We then propose a group sequential χ2 statistic in order to carry out a repeated significance test for the hypothesis of no difference between the two population mean vectors. In order to realize a group sequential test with a reduced average sample number, we propose another, modified group sequential χ2 statistic by extending Jennison and Turnbull (1991). After constructing repeated confidence boundaries for the repeated significance test, we compare the two group sequential procedures based on the two statistics with respect to the average sample number and the power of the test in simulations.