
Showing papers on "Sequential probability ratio test published in 2002"


Book
01 Apr 2002
TL;DR: This book provides a comprehensive treatment of sequential methodology, from Wald's sequential probability ratio test and sequential tests for composite and nonparametric hypotheses to multistage and sequential estimation, selection of the best normal population, and sequential Bayesian estimation, together with data analyses and applications.
Abstract (table of contents):
Preface
Objectives, Coverage, and Hopes: Introduction; Back to the Origin; Recent Upturn and Positive Feelings; The Objectives; The Coverage; Aims and Scope; Final Thoughts
Why Sequential?: Introduction; Tests of Hypotheses; Estimation Problems; Selection and Ranking Problems; Computer Programs
Sequential Probability Ratio Test: Introduction; Termination and Determination of A and B; ASN Function and OC Function; Examples and Implementation; Auxiliary Results
Sequential Tests for Composite Hypotheses: Introduction; Test for the Variance; Test for the Mean; Test for the Correlation Coefficient; Test for the Gamma Shape Parameter; Two-Sample Problem: Comparing the Means; Auxiliary Results
Sequential Nonparametric Tests: Introduction; A Test for the Mean: Known Variance; A Test for the Mean: Unknown Variance; A Test for the Percentile; A Sign Test; Data Analyses and Conclusions
Estimation of the Mean of a Normal Population: Introduction; Fixed-Width Confidence Intervals; Bounded Risk Point Estimation; Minimum Risk Point Estimation; Some Selected Derivations
Location Estimation: Negative Exponential Distribution: Introduction; Fixed-Width Confidence Intervals; Minimum Risk Point Estimation; Selected Derivations
Point Estimation of the Mean of an Exponential Population: Introduction; Minimum Risk Estimation; Bounded Risk Estimation; Data Analyses and Conclusions; Other Selected Multistage Procedures; Some Selected Derivations
Fixed-Width Intervals from MLEs: Introduction; General Sequential Approach; General Accelerated Sequential Approach; Examples; Data Analyses and Conclusions; Some Selected Derivations
Distribution-Free Methods in Estimation: Introduction; Fixed-Width Confidence Intervals for the Mean; Minimum Risk Point Estimation for the Mean; Bounded Length Confidence Interval for the Median; Data Analyses and Conclusions; Other Selected Multistage Procedures; Some Selected Derivations
Multivariate Normal Mean Vector Estimation: Introduction; Fixed-Size Confidence Region: Σ = σ²H; Fixed-Size Confidence Region: Unknown Dispersion Matrix; Minimum Risk Point Estimation: Unknown Dispersion Matrix; Data Analyses and Conclusions; Other Selected Multistage Procedures; Some Selected Derivations
Estimation in a Linear Model: Introduction; Fixed-Size Confidence Region; Minimum Risk Point Estimation; Data Analyses and Conclusions; Other Selected Multistage Procedures; Some Selected Derivations
Estimating the Difference of Two Normal Means: Introduction; Fixed-Width Confidence Intervals; Minimum Risk Point Estimation; Other Selected Multistage Procedures; Some Selected Derivations
Selecting the Best Normal Population: Introduction; Indifference Zone Formulation; Two-Stage Procedure; Sequential Procedure; Data Analyses and Conclusions; Other Selected Multistage Procedures; Some Selected Derivations
Sequential Bayesian Estimation: Introduction; Selected Fixed Sample Size Concepts; Elementary Sequential Concepts; Data Analysis
Selected Applications: Introduction; Clinical Trials; Integrated Pest Management; Experimental Psychology: Cognition of Distance; A Problem from Horticulture; Other Contemporary Areas of Applications
Appendix: Selected Reviews, Tables, and Other Items: Introduction; Big O(.) and Little o(.); Some Probabilistic Notions and Results; A Glimpse at Nonlinear Renewal Theory; Abbreviations and Notation; Statistical Tables
References; Index. Exercises appear at the end of each chapter.
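As background for the SPRT material listed above (termination, the thresholds A and B, the ASN and OC functions), here is a minimal self-contained sketch of Wald's SPRT for two simple Gaussian hypotheses. The means, error probabilities, and function name are illustrative choices, not taken from the book.

```python
import numpy as np

def wald_sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1 (known sigma).

    Returns ('H0', 'H1', or 'undecided') and the number of samples used.
    """
    # Wald's approximate thresholds A and B derived from the error probabilities.
    log_A = np.log((1 - beta) / alpha)   # accept H1 when the log-LR reaches log_A
    log_B = np.log(beta / (1 - alpha))   # accept H0 when the log-LR drops to log_B
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood-ratio increment for one Gaussian observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= log_A:
            return "H1", n
        if llr <= log_B:
            return "H0", n
    return "undecided", len(samples)

rng = np.random.default_rng(0)
data = rng.normal(loc=0.5, scale=1.0, size=200)   # data actually drawn under H1
print(wald_sprt(data, mu0=0.0, mu1=0.5, sigma=1.0))
```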

120 citations


Patent
07 Jun 2002
TL;DR: A system and method for monitoring the condition of a monitored system is presented; it employs empirically derived distributions in the sequential probability ratio test (SPRT) to provide more accurate and sensitive alerts of impending faults, breakdowns, and process deviations.
Abstract: A system and method for monitoring a condition of a monitored system. Estimates of monitored parameters from a model of the system provide residual values that can be analyzed using a sequential probability ratio test (“SPRT”). The invention employs empirically derived distributions in the SPRT to provide more accurate and sensitive alerts of impending faults, breakdowns and process deviations. The distributions can be generated from piecewise continuous approximation or spline functions based on the actual distribution of residual data to provide improved computational performance. The distributions can be provided before monitoring, or can be updated and determined during monitoring in an adaptive fashion.
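The patent's key idea, running the SPRT on model residuals with empirically derived rather than assumed densities, can be sketched roughly as follows. The histogram-based density estimates, the fixed fault offset, and all names are illustrative assumptions, not the patented procedure (which also covers spline approximations and adaptive updating during monitoring).

```python
import numpy as np

def empirical_log_density(train, bins=50, eps=1e-6):
    """Piecewise-constant log-density estimated from training residuals."""
    hist, edges = np.histogram(train, bins=bins, density=True)
    def logpdf(x):
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(hist) - 1)
        return np.log(hist[idx] + eps)
    return logpdf

def empirical_sprt(residuals, logpdf_normal, logpdf_fault, alpha=0.01, beta=0.01):
    """SPRT on residuals using empirically estimated densities for each hypothesis."""
    log_A = np.log((1 - beta) / alpha)
    log_B = np.log(beta / (1 - alpha))
    llr = 0.0
    for n, r in enumerate(residuals, start=1):
        llr += logpdf_fault(r) - logpdf_normal(r)
        if llr >= log_A:
            return "fault", n
        if llr <= log_B:
            return "normal", n
    return "undecided", len(residuals)

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5000)      # residuals observed under normal operation
faulty = baseline + 0.8                    # hypothesized fault offset (illustrative)
decision = empirical_sprt(rng.normal(0.8, 1.0, 200),
                          empirical_log_density(baseline),
                          empirical_log_density(faulty))
print(decision)
```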

112 citations


Report SeriesDOI
TL;DR: In this article, the authors compare the finite sample performance of a range of tests of linear restrictions for linear panel data models estimated using Generalized Method of Moments (GMM), including standard asymptotic Wald tests based on one-step and two-step GMM estimators.
Abstract: We compare the finite sample performance of a range of tests of linear restrictions for linear panel data models estimated using Generalised Method of Moments (GMM). These include standard asymptotic Wald tests based on one-step and two-step GMM estimators; two bootstrapped versions of these Wald tests; a version of the two-step Wald test that uses a more accurate asymptotic approximation to the distribution of the estimator; the LM test; and three criterion-based tests that have recently been proposed. We consider both the AR(1) panel model, and a design with predetermined regressors. The corrected two-step Wald test performs similarly to the standard one-step Wald test, whilst the bootstrapped one-step Wald test, the LM test, and a simple criterion-difference test can provide more reliable finite sample inference in some cases.
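For reference, the standard (uncorrected) Wald statistic for testing linear restrictions R·β = r against an estimated coefficient vector and its covariance can be computed as below. This is the textbook form only, not the bootstrapped or finite-sample-corrected variants compared in the paper, and the numbers are illustrative.

```python
import numpy as np
from scipy import stats

def wald_test(beta_hat, cov_beta, R, r):
    """Wald test of H0: R @ beta = r. Returns (statistic, asymptotic p-value)."""
    diff = R @ beta_hat - r
    W = diff @ np.linalg.solve(R @ cov_beta @ R.T, diff)
    df = R.shape[0]                      # number of restrictions
    return W, stats.chi2.sf(W, df)

# Example: test that the second and third coefficients are jointly zero.
beta_hat = np.array([1.2, 0.05, -0.03])
cov_beta = np.diag([0.04, 0.01, 0.01])
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(wald_test(beta_hat, cov_beta, R, np.zeros(2)))
```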

62 citations


Proceedings Article
01 Jan 2002
TL;DR: A binary-hypothesis technique called the sequential probability ratio test (SPRT) provides optimal detection of change points for online surveillance of digitized signals and demonstrates the dual advantages of high sensitivity with good avoidance of false alarms.
Abstract: This paper presents a real-time machine learning technique that has been adapted from the field of statistical process control (SPC) to give early annunciation of incipient anomalies in signals and processes involving enterprise computing systems and associated networks. A binary-hypothesis technique called the sequential probability ratio test (SPRT) provides optimal detection of change points for online surveillance of digitized signals and demonstrates the dual advantages of high sensitivity with good avoidance of false alarms. SPRT-based systems are being developed for a variety of applications to enhance the reliability, availability, and serviceability of enterprise computing systems.
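A rough sketch of the kind of restarted SPRT surveillance loop described here, assuming a simple Gaussian mean-shift model for the monitored signal; the thresholds, parameters, and generator-style interface are illustrative, not the authors' implementation.

```python
import numpy as np

def sprt_monitor(stream, mu0, mu1, sigma, alpha=1e-4, beta=1e-4):
    """Continuously monitor a digitized signal with a restarted Gaussian SPRT.

    Yields (sample_index, 'alarm' or 'ok') each time a decision is reached;
    the statistic is then reset so surveillance continues indefinitely.
    """
    log_A = np.log((1 - beta) / alpha)
    log_B = np.log(beta / (1 - alpha))
    llr = 0.0
    for i, x in enumerate(stream):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= log_A:
            yield i, "alarm"      # degraded-mode hypothesis accepted
            llr = 0.0
        elif llr <= log_B:
            yield i, "ok"         # nominal hypothesis accepted
            llr = 0.0

rng = np.random.default_rng(2)
signal = np.concatenate([rng.normal(0, 1, 500), rng.normal(0.7, 1, 200)])
alarms = [i for i, d in sprt_monitor(signal, 0.0, 0.7, 1.0) if d == "alarm"]
print(alarms[:3])   # the first alarms should appear shortly after sample 500
```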

54 citations


ReportDOI
16 Feb 2002
TL;DR: The performance of the SPRT is improved by integrating extreme value statistics, which explicitly models behavior in the tails of the distribution of interest, into the SPRT, thereby improving the early identification of conditions that could lead to performance degradation and safety concerns.
Abstract: The primary objective of damage detection is to ascertain with confidence if damage is present or not within a structure of interest. In this study, a damage classification problem is cast in the context of the statistical pattern recognition paradigm. First, a time prediction model, called an autoregressive and autoregressive with exogenous inputs (AR-ARX) model, is fit to a vibration signal measured during a normal operating condition of the structure. When a new time signal is recorded from an unknown state of the system, the prediction errors are computed for the new data set using the time prediction model. When the structure undergoes structural degradation, it is expected that the prediction errors will increase for the damage case. Based on this premise, a damage classifier is constructed using a sequential hypothesis testing technique called the sequential probability ratio test (SPRT). The SPRT is one form of parametric statistical inference test, and the adoption of the SPRT to damage detection problems can improve the early identification of conditions that could lead to performance degradation and safety concerns. The sequential test assumes a probability distribution of the sample data, and a Gaussian distribution is often used. This assumption, however, might impose potentially misleading behavior on the extreme values of the data, i.e. those points in the tails of the distribution. As the problem of damage detection specifically focuses attention on the tails, the assumption of normality is likely to lead the analysis astray. To overcome this difficulty, the performance of the SPRT is improved by integrating extreme value statistics, which explicitly models behavior in the tails of the distribution of interest, into the SPRT.
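A greatly simplified sketch of the pipeline described here, fitting an autoregressive model to a baseline signal, computing prediction errors on new data, and running an SPRT on the residuals, is given below. The least-squares AR fit, the variance-shift hypotheses, and all numbers are illustrative assumptions; the report's AR-ARX model and extreme-value extension are not reproduced.

```python
import numpy as np

def fit_ar(x, order=4):
    """Least-squares AR(order) fit; returns the coefficient vector."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coeffs

def ar_residuals(x, coeffs):
    """One-step prediction errors of an AR model on a new signal."""
    order = len(coeffs)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    return x[order:] - X @ coeffs

def variance_sprt(residuals, sigma0, sigma1, alpha=0.01, beta=0.01):
    """SPRT for H0: residual std = sigma0 (undamaged) vs H1: std = sigma1 (damaged)."""
    log_A = np.log((1 - beta) / alpha)
    log_B = np.log(beta / (1 - alpha))
    llr = 0.0
    for n, r in enumerate(residuals, start=1):
        llr += np.log(sigma0 / sigma1) + r ** 2 * (1 / (2 * sigma0 ** 2) - 1 / (2 * sigma1 ** 2))
        if llr >= log_A:
            return "damaged", n
        if llr <= log_B:
            return "undamaged", n
    return "undecided", len(residuals)

rng = np.random.default_rng(3)
baseline = rng.normal(0, 1, 2000)          # stand-in for a baseline vibration signal
coeffs = fit_ar(baseline)
sigma0 = ar_residuals(baseline, coeffs).std()
new_signal = rng.normal(0, 1.5, 500)       # "damaged" response with larger variance
print(variance_sprt(ar_residuals(new_signal, coeffs), sigma0, 1.5 * sigma0))
```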

27 citations


Book ChapterDOI
01 Jan 2002
TL;DR: In this chapter, an automatic input selection routine is described that combines principal component analysis, correlation analysis, and a genetic algorithm to select important input features; whether the sensors have failed is then determined by applying the sequential probability ratio test to the residuals between the estimated and measured signals.
Abstract: It is well known that the performance of fuzzy neural networks applied to sensor monitoring strongly depends on the selection of inputs. In their applications to sensor monitoring, there are usually a large number of input variables related to a relevant output. As the number of input variables increases, the required training time of a fuzzy neural network increases exponentially. Thus, it is essential to reduce the number of inputs to a fuzzy neural network and, moreover, to select the optimum number of mutually independent inputs that are able to clearly define the input-output mapping. In this chapter, an automatic input selection routine is described that combines principal component analysis, correlation analysis, and a genetic algorithm to select important input features. Whether the sensors have failed is then determined by applying the sequential probability ratio test to the residuals between the estimated signals and the measured signals. The described sensor failure detection method was verified through applications to the steam generator water level, the steam generator steam flowrate, the pressurizer water level, and the pressurizer pressure sensors in pressurized water reactors.
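The dimensionality-reduction step can be illustrated with a plain PCA projection as below; the correlation-analysis and genetic-algorithm stages of the chapter's input-selection routine, and the fuzzy neural network itself, are omitted, and all names and numbers are illustrative.

```python
import numpy as np

def pca_reduce(X, variance_to_keep=0.95):
    """Project candidate inputs onto the leading principal components.

    X: (n_samples, n_inputs) matrix of candidate sensor inputs.
    Returns the reduced input matrix and the projection (component) matrix.
    """
    Xc = X - X.mean(axis=0)
    # SVD of the centered data gives the principal directions in Vt.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(explained), variance_to_keep)) + 1
    components = Vt[:k]
    return Xc @ components.T, components

rng = np.random.default_rng(4)
raw_inputs = rng.normal(size=(1000, 12))            # 12 candidate inputs, partly redundant
raw_inputs[:, 6:] = raw_inputs[:, :6] + 0.05 * rng.normal(size=(1000, 6))
reduced, comps = pca_reduce(raw_inputs)
print(reduced.shape)   # far fewer inputs would feed the fuzzy neural network
```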

18 citations


Proceedings ArticleDOI
09 Mar 2002
TL;DR: In this paper, the adaptive test is shown to be uniformly asymptotically optimal in the sense that it minimizes the average sample size for all parameter values when probabilities of errors are small.
Abstract: Wald's sequential probability ratio test (SPRT) is known to be optimal for simple hypotheses. However, the hypotheses in target detection applications are usually composite, because, as a rule, the models are only partially known. A major method traditionally used for testing composite hypotheses is based on a generalized likelihood ratio. However, the generalized SPRT suffers from a crucial drawback - it is very difficult to select thresholds in order to guarantee prescribed levels of false alarms and missed detections. We consider an adaptive approach that allows us to overcome this problem. At each stage, unknown parameters are replaced with an estimator which is based on previous observations, but not on the current observation. It is shown that the adaptive test is uniformly asymptotically optimal in the sense that it minimizes the average sample size for all parameter values when probabilities of errors are small. The general results are applied to the problem of detecting a target with unknown intensity in clutter with unknown variance.
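A toy sketch of the adaptive plug-in idea, replacing the unknown parameter at each stage with an estimate built only from previous observations, for a target of unknown intensity in Gaussian noise of known variance. The lower bound on the intensity estimate and all parameter values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def adaptive_sprt(samples, sigma, theta_min, alpha=1e-3, beta=1e-3):
    """Adaptive SPRT: H0 no target (mean 0) vs H1 target of unknown intensity.

    At step n the unknown intensity is replaced by an estimate built only from
    samples 1..n-1, so the plug-in value is independent of the current sample
    and each increment remains a valid likelihood-ratio term under H0.
    """
    log_A = np.log((1 - beta) / alpha)
    log_B = np.log(beta / (1 - alpha))
    llr, running_sum = 0.0, 0.0
    for n, x in enumerate(samples, start=1):
        # Intensity estimate from previous observations, bounded away from zero.
        theta_hat = max(theta_min, running_sum / (n - 1)) if n > 1 else theta_min
        llr += (2 * theta_hat * x - theta_hat ** 2) / (2 * sigma ** 2)
        if llr >= log_A:
            return "target", n
        if llr <= log_B:
            return "no target", n
        running_sum += x
    return "undecided", len(samples)

rng = np.random.default_rng(5)
print(adaptive_sprt(rng.normal(1.0, 1.0, 300), sigma=1.0, theta_min=0.25))
```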

17 citations


Journal ArticleDOI
TL;DR: This article is a reprint of a chapter from the Handbook of Sequential Analysis, edited by Ghosh, B.K. and Sen, P.K. (Marcel Dekker, New York, 1991, pp. 121–144).
Abstract: *Reprinted from Handbook of Sequential Analysis; Ghosh, B.K., Sen, P.K., Eds.; Marcel Dekker, Inc.: New York, 1991, 121–144.

17 citations


Proceedings ArticleDOI
08 Jul 2002
TL;DR: By introducing the concept of measurement support of a track, an SPRT-type test in explicit form is developed based on several newly obtained results, and simulated numerical examples are given which demonstrate the performance of the proposed test.
Abstract: It is essential in target tracking to determine whether a track is really a good estimated trajectory of a target. Due to the many uncertainties involved in tracking, however, this is a difficult task. This paper presents a systematic approach for rejecting or accepting a track. It is based on the sequential probability ratio test (SPRT), which is optimal in the sense of having the quickest decision among all tests subject to the same or lower decision error rates. By introducing the concept of measurement support of a track, an SPRT-type test in explicit form is developed based on several newly obtained results. Simulated numerical examples are given which demonstrate the performance of the proposed test.
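The paper's measurement-support statistic is not reproduced here. As background only, a classical SPRT-style track accept/reject test driven by the hit/miss history in a track's validation gate looks roughly as follows; the detection and false-association probabilities are assumed, illustrative values.

```python
import numpy as np

def track_confirmation_sprt(hits, p_d=0.9, p_f=0.1, alpha=1e-3, beta=1e-2):
    """Accept or reject a tentative track from its gate hit/miss history.

    hits: iterable of booleans, True if a measurement supported the track
    (fell inside its validation gate) on that scan. H1: true target detected
    with probability p_d; H0: false track supported only with probability p_f.
    """
    log_A = np.log((1 - beta) / alpha)    # confirm the track
    log_B = np.log(beta / (1 - alpha))    # reject the track
    llr = 0.0
    for n, hit in enumerate(hits, start=1):
        llr += np.log(p_d / p_f) if hit else np.log((1 - p_d) / (1 - p_f))
        if llr >= log_A:
            return "confirm", n
        if llr <= log_B:
            return "reject", n
    return "pending", len(hits)

print(track_confirmation_sprt([True, True, False, True, True, True, True]))
```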

14 citations


Proceedings ArticleDOI
18 Jun 2002
TL;DR: In this application of the statistical pattern recognition paradigm, a prediction model of a chosen feature is developed from the time-domain response of a baseline structure, and the SPRT algorithm is used to decide whether the test structure is undamaged or damaged and which joint is exhibiting the change.
Abstract: In this application of the statistical pattern recognition paradigm, a prediction model of a chosen feature is developed from the time domain response of a baseline structure. After the model is developed, subsequent feature sets are tested against the model to determine if a change in the feature has occurred. In the proposed statistical inference for damage identification there are two basic hypotheses: (1) the model can predict the feature, in which case the structure is undamaged, or (2) the model cannot accurately predict the feature, suggesting that the structure is damaged. The Sequential Probability Ratio Test (SPRT) provides a statistical method that quickly arrives at a decision between these two hypotheses and is applicable to continuous monitoring. In the original formulation of the SPRT algorithm, the feature is assumed to be Gaussian and thresholds are set accordingly. It is likely, however, that the feature used for damage identification is sensitive to the tails of the distribution and that the tails may not necessarily be governed by Gaussian characteristics. By modeling the tails using the technique of Extreme Value Statistics, the hypothesis decision thresholds for the SPRT algorithm may be set while avoiding the normality assumption. The SPRT algorithm is utilized to decide if the test structure is undamaged or damaged and which joint is exhibiting the change.
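One common way to set tail-sensitive thresholds with extreme value statistics is to fit an extreme value distribution to block maxima of the baseline feature and take a high quantile, as sketched below. The Gumbel model, block size, and exceedance probability are illustrative assumptions, not necessarily the authors' choices.

```python
import numpy as np
from scipy import stats

def evs_threshold(baseline_feature, block_size=50, exceedance_prob=1e-3):
    """Set a damage-decision threshold from the upper tail of baseline data.

    Block maxima of the baseline feature are fitted with a Gumbel distribution
    (a standard extreme value model), and the threshold is the quantile with
    the requested exceedance probability, rather than a Gaussian quantile
    that may misrepresent the tails.
    """
    n_blocks = len(baseline_feature) // block_size
    maxima = baseline_feature[:n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
    loc, scale = stats.gumbel_r.fit(maxima)
    return stats.gumbel_r.ppf(1.0 - exceedance_prob, loc=loc, scale=scale)

rng = np.random.default_rng(6)
baseline_errors = np.abs(rng.standard_t(df=4, size=5000))   # heavy-tailed residual feature
print(evs_threshold(baseline_errors))
```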

12 citations


Journal ArticleDOI
TL;DR: In this paper, a partially sequential test procedure for ordered categorical data is presented, where a fixed number of observations drawn from the first population is used to provide a suitable stopping rule.
Abstract: In multi-sample inference problems, in particular two-sample inference problems, a partially sequential scheme, introduced by Wolfe (1977), is instrumental in reducing the time and/or cost of the experiment when one sample is very difficult or costly to obtain compared to the other. The present paper deals with a partially sequential test procedure for ordered categorical data. The "ridit", introduced by Bross (1958) and computed from a fixed number of observations drawn from the first population, is used to provide a suitable stopping rule. A random number of observations are drawn from the second population using that stopping rule. A suitable test statistic is suggested based on the fixed number of observations from the first population and the random number of observations from the second population. Various asymptotic properties of the test procedure are explored. The paper concludes with the results of a simulation study.
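For concreteness, ridit scores (Bross, 1958) computed from the first-population sample over ordered categories look like this; the category counts are illustrative.

```python
import numpy as np

def ridit_scores(reference_counts):
    """Ridit scores for ordered categories from a reference sample.

    ridit_j = (number of reference observations below category j
               + half of those in category j) / total.
    """
    counts = np.asarray(reference_counts, dtype=float)
    below = np.concatenate(([0.0], np.cumsum(counts)[:-1]))
    return (below + 0.5 * counts) / counts.sum()

# Reference sample over 5 ordered categories (e.g. severity grades).
print(ridit_scores([10, 25, 40, 20, 5]))
```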

Proceedings ArticleDOI
12 Mar 2002
TL;DR: A Q-learning method is presented for training sequential detection networks through reinforcement learning and cross-entropy minimization on labeled data; as a special case, networks are obtained that approximate the optimal parametric sequential probability ratio test.
Abstract: In this paper we discuss the design of sequential detection networks for nonparametric sequential analysis. We present a general probabilistic model for sequential detection problems where the sample size as well as the statistics of the sample can be varied. A general sequential detection network handles three decisions. First, the network decides whether to continue sampling or stop and make a final decision. Second, in the case of continued sampling the network chooses the source for the next sample. Third, once the sampling is concluded the network makes the final classification decision. We present a Q-learning method to train sequential detection networks through reinforcement learning and cross-entropy minimization on labeled data. As a special case we obtain networks that approximate the optimal parametric sequential probability ratio test. The performance of the proposed detection networks is compared to optimal tests using simulations.
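The paper's detection networks are not reproduced here. As a toy stand-in for the same decision structure (continue sampling vs. stop and classify), the sketch below trains a tabular Q-learner on a two-hypothesis Gaussian problem with a per-sample cost; the state discretization, costs, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy sequential detection: H0 ~ N(0,1) vs H1 ~ N(1,1), with a per-sample cost.
# A tabular Q-learner chooses among: continue sampling, declare H0, declare H1.
MAX_SAMPLES, N_MEAN_BINS = 20, 21
ACTIONS = ("continue", "declare_H0", "declare_H1")
SAMPLE_COST, LR, EPS = 0.05, 0.1, 0.1
Q = np.zeros((MAX_SAMPLES + 1, N_MEAN_BINS, len(ACTIONS)))

def state(n, total):
    """Discretize (samples taken, running mean) into table indices."""
    mean = total / n if n else 0.0
    b = int(np.clip((mean + 2.0) / 4.0 * (N_MEAN_BINS - 1), 0, N_MEAN_BINS - 1))
    return n, b

for _ in range(30000):
    h1 = rng.random() < 0.5                      # true hypothesis for this episode
    mu = 1.0 if h1 else 0.0
    n, total = 0, 0.0
    s = state(n, total)
    while True:
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        if ACTIONS[a] == "continue" and n < MAX_SAMPLES:
            n, total = n + 1, total + rng.normal(mu, 1.0)
            s_next = state(n, total)
            # Standard Q-learning update; the sampling cost is the immediate reward.
            Q[s][a] += LR * (-SAMPLE_COST + Q[s_next].max() - Q[s][a])
            s = s_next
        else:
            # Terminal step: +1 for a correct declaration, -1 otherwise
            # (running out of samples without declaring also scores -1).
            correct = ACTIONS[a] != "continue" and (ACTIONS[a] == "declare_H1") == h1
            Q[s][a] += LR * ((1.0 if correct else -1.0) - Q[s][a])
            break

# The learned greedy policy tends to keep sampling near ambiguous running means
# and to stop early for extreme ones, mirroring an SPRT's two-threshold structure.
print([ACTIONS[int(np.argmax(Q[5, b]))] for b in range(N_MEAN_BINS)])
```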

Journal ArticleDOI
TL;DR: A neuro-fuzzy inference system combined with the wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor the relevant sensor using the information of other sensors.
Abstract: A neuro-fuzzy inference system combined with the wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor the relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components in the input signals into the neuro-fuzzy system. By reducing the dimension of the input space into the neuro-fuzzy system without losing a significant amount of information, the PCA was used to reduce the time necessary to train the neuro-fuzzy system, simplify the structure of the neuro-fuzzy inference system, and also simplify the selection of the input signals into the neuro-fuzzy system. By using the residual signals between the estimated signals and the measured signals, the SPRT is applied to detect whether the sensors are degraded or not. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors.
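The wavelet-denoising front end can be illustrated with a standard soft-thresholding scheme, sketched below using the third-party PyWavelets package (an assumed tool, not named in the paper); the wavelet family, decomposition level, and universal threshold are illustrative choices.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest-level detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(signal)]

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 1024)
clean_signal = np.sin(2 * np.pi * 5 * t)
noisy = clean_signal + 0.3 * rng.normal(size=t.size)
denoised = wavelet_denoise(noisy)
print(np.std(noisy - clean_signal), np.std(denoised - clean_signal))
```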

Proceedings ArticleDOI
08 Mar 2002
TL;DR: The contribution of this work is a method for classifying an unlabeled vector of fused features (i.e., detecting a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm.
Abstract: In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test with that of classification and feature fusion is not well understood. Thus, the contribution of this work is based on developing a method of classifying an unlabeled vector of fused features (i.e., detect a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or, related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
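The repeated-SPRT structure described here, with the lower log-threshold set to zero and the upper threshold chosen for the desired mean time between false alerts, is exactly Page's cumulative-sum recursion; a minimal sketch with illustrative Gaussian hypotheses follows.

```python
import numpy as np

def pages_test(stream, mu0, mu1, sigma, upper_log_threshold):
    """Page's test: a repeated SPRT whose lower log-threshold is zero.

    Returns the index at which the cumulative statistic first crosses the
    upper threshold (a detection/classification), or None if it never does.
    """
    g = 0.0
    for i, x in enumerate(stream):
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        g = max(0.0, g + llr)          # reset to zero instead of accepting H0
        if g >= upper_log_threshold:
            return i
    return None

rng = np.random.default_rng(9)
stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.0, 1, 100)])
# A larger upper threshold buys a longer mean time between false alerts
# at the price of a longer delay to detection.
print(pages_test(stream, 0.0, 1.0, 1.0, upper_log_threshold=5.0))
```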

01 Jan 2002
TL;DR: General theorems are developed and a unified approach to analyzing and evaluating various properties of sequential tests and post-test estimation is proposed, allowing effective evaluation of properties of special interest.
Abstract: By the sufficiency principle, the probability density of a sequential test statistic under certain conditions can be factored into a known function that does not depend on the stopping rule and a conditional probability that is free of unknown parameters. We develop general theorems and propose a unified approach to analyzing and evaluating various properties of sequential tests and post-test estimation. The proposed approach is of practical value since it allows for effective evaluation of properties of special interest, such as the bias-adjustment of post-test estimation after a sequential test, and the probability of discordance between a sequential test and a nonsequential test.

01 Sep 2002
TL;DR: In this paper, two sub-grouping techniques, Subgroups Consistency Check (SCC) and Subgroups Voting (SV), are presented to prevent the effect of fault propagation in general signal validation methods.
Abstract: On-line signal validation is essential for safe and economic operation of a complicated industrial system such as a nuclear power plant. Various signal validation methods based on empirical signal estimation have been developed and successfully used. The first part of the thesis addresses a common and unavoidable problem for these methods: fault propagation, which causes false identification of healthy signals as faulty ones because of faults existing in other signals. This effect is especially serious when faults occur in multiple signals and/or during system transients. A sub-grouping technique is presented in the thesis to prevent the effect of fault propagation in general signal validation methods. Specifically, two methods, Subgroups Consistency Check (SCC) and Subgroups Voting (SV), are developed. Their effectiveness is demonstrated by using the well-known Multivariate State Estimation Technique (MSET) as a general method of signal estimation. To further improve the performance of MSET estimation, a procedure called Feedback Once (FBO) is also developed. All these new methods are tested and compared with MSET by using real transient data from a reactor startup process in a nuclear power plant. The results show that false identification of signals caused by fault propagation is significantly reduced by the two sub-grouping methods, and that the FBO method is able to improve the performance of MSET estimation to some extent. The results demonstrate that implementation of these new methods can lead to an improved signal validation technique that remains effective even when faults occur in multiple signals during system transients. The other major contribution concerns the improvement of the statistical test used for signal validation. The Sequential Probability Ratio Test (SPRT) is a popular method that has been widely used in many signal validation methods. However, the assumption of the SPRT is too stringent to satisfy in practice, which may cause the false identification rate to exceed the preset tolerance. In this thesis, a Sequential Rank-sum Probability Ratio Test (SRPRT) method is developed. This method is similar to the SPRT in procedure but is based on a much weaker assumption that can easily be satisfied. The demonstrations show that the SRPRT yields a smaller false identification rate than the SPRT and always stays below the preset tolerance.