
Sequential probability ratio test

About: Sequential probability ratio test is a research topic. Over the lifetime, 1248 publications have been published within this topic receiving 22355 citations.


Papers
Journal ArticleDOI
TL;DR: Under the assumption of two equiprobable classes that are normally distributed with equal covariance matrices, it is shown that the LSC is equivalent to Wald's sequential probability ratio test.
Abstract: A nonparametric sequential pattern classifier called a linear sequential classifier (LSC) is presented. The pattern components are measured sequentially and the decisions either to measure the next component or to stop and classify the pattern are made using linear functions derived from sample patterns based on the least mean-square error criterion. The required linear functions are computed using an adaptation of Greville's recursive algorithm for computing the generalized inverse of a matrix. A recursive algorithm for computing the least mean-square error is given and is used to determine the order in which the pattern components are measured. Under the assumption of two equiprobable classes that are normally distributed with equal covariance matrices, it is shown that the LSC is equivalent to Wald's sequential probability ratio test. Computer-simulated experiments indicate that the LSC is more effective than existing nonparametric sequential classifiers.

9 citations
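For context, Wald's SPRT that the LSC is shown to be equivalent to can be sketched in a few lines. This is a generic illustration of the classical test for two Gaussian hypotheses with known, equal variance; the function name and parameters are illustrative, not taken from the paper:

```python
import math

def sprt_gaussian(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for
    H0: x ~ N(mu0, sigma^2) versus H1: x ~ N(mu1, sigma^2).

    Returns (decision, n): decision is "H0", "H1", or "undecided"
    (if the sample stream runs out first); n is the number of samples used.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 when LLR >= upper
    lower = math.log(beta / (1 - alpha))   # accept H0 when LLR <= lower
    llr = 0.0
    n = 0
    for x in samples:
        n += 1
        # log of the likelihood ratio f1(x) / f0(x) for one Gaussian sample
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n
```

The thresholds come from Wald's approximations and guarantee (approximately) type-I error at most alpha and type-II error at most beta, while stopping as soon as the accumulated evidence is decisive.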

Journal ArticleDOI
TL;DR: The study showed that the SPRT with multidimensional IRT has the same characteristics as the SPRT with unidimensional IRT and results in more accurate classifications than the latter when used for multidimensional data.
Abstract: A classification method is presented for adaptive classification testing with a multidimensional item response theory (IRT) model in which items are intended to measure multiple traits, that is, within-dimensionality. The reference composite is used with the sequential probability ratio test (SPRT) to make decisions and decide whether testing can be stopped before reaching the maximum test length. Item-selection methods are provided that maximize the determinant of the information matrix at the cutoff point or at the projected ability estimate. A simulation study illustrates the efficiency and effectiveness of the classification method. Simulations were run with the new item-selection methods, random item selection, and maximization of the determinant of the information matrix at the ability estimate. The study also showed that the SPRT with multidimensional IRT has the same characteristics as the SPRT with unidimensional IRT and results in more accurate classifications than the latter when used for multidimensional data.

9 citations

Posted Content
TL;DR: In this paper, the authors proposed a change detection test based on Doob's maximal inequality and showed that it is an approximation of the sequential probability ratio test (SPRT), and the relationship between the threshold value used in the proposed test and its size and power was deduced from the approximation.
Abstract: A martingale framework for concept change detection based on testing data exchangeability was recently proposed (Ho, 2005). In this paper, we describe the proposed change-detection test based on Doob's maximal inequality and show that it is an approximation of the sequential probability ratio test (SPRT). The relationship between the threshold value used in the proposed test and its size and power is deduced from the approximation. The mean delay time before a change is detected is estimated using the average sample number of an SPRT. The performance of the test using various threshold values is examined on five different data stream scenarios simulated using two synthetic data sets. Finally, experimental results show that the test is effective in detecting changes in time-varying data streams simulated using three benchmark data sets.

8 citations
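The connection between a nonnegative martingale, Doob's maximal inequality, and a detection threshold can be illustrated with a simple likelihood-ratio martingale. This is a generic sketch, not Ho's exchangeability martingale: under the no-change hypothesis the statistic M_n has expectation 1, so Doob's maximal inequality gives P(sup_n M_n >= threshold) <= 1/threshold, i.e. the threshold directly bounds the false-alarm probability.

```python
import math

def martingale_change_detector(stream, mu0, mu1, sigma, threshold=20.0):
    """Likelihood-ratio martingale detector (illustrative sketch).

    Under "no change" (every sample ~ N(mu0, sigma^2)), M_n is a
    nonnegative martingale with E[M_n] = 1, so by Doob's maximal
    inequality the false-alarm probability is at most 1 / threshold.

    Returns the 1-based index at which a change toward N(mu1, sigma^2)
    is declared, or None if the stream ends first.
    """
    log_m = 0.0  # log of the martingale M_n
    log_threshold = math.log(threshold)
    for n, x in enumerate(stream, start=1):
        # per-sample log-likelihood ratio f1(x) / f0(x)
        log_m += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if log_m >= log_threshold:
            return n
    return None
```

Practical detectors restart or floor the statistic after long no-change stretches; this bare version lets M_n dwindle, which delays detection after a late change.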

Journal ArticleDOI
TL;DR: Wald's approximations are shown to be applicable even though the problem setting deviates from that of the traditional sequential probability ratio test (SPRT), and the proposed scheme achieves significant savings in the cost of data fusion.
Abstract: The problem of decentralized detection in a large wireless sensor network is considered. An adaptive decentralized detection scheme, the group-ordered sequential probability ratio test (GO-SPRT), is proposed. This scheme groups sensors according to the informativeness of their data. The fusion center collects sensor data sequentially, starting from the most informative data, and terminates the process when the target performance is reached. Wald's approximations are shown to be applicable even though the problem setting deviates from that of the traditional sequential probability ratio test (SPRT). To analyze the efficiency of GO-SPRT, the asymptotic equivalence between the average sample number of GO-SPRT, which is a function of a multinomial random variable, and a function of a normal random variable, is established. Closed-form approximations for the average sample number are then obtained. Compared with a fixed-sample-size test and the traditional SPRT, the proposed scheme achieves significant savings in the cost of data fusion.

8 citations
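Wald's approximations for the average sample number (ASN), which analyses like the one above build on, take the following textbook form; this is the standard statement for a plain SPRT, not the paper's closed-form GO-SPRT result. With error targets $(\alpha, \beta)$, log-thresholds $a = \log\frac{1-\beta}{\alpha}$ and $b = \log\frac{\beta}{1-\alpha}$, and per-sample log-likelihood ratio $Z$:

```latex
\mathbb{E}_{H_1}[N] \approx \frac{(1-\beta)\,a + \beta\,b}{\mathbb{E}_{H_1}[Z]},
\qquad
\mathbb{E}_{H_0}[N] \approx \frac{\alpha\,a + (1-\alpha)\,b}{\mathbb{E}_{H_0}[Z]}
```

The approximations ignore the overshoot of the likelihood ratio past the thresholds, which is why they are approximations rather than exact expected stopping times.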

Posted Content
TL;DR: In this article, the authors introduced almost-fixed-length hypothesis testing, where the decision-maker declares the true hypothesis almost always after collecting a fixed number of samples $n$; however, in very rare cases with exponentially small probability, the decision-maker is allowed to collect another set of samples (no more than polynomial in $n$) and improve the trade-off between type-I and type-II error exponents.
Abstract: The maximum type-I and type-II error exponents associated with the newly introduced almost-fixed-length hypothesis testing are characterized. In this class of tests, the decision-maker declares the true hypothesis almost always after collecting a fixed number of samples $n$; however, in very rare cases with exponentially small probability, the decision-maker is allowed to collect another set of samples (no more than polynomial in $n$). This class of hypothesis tests is shown to bridge the gap between classical hypothesis testing with a fixed sample size and sequential hypothesis testing, and to improve the trade-off between type-I and type-II error exponents.

8 citations


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
82% related
Linear model
19K papers, 1M citations
79% related
Estimation theory
35.3K papers, 1M citations
78% related
Markov chain
51.9K papers, 1.3M citations
77% related
Statistical hypothesis testing
19.5K papers, 1M citations
76% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  6
2022  23
2021  29
2020  23
2019  29
2018  32