
Sequential probability ratio test

About: Sequential probability ratio test is a research topic. Over the lifetime, 1248 publications have been published within this topic receiving 22355 citations.
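The SPRT accumulates a log-likelihood ratio over observations and stops as soon as it crosses one of two thresholds set by the target false-alarm rate (alpha) and miss rate (beta). A minimal sketch for two simple Bernoulli hypotheses; the parameter values are illustrative:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for Bernoulli H0: p = p0 vs H1: p = p1.

    Returns ("H0" or "H1", samples used), or ("undecided", n)
    if the data runs out before a boundary is crossed.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for one Bernoulli observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)
```

With well-separated hypotheses the test stops after only a few observations, which is the source of the sample-size savings reported throughout this topic.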


Papers
01 Aug 1981
TL;DR: The results of the study showed that the three-parameter logistic-based procedure had higher decision consistency than the one-parameter-based procedure when classifications were repeated after one week.
Abstract: This report describes a study comparing the classification results obtained from one-parameter and three-parameter logistic-based tailored testing procedures used in conjunction with Wald's sequential probability ratio test (SPRT). Eighty-eight college students were classified into four grade categories using achievement test results obtained from tailored testing procedures based on maximum-information item selection and maximum likelihood ability estimation. Tests were terminated using the SPRT procedure. The results showed that the three-parameter logistic-based procedure had higher decision consistency than the one-parameter-based procedure when classifications were repeated after one week. Both procedures required fewer items for classification into grade categories than a traditional test over the same material. The three-parameter procedure required the fewest items of all, using an average of 12 to 13 items to assign a grade.

1 citation
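The termination rule in the study above can be sketched as an SPRT on item responses under a one-parameter logistic (Rasch) model, deciding between two ability levels on either side of a cutoff. The ability levels, item difficulties, and error rates below are illustrative, not those of the report:

```python
import math

def rasch_p(theta, b):
    """1PL (Rasch) probability of a correct response at ability theta
    to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def classification_sprt(responses, difficulties, theta0, theta1,
                        alpha=0.05, beta=0.05):
    """SPRT termination for a mastery decision: H0: theta = theta0
    (below cutoff) vs H1: theta = theta1 (above cutoff)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, (x, b) in enumerate(zip(responses, difficulties), start=1):
        p0, p1 = rasch_p(theta0, b), rasch_p(theta1, b)
        # per-item log-likelihood ratio for a correct/incorrect response
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "above cutoff", n
        if llr <= lower:
            return "below cutoff", n
    return "undecided", len(responses)
```

A three-parameter model would change only `rasch_p` (adding discrimination and guessing parameters); the stopping rule itself is unchanged, which is why the two procedures in the study differ mainly in decision consistency and test length.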

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work mathematically compares and contrasts the sequential probability ratio test, Bayesian fusion, and Dempster-Shafer theory of evidence as approaches for combining information from multiple looks, and shows results for an application in infrared video classification.
Abstract: Multiple-look fusion is quickly becoming more important in statistical pattern recognition. With increased computing power and memory one can make many measurements on an object of interest using, for example, video imagery or radar. By obtaining more views of an object, a system can make decisions with lower missed detection and false alarm errors. There are many approaches for combining information from multiple looks and we mathematically compare and contrast the sequential probability ratio test, Bayesian fusion, and Dempster-Shafer theory of evidence. Using a consistent probabilistic framework we demonstrate the differences and similarities between the approaches and show results for an application in infrared video classification.

1 citation
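One similarity such a comparison rests on: when belief masses are assigned only to singleton hypotheses, Dempster's rule of combination coincides with Bayes' rule, since both are a pointwise product followed by renormalization. A small sketch with hypothetical mass values for two looks at an object:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements
    are all singletons: pointwise product, renormalized by the
    non-conflicting mass."""
    raw = {h: m1[h] * m2[h] for h in m1}
    z = sum(raw.values())          # z = 1 - conflict mass
    return {h: v / z for h, v in raw.items()}

def bayes_update(prior, likelihood):
    """Bayes' rule: posterior proportional to prior times likelihood."""
    raw = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(raw.values())
    return {h: v / z for h, v in raw.items()}

# Two looks expressed as Bayesian (singleton) masses -- made-up numbers:
m1 = {"target": 0.7, "clutter": 0.3}
m2 = {"target": 0.6, "clutter": 0.4}
```

The differences between the three approaches show up once masses are placed on non-singleton sets (Dempster-Shafer) or once a stopping rule is added (SPRT); with singleton masses and a fixed number of looks the fusion rules agree.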

Posted Content
TL;DR: In this paper, the authors consider sequential detection based on quantized data in the presence of an eavesdropper and characterize the asymptotic performance of the MSPRT in terms of the expected sample size as a function of the vanishing error probabilities.
Abstract: We consider sequential detection based on quantized data in the presence of an eavesdropper. Stochastic encryption is employed as a countermeasure that flips the quantization bits at each sensor according to certain probabilities; the flipping probabilities are known only to the legitimate fusion center (LFC), not to the eavesdropping fusion center (EFC). As a result, the LFC employs the optimal sequential probability ratio test (SPRT) for sequential detection, whereas the EFC employs a mismatched SPRT (MSPRT). We characterize the asymptotic performance of the MSPRT in terms of the expected sample size as a function of the vanishing error probabilities. We show that when the detection error probabilities are set to be the same at the LFC and EFC, every symmetric stochastic encryption is ineffective, in the sense that it leads to the same expected sample size at the LFC and EFC. Next, in the asymptotic regime of small detection error probabilities, we show that every stochastic encryption degrades the performance of quantized sequential detection at the LFC by increasing the expected sample size, and that the expected sample size required at the EFC is no smaller than that required at the LFC. The optimal stochastic encryption is then investigated in the sense of maximizing the difference between the expected sample sizes required at the EFC and the LFC. Although this optimization problem is nonconvex, we show that if the acceptable tolerance of the increase in the expected sample size at the LFC induced by the stochastic encryption is small enough, the globally optimal stochastic encryption can be obtained analytically; moreover, the optimal scheme flips only one type of quantized bit (i.e., 1 or 0) and keeps the other type unchanged.

1 citation
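The stochastic encryption above is simple to state: each quantized bit is flipped with some probability before transmission, so the bit distribution a fusion center should use is a mixture of the original statistics. A sketch of the bookkeeping; the per-sensor bit statistics and flip probability are made up for illustration:

```python
def flip(q, r):
    """P(received bit = 1) when the sent bit is 1 with probability q
    and is flipped with probability r before transmission."""
    return q * (1 - r) + (1 - q) * r

# Hypothetical bit statistics under H0 and H1, and a flip probability.
q0, q1, r = 0.2, 0.8, 0.3

# What actually arrives at either fusion center:
f0, f1 = flip(q0, r), flip(q1, r)

# The LFC knows r, so its SPRT uses the flipped statistics (f0, f1).
# The EFC does not, so its SPRT uses (q0, q1) against data distributed
# according to (f0, f1) -- the mismatched SPRT (MSPRT) of the paper.
```

Note that `flip(q, 0.5)` is 0.5 for any `q`: flipping with probability one-half destroys all information in the bit, for eavesdropper and legitimate receiver alike, which is why the design problem is a trade-off rather than simply maximizing the flip rate.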

01 May 2013
TL;DR: In this paper, a sequential probability ratio test for collision avoidance maneuver decisions is proposed, which explicitly allows decision-makers to incorporate false alarm and missed detection risks, and is potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold.
Abstract: A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and that are potentially less sensitive to modeling errors than a procedure relying solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard-body radius at the predicted time of closest approach, and the other models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well in a realistic example based on an upcoming, highly elliptical orbit formation-flying mission.

1 citation

01 Jan 1997
TL;DR: This paper concentrates on sequential inference for the case of simple hypotheses and for the case of simple hypotheses with a nuisance parameter, proceeding from Wald's sequential probability ratio test (SPRT) and Cox's maximum likelihood SPRT.
Abstract: In many clinical experiments there is a conflict between the ethical demand to provide the best possible medical care for the patients and the statistician's desire to run an efficient experiment. Play-the-winner allocations are a group of designs that, during the experiment, tend to place more patients on the treatment that seems to be better. Using a randomized play-the-winner allocation together with inference suited to the design is one way to reconcile these considerations. In this paper we concentrate on sequential inference, for the case of simple hypotheses and for the case of simple hypotheses with a nuisance parameter. The response to treatment is assumed to be dichotomous. We proceed from Wald's sequential probability ratio test (SPRT) and Cox's maximum likelihood SPRT for the two hypothesis cases above.

1 citation
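The randomized play-the-winner allocation above can be sketched as an urn model: draw a ball to allocate each patient, then add a ball favouring whichever treatment the dichotomous response supports. The urn parameters and response model below are illustrative:

```python
import random

def rpw_trial(n, success, u=1, rng=None):
    """Randomized play-the-winner allocation via an urn model.

    Start with u balls per treatment. Each patient is allocated by
    drawing a ball (with replacement); a success adds a ball of the
    same treatment, a failure adds a ball of the other treatment.
    `success(t)` returns the dichotomous response to treatment t.
    """
    rng = rng or random.Random()
    urn = [u, u]
    allocations = []
    for _ in range(n):
        # draw treatment 0 with probability urn[0] / (urn[0] + urn[1])
        t = 0 if rng.random() * (urn[0] + urn[1]) < urn[0] else 1
        allocations.append(t)
        if success(t):
            urn[t] += 1        # reinforce the apparently better arm
        else:
            urn[1 - t] += 1    # shift future draws to the other arm
    return allocations, urn
```

Because the allocation probabilities depend on accumulating outcomes, standard fixed-design inference does not apply directly, which is what motivates pairing the design with sequential tests such as the SPRT.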


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 82% related
Linear model: 19K papers, 1M citations, 79% related
Estimation theory: 35.3K papers, 1M citations, 78% related
Markov chain: 51.9K papers, 1.3M citations, 77% related
Statistical hypothesis testing: 19.5K papers, 1M citations, 76% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  6
2022  23
2021  29
2020  23
2019  29
2018  32