scispace - formally typeset
Topic

Sequential probability ratio test

About: Sequential probability ratio test is a research topic. Over the lifetime, 1248 publications have been published within this topic receiving 22355 citations.
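As background for the papers below: Wald's SPRT accumulates a log-likelihood ratio observation by observation and stops as soon as it crosses one of two thresholds set by the target error rates α and β. A minimal illustrative sketch for Bernoulli observations (the function name, parameter values, and return convention are assumptions for illustration, not taken from any paper listed here):

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for Bernoulli data: H0: p = p0 vs H1: p = p1.
    Returns a (decision, samples_used) pair, where decision is
    'H0', 'H1', or 'continue' if the data ran out first."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for one Bernoulli observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)
```

With p0 = 0.2, p1 = 0.8, and default error rates, a run of identical outcomes triggers a decision after only three observations, which is the sample-efficiency property that motivates the sequential designs in the papers below.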


Papers
Proceedings ArticleDOI
31 Dec 2009
TL;DR: A novel spectrum sensing technique, called multi-slot spectrum sensing, is proposed to detect spectral holes and opportunistically use under-utilized frequency bands without causing harmful interference to legacy (primary) networks.
Abstract: In this paper, we propose a novel spectrum sensing technique, called multi-slot spectrum sensing, to detect spectral holes and to opportunistically use under-utilized frequency bands without causing harmful interference to legacy (primary) networks. The key idea of the proposed sensing scheme is to combine the observations from the past N (N ≥ 2) sensing blocks, including the latest one. Specifically, we first establish the detection model for the proposed multi-slot spectrum sensing technique. Then, we deploy the backward sequential probability ratio test (BSPRT) on the established model to detect spectral holes. Moreover, we evaluate the performance of the proposed scheme in terms of the mean delay to detection and the mean time to false alarm. Compared with the equal-combining strategy, which combines the statistics of the past multiple sensing blocks with equal weights, the proposed BSPRT-based sensing strategy always performs better, as verified via simulations.

10 citations
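The backward accumulation idea in the BSPRT paper above can be illustrated with a simple sketch: run the log-likelihood ratio over the most recent sensing-block statistics starting from the newest block. Everything concrete here (Gaussian block statistics with means mu0/mu1, the labels, the thresholds) is an illustrative assumption, not the paper's actual detection model:

```python
import math

def backward_sprt(stats, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Illustrative backward SPRT over per-block sensing statistics,
    modeled as Gaussian with mean mu0 (channel idle) or mu1 (occupied).
    Accumulates the log-likelihood ratio from the newest block backward."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for x in reversed(stats):  # newest sensing block first
        # Gaussian log-likelihood ratio increment log(f1(x)/f0(x))
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "occupied"   # primary user present
        if llr <= lower:
            return "idle"       # spectral hole detected
    return "undecided"
```

Starting from the newest block means the freshest evidence about the channel state dominates the decision, which is the intuition behind preferring a backward test over equal weighting of old and new blocks.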

Journal ArticleDOI
TL;DR: A new approach is proposed in which a time limit is defined for the test and examinees’ response times are considered in both item selection and test termination; in simulation, it showed a substantial reduction in average testing time while slightly improving classification accuracy.
Abstract: A well-known approach in computerized mastery testing is to combine the Sequential Probability Ratio Test (SPRT) stopping rule with item selection to maximize Fisher information at the mastery threshold. This article proposes a new approach in which a time limit is defined for the test and examinees’ response times are considered in both item selection and test termination. Item selection is performed by maximizing Fisher information per time unit, rather than Fisher information itself. The test is terminated once the SPRT makes a classification decision, the time limit is exceeded, or there is no remaining item that has a high enough probability of being answered before the time limit. In a simulation study, the new procedure showed a substantial reduction in average testing time while slightly improving classification accuracy compared with the original method. In addition, the new procedure reduced the percentage of examinees who exceeded the time limit.

10 citations
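The item-selection rule described above, maximizing Fisher information per time unit rather than Fisher information itself, can be sketched under a 2PL IRT model. The item fields `a` (discrimination), `b` (difficulty), and `t` (expected response time in seconds) are hypothetical names for illustration:

```python
import math

def select_item(items, theta):
    """Pick the item maximizing 2PL Fisher information per expected
    second of response time (field names are hypothetical)."""
    def info(a, b):
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))  # 2PL success probability
        return a * a * p * (1.0 - p)                  # 2PL Fisher information
    return max(items, key=lambda it: info(it["a"], it["b"]) / it["t"])
```

For example, of two items equally informative at the current θ estimate, this rule prefers the one expected to be answered faster, which is how the procedure trades a little per-item information for shorter total testing time.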

Journal ArticleDOI
TL;DR: In this paper, a partial SPRT (PSPRT), in which an initial fixed number of observations is followed by Wald's SPRT, is compared with a Wald SPRT having the same error probabilities; for some parameter values, the PSPRT may have a lower average sample number (ASN).
Abstract: In testing a normal mean with known variance, or a Koopman-Darmois parameter, an initial fixed number n of observations is followed by Wald's SPRT procedure. Properties of the conditional SPRT, and optimality properties within a certain class of tests, are noted. For some parameter values, the PSPRT may have a lower ASN than a Wald SPRT with the same error probabilities.

10 citations

Journal ArticleDOI
20 Apr 2018
TL;DR: Simulations show that the SeqRDT approach leads to faster decision making compared to its fixed-sample counterpart Block-RDT, and is robust to model mismatches compared to the sequential probability ratio test (SPRT) when the actual signal is a distorted version of the assumed signal.
Abstract: In this work, we propose a non-parametric sequential hypothesis test based on random distortion testing (RDT). RDT addresses the problem of testing whether or not a random signal, $\Xi$, observed in independent and identically distributed (i.i.d.) additive noise deviates by more than a specified tolerance, $\tau$, from a fixed model, $\xi_0$. The test is non-parametric in the sense that the underlying signal distributions under each hypothesis are assumed to be unknown. The need to control the probabilities of false alarm (PFA) and missed detection (PMD), while reducing the number of samples required to make a decision, leads to a novel sequential algorithm, SeqRDT. We show that, under mild assumptions on the signal, SeqRDT has the properties desired of a sequential test. We introduce the concept of a buffer and derive bounds on the PFA and PMD, from which we choose the buffer size. Simulations show that SeqRDT leads to faster decision-making on average compared to its fixed-sample-size (FSS) counterpart, Block-RDT. These simulations also show that the proposed algorithm is robust to model mismatches compared to the sequential probability ratio test (SPRT).

10 citations

01 Jan 2009
TL;DR: This study utilized a Monte Carlo approach, with 10,000 examinees simulated under each condition, to evaluate differences in efficiency and accuracy due to hypothesis structure, nominal error rate, and indifference region size.
Abstract: Computer-based testing can be used to classify examinees into mutually exclusive groups. Currently, the predominant psychometric algorithm for designing computerized classification tests (CCTs) is the sequential probability ratio test (SPRT; Reckase, 1983) based on item response theory (IRT). The SPRT has been shown to be more efficient than confidence intervals around θ estimates as a method for CCT delivery (Spray & Reckase, 1996; Rudner, 2002). More recently, it was demonstrated that the SPRT, which only uses fixed values, is less efficient than a generalized form which tests whether a given examinee’s θ is below θ1 or above θ2 (Thompson, 2007). This formulation allows the indifference region to vary based on observed data. Moreover, this composite hypothesis formulation better represents the conceptual purpose of the test, which is to test whether θ is above or below the cutscore. The purpose of this study was to explore the specifications of the new generalized likelihood ratio (GLR; Huang, 2004). As with the SPRT, the efficiency of the procedure depends on the nominal error rates and the distance between θ1 and θ2 (Eggen, 1999). This study utilized a Monte Carlo approach, with 10,000 examinees simulated under each condition, to evaluate differences in efficiency and accuracy due to hypothesis structure, nominal error rate, and indifference region size. The GLR was always at least as efficient as the fixed-point SPRT while maintaining equivalent levels of accuracy.

10 citations
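The fixed-point SPRT that the study above takes as its baseline can be sketched for mastery classification under a Rasch model: compare the response likelihood at the two fixed points θ1 (fail) and θ2 (pass) bounding the indifference region. The item difficulties, cutpoints, and error rates below are illustrative assumptions, not the study's actual settings:

```python
import math

def cct_sprt(responses, difficulties, theta1, theta2, alpha=0.05, beta=0.05):
    """Fixed-point SPRT for a computerized classification test under a
    Rasch model, comparing the likelihood at theta2 (pass) vs theta1 (fail).
    `responses` are 0/1 item scores; `difficulties` are Rasch b-parameters."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    def p(theta, b):
        return 1.0 / (1.0 + math.exp(-(theta - b)))  # Rasch success probability
    llr = 0.0
    for x, b in zip(responses, difficulties):
        p1, p2 = p(theta1, b), p(theta2, b)
        # log-likelihood ratio increment for one scored item
        llr += math.log(p2 / p1) if x else math.log((1 - p2) / (1 - p1))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "undecided"
```

The GLR variant studied above replaces the two fixed evaluation points with maximized likelihoods over θ ≤ θ1 and θ ≥ θ2, letting the effective indifference region adapt to the observed responses.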


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (82% related)
Linear model: 19K papers, 1M citations (79% related)
Estimation theory: 35.3K papers, 1M citations (78% related)
Markov chain: 51.9K papers, 1.3M citations (77% related)
Statistical hypothesis testing: 19.5K papers, 1M citations (76% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    6
2022    23
2021    29
2020    23
2019    29
2018    32