Topic

Sequential probability ratio test

About: Sequential probability ratio test is a research topic. Over the lifetime, 1,248 publications have been published within this topic, receiving 22,355 citations.


Papers
Proceedings ArticleDOI
22 Apr 2008
TL;DR: This method minimizes the ghost-source problem of current estimation methods, and achieves a lower false alarm rate compared with current detection methods.
Abstract: Identification of a low-level point radiation source amidst background radiation is achieved by a network of radiation sensors using a two-step approach. Based on measurements from three sensors, the geometric difference triangulation method is used to estimate the location and strength of the source. Then a sequential probability ratio test based on current measurements and estimated parameters is employed to finally decide: (1) the presence of a source with the estimated parameters, or (2) the absence of the source, or (3) the insufficiency of measurements to make a decision. This method achieves specified levels of false alarm and missed detection probabilities, while ensuring a close-to-minimal number of measurements for reaching a decision. This method minimizes the ghost-source problem of current estimation methods, and achieves a lower false alarm rate compared with current detection methods. This method is tested and demonstrated using: (1) simulations, and (2) a test-bed that utilizes the scaling properties of point radiation sources to emulate high intensity ones that cannot be easily and safely handled in laboratory experiments.

101 citations
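The abstract above describes a three-way sequential decision (source present, source absent, or measurements insufficient). The following is a minimal sketch of such a test, not the authors' implementation: it assumes Poisson-distributed sensor counts with a hypothetical background rate lam0 and a source-plus-background rate lam1 (the latter would come from the triangulation step), and uses Wald's standard threshold approximations for the specified false-alarm (alpha) and missed-detection (beta) probabilities.

import math
import numpy as np

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    """Wald SPRT on Poisson counts: H0 = background only (rate lam0)
    vs H1 = background plus the estimated source (rate lam1).
    Returns one of the three decisions described in the abstract."""
    upper = math.log((1 - beta) / alpha)   # cross upward  -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0
    llr = 0.0
    for k in counts:
        # log-likelihood ratio increment for one Poisson count k
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "source present"
        if llr <= lower:
            return "source absent"
    return "insufficient measurements"

# Example with simulated counts from a weak source on top of background
rng = np.random.default_rng(0)
lam0, lam1 = 5.0, 6.5                      # hypothetical count rates per interval
print(sprt_poisson(rng.poisson(lam1, 200), lam0, lam1))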

Journal ArticleDOI
TL;DR: A first-passage time fluctuation theorem is derived which implies that the decision time distributions for correct and wrong decisions are equal.
Abstract: We show that the steady-state entropy production rate of a stochastic process is inversely proportional to the minimal time needed to decide on the direction of the arrow of time. Here we apply Wald's sequential probability ratio test to optimally decide on the direction of time's arrow in stationary Markov processes. Furthermore, the steady-state entropy production rate can be estimated using mean first-passage times of suitable physical variables. We derive a first-passage time fluctuation theorem which implies that the decision time distributions for correct and wrong decisions are equal. Our results are illustrated by numerical simulations of two simple examples of nonequilibrium processes.

91 citations
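To make the connection in this abstract concrete: for a stationary Markov chain, the per-step log-likelihood ratio between the forward and time-reversed path measures has mean equal to the entropy production per step, so a Wald SPRT on that ratio decides the direction of time's arrow faster when entropy production is higher. The sketch below illustrates this under assumptions of my own choosing (a hypothetical driven three-state cycle with known transition matrix); it is not the paper's code.

import numpy as np

def arrow_of_time_sprt(traj, P, pi, alpha=0.01):
    """Wald SPRT deciding whether a trajectory of a stationary Markov
    chain (transition matrix P, stationary distribution pi) runs
    forward or backward in time, with symmetric error probabilities."""
    P_rev = (pi[None, :] * P.T) / pi[:, None]   # time-reversed transition matrix
    thresh = np.log((1 - alpha) / alpha)        # symmetric Wald thresholds
    llr = 0.0                                   # log P(forward) - log P(reversed)
    for t, (i, j) in enumerate(zip(traj[:-1], traj[1:]), start=1):
        llr += np.log(P[i, j] / P_rev[i, j])
        if llr >= thresh:
            return "forward", t
        if llr <= -thresh:
            return "reversed", t
    return "undecided", len(traj) - 1

# Hypothetical driven three-state cycle; P is doubly stochastic,
# so the stationary distribution is uniform.
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
pi = np.full(3, 1 / 3)

rng = np.random.default_rng(1)
state, traj = 0, [0]
for _ in range(1000):
    state = rng.choice(3, p=P[state])
    traj.append(state)
print(arrow_of_time_sprt(traj, P, pi))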

Journal ArticleDOI
23 Jun 2008
TL;DR: It is shown experimentally that the proposed sequential correspondence verification (SCV) algorithm significantly outperforms the standard correspondence selection method based on SIFT distance ratios on challenging matching problems.
Abstract: In many retrieval, object recognition, and wide-baseline stereo methods, correspondences of interest points (distinguished regions) are commonly established by matching compact descriptors such as SIFTs. We show that a subsequent cosegmentation process coupled with a quasi-optimal sequential decision process leads to a correspondence verification procedure that 1) has high precision (is highly discriminative), 2) has good recall, and 3) is fast. The sequential decision on the correctness of a correspondence is based on simple statistics of a modified dense stereo matching algorithm. The statistics are projected on a prominent discriminative direction by SVM. Wald's sequential probability ratio test is performed on the SVM projection computed on progressively larger cosegmented regions. We show experimentally that the proposed sequential correspondence verification (SCV) algorithm significantly outperforms the standard correspondence selection method based on SIFT distance ratios on challenging matching problems.

89 citations
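As a rough illustration of the sequential decision described above (not the SCV implementation itself), the sketch below accumulates a log-likelihood ratio over per-stage scores, such as an SVM projection computed on progressively larger cosegmented regions, and stops at Wald's thresholds. The Gaussian class-conditional model and its parameters (mu_pos, mu_neg, sigma) are hypothetical stand-ins for statistics that would be learned offline.

import math

def verify_correspondence(stage_scores, mu_pos, mu_neg, sigma,
                          alpha=0.05, beta=0.05):
    """Sequential verification of a tentative correspondence: per-stage
    scores are modelled as Gaussian with mean mu_pos for correct and
    mu_neg for incorrect correspondences (hypothetical parameters)."""
    upper = math.log((1 - beta) / alpha)   # accept as correct
    lower = math.log(beta / (1 - alpha))   # reject as incorrect
    llr = 0.0
    for s in stage_scores:
        # Gaussian log-likelihood ratio increment for score s
        llr += ((s - mu_neg) ** 2 - (s - mu_pos) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "correct"
        if llr <= lower:
            return "incorrect"
    return "undecided"   # fall back to evaluating the full region

# Example: scores from three progressively larger cosegmented regions
print(verify_correspondence([0.9, 1.1, 1.3], mu_pos=1.0, mu_neg=-1.0, sigma=1.0))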

Book
28 Oct 1992
TL;DR: In this paper, a theory of optimal sampling is developed in order to prove the various properties of the procedures, and the procedures turn out to be optimal in a Bayesian sense as well as for problems with side conditions (e.g., specified bounds on error probabilities or expected sampling costs).
Abstract: This volume is concerned with statistical procedures where the data are collected in sequentially designed groups. The basic premise here is that the expected total sample size is not always the appropriate criterion for evaluating statistical procedures, especially for nonlinear sampling costs (e.g., additive fixed costs) and in clinical trials. In fact, this criterion seems to have been a hindrance to the practical use of Wald's sequential probability ratio test (SPRT) despite its well-known optimum properties. This volume systematically develops decision procedures which retain the possibility of early stopping and remove some of the disadvantages of one-at-a-time sampling. In particular, for generalizations of the SPRT algorithms, methods for computing characteristics (such as operating characteristics or power functions, expected sampling costs, etc.) are developed and implemented. The procedures turn out to be optimal in a Bayesian sense as well as for problems with side conditions (e.g., specified bounds on error probabilities or expected sampling costs). A theory of optimal sampling is developed in order to prove the various properties of the procedures.

89 citations
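The characteristics this book computes for its group sequential generalizations have simple classical counterparts for Wald's original SPRT. As background only, the sketch below evaluates Wald's textbook approximations for a plain Bernoulli SPRT: the log thresholds and the average sample numbers under each hypothesis, ignoring boundary overshoot. The parameter values are illustrative.

import math

def sprt_characteristics(p0, p1, alpha=0.05, beta=0.05):
    """Wald's approximations for an SPRT between Bernoulli(p0) and
    Bernoulli(p1): stopping thresholds and expected sample sizes."""
    logA = math.log((1 - beta) / alpha)    # accept H1 at or above logA
    logB = math.log(beta / (1 - alpha))    # accept H0 at or below logB
    # per-observation log-likelihood ratio for x in {0, 1}
    z1 = math.log(p1 / p0)                 # increment when x = 1
    z0 = math.log((1 - p1) / (1 - p0))     # increment when x = 0
    # mean LLR increment under each hypothesis
    Ez_H0 = p0 * z1 + (1 - p0) * z0
    Ez_H1 = p1 * z1 + (1 - p1) * z0
    # Wald's approximate average sample numbers
    EN_H0 = (alpha * logA + (1 - alpha) * logB) / Ez_H0
    EN_H1 = ((1 - beta) * logA + beta * logB) / Ez_H1
    return {"logA": logA, "logB": logB, "E[N|H0]": EN_H0, "E[N|H1]": EN_H1}

print(sprt_characteristics(p0=0.5, p1=0.6))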

Journal ArticleDOI
TL;DR: In this article, the authors compare the finite sample performance of a range of tests of linear restrictions for linear panel data models estimated using the generalized method of moments (GMM), including standard asymptotic Wald tests based on one-step and two-step GMM estimators.
Abstract: We compare the finite sample performance of a range of tests of linear restrictions for linear panel data models estimated using the generalized method of moments (GMM). These include standard asymptotic Wald tests based on one-step and two-step GMM estimators; two bootstrapped versions of these Wald tests; a version of the two-step Wald test that uses a finite sample corrected estimate of the variance of the two-step GMM estimator; the LM test; and three criterion-based tests that have recently been proposed. We consider both the AR(1) panel model and a design with predetermined regressors. The corrected two-step Wald test performs similarly to the standard one-step Wald test, whilst the bootstrapped one-step Wald test, the LM test, and a simple criterion-difference test can provide more reliable finite sample inference in some cases.

86 citations
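For reference, the statistic whose finite sample behaviour is being compared here is the standard asymptotic Wald statistic for linear restrictions. The sketch below is a generic illustration, not the paper's simulation design: it takes any estimator and covariance estimate (e.g. from one-step or two-step GMM) and tests R beta = r against a chi-square reference distribution; the numerical values are placeholders.

import numpy as np
from scipy import stats

def wald_test(beta_hat, V_hat, R, r):
    """Asymptotic Wald test of the linear restrictions R @ beta = r,
    given an estimator beta_hat and an estimate V_hat of its covariance.
    Returns the Wald statistic and its chi-square p-value."""
    diff = R @ beta_hat - r
    W = float(diff @ np.linalg.solve(R @ V_hat @ R.T, diff))
    q = R.shape[0]                      # number of restrictions
    p_value = stats.chi2.sf(W, df=q)
    return W, p_value

# Hypothetical example: test beta_1 = 0 and beta_2 = beta_3
beta_hat = np.array([0.8, 0.05, 0.30, 0.28])
V_hat = 0.01 * np.eye(4)                # placeholder covariance estimate
R = np.array([[0., 1., 0., 0.],
              [0., 0., 1., -1.]])
r = np.zeros(2)
print(wald_test(beta_hat, V_hat, R, r))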


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (82% related)
Linear model: 19K papers, 1M citations (79% related)
Estimation theory: 35.3K papers, 1M citations (78% related)
Markov chain: 51.9K papers, 1.3M citations (77% related)
Statistical hypothesis testing: 19.5K papers, 1M citations (76% related)
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2023    6
2022    23
2021    29
2020    23
2019    29
2018    32