scispace - formally typeset
Topic

Sequential probability ratio test

About: Sequential probability ratio test is a research topic. Over the lifetime, 1,248 publications have been published within this topic, receiving 22,355 citations.


Papers
Journal ArticleDOI
TL;DR: The relative efficiency of a sequential hypothesis test compared to a fixed sample size test is defined as the ratio of the expected sample size of the sequential test to the sample size of the fixed-sample-size likelihood ratio test with the same size and power.
Abstract: The relative efficiency of a sequential hypothesis test compared to a fixed sample size test is defined as the ratio of the expected sample size of the sequential test to the sample size of the fixed sample size test with the same size and power. Asymptotic behavior of the relative efficiency is studied for the detection of a constant signal in additive noise. With some regularity conditions imposed on the noise density, the asymptotic relative efficiency of the sequential probability ratio test with respect to the corresponding fixed sample size likelihood ratio test is a function of the size and the power. As the size α approaches zero and the power approaches unity, this asymptotic relative efficiency has a limiting value depending on the functional relationship of α and 1 − β as they approach zero. Comparison of the power functions is also studied.

33 citations
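The sequential test compared above is Wald's classic SPRT. A minimal sketch of the Gaussian-mean form (function and parameter names are illustrative, not taken from the paper) shows where the random, data-dependent sample size comes from:

```python
import math

def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha, beta):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.

    Thresholds use Wald's approximations A = (1-beta)/alpha and
    B = beta/(1-alpha). Returns (decision, number of samples consumed);
    decision is "undecided" if the data run out before a boundary is hit.
    """
    log_A = math.log((1.0 - beta) / alpha)   # cross above: accept H1
    log_B = math.log(beta / (1.0 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood-ratio increment for one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= log_A:
            return "H1", n
        if llr <= log_B:
            return "H0", n
    return "undecided", len(samples)
```

The stopping time `n` is what the paper's relative efficiency divides by the fixed sample size of the matching likelihood ratio test.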

Posted Content
TL;DR: In this paper, a sequential Gaussian shift-in-mean hypothesis testing in a distributed multi-agent network is studied and the expected stopping time for the proposed algorithm at each network agent is evaluated and is benchmarked with respect to the optimal centralized algorithm.
Abstract: This paper studies the problem of sequential Gaussian shift-in-mean hypothesis testing in a distributed multi-agent network. A sequential probability ratio test (SPRT) type algorithm in a distributed framework of the \emph{consensus}+\emph{innovations} form is proposed, in which the agents update their decision statistics by simultaneously processing latest observations (innovations) sensed sequentially over time and information obtained from neighboring agents (consensus). For each pre-specified set of type I and type II error probabilities, local decision parameters are derived which ensure that the algorithm achieves the desired error performance and terminates in finite time almost surely (a.s.) at each network agent. Large deviation exponents for the tail probabilities of the agent stopping time distributions are obtained and it is shown that asymptotically (in the number of agents or in the high signal-to-noise-ratio regime) these exponents associated with the distributed algorithm approach that of the optimal centralized detector. The expected stopping time for the proposed algorithm at each network agent is evaluated and is benchmarked with respect to the optimal centralized algorithm. The efficiency of the proposed algorithm in the sense of the expected stopping times is characterized in terms of network connectivity. Finally, simulation studies are presented which illustrate and verify the analytical findings.

32 citations
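The consensus+innovations update described above can be sketched in a heavily simplified scalar form (a single consensus gain, illustrative names; the paper's actual algorithm and its parameter derivations are more involved):

```python
def ci_sprt_step(stats, neighbors, weight, innovations):
    """One consensus+innovations update of per-agent SPRT statistics.

    stats[i]       -- agent i's current decision statistic
    neighbors[i]   -- indices of the agents i exchanges messages with
    weight         -- consensus gain (illustrative scalar)
    innovations[i] -- agent i's newest local log-likelihood-ratio increment

    Each agent moves toward its neighbors' statistics (consensus) while
    absorbing its own fresh observation (innovation).
    """
    return [
        s + weight * sum(stats[j] - s for j in neighbors[i]) + innovations[i]
        for i, s in enumerate(stats)
    ]
```

Each agent then runs an SPRT-style threshold check on its own statistic, which is why the error probabilities and stopping times can be analyzed per agent.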

Journal ArticleDOI
TL;DR: In this article, a new multivariate approach to quality control is presented: on the basis of sequential probability ratio tests, a multivariate cumulative sum chart is derived, and, using an approximation to the noncentral χ² distribution, a linear decision rule is obtained.
Abstract: A new multivariate approach to quality control is presented. On the basis of sequential probability ratio tests, a multivariate cumulative sum chart is derived. Using an approximation to the noncentral χ² distribution, a linear decision rule is obtained.

32 citations
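The SPRT-to-CUSUM connection underlying this chart is easiest to see in the univariate case: a CUSUM is a sequence of SPRTs whose lower boundary is replaced by a reset to zero. A minimal sketch for a Gaussian shift in mean (illustrative names; the paper's chart is multivariate):

```python
def cusum_from_sprt(samples, mu0, mu1, sigma, h):
    """One-sided CUSUM chart: a repeated SPRT with its lower boundary reset to 0.

    Signals a shift from mean mu0 to mean mu1 once the statistic exceeds
    the decision interval h. Returns the 1-based index of the first signal,
    or None if the chart never signals.
    """
    s = 0.0
    for n, x in enumerate(samples, start=1):
        # per-observation SPRT log-likelihood-ratio increment
        inc = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        s = max(0.0, s + inc)   # resetting at 0 is what turns the SPRT into a CUSUM
        if s > h:
            return n
    return None
```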

Journal ArticleDOI
TL;DR: In this paper, degradation-based reliability demonstration test (RDT) plan design problems for long-life products under small-sample circumstances are studied, and the superiority of degradation-based RDT methods over traditional failure-based methods is shown.
Abstract: In this paper, degradation-based reliability demonstration test (RDT) plan design problems for long-life products under small-sample circumstances are studied. A fixed sample method, a sequential probability ratio test (SPRT) method, and a sequential Bayesian decision method are provided based on univariate degradation testing. Simulation examples show the superiority of degradation-based RDT methods over traditional failure-based methods, and the sequential-type methods have more test power than their fixed sample counterparts. The test power can be further improved by combining the test data of a reliability indicator with the data of its marker, on which basis the bivariate fixed sample method and sequential Bayesian decision method are defined. A simulation study shows the benefit of this combination. A degradation-based RDT plan optimization model, and a corresponding search-based solution algorithm using heuristic rules discovered in the paper, are also presented. A case study of an RDT plan design for a Rubidium Atomic Frequency Standard demonstrates the effectiveness of the methods in overcoming the difficulties of small samples in reliability demonstration of long-life products.

32 citations
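For contrast with the degradation-based methods above, the traditional failure-based SPRT for reliability demonstration can be sketched in its classical pass/fail (attribute) form. Names and parameters are illustrative, and this is the textbook attribute test, not the paper's degradation-based procedure:

```python
import math

def sprt_reliability(outcomes, p0, p1, alpha, beta):
    """Classical attribute SPRT for reliability demonstration.

    H0: reliability p0 (acceptable) vs H1: reliability p1 < p0 (rejectable).
    outcomes: iterable of booleans, True = unit survived the test.
    Returns ("accept"/"reject"/"undecided", units consumed).
    """
    log_A = math.log((1.0 - beta) / alpha)   # cross above: reject product
    log_B = math.log(beta / (1.0 - alpha))   # cross below: accept product
    pass_inc = math.log(p1 / p0)                   # < 0: a pass favors H0
    fail_inc = math.log((1.0 - p1) / (1.0 - p0))   # > 0: a failure favors H1
    llr = 0.0
    for n, ok in enumerate(outcomes, start=1):
        llr += pass_inc if ok else fail_inc
        if llr >= log_A:
            return "reject", n
        if llr <= log_B:
            return "accept", n
    return "undecided", len(outcomes)
```

The paper's point is that for long-life products such a test needs failures, hence many units or long test times, which the degradation-based variants avoid.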

Journal ArticleDOI
TL;DR: This article makes the more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons, and shows that for such a representation, neural circuits involving cortical integrators and the basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.
Abstract: Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from Gaussian distributions with the same variance across alternatives. In this article, we make the more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, neural circuits involving cortical integrators and the basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.

32 citations
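Switching the evidence model from Gaussian to Poisson changes only the log-likelihood-ratio increment of the SPRT. A minimal two-alternative sketch on spike counts per time bin (illustrative names, not the paper's circuit model):

```python
import math

def poisson_sprt(counts, lam0, lam1, alpha, beta):
    """SPRT on Poisson spike counts: H0 rate lam0 vs H1 rate lam1 (lam1 > lam0).

    counts: spike counts in successive time bins.
    Returns (decision, number of bins consumed).
    """
    log_A = math.log((1.0 - beta) / alpha)   # cross above: accept H1
    log_B = math.log(beta / (1.0 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        # Poisson log-likelihood ratio for one bin:
        # k*log(lam1/lam0) - (lam1 - lam0); factorial terms cancel
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= log_A:
            return "H1", n
        if llr <= log_B:
            return "H0", n
    return "undecided", len(counts)
```

Note the increment's variance scales with its mean, the Poisson mean-variance relationship the article argues is the realistic one for sensory neurons.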


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
82% related
Linear model
19K papers, 1M citations
79% related
Estimation theory
35.3K papers, 1M citations
78% related
Markov chain
51.9K papers, 1.3M citations
77% related
Statistical hypothesis testing
19.5K papers, 1M citations
76% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2023	6
2022	23
2021	29
2020	23
2019	29
2018	32