Topic
Sequential probability ratio test
About: Sequential probability ratio test is a research topic. Over its lifetime, 1248 publications have appeared on this topic, receiving 22355 citations.
Papers published on a yearly basis
Papers
TL;DR: It is theoretically proved that, under a constant power constraint, the scheme using groups of samples with the optimal signaling waveform is the most energy-efficient.
Abstract: Several sampling schemes and their corresponding sequential detection procedures in autoregressive noise are presented in this paper. Two of them use uniform sampling procedures with high and low sampling rates, respectively. The other two employ groups of samples, which are separated by long intergroup delays such that the intergroup correlations are negligible. One of the group-sampling schemes also employs optimal signaling waveforms to further improve its energy-efficiency. In all the schemes, data sampling and transformation are designed in such a way that Wald's sequential probability ratio test (SPRT) can still be implemented. The performances of different schemes, in terms of average termination time (ATT), are derived analytically. When all the schemes employ the same sampling interval and under a constant signal amplitude constraint, their performances are compared through analytical and numerical methods. In addition, under a constant power constraint, their ATTs and energy-efficiency are compared. It is theoretically proved that the scheme using groups of samples with the optimal signaling waveform is the most energy-efficient.
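The abstract above describes transforming correlated samples so that Wald's iid SPRT still applies. A minimal sketch of that idea, assuming AR(1) noise with known coefficient `rho` and a first-order whitening transform (the function name, parameters, and simulation are illustrative, not the paper's actual schemes):

```python
import math
import random

def sprt_gaussian_ar1(xs, rho, sigma, s, alpha=0.05, beta=0.05):
    """Detect a constant signal s in AR(1) noise via Wald's SPRT.

    Whitening y_t = x_t - rho * x_{t-1} turns AR(1) noise into iid
    N(0, sigma^2), so the classical iid SPRT applies to the y_t,
    which have mean 0 under H0 and s * (1 - rho) under H1.
    (The very first increment, with no previous sample, is only an
    approximation; this is a sketch, not the paper's procedure.)
    """
    upper = math.log((1 - beta) / alpha)   # cross above: decide H1
    lower = math.log(beta / (1 - alpha))   # cross below: decide H0
    mu1 = s * (1 - rho)                    # mean of y_t under H1
    llr, prev = 0.0, 0.0
    for n, x in enumerate(xs, start=1):
        y = x - rho * prev
        prev = x
        # Gaussian log-likelihood ratio increment for one whitened sample.
        llr += (mu1 * y - 0.5 * mu1 * mu1) / (sigma * sigma)
        if llr >= upper:
            return "signal present", n
        if llr <= lower:
            return "noise only", n
    return "undecided", n

# Simulate H1: a constant signal buried in AR(1) noise.
random.seed(1)
rho, sigma, s = 0.8, 1.0, 0.5
noise, xs = 0.0, []
for _ in range(20_000):
    noise = rho * noise + random.gauss(0.0, sigma)
    xs.append(s + noise)
result = sprt_gaussian_ar1(xs, rho, sigma, s)
print(result)
```

The test stops as soon as the running log-likelihood ratio crosses either Wald boundary, so the number of samples used (the analogue of the average termination time discussed above) is itself random.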
14 citations
TL;DR: The authors derive a Bartlett-type correction to the Wald statistic for tests of non-linear restrictions, and propose a test based on transformed critical values suggested by the results of Phillips and Park (Econometrica, 1988, 56, 1065-1083).
14 citations
01 Dec 2014
TL;DR: The problem of sequentially testing composite hypotheses is revisited for multiple hypotheses and very general non-iid stochastic models; two sequential tests are studied: the multihypothesis generalized sequential likelihood ratio test and the multihypothesis adaptive sequential likelihood ratio test with one-stage delayed estimators.
Abstract: We revisit the problem of sequentially testing composite hypotheses, considering multiple hypotheses and very general non-iid stochastic models. Two sequential tests are studied: the multihypothesis generalized sequential likelihood ratio test and the multihypothesis adaptive sequential likelihood ratio test with one-stage delayed estimators. While the latter loses information compared to the former, it has an advantage in designing thresholds to guarantee given upper bounds on the probabilities of errors, which is practically impossible for generalized likelihood ratio type tests. It is shown that both tests have asymptotic optimality properties, minimizing the expected sample size, or even more generally higher moments of the stopping time, as the probabilities of errors vanish. Two examples that illustrate the general theory are presented.
14 citations
TL;DR: The type-III error, as defined in this paper, arises when the probability model assumed by a standard hypothesis testing procedure is misspecified; the paper surveys ways a model can be misspecified and how standard tests can be modified to remain valid.
Abstract: This century and the history of modern statistics began with Karl Pearson's [181] (1900) goodness-of-fit test, one of the most important breakthroughs in science. The basic motivation behind this test was to see whether an assumed probability model adequately described the data at hand. Over the first half of the century came the development of some general principles of testing, such as Jerzy Neyman and Egon Pearson's [159] (1928) likelihood ratio test, Neyman's [155] (1937) smooth test, Abraham Wald's test in 1943, and C.R. Rao's score test in 1948. All these tests were developed under the assumption that the underlying model is correctly specified. Trygve Haavelmo [99] (1944) termed this underlying model the a priori admissible hypothesis. Although Ronald Fisher [80] (1922) had identified the "Problem of Specification" as one of the most fundamental problems in statistics much earlier, Haavelmo was probably the first to draw attention to the consequences of misspecification of the a priori admissible hypothesis for standard hypothesis testing procedures. We will call this the type-III error. In this paper, we deal with a number of ways in which an assumed probability model can be misspecified, and discuss how some of the standard tests can be modified to remain valid under various misspecifications.
14 citations
01 Jan 2011
TL;DR: Sequential Analysis builds upon the Neyman-Pearson Theorem as another form of hypothesis testing, in which the number of samples is not fixed in advance.
Abstract: A problem is presented that involves simple hypothesis testing regarding two different proportions. The Neyman-Pearson Theorem defines a rule by which the best critical region is derived for testing a simple hypothesis. Sequential Analysis builds upon this theorem as another form of sampling within hypothesis tests, in which the number of samples is not predetermined before sampling. Abraham Wald's Sequential Probability Ratio Test, along with standard testing, will be used to draw conclusions about the original problem concerning the simple hypothesis test.
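For the two-proportions setting described above, Wald's SPRT reduces to accumulating a log-likelihood ratio over Bernoulli observations and stopping at approximate boundaries set by the desired error rates. A minimal sketch (function name and the simulated stream are illustrative, not from the paper):

```python
import math
import random

def sprt_bernoulli(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on Bernoulli samples.

    Returns ("H0" or "H1", number of samples used), or ("undecided", n)
    if the stream is exhausted before a boundary is crossed.
    """
    # Wald's approximate boundaries on the log-likelihood ratio.
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for one Bernoulli observation.
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

random.seed(0)
stream = (random.random() < 0.7 for _ in range(10_000))  # true p = 0.7
decision, n = sprt_bernoulli(stream, p0=0.5, p1=0.7)
print(decision, n)
```

Unlike a fixed-sample Neyman-Pearson test, the sample count here is determined by the data: the test typically stops far earlier than a fixed-size design with the same nominal error probabilities, which is the point of the sequential approach.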
14 citations