
Showing papers on "Sequential probability ratio test published in 2015"


Journal ArticleDOI
18 Feb 2015-Neuron
TL;DR: This work trained rhesus monkeys to make decisions based on a sequence of evanescent visual cues assigned different log likelihood ratios (logLR), hence different reliability, and found that monkeys' choices and reaction times were explained by LIP activity in the context of accumulation of logLR to a threshold.

143 citations


Journal ArticleDOI
TL;DR: A first-passage time fluctuation theorem is derived which implies that the decision time distributions for correct and wrong decisions are equal.
Abstract: We show that the steady-state entropy production rate of a stochastic process is inversely proportional to the minimal time needed to decide on the direction of the arrow of time. Here we apply Wald's sequential probability ratio test to optimally decide on the direction of time's arrow in stationary Markov processes. Furthermore, the steady-state entropy production rate can be estimated using mean first-passage times of suitable physical variables. We derive a first-passage time fluctuation theorem which implies that the decision time distributions for correct and wrong decisions are equal. Our results are illustrated by numerical simulations of two simple examples of nonequilibrium processes.
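The decision rule here is the classical Wald SPRT applied to the accumulated per-jump entropy production, which equals the log likelihood ratio between a trajectory and its time reversal. Below is a minimal sketch under stated assumptions (a hypothetical three-state driven ring chain, illustrative error rates, stationary boundary terms neglected); it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state ring chain driven out of equilibrium:
# clockwise jumps (prob 0.6) beat counter-clockwise ones (prob 0.3).
P = np.array([[0.1, 0.6, 0.3],
              [0.3, 0.1, 0.6],
              [0.6, 0.3, 0.1]])

def sprt_arrow_of_time(P, alpha=0.01, beta=0.01, x0=0):
    """Wald SPRT deciding whether a trajectory runs forward (H1) or
    time-reversed (H0). The per-jump LLR increment
    log P[a, b] - log P[b, a] is the entropy produced per jump
    (stationary boundary terms are neglected)."""
    up, lo = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr, x = 0.0, x0
    for k in range(1, 100_000):
        x_next = rng.choice(3, p=P[x])
        llr += np.log(P[x, x_next]) - np.log(P[x_next, x])
        if llr >= up:
            return "forward", k
        if llr <= lo:
            return "time-reversed", k
        x = x_next
    return "undecided", k

print(sprt_arrow_of_time(P))   # typically ("forward", a few dozen jumps)
```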

91 citations


Journal ArticleDOI
TL;DR: This work develops index-type algorithms both for the case of known observation distributions and for the case in which the observation distributions have unknown parameters, and shows that the proposed algorithms are asymptotically optimal in terms of minimizing the total expected cost as the error constraints approach zero.
Abstract: Sequential detection of independent anomalous processes among $K$ processes is considered. At each time, only $M$ $(1\leq M\leq K)$ processes can be observed, and the observations from each chosen process follow two different distributions, depending on whether the process is normal or abnormal. Each anomalous process incurs a cost per unit time until its anomaly is identified and fixed. Switching across processes and state declarations are allowed at all times, while decisions are based on all past observations and actions. The objective is to find a sequential search strategy that minimizes the total expected cost incurred by all the processes during the detection process under reliability constraints. We develop index-type algorithms both for the case of known observation distributions and for the case in which the observation distributions have unknown parameters. We show that the proposed algorithms are asymptotically optimal in terms of minimizing the total expected cost as the error constraints approach zero. Simulation results demonstrate strong performance in the finite regime.

47 citations


Journal ArticleDOI
TL;DR: Two improved network-SPRT methods are presented: using the threshold offset as a weighting factor for the binary decisions from individual detectors in a weighted majority voting fusion rule, and applying a composite SPRT derived using measurements from all counters.
Abstract: In support of national defense, the Domestic Nuclear Detection Office's (DNDO) Intelligent Radiation Sensor Systems (IRSS) program supported the development of networks of radiation counters for detecting, localizing and identifying low-level, hazardous radiation sources. Industry teams developed the first generation of such networks with tens of counters, and demonstrated several of their capabilities in indoor and outdoor characterization tests. Subsequently, these test measurements have been used in algorithm replays using various sub-networks of counters. Test measurements combined with algorithm outputs are used to extract Key Measurements and Benchmark (KMB) datasets. We present two selective analyses of these datasets: (a) a notional border monitoring scenario that highlights the benefits of a network of counters compared to individual detectors, and (b) new insights into the Sequential Probability Ratio Test (SPRT) detection method, which lead to its adaptations for improved detection. Using KMB datasets from an outdoor test, we construct a notional border monitoring scenario, wherein twelve 2×2 NaI detectors are deployed on the periphery of a 21×21 meter square region. A Cs-137 (175 uCi) source is moved across this region, starting several meters outside and finally moving away. The measurements from individual counters and the network were processed using replays of a particle filter algorithm developed under the IRSS program. The algorithm outputs from KMB datasets clearly illustrate the benefits of combining measurements from all networked counters: the source was detected before it entered the region, during its trajectory inside, and until it moved several meters away. When individual counters are used for detection, the source was detected for much shorter durations, and sometimes was missed in the interior region. The application of SPRT for detecting radiation sources requires choosing the detection threshold, which in turn requires a source strength estimate, typically specified as a multiplier of the background radiation level. A judicious selection of this source multiplier is essential to achieve optimal detection probability at a specified false alarm rate. Typically, this threshold is chosen from the Receiver Operating Characteristic (ROC) by varying the source multiplier estimate. The ROC is expected to have a monotonically increasing profile between the detection probability and false alarm rate. We derived ROCs for multiple indoor tests using KMB datasets, which revealed an unexpected loop shape: as the multiplier increases, detection probability and false alarm rate both increase until a limit, and then both contract. Consequently, two detection probabilities correspond to the same false alarm rate, and the higher is achieved at a lower multiplier, which is the desired operating point. Using Chebyshev's inequality we analytically confirm this shape. Then, we present two improved network-SPRT methods by (a) using the threshold offset as a weighting factor for the binary decisions from individual detectors in a weighted majority voting fusion rule, and (b) applying a composite SPRT derived using measurements from all counters.
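For the counting statistics involved, the SPRT reduces to a Poisson likelihood ratio test between background and background-plus-source rates. A minimal sketch, assuming per-interval Poisson counts, a known background rate b, and a fixed source multiplier m (all values illustrative; the loop-shaped ROC discussed above arises when m is swept):

```python
import numpy as np

def radiation_sprt(counts, b, m, dt=1.0, alpha=1e-3, beta=1e-3):
    """Wald SPRT on per-interval Poisson counts.
    H0: rate = b (background only); H1: rate = (1 + m) * b, with m the
    assumed source multiplier. Returns (+1 source, -1 background,
    0 undecided) and the number of intervals used."""
    lam0, lam1 = b * dt, (1 + m) * b * dt
    up = np.log((1 - beta) / alpha)
    lo = np.log(beta / (1 - alpha))
    llr = 0.0
    for k, n in enumerate(counts, start=1):
        llr += n * np.log(lam1 / lam0) - (lam1 - lam0)  # Poisson LLR step
        if llr >= up:
            return +1, k
        if llr <= lo:
            return -1, k
    return 0, len(counts)

rng = np.random.default_rng(1)
bkg = rng.poisson(20.0, size=600)          # background-only count stream
src = rng.poisson(20.0 * 1.5, size=600)    # source present, m = 0.5
print(radiation_sprt(bkg, b=20.0, m=0.5))  # expect (-1, small k)
print(radiation_sprt(src, b=20.0, m=0.5))  # expect (+1, small k)
```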

46 citations


Journal ArticleDOI
TL;DR: This model combined a model of noisy physical simulation with a decision making strategy called the sequential probability ratio test, or SPRT, and predicted that people should use more samples when it is harder to make an accurate prediction due to higher simulation uncertainty.

35 citations


Journal ArticleDOI
TL;DR: The approach postulated in this framework is shown to achieve early and robust damage detection, identification (classification), and quantification based on predetermined sampling plans, which are both analytically and experimentally compared and assessed.
Abstract: The goal of this study is the introduction and experimental assessment of a sequential probability ratio test framework for vibration-based structural health monitoring. This framework is based on ...

30 citations


Journal ArticleDOI
TL;DR: In this paper, a robust fault detection and diagnostic scheme for a multi-energy domain system that integrates a model-based strategy for system fault modeling and a data-driven approach for online anomaly monitoring is presented.

22 citations


Proceedings ArticleDOI
11 Jul 2015
TL;DR: This paper proposes SPRINT-Race, a multi-objective racing algorithm based on the Sequential Probability Ratio Test with an Indifference Zone that is applied to identifying the Pareto optimal parameter settings of Ant Colony Optimization algorithms in the context of solving Traveling Salesman Problems.
Abstract: Multi-objective model selection, which is an important aspect of Machine Learning, refers to the problem of identifying a set of Pareto optimal models from a given ensemble of models. This paper proposes SPRINT-Race, a multi-objective racing algorithm based on the Sequential Probability Ratio Test with an Indifference Zone. In SPRINT-Race, a non-parametric ternary-decision sequential analogue of the sign test is adopted to identify pairwise dominance and non-dominance relationships. In addition, a Bonferroni approach is employed to control the overall probability of any erroneous decisions. In the fixed confidence setting, SPRINT-Race tries to minimize the computational effort needed to achieve a predefined confidence about the quality of the returned models. The efficiency of SPRINT-Race is analyzed on artificially-constructed multi-objective model selection problems with known ground-truth. Moreover, SPRINT-Race is applied to identifying the Pareto optimal parameter settings of Ant Colony Optimization algorithms in the context of solving Traveling Salesman Problems. The experimental results confirm the advantages of SPRINT-Race for multi-objective model selection.

20 citations


Journal ArticleDOI
TL;DR: This letter provides a generalization of the well-known Wald test that is asymptotically optimal and reduces to the Wald detector in some special cases; simulation results show the superiority of the proposed GWT over its counterparts, namely the Wald test and the GLR detector.
Abstract: This letter provides a generalization of the well-known Wald test. The proposed generalized Wald test (GWT) is a Separating Function Estimation Test (SFET), a type of detector recently introduced for a wide class of composite problems. The test statistic of an SFET is an estimate of a real-valued Separating Function (SF). It has already been proved that a Minimum Variance Unbiased Estimator of any SF leads to the optimal Uniformly Most Powerful unbiased detector. In many practical cases such an optimal detector does not exist; hence, suboptimal ones are used instead. Selecting an SF with a guaranteed performance is still an open problem, which is investigated in this letter. First, we derive a lower bound for the detection probability of the SFET in terms of the corresponding SF and the Fisher Information Matrix. Then we optimize the proposed bound with respect to the SF. The solution of the optimization problem leads to a generalization of the Wald test which is asymptotically optimal and reduces to the Wald detector in some special cases. Simulation results show the superiority of the proposed GWT over its counterparts, namely the Wald test and the GLR detector, in some examples.

16 citations


Journal ArticleDOI
TL;DR: The sequential probability ratio test is shown to be the optimal sequential detection rule, and a sensor selection probability vector is derived such that the expected total observation cost is minimized subject to constraints on reliability and sensor usage.
Abstract: We study the problem of binary sequential hypothesis testing using multiple sensors with associated observation costs. An off-line randomized sensor selection strategy, in which a sensor is chosen at every time step with a given probability, is considered. The objective of this work is to find a sequential detection rule and a sensor selection probability vector such that the expected total observation cost is minimized subject to constraints on reliability and sensor usage. First, the sequential probability ratio test is shown to be the optimal sequential detection rule in this framework as well. Efficient algorithms for obtaining the optimal sensor selection probability vector are then derived. In particular, a special class of problems in which the algorithm has complexity that is linear in the number of sensors is identified. An upper bound for the average sensor usage to estimate the error incurred due to Wald’s approximations is also presented. This bound can be used to set a safety margin for guaranteed satisfaction of the constraints on the sensor usage.
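A minimal sketch of the off-line randomized selection strategy, assuming two hypothetical Gaussian shift-in-mean sensors and an arbitrary (not optimized) selection probability vector q; the paper's contribution is precisely how to choose q optimally:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-sensor setup: Gaussian shift-in-mean observations.
# Sensor 0 is more informative (and, in the paper's setting, costlier).
mu1 = np.array([1.0, 0.4])      # per-sensor mean under H1 (0 under H0)
sigma = 1.0

def randomized_selection_sprt(q, truth, alpha=0.01, beta=0.01):
    """At each step a sensor is drawn with probability q[i] (the
    off-line randomized strategy); the SPRT accumulates that sensor's
    LLR. q here is arbitrary, not the paper's optimized vector."""
    up = np.log((1 - beta) / alpha)
    lo = np.log(beta / (1 - alpha))
    llr, used = 0.0, np.zeros(len(q), dtype=int)
    while lo < llr < up:
        i = rng.choice(len(q), p=q)
        used[i] += 1
        x = rng.normal(mu1[i] * truth, sigma)
        llr += (mu1[i] / sigma**2) * (x - mu1[i] / 2)  # Gaussian LLR step
        if used.sum() > 100_000:
            break
    return ("H1" if llr >= up else "H0"), used

decision, used = randomized_selection_sprt(q=[0.3, 0.7], truth=1)
print(decision, "observations per sensor:", used)
```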

14 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a rational sequential probability ratio test (SPRT) chart to monitor both the process mean and variance, which from an overall viewpoint is more than 63% more effective than the cumulative sum chart.
Abstract: The sequential probability ratio test (SPRT) chart is a very effective tool for monitoring manufacturing processes. This paper proposes a rational SPRT chart to monitor both the process mean and variance. This SPRT chart determines the sampling interval d based on the rational subgroup concept, according to the process conditions and administrative considerations. Since rational subgrouping is widely adopted in the design and implementation of control charts, the study of the rational SPRT has practical significance. The rational SPRT chart is designed optimally in order to minimize the average extra quadratic loss index for the best overall performance. A systematic performance study has also been conducted. From an overall viewpoint, the rational SPRT chart is more than 63% more effective than the cumulative sum chart. Furthermore, this article provides a design table, which contains the optimal values of the parameters of the rational SPRT charts for different specifications. This will greatly f...

Journal ArticleDOI
Amadou Ba, Sean A. McKenna
TL;DR: In this paper, an approach combining affine projection algorithms and an autoregressive (AR) model is proposed to predict water quality time series and apply online change-point detection methods to the estimated residuals to determine the presence, or not, of contamination events.
Abstract: We develop an approach for water quality time series monitoring and contamination event detection. The approach combines affine projection algorithms and an autoregressive (AR) model to predict water quality time series. Then, we apply online change-point detection methods to the estimated residuals to determine the presence, or not, of contamination events. Particularly, we compare the performance of four change-point detection methods, namely, sequential probability ratio test (SPRT), cumulative sum (CUSUM), binomial event discriminator (BED), and online Bayesian change-point detection (OBCPD), by using residuals obtained from four water quality time series, chlorine, conductivity, total organic carbon, and turbidity. Our fundamental criterion for the performance evaluation of the four change-point detection methods is given by the receiver operating characteristic (ROC) curve which is characterized by the true positive rate as a function of the false positive rate. We highlight with detailed experiments that OBCPD provides the best performance for large contamination events, and we also provide insight on the choice of change-point detection algorithms to consider for designing efficient contamination detection schemes.
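As a sketch of the residual-based pipeline (a simulated AR(1) "water quality" series, an injected step change, and a CUSUM detector; the model order and tuning constants k and h are illustrative, and the paper compares four such detectors):

```python
import numpy as np

def fit_ar1(x):
    """Least-squares AR(1) coefficient from a clean training window."""
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

def cusum(z, k=0.5, h=5.0):
    """One-sided CUSUM on standardized prediction residuals; returns
    the first alarm index (None if no alarm)."""
    s = 0.0
    for t, zt in enumerate(z):
        s = max(0.0, s + zt - k)
        if s > h:
            return t
    return None

rng = np.random.default_rng(3)
n = 1000
x = np.zeros(n)
for t in range(1, n):                    # AR(1) water-quality surrogate
    x[t] = 0.8 * x[t - 1] + rng.normal(0, 0.3)
x[600:] += 1.5                           # injected "contamination" step

phi = fit_ar1(x[:500])                   # train on the clean prefix
resid = x[1:] - phi * x[:-1]             # one-step prediction residuals
mu, sd = resid[:499].mean(), resid[:499].std()
print("first alarm near t=600:", cusum((resid - mu) / sd))
```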

Proceedings ArticleDOI
01 Sep 2015
TL;DR: The existence of pure Nash equilibria is proved, and sufficient conditions are given for the existence of Stackelberg equilibria with the defender as leader in the special case that the attacker does not discount future payoffs.
Abstract: This paper examines a two-player, non-zero-sum, sequential detection game motivated by problems arising in the cyber-security domain. A defender agent seeks to sequentially detect the presence of an attacker agent via the drift of a stochastic process. The attacker strategically chooses the drift of the observed stochastic process, while his payoff increases in both the drift of the stochastic process and the expected time spent undetected by the defender. It is the defender's objective to minimize a payoff function which is a weighted sum of the expected observation time and both type I and type II detection errors. As such, a best response sequential decision rule for the defender is a continuous-time version of Wald's Sequential Probability Ratio Test. We prove the existence of pure Nash equilibria and give sufficient conditions for the existence of Stackelberg equilibria with the defender as leader in the special case that the attacker does not discount future payoffs. The equilibria are explored through numerical examples.
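The defender's best response is a continuous-time SPRT on the drift; discretizing time gives a simple sketch (parameters illustrative, with none of the paper's game-theoretic tuning):

```python
import numpy as np

rng = np.random.default_rng(4)

def drift_sprt(mu1, sigma, dt, alpha, beta, truth_mu):
    """Discretized continuous-time Wald SPRT on the drift of a Wiener
    process: H0 drift 0 vs H1 drift mu1 (the attacker's choice).
    LLR increment for a step dx over dt: (mu1/sigma^2)*(dx - mu1*dt/2)."""
    up = np.log((1 - beta) / alpha)
    lo = np.log(beta / (1 - alpha))
    llr, t = 0.0, 0.0
    while lo < llr < up:
        dx = truth_mu * dt + sigma * np.sqrt(dt) * rng.normal()
        llr += (mu1 / sigma**2) * (dx - mu1 * dt / 2)
        t += dt
    return ("attacker present" if llr >= up else "no attacker"), round(t, 2)

print(drift_sprt(mu1=0.5, sigma=1.0, dt=0.01, alpha=0.01, beta=0.01,
                 truth_mu=0.5))
print(drift_sprt(mu1=0.5, sigma=1.0, dt=0.01, alpha=0.01, beta=0.01,
                 truth_mu=0.0))
```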

Journal ArticleDOI
TL;DR: A new approach is proposed in which a time limit is defined for the test and examinees’ response times are considered in both item selection and test termination, which showed a substantial reduction in average testing time while slightly improving classification accuracy.
Abstract: A well-known approach in computerized mastery testing is to combine the Sequential Probability Ratio Test (SPRT) stopping rule with item selection to maximize Fisher information at the mastery threshold. This article proposes a new approach in which a time limit is defined for the test and examinees’ response times are considered in both item selection and test termination. Item selection is performed by maximizing Fisher information per time unit, rather than Fisher information itself. The test is terminated once the SPRT makes a classification decision, the time limit is exceeded, or there is no remaining item that has a high enough probability of being answered before the time limit. In a simulation study, the new procedure showed a substantial reduction in average testing time while slightly improving classification accuracy compared with the original method. In addition, the new procedure reduced the percentage of examinees who exceeded the time limit.
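A minimal sketch of the time-aware procedure under simplified assumptions: 2PL items, simulated item parameters and fixed expected response times, Fisher information evaluated at the cutscore, and SPRT termination. The names and spec values are illustrative, not the article's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def p2pl(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mastery_test(theta_true, a, b, tau, cut=0.0, delta=0.3,
                 alpha=0.05, beta=0.05, time_limit=900.0):
    up = np.log((1 - beta) / alpha)
    lo = np.log(beta / (1 - alpha))
    p1, p0 = p2pl(cut + delta, a, b), p2pl(cut - delta, a, b)
    # Rank items by Fisher information at the cutscore per expected
    # second, a^2*P*Q / tau (a static simplification of the selection rule).
    pq = p2pl(cut, a, b) * (1 - p2pl(cut, a, b))
    order = np.argsort(-(a**2 * pq / tau))
    llr, spent = 0.0, 0.0
    for i in order:
        if spent + tau[i] > time_limit:   # item unlikely to fit in time
            continue
        u = rng.random() < p2pl(theta_true, a[i], b[i])  # simulated answer
        llr += np.log(p1[i] / p0[i]) if u else np.log((1 - p1[i]) / (1 - p0[i]))
        spent += tau[i]
        if llr >= up:
            return "master", spent
        if llr <= lo:
            return "non-master", spent
    return ("master" if llr > 0 else "non-master"), spent  # forced decision

n = 300
a = rng.uniform(0.8, 2.0, n)         # discriminations
b = rng.normal(0.0, 1.0, n)          # difficulties
tau = rng.lognormal(2.5, 0.4, n)     # expected seconds per item
print(mastery_test(theta_true=0.5, a=a, b=b, tau=tau))
```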

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This work proposes a sequential canonical correlation technique (S-CCT) method to estimate the number of active PUs quickly and accurately, and shows that the method can achieve better performance with fewer samples than CCT.
Abstract: In cognitive radio networks, a priori information on the number of primary users (PUs) is helpful to estimate more specific parameters of PUs' signal, such as the carrier frequency, direction of arrival, and location. We propose a sequential canonical correlation technique (S-CCT) method to estimate the number of active PUs quickly and accurately. In the proposed method, classical canonical correlation technique (CCT) is improved using multi-hypothesis sequential probability ratio test. Simulation results show that our proposed S-CCT method can achieve better performance with fewer samples than CCT.

Journal ArticleDOI
TL;DR: In this paper, a decentralized sequential hypothesis testing (DSPT) algorithm is proposed to reduce the number of samples needed to make a reliable detection in cognitive radio systems, where the cognitive radios sequentially collect observations, make local decisions, and send them to the fusion center for further processing.

Journal ArticleDOI
TL;DR: The simulation results showed that the SCGLR can yield increased efficiency without sacrificing accuracy, relative to the SPRT, SCSPRT, and GLR in a wide variety of CCT designs.
Abstract: Computerized classification tests (CCTs) are used to classify examinees into categories in the context of professional certification testing. The term “variable-length” refers to CCTs that terminate (i.e., cease administering items to the examinee) when a classification can be made with a prespecified level of certainty. The sequential probability ratio test (SPRT) is a common criterion for terminating variable-length CCTs, but recent research has proposed more efficient methods. Specifically, the stochastically curtailed SPRT (SCSPRT) and the generalized likelihood ratio criterion (GLR) have been shown to classify examinees with accuracy similar to the SPRT while using fewer items. This article shows that the GLR criterion itself may be stochastically curtailed, resulting in a new termination criterion, the stochastically curtailed GLR (SCGLR). All four criteria—the SPRT, SCSPRT, GLR, and the new SCGLR—were compared using a simulation study. In this study, we examined the criteria in testing conditions that varied several CCT design features, including item bank characteristics, pass/fail threshold, and examinee ability distribution. In each condition, the termination criteria were evaluated according to their accuracy (proportion of examinees classified correctly), efficiency (test length), and loss (a single statistic combining both accuracy and efficiency). The simulation results showed that the SCGLR can yield increased efficiency without sacrificing accuracy, relative to the SPRT, SCSPRT, and GLR, in a wide variety of CCT designs.

Journal ArticleDOI
01 Mar 2015
TL;DR: A testability demonstration planning method based on the sequential probability ratio test is proposed that can decrease the sample size while keeping almost the same operating characteristic as the classical method; the results show that the fault detection rate passes the test with credible performance and that the actual sample size is markedly decreased compared with the classical method.
Abstract: The flight control system plays an important role in adjusting the attitude of manned or auto-pilot aircraft. To reduce the fault diagnosis time and accelerate maintenance actions, many flight control systems have adopted design for testability. Testability demonstration for the flight control system is needed to check testability indexes such as the fault detection rate and fault isolation rate. Currently, the standards and statistical methods for testability demonstration planning suffer from problems such as large sample sizes and long test periods, and they are not optimal for flight control systems, which are of complex structure and high cost. A testability demonstration planning method based on the sequential probability ratio test method is proposed, as it can decrease the sample size with almost the same operating characteristic as the classical method. Firstly, the decision factor and rules of the sequential probability ratio test method and truncated decision rules are introduced. Secondly, th...
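As a sketch of the core mechanism, a truncated binomial SPRT on pass/fail fault-injection trials; the spec values p0 and p1, error rates, and truncation point are illustrative, not taken from the paper or any standard:

```python
import numpy as np

def testability_sprt(outcomes, p1=0.95, p0=0.85, alpha=0.1, beta=0.1,
                     n_max=100):
    """Truncated binomial SPRT for a fault detection rate (FDR)
    demonstration: each trial injects a fault and records whether it
    was detected. Accept if FDR looks like p1, reject if like p0."""
    up = np.log((1 - beta) / alpha)
    lo = np.log(beta / (1 - alpha))
    llr = 0.0
    for k, detected in enumerate(outcomes[:n_max], start=1):
        llr += (np.log(p1 / p0) if detected
                else np.log((1 - p1) / (1 - p0)))
        if llr >= up:
            return "accept", k
        if llr <= lo:
            return "reject", k
    # truncated decision rule: fall back to the sign of the LLR
    return ("accept" if llr > 0 else "reject"), min(len(outcomes), n_max)

rng = np.random.default_rng(6)
trials = rng.random(200) < 0.96      # simulated fault-injection results
print(testability_sprt(trials))
```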

Journal ArticleDOI
01 Feb 2015
TL;DR: In this paper, a scheme that integrates bond graph modeling for fault signatures establishment, and a multivariate state estimation technique-based empirical estimation for residual generation followed by a sequential probability ratio test-based residual evaluation for monitoring alarm is presented.
Abstract: Fault detection and isolation are critical for safety-related complex systems like aircraft, trains, automobiles, power plants and chemical plants. In order to realize robust, real-time monitoring and diagnosis for these types of multi-energy domain systems, this paper presents a novel scheme that integrates bond graph modeling for fault signature establishment, and a multivariate state estimation technique-based empirical estimation for residual generation followed by a Sequential Probability Ratio Test-based residual evaluation for monitoring alarms. Once a fault is detected and alerted, a synthesized non-null coherence vector is created, and then matched with the pre-designed fault signature matrix to isolate possible faults. To verify the effectiveness of the proposed methodology, a simulation of the pneumatic equalizer control unit of a locomotive electronically controlled pneumatic brake is conducted. The experimental results show that satisfactory performance of fault detection and isolation can be...

Proceedings ArticleDOI
07 Jul 2015
TL;DR: The aim of the present study is to calculate a single condition monitoring parameter from multiple features; the results show that the method can clearly pick up the signs of early bearing damage.
Abstract: This paper presents the application of the multivariate state estimation technique (MSET) and the sequential probability ratio test (SPRT) for early damage detection of low-speed slew bearings. This paper also investigates appropriate and reliable features for slew bearing condition monitoring. It is found that the largest Lyapunov exponent (LLE), approximate entropy, margin factor (MF) and impulse factor (IF) are able to monitor the slew bearing condition. The aim of the present study is to calculate a single condition monitoring parameter from multiple features. Combined MSET and SPRT were used to analyse the reliable features recorded in a previous work. The result shows that the method can clearly pick up the signs of early bearing damage.

Proceedings ArticleDOI
24 Jun 2015
TL;DR: A single-hop, random-access, wireless sensor network (WSN) performing sequential distributed detection under a bandwidth constraint is considered and the asymptotic relative efficiency of the collision-aware SPRT relative to another SPRT is derived.
Abstract: We consider a single-hop, random-access, wireless sensor network (WSN) performing sequential distributed detection under a bandwidth constraint. At each time slot, the sensor nodes make their decisions on whether the observed event is happening. A sensor censoring strategy is applied such that only the local decisions equal to one (i.e., the event happens) are sent to the fusion center (FC). Because only one transmission channel between the FC and the sensor nodes is assumed, a packet collision happens if two or more local decisions are sent in the same time slot. We design a sequential probability ratio test (SPRT) at the FC that is aware of the collisions. The performance measures of the collision-aware SPRT are analyzed based on a Markov chain. In addition, we derive the asymptotic relative efficiency of the collision-aware SPRT relative to another SPRT. Numerical results show some interesting characteristics of the collision-aware SPRT; for example, it prefers scenarios in which many local decisions are likely to be sent per time slot.

Proceedings ArticleDOI
TL;DR: In this paper, the authors study sequential collusion-resistant fingerprinting, where the fingerprinting code is generated in advance but accusations may be made between rounds, and show that in this setting both the dynamic Tardos scheme and schemes building upon Wald's sequential probability ratio test (SPRT) are asymptotically optimal.
Abstract: We study sequential collusion-resistant fingerprinting, where the fingerprinting code is generated in advance but accusations may be made between rounds, and show that in this setting both the dynamic Tardos scheme and schemes building upon Wald's sequential probability ratio test (SPRT) are asymptotically optimal. We further compare these two approaches to sequential fingerprinting, highlighting differences between the two schemes. Based on these differences, we argue that Wald's scheme should in general be preferred over the dynamic Tardos scheme, even though both schemes have their merits. As a side result, we derive an optimal sequential group testing method for the classical model, which can easily be generalized to different group testing models.

Proceedings ArticleDOI
01 Jul 2015
TL;DR: The theoretical validity of 2-SPRT is proved for the problem of testing hypotheses with multivariate normal densities, and a method of forced independence and identical distribution is presented to optimally map the non-i.i.d. likelihood ratio sequence to an i.i.d. one.
Abstract: The double sequential probability ratio test (2-SPRT), an extension of the SPRT that copes with its no-upper-bound problem, is extended to the multiple-model hypothesis testing (MMHT) approach, called 2-MMSPRT, for detecting unknown events that may have multiple prior distributions. Not only does it address the mis-specification problem of the SPRT-based MMHT method (MMSPRT), but it can also be expected to provide the most efficient detection in the sense of minimizing the maximum expected sample size subject to error probability constraints. Specifically, we prove the theoretical validity of 2-SPRT for the problem of testing hypotheses with multivariate normal densities. Moreover, we present a method of forced independence and identical distribution (i.i.d.) to optimally map the non-i.i.d. likelihood ratio sequence to an i.i.d. one, by which we solve the problem of SPRT and 2-SPRT for dynamic systems with a non-identical distribution. Finally, 2-MMSPRT's asymptotic efficiency is also verified. The performance of 2-MMSPRT is evaluated for model-set selection problems in several scenarios. Simulation results demonstrate the asymptotic effectiveness of the proposed 2-MMSPRT compared with the MMSPRT.

Journal ArticleDOI
TL;DR: This work addresses the problem of confidence interval estimation following a sequential probability ratio test (SPRT) for a normal distribution having mean and variance unknown but equal and proposes a methodology based on random central limit theorem that is remarkably easy to implement with bias-corrected estimators.
Abstract: Confidence interval estimation following a sequential probability ratio test (SPRT) is an important and difficult problem with applications in clinical trials. Difficulties arise because following termination of SPRT, a customary estimator of an unknown parameter of interest obtained from the randomly stopped data is often biased. As a result, coverage probability of a naive confidence interval based on the randomly stopped version of the customary estimator often falls below the target confidence level. We address this problem for a normal distribution having mean and variance unknown but equal and propose a methodology based on random central limit theorem that is remarkably easy to implement with bias-corrected estimators. We have also explored limited bootstrapped versions of our parametric resolutions. With the help of extensive sets of simulations, we have concluded that our data-driven bias-corrected parametric confidence intervals with a slight variance inflation perform remarkably well to...
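The bias problem is easy to reproduce by simulation. A minimal sketch, simplified to a known-variance Gaussian SPRT (the paper treats the harder case where mean and variance are unknown but equal): the naive 95% interval formed from the randomly stopped sample typically covers the true mean less often than intended.

```python
import numpy as np

rng = np.random.default_rng(7)

def sprt_then_ci(mu_true, mu0=0.0, mu1=1.0, sigma=1.0,
                 alpha=0.05, beta=0.05, n_max=500):
    """Run a Gaussian mean SPRT, then check whether the naive 95% CI
    built from the randomly stopped sample covers the true mean."""
    up, lo = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr, xs = 0.0, []
    while lo < llr < up and len(xs) < n_max:
        x = rng.normal(mu_true, sigma)
        xs.append(x)
        llr += ((mu1 - mu0) / sigma**2) * (x - (mu0 + mu1) / 2)
    xs = np.array(xs)
    half = 1.96 * sigma / np.sqrt(len(xs))   # naive fixed-sample CI
    return abs(xs.mean() - mu_true) <= half

cover = np.mean([sprt_then_ci(mu_true=0.5) for _ in range(5000)])
print(f"naive coverage after SPRT: {cover:.3f} (target 0.95)")
```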

Journal ArticleDOI
TL;DR: A novel Generalized Likelihood Ratio Test (GLRT) algorithm is proposed, in which the Maximum Likelihood Estimates (MLEs) of the unknown occurring interval are obtained adaptively through a Dynamic Programming (DP) method, without the secondary data.

Journal ArticleDOI
TL;DR: This article pointed out that statistical tests of psychological hypotheses against a null hypothesis are loaded in favor of eventual success at rejecting the null hypothesis, and suggested that psychologists should ask not "is there a difference" but rather "is the difference, if any, such that it would be of theoretical or practical importance" or, perhaps, "how much difference is there".
Abstract: Professor Meehl [2] has pointed out a very significant problem in the methodology of psychological research, indicating that statistical tests of psychological hypotheses against a null hypothesis are loaded in favor of eventual success at rejecting the null hypothesis. In my opinion this is not, however, a contrast between physics and psychology, but rather between the method of parameter estimation and that of the null hypothesis in the tradition of Fisher. A physicist could use the null hypothesis method as well as the psychologist. The fact that he doesn't is probably related to the more advanced state of his measurement techniques and theoretical constructs. The suggestion that nearly all psychological variables are correlated is an empirical question. Much evidence does seem to support this statement, but it is absurd to expect that all of these relations would be of substantial significance. The hypothesis which the psychologist wishes to support (the alternative to the null hypothesis), that a difference exists between the groups, may well be of a minor or incidental nature. It would seem that psychologists should ask not "is there a difference," but rather "is the difference, if any, such that it would be of theoretical or practical importance," or, perhaps, "how much difference is there." If the I.Q. of boys were found to be 0.6 I.Q. points less than that of girls of the same age, one may reasonably doubt that this would be of any theoretical or practical significance. This applies to a statistical argument as well. On the other hand, if the difference were 6.0 points, there would likely be theoretical and practical significance to the difference. Perhaps at least a part of the appropriate solution is to report the confidence limits for the estimates of parameters of the populations. The confidence interval is an estimated range of values with a given (high) probability of containing the true population parameter. Methods of calculating confidence intervals are discussed in Hays [1]. Intuitively the confidence interval defines a set of tenable values for the parameter. I suggest the following form: "The confidence interval (.95) for the mean of group A is 5.74 to 8.91 and that for group B is 7.50 to 9.34. The confidence interval (.95) for the difference of the population means is 1.2 to -0.4." The rejection of the null hypothesis is a crude test of a theory: it provides only a small amount of information. In many ways the reporting of a confidence interval in a data report is much more informative than the reporting of a hypothesis test or a significance level. It is likely that various amounts of difference
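For the reporting style advocated here, a minimal sketch with simulated group data (a t-based interval for each group mean and a Welch interval for the difference; all numbers are made up for illustration):

```python
import numpy as np
from scipy import stats

def mean_ci(x, conf=0.95):
    """t-based confidence interval for a group mean."""
    m, se = x.mean(), stats.sem(x)
    h = se * stats.t.ppf((1 + conf) / 2, len(x) - 1)
    return m - h, m + h

def diff_ci(x, y, conf=0.95):
    """Welch confidence interval for the difference of two group means."""
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    se = np.sqrt(vx + vy)
    dof = (vx + vy)**2 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))
    h = se * stats.t.ppf((1 + conf) / 2, dof)
    d = x.mean() - y.mean()
    return d - h, d + h

rng = np.random.default_rng(11)
a, b = rng.normal(7.3, 2.0, 40), rng.normal(8.4, 2.0, 40)
print("group A:", mean_ci(a), "group B:", mean_ci(b))
print("difference of means:", diff_ci(a, b))
```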

Journal ArticleDOI
TL;DR: The proposed S-BSD scheme is seen to significantly reduce the average number of symbols/blocks required to achieve the desired detection performance in comparison to the fixed block size BSD (F-BSD).
Abstract: This letter presents the optimal Bartlett detector based sequential probability ratio test (SPRT) for spectrum sensing in multi-antenna array cognitive radio networks. The optimal Wald test based sequential Bartlett spectral detector (S-BSD) is derived for a single/multiple primary user scenario, considering a fading wireless channel. Further, this is also extended to a scenario with multiple-input multiple-output (MIMO) wireless systems. Closed form expressions are derived for the average number of blocks required for the sequential detector in terms of the desired probability of false alarm and mis-detection. The S-BSD framework is also subsequently extended to an AWGN channel scenario. Simulation results are presented to illustrate the performance of the proposed detection schemes and verify the derived analytical results. The proposed S-BSD scheme is seen to significantly reduce the average number of symbols/blocks required to achieve the desired detection performance in comparison to the fixed block size BSD (F-BSD).

Proceedings ArticleDOI
19 Oct 2015
TL;DR: This work builds a hierarchical framework of online detection and identification procedures drawn from sequential analysis, namely the CUSUM (Cumulative Sum) and the SPRT (Sequential Probability Ratio Test), both of which are low-complexity algorithms.
Abstract: One of the most significant problems in the area of 3D range image processing is that of segmentation and classification from 3D laser range data, especially in real time. In this work we introduce a novel multi-layer approach to the classification of 3D laser scan data. In particular, we build a hierarchical framework of online detection and identification procedures drawn from sequential analysis, namely the CUSUM (Cumulative Sum) and SPRT (Sequential Probability Ratio Test), both of which are low-complexity algorithms. Each layer of algorithms builds upon the decisions made at the previous stage, thus providing a robust framework of online decision making. In our new framework we are not only able to classify into coarse classes such as vertical, horizontal and/or vegetation, but also to identify objects characterized by more subtle or gradual changes, such as curbs or steps. Moreover, our new multi-layer approach combines information across scan lines and results in more accurate decision making. We perform experiments in complex urban scenes and provide quantitative results.
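Both building blocks are a few lines each. A sketch of a two-sided CUSUM flagging a curb-like step in a simulated scan-line height profile (the tuning constants k and h are illustrative, not the paper's):

```python
import numpy as np

def two_sided_cusum(z, k=0.5, h=4.0):
    """Two-sided CUSUM over a standardized scan-line signal; returns
    the first index where either side alarms (None if no alarm)."""
    sp = sn = 0.0
    for t, zt in enumerate(z):
        sp = max(0.0, sp + zt - k)   # upward-shift statistic
        sn = max(0.0, sn - zt - k)   # downward-shift statistic
        if sp > h or sn > h:
            return t
    return None

rng = np.random.default_rng(8)
heights = np.concatenate([rng.normal(0.00, 0.01, 200),   # road surface
                          rng.normal(0.12, 0.01, 100)])  # ~12 cm curb
z = (heights - heights[:200].mean()) / heights[:200].std()
print("curb-like change detected at index:", two_sided_cusum(z))
```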

Posted Content
TL;DR: An information theoretic analysis of Wald's sequential probability ratio test shows that, in case the test terminates at time instant $k$, the probability of deciding for hypothesis $\mathcal{H}_1$ (or the counter-hypothesis $\mathcal{H}_0$) is independent of $k$.
Abstract: We provide an information theoretic analysis of Wald's sequential probability ratio test. The optimality of the Wald test, in the sense that it yields the minimum average decision time for a binary decision problem, is reflected by the evolution of the information densities over time. Information densities are considered as they take into account the fact that the termination time of the Wald test depends on the actual realization of the observation sequence. Based on information densities we show that, in case the test terminates at time instant $k$, the probability to decide for hypothesis $\mathcal{H}_1$ (or the counter-hypothesis $\mathcal{H}_0$) is independent of $k$. We use this characteristic to evaluate the evolution of the mutual information between the binary random variable and the decision variable of the Wald test. Our results establish a connection between minimum mean decision times and the corresponding information processing.
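The time-independence claim is easy to probe empirically. A minimal Monte Carlo sketch with a symmetric Bernoulli SPRT (parameters illustrative): conditioned on stopping near time k, the fraction of H1 decisions stays roughly flat in k.

```python
import numpy as np

rng = np.random.default_rng(9)

def run_sprt(p=0.6, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    """Bernoulli SPRT; returns (decision, stopping time).
    True parameter p equals p1 here, so H1 is the correct decision."""
    up, lo = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr, k = 0.0, 0
    while lo < llr < up:
        k += 1
        x = rng.random() < p
        llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
    return int(llr >= up), k

# P(decide H1 | stop near time k) should be roughly constant in k.
results = [run_sprt() for _ in range(20_000)]
for k in (10, 20, 30, 40):
    sel = [d for d, t in results if abs(t - k) <= 2]
    print(f"k~{k:3d}: P(H1 | stop) ~ {np.mean(sel):.3f}  (n={len(sel)})")
```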

Proceedings ArticleDOI
01 Dec 2015
TL;DR: The proposed LTS-GSPRT amounts to an algorithm in which each sensor successively reports the decisions of local GSPRTs to the fusion center; it preserves the same asymptotic performance as the centralized GSPRT as the local and global thresholds grow large at different rates.
Abstract: This paper investigates the generalized sequential probability ratio test (GSPRT) with multiple sensors. Focusing on the communication-constrained scenario, where sensors transmit one-bit messages to the fusion center, we propose a decentralized GSPRT based on a level-triggered sampling scheme (LTS-GSPRT). The proposed LTS-GSPRT amounts to an algorithm in which each sensor successively reports the decisions of local GSPRTs to the fusion center. Interestingly, with significantly lower communication overhead, LTS-GSPRT preserves the same asymptotic performance as the centralized GSPRT when the local and global thresholds grow large at different rates.
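A minimal sketch of the level-triggered sampling idea under simplified assumptions (a plain Gaussian shift-in-mean SPRT rather than the paper's GSPRT, with illustrative thresholds): each sensor quantizes its local LLR into one-bit increments of size delta, and the fusion center runs the test on the delta-weighted bit stream.

```python
import numpy as np

rng = np.random.default_rng(10)

def lts_sprt(n_sensors=5, mu1=0.5, sigma=1.0, truth=1,
             local_delta=1.0, global_thresh=8.0, max_steps=100_000):
    """Level-triggered sampling sketch: each sensor accumulates its
    local LLR and transmits a single bit whenever it crosses +/-delta,
    then subtracts the transmitted level. The fusion center sums the
    delta-weighted bits and stops at +/-global_thresh."""
    local = np.zeros(n_sensors)
    fused = 0.0
    for step in range(max_steps):
        x = rng.normal(mu1 * truth, sigma, n_sensors)
        local += (mu1 / sigma**2) * (x - mu1 / 2)        # local LLRs
        for i in range(n_sensors):
            while abs(local[i]) >= local_delta:           # one-bit report
                bit = np.sign(local[i])
                fused += local_delta * bit
                local[i] -= local_delta * bit
        if abs(fused) >= global_thresh:
            return ("H1" if fused > 0 else "H0"), step + 1
    return "undecided", max_steps

print(lts_sprt(truth=1))   # expect ("H1", small step count)
print(lts_sprt(truth=0))   # expect ("H0", small step count)
```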