
Showing papers on "Sequential probability ratio test published in 2016"


Journal ArticleDOI
TL;DR: Simulations indicate improvements in both failure detection and recovery speed, contributing to improved accuracy and stability in HCV fault-tolerant navigation.
Abstract: A fault-detection algorithm for a redundant multisensor navigation system for hypersonic cruise vehicles (HCVs) is proposed. The algorithm comprehensively diagnoses failures according to the failure level monitored by the sequential probability ratio test (SPRT) and chi-square test as well as the failure trend monitored by the SPRT. A test statistics feedback-reset loop is also added to shorten the recovery time after failure ceases. Simulations indicate improvements in both failure detection and recovery speed, contributing to improved accuracy and stability in HCV fault-tolerant navigation.
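The Wald SPRT that this and several of the following papers build on can be sketched in a few lines. Below is a minimal, illustrative SPRT for a Gaussian shift in mean with known variance, not the paper's HCV fault-detection algorithm; the hypothesized means, the variance, and the error targets are placeholder values.

```python
import math

def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald SPRT for H0: mean = mu0 vs H1: mean = mu1 with known sigma.

    Returns ('H0', 'H1', or None if undecided) and the number of
    samples consumed before stopping.
    """
    upper = math.log((1 - beta) / alpha)   # crossing it accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing it accepts H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood-ratio increment for a Gaussian shift in mean.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return None, len(samples)  # no decision within the available data

# A deterministic stream sitting exactly at the H1 mean:
decision, n_used = sprt_gaussian_mean([1.0] * 50, mu0=0.0, mu1=1.0, sigma=1.0)
print(decision, n_used)
```

With each observation contributing a log-likelihood increment of 0.5 and an upper boundary of log(99) ≈ 4.6, the test accepts H1 after 10 samples here, illustrating why sequential tests stop far earlier than fixed-sample designs when the evidence is clear.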

69 citations


Book ChapterDOI
10 Oct 2016
TL;DR: Statistical Model Checking (SMC) is a compromise between verification and testing, in which executions of the system are monitored until a statistical algorithm can produce an estimate of whether the system satisfies a given property.
Abstract: Statistical Model Checking (SMC) is a compromise between verification and testing, in which executions of the system are monitored until a statistical algorithm can produce an estimate of whether the system satisfies a given property.
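A toy version of the SMC idea is easy to sketch: sample executions, treat each as a Bernoulli trial for "property satisfied", and stop once a simple statistical rule (here a fixed-sample-size estimate with a normal-approximation confidence half-width; actual SMC tools often use the SPRT or Chernoff bounds) reaches the requested precision. Everything below (the simulated system, the precision parameters) is an illustrative assumption, not taken from the chapter.

```python
import math
import random

def smc_estimate(run_once, epsilon=0.05, delta=0.05):
    """Estimate P(property holds) by Monte Carlo: draw enough executions
    that the normal-approximation half-width is below epsilon with
    confidence 1 - delta, then return the empirical satisfaction rate."""
    # Conservative sample size from the Hoeffding bound.
    n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
    successes = sum(run_once() for _ in range(n))
    return successes / n, n

# Toy 'system': a run satisfies the property with true probability 0.8.
rng = random.Random(42)
run_once = lambda: rng.random() < 0.8
p_hat, n_runs = smc_estimate(run_once)
print(round(p_hat, 2), n_runs)
```

The Hoeffding-based sample size (here 738 runs for epsilon = delta = 0.05) is what makes the estimate statistically defensible without exploring the full state space, which is the compromise the abstract describes.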

66 citations


Journal ArticleDOI
TL;DR: In this paper, a sequential Gaussian shift-in-mean hypothesis testing in a distributed multi-agent network is studied, where the agents update their decision statistics by simultaneously processing latest observations (innovations) sensed sequentially over time and information obtained from neighboring agents.
Abstract: This paper studies the problem of sequential Gaussian shift-in-mean hypothesis testing in a distributed multi-agent network. A sequential probability ratio test (SPRT) type algorithm in a distributed framework of the consensus+innovations form is proposed, in which the agents update their decision statistics by simultaneously processing latest observations (innovations) sensed sequentially over time and information obtained from neighboring agents (consensus). For each pre-specified set of type I and type II error probabilities, local decision parameters are derived which ensure that the algorithm achieves the desired error performance and terminates in finite time almost surely (a.s.) at each network agent. Large deviation exponents for the tail probabilities of the agent stopping time distributions are obtained and it is shown that asymptotically (in the number of agents or in the high signal-to-noise-ratio regime) these exponents associated with the distributed algorithm approach that of the optimal centralized detector. The expected stopping time for the proposed algorithm at each network agent is evaluated and is benchmarked with respect to the optimal centralized algorithm. The efficiency of the proposed algorithm in the sense of the expected stopping times is characterized in terms of network connectivity. Finally, simulation studies are presented which illustrate and verify the analytical findings.
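One consensus+innovations update of the kind described above can be sketched under simplifying assumptions (Gaussian shift-in-mean observations, a fixed doubly stochastic weight matrix); this is an illustration of the update form, not the paper's exact algorithm or its decision thresholds.

```python
def consensus_innovations_step(stats, weights, new_obs, mu0, mu1, sigma):
    """One consensus+innovations update of each agent's SPRT-type statistic:
    mix the neighbours' statistics (consensus), then add the local
    log-likelihood-ratio increment of the newest observation (innovation)."""
    n = len(stats)
    updated = []
    for i in range(n):
        mixed = sum(weights[i][j] * stats[j] for j in range(n))
        llr = (mu1 - mu0) * (new_obs[i] - (mu0 + mu1) / 2) / sigma ** 2
        updated.append(mixed + llr)
    return updated

# Three agents on a path graph with a doubly stochastic weight matrix.
W = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
stats = [0.0, 0.0, 0.0]
for _ in range(20):  # every agent keeps observing the H1 mean (1.0)
    stats = consensus_innovations_step(stats, W, [1.0, 1.0, 1.0], 0.0, 1.0, 1.0)
print([round(s, 2) for s in stats])
```

Because every agent here sees identical evidence, the statistics stay equal and grow by the local increment each step; in general the mixing step is what propagates evidence from well-informed agents to the rest of the network.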

33 citations


Journal ArticleDOI
TL;DR: A supervised land cover change detection framework in which a MODIS NDVI time series is modeled as a triply modulated cosine function using the extended Kalman filter and the trend parameter is used to derive repeated sequential probability ratio test (RSPRT) statistics, which achieves better performance in terms of accuracy and detection delay.
Abstract: To improve statistical approaches for near real-time land cover change detection in non-Gaussian time-series data, we propose a supervised land cover change detection framework in which a MODIS NDVI time series is modeled as a triply modulated cosine function using the extended Kalman filter, and the trend parameter of the triply modulated cosine function is used to derive repeated sequential probability ratio test (RSPRT) statistics. The statistics are based on relative density ratios estimated directly from the training set by a relative unconstrained least-squares importance fitting (RuLSIF) algorithm, unlike traditional likelihood ratio-based test statistics. We test the framework on simulated, synthetic, and real-world beetle infestation datasets, and show that using estimated relative density ratios in the RSPRT statistics, instead of assuming the individual density functions to be Gaussian or approximating them with Gaussian kernels, achieves better performance in terms of accuracy and detection delay. We verify the efficiency of the proposed approach by comparing its performance with three existing methods on all three datasets considered in this study. We also propose a simple heuristic technique that tunes the threshold efficiently in difficult cases of near real-time change detection, when three performance indices, namely false positives, false negatives, and mean detection delay, must be taken into account simultaneously.

26 citations


Journal ArticleDOI
TL;DR: A general framework for feature-level sequential fusion is proposed, which combines biometric features and makes a decision each time a user inputs a biometric sample; an optimal algorithm that minimizes the average number of inputs is proposed, and its optimality is proved theoretically.

22 citations


Proceedings ArticleDOI
20 Jun 2016
TL;DR: It is demonstrated that the AAKR model produces good reconstructions when the observations are similar to observations represented in the training data, and for some examples of simulated anomalies, the method reveals the abnormal behaviour.
Abstract: In this paper we present an application of sensor-based anomaly detection in maritime transport. The study is based on real sensor data streamed from a ship to shore, where the data are analysed through a big data analytics platform. The novelty of this work lies in the use of data from sensors covering different aspects of the ship's operation, exemplified here by propulsion power, speed over ground, and ship motion in four different degrees of freedom. The developed method employs Auto-Associative Kernel Regression (AAKR) for signal reconstruction, and the Sequential Probability Ratio Test (SPRT) for anomaly detection, where different hypothesis tests examining both mean and variance deviations have been evaluated. In order to compare different settings, formal state-of-the-art performance metrics have been used. We demonstrate that the AAKR model produces good reconstructions when the observations are similar to observations represented in the training data, and for some examples of simulated anomalies the method reveals the abnormal behaviour. As long as the parameters are tuned carefully, alarms are triggered appropriately by the SPRT.
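The AAKR step itself is simple to sketch: a query vector is reconstructed as a Gaussian-kernel-weighted average of historical "healthy" memory vectors, and the residual (query minus reconstruction) is what the SPRT then monitors. The bandwidth and the two-signal memory below are illustrative placeholders, not the paper's configuration.

```python
import math

def aakr_reconstruct(query, memory, bandwidth=1.0):
    """AAKR: reconstruct a query vector as the Gaussian-kernel-weighted
    average of 'healthy' memory vectors; the residual query - reconstruction
    is what a downstream SPRT would monitor."""
    weights = []
    for m in memory:
        dist2 = sum((q - v) ** 2 for q, v in zip(query, m))
        weights.append(math.exp(-dist2 / (2 * bandwidth ** 2)))
    total = sum(weights)
    if total == 0.0:
        return list(query)  # query far outside memory: degenerate fallback
    return [sum(w * m[k] for w, m in zip(weights, memory)) / total
            for k in range(len(query))]

# Tiny memory of normal operating points, e.g. (power, speed) pairs.
memory = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
recon = aakr_reconstruct([1.0, 2.0], memory)
residual = [q - r for q, r in zip([1.0, 2.0], recon)]
print([round(x, 6) for x in recon])
```

As the abstract notes, reconstructions are good when the query resembles the memory (here the residual is essentially zero); a query far from all memory vectors produces large residuals, which is exactly the signal the SPRT stage flags.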

15 citations


Journal ArticleDOI
TL;DR: Based on the governing equations for OC and ASN of the SPRT developed in the previous work, a solution for the general case is proposed that relies on approximating the original test by truncation, that is, truncating the test at some finite time K.
Abstract: The operating characteristic (OC) and average sample number (ASN) of the sequential probability ratio test (SPRT) and multi-hypothesis SPRT (MSPRT) are studied. We consider the case where the observation sequence is independent but not necessarily identically distributed. Also, the thresholds for the test can be time varying. Based on the governing equations for OC and ASN of the SPRT developed in our previous work, a solution for the general case is proposed. The governing equations for OC and ASN of the MSPRT are also obtained. Numerical solutions for MSPRT are developed. Basically, the solutions rely on approximating the original test by truncation, that is, truncating the test at some finite time $K$. We show that under some mild conditions, the approximation error diminishes as $K$ increases, at the cost of increased computation. Numerical examples are provided to demonstrate our solutions by comparing with Monte Carlo simulations, Simon’s lower bound, and Dragalin’s method (if available) for ASN.
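When closed-form OC/ASN solutions are unavailable, plain Monte Carlo provides the baseline the paper compares against. A hedged sketch for the i.i.d. Gaussian case follows (not the paper's governing-equation method); the design parameters are placeholders, and the truncation at a finite horizon mirrors the paper's device of cutting the test off at a finite time K.

```python
import math
import random

def sprt_trial(theta, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05,
               max_n=10000, rng=random):
    """Run one SPRT on N(theta, sigma^2) data; return (accepted_H0, n)."""
    upper = math.log((1 - beta) / alpha)   # accept-H1 boundary
    lower = math.log(beta / (1 - alpha))   # accept-H0 boundary
    llr = 0.0
    for n in range(1, max_n + 1):
        x = rng.gauss(theta, sigma)
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= upper:
            return False, n
        if llr <= lower:
            return True, n
    return True, max_n  # truncated at the horizon: fall back to H0

def oc_asn(theta, trials=2000, seed=1):
    """Monte Carlo estimate of the OC (P(accept H0)) and ASN at theta."""
    rng = random.Random(seed)
    accepts = total_n = 0
    for _ in range(trials):
        accept, n = sprt_trial(theta, rng=rng)
        accepts += accept
        total_n += n
    return accepts / trials, total_n / trials

oc0, asn0 = oc_asn(0.0)  # under H0 the OC should be near 1 - alpha = 0.95
print(round(oc0, 2), round(asn0, 1))
```

Under H0 the estimated OC lands near 0.95 and the ASN is a handful of samples, consistent with Wald's approximations; sweeping theta over a grid traces out the full OC and ASN curves.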

14 citations


01 Jan 2016
TL;DR: In this paper, the application of the sequential probability ratio test to such inspections is explored, and the approximate theoretical properties of the sequential test procedure are determined and compared with simulation results.
Abstract: Accessions of seeds, put aside for long-term conservation in seed banks, need to be monitored periodically in order to guard against loss of viability. Each inspection can be regarded as a formal test of the null hypothesis that the accession requires regeneration. In this paper the application of the sequential probability ratio test to such inspections is explored. The approximate theoretical properties of the sequential test procedure are determined and compared with simulation results. The most important advantage of the sequential approach over an equivalent fixed-sample approach is that, on average, it consumes far fewer seeds.
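For germination counts, the SPRT reduces to a Bernoulli likelihood-ratio walk, which makes the seed-saving advantage easy to see. A sketch follows; the viability levels p0 and p1 and the error targets are illustrative placeholders, not the paper's values.

```python
import math

def seed_sprt(germination_results, p0=0.65, p1=0.85, alpha=0.05, beta=0.05):
    """Bernoulli SPRT: H0 viability p = p0 (regenerate the accession) vs
    H1 p = p1 (still viable). Each result is 1 (germinated) or 0 (failed)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    n = 0
    for n, g in enumerate(germination_results, start=1):
        llr += math.log(p1 / p0) if g else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "viable", n
        if llr <= lower:
            return "regenerate", n
    return "undecided", n

# An accession whose tested seeds all germinate reaches 'viable' quickly,
# consuming far fewer seeds than an equivalent fixed-sample test would.
decision, seeds_used = seed_sprt([1] * 60)
print(decision, seeds_used)
```

With these placeholder parameters a run of consecutive germinations settles the question after 11 seeds, which is the abstract's point: on average the sequential procedure consumes far fewer seeds than a fixed-sample inspection.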

13 citations


Journal ArticleDOI
TL;DR: This paper redefines some concepts of fuzzy hypothesis testing, and then gives the sequential probability ratio test for fuzzy hypothesis testing with fuzzy observations.
Abstract: In hypotheses testing, such as other statistical problems, we may confront imprecise concepts. One case is a situation in which both hypotheses and observations are imprecise. In this paper, we redefine some concepts about fuzzy hypotheses testing, and then we give the sequential probability ratio test for fuzzy hypotheses testing with fuzzy observations. Finally, we give some applied examples.

12 citations


Journal ArticleDOI
TL;DR: In this paper, a two-parametric family of modified sequential probability ratio tests is proposed and analyzed to get the robust test by the minimax risk criterion, and numerical experiments illustrate the theoretical results.
Abstract: The problem of robustifying the sequential probability ratio test is considered for a discrete hypothetical model. Exact values for error probabilities and for conditional expected sample sizes are obtained. Asymptotic robustness analysis for these characteristics is performed under "contaminations". A two-parametric family of modified sequential probability ratio tests is proposed and analyzed to obtain a robust test under the minimax risk criterion. Numerical experiments illustrate the theoretical results.

11 citations


Journal ArticleDOI
TL;DR: Numerical simulation results show that the proposed sensing strategy can significantly reduce the sensing time when the majority of potential channels are occupied, and the performance of this low complexity algorithm is analyzed when the presence of unoccupied channels is rare.
Abstract: Spectrum sensing is a key technology enabling the cognitive radio system. In this paper, the problem of how to quickly and accurately find an unoccupied channel among a large number of potential channels is considered. The cognitive radio system under consideration is equipped with a narrowband sensor, hence it can only sense the potential channels in a sequential manner. In this scenario, we propose a novel two-stage mixed-observation sensing strategy. In the first stage, termed the scanning stage, the sensor observes a linear combination of the signals from a pair of channels. The purpose of the scanning stage is to quickly identify a pair of channels such that at least one of them is highly likely to be unoccupied. In the second stage, called the refinement stage, the sensor observes only the signal from one of the two channels identified in the first stage, and selects one of them as the unoccupied channel. The problem under this setup is one of two ordered, concatenated Markov stopping times, and the optimal solution is obtained using tools from multiple stopping time theory. It turns out that the optimal solution has a rather complex structure, hence a low-complexity algorithm is proposed to facilitate implementation. In the proposed low-complexity algorithm, the cumulative sum test is adopted in the scanning stage and the sequential probability ratio test is adopted in the refinement stage. The performance of this low-complexity algorithm is analyzed when the presence of unoccupied channels is rare. Numerical simulation results show that the proposed sensing strategy can significantly reduce the sensing time when the majority of potential channels are occupied.
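The cumulative sum test used in the scanning stage can be sketched as Page's CUSUM: the running statistic is clipped at zero so evidence against the change never accumulates, and an alarm hands off to the SPRT refinement stage. The log-likelihood-ratio increment and threshold below are illustrative Gaussian placeholders, not the paper's design.

```python
def cusum_scan(samples, llr_inc, h):
    """Page's CUSUM: alarm when max(0, W + increment) crosses threshold h.
    Returns the 1-based alarm index, or None if no alarm fires."""
    w = 0.0
    for n, x in enumerate(samples, start=1):
        w = max(0.0, w + llr_inc(x))  # clip at zero: forget pre-change evidence
        if w >= h:
            return n  # alarm: switch to the SPRT refinement stage
    return None

# Gaussian LLR increment for 'free' (mean 0) vs 'occupied' (mean 1), unit
# variance; we scan FOR a free channel, so small readings add evidence.
def inc(x):
    return (0.0 - 1.0) * (x - 0.5)

obs = [1.0] * 5 + [0.0] * 20  # occupied readings, then the channel goes free
alarm = cusum_scan(obs, inc, h=4.0)
print(alarm)
```

During the occupied stretch the clipping keeps the statistic at zero, so the detection delay depends only on the post-change evidence: here eight free readings at 0.5 each cross the threshold, giving an alarm at sample 13.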

Journal ArticleDOI
TL;DR: The theoretical validity of 2-SPRT is proved for the problem of testing hypotheses with multivariate normal densities, and a technique of forced independence and identical distribution is presented to map the non-i.i.d. likelihood ratio sequence to an i.i.d. one.
Abstract: This paper presents an approach to multiple-model hypothesis testing (MMHT) based on the 2-SPRT (2-MMSPRT) for detecting unknown events that may have multiple possible distributions. The sequential probability ratio test (SPRT) based MMHT method (MMSPRT) is promising because of its efficiency and theoretical validity. However, it may suffer from the SPRT's lack of an upper bound on its stopping time, especially in the mis-specified case. The proposed 2-MMSPRT algorithm not only copes with this problem, but also provides efficient detection in the sense of minimizing the maximum expected sample size subject to error probability constraints. Specifically, we prove the theoretical validity of the 2-SPRT for the problem of testing hypotheses with multivariate normal densities. Moreover, we present a technique of forced independence and identical distribution (i.i.d.) to map the non-i.i.d. likelihood ratio sequence to an i.i.d. one, which enables us to apply the SPRT and 2-SPRT effectively to the dynamic case (under the linear-Gaussian assumption) with a non-identical distribution. The detection efficiency of the 2-MMSPRT under some assumptions/approximations is also verified. Performance of the 2-MMSPRT is evaluated for signal detection and model-set selection problems in several scenarios. Simulation results demonstrate the detection efficiency of the proposed 2-MMSPRT compared with the MMSPRT and some traditional tests.

Journal ArticleDOI
TL;DR: By the development of a sequential probability ratio test for the fuzzy hypothesis testing (FHT), a novel cooperative sequential detector is proposed to deal with the effect of noise power uncertainty.
Abstract: Efficient and reliable spectrum sensing is extremely significant, especially in the presence of noise uncertainty in low SNR environment below which conventional detectors fail to be robust. In this letter, by the development of a sequential probability ratio test for the fuzzy hypothesis testing (FHT), we propose a novel cooperative sequential detector to deal with the effect of noise power uncertainty. In this approach, for every measurement, FHT is computed by each cognitive radio. Subsequently, fusion center sequentially accumulates these fuzzy test statistics and decides about the sensing time. Simulation results are illustrated to show the effectiveness and robustness of the proposed sequential FHT detector. The significant reduction in sample complexity is demonstrated for our scheme in comparison with energy detector, sequential crisp hypothesis testing detector, and fixed sample size FHT detector.

Journal ArticleDOI
TL;DR: The study showed that the SPRT with multidimensional IRT has the same characteristics as the SPRT with unidimensional IRT and results in more accurate classifications than the latter when used for multidimensional data.
Abstract: A classification method is presented for adaptive classification testing with a multidimensional item response theory (IRT) model in which items are intended to measure multiple traits, that is, within-dimensionality. The reference composite is used with the sequential probability ratio test (SPRT) to make decisions and decide whether testing can be stopped before reaching the maximum test length. Item-selection methods are provided that maximize the determinant of the information matrix at the cutoff point or at the projected ability estimate. A simulation study illustrates the efficiency and effectiveness of the classification method. Simulations were run with the new item-selection methods, random item selection, and maximization of the determinant of the information matrix at the ability estimate. The study also showed that the SPRT with multidimensional IRT has the same characteristics as the SPRT with unidimensional IRT and results in more accurate classifications than the latter when used for multidimensional data.
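Under a unidimensional Rasch model the SPRT classification rule is straightforward to sketch: the two hypotheses are abilities just below and just above the cutoff, and each response adds a log-likelihood-ratio increment. The cutoff abilities, item difficulties, and error rates below are placeholders, and the paper's multidimensional reference-composite and item-selection machinery is not reproduced.

```python
import math

def rasch_p(theta, b):
    """Rasch model probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta_lo, theta_hi,
                  alpha=0.05, beta=0.05):
    """SPRT pass/fail classification around a cutoff: H0 theta = theta_lo
    (fail) vs H1 theta = theta_hi (pass); responses are 1/0 per item."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    n = 0
    for n, (u, b) in enumerate(zip(responses, difficulties), start=1):
        p_hi, p_lo = rasch_p(theta_hi, b), rasch_p(theta_lo, b)
        llr += math.log(p_hi / p_lo) if u else math.log((1 - p_hi) / (1 - p_lo))
        if llr >= upper:
            return "pass", n
        if llr <= lower:
            return "fail", n
    return "continue", n  # maximum test length reached without a decision

# A test taker answering every item correctly is classified quickly.
decision, n_items = sprt_classify([1] * 20, [0.0] * 20,
                                  theta_lo=-0.5, theta_hi=0.5)
print(decision, n_items)
```

The "continue" branch is where adaptive testing hooks in: as long as neither boundary is crossed, the next item is selected (in the paper, by maximizing the information-matrix determinant) and testing continues up to the maximum test length.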

Posted Content
TL;DR: In this article, the authors introduce almost-fixed-length hypothesis testing, in which the decision maker declares the true hypothesis almost always after collecting a fixed number of samples $n$; however, in very rare cases with exponentially small probability, the decision maker is allowed to collect another set of samples (no more than polynomial in $n$), improving the trade-off between type-I and type-II error exponents.
Abstract: The maximum type-I and type-II error exponents associated with the newly introduced almost-fixed-length hypothesis testing are characterized. In this class of tests, the decision maker declares the true hypothesis almost always after collecting a fixed number of samples $n$; however, in very rare cases with exponentially small probability, the decision maker is allowed to collect another set of samples (no more than polynomial in $n$). This class of hypothesis tests is shown to bridge the gap between classical hypothesis testing with a fixed sample size and sequential hypothesis testing, and to improve the trade-off between type-I and type-II error exponents.

Journal ArticleDOI
TL;DR: This work analyzes the asymptotic performances of fully distributed sequential hypothesis testing procedures as the type-I and type-II error rates approach zero, in the context of a sensor network without a fusion center.
Abstract: This work analyzes the asymptotic performances of fully distributed sequential hypothesis testing procedures as the type-I and type-II error rates approach zero, in the context of a sensor network without a fusion center. In particular, the sensor network is defined by an undirected graph, where each sensor can observe samples over time, access the information from the adjacent sensors, and perform the sequential test based on its own decision statistic. Different from most literature, the sampling process and the information exchange process in our framework take place simultaneously (or, at least in comparable time-scales), thus cannot be decoupled from one another. Two message-passing schemes are considered, based on which the distributed sequential probability ratio test (DSPRT) is carried out respectively. The first scheme features the dissemination of the raw samples. Although the sample propagation based DSPRT is shown to yield the asymptotically optimal performance at each sensor, it incurs excessive inter-sensor communication overhead due to the exchange of raw samples with index information. The second scheme adopts the consensus algorithm, where the local decision statistic is exchanged between sensors instead of the raw samples, thus significantly lowering the communication requirement compared to the first scheme. In particular, the decision statistic for DSPRT at each sensor is updated by the weighted average of the decision statistics in the neighbourhood at every message-passing step. We show that, under certain regularity conditions, the consensus algorithm based DSPRT also yields the order-2 asymptotically optimal performance at all sensors.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed SSPRT-based MHT can achieve better tracking performance than MHT based on the WSPRT under a high false alarm spatial density.
Abstract: To date, Wald sequential probability ratio test (WSPRT) has been widely applied to track management of multiple hypothesis tracking (MHT). But in a real situation, if the false alarm spatial density is much larger than the new target spatial density, the original track score will be very close to the deletion threshold of the WSPRT. Consequently, all tracks, including target tracks, may easily be deleted, which means that the tracking performance is sensitive to the tracking environment. Meanwhile, if a target exists for a long time, its track will have a high score, which will make the track survive for a long time even after the target has disappeared. In this paper, to consider the relationship between the hypotheses of the test, we adopt the Shiryayev SPRT (SSPRT) for track management in MHT. By introducing a hypothesis transition probability, the original track score can increase faster, which solves the first problem. In addition, by setting an independent SSPRT for track deletion, the track score can decrease faster, which solves the second problem. The simulation results show that the proposed SSPRT-based MHT can achieve better tracking performance than MHT based on the WSPRT under a high false alarm spatial density.

Posted Content
TL;DR: In this paper, a nonparametric sequential test is proposed that prevents type I error inflation under continuous monitoring and does not require knowledge of the underlying probability distribution generating the data.
Abstract: We propose a nonparametric sequential test that aims to address two practical problems pertinent to online randomized experiments: (i) how to perform a hypothesis test for complex metrics; (ii) how to prevent type I error inflation under continuous monitoring. The proposed test does not require knowledge of the underlying probability distribution generating the data. We use the bootstrap to estimate the likelihood for blocks of data, followed by a mixture sequential probability ratio test. We validate this procedure on data from a major online e-commerce website. We show that the proposed test controls the type I error at any time, has good power, is robust to misspecification in the distribution generating the data, and allows quick inference in online randomized experiments.
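For Gaussian data with a normal mixing prior over the alternative mean, the mixture likelihood ratio has a closed form, which makes the always-valid rejection rule easy to sketch. The paper's bootstrap likelihood estimation for complex metrics is not shown here; the variance, prior scale, and level below are illustrative assumptions.

```python
import math

def msprt_normal(samples, sigma=1.0, tau=1.0, alpha=0.05):
    """Mixture SPRT for H0: mean = 0, mixing the alternative mean over a
    N(0, tau^2) prior. Returns the sample index at which H0 is first
    rejected, or None; the type-I error stays below alpha even under
    continuous monitoring."""
    s = 0.0
    for n, x in enumerate(samples, start=1):
        s += x
        v = sigma ** 2 + n * tau ** 2
        # Closed-form log mixture likelihood ratio for Gaussian data.
        log_lr = (0.5 * math.log(sigma ** 2 / v)
                  + tau ** 2 * s ** 2 / (2 * sigma ** 2 * v))
        if log_lr >= math.log(1.0 / alpha):
            return n
    return None

stopped_at = msprt_normal([1.0] * 100)  # a stream with a clear unit effect
print(stopped_at)
```

Because the rejection rule compares the mixture likelihood ratio against 1/alpha at every step, peeking at the result after each observation does not inflate the type I error, which is exactly the continuous-monitoring property the abstract claims.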


Journal ArticleDOI
TL;DR: In this article, a variable sequential sampling plan having desired operating characteristics (OCs) indexed by quality loss is discussed in the area of statistical quality control, based on Wald's sequential probability ratio test.
Abstract: The proportion of non-conforming items has traditionally been used as an evaluation criterion for the quality of items. However, it is not necessarily a proper evaluation criterion for controlling the high-quality manufacturing of recent years. Accordingly, in order to achieve further quality improvement and innovation, a more careful quality evaluation has been required. The concept of quality loss in the Taguchi methods was devised as such a severe criterion of quality evaluation, and a variable single sampling plan having desired operating characteristics (OCs) indexed by quality loss has been proposed in the area of statistical quality control. Meanwhile, the most economical sampling inspection in terms of the average sample number (ASN) is the sequential sampling plan based on Wald's sequential probability ratio test. From the viewpoint of cost reduction, we therefore discuss a variable sequential sampling plan having desired OC indexed by quality loss wi...

Sun, Xu, Li, Rangwei, Hu, Peng 
01 Jan 2016
TL;DR: In this paper, a tracking filter algorithm based on maneuvering detection delay is presented in order to solve the fuzzy problem of target maneuver decisions introduced by the measurement errors of active sonar.
Abstract: A tracking filter algorithm based on maneuvering detection delay is presented in order to solve the fuzzy problem of target maneuver decisions introduced by the measurement errors of active sonar. When the maneuvering detection is unclear, two target-motion hypotheses, uniform motion and maneuver, derived from the method of multiple hypothesis tracking, are generated to delay the final decision time. The hypothesis test statistic is then constructed from the residual sequence. The active sonar's ability to track targets with unknown prior information is improved by the modified sequential probability ratio test and by integrating the advantages of the strong tracking filter and the Kalman filter. Simulation results show that the algorithm is able not only to track uniform targets accurately, but also to track maneuvering targets stably. The effectiveness of the algorithm for real underwater acoustic targets is further verified by sea trial data processing results.

Journal ArticleDOI
TL;DR: In this article, the concept of R-symmetry, the basic properties of the M-Gaussian distribution, and some analogies between Gaussian and M-Gaussian distributions are reviewed.
Abstract: Scientific data, as a sequential or a simple random sample, often indicate a unimodal, right-skewed population. For such data, the ubiquitous symmetry assumption and the Gaussian model are inappropriate, and in case of high skewness even corrections using devices such as the Box-Cox transformation are inadequate. In such cases, the recently introduced M-Gaussian distribution, which may be described as an R-symmetric Gaussian twin with its mode as the centrality parameter, can be an appropriate model. In this article, the concept of R-symmetry, the basic properties of the M-Gaussian distribution, and some analogies between Gaussian and M-Gaussian distributions are reviewed. Then the sequential probability ratio test (SPRT) for a simple hypothesis about the mode of an M-Gaussian population, assuming the dispersion parameter to be known, is derived. The average sample number (ASN) and operating characteristic (OC) function are obtained and the robustness properties of the test with respect to the harmonic ...

Proceedings ArticleDOI
26 Jun 2016
TL;DR: This paper addresses the problem of adaptive waveform design for target detection with composite sequential hypothesis testing with Bayesian considerations, and proposes a novel test, named penalized GSPRT (PGSPRT), on the basis of restraining the exponential growth of the GSP RT with respect to the sequential probability ratio test (SPRT).
Abstract: This paper addresses the problem of adaptive waveform design for target detection with composite sequential hypothesis testing. We begin with an asymptotic analysis of the generalized sequential probability ratio test (GSPRT). The analysis is based on Bayesian considerations, similar to the ones used for the derivation of the Bayesian information criterion (BIC) for model order selection. Following the analysis, a novel test, named penalized GSPRT (PGSPRT), is proposed on the basis of restraining the exponential growth of the GSPRT with respect to the sequential probability ratio test (SPRT). The performance measures of the PGSPRT in terms of average sample number (ASN) and error probabilities are also investigated. In the proposed waveform design scheme, the transmit spatial waveform (beamforming) is adaptively determined at each step based on observations in the previous steps. The spatial waveform is determined to minimize the ASN of the PGSPRT. Simulations demonstrate the performance measures of the new algorithm for target detection in a multiple input, single output channel.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: SimSPRT-II is a comprehensive parametric Monte Carlo simulation framework for tuning, optimization, and performance evaluation of SPRT-based AI algorithms in a broad range of engineering and security prognostic applications.
Abstract: New prognostic AI innovations are being developed, optimized, and productized for enhancing the reliability, availability, and serviceability of enterprise servers and data centers, known as Electronic Prognostics (EP). EP prognostic innovations are now being spun off for prognostic cyber-security applications, and for Internet-of-Things (IoT) prognostic applications in the industrial sectors of manufacturing, transportation, and utilities. For these applications, the function of prognostic anomaly detection is achieved by predicting what each monitored signal "should be" via highly accurate empirical nonlinear nonparametric (NLNP) regression algorithms, and then differencing the optimal signal estimates from the real measured signals to produce "residuals". The residuals are then monitored with a Sequential Probability Ratio Test (SPRT). The advantage of the SPRT, when tuned properly, is that it provides the earliest mathematically possible annunciation of anomalies growing into time-series signals for a wide range of complex engineering applications. SimSPRT-II is a comprehensive parametric Monte Carlo simulation framework for tuning, optimization, and performance evaluation of SPRT algorithms for any type of digitized time-series signal. SimSPRT-II enables users to systematically optimize SPRT performance as a multivariate function of Type-I and Type-II errors, variance, sampling density, and system disturbance magnitude, and then quickly evaluate what we believe to be the most important overall prognostic performance metrics for real-time applications: empirical false- and missed-alarm probabilities (FAPs and MAPs), SPRT tripping frequency as a function of anomaly severity, and overhead compute cost as a function of sampling density. SimSPRT-II has become a vital tool for tuning, optimization, and formal validation of SPRT-based AI algorithms in a broad range of engineering and security prognostic applications.

Journal ArticleDOI
TL;DR: For the one-sided hypothesis test in the standard exponential family, the truncated sequential probability ratio test stops at min(τ, T), where T/b converges to a finite constant larger than 1/c′(θ1).
Abstract: Assume i.i.d. random variables {X1, …, Xn, …} follow the standard exponential family {dFθ(x) = exp(θx − c(θ))dF0(x)}. For the one-sided hypothesis test H0: θ = θ0 versus H1: θ = θ1, where c(θ0) = c(θ1), the truncated sequential probability ratio test stops at min(τ, T), where τ is the stopping time of the underlying SPRT with boundary b, and H0 is rejected if τ < T. Inference problems based on asymptotic pivots are considered given τ < T, assuming that T/b converges to a finite constant larger than 1/c′(θ1).

Proceedings ArticleDOI
16 May 2016
TL;DR: A modified version of Wald's sequential probability ratio test is used by this paper to reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time and increasing certainty about whether or not obstacles are present.
Abstract: The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots to conservatively avoid collisions even in the face of sensor uncertainty. In this paper we take a new approach to the virtual bumper system by applying a powerful but rarely examined statistical test. By using a modified version of Wald's sequential probability ratio test, we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We further use the concept of sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. With this principled obstacle certainty measure, our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a group-sequential test for testing a simple hypothesis against a composite one-sided alternative, which defines the following sequential statistical procedure: At each stage a random number of independent identically distributed observations (a group of observations) is observed and, based on the collected data, the decision to accept or to reject the hypothesis or to continue the observation is made.
Abstract: We consider a group-sequential test for testing a simple hypothesis against a composite one-sided alternative, which defines the following sequential statistical procedure: at each stage a random number of independent identically distributed observations (a group of observations) is observed and, based on the collected data, the decision is made to accept or reject the hypothesis or to continue observation. For tests with a finite number of observations, we prove the existence of the derivative of the power function and establish information-type inequalities relating that derivative to other characteristics of the test: the average number of observations and the type I error probability.
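A minimal sketch of the group-sequential decision loop described above, with an externally supplied per-observation statistic and illustrative thresholds (the function names and thresholds are assumptions, not from the paper):

```python
def group_sequential(groups, stat_increment, accept_thr, reject_thr):
    """Group-sequential procedure: after each group of observations,
    update a running test statistic and either accept H0, reject H0,
    or continue to the next group.
    Returns (number_of_groups_used, decision)."""
    stat = 0.0
    for k, group in enumerate(groups, start=1):
        stat += sum(stat_increment(x) for x in group)
        if stat <= accept_thr:
            return k, "accept H0"
        if stat >= reject_thr:
            return k, "reject H0"
    return len(groups), "continue (no decision)"

# Two groups of two observations each; statistic is the running sum
print(group_sequential([[1, 1], [1, 1]], lambda x: x,
                       accept_thr=-4.0, reject_thr=3.0))  # -> (2, 'reject H0')
```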

Posted Content
TL;DR: In this paper, the authors considered the problem of sequential signal detection in a multichannel system where the number and location of signals are a priori unknown and established the asymptotic optimality of a generalized sequential likelihood ratio test and a mixture-based sequential likelihood ratio test.
Abstract: We consider the problem of sequential signal detection in a multichannel system where the number and location of signals are a priori unknown. We assume that the data in each channel are sequentially observed and follow a general non-i.i.d. stochastic model. Under the assumption that the local log-likelihood ratio processes in the channels converge r-completely to positive and finite numbers, we establish the asymptotic optimality of a generalized sequential likelihood ratio test and a mixture-based sequential likelihood ratio test. Specifically, we show that both tests minimize the first r moments of the stopping time distribution asymptotically as the probabilities of false alarm and missed detection approach zero. Moreover, we show that both tests asymptotically minimize all moments of the stopping time distribution when the local log-likelihood ratio processes have independent increments and simply obey the Strong Law of Large Numbers. This extends a result previously known in the case of i.i.d. observations when only one channel is affected. We illustrate the general detection theory using several practical examples, including the detection of signals in Gaussian hidden Markov models, white Gaussian noises with unknown intensity, and testing of the correlation coefficient of a first-order autoregression. Finally, we illustrate the feasibility of both sequential tests when assuming an upper and a lower bound on the number of signals and compare their non-asymptotic performance using a simulation study.
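The mixture-based statistic can be illustrated under a strong simplification: at most one affected channel, a uniform prior over channels, and per-sample log-likelihood-ratio increments supplied externally. This sketch is not the authors' general construction.

```python
import math

def mixture_sequential_test(llr_streams, threshold):
    """Mixture-based sequential likelihood-ratio test for a multichannel
    system, simplified to at most one affected channel with a uniform
    prior. llr_streams[k][n] is the log-likelihood ratio increment of
    channel k at time n. Stops when the log of the mixture of the
    per-channel likelihood ratios crosses the threshold."""
    K = len(llr_streams)
    cum = [0.0] * K                      # cumulative LLR per channel
    for n in range(len(llr_streams[0])):
        for k in range(K):
            cum[k] += llr_streams[k][n]
        # log-sum-exp of the uniform mixture, computed stably
        m = max(cum)
        mix = m + math.log(sum(math.exp(c - m) for c in cum) / K)
        if mix >= threshold:
            return n + 1, "signal detected"
    return len(llr_streams[0]), "no detection"

# Channel 0 carries a signal (positive LLR drift); the others do not
streams = [[1.0] * 5, [-0.5] * 5, [-0.5] * 5]
print(mixture_sequential_test(streams, threshold=2.0))  # -> (4, 'signal detected')
```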

Journal ArticleDOI
TL;DR: In this article, the problem of evaluating a military or GPS/GSM system's precision quality is considered, where one sequentially observes bivariate normal data (Xi, Yi) and wants to test hypotheses on the circular error probability (CEP) or the probability of nonconforming, i.e., the probability of the system hitting or missing a pre-specified disk target.
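The quantity under test can be illustrated in the simplest special case of a zero-mean circular bivariate normal impact distribution, where the hit probability for a disk of radius r has the closed form 1 − exp(−r²/(2σ²)); this assumption-laden sketch is not the paper's general setting.

```python
import math
import random

def hit_probability_closed_form(r, sigma):
    """P(X^2 + Y^2 <= r^2) for independent zero-mean N(0, sigma^2)
    components: the Rayleigh CDF 1 - exp(-r^2 / (2 sigma^2))."""
    return 1.0 - math.exp(-r * r / (2.0 * sigma * sigma))

def hit_probability_mc(r, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of the same hit probability."""
    rng = random.Random(seed)
    hits = sum(rng.gauss(0, sigma) ** 2 + rng.gauss(0, sigma) ** 2 <= r * r
               for _ in range(n))
    return hits / n

# The CEP is the radius containing the shot with probability 0.5,
# i.e. the solution of 1 - exp(-r^2 / (2 sigma^2)) = 0.5:
# r = sigma * sqrt(2 ln 2)
sigma = 1.0
cep = sigma * math.sqrt(2 * math.log(2))
print(round(hit_probability_closed_form(cep, sigma), 3))  # -> 0.5
```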

Journal ArticleDOI
TL;DR: This paper implements the Sequential Probability Ratio Test (SPRT) for the Burr Type III model based on time-domain data; the results show that the adopted model yields a rejection decision for the datasets used.
Abstract: Increased dependence on software systems has made the assessment of their reliability a crucial task in software development, and effective tools and mechanisms are required to facilitate it. Classical approaches such as hypothesis testing are significantly time consuming, as a conclusion can be drawn only after collecting large amounts of data; a statistical method such as sequential analysis can be applied to arrive at a decision quickly. This paper implements the Sequential Probability Ratio Test (SPRT) for the Burr Type III model based on time-domain data. Parameters were estimated using Maximum Likelihood Estimation, and SPRT was applied to five real-time software failure datasets borrowed from different software projects. The results show that the adopted model yields a rejection decision for the datasets used. DOI: http://dx.doi.org/10.11591/ijece.v6i6.11511
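For context, the decision boundaries used by any Wald-type SPRT, including reliability settings like the one above, follow directly from the chosen risks α and β; this is a standard textbook computation, not specific to this paper.

```python
import math

def wald_bounds(alpha, beta):
    """Wald's SPRT decision bounds on the log-likelihood ratio:
    accept H0 when LLR <= log(B), reject H0 (accept H1) when
    LLR >= log(A), with A = (1-beta)/alpha and B = beta/(1-alpha)."""
    A = (1 - beta) / alpha
    B = beta / (1 - alpha)
    return math.log(B), math.log(A)

# Typical symmetric risks used in reliability testing
lo, hi = wald_bounds(alpha=0.05, beta=0.05)
print(round(lo, 3), round(hi, 3))  # -> -2.944 2.944
```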