Showing papers on "Sequential probability ratio test published in 2018"


Journal ArticleDOI
TL;DR: In this article, the authors analyzed the second-order asymptotic performance of fully distributed sequential hypothesis testing procedures as the type-I and type-II error rates approach zero in the context of a sensor network without a fusion center.
Abstract: This paper analyzes the asymptotic performance of fully distributed sequential hypothesis testing procedures as the type-I and type-II error rates approach zero, in the context of a sensor network without a fusion center. In particular, the sensor network is defined by an undirected graph, where each sensor can observe samples over time, access the information from adjacent sensors, and perform the sequential test based on its own decision statistic. Unlike in most of the literature, the sampling process and the information exchange process in our framework take place simultaneously (or at least on comparable time-scales) and thus cannot be decoupled from one another. Our goal is to achieve second-order asymptotically optimal performance at every sensor, i.e., an average detection delay within a constant gap of the centralized optimal sequential test as the error rates approach zero for a fixed number of sensors. To that end, a type of test procedure that resembles the well-known sequential probability ratio test (SPRT), termed the distributed SPRT (DSPRT) in this paper, is studied under two message-passing schemes. The first scheme features the dissemination of the raw samples: every sample propagates over the network by being relayed from one sensor to another until it reaches all sensors in the network. Although the sample-propagation-based DSPRT is shown to yield asymptotically optimal performance at each sensor, it incurs excessive inter-sensor communication overhead due to the exchange of raw samples with index information. The second scheme adopts a consensus algorithm, where the local decision statistic is exchanged between sensors instead of the raw samples, significantly lowering the communication requirement compared with the first scheme. In particular, the decision statistic for DSPRT at each sensor is updated by the weighted average of the decision statistics in its neighborhood at every message-passing step. We show that, under certain regularity conditions, the consensus-algorithm-based DSPRT also yields second-order asymptotically optimal performance at all sensors for a fixed number of sensors. Our asymptotic analyses of the two message-passing-based DSPRTs are corroborated by simulations using Gaussian and Laplacian samples.
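
To make the consensus-based update concrete, here is a minimal sketch of a consensus-style DSPRT step for a Gaussian shift-in-mean test on a ring network. The weight matrix, thresholds, and signal model are illustrative assumptions, not the paper's construction.

    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors = 5
    # Ring network: each sensor averages itself with its two neighbours.
    W = np.zeros((n_sensors, n_sensors))
    for i in range(n_sensors):
        W[i, [(i - 1) % n_sensors, i, (i + 1) % n_sensors]] = 1.0 / 3.0

    A, B = 8.0, -8.0          # illustrative upper/lower stopping thresholds
    S = np.zeros(n_sensors)   # per-sensor decision statistics
    for t in range(1, 1001):
        x = rng.normal(1.0, 1.0, n_sensors)  # one new sample per sensor (H1 true)
        llr = x - 0.5                        # LLR increment for N(1,1) vs N(0,1)
        S = W @ S + llr                      # consensus averaging + local innovation
        if np.all(S >= A) or np.all(S <= B):
            break
    print(f"all sensors stopped at t={t}; statistics: {np.round(S, 2)}")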

22 citations


Journal ArticleDOI
TL;DR: This work proposes scheduling of sequential compressed spectrum sensing which jointly exploits compressed sensing (CS) and sequential periodic detection techniques to achieve more accurate and timely wideband sensing.
Abstract: The support for high-data-rate applications with cognitive radio technology necessitates wideband spectrum sensing. However, long-term wideband sensing is costly and especially difficult in the presence of uncertainty, such as high noise, interference, outliers, and channel fading. In this work, we propose scheduling of sequential compressed spectrum sensing, which jointly exploits compressed sensing (CS) and sequential periodic detection techniques to achieve more accurate and timely wideband sensing. Instead of invoking CS to reconstruct the signal in each period, our proposed scheme performs a backward grouped-compressed-data sequential probability ratio test (backward GCD-SPRT) using compressed data samples in sequential detection, while CS recovery is pursued only when needed. This method on the one hand significantly reduces the CS recovery overhead, and on the other takes advantage of sequential detection to improve the sensing quality. Furthermore, we propose (a) an in-depth sensing scheme to accelerate sensing decision-making when a change in channel status is suspected, (b) a block-sparse CS reconstruction algorithm to exploit the block-sparsity properties of a wide spectrum, and (c) a set of schemes to fuse results from the recovered spectrum signals to further improve the overall sensing accuracy. Extensive performance evaluation results show that our proposed schemes can significantly outperform peer schemes under sufficiently low SNR settings.

17 citations


Journal ArticleDOI
TL;DR: A stratified sampling method for statistically checking Probabilistic Computation Tree Logic formulas on discrete-time Markov chains with the sequential probability ratio test; the stratified samples are negatively correlated and thus give lower variance.

14 citations


Journal ArticleDOI
TL;DR: The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is better suited to post-market drug and vaccine safety surveillance.
Abstract: Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, when the null hypothesis is not rejected, it is still important to minimize the sample size. In post-market drug and vaccine safety surveillance, by contrast, that is not important: especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is better suited to post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis.
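
The convex-versus-concave contrast is easy to see with the standard power family of spending functions, α(t) = α·t^ρ: ρ > 1 gives a convex shape (error spent late) and ρ < 1 a concave shape (error spent early). A small sketch, with ρ values chosen purely for illustration rather than taken from the paper:

    import numpy as np

    alpha = 0.05
    t = np.linspace(0.05, 1.0, 20)     # information fraction of the surveillance
    convex = alpha * t**3              # rho = 3: spends Type I error late
    concave = alpha * t**0.5           # rho = 0.5: spends Type I error early
    for ti, cv, cc in zip(t[::5], convex[::5], concave[::5]):
        print(f"t={ti:.2f}  convex spent={cv:.4f}  concave spent={cc:.4f}")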

13 citations


Proceedings ArticleDOI
12 Apr 2018
TL;DR: The problem of sequential multiple hypothesis testing in a distributed sensor network is considered and two algorithms are proposed: the Consensus + Innovations Matrix Sequential Probability Ratio Test ($\mathcal{CI}\mathrm{MSPRT}$) for multiple simple hypotheses and the robust Least-Favorable-Density $\mathcal{CI}\mathrm{MSPRT}$ for hypotheses with uncertainties in the corresponding distributions.
Abstract: The problem of sequential multiple hypothesis testing in a distributed sensor network is considered and two algorithms are proposed: the Consensus + Innovations Matrix Sequential Probability Ratio Test ($\mathcal{CI}\mathrm{MSPRT}$) for multiple simple hypotheses and the robust Least-Favorable-Density $\mathcal{CI}\mathrm{MSPRT}$ for hypotheses with uncertainties in the corresponding distributions. Simulations are performed to verify and evaluate the performance of both algorithms under different network conditions and noise contaminations.
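
As a rough, centralized reference for the matrix SPRT component, the sketch below accumulates pairwise log-likelihood ratios for three Gaussian mean hypotheses and stops once one hypothesis leads every other by a threshold; the consensus + innovations network exchange of the actual $\mathcal{CI}\mathrm{MSPRT}$ is omitted, and all parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    means = np.array([0.0, 1.0, 2.0])  # hypothesised Gaussian means, unit variance
    true_mean = 1.0
    thr = 6.0                          # common illustrative stopping threshold
    loglik = np.zeros(3)
    for t in range(1, 10000):
        x = rng.normal(true_mean, 1.0)
        loglik += -(x - means) ** 2 / 2.0      # log-likelihoods up to a constant
        L = loglik[:, None] - loglik[None, :]  # pairwise LLR matrix L[i, j]
        accepted = [i for i in range(3)
                    if all(L[i, j] >= thr for j in range(3) if j != i)]
        if accepted:
            print(f"accepted H{accepted[0]} after {t} samples")
            break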

12 citations


Journal ArticleDOI
20 Apr 2018
TL;DR: Simulations show that the Seq RDT approach leads to faster decision making compared to its fixed-sample-size counterpart Block RDT and is robust to model mismatches compared to the Sequential Probability Ratio Test (SPRT) when the actual signal is a distorted version of the assumed signal.
Abstract: In this work, we propose a non-parametric sequential hypothesis test based on random distortion testing (RDT). RDT addresses the problem of testing whether or not a random signal, $\Xi$, observed in independent and identically distributed (i.i.d.) additive noise deviates by more than a specified tolerance, $\tau$, from a fixed model, $\xi_0$. The test is non-parametric in the sense that the underlying signal distributions under each hypothesis are assumed to be unknown. The need to control the probabilities of false alarm (PFA) and missed detection (PMD), while reducing the number of samples required to make a decision, leads to a novel sequential algorithm, Seq RDT. We show that, under mild assumptions on the signal, Seq RDT has the properties desired of a sequential test. We introduce the concept of a buffer and derive bounds on the PFA and PMD, from which we choose the buffer size. Simulations show that Seq RDT leads to faster decision making on average compared to its fixed-sample-size (FSS) counterpart, Block RDT. These simulations also show that the proposed algorithm is robust to model mismatches compared to the sequential probability ratio test (SPRT).

10 citations


Journal ArticleDOI
TL;DR: This paper implements an Armitage sequential test for nonmaneuvering and maneuvering targets, using both feature data and kinematic measurements for target classification, with both centralized and distributed fusion architectures for the embedded tracking.
Abstract: This paper deals with target classification using both feature data and kinematic measurements. The problem is tackled by multihypothesis sequential testing with embedded target tracking. We implement an Armitage sequential test for nonmaneuvering and maneuvering targets. Both centralized and distributed fusion architectures are used for the embedded tracking. The contributions of the kinematic measurements to classification are analyzed, and classification performance improvement is shown analytically for a special case. Numerical results are provided to demonstrate the performance of our algorithms.

7 citations


Proceedings ArticleDOI
Zhenyu Zhao, Mandie Liu, Anirban Deb
28 Jul 2018
TL;DR: In this paper, the authors proposed a methodology for rolling out features in an automated way using an adaptive experimental design, where a feature is gradually ramped up from a small proportion of users to a larger population based on real-time evaluation of the performance of important metrics.
Abstract: During the rapid development cycle for Internet products (websites and mobile apps), new features are developed and rolled out to users constantly. Features with code defects or design flaws can cause outages and significant degradation of user experience. The traditional method of code review and change management can be time-consuming and error-prone. In order to make the feature rollout process safe and fast, this paper proposes a methodology for rolling out features in an automated way using an adaptive experimental design. Under this framework, a feature is gradually ramped up from a small proportion of users to a larger population based on real-time evaluation of the performance of important metrics. If any regression is detected during a ramp-up step, the ramp-up process stops and the feature developer is alerted. Two main algorithm components power this framework: 1) a continuous monitoring algorithm, using a variant of the sequential probability ratio test (SPRT) to monitor the feature performance metrics and alert feature developers when a metric degradation is detected; 2) an automated ramp-up algorithm, deciding when and how to ramp up to the next stage with a larger sample size. This paper presents one monitoring algorithm and three ramp-up algorithms, including time-based, power-based, and risk-based (a Bayesian approach) schedules. These algorithms are evaluated and compared on both simulated and real data. The framework provides three benefits for feature rollout: 1) for defective features, it detects the regression early and reduces the negative effect; 2) for healthy features, it rolls out the feature quickly; 3) it reduces the need for manual intervention by automating the feature rollout process.
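
As a hedged illustration of the monitoring component, the sketch below runs a Wald-style SPRT on per-interval metric differences (treatment minus control), assumed Gaussian with known variance. The paper uses its own SPRT variant; the effect size, error rates, and data model here are assumptions.

    import math
    import numpy as np

    alpha, beta, delta, sigma = 0.05, 0.10, 0.5, 1.0
    upper = math.log((1 - beta) / alpha)   # cross: alert "regression"
    lower = math.log(beta / (1 - alpha))   # cross: declare "no regression"

    rng = np.random.default_rng(2)
    llr = 0.0
    for t in range(1, 10000):
        d = rng.normal(-0.5, sigma)        # simulated degraded metric difference
        # LLR increment for N(-delta, sigma^2) vs N(0, sigma^2)
        llr += (-delta * d - delta**2 / 2.0) / sigma**2
        if llr >= upper:
            print(f"alert: regression detected at interval {t}")
            break
        if llr <= lower:
            print(f"metric healthy after {t} intervals")
            break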

5 citations


Journal ArticleDOI
TL;DR: It is proved that continuous sequential analysis is uniformly better than group sequential analysis under a comprehensive class of statistical performance measures; hence, optimal solutions are in the class of continuous designs.
Abstract: Statistical sequential hypothesis testing is meant to analyze cumulative data accruing in time. The methods can be divided into two types, group and continuous sequential approaches, and a question that arises is whether one approach surpasses the other in some sense. For Poisson stochastic processes, we prove that continuous sequential analysis is uniformly better than group sequential analysis under a comprehensive class of statistical performance measures. Hence, optimal solutions are in the class of continuous designs. This paper also offers a pioneering study that compares classical Type I error spending functions in terms of the expected number of events to signal. This was done for a number of tuning-parameter scenarios. The results indicate that a log-exp shape for the Type I error spending function is the best choice in most of the evaluated scenarios.

5 citations


Journal ArticleDOI
TL;DR: The sequential probability ratio test (SPRT) is a useful statistical method which can conclude in favor of either the null hypothesis H0 or the alternative hypothesis H1 with, on average, about 50% of the sample size required by a non-sequential test.
Abstract: In medical, health, and sports sciences, researchers desire a device with high reliability and validity. This article focuses on reliability and validity studies with n subjects and m ≥ 2 repeated measurements per subject. High statistical power can be achieved by increasing n or m, and increasing m is often easier than increasing n in practice, unless m is so high that it results in systematic bias. The sequential probability ratio test (SPRT) is a useful statistical method which can conclude in favor of either a null hypothesis H0 or an alternative hypothesis H1 with, on average, 50% of the sample size required by a non-sequential test. The traditional SPRT requires the likelihood function for each observed random variable, and evaluating the likelihood ratio after each observation of a subject can be a practical burden. Instead, the m observed random variables per subject can be transformed into a test statistic which has a known sampling distribution under H0 and under H1. This allows us to formulate a SPRT based on a sequence of test statistics. In this article, three types of study are considered: reliability of a device, reliability of a device relative to a criterion device, and validity of a device relative to a criterion device. Using the SPRT to test the reliability of a device, for small m, results in an average sample size of about 50% of the fixed sample size of a non-sequential test. For comparing a device to a criterion, the average sample size approaches approximately 60% as m increases. The SPRT tolerates violation of the normality assumption for validity studies, but not for reliability studies.
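
The "SPRT on a sequence of test statistics" idea admits a compact sketch: each subject's m measurements are reduced to T = (m-1)·S², which is σ²·χ²(m-1) under each hypothesis, so the log-likelihood ratio of T has a closed form. The variance hypotheses, m, and error rates below are illustrative assumptions, not the reliability setup of the article.

    import math
    import numpy as np

    m = 4                          # repeated measurements per subject
    v0, v1 = 1.0, 2.25             # hypothesised measurement variances (H0 vs H1)
    alpha, beta = 0.05, 0.10
    A = math.log((1 - beta) / alpha)
    B = math.log(beta / (1 - alpha))

    rng = np.random.default_rng(3)
    llr, n = 0.0, 0
    while B < llr < A:
        n += 1
        y = rng.normal(0.0, math.sqrt(v1), size=m)  # one subject, simulated under H1
        t = (m - 1) * y.var(ddof=1)                 # per-subject test statistic
        # closed-form LLR of t, where t/v ~ chi-square(m-1) under variance v
        llr += 0.5 * (m - 1) * math.log(v0 / v1) - 0.5 * t * (1 / v1 - 1 / v0)
    print("H1" if llr >= A else "H0", f"after n={n} subjects")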

4 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of classification via composite sequential hypothesis testing with an alternative solution derived based on Bayesian considerations, similar to the ones used for the Bayesian information criterion and asymptotic maximum a posteriori probability criterion for model order selection.
Abstract: This paper addresses the problem of classification via composite sequential hypothesis testing. We focus on two possible schemes for the hypotheses: non-nested and nested. For the first case, we present the generalized sequential probability ratio test (GSPRT) and provide an analysis of its asymptotic optimality. Yet, for the nested case, this algorithm is shown to be inconsistent. Consequently, an alternative solution is derived based on Bayesian considerations, similar to the ones used for the Bayesian information criterion and asymptotic maximum a posteriori probability criterion for model order selection. The proposed test, named penalized GSPRT (PGSPRT), is based on restraining the exponential growth of the GSPRT with respect to the sequential probability ratio test. Furthermore, the commonly used performance measure for sequential tests, known as the average sample number, is evaluated for the PGSPRT under each of the hypotheses. Simulations are carried out to compare the performance measures of the proposed algorithms for two nested model order selection problems.

Journal ArticleDOI
01 Jul 2018
TL;DR: A new numerical approach to approximating test characteristics for a sequential probability ratio test (SPRT) and a truncated SPRT is constructed, and two-sided truncated functions are proposed for constructing the robustified SPRT.
Abstract: In this article the problem of a sequential test for the model of independent non-identically distributed observations is considered. Based on recursive calculation, a new numerical approach to approximating test characteristics for a sequential probability ratio test (SPRT) and a truncated SPRT (TSPRT) is constructed. The problem of robustness evaluation is also studied when the contamination is represented by distortion of the distributions of all increments of the log-likelihood ratio statistic. Two-sided truncated functions are proposed for constructing the robustified SPRT, and an algorithm to choose the thresholds of these truncated functions is given. The results are applied to a sequential test on the parameters of time series with trend. Several contaminated models of time series with trend are used to study the robustness of the truncated SPRT. Numerical examples confirming the theoretical results are given.
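
The robustification idea can be sketched very simply: clip each log-likelihood ratio increment with a two-sided truncation before accumulating, so that a single outlier can move the test statistic only by a bounded amount. The Gaussian model, clipping levels, and thresholds are illustrative assumptions; the article derives its thresholds with a dedicated algorithm.

    import numpy as np

    rng = np.random.default_rng(4)
    A, B = 4.6, -4.6              # illustrative stopping thresholds
    g_lo, g_hi = -2.0, 2.0        # two-sided truncation of the increments

    S = 0.0
    for n in range(1, 10000):
        x = rng.normal(1.0, 1.0)  # sample under H1 (possibly contaminated)
        llr = x - 0.5             # LLR increment for N(1,1) vs N(0,1)
        S += np.clip(llr, g_lo, g_hi)
        if S >= A or S <= B:
            print("H1" if S >= A else "H0", f"after n={n} samples")
            break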

Dissertation
04 Dec 2018
TL;DR: This dissertation aims to solve these tasks jointly by providing generic algorithms that are applicable to a wide variety of real-world problems, including a novel way of performing the Basic Probability Assignment.
Abstract: Statistical robustness and collaborative inference in a distributed sensor network are two challenging requirements posed on many modern signal processing applications. This dissertation aims at solving these tasks jointly by providing generic algorithms that are applicable to a wide variety of real-world problems. The first part of the thesis is concerned with sequential detection---a branch of detection theory that is focused on decision-making based on as few measurements as possible. After reviewing some fundamental concepts of statistical hypothesis testing, a general formulation of the Consensus+Innovations Sequential Probability Ratio Test for sequential binary hypothesis testing in distributed networks is derived. In a next step, multiple robust versions of the algorithm based on two different robustification paradigms are developed. The functionality of the proposed detectors is verified in simulations, and their performance is examined under different network conditions and outlier concentrations. Subsequently, the concept is extended to multiple hypotheses by fusing it with the Matrix Sequential Probability Ratio Test, and robust versions of the resulting algorithm are developed. The performance of the proposed algorithms is verified and evaluated in simulations. Finally, the Dempster-Shafer Theory of Evidence is applied to distributed sequential hypothesis testing for the first time in the literature. After introducing a novel way of performing the Basic Probability Assignment, an evidence-based sequential detector for application in distributed sensor networks is developed and its performance is verified in simulations. The second part of the thesis deals with multi-target tracking in distributed sensor networks. The problem of data association is discussed and the considered state-space and measurement models are introduced. Next, the concept of random finite sets as well as Probability Hypothesis Density filtering are reviewed. Subsequently, a novel distributed Particle Filter implementation of the Probability Hypothesis Density Filter is developed, which is based on a two-step communication scheme. A robust as well as a centralized version of the algorithm are derived. Furthermore, the computational complexity and communication load of the distributed as well as the centralized trackers are analyzed. Finally, simulations are performed to compare the proposed algorithms with an existing distributed tracker. To this end, a distributed version of the Posterior Cramer-Rao Lower Bound is developed, which serves as a performance bound. The results show that the proposed algorithms perform well under different environmental conditions and outperform the competition.

Proceedings ArticleDOI
22 Oct 2018
TL;DR: A novel method for diagnosing gear cracks in a gearbox based on principal component analysis (PCA) and the sequential probability ratio test (SPRT); the results indicate that the method is feasible and practical for classifying different conditions of the gearbox.
Abstract: This paper presents a novel method for diagnosing gear cracks in a gearbox based on principal component analysis (PCA) and the sequential probability ratio test (SPRT). The vibration signal collected from the gearbox experimental system is denoised using the wavelet packet transform, which is well suited for noise reduction and for extracting features to identify faulty gears. PCA is used to extract useful parameters of the vibration signals, and the parameter with the largest contribution rate after dimensionality reduction is chosen as the test parameter for the SPRT. The results indicate that this method is feasible and practical for classifying different conditions of the gearbox.
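
A rough sketch of the PCA-then-SPRT pipeline follows: features are projected onto the first principal component learned from healthy data, and a Gaussian SPRT runs on that score. The simulated features, the assumed faulty-condition shift, and the thresholds are invented for illustration; the paper extracts its features via the wavelet packet transform first.

    import numpy as np

    rng = np.random.default_rng(5)
    train = rng.normal(0.0, 1.0, (200, 6))  # healthy-condition feature frames
    mu = train.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(train - mu, rowvar=False))
    pc1 = vecs[:, -1]                       # first principal component

    A, B = 5.0, -5.0                        # illustrative SPRT thresholds
    shift = 1.0                             # assumed faulty-condition PC1 shift
    S = 0.0
    for n in range(1, 10000):
        frame = mu + rng.normal(0.0, 1.0, 6) + shift * pc1  # simulated faulty frame
        score = (frame - mu) @ pc1          # SPRT test parameter: PC1 score
        S += shift * score - shift**2 / 2.0 # LLR increment for N(shift,1) vs N(0,1)
        if S >= A or S <= B:
            print("fault" if S >= A else "healthy", f"after {n} frames")
            break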

Patent
23 Nov 2018
TL;DR: A distance-measurement-based secure positioning method is proposed to solve the secure positioning problem of a wireless sensor network. The method combines the advantages of a proposed enhanced density clustering algorithm with hypothesis testing on distance consistency; the influence of malicious anchor nodes on the positioning process is eliminated by detecting them, thereby guaranteeing positioning validity.
Abstract: The invention provides a distance-measurement-based secure positioning method to solve the secure positioning problem of a wireless sensor network. The method combines the advantages of a proposed enhanced density clustering algorithm with hypothesis testing on distance consistency: the influence of malicious anchor nodes on the positioning process is eliminated by detecting them, so that positioning validity is guaranteed. The proposed MNDCC and EMNDCC algorithms comprise four stages: data collection, adaptive repeated DBSCAN (Density-Based Spatial Clustering of Applications with Noise) clustering, detection-model building, and the sequential probability ratio test. Malicious anchor nodes are detected by exploiting the consistency between two distance measurements (RSSI and TOA), and the detection result is judged by a sequential probability ratio test for statistical decision, effectively reducing the two types of errors (rejecting true anchors and accepting false ones). Overall, the algorithm effectively increases the detection rate of malicious anchor nodes, thereby improving positioning precision and guaranteeing positioning validity.


Posted Content
TL;DR: In this article, a modified sequential probability ratio test that can be used to reduce the average sample size required to perform statistical hypothesis tests at specified levels of significance and power is described.
Abstract: We describe a modified sequential probability ratio test that can be used to reduce the average sample size required to perform statistical hypothesis tests at specified levels of significance and power. Examples are provided for $z$ tests, $t$ tests, and tests of binomial success probabilities. A description of a software package to implement the test designs is provided. We compare the sample sizes required in fixed design tests conducted at 5$\%$ significance levels to the average sample sizes required in sequential tests conducted at 0.5$\%$ significance levels, and we find that the two sample sizes are approximately equal.
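
For reference, the classical Wald SPRT for the z-test setting mentioned above takes only a few lines; the boundary formulas A = log((1-β)/α) and B = log(β/(1-α)) are Wald's standard approximations, and the paper's modification is not reproduced here. The effect size is an assumed value.

    import math
    import numpy as np

    alpha, beta, mu1 = 0.005, 0.05, 0.5   # 0.5% significance, as in the abstract
    A = math.log((1 - beta) / alpha)
    B = math.log(beta / (1 - alpha))

    rng = np.random.default_rng(6)
    llr, n = 0.0, 0
    while B < llr < A:
        n += 1
        x = rng.normal(mu1, 1.0)          # data simulated under H1: mu = mu1
        llr += mu1 * x - mu1**2 / 2.0     # Gaussian LLR increment, unit variance
    print("accept H1" if llr >= A else "accept H0", f"after n={n} samples")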

Proceedings ArticleDOI
01 Sep 2018
TL;DR: Simulations verify the competitive performance of a robustified Consensus + Innovations Matrix Sequential Probability Ratio Test, which is made resilient to distributional uncertainties by using robust estimators.
Abstract: We show how to robustify the Consensus + Innovations Matrix Sequential Probability Ratio Test against distributional uncertainties using robust estimators. Furthermore, we propose four distributed sequential tests for multiple hypotheses based on the median, the Hodges-Lehmann estimator, the M-estimator, and the sample myriad. Simulations verify the competitive performance of the proposed approach in comparison to an alternative method based on least favorable densities.

Posted Content
TL;DR: Point-to-point molecular communication system design is examined wherein synchronisation errors are explicitly considered; the proposed receiver and modulation designs achieve strongly improved asynchronous detection performance for the same data rate as a decision feedback based receiver by a factor of 1/2.
Abstract: Achieving precise synchronisation between transmitters and receivers is particularly challenging in diffusive molecular communication environments. To this end, point-to-point molecular communication system design is examined wherein synchronisation errors are explicitly considered. Two transceiver design questions are considered: the development of a sequential probability ratio test-based detector which allows for additional observations in the presence of uncertainty due to mis-synchronisation at the receiver, and a modulation design which is optimised for this receiver strategy. The modulation is based on optimising an approximation for the probability of error for the detection strategy and directly exploits the structure of the probability of molecules hitting a receiver within a particular time slot. The proposed receiver and modulation designs achieve strongly improved asynchronous detection performance for the same data rate as a decision feedback based receiver by a factor of 1/2.

Posted Content
20 Nov 2018
TL;DR: In this paper, a modified sequential probability ratio test is proposed to reduce the average sample size required to perform statistical hypothesis tests at specified levels of significance and power, with examples for z-tests, t-tests, and tests of binomial success probabilities.
Abstract: We describe a modified sequential probability ratio test that can be used to reduce the average sample size required to perform statistical hypothesis tests at specified levels of significance and power. Examples are provided for z-tests, t-tests, and tests of binomial success probabilities. A description of a software package to implement the tests is provided. We also compare the sample sizes required in fixed design tests conducted at 5% significance levels to the average sample sizes required in sequential tests conducted at 0.5% significance levels, and find that the two sample sizes are approximately the same. This illustrates that the proposed sequential tests can provide higher levels of significance using smaller sample sizes.

Journal ArticleDOI
TL;DR: In this paper, the most powerful (MP) test for the scale parameter is obtained when the shape parameters are known, and likelihood ratio tests (LRT) for the scale parameter are derived for both known and unknown shape parameters.
Abstract: Kozubowski and Podgorski (2003) have discussed properties, characterizations, and estimation of the parameters of the skew log-Laplace distribution (SLLD). In this article, classical optimum tests for the scale parameter of the SLLD are derived. The most powerful (MP) test for the scale parameter is obtained when the shape parameters are known. Wald's sequential probability ratio test (SPRT) is obtained, and its properties are studied. Likelihood ratio tests (LRT) for the scale parameter are derived for both known and unknown shape parameters. Finally, the SPRT and LRT are illustrated on real-life data.

Journal ArticleDOI
TL;DR: This work proposes a sequential canonical correlation technique (S-CCT) method to estimate the number of active PUs quickly and accurately, based on the classical CCT and multi-hypothesis sequential probability ratio test.
Abstract: In cognitive radio networks, a priori information on the number of primary users (PUs) is helpful for estimating more specific parameters, such as carrier frequency, direction of arrival, and location. We propose a sequential canonical correlation technique (S-CCT) method to estimate the number of active PUs quickly and accurately. The proposed method is based on the classical CCT and a multi-hypothesis sequential probability ratio test. We also derive asymptotic expressions for the average sample number and the detection probability. Simulation results show that the proposed S-CCT method achieves better performance with fewer samples than the CCT.

Journal ArticleDOI
19 Feb 2018 - Sensors
TL;DR: This paper proposes a Hard-decision-based STC (HSTC) method, which takes decision error rate, timeliness, and estimation error into account and is superior in both decision and estimation.
Abstract: Methods dealing with the problem of Joint Tracking and Classification (JTC) are abundant, among which Simultaneous Tracking and Classification (STC) provides a modularized scheme that solves the tracking and classification subproblems simultaneously. However, STC provides no explicit hard decision on the class label, only a soft decision (class probability). This does not fit many practical cases in which a hard decision is needed. To solve this problem, this paper proposes a Hard-decision-based STC (HSTC) method. HSTC takes decision error rate, timeliness, and estimation error into account. Specifically, for decision, the sequential probability ratio test (SPRT) is adopted due to its desirable properties and its adaptability to our situation. For estimation, by utilizing the two-way information exchange between the tracker and the classifier, we propose three flexible tracking schemes related to the decision. The HSTC tracking result is divided into three parts according to the time of making the hard decision. In general, the proposed HSTC method takes advantage of both the SPRT and STC. Finally, two illustrative JTC examples with hard decisions verify the effectiveness of the proposed HSTC method. They show that HSTC meets the demands of the problem and performs better in both decision and estimation.

Proceedings ArticleDOI
01 Dec 2018
TL;DR: In this paper, a sequential probability ratio test-based detector and a modulation design are proposed for point-to-point molecular communication system design, where synchronisation errors are explicitly considered.
Abstract: Achieving precise synchronisation between transmitters and receivers is particularly challenging in diffusive molecular communication environments. To this end, point-to-point molecular communication system design is examined wherein synchronisation errors are explicitly considered. Two transceiver design questions are considered: the development of a sequential probability ratio test-based detector which allows for additional observations in the presence of uncertainty due to mis-synchronisation at the receiver, and a modulation design which is optimised for this receiver strategy. The modulation is based on optimising an approximation for the probability of error for the detection strategy and directly exploits the structure of the probability of molecules hitting a receiver within a particular time slot. The proposed receiver and modulation designs achieve strongly improved asynchronous detection performance for the same data rate as a decision feedback based receiver by a factor of 1/2.


Journal ArticleDOI
TL;DR: In this article, a new monitoring procedure for patient recruitment in a clinical trial is proposed. Based on the sequential probability ratio test with improved stopping boundaries due to Woodroofe, the method allows for continuous monitoring of the rate of enrollment.
Abstract: We propose Sequential Patient Recruitment Monitoring (SPRM), a new monitoring procedure for patient recruitment in a clinical trial. Based on the sequential probability ratio test with improved stopping boundaries due to Woodroofe, the method allows for continuous monitoring of the rate of enrollment and gives an early warning when the recruitment is unlikely to achieve the target enrollment. The packet-data approach combined with the Central Limit Theorem makes the method robust to the distribution of the recruitment entry pattern. A straightforward application of the counting-process framework can be used to estimate the probability of achieving the target enrollment under the assumption that the current trend continues. The required extension of the recruitment period can also be derived for a given confidence level. SPRM is a new, continuous patient recruitment monitoring tool that provides an opportunity for corrective action in a timely manner. It is suitable for the modern, centralized data management environment and requires minimal effort to maintain. We illustrate the method using real data from two well-known, multicenter, phase III clinical trials.
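
A hedged sketch of the underlying mechanism: an SPRT on enrollment counts in successive monitoring windows, Poisson with the target rate under H0 and a shortfall rate under H1. The rates, window choice, and Wald-style boundaries are illustrative assumptions; SPRM itself uses Woodroofe's improved stopping boundaries and a packet-data formulation.

    import math
    import numpy as np

    lam0, lam1 = 10.0, 7.0            # target vs shortfall enrollments per window
    alpha, beta = 0.05, 0.10
    A = math.log((1 - beta) / alpha)  # cross: warn that recruitment is lagging
    B = math.log(beta / (1 - alpha))  # cross: recruitment on track

    rng = np.random.default_rng(7)
    llr, week = 0.0, 0
    while B < llr < A and week < 500:
        week += 1
        k = rng.poisson(lam1)         # enrollments this window (simulated lagging)
        # Poisson LLR increment: log f(k; lam1) - log f(k; lam0)
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
    print("warning: target unlikely" if llr >= A else "on track", f"(week {week})")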


Journal ArticleDOI
TL;DR: The procedure shows some desirable properties and is ready to use in a few settings but demands adjustments in others; future work might consider refinements of the geographical structure.
Abstract: Common cancer monitoring practice is seldom prospective and is rather driven by public requests. This study aims to assess the performance of a recently developed prospective cancer monitoring method and the statistical tools used, in particular the sequential probability ratio test, with regard to specificity, sensitivity, observation time, and heterogeneity in the size of the geographical unit. A simulation study based on a predefined selection of cancer types, geographical units, and time period was set up. Based on the population structure of Lower Saxony, the mean numbers of cases for three diagnoses were randomly assigned to the geographical units during 2008–2012. A two-stage monitoring procedure was then executed based on the standardized incidence ratio and the sequential probability ratio test. Scenarios differed in the simulation of clusters, the significance level, and the test parameter indicating an elevated risk. Performance strongly depended on the choice of the test parameter. If the expected number of cases was low, the significance level was not fully exhausted; hence, the number of false positives was lower than the chosen significance level suggested, leading to high specificity. Sensitivity increased with the expected number of cases and the amount of risk, and decreased with the size of the geographical unit. The procedure showed some desirable properties and is ready to use in a few settings but demands adjustments in others. Future work might consider refinements of the geographical structure; inhomogeneous unit sizes could be addressed by a flexible choice of the test parameter related to the observation time.

Proceedings ArticleDOI
01 Dec 2018
TL;DR: From the Receiver Operating Characteristic (ROC) and average sample number (ASN) metrics, it is observed that the energy-based sequential sensing procedure yields a better probability of detection than the SPRT-based procedure for a given probability of false alarm.
Abstract: We consider a Cognitive Radio Network with one Primary User (PU) and N Secondary Users (SUs). In this paper, we study the problem of joint channel sensing and channel access for SUs. When the channel is in use by the PU, the signal that the PU sends and the channel fading gains are unknown to the SUs. The channel-sensing problem that we consider is detecting whether or not there is an unknown signal (with random fading) in noise. For this problem, we propose a sequential detection procedure based on the energy of the samples that each SU observes. As soon as an SU detects the idle/busy state of the channel, it broadcasts its local decision to all other SUs. We propose a global decision rule that declares the channel idle only if at least $\Gamma$ out of N SUs have broadcast idle local decisions, and declares the channel busy otherwise. Channel access is granted to the first SU to broadcast an idle decision. We study the detection and false-alarm performance of the proposed procedure and compare it with that of a Sequential Probability Ratio Test (SPRT) based sensing procedure. From the Receiver Operating Characteristic (ROC) and average sample number (ASN) metrics, we observe that our energy-based sequential sensing procedure yields a better probability of detection than the SPRT-based procedure for a given probability of false alarm. Also, as the threshold $\Gamma$ on the number of idle local decisions increases, the probability of detection also increases, but at the cost of detection delay.
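
A simplified sketch of the scheme: each SU runs a sequential test on cumulative sample energy, modelling the idle channel as zero-mean noise with variance s0 and the busy channel as variance s1 > s0, and the global rule declares the channel idle once $\Gamma$ of the N SUs have locally decided idle. In the paper the busy-signal distribution is unknown; s1 and the thresholds here are illustrative assumptions.

    import math
    import numpy as np

    rng = np.random.default_rng(8)
    N, Gamma = 5, 3
    s0, s1 = 1.0, 2.0                # noise-only vs signal-plus-noise variance
    A, B = 4.6, -4.6                 # illustrative stopping thresholds

    def su_local_decision(busy):
        """One SU's sequential energy test; returns (decision, samples used)."""
        llr = 0.0
        for n in range(1, 10000):
            x = rng.normal(0.0, math.sqrt(s1 if busy else s0))
            # LLR increment of N(0,s1) vs N(0,s0): a function of the energy x**2
            llr += 0.5 * x**2 * (1 / s0 - 1 / s1) - 0.5 * math.log(s1 / s0)
            if llr >= A:
                return "busy", n
            if llr <= B:
                return "idle", n
        return "busy", n

    decisions = [su_local_decision(busy=False)[0] for _ in range(N)]
    idle = decisions.count("idle") >= Gamma   # Gamma-out-of-N fusion rule
    print(decisions, "-> global decision:", "idle" if idle else "busy")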

Journal ArticleDOI
TL;DR: The proposed methodology can serve as the basis for improving additional standards (for example, ISO 8422:2006) and has been accepted into the work plan of IEC TC-56.
Abstract: The Sequential Probability Ratio Test (SPRT) is widely used in the field of reliability and quality control. This paper is a continuation and a significant extension of the authors' earlier paper; it is dedicated to various risk ratios (α/β) and is intended to increase the use of the SPRT for practical and research needs. The sample number (SN) until the test stops is a random value, and its distribution tails can be extremely long relative to the average SN (ASN). This is not suitable for practical use; therefore, truncation is required, usually by a pair of lines whose intersection, denoted the Truncation Apex (TA), determines the maximum SN (maxSN). The optimality of the test is determined by the minimality of the SN (by means of maxSN and ASN) for a given Operating Characteristic. Formulas and an algorithm are presented for the TA and other parameters of the optimal test stopping boundaries for various α/β. This methodology also shortens the test-planning process; displacement of the TA from its optimal location results in a significant increase in ASN. The study was implemented in the Israeli standard SI-61123, and a revision of IEC 61123 and IEC 61124 (for exponentially distributed data) based on this study has been accepted into the work plan of IEC TC-56. The proposed methodology can serve as the basis for improving additional standards, for example ISO 8422:2006.
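
The truncation geometry lends itself to a small numeric sketch: Wald's parallel acceptance and rejection lines for a binomial SPRT in the (sample number n, cumulative failures d) plane, cut off at an assumed Truncation Apex that fixes maxSN. The boundary formulas are Wald's classical ones; the apex location and all plan parameters below are illustrative, not the optimal values derived in the paper.

    import math

    p0, p1, alpha, beta = 0.02, 0.08, 0.05, 0.10
    # Wald's binomial SPRT boundaries: reject if d >= s*n + h1, accept if d <= s*n - h0
    g = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
    s = math.log((1 - p0) / (1 - p1)) / g     # common slope of both lines
    h1 = math.log((1 - beta) / alpha) / g     # rejection-line intercept
    h0 = math.log((1 - alpha) / beta) / g     # acceptance-line intercept

    max_sn = 120                              # assumed Truncation Apex abscissa
    apex_d = s * max_sn                       # truncation lines meet on the slope
    for n in (20, 60, max_sn):
        print(f"n={n:3d}  accept if d <= {s*n - h0:5.2f}  reject if d >= {s*n + h1:5.2f}")
    print(f"Truncation Apex at (n={max_sn}, d={apex_d:.2f}) -> maxSN = {max_sn}")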