
Showing papers on "Sequential probability ratio test" published in 2017


Journal ArticleDOI
TL;DR: The BTSPRT method not only improves classification accuracy and decision speed compared with other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the speed-accuracy tradeoff.
Abstract: Developing a subject-specific classifier that recognizes mental states quickly and reliably is an important issue in brain–computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters, selected by a grid-search method, were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balanced the decision time of each class, and we term it balanced-threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed the average maximum accuracy of the proposed method to be 83.4% and the ...
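
The core decision rule behind BTSPRT is Wald's two-threshold SPRT. The sketch below shows that rule in its generic form; the Gaussian evidence model, the threshold values, and all parameters are illustrative assumptions, not the authors' EEG pipeline or their balanced-threshold selection.

```python
import numpy as np

def sprt(llrs, upper, lower):
    """Wald's SPRT: accumulate log-likelihood ratios until a threshold is hit.

    Decide H1 when the running sum reaches `upper`, H0 when it falls to
    `lower`. Returns (decision, stopping_time); decision is None if the
    stream is exhausted first.
    """
    s, k = 0.0, 0
    for k, llr in enumerate(llrs, start=1):
        s += llr
        if s >= upper:
            return "H1", k
        if s <= lower:
            return "H0", k
    return None, k

# Illustrative evidence model: features ~ N(+0.3, 1) under H1 and N(-0.3, 1)
# under H0, so each sample contributes an LLR of 0.6 * x.
rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, size=500)                   # data generated under H1
print(sprt(0.6 * x, upper=np.log(19), lower=-np.log(19)))  # ~5% Wald thresholds
```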

22 citations


Journal ArticleDOI
TL;DR: This paper discusses power and sample-size computation for likelihood ratio and Wald testing of the significance of covariate effects in latent class models, and shows how to calculate the non-centrality parameter using a large simulated data set from the model under the alternative hypothesis.
Abstract: This paper discusses power and sample-size computation for likelihood ratio and Wald testing of the significance of covariate effects in latent class models. For both tests, asymptotic distributions can be used; that is, the test statistic can be assumed to follow a central Chi-square under the null hypothesis and a non-central Chi-square under the alternative hypothesis. Power or sample-size computation using these asymptotic distributions requires specification of the non-centrality parameter, which in practice is rarely known. We show how to calculate this non-centrality parameter using a large simulated data set from the model under the alternative hypothesis. A simulation study is conducted evaluating the adequacy of the proposed power analysis methods, determining the key study design factor affecting the power level, and comparing the performance of the likelihood ratio and Wald test. The proposed power analysis methods turn out to perform very well for a broad range of conditions. Moreover, apart from effect size and sample size, an important factor affecting the power is the class separation, implying that when class separation is low, rather large sample sizes are needed to achieve a reasonable power level.
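
The paper's recipe is easy to sketch in generic form: compute the LR statistic on one large data set simulated under the alternative, rescale it to the target sample size to get the noncentrality parameter, then read power off a noncentral chi-square. The latent class specifics are replaced here by a toy one-parameter test (a single Gaussian mean, df = 1); all numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def power_from_large_sim(lr_big, n_big, n_target, df, alpha=0.05):
    """Power of an LR test at sample size n_target, using the noncentrality
    estimated from one large data set simulated under the alternative."""
    nc = lr_big * n_target / n_big            # rescale LR statistic to n_target
    crit = chi2.ppf(1 - alpha, df)            # central chi-square critical value
    return ncx2.sf(crit, df, nc)              # noncentral chi-square tail = power

# Toy stand-in for the latent class setting: test H0 mu = 0 for a Gaussian
# mean (df = 1), true effect 0.2; LR statistic is n * xbar^2 for unit variance.
rng = np.random.default_rng(1)
n_big = 200_000
x = rng.normal(0.2, 1.0, n_big)
print(power_from_large_sim(n_big * x.mean() ** 2, n_big, n_target=300, df=1))
```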

21 citations


Proceedings ArticleDOI
01 Jun 2017
TL;DR: This work provides a novel analysis of Wald's sequential probability ratio test based on information theoretic measures for symmetric thresholds, symmetric noise, and equally likely hypotheses, and shows that the decision time of the Wald test contains no information on which hypothesis is true beyond the decision outcome.
Abstract: We provide a novel analysis of Wald's sequential probability ratio test based on information theoretic measures for symmetric thresholds, symmetric noise, and equally likely hypotheses. This test is optimal in the sense that it yields the minimum mean decision time. To analyze the decision-making process we consider information densities, which represent the stochastic information content of the observations yielding a stochastic termination time of the test. Based on this, we show that the conditional probability to decide for hypothesis H1 (or the counter-hypothesis H0) given that the test terminates at time instant k is independent of the time k. An analogous property has been found for a continuous-time first passage problem with two absorbing boundaries in the contexts of non-equilibrium statistical physics and communication theory. Moreover, we study the evolution of the mutual information between the binary variable to be tested and the output of the Wald test. Notably, we show that the decision time of the Wald test contains no information on which hypothesis is true beyond the decision outcome.
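
The central property is easy to probe by simulation: generate data under one hypothesis, record the decision and the stopping time, and tabulate the conditional decision probability per stopping time. The sketch below assumes +/- mu Gaussian hypotheses with symmetric thresholds; discrete-time overshoot makes the constancy approximate, and all parameters are illustrative rather than from the paper.

```python
import numpy as np

# Monte Carlo check: P(decide H1 | test stops at step k) should be (nearly)
# constant in k, i.e. the stopping time adds no information beyond the
# decision outcome.
rng = np.random.default_rng(2)
mu, thr, runs = 0.25, 2.0, 100_000
by_time = {}                                   # stopping time k -> decisions
for _ in range(runs):
    s, k = 0.0, 0
    while abs(s) < thr:
        k += 1
        s += 2 * mu * rng.normal(mu, 1.0)      # LLR increment; data follow H1
    by_time.setdefault(k, []).append(s > 0)    # True = decided H1
for k in sorted(by_time)[:8]:
    d = by_time[k]
    print(f"k={k:3d}  P(decide H1 | stop at k) = {np.mean(d):.3f}  (n={len(d)})")
```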

12 citations


Book ChapterDOI
14 Jun 2017
TL;DR: A study of web usage logs is reported to verify whether good recognition rates can be achieved in the task of distinguishing between human users and automated bots using computational intelligence techniques.
Abstract: This work reports on a study of web usage logs to verify whether it is possible to achieve good recognition rates in the task of distinguishing between human users and automated bots using computational intelligence techniques. Two problem statements are given: offline (for completed sessions) and online (for sequences of individual HTTP requests). The former is solved with several standard computational intelligence tools. For the latter, a learning version of Wald's sequential probability ratio test is used.

12 citations


Proceedings ArticleDOI
03 Apr 2017
TL;DR: It is shown that the proposed test controls type 1 error at any time, has good power, is robust to misspecification in the distribution generating the data, and allows quick inference in online randomized experiments.
Abstract: We propose a nonparametric sequential test that aims to address two practical problems pertinent to online randomized experiments: (i) how to do a hypothesis test for complex metrics; (ii) how to prevent type 1 error inflation under continuous monitoring. The proposed test does not require knowledge of the underlying probability distribution generating the data. We use the bootstrap to estimate the likelihood for blocks of data, followed by a mixture sequential probability ratio test. We validate this procedure on data from a major online e-commerce website. We show that the proposed test controls type 1 error at any time, has good power, is robust to misspecification in the distribution generating the data, and allows quick inference in online randomized experiments.
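
The mixture SPRT ingredient has a well-known closed form when testing a Gaussian mean with a Gaussian mixing prior; a minimal sketch of that ingredient follows. The paper's bootstrap block-likelihood estimation is not reproduced here, and all parameters (theta0, sigma2, tau2, alpha) are illustrative assumptions.

```python
import numpy as np

def msprt_gaussian(x, theta0=0.0, sigma2=1.0, tau2=1.0, alpha=0.05):
    """Mixture SPRT for a Gaussian mean with a N(theta0, tau2) mixing prior.

    Returns the first n at which the mixture likelihood ratio crosses
    1/alpha (reject H0: mean = theta0), or None if it never does.
    """
    n = np.arange(1, len(x) + 1)
    xbar = np.cumsum(x) / n
    lam = np.sqrt(sigma2 / (sigma2 + n * tau2)) * np.exp(
        n ** 2 * tau2 * (xbar - theta0) ** 2 / (2 * sigma2 * (sigma2 + n * tau2))
    )
    hits = np.nonzero(lam >= 1.0 / alpha)[0]
    return int(n[hits[0]]) if hits.size else None

rng = np.random.default_rng(3)
print(msprt_gaussian(rng.normal(0.3, 1.0, 2000)))   # true effect 0.3: rejects early
print(msprt_gaussian(rng.normal(0.0, 1.0, 2000)))   # H0 true: usually None
```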

11 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: It is demonstrated how the proposed cluster-based methodology can be successfully applied for anomaly detection on a marine diesel engine in operation, with a vast reduction in computation time compared to the original framework.
Abstract: In this paper we propose a cluster-based version of the anomaly detection methodology based on signal reconstruction using Auto-Associative Kernel Regression (AAKR) combined with residual analysis using the Sequential Probability Ratio Test (SPRT). We demonstrate how the proposed cluster-based methodology can be successfully applied for anomaly detection on a marine diesel engine in operation. Furthermore, we demonstrate the vast reduction in computation time compared to the original framework, and discuss other possible advantages and disadvantages of the proposed methodology.
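
A minimal sketch of the AAKR-plus-SPRT chain, assuming a toy three-sensor system: each observation is reconstructed from healthy-state memory vectors, and an SPRT then tests for a mean shift in the residuals. The kernel bandwidth, shift size, and thresholds are illustrative, and the paper's clustering step is omitted.

```python
import numpy as np

def aakr_reconstruct(query, memory, h=1.0):
    """Auto-Associative Kernel Regression: reconstruct an observation as a
    Gaussian-kernel-weighted average of healthy-state memory vectors."""
    d2 = np.sum((memory - query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    return (w @ memory) / w.sum()

def sprt_mean_shift(residuals, m1, sigma, a=np.log(19)):
    """SPRT on residuals: H0 zero mean vs H1 mean m1, common variance."""
    s = np.cumsum((m1 / sigma ** 2) * (residuals - m1 / 2))
    if s.max() >= a:
        return "anomaly"
    return "healthy" if s.min() <= -a else "undecided"

rng = np.random.default_rng(4)
memory = rng.normal(0.0, 1.0, (500, 3))      # snapshots from healthy operation
test = rng.normal(0.8, 1.0, (60, 3))         # faulty data: all sensors drifted
resid = np.array([(x - aakr_reconstruct(x, memory))[0] for x in test])
print(sprt_mean_shift(resid, m1=0.5, sigma=1.0))
```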

10 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: The consensus+innovations sequential probability ratio test (ciSPRT) is generalized for arbitrary binary hypothesis tests and a robust version is developed.
Abstract: This paper addresses the problem of sequential binary hypothesis testing in a multi-agent network to detect a random signal in non-Gaussian noise. To this end, the consensus+innovations sequential probability ratio test (ciSPRT) is generalized for arbitrary binary hypothesis tests and a robust version is developed. Simulations are performed to validate the performance of the proposed algorithms in terms of the average run length (ARL) and the error probabilities.

9 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: Three robust extensions of the Consensus+Innovations Sequential Probability Ratio Test (CISPRT) are developed, namely the Median-CISPRT, the M-CISPRT, and the Myriad-CISPRT, and they are validated in a shift-in-mean as well as a change-in-variance test.
Abstract: We study the problem of sequential binary hypothesis testing in a distributed multi-sensor network in non-Gaussian noise. To this end, we develop three robust extensions of the Consensus+Innovations Sequential Probability Ratio Test (CISPRT), namely, the Median-CISPRT, the M-CISPRT, and the Myriad-CISPRT, and validate their performance in a shift-in-mean as well as a change-in-variance test. Simulations show the superiority of the proposed algorithms over the alternative R-CISPRT.

9 citations


Journal ArticleDOI
TL;DR: Simulations of fault detection and identification on the sensors and components in the reactor coolant system of the Qinshan NPP are carried out, and the results demonstrate the effectiveness of the proposed comprehensive diagnosis system (CDS).

8 citations


Journal ArticleDOI
TL;DR: It is shown that verification of AA-diagnosability is equivalent to verification of the termination of the cumulative sum procedure for hidden Markov models, and that, for a specific class of SDES called fault-immediate systems, the sequential probability ratio test (SPRT) minimizes the expected number of observable events required to distinguish between the normal and faulty modes.
Abstract: Stochastic discrete event systems (SDES) are systems whose evolution is described by the occurrence of a sequence of events, where each event has a defined probability of occurring from each state. The diagnosability problem for SDES is the problem of determining the conditions under which occurrences of a fault can be detected in finite time with arbitrarily high probability. Earlier work (IEEE Trans Autom Control 50(4):476–492, 2005) introduced a class of SDES together with two definitions of stochastic diagnosability, called A- and AA-diagnosability, and reported a necessary and sufficient condition for A-diagnosability, but only a sufficient condition for AA-diagnosability. In this paper, we provide a condition that is both necessary and sufficient for determining whether or not an SDES is AA-diagnosable. We also show that verification of AA-diagnosability is equivalent to verification of the termination of the cumulative sum (CUSUM) procedure for hidden Markov models, and that, for a specific class of SDES called fault-immediate systems, the sequential probability ratio test (SPRT) minimizes the expected number of observable events required to distinguish between the normal and faulty modes.
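
For reference, the CUSUM procedure whose termination is invoked above has a one-line recursion. The sketch shows the generic i.i.d. version (the paper's HMM-based verification is more involved), with an illustrative Gaussian normal-to-faulty mode change.

```python
import numpy as np

def cusum(llrs, h):
    """CUSUM recursion S_k = max(0, S_{k-1} + llr_k); signal when S_k >= h."""
    s = 0.0
    for k, z in enumerate(llrs, start=1):
        s = max(0.0, s + z)
        if s >= h:
            return k                 # fault declared at observation k
    return None                      # procedure did not terminate

# Example: observations switch from N(0,1) to N(1,1) at step 100, mimicking a
# normal-to-faulty mode change; per-sample LLR = log N(x;1,1) - log N(x;0,1).
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 200)])
print(cusum(x - 0.5, h=5.0))         # detection shortly after step 100
```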

7 citations


Proceedings ArticleDOI
Jun Wu, Tiecheng Song, Cong Wang, Yue Yu, Miao Liu, Jing Hu
01 Sep 2017
TL;DR: A general SSDF attack model is developed, and a robust data fusion scheme named robust weighted sequential probability ratio test (RWSPRT) is proposed that can deal with various attack probabilities; it is more robust than traditional data fusion techniques while requiring fewer samples.
Abstract: Cooperative spectrum sensing is one of the key technologies to accurately detect primary user (PU) activity in cognitive radio networks (CRNs). However, collaboration among multiple users gives malicious users (MUs) an opportunity to launch spectrum sensing data falsification (SSDF) attacks. Various approaches have been proposed to mitigate the negative effect of SSDF attacks, but many of them rest on strong assumptions, such as MUs being in the minority, and need more decision samples. In this paper, we develop a general SSDF attack model. We further propose a robust data fusion scheme, named robust weighted sequential probability ratio test (RWSPRT), which can deal with various attack probabilities. In the proposed RWSPRT, the reputation value (RV) of each secondary user (SU), which reflects its record of correct decisions, is integrated into the weight coefficients of the weighted sequential probability ratio test (WSPRT) to improve the performance of cooperative spectrum sensing. Simulation results show that RWSPRT is more robust than traditional data fusion techniques while requiring fewer samples, even when a large number of MUs exist in CRNs.
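
A rough sketch of the reputation-weighted fusion idea, assuming Bernoulli 1-bit sensing reports and a simple majority-agreement reputation update; the paper's exact RV scheme is not reproduced, and all probabilities and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n_su, p1, p0 = 10, 0.6, 0.4          # P(local bit = 1) under H1 / under H0
mal = np.zeros(n_su, dtype=bool)
mal[:4] = True                       # four malicious users mount SSDF attacks
rep = np.ones(n_su)                  # reputation value (RV) per secondary user
s, a = 0.0, np.log(199)              # running weighted LLR, ~0.5% thresholds

for t in range(1, 201):              # H1 (PU present) is true in this run
    bits = rng.random(n_su) < p1                 # honest 1-bit sensing reports
    bits[mal] = rng.random(mal.sum()) < 0.5      # attackers report at random
    llr = np.where(bits, np.log(p1 / p0), np.log((1 - p1) / (1 - p0)))
    s += (rep / rep.max()) @ llr                 # reputation-weighted fusion
    agree = bits == (bits.mean() > 0.5)          # agreement with majority vote
    rep = (rep + np.where(agree, 0.1, -0.1)).clip(0.1)
    if abs(s) >= a:
        print("decide", "H1" if s > 0 else "H0", "after", t, "rounds")
        break
```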

Posted Content
TL;DR: In this paper, the adaptive sequential probability ratio test (Ada-SPRT) framework is used to solve the binary hypothesis testing problem to determine the true label of a single object.
Abstract: In this paper, we aim at solving a class of multiple testing problems under the Bayesian sequential decision framework. Our motivating application comes from binary labeling tasks in crowdsourcing, where the requestor needs to simultaneously decide which worker to choose to provide the label and when to stop collecting labels under a certain budget constraint. We start with the binary hypothesis testing problem to determine the true label of a single object, and provide an optimal solution by casting it under the adaptive sequential probability ratio test (Ada-SPRT) framework. We characterize the structure of the optimal solution, i.e., the optimal adaptive sequential design, which minimizes the Bayes risk through the log-likelihood ratio statistic. We also develop a dynamic programming algorithm that can efficiently approximate the optimal solution. For the multiple testing problem, we further propose to adopt an empirical Bayes approach for estimating class priors and show that our method has an average loss that converges to the minimal Bayes risk under the true model. Experiments on both simulated and real data show the robustness of our method and its superiority in labeling accuracy as compared to several other recently proposed approaches.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper develops a network detection method that uses the slope of a linear regression fit as a test statistic for detecting a point radioactive source within a field of detectors, and determines that the detection method using the linear regression fit has slightly better overall performance than the SPRT method.
Abstract: Detection of radioactive sources is an important capability that has led to the deployment of networks for detecting and localizing low-level, hazardous radiation sources. It is generally expected that such networks outperform the individual detectors by intelligently fusing information from several dispersed sensors. In this paper, we develop a network detection method that uses the slope of a linear regression fit as a test statistic for detecting a point radioactive source within a field of detectors. In our regression model, we compute a least-squares linear fit between the average radiation counts at the detectors and the inverse-squared distances of known detector locations to an estimated source location. We show that the slope of this regression fit is an estimate of the source intensity and can be used as a threshold for source detection purposes. We compare the performance of our proposed detection method with that of a fusion-based Sequential Probability Ratio Test (SPRT) method. For performance analyses, two datasets from the Domestic Nuclear Detection Office's Intelligence Radiation Sensors Systems (IRSS) outdoor tests are used. Each of these tests consists of several runs of a single radioactive source moving in and out of a detector network. We present receiver operating characteristic (ROC) curves and optimal threshold values for the performance of each detection method, and determine that our detection method using the linear regression fit has slightly better overall performance than the SPRT method.
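
The regression step is simple to sketch: with the source location treated as known here (the paper estimates it), the mean counts at detector i are modeled as background plus intensity divided by squared distance, and the fitted slope estimates the intensity. The geometry, count rates, and detection threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
detectors = rng.uniform(0, 100, (20, 2))      # detector positions (meters)
source = np.array([40.0, 60.0])               # (estimated) source location
A, bkg = 5e4, 10.0                            # source intensity, background
d2 = np.sum((detectors - source) ** 2, axis=1)
counts = rng.poisson(bkg + A / d2)            # average counts at each detector
slope, intercept = np.polyfit(1.0 / d2, counts, 1)   # least-squares linear fit
print(f"fitted slope (source intensity estimate): {slope:.0f}")
# Declare 'source present' when the slope exceeds a calibrated threshold.
```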

Journal Article
TL;DR: In this article, the authors discuss new theoretical approaches to the search for methods of assessing the safety space of civil aviation activity; they do not, however, consider the safety level of the air transport system.
Abstract: Increasing the safety level of civil aviation is one of the principal objectives of world air transport development. The present paper discusses new theoretical approaches to the search for methods of assessing the "safety space" of civil aviation activity. Special attention is paid to an effective test proposed by A. Wald.

Proceedings ArticleDOI
Wenyu Wang, Hong Wan
03 Dec 2017
TL;DR: A sequential procedure for Multi-Objective Ranking and Selection (MOR&S) problems is proposed that identifies the Pareto front with a guaranteed probability of correct selection (PCS), using test statistics built upon the generalized sequential probability ratio test (GSPRT).
Abstract: In this paper, we introduce a sequential procedure for Multi-Objective Ranking and Selection (MOR&S) problems that identifies the Pareto front with a guaranteed probability of correct selection (PCS). The procedure has four notable features: 1) test statistics built upon the generalized sequential probability ratio test (GSPRT); 2) an indifference-zone-free formulation, so the new procedure eliminates the need for an indifference-zone parameter; 3) asymptotic optimality, as the GSPRT achieves asymptotically the shortest expected sample size among all sequential tests; 4) distributional generality, as the procedure uses the empirical likelihood for generally distributed observations. A numerical evaluation demonstrates the efficiency of the new procedure.

Proceedings ArticleDOI
01 Sep 2017
TL;DR: The sequential probability ratio test, a classical statistical sampling scheme, is adapted in this work to solve a multi-class hypothesis testing problem, and the computational cost is reduced significantly without sacrificing the performance of the underlying system.
Abstract: Human action recognition from video sequences is a challenging topic in computer vision research. In recent years, many studies have explored the use of deep learning representations to consistently improve the analysis accuracy. Meanwhile, designing a fast and reliable framework is becoming increasingly important given the exponential growth of video data collected for many purposes (e.g. public security, entertainment, and early medical diagnosis). In order to design a more efficient automatic human action annotation method, the sequential probability ratio test, a classical statistical sampling scheme, is adapted to solve a multi-class hypothesis testing problem in our work. With the proposed algorithm, the computational cost is reduced significantly without sacrificing the performance of the underlying system. Experimental results based on the UCF101 data set demonstrate the efficiency of the framework compared to a fixed sampling scheme.
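
A sketch of a multi-class sequential stopping rule in this spirit: accumulate per-frame log class-likelihoods and stop as soon as one class's running posterior dominates. The posterior-threshold rule and the simulated classifier scores are assumptions for illustration, not the authors' exact sampling scheme.

```python
import numpy as np

def sequential_action_label(frame_logprobs, threshold=0.95):
    """Sample frames one at a time and stop once one class's running posterior
    (accumulated per-frame log-likelihoods, uniform prior) dominates."""
    s = np.zeros(frame_logprobs.shape[1])
    t = 0
    for t, lp in enumerate(frame_logprobs, start=1):
        s += lp
        post = np.exp(s - s.max())
        post /= post.sum()
        if post.max() >= threshold:
            break                              # confident enough: stop early
    return int(s.argmax()), t

# Toy stand-in for per-frame classifier scores over 10 action classes; the
# true class (index 3) receives slightly higher log-likelihoods on average.
rng = np.random.default_rng(8)
lp = rng.normal(0.0, 1.0, (300, 10))
lp[:, 3] += 0.4
print(sequential_action_label(lp))             # e.g. (3, k) with k << 300
```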

Journal ArticleDOI
TL;DR: Through simulations and theoretical derivations, it is demonstrated that the SPRT on average requires fewer samples to achieve Type I and Type II error rates comparable to those of the current fixed-sample binomial test.

Proceedings Article
01 Dec 2017
TL;DR: A new algorithm is presented that combines the bootstrap and the generalized sequential probability ratio test; it retains the beneficial properties of sequential tests in terms of the expected number of samples and can be useful for applications where making observations is expensive or time critical.
Abstract: A new algorithm is presented that combines the bootstrap and the generalized sequential probability ratio test. The latter replaces all unknown parameters with suitable estimates so that the test statistic is subject to uncertainty. The question of how to choose the decision thresholds for the generalized sequential probability ratio test such that it fulfills given constraints on the error probabilities is still open. We propose to address this problem not by adjusting the thresholds, but by bootstrapping the estimates of the unknown parameters and constructing confidence intervals for the test statistic. The stopping rule of the test is then defined in terms of this confidence interval instead of the test statistic itself. The proposed procedure is reliable and admits the beneficial properties of sequential tests in terms of the expected number of samples. It can hence be useful for applications where making observations is expensive or time critical, as is often the case in Internet-of-Things, data analytics or wireless communications.
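
A sketch of the proposed stopping rule under an assumed concrete model (a Gaussian mean test with unknown variance): plug the variance MLE into the GSPRT statistic, bootstrap a confidence interval for it, and stop only when the entire interval clears a threshold. The thresholds, CI level, and minimal sample size below are illustrative choices, not the paper's settings.

```python
import numpy as np

def gsprt_stat(x, mu0, mu1):
    """Generalized LLR for a Gaussian mean test, variance replaced by its MLE."""
    v = max(x.var(), 1e-9)
    return ((x - mu0) ** 2 - (x - mu1) ** 2).sum() / (2 * v)

def bootstrap_sequential(stream, mu0=0.0, mu1=0.5, a=np.log(19), B=300):
    rng = np.random.default_rng(9)
    buf = []
    for obs in stream:
        buf.append(obs)
        if len(buf) < 10:                        # minimal sample to bootstrap
            continue
        arr = np.array(buf)
        stats = [gsprt_stat(rng.choice(arr, arr.size, replace=True), mu0, mu1)
                 for _ in range(B)]
        lo, hi = np.percentile(stats, [2.5, 97.5])
        if lo >= a:                              # whole CI above upper threshold
            return "H1", len(buf)
        if hi <= -a:                             # whole CI below lower threshold
            return "H0", len(buf)
    return "undecided", len(buf)

rng = np.random.default_rng(10)
print(bootstrap_sequential(rng.normal(0.5, 1.0, 500)))   # data favor H1
```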

Journal ArticleDOI
26 Jul 2017
TL;DR: Experimental results show that the proposed elevator fault diagnosis method based on the Sequential Probability Ratio Test has high accuracy in practical applications, which is important for improving the performance of fault diagnosis for the elevator mechanical system.
Abstract: An elevator fault diagnosis method based on the Sequential Probability Ratio Test (SPRT) is proposed in this paper. In order to verify the effectiveness of the method, a fault diagnosis experiment for the elevator mechanical system was designed. Firstly, the wavelet transform is used to filter the noise of the vibration signal collected in the experiment. Then the kurtosis value of the filtered signal is extracted as the index representing the practical status of the elevator. Finally, the SPRT algorithm is used to diagnose faults of the elevator mechanical system. Experimental results show that this method has high accuracy in practical applications, which is important for improving the performance of fault diagnosis for the elevator mechanical system.
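
The diagnosis chain (denoise, extract kurtosis, run an SPRT on the feature) can be sketched as follows; a moving-average filter stands in for the paper's wavelet denoising, and the Gaussian models for healthy versus faulty kurtosis are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis, norm

def kurtosis_feature(signal, win=5):
    """Denoise (moving average stands in for wavelet filtering), then return
    the excess kurtosis of the segment as the condition index."""
    smooth = np.convolve(signal, np.ones(win) / win, mode="valid")
    return kurtosis(smooth)

rng = np.random.default_rng(11)
segments = [rng.normal(0, 1, 2000) for _ in range(30)]      # vibration segments
for seg in segments[15:]:                                    # fault from seg 16
    seg[rng.integers(0, 2000, 20)] += rng.normal(0, 8, 20)   # impulsive shocks

s, a = 0.0, np.log(19)
for i, seg in enumerate(segments, start=1):
    k = kurtosis_feature(seg)
    s += norm.logpdf(k, 2.0, 1.0) - norm.logpdf(k, 0.0, 1.0)  # H1 vs H0 model
    if abs(s) >= a:
        print("segment", i, "-> decide", "fault" if s > 0 else "healthy")
        s = 0.0                                   # restart the test and go on
```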

Journal ArticleDOI
TL;DR: In this paper, the authors develop truncated sequential probability ratio test (SPRT) procedures for multivariate normal data within a framework that includes a general cost structure and arbitrary mean and covariance structures.
Abstract: We develop truncated sequential probability ratio test (SPRT) procedures for multivariate normal data. The framework includes a general cost structure and arbitrary mean and covariance structures. ...

Journal ArticleDOI
TL;DR: In this paper, the SPRT method for the Weibull life distribution is derived in order to enable reliability compliance tests for gearboxes, which can significantly save testing time and reduce costs.
Abstract: Assumptions accompanying exponential failure models are often not met in the standard sequential probability ratio test (SPRT) of many products; for most mechanical products, the Weibull distribution fits the life distribution better. The SPRT method for the Weibull life distribution is derived in this paper, which enables the implementation of reliability compliance tests for gearboxes. Using historical failure data and condition monitoring data, a life prediction model based on a hidden Markov model (HMM) is established to describe the deterioration process of gearboxes; the predicted remaining useful life (RUL) is then transformed into failure data that is used in the SPRT for further analysis, which can significantly save testing time and reduce costs. An explicit expression for the distribution of the RUL is derived in terms of the posterior probability that the system is in the unhealthy state. The predicted and actual values of the residual life are compared, and the average relative error is 3.90 %, which verifies the validity of the proposed residual life prediction approach. A comparison with other life prediction and SPRT methods is given to elucidate the efficacy of the proposed approach.
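
For a known Weibull shape parameter, the per-failure log-likelihood ratio for testing the scale has a closed form, which yields a direct SPRT. The sketch below uses illustrative producer/consumer risks and scale values; the paper's HMM-based RUL transformation is not reproduced.

```python
import numpy as np

def weibull_sprt(times, shape, eta0, eta1, alpha=0.05, beta_risk=0.10):
    """SPRT on the Weibull scale with known shape: H0 eta = eta0 (acceptable)
    vs H1 eta = eta1 < eta0 (rejectable). Per-failure LLR in closed form:
    shape*log(eta0/eta1) + t**shape * (eta0**-shape - eta1**-shape)."""
    a = np.log((1 - beta_risk) / alpha)          # reject (decide H1) threshold
    b = np.log(beta_risk / (1 - alpha))          # accept (decide H0) threshold
    s, k = 0.0, 0
    for k, t in enumerate(times, start=1):
        s += shape * np.log(eta0 / eta1) + t ** shape * (eta0 ** -shape - eta1 ** -shape)
        if s >= a:
            return "reject: eta = eta1", k
        if s <= b:
            return "accept: eta = eta0", k
    return "continue testing", k

rng = np.random.default_rng(12)
lives = 900.0 * rng.weibull(2.0, size=100)       # true scale 900 h, shape 2
print(weibull_sprt(lives, shape=2.0, eta0=1000.0, eta1=600.0))
```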

Posted Content
TL;DR: In this paper, the sampling process for general Markov random fields (MRFs) is shown to be optimal in the asymptote of large numbers of random variables, where the objective is to sequentially sample the random variables (one-at-a-time) such that the true MRF model can be detected with the fewest number of samples, while in parallel the decision reliability is controlled.
Abstract: Consider $n$ random variables forming a Markov random field (MRF). The true model of the MRF is unknown, and it is assumed to belong to a binary set. The objective is to sequentially sample the random variables (one-at-a-time) such that the true MRF model can be detected with the fewest number of samples, while in parallel, the decision reliability is controlled. The core element of an optimal decision process is a rule for selecting and sampling the random variables over time. Such a process, at every time instant and adaptively to the collected data, selects the random variable that is expected to be most informative about the model, rendering an overall minimized number of samples required for reaching a reliable decision. The existing studies on detecting MRF structures generally sample the entire network at the same time and focus on designing optimal detection rules without regard to the data-acquisition process. This paper characterizes the sampling process for general MRFs, which, in conjunction with the sequential probability ratio test, is shown to be optimal in the asymptote of large $n$. The critical insight in designing the sampling process is devising an information measure that captures the decisions' inherent statistical dependence over time. Furthermore, when the MRFs can be modeled by acyclic probabilistic graphical models, the sampling rule is shown to take a computationally simple form. Performance analysis for the general case is provided, and the results are interpreted in several special cases: Gaussian MRFs, non-asymptotic regimes, connection to Chernoff's rule to controlled (active) sensing, and the problem of cluster detection.

Journal ArticleDOI
Yeon-Jea Cho, Dong-Jo Park
TL;DR: This paper uses a practical detection method that optimally approximates the ideal log-likelihood ratio with respect to its statistical mean, and designs a sequential detector in a robust manner to guarantee predefined false-alarm and missed-detection constraints for both the ideal and practical SPRTs.
Abstract: This paper studies signal detection methods in cognitive radios based on both the well-known likelihood ratio test (LRT) and the sequential probability ratio test (SPRT), considering instantaneously nonidentically distributed samples. Since it is rather impractical to perform the ideal test that fully reflects such a change of a sample statistic, we use a practical detection method that optimally approximates the ideal log-likelihood ratio with respect to its statistical mean. In the LRT scenario, we find that there exists a nonnegligible performance gap between the ideal and practical tests, and the related experiments and theoretical analyses are explored. Furthermore, we propose how to design a sequential detector in a robust manner to guarantee predefined false-alarm and missed-detection constraints for both the ideal and practical SPRTs.

Journal ArticleDOI
TL;DR: In this article, the generalized residuals of correctly specified predictive density models are independent and identically distributed uniform, and the proposed sequential test examines the hypotheses of serial independence and uniformity in two stages.
Abstract: We develop a specification test of predictive densities, based on the fact that the generalized residuals of correctly specified predictive density models are independent and identically distributed uniform. The proposed sequential test examines the hypotheses of serial independence and uniformity in two stages, wherein the first-stage test of serial independence is robust to violation of uniformity. The approach of the data-driven smooth test is employed to construct the test statistics. The asymptotic independence between the two stages facilitates proper control of the overall type I error of the sequential test. We derive the asymptotic null distribution of the test, which is free of nuisance parameters, and we establish its consistency. Monte Carlo simulations demonstrate excellent finite sample performance of the test. We apply this test to evaluate some commonly used models of stock returns.
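
The probability integral transform (PIT) idea underlying the test is easy to sketch: under a correct predictive density, u_t = F(y_t | past) is i.i.d. Uniform(0,1). The stand-in statistics below (lag-1 autocorrelation for stage 1, a KS test for stage 2) replace the paper's data-driven smooth tests and are for illustration only.

```python
import numpy as np
from scipy.stats import norm, kstest

# Heavy-tailed t(5) returns are (wrongly) modeled as N(0,1), so the PIT-based
# checks should flag the misspecification.
rng = np.random.default_rng(13)
y = rng.standard_t(df=5, size=2000)            # "true" returns: heavy tails
u = norm.cdf(y)                                # PIT under the N(0,1) model

z = norm.ppf(np.clip(u, 1e-9, 1 - 1e-9))       # normalized residuals
r1 = np.corrcoef(z[:-1], z[1:])[0, 1]          # stage 1: serial independence
print(f"lag-1 autocorrelation = {r1:.3f} (near 0 -> no serial dependence)")
print(kstest(u, "uniform"))                    # stage 2: uniformity (rejects)
```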

Proceedings ArticleDOI
01 Dec 2017
TL;DR: A consensus algorithm is presented which guarantees asymptotic convergence to the true hypothesis for the Gaussian models, and which involves the exchange of information, i.e., the decisions of the observers.
Abstract: We consider the problem of detecting which Gaussian model generates an observed time series data. We consider as possible generative models two linear systems driven by white Gaussian noise with Gaussian initial conditions. We also consider two collaborating observers. The observers observe a function of the state of the systems. Using these observations, the aim is to find which one of the two Gaussian models has generated the observations. For each observer we formulate a sequential hypothesis testing problem. Each observer computes its own likelihood ratio based on its own observations. Using the likelihood ratio, each observer performs sequential probability ratio test (SPRT) to arrive at its decision on the hypothesis. Taking into account the random and asymmetric stopping times of the two observers, we present a consensus algorithm which guarantees asymptotic convergence to the true hypothesis. The consensus algorithm involves exchange of information, i.e., the decision of the observers. Through simulations, the “value” of the information exchanged, probability of error and average time to consensus are computed.

Journal Article
TL;DR: In this paper, the authors explore continuous sequential monitoring when they do not allow the null hypothesis to be rejected until a minimum number of observed events have occurred, and also evaluate continuous sequential analysis with a delayed start until a certain sample size has been attained.
Abstract: The CDC Vaccine Safety Datalink project has pioneered the use of near real-time post-market vaccine safety surveillance for the rapid detection of adverse events. Doing weekly analyses, continuous sequential methods are used, allowing investigators to evaluate the data near-continuously while still maintaining the correct overall alpha level. With continuous sequential monitoring, the null hypothesis may be rejected after only one or two adverse events are observed. In this paper, we explore continuous sequential monitoring when we do not allow the null to be rejected until a minimum number of observed events have occurred. We also evaluate continuous sequential analysis with a delayed start until a certain sample size has been attained. Tables with exact critical values, statistical power and the average times to signal are provided. We show that, with the first option, it is possible to both increase the power and reduce the expected time to signal, while keeping the alpha level the same. The second option is only useful if the start of the surveillance is delayed for logistical reasons, when there is a group of data available at the first analysis, followed by continuous or near-continuous monitoring thereafter.

Journal ArticleDOI
TL;DR: In this article, the joint Laplace transform of the sequential probability ratio test and the resulting stopped random walk process for the negative exponential model was derived by solving a related difference equation.
Abstract: In this article, we derive the joint Laplace transform of the sequential probability ratio test (SPRT) and the resulting stopped random walk process for the negative exponential model. The Laplace transform is derived by solving a related difference equation. This technique is novel because it only takes advantage of the Markov structure and does not rely on the typical martingale methods used for deriving the Laplace transform of other SPRTs. The joint Laplace transform provides the joint distribution of the SPRT and the associated stopped process, which is a new result. Even the marginal distributions were hitherto unknown.

Posted Content
TL;DR: In this paper, the authors consider sequential detection based on quantized data in the presence of an eavesdropper and characterize the asymptotic performance of the MSPRT in terms of the expected sample size as a function of the vanishing error probabilities.
Abstract: We consider sequential detection based on quantized data in the presence of an eavesdropper. Stochastic encryption is employed as a countermeasure that flips the quantization bits at each sensor according to certain probabilities, and the flipping probabilities are known only to the legitimate fusion center (LFC), not to the eavesdropping fusion center (EFC). As a result, the LFC employs the optimal sequential probability ratio test (SPRT) for sequential detection whereas the EFC employs a mismatched SPRT (MSPRT). We characterize the asymptotic performance of the MSPRT in terms of the expected sample size as a function of the vanishing error probabilities. We show that when the detection error probabilities are set to be the same at the LFC and EFC, every symmetric stochastic encryption is ineffective in the sense that it leads to the same expected sample size at the LFC and EFC. Next, in the asymptotic regime of small detection error probabilities, we show that every stochastic encryption degrades the performance of quantized sequential detection at the LFC by increasing the expected sample size, and the expected sample size required at the EFC is no smaller than that required at the LFC. The optimal stochastic encryption is then investigated in the sense of maximizing the difference between the expected sample sizes required at the EFC and LFC. Although this optimization problem is nonconvex, we show that if the acceptable tolerance of the increase in the expected sample size at the LFC induced by the stochastic encryption is small enough, then the globally optimal stochastic encryption can be obtained analytically; moreover, the optimal scheme only flips one type of quantized bit (i.e., 1 or 0) and keeps the other type unchanged.

Journal ArticleDOI
TL;DR: The seqtest package, as presented in this paper, provides functions to perform sample size determination and a sequential triangular test for the expectation in one and two samples, probabilities in one and two samples, and ...
Abstract: The R package seqtest provides functions to perform sample size determination and a sequential triangular test for the expectation in one and two samples, probabilities in one and two samples, and ...

Proceedings ArticleDOI
03 Dec 2017
TL;DR: The controlled Morris method (CMM) is proposed, which acts in a sequential manner to keep the computational effort to a minimum; its sequential probability ratio test-based multiple testing procedure identifies the factors with significant main and/or interaction effects while controlling the Type I and Type II familywise error rates at desired levels.
Abstract: Morris's elementary effects method (MM) is known as a model-free factor screening approach that is especially well-suited when the number of factors is large or when the computer model is computationally expensive to run. In this paper, we propose the controlled Morris method (CMM), which acts in a sequential manner to keep the computational effort to a minimum. The sequential probability ratio test-based multiple testing procedure adopted by CMM makes it possible to identify the factors with significant main and/or interaction effects while controlling the Type I and Type II familywise error rates at desired levels. A numerical example is provided to demonstrate the efficacy and efficiency of CMM.
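
The elementary effects being screened are simple finite differences. The sketch below computes them from random base points (a simplified radial variant, not the paper's trajectory design or its SPRT-based sequential control); the toy model and all settings are illustrative.

```python
import numpy as np

def elementary_effects(f, n_factors, n_points=20, delta=0.1, seed=14):
    """Elementary effects from random base points in [0,1)^k:
    EE_i = (f(x + delta * e_i) - f(x)) / delta. A large mean |EE| flags an
    important factor; a large std(EE) flags nonlinearity or interactions."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_points, n_factors))
    for t in range(n_points):
        x = rng.uniform(0, 1 - delta, n_factors)
        fx = f(x)
        for i in range(n_factors):
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy model: factor 0 is linear, factor 1 is nonlinear, factor 2 is inert.
mu_star, sigma = elementary_effects(lambda x: 3 * x[0] + 10 * x[1] ** 2, 3)
print(mu_star.round(2), sigma.round(2))
```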