
Showing papers in "Sequential Analysis in 2014"


Journal ArticleDOI
TL;DR: In this paper, the authors adopt the probability maximizing approach in place of the classical minimization of the average detection delay and propose modified versions of the Shiryaev, Lorden, and Pollak performance measures.
Abstract: For the problem of sequential detection of changes, we adopt the probability maximizing approach in place of the classical minimization of the average detection delay and propose modified versions of the Shiryaev, Lorden, and Pollak performance measures. For these alternative formulations, we demonstrate that the optimum sequential detection scheme is the simple Shewhart rule. Interestingly, we can also solve problems that have remained open for many years under the classical setup, such as optimum change detection with time-varying observations or with multiple postchange probability measures. For the latter, we also offer the exact solution for Lorden's original setup involving average detection delays, for the case where the average false alarm period is within certain limits.
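
The Shewhart rule that emerges as optimal here is the simplest possible stopping rule: raise an alarm the first time a single-observation statistic crosses a fixed threshold, with no accumulation across samples. A minimal one-sided mean-shift sketch in Python (the raw observation serves as the statistic here; the paper's formulation uses per-sample likelihood ratios and the modified performance measures):

```python
import random

def shewhart_rule(observations, threshold):
    """Shewhart stopping rule: alarm at the first time a
    single-observation statistic exceeds the threshold.

    Here the statistic is the raw observation (one-sided mean shift);
    in general it would be a per-sample likelihood ratio.
    """
    for t, x in enumerate(observations, start=1):
        if x > threshold:
            return t  # first alarm time
    return None  # no alarm within the observed data

# Demo: change from N(0, 1) to N(3, 1) after 50 samples
random.seed(0)
data = [random.gauss(0, 1) for _ in range(50)] + \
       [random.gauss(3, 1) for _ in range(20)]
print(shewhart_rule(data, threshold=2.5))
```

Because the rule examines one observation at a time, its post-change detection delay is geometrically distributed, which is what makes probability-maximizing criteria natural for it.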

35 citations


Journal ArticleDOI
TL;DR: It is shown that this sequential test is asymptotically optimal in the sense that it achieves asymptotically the shortest expected sample size as the maximal type I and type II error probabilities tend to zero.
Abstract: In this paper, we consider the problem of testing two separate families of hypotheses via a generalization of the sequential probability ratio test. In particular, the generalized likelihood ratio statistic is considered and the stopping rule is the first boundary crossing of the generalized likelihood ratio statistic. We show that this sequential test is asymptotically optimal in the sense that it achieves asymptotically the shortest expected sample size as the maximal type I and type II error probabilities tend to zero.
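
For context, the classical sequential probability ratio test that this test generalizes stops at the first boundary crossing of the simple log-likelihood ratio. A sketch for two simple Gaussian hypotheses, using Wald's approximate boundaries (the paper replaces this statistic with a generalized likelihood ratio over composite families; the specific boundaries below belong to this illustration, not to the paper's construction):

```python
import math

def sprt(xs, mu0, mu1, sigma, alpha, beta):
    """Wald's SPRT for N(mu0, sigma^2) vs. N(mu1, sigma^2).

    Stops at the first crossing of the log-likelihood ratio over
    Wald's approximate boundaries A = log((1-beta)/alpha) and
    B = log(beta/(1-alpha)).
    """
    A = math.log((1 - beta) / alpha)   # accept H1 above this
    B = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        # Gaussian log-likelihood-ratio increment
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr >= A:
            return n, "accept H1"
        if llr <= B:
            return n, "accept H0"
    return len(xs), "undecided"

# Data generated under H1 terminates quickly in favor of H1:
print(sprt([1.0] * 20, 0.0, 1.0, 1.0, 0.05, 0.05))  # returns (6, 'accept H1')
```

The asymptotic optimality claimed in the paper is of the same flavor as Wald-Wolfowitz optimality for the SPRT, but it holds for composite families as the error probabilities tend to zero.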

33 citations


Journal ArticleDOI
TL;DR: In this article, the negative binomial (NB) mean is estimated with a fixed-accuracy confidence interval such that both confidence limits are positive, assuming that the thatch parameter remains known.
Abstract: Our main focus is on count data arising from statistical ecology. Anscombe (1949) emphasized negative binomial (NB) modeling for overdispersed count data. A large majority of existing methodologies, both sequential and nonsequential, were reviewed by Mukhopadhyay and Banerjee (2012). We assume that the thatch parameter remains known and revisit a sequential confidence interval estimation method for an NB mean proposed by Willson and Folks (1983) that may not always guarantee a positive lower confidence limit. Moreover, any postsampling adjustment of their lower confidence limit would compromise the preset confidence coefficient. In this article, we provide an appropriate resolution by estimating the NB mean with a fixed-accuracy confidence interval such that both confidence limits are positive. Our proposed purely sequential and two-stage estimation methodologies enjoy asymptotic consistency and asymptotic efficiency properties. Next, we consider estimating the ratio of two NB means under a two-s...

24 citations


Journal ArticleDOI
TL;DR: In this paper, a numerical method to compute the generalized Shiryaev-Roberts (GSR) detection procedure's pre-change run length distribution is presented, which is based on the integral equations approach and uses the collocation framework with the basis functions chosen to exploit a certain change-of-measure identity and a specific martingale property of the GSR procedure's detection statistic.
Abstract: Change-of-measure is a powerful technique in wide use across statistics, probability, and analysis. Particularly known as Wald's likelihood ratio identity, the technique enabled the proof of a number of exact and asymptotic optimality results pertaining to the problem of quickest change-point detection. Within the latter problem's context we apply the technique to develop a numerical method to compute the generalized Shiryaev–Roberts (GSR) detection procedure's pre-change run length distribution. Specifically, the method is based on the integral equations approach and uses the collocation framework with the basis functions chosen to exploit a certain change-of-measure identity and a specific martingale property of the GSR procedure's detection statistic. As a result, the method's accuracy and robustness improve substantially, even though the method's theoretical rate of convergence is shown to be merely quadratic. A tight upper bound on the method's error is supplied as well. The method is not re...

21 citations


Journal ArticleDOI
TL;DR: In this article, the authors revisited sequential estimation of the autoregressive parameter β in a first-order AR model and constructed a sequential confidence region for a parameter vector θ in a TAR(1) model.
Abstract: This article revisits sequential estimation of the autoregressive parameter β in a first-order autoregressive (AR(1)) model and construction of a sequential confidence region for a parameter vector θ in a first-order threshold autoregressive (TAR(1)) model. First, to resolve a theoretical conjecture raised in Sriram (1986), we provide a comprehensive numerical study that strongly suggests that the regret in using a sequential estimator of β can be significantly negative for many heavy-tailed error distributions and even for normal errors. Second, to investigate yet another conjecture about the limiting distribution of a sequential pivotal quantity for θ in a TAR(1) model, we conduct an extensive numerical study that strongly suggests that the sequential confidence region has much better coverage probability than that of a fixed sample counterpart, regardless of whether the θ values are inside, on, or near the boundary of the ergodic region of the series. These highlight the usefulness of sequential s...

19 citations


Journal ArticleDOI
TL;DR: In this paper, an information approach method based on the Schwarz information criterion (SIC) was proposed to detect changes in the parameters of a skew normal distribution, which can model skewed data in many applied fields such as finance, economics and medical research.
Abstract: The skew normal distribution is an extension of the normal distribution allowing for the presence of skewness. Such a distribution can model skewed data in many applied fields such as finance, economics, and medical research. The skew distribution family has been studied extensively since it was first introduced by Azzalini (1985). However, little work has been done on the change-point problem for this distribution family. In this article, we propose an information approach based on the Schwarz information criterion (SIC) to detect changes in the parameters of a skew normal distribution. The performance of the proposed test is investigated through simulations under different changes among parameters and various sample sizes. We successfully apply the method to two Latin American emerging market data sets to detect multiple changes.
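
The SIC approach can be sketched generically: compare the criterion value of a no-change fit against the minimum over all candidate change points, charging each extra parameter its log(n) penalty. The toy below uses a normal mean-change model with known unit variance for simplicity; the paper instead fits skew normal parameters in each segment:

```python
import math

def sic_change_point(xs):
    """Single change-point detection via the Schwarz information criterion.

    Simplified model: normal data, known variance 1, only the mean may
    change.  SIC = -2 log-likelihood + (#parameters) * log(n), with
    constants common to all candidate models dropped.
    """
    n = len(xs)

    def neg2loglik(seg):  # up to an additive constant
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    sic_null = neg2loglik(xs) + 1 * math.log(n)   # one mean parameter
    best_k, best_sic = None, float("inf")
    for k in range(2, n - 1):                     # candidate change points
        sic_k = neg2loglik(xs[:k]) + neg2loglik(xs[k:]) + 2 * math.log(n)
        if sic_k < best_sic:
            best_k, best_sic = k, sic_k
    # Declare a change only if some split beats the no-change model
    return (best_k, best_sic) if best_sic < sic_null else (None, sic_null)

print(sic_change_point([0.0] * 20 + [5.0] * 20))  # change detected at k = 20
```

Multiple changes, as in the paper's market data application, are typically handled by applying such a single-change test recursively via binary segmentation.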

14 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed nonparametric procedures for testing change-point by using the ℙ-ℙ and ℚ -ℚ plots processes, which are characterized under the null hypothesis of no change and also under contiguous alternatives.
Abstract: We propose nonparametric procedures for testing change-point by using the ℙ-ℙ and ℚ-ℚ plots processes. The limiting distributions of the proposed statistics are characterized under the null hypothesis of no change and also under contiguous alternatives. We give an estimator of the change-point coefficient and obtain its strong consistency. We introduce the bootstrapped version of ℙ-ℙ and ℚ-ℚ processes, requiring the estimation of quantile density, and obtain their limiting laws. Finally, we propose and investigate the exchangeable bootstrap of the empirical ℙ-ℙ plot and ℚ-ℚ plot processes which avoids the problem of the estimation of quantile density, which is of its own interest. These results are used for calculating p-values of the proposed test statistics. Emphasis is placed on the explanation of the strong approximation methodology.

12 citations


Journal ArticleDOI
TL;DR: In this article, for a given multinomial probability vector, lower bound on P(CS), and upper bound on trials, the authors use linear programming (LP) to construct a procedure that is guaranteed to minimize the expected number of trials.
Abstract: Multinomial selection is concerned with selecting the most probable (best) multinomial alternative. The alternatives compete in a number of independent trials. In each trial, each alternative wins with an unknown probability specific to that alternative. A long-standing research goal has been to find a procedure that minimizes the expected number of trials subject to a lower bound on the probability of correct selection (P(CS)). Numerous procedures have been proposed over the past 55 years, all of them suboptimal, for the version where the number of trials is bounded. We achieve the goal in the following sense: For a given multinomial probability vector, lower bound on P(CS), and upper bound on trials, we use linear programming (LP) to construct a procedure that is guaranteed to minimize the expected number of trials. This optimal procedure may need to be randomized. We also present a mixed-integer linear program (MIP) that produces an optimal deterministic procedure. In our computational stud...

12 citations


Journal ArticleDOI
TL;DR: A Bayesian framework for sequential classification on finite lattice models is described in which response distributions are allowed to vary according to experiment, and optimal rates of convergence in classification are established.
Abstract: A Bayesian framework for sequential classification on finite lattice models is described in which response distributions are allowed to vary according to experiment. Optimal rates of convergence in classification are established. Intuitive and computationally simple experiment selection rules are proposed, and it is shown that this class of rules attains optimal rates almost surely under general conditions. A simulation study demonstrates that sequential classification can be conducted efficiently on lattices, with potentially great savings in experiment administration while maintaining high classification accuracy. This framework can be applied to adaptive testing for cognitive assessment and to other sequential classification problems such as group testing when experimental response distributions depend on pool composition.

10 citations


Journal ArticleDOI
TL;DR: In this paper, new sequential methods for multiple testing problems, based on special properties of hypothesis acceptance regions in constrained Bayesian tasks of testing hypotheses, are offered, and results of an investigation of the properties of one of these methods are given.
Abstract: New sequential methods for multiple testing problems, based on special properties of hypothesis acceptance regions in constrained Bayesian tasks of testing hypotheses, are offered. Results of an investigation of the properties of one of these methods are given. They show the consistency, simplicity, and optimality of the results obtained in the sense of the chosen criterion. The essence of the criterion is to bound from above the probability of the error of one type and to minimize the probability of the error of the second type. The validity of the relevant properties of the method is proved. Examples of testing hypotheses for sequentially obtained independent samples from the multivariate normal distribution with correlated components are presented. They show the high quality of the proffered methods. The results of the Wald sequential method are given for the examples with two hypotheses and compared with the results obtained by the proffered method.

9 citations


Journal ArticleDOI
TL;DR: In this paper, an approach based on the Schwarz information criterion (SIC) is used to detect changes in the parameters of the noncentral skew t distribution, which is successfully applied to the stock returns of several Latin American countries.
Abstract: The change-point problem for the noncentral skew t distribution is studied in this article. An approach based on the Schwarz information criterion (SIC) is used to detect changes in the parameters of this distribution. Simulations are conducted to illustrate the performance of the proposed procedure. The method is successfully applied to the stock returns of several Latin American countries.

Journal ArticleDOI
TL;DR: In this paper, a generalized likelihood ratio (GLR) control chart based on sequential sampling (the SS GLR chart) is proposed to detect a wide range of two-sided shifts in the process mean.
Abstract: Variable sampling rate (VSR) control charts have been used where the sampling rate changes as a function of the data from the process. In this article we investigate a generalized likelihood ratio (GLR) control chart based on sequential sampling (the SS GLR chart). The objective of process monitoring is assumed to be the effective detection of a wide range of two-sided shifts in the process mean. The performance of the SS GLR chart is evaluated and compared with other control charts. The SS GLR chart has much better performance than that of the fixed sampling rate GLR chart. It is also shown that the overall performance of the SS GLR chart is better than that of the variable sampling interval (VSI) GLR chart and the VSR CUSUM chart. The SS GLR chart has the additional advantage that it requires fewer parameters to be specified than other VSR charts. The optimal parameter choices are given in the article, and regression equations are provided to find the limits for the SS GLR chart.

Journal ArticleDOI
TL;DR: In this article, a Bayesian multichannel change-point detection problem is studied in the following general setting: a multidimensional stochastic process is observed; some or all of its components may experience changes in distribution, simultaneously or not.
Abstract: A Bayesian multichannel change-point detection problem is studied in the following general setting. A multidimensional stochastic process is observed; some or all of its components may experience changes in distribution, simultaneously or not. The loss function penalizes for false alarms and detection delays, and the penalty increases with each missed change-point. For wide classes of stochastic processes, with or without nuisance parameters and practically any joint prior distribution of change-points, asymptotically pointwise optimal (APO) rules are obtained, translating the classical concept of Bickel and Yahav to the sequential change-point detection. These APO rules are attractive because of their simple analytic form and straightforward computation. An application to a multidimensional autoregressive time series is shown.

Journal ArticleDOI
TL;DR: In this paper, a modified three-stage sampling procedure was proposed to construct fixed-width confidence intervals of the common variance of equicorrelated normal distributions, which substantially reduced the expected sample size compared to that of the two-stage sample sampling scheme.
Abstract: In this work, we present a modified three-stage sampling procedure to construct fixed-width confidence intervals of the common variance of equicorrelated normal distributions. We derive the exact distribution of the stopping variable of this sampling scheme and also the exact distribution of the estimator of the common variance at stopping. The modified three-stage sampling substantially reduces the expected sample size compared to that of the two-stage sampling scheme of Haner and Zacks (2013). The coverage probabilities of the proposed interval estimators are computed exactly and are compared with the coverage probabilities obtained by two-stage sampling. We derive exact formulae for the functionals of the stopping variable and the estimator of the common variance at stopping.

Journal ArticleDOI
TL;DR: In this article, a method for the detection of abrupt changes in the generating mechanisms (stochastic, deterministic, or mixed) of a time series, without any prior knowledge about them, is developed.
Abstract: A novel methodology for the quickest detection of abrupt changes in the generating mechanisms (stochastic, deterministic, or mixed) of a time series, without any prior knowledge about them, is developed. This methodology has two components: the first is a novel concept of the ε-complexity, and the second is a method for the quickest change point detection (Darkhovsky, 2013). The ε-complexity of a continuous function given on a compact segment is defined. The expression for the ε-complexity of functions with the same modulus of continuity is derived. It is found that, for the Hölder class of functions, there exists an effective characterization of the ε-complexity. The conjecture that the ε-complexity of an individual function from the Hölder class has a similar characterization is formulated. The algorithm for the estimation of the ε-complexity coefficients via finite samples of function values is described. The second conjecture that a change of the generating mechanism of a time series leads to ...

Journal ArticleDOI
TL;DR: In this article, the slippage configuration is shown to be the worst for all procedures that have a finite expected number of trials and always select the alternative with more successes, and a generalization of the key inequality in the proof to an arbitrary number of alternatives is conjectured.
Abstract: Performance guarantees for multinomial selection procedures are usually derived by finding the least favorable configuration (LFC)—the one for which the probability of correct selection is minimum outside the indifference zone—and then evaluating the procedure on that configuration. The slippage configuration has been proved to be the LFC for several procedures and has been conjectured to be the worst for some other procedures. The principal result of this article unifies and extends all previous results for two alternatives: the slippage configuration is the worst for all procedures that have a finite expected number of trials and always select the alternative with more successes. A generalization of the key inequality in the proof to an arbitrary number of alternatives is conjectured.
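
To make "evaluating a procedure on a configuration" concrete: for two alternatives and the simplest procedure (run n trials and select whichever alternative has more wins, splitting ties at random), P(CS) under a given configuration is an exact binomial sum. This baseline is only an illustration of the evaluation step, not one of the procedures analyzed in the paper:

```python
from math import comb

def p_correct_two(p, n):
    """Exact P(CS) for two alternatives: the better one wins each trial
    with probability p (> 1/2), we run n trials and pick the alternative
    with more wins, breaking a tie with a fair coin."""
    pcs = 0.0
    for k in range(n + 1):            # k = number of wins of the better alternative
        prob = comb(n, k) * p**k * (1 - p)**(n - k)
        if 2 * k > n:
            pcs += prob               # clear majority: correct selection
        elif 2 * k == n:
            pcs += 0.5 * prob         # tie: correct half the time
    return pcs

print(round(p_correct_two(0.7, 15), 4))
```

Performance guarantees of the kind described above then come from minimizing such a P(CS) expression over all configurations outside the indifference zone.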

Journal ArticleDOI
TL;DR: It is shown that when monitoring a weighted log-rank statistic, the entry and survival distributions need to be estimated at interim analyses, and a constrained boundaries approach is proposed to maintain the planned operating characteristics of a group sequential design.
Abstract: We consider the repeated group sequential testing of a survival endpoint with a time-varying treatment effect using a weighted log-rank statistic. The emphasis of this paper is on the monitoring of this statistic when information growth is nonlinear. We propose using a constrained boundaries approach to maintain the planned operating characteristics of a group sequential design. A simulation study is presented to demonstrate the operating characteristics of the method, together with a case study to illustrate the procedure. We show that when monitoring a weighted log-rank statistic, the entry and survival distributions need to be estimated at interim analyses.

Journal ArticleDOI
TL;DR: The thoughts and understanding with regard to possible negative regret in some of the point estimation problems under a time series model are laid down.
Abstract: Among other issues, Sriram and Iaci (2014) have elegantly drawn attention to the notion of “negative regret” for a number of interesting sequential minimum risk point estimation problems under a time series model. We briefly lay down our thoughts and understanding with regard to possible negative regret in some of the point estimation problems.

Journal ArticleDOI
TL;DR: In this article, the problem of Bayes sequential estimation of the unknown parameter in a particular exponential family of distributions with relative LINEX loss and fixed cost for each observation is considered, and the approximate optimal procedures are shown to be asymptotically nondeficient in the sense of Woodroofe.
Abstract: The problem of Bayes sequential estimation of the unknown parameter in a particular exponential family of distributions with relative LINEX loss and fixed cost for each observation is considered in this article. Optimal, nearly optimal, and asymptotically pointwise optimal procedures with deterministic stopping rules are derived, and the approximate optimal procedures are shown to be asymptotically nondeficient in the sense of Woodroofe (1981). In addition, a robust procedure with a deterministic stopping rule, which does not depend on the parameters of the prior distribution, is proposed, and the asymptotic second-order expansion of the corresponding Bayes risk is obtained.

Journal ArticleDOI
TL;DR: In this article, a clinical trial with three competing treatments and study designs that allocate subjects sequentially in order to maximize the power of relevant tests is considered, and two different criteria are considered: the first is to find the best treatment and the second is to order all three.
Abstract: We consider a clinical trial with three competing treatments and study designs that allocate subjects sequentially in order to maximize the power of relevant tests. Two different criteria are considered: the first is to find the best treatment and the second is to order all three. The power converges to one in an exponential rate and we find the optimal allocation that maximizes this rate by large deviation theory. For the first criterion the optimal allocation has the plausible property that it assigns a small fraction of subjects to the inferior treatment. The optimal allocation depends heavily on the unknown parameters and, therefore, in order to implement it, a sequential adaptive scheme is considered. At each stage of the trial the parameters are estimated and the next subject is allocated according to the estimated optimal allocation. We study the asymptotic properties of this design by large deviations theory and the small sample behavior by simulations. Our results demonstrate that, unlik...

Journal ArticleDOI
TL;DR: It is shown that the optimized sequential search strategy nonetheless dominates the hybrid strategy for all values of T and N and for all underlying distributions, assuming a fixed cost per prospect.
Abstract: A search strategy is introduced that combines features of both the sequential and best-of-N search strategies. This hybrid search strategy has two parameters, an acceptance threshold T and a maximum search length N. It is shown that the optimized sequential search strategy nonetheless dominates the hybrid strategy for all values of T and N and for all underlying distributions, assuming a fixed cost per prospect.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of constructing an online algorithm for tracking a conditional spatial median, a center of a multivariate distribution, and establish a nonasymptotic upper bound for the Lp-risk.
Abstract: We consider the problem of constructing an on-line (recursive) algorithm for tracking a conditional spatial median, a center of a multivariate distribution. In the one-dimensional case we also track conditional quantiles of arbitrary level. We establish a nonasymptotic upper bound for the Lp-risk of the algorithm, which is then minimized under different assumptions on the magnitude of the variation of the spatial median or quantile. We derive convergence rates for the examples we consider.
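
The recursive idea can be illustrated with the classical stochastic-approximation update for a quantile of level τ: q_{t+1} = q_t + γ_t (τ − 1{x_t < q_t}). The fixed step size below is a simplification; the article's point is precisely that the step should be tuned to the magnitude of the variation of the tracked median or quantile:

```python
def track_quantile(stream, tau, step=0.1):
    """On-line (recursive) tracking of the tau-quantile of a data stream.

    Update: q <- q + step * (tau - 1{x < q}).  With tau = 0.5 this
    tracks the median; a fixed step is a simplifying assumption here.
    """
    q = 0.0                       # arbitrary starting point
    path = []
    for x in stream:
        q += step * (tau - (1.0 if x < q else 0.0))
        path.append(q)
    return path

# The estimate drifts toward the stream's quantile and then hovers there:
path = track_quantile([5.0] * 200, tau=0.5)
print(round(path[-1], 2))
```

The multivariate analogue replaces the indicator with the unit vector pointing from q toward x, which yields a recursive estimator of the spatial median.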

Journal ArticleDOI
TL;DR: Group sequential methods for a two-treatment clinical trial with normal responses are discussed in this article, where the sample sizes for two treatments are possibly unequal between the treatments due to an unequal randomization.
Abstract: Group sequential methods for a two-treatment clinical trial with normal responses are discussed. First we consider the case where the sample sizes for two treatments are possibly unequal between the treatments due to an unequal randomization. Then we discuss group sequential design in the context of a historical control study, that is, under the partial sequential sampling scheme, in which the samples on one treatment, say control, are available at the outset, and the samples on the other treatment, say experimental, are obtained in the group sequential way. We discuss the cases of known and unknown variance for both unbalanced and partial group sequential setups. All of the procedures are discussed with numerical studies.

Journal ArticleDOI
TL;DR: The authors made some comments on the invited paper of Professors Sriram and Iaci, and these comments taken in the light of the paper under discussion will hopefully broaden the scope of future research in this important field.
Abstract: We are glad to offer some comments on the invited paper of Professors Sriram and Iaci. These comments taken in the light of the paper under discussion will hopefully broaden the scope of future research in this important field.

Journal ArticleDOI
TL;DR: Based on the excellent review and numerical study presented by Professor T. N. Sriram and Professor R. Iaci, another sequential problem for autoregressive processes is discussed in this paper.
Abstract: Based on the excellent review and numerical study presented by Professor T. N. Sriram and Professor R. Iaci, another sequential problem for autoregressive processes is discussed.

Journal ArticleDOI
TL;DR: In this article, a sequential procedure for point and confidence region estimating parameters in AR(1) is presented, where the uniform asymptotic normality of estimators is considered.
Abstract: In this discussion, we focus on a sequential procedure for point and confidence region estimating parameters in autoregressive processes AR(1). The uniform asymptotic normality of estimators is considered.

Journal ArticleDOI
Albrecht Irle1
TL;DR: In this article, the possibility of exact computations for stopping times in AR(1) processes is discussed based on the thorough and inspiring review and numerical study presented by Professor T. N. Sriram and Professor R. Iaci.
Abstract: Based on the thorough and inspiring review and numerical study presented by Professor T. N. Sriram and Professor R. Iaci, the possibility of exact computations for stopping times in AR(1) processes is discussed.

Journal ArticleDOI
TL;DR: In this article, the authors discuss sequential estimation of AR(p) models from the viewpoint of information geometry.
Abstract: We discuss sequential estimation of AR(p) models from the viewpoint of information geometry.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on sequential fixed-width confidence intervals for linear time series, which were reviewed in Section 2.1.1 of their paper and explain how this work stimulated subsequent developments in econometric time series modeling.
Abstract: This discussion focuses on sequential fixed-width confidence intervals for linear time series, which Sriram and Iaci (2014) have reviewed masterfully in Section 2.1.1 of their paper. After describing the background of my 1983 paper with Siegmund, I explain how this work stimulated subsequent developments in econometric time series modeling. In this connection, I also point out certain subtle issues with autoregressive and more general stochastic regression models and discuss how they have been addressed.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed estimating f(x; μ, σ²) with both two-stage and purely sequential methodologies under the mean integrated squared error (MISE) loss function, where the goal is to make the associated risk not exceed a preassigned positive number c, referred to as the risk bound.
Abstract: Consider independent observations having a common normal probability density function f(x; μ, σ²) with x ∈ ℝ, unknown mean μ (∈ ℝ), and unknown variance σ² (∈ ℝ+). We propose estimating f(x; μ, σ²) with both two-stage and purely sequential methodologies under the mean integrated squared error (MISE) loss function. Our goal is to make the associated risk not exceed a preassigned positive number c, referred to as the risk bound. No fixed-sample-size methodology can handle this estimation problem. We show that both density estimation methodologies satisfy (a) an asymptotic first-order efficiency property and (b) a first-order risk-efficiency property. Interestingly, the purely sequential density estimation methodology has a better second-order efficiency property than that associated with the two-stage methodology. Some robustness issues are also addressed. Small, moderate, and large sample performances are examined with the help of simulations. Illustrations are given with real data sets.