
Showing papers by "William H. Woodall" published in 2015


Journal ArticleDOI
TL;DR: The performance of the Shewhart X-bar control chart with estimated in-control parameters has been discussed a number of times in the literature. Previous studies showed that at least 400/(n - 1) Phase I samples, where n (greater than 1) is the sample size, ...
Abstract: The performance of the Shewhart X-bar control chart with estimated in-control parameters has been discussed a number of times in the literature. Previous studies showed that at least 400/(n - 1) Phase I samples, where n (greater than 1) is the sample size, ...
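A minimal sketch of the 400/(n - 1) guideline quoted above, showing how quickly the recommended minimum number of Phase I samples drops as the subgroup size n grows. The guideline itself comes from the earlier literature cited in the abstract; the loop is purely illustrative.

```python
import math

# Recommended minimum number of Phase I samples m for several subgroup sizes n,
# per the 400/(n - 1) rule of thumb quoted in the abstract.
for n in (2, 3, 5, 10):
    m_min = math.ceil(400 / (n - 1))
    print(f"subgroup size n = {n:2d}  ->  at least {m_min} Phase I samples")
```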

144 citations


Journal ArticleDOI
TL;DR: In this paper, the authors assess the in-control performance of the exponentially weighted moving average (EWMA) control chart in terms of the standard deviation of the average run length (SDARL) and percentiles of the ARL distribution when the process parameters are estimated.
Abstract: The authors assess the in-control performance of the exponentially weighted moving average (EWMA) control chart in terms of the standard deviation of the average run length (SDARL) and percentiles of the ARL distribution when the process parameters are estimated.
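A rough Monte Carlo sketch of how the SDARL and percentiles of the in-control ARL distribution can be estimated for an EWMA chart with estimated parameters. This is not the authors' study design; the settings (m, n, the smoothing constant, the limit multiplier, and the replication counts) are illustrative assumptions kept small so the sketch runs in reasonable time.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 5            # Phase I: m subgroups of size n (assumed)
lam, L = 0.1, 2.7       # EWMA smoothing constant and limit multiplier (assumed)
n_datasets, n_runs = 100, 100   # kept small so the sketch runs quickly

def conditional_arl(mu_hat, sigma_hat):
    """In-control ARL of the EWMA chart, given one pair of Phase I estimates."""
    se = sigma_hat * np.sqrt(lam / (2.0 - lam)) / np.sqrt(n)  # asymptotic EWMA std. error
    run_lengths = []
    for _ in range(n_runs):
        z, t = mu_hat, 0
        while True:
            t += 1
            xbar = rng.normal(0.0, 1.0 / np.sqrt(n))   # true process is N(0, 1)
            z = lam * xbar + (1.0 - lam) * z
            if abs(z - mu_hat) > L * se or t >= 20000:
                run_lengths.append(t)
                break
    return np.mean(run_lengths)

arls = []
for _ in range(n_datasets):
    phase1 = rng.normal(0.0, 1.0, size=(m, n))          # one Phase I data set
    arls.append(conditional_arl(phase1.mean(), phase1.std(ddof=1)))

arls = np.array(arls)
print("mean ARL:", round(arls.mean(), 1), "  SDARL:", round(arls.std(ddof=1), 1))
print("10th/50th/90th ARL percentiles:", np.percentile(arls, [10, 50, 90]).round(1))
```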

123 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the in-control performance of the S2 control chart with estimated parameters conditional on the Phase I sample and adjust the control limits such that the in control ARL is guaranteed to be above a specified value with a certain specified probability.
Abstract: We evaluate the in-control performance of the S2 control chart with estimated parameters conditional on the Phase I sample. Simulation results indicate no realistic amount of Phase I data is enough to have confidence that the in-control average run length (ARL) obtained will be near the desired value. To overcome this problem, we adjust the S2 chart’s control limits such that the in-control ARL is guaranteed to be above a specified value with a certain specified probability. The required adjustment does not have too much of an adverse effect on the out-of-control performance of the chart.
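The adjustment idea can be illustrated with a simplified, upper one-sided S2 chart whose variance estimate is pooled from m Phase I subgroups of size n. The closed form below is a derivation for this simplified setting only, not necessarily the exact adjustment used in the paper; the target ARL0 and the probability p are placeholders.

```python
from scipy.stats import chi2

def unadjusted_s2_limit_factor(n, arl0=370.0):
    """Conventional factor c with UCL = c * sigma_hat^2, ignoring estimation error."""
    return chi2.ppf(1.0 - 1.0 / arl0, df=n - 1) / (n - 1)

def adjusted_s2_limit_factor(m, n, arl0=370.0, p=0.10):
    """Factor c with UCL = c * sigma_hat^2 such that
    P(conditional in-control ARL >= arl0) >= 1 - p, for a pooled Phase I estimate."""
    q_signal = chi2.ppf(1.0 - 1.0 / arl0, df=n - 1)   # chi^2_{n-1} quantile for a signal
    q_phase1 = chi2.ppf(p, df=m * (n - 1))            # lower tail of the pooled estimator
    return m * q_signal / q_phase1

m, n = 25, 5
print("unadjusted factor:", round(unadjusted_s2_limit_factor(n), 3))
print("adjusted factor  :", round(adjusted_s2_limit_factor(m, n), 3))
```

In this simplified setting the adjusted factor comes out larger than the conventional one, which widens the limit and protects the in-control ARL at the cost of some out-of-control sensitivity, in line with the abstract's conclusion.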

76 citations


Journal ArticleDOI
TL;DR: The highly effective American College of Surgeons National Surgical Quality Improvement Program (NSQIP), which offers data-based benchmarking of participating hospitals and provides information on best surgical practices, is described.
Abstract: In this expository paper, we review methods for monitoring medical outcomes with a focus on surgical quality. We discuss the importance and role of risk adjustment. We give the advantages and disadvantages ...

66 citations


Journal ArticleDOI
TL;DR: The ECUSUM chart is shown to be much less robust to departures from the exponential distribution than was previously claimed in the literature, and the advantages of using one of the other two charts, which show surprisingly similar performance, are demonstrated.
Abstract: The Weibull distribution can be used to effectively model many different failure mechanisms due to its inherent flexibility through the appropriate selection of a shape and a scale parameter. In this paper, we evaluate and compare the performance of three cumulative sum (CUSUM) control charts to monitor Weibull-distributed time-between-event observations. The first two methods are the Weibull CUSUM chart and the exponential CUSUM (ECUSUM) chart. The latter is considered in literature to be robust to the assumption of the exponential distribution when observations have a Weibull distribution. For the third CUSUM chart included in this study, an adjustment in the design of the ECUSUM chart is used to account for the true underlying time-between-event distribution. This adjustment allows for the adjusted ECUSUM chart to be directly comparable to the Weibull CUSUM chart. By comparing the zero-state average run length and average time to signal performance of the three charts, the ECUSUM chart is shown to be much less robust to departures from the exponential distribution than was previously claimed in the literature. We demonstrate the advantages of using one of the other two charts, which show surprisingly similar performance. Copyright © 2014 John Wiley & Sons, Ltd.
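As a point of reference, a generic lower-sided exponential CUSUM for time-between-event data can be written in a few lines. This is only the basic recursion for detecting a decrease in the mean time between events from theta0 to theta1 < theta0, not the Weibull or adjusted ECUSUM designs compared in the paper; the control limit h and the sample gaps are made up.

```python
import math

def exponential_cusum(times, theta0, theta1, h):
    """Yield (index, CUSUM statistic, signal flag) for each time between events."""
    k = math.log(theta0 / theta1) / (1.0 / theta1 - 1.0 / theta0)  # reference value
    s = 0.0
    for i, x in enumerate(times, start=1):
        s = max(0.0, s + k - x)          # accumulates evidence of shorter gaps
        yield i, s, s > h

# Illustrative use with made-up gaps between events (say, in days):
gaps = [12.0, 9.5, 14.2, 3.1, 2.4, 1.8, 2.9, 2.2]
for i, s, signal in exponential_cusum(gaps, theta0=10.0, theta1=5.0, h=8.0):
    print(f"event {i}: CUSUM = {s:5.2f}   signal = {signal}")
```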

45 citations


Journal ArticleDOI
TL;DR: Simulation-based dynamic probability control limits (DPCLs) are determined patient-by-patient for risk-adjusted Bernoulli CUSUM charts; charts with DPCLs have consistent in-control performance at the desired level, with approximately geometrically distributed run lengths.
Abstract: The risk-adjusted Bernoulli cumulative sum (CUSUM) chart developed by Steiner et al. (2000) is an increasingly popular tool for monitoring clinical and surgical performance. In practice, however, the use of a fixed control limit for the chart leads to a quite variable in-control average run length performance for patient populations with different risk score distributions. To overcome this problem, we determine simulation-based dynamic probability control limits (DPCLs) patient-by-patient for the risk-adjusted Bernoulli CUSUM charts. By maintaining the probability of a false alarm at a constant level conditional on no false alarm for previous observations, our risk-adjusted CUSUM charts with DPCLs have consistent in-control performance at the desired level with approximately geometrically distributed run lengths. Our simulation results demonstrate that our method does not rely on any information or assumptions about the patients' risk distributions. The use of DPCLs for risk-adjusted Bernoulli CUSUM charts allows each chart to be designed for the corresponding particular sequence of patients for a surgeon or hospital. Copyright © 2015 John Wiley & Sons, Ltd.
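A simplified sketch of the dynamic probability control limit idea: simulate many in-control CUSUM paths over the actual patient sequence and, at each patient, take the (1 - alpha) empirical quantile of the paths that have not yet signalled as that patient's limit. The Steiner-type score, the odds ratio R, alpha, and the Beta-distributed risk scores below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def cusum_score(y, p, R=2.0):
    """Steiner-type risk-adjusted Bernoulli CUSUM score for outcome y with risk p."""
    return y * np.log(R) - np.log(1.0 - p + R * p)

def dynamic_limits(risks, alpha=0.005, R=2.0, n_paths=20000):
    """Return one simulation-based control limit per patient in the risk sequence."""
    paths = np.zeros(n_paths)                  # in-control CUSUM paths run in parallel
    limits = []
    for p in risks:
        y = rng.random(paths.shape[0]) < p     # simulated in-control outcomes
        paths = np.maximum(0.0, paths + cusum_score(y, p, R))
        h_t = np.quantile(paths, 1.0 - alpha)  # this patient's dynamic limit
        limits.append(h_t)
        paths = paths[paths <= h_t]            # condition on no false alarm so far
    return np.array(limits)

# Illustrative use with a made-up sequence of estimated patient risks:
risks = rng.beta(2, 18, size=50)               # fake risk scores, mean about 0.1
print(np.round(dynamic_limits(risks)[:10], 3))
```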

40 citations


Journal ArticleDOI
TL;DR: This approach accounts for practitioner-to-practitioner variability in the in-control average run length (ARL) of self-starting charts, which has not been considered previously.
Abstract: The recommended size of the Phase I data set used to estimate the in-control parameters has been discussed many times in the process monitoring literature. Collecting baseline data, however, can be difficult or slow in some applications. Such issues have ...

33 citations


Journal Article
TL;DR: In this paper, the authors illustrate the development and implementation of a proposed CUSUM chart and evaluate its properties through a simulation study.
Abstract: The authors illustrate the development and implementation of a proposed CUSUM chart and evaluate its properties through a simulation study.

31 citations


Journal ArticleDOI
TL;DR: The authors' simulation results show that the in-control ARLs of the risk-adjusted Bernoulli CUSUM chart with fixed control limits and a given risk-adjustment equation vary significantly for different patient population distributions, and the in-control ARLs decrease as the mean of the Parsonnet scores increases.
Abstract: Objective: This research is designed to examine the impact of varying patient population distributions on the in-control performance of the risk-adjusted Bernoulli CUSUM chart. Design: The in-control performance of the chart is compared based on sampling the Parsonnet scores with replacement from five realistic subsets of a given distribution. Settings: Five patient mixes with different Parsonnet score distributions are created from a real patient population. Main Outcome Measures: The outcome measures for this research are the in-control average run lengths (ARLs) given varying patient populations. Results: Our simulation results show that the in-control ARLs of the risk-adjusted Bernoulli CUSUM chart with fixed control limits and a given risk-adjustment equation vary significantly for different patient population distributions, and the in-control ARLs decrease as the mean of the Parsonnet scores increases. Conclusions: The simulation results imply that the control limits should vary based on the particular patient population of interest in order to control the in-control performance of the risk-adjusted Bernoulli CUSUM method.
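A compact sketch of the kind of comparison described above: simulating the in-control run length of a risk-adjusted Bernoulli CUSUM with one fixed limit h under two made-up patient-risk mixes. The odds ratio, limit, Beta risk models, and small replication count are illustrative assumptions, so the estimates are noisy, but the qualitative effect (a higher-risk mix shortening the in-control ARL) should still show.

```python
import numpy as np

rng = np.random.default_rng(11)
R, h = 2.0, 2.0    # assumed odds ratio under the alternative and fixed control limit

def in_control_arl(draw_risk, n_runs=200, max_n=50000):
    """Crude in-control ARL estimate for a risk-adjusted Bernoulli CUSUM."""
    run_lengths = []
    for _ in range(n_runs):
        s, t = 0.0, 0
        while True:
            t += 1
            p = draw_risk()                                    # patient risk from the mix
            y = rng.random() < p                               # in-control outcome
            s = max(0.0, s + y * np.log(R) - np.log(1.0 - p + R * p))
            if s > h or t >= max_n:                            # cap truncates very long runs
                run_lengths.append(t)
                break
    return np.mean(run_lengths)

low_risk_mix = lambda: rng.beta(2, 38)    # mean risk about 0.05
high_risk_mix = lambda: rng.beta(2, 8)    # mean risk about 0.20
print("in-control ARL, low-risk mix :", round(in_control_arl(low_risk_mix), 1))
print("in-control ARL, high-risk mix:", round(in_control_arl(high_risk_mix), 1))
```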

27 citations


Journal ArticleDOI
TL;DR: Several examples are discussed to highlight the promising uses of invariant Poisson control charting methods in this much broader set of applications, which includes risk-adjusted monitoring in healthcare, public health surveillance, and monitoring of continuous time nonhomogeneous Poisson processes.
Abstract: The use of varying sample size monitoring techniques for Poisson count data has drawn a great deal of attention in recent years. Specifically, these methods have been used in public health surveillance, manufacturing, and safety monitoring. A number of approaches have been proposed, from the traditional Shewhart charts to cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) methods. It is convenient to use techniques based on statistics that are invariant to the units of measurement since in most cases these units are arbitrarily selected. A few of the methods reviewed in our expository article are not inherently invariant, but most are easily modified to be invariant. Most importantly, if methods are invariant to the choice of units of measurement, they can be applied in situations where the in-control Poisson mean varies over time, even if there is no associated varying sample size. Several examples are discussed to highlight the promising uses of invariant Poisson control charting methods in this much broader set of applications.
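A small demonstration of the invariance point: a standardized Shewhart-type statistic for Poisson counts with varying exposure, Z_i = (X_i - n_i * lambda0) / sqrt(n_i * lambda0), is unchanged when the units of measurement are rescaled, because lambda0 and n_i scale in opposite directions. The rate, exposures, and rescaling factor below are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
lambda0 = 4.0                                   # in-control rate per unit of exposure
exposure = np.array([1.5, 2.0, 0.8, 3.2, 1.1])  # e.g. thousands of device-hours
counts = rng.poisson(lambda0 * exposure)

def z_stat(x, n, lam0):
    """Standardized Shewhart-type statistic for Poisson counts with exposure n."""
    return (x - n * lam0) / np.sqrt(n * lam0)

z_original = z_stat(counts, exposure, lambda0)
# Rescale the units by 1000: exposures become 1000 * n_i, the rate becomes lambda0 / 1000.
z_rescaled = z_stat(counts, 1000.0 * exposure, lambda0 / 1000.0)
print(np.allclose(z_original, z_rescaled))      # True: the statistic is unit-invariant
```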

21 citations


Book ChapterDOI
01 Jan 2015
TL;DR: The important practical issue of the effect of aggregation of counts over time, some generalizations of standard methods, and some promising research ideas are discussed.
Abstract: A growing number of applications involve monitoring with rare event data. The event of interest could be, for example, a nonconforming manufactured item, a congenital malformation, or an industrial accident. The most common approaches for monitoring such processes involve using an exponential distribution to model the time between the events or using a Bernoulli distribution to model whether or not each opportunity for the event results in its occurrence. The use of a sequence of independent Bernoulli random variables leads to a geometric distribution for the number of non-occurrences between the occurrences of the rare events. One surveillance method is to use a power transformation on the exponential or geometric observations to achieve approximate normality of the in-control distribution and then use a standard individuals control chart. We add to the argument that use of this approach is very counterproductive and cover some alternative approaches. We discuss the choice of appropriate performance metrics. The strong adverse effect of Phase I parameter estimation on Phase II performance of various charts is then summarized. In addition, the important practical issue of the effect of aggregation of counts over time, some generalizations of standard methods, and some promising research ideas are discussed.
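For concreteness, this is the transformation-based approach the chapter argues against, sketched for exponential time-between-event data: raise the times to the power 1/3.6 (an approximate normalizing transformation for exponential data) and apply an ordinary individuals chart to the transformed values. The simulated data and chart constants are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
times = rng.exponential(scale=30.0, size=40)   # e.g. days between rare events (simulated)

y = times ** (1.0 / 3.6)                       # approximately normal if times are exponential
mr = np.abs(np.diff(y))                        # moving ranges of the transformed values
center = y.mean()
sigma_hat = mr.mean() / 1.128                  # d2 = 1.128 for moving ranges of size 2
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

print(f"individuals chart on t**(1/3.6): LCL = {lcl:.3f}, CL = {center:.3f}, UCL = {ucl:.3f}")
print("points outside the limits:", np.where((y > ucl) | (y < lcl))[0])
```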

Journal ArticleDOI
TL;DR: This study demonstrates that, when the out-of-control process corresponds to a sustained shift, the cluster-based method using the successive difference estimator is clearly the superior method, among those methods considered, based on all performance criteria.
Abstract: The authors propose a new technique, referred to as cluster-based profile monitoring, to aid in determining whether any profiles in the historical data set resulted from an out-of-control process.

Journal ArticleDOI
TL;DR: A Monte Carlo study demonstrates that the cluster‐based method results in superior performance over a non‐cluster based method with respect to better classification and higher power in detecting out‐of‐control profiles.
Abstract: A cluster-based method has been used successfully to analyze parametric profiles in Phase I of the profile monitoring process. Performance advantages have been demonstrated when using a cluster-based method of analyzing parametric profiles over a non-cluster based method with respect to more accurate estimates of the parameters and improved classification performance criteria. However, it is known that, in many cases, profiles can be better represented using a nonparametric method. In this study, we use the cluster-based method to analyze profiles that cannot be easily represented by a parametric function. The similarity matrix used during the clustering phase is based on the fits of the individual profiles with p-spline regression. The clustering phase will determine an initial main cluster set that contains greater than half of the total profiles in the historical data set. The profiles with in-control T2 statistics are sequentially added to the initial main cluster set, and upon completion of the algorithm, the profiles in the main cluster set are classified as the in-control profiles and the profiles not in the main cluster set are classified as out-of-control profiles. A Monte Carlo study demonstrates that the cluster-based method results in superior performance over a non-cluster based method with respect to better classification and higher power in detecting out-of-control profiles. Also, our Monte Carlo study shows that the cluster-based method has better performance than a non-cluster based method whether the model is correctly specified or not. We illustrate the use of our method with data from the automotive industry. Copyright © 2014 John Wiley & Sons, Ltd.
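A heavily simplified sketch of the Phase I procedure described above, using scipy smoothing splines as a stand-in for p-spline regression: fit each profile, cluster the fitted curves, take the largest cluster (holding more than half of the profiles) as the initial main cluster, and then sequentially add profiles whose T2 statistic relative to the main cluster is in control. The simulated profiles, grid, cutoff, and clustering choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import chi2

rng = np.random.default_rng(9)
x = np.linspace(0, 1, 30)
profiles = [np.sin(2 * np.pi * x) + rng.normal(0, 0.15, x.size) for _ in range(25)]
profiles += [np.sin(2 * np.pi * x) + 0.8 * x + rng.normal(0, 0.15, x.size)
             for _ in range(5)]                      # a few out-of-control profiles

grid = np.linspace(0.1, 0.9, 5)                      # evaluate the fits on a coarse grid
fits = np.array([UnivariateSpline(x, y, s=0.5)(grid) for y in profiles])

# Step 1: cluster the fitted curves and take the largest cluster as the main set.
labels = fcluster(linkage(fits, method="ward"), t=2, criterion="maxclust")
main_label = np.bincount(labels).argmax()
main = set(np.where(labels == main_label)[0])
assert len(main) > len(profiles) / 2, "largest cluster should hold most profiles"

# Step 2: sequentially add profiles with in-control T2 to the main cluster.
cutoff = chi2.ppf(0.99, df=grid.size)                # illustrative cutoff only
for i in sorted(set(range(len(profiles))) - main):
    mu = fits[list(main)].mean(axis=0)
    cov = np.cov(fits[list(main)], rowvar=False)
    d = fits[i] - mu
    t2 = d @ np.linalg.solve(cov, d)
    if t2 <= cutoff:
        main.add(i)

print("profiles flagged as out of control:", sorted(set(range(len(profiles))) - main))
```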