
Showing papers in "Quality Engineering in 2011"


Journal Article
TL;DR: In this article, the authors outline methods for using fixed and random effects power analysis in the context of meta-analysis, and discuss the value of confidence intervals, show how they could be used in addition to or instead of retrospective power analysis, and also demonstrate that confidence intervals can convey information more effectively in some situations than power analyses alone.
Abstract: In this article, the authors outline methods for using fixed and random effects power analysis in the context of meta-analysis. Like statistical power analysis for primary studies, power analysis for meta-analysis can be done either prospectively or retrospectively and requires assumptions about parameters that are unknown. The authors provide some suggestions for thinking about these parameters, in particular for the random effects variance component. The authors also show how the typically uninformative retrospective power analysis can be made more informative. The authors then discuss the value of confidence intervals, show how they could be used in addition to or instead of retrospective power analysis, and also demonstrate that confidence intervals can convey information more effectively in some situations than power analyses alone. Finally, the authors take up the question “How many studies do you need to do a meta-analysis?” and show that, given the need for a conclusion, the answer is “two studies...
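As a rough illustration of the prospective power calculation for a fixed-effect meta-analysis, the sketch below (illustrative effect size and within-study variances, not values from the article) pools studies by inverse-variance weighting and evaluates the power of the two-sided Z test on the combined estimate; a random-effects version would add an assumed variance component to each study's variance.

```python
# A minimal sketch, assuming k studies with known within-study variances and a
# hypothesized common effect size delta (illustrative values, not from the article).
import numpy as np
from scipy.stats import norm

def fixed_effect_power(delta, within_var, alpha=0.05):
    """Two-sided power of the Z test on the pooled fixed-effect estimate."""
    within_var = np.asarray(within_var, dtype=float)
    weights = 1.0 / within_var                  # inverse-variance weights
    var_pooled = 1.0 / weights.sum()            # variance of the pooled estimate
    lam = delta / np.sqrt(var_pooled)           # noncentrality of the Z statistic
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(lam - z) + norm.cdf(-lam - z)

# Example: 5 studies, each with sampling variance 0.04, hypothesized effect 0.2.
# (For random effects, add an assumed tau^2 to each within-study variance.)
print(round(fixed_effect_power(0.2, [0.04] * 5), 3))
```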

157 citations


Journal Article
TL;DR: In this paper, the authors focus on the monitoring of time series to provide early alerts of anomalies to stimulate investigation of potential outbreaks, with a brief summary of methods to detect significant spatial and spatio-temporal case clusters.
Abstract: Modern biosurveillance is the monitoring of a wide range of prediagnostic and diagnostic data for the purpose of enhancing the ability of the public health infrastructure to detect, investigate, and respond to disease outbreaks. Statistical control charts have been a central tool in classic disease surveillance and also have migrated into modern biosurveillance; however, the new types of data monitored, the processes underlying the time series derived from these data, and the application context all deviate from the industrial setting for which these tools were originally designed. Assumptions of normality, independence, and stationarity are typically violated in syndromic time series. Target values of process parameters are time-dependent and hard to define, and data labeling is ambiguous in the sense that outbreak periods are not clearly defined or known. Additional challenges include multiplicity in several dimensions, performance evaluation, and practical system usage and requirements. Our focus is mainly on the monitoring of time series to provide early alerts of anomalies to stimulate investigation of potential outbreaks, with a brief summary of methods to detect significant spatial and spatiotemporal case clusters. We discuss the statistical challenges in monitoring modern biosurveillance data, describe the current state of monitoring in the field, and survey the most recent biosurveillance literature.

149 citations



Journal Article
TL;DR: In this paper, the authors introduce a quantitative model to support the decision on the reliability level of a critical component during its design, and formulate the portions of Life Cycle Costs (LCC) which are affected by a component's reliability and its spare parts inventory level.
Abstract: We introduce a quantitative model to support the decision on the reliability level of a critical component during its design. We consider an OEM who is responsible for the availability of its systems in the field through service contracts. Upon a failure of a critical part in a system during the exploitation phase, the failed part is replaced by a ready-for-use part from a spare parts inventory. In an out-of-stock situation, a costly emergency procedure is applied. The reliability levels and spare parts inventory levels of the critical components are the two main factors that determine the downtime and corresponding costs of the systems. These two levels are decision variables in our model. We formulate the portions of Life Cycle Costs (LCC) which are affected by a component’s reliability and its spare parts inventory level. These costs consist of design costs, production costs, and maintenance and downtime costs in the exploitation phase. We conduct exact analysis and provide an efficient optimization algorithm. We provide managerial insights through a numerical experiment which is based on real-life data.
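As a loose illustration of the trade-off the model captures, the toy sketch below searches over a component failure rate and a spare-parts stock level, approximating the stock-out probability with Poisson demand over a repair lead time. All cost terms, the base-stock approximation, and the numbers are assumptions for illustration; they are not the authors' LCC formulation or their exact optimization algorithm.

```python
# A toy sketch under stated assumptions: single-echelon base stock, Poisson demand
# during a repair lead time, stylised design-cost curve. Not the authors' LCC model.
import numpy as np
from scipy.stats import poisson

N_SYSTEMS, HORIZON = 100, 10          # installed base, exploitation horizon in years
LEAD_TIME = 0.25                      # repair/replenishment lead time in years
HOLD_COST = 2_000                     # holding cost per spare per year
EMERGENCY_COST = 50_000               # cost of the emergency procedure per stock-out

def design_cost(failure_rate):
    # stylised assumption: a more reliable part (lower rate) is more expensive to design
    return 1e6 / failure_rate

def annual_cost(failure_rate, spares):
    demand_rate = N_SYSTEMS * failure_rate                        # expected failures per year
    p_stockout = poisson.sf(spares - 1, demand_rate * LEAD_TIME)  # P(lead-time demand >= stock)
    return (design_cost(failure_rate) / HORIZON
            + HOLD_COST * spares
            + EMERGENCY_COST * demand_rate * p_stockout)

grid = [(rate, s) for rate in (0.05, 0.1, 0.2, 0.4) for s in range(16)]
best = min(grid, key=lambda rs: annual_cost(*rs))
print("best (failure rate, spares):", best, " annual cost:", round(annual_cost(*best)))
```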

83 citations


Journal Article
TL;DR: In this paper, the authors present an integrated framework for the systematic assessment of the quality, i.e., the value-generating capability, of an organization's innovation management, and for the timely identification of ways to measure and improve it.
Abstract: Most firms competing in the global economy are paying increasing attention to innovation as the key driver of competitiveness. Innovation's key dimensions address the quality, the quantity and the speed of introducing innovations. With an increasing rate of change in the global economy, especially after the onset of the 2008–2009 economic crisis, innovation management emerges as a powerful way to facilitate a firm's adaptation to new conditions. However, despite a widespread acceptance of innovation's importance by the leadership of most companies, there is a general dissatisfaction with the results realised from innovation investments. For any organisation to address effectively the innovation challenge, its leadership must properly define the innovation system and process and apply sound quality and innovation management principles, as was done in the development of quality management and finance management. This requires the periodic assessment not only of innovation outputs, i.e. new products, services or business models, but also of inputs that determine a firm's innovation capability and the innovation process itself. Managing the effectiveness of the innovation process requires a balanced set of innovation metrics related to all innovation drivers, i.e. leadership, culture and people participation together with innovation results, such as time to market and financial metrics. This paper describes an integrated framework for the systematic assessment of an organisation's quality, i.e. value-generating capability of its innovation management and the timely identification of ways to measure and improve it.

79 citations


Journal Article
TL;DR: A new single control chart is proposed which integrates the exponentially weighted moving average procedure with the generalized likelihood ratio (GLR) test for jointly monitoring both the multivariate process mean and variability.
Abstract: Recently, monitoring the process mean and variability simultaneously for multivariate processes by using a single control chart has drawn some attention. However, due to the complexity of multivariate distributions, existing methods in univariate processes cannot be readily extended to multivariate processes. In this paper, we propose a new single control chart which integrates the exponentially weighted moving average (EWMA) procedure with the generalized likelihood ratio (GLR) test for jointly monitoring both the multivariate process mean and variability. Due to the powerful properties of the GLR test and the EWMA procedure, the new chart provides quite robust and satisfactory performance in various cases, including detection of the decrease in variability and individual observation at the sampling point, which are very important cases in many practical applications but may not be well handled by existing approaches in the literature. The application of our proposed method is illustrated by a real data example in ambulatory monitoring.
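For background only, the sketch below computes the standard multivariate EWMA (MEWMA) statistic for the mean on simulated data with a mid-stream mean shift. It does not reproduce the proposed chart: the GLR component and the joint monitoring of variability are omitted, and the control limit h is an illustrative value rather than one calibrated to a target in-control ARL.

```python
# Background sketch: a standard MEWMA statistic for the mean only, on simulated data.
# No GLR component, no variability monitoring; the limit h is illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, lam, h = 3, 0.2, 12.0
sigma = np.eye(p)                                        # assumed known in-control covariance
sigma_z_inv = np.linalg.inv((lam / (2 - lam)) * sigma)   # asymptotic covariance of the EWMA vector

z, t2 = np.zeros(p), []
for t in range(200):
    x = rng.multivariate_normal(np.zeros(p), sigma)
    if t >= 100:
        x[0] += 1.0                                 # mean shift in the first coordinate
    z = lam * x + (1 - lam) * z                     # vector EWMA recursion
    t2.append(z @ sigma_z_inv @ z)                  # MEWMA charting statistic

signals = [t for t, stat in enumerate(t2) if stat > h]
print("first observation with T^2 above h:", signals[0] if signals else "none")
```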

74 citations


Journal ArticleDOI
TL;DR: In this paper, a Markov chain approach is used to determine the run-length distribution of the two-sided nonparametric exponentially weighted moving average (EWMA) control chart for i.i.d. individual data and some associated performance characteristics.
Abstract: A Markov chain approach is used to determine the run-length distribution of the two-sided nonparametric exponentially weighted moving average (EWMA) control chart for i.i.d. individual data and some associated performance characteristics.
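The sketch below illustrates the Markov chain (Brook-and-Evans-type) approximation of the average run length for a two-sided EWMA on individual observations, using an ordinary normal-theory EWMA as a stand-in; the nonparametric (sign-based) chart analysed in the paper is not implemented here.

```python
# Markov chain approximation of the EWMA average run length (normal-theory chart,
# individual observations); state space is a discretization of the in-control region.
import numpy as np
from scipy.stats import norm

def ewma_arl(lam=0.1, h=2.7, n_states=101, mu=0.0):
    """ARL of Z_t = lam*X_t + (1-lam)*Z_{t-1}, signalling when |Z_t| exceeds h*sigma_Z."""
    sigma_z = np.sqrt(lam / (2 - lam))              # asymptotic std. dev. of Z_t (sigma_X = 1)
    limit = h * sigma_z
    width = 2 * limit / n_states
    mid = -limit + width * (np.arange(n_states) + 0.5)   # midpoints of the transient states
    upper = (mid[None, :] + width / 2 - (1 - lam) * mid[:, None]) / lam
    lower = (mid[None, :] - width / 2 - (1 - lam) * mid[:, None]) / lam
    Q = norm.cdf(upper, loc=mu) - norm.cdf(lower, loc=mu)    # transient transition matrix
    arl = np.linalg.solve(np.eye(n_states) - Q, np.ones(n_states))
    return arl[n_states // 2]                       # chart started at Z_0 = 0

print(round(ewma_arl(), 1))                         # in-control ARL for lam = 0.1, h = 2.7
```

With lam = 0.1 and h = 2.7 the approximation returns an in-control ARL close to the commonly tabulated value of roughly 500 for this design.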

68 citations


Journal Article
TL;DR: A survey of broad classes of methodologies is provided, accompanied by selected illustrative examples of traditional statistical techniques as well as more recent machine learning and data mining algorithms.

63 citations



Journal Article
TL;DR: In this article, the authors discuss different predictors of times to failure of units censored in multiple stages in a progressively censored sample from a Pareto distribution, including best linear unbiased predictors, maximum likelihood predictors, and approximate maximum likelihood predictors.
Abstract: In this paper, we discuss different predictors of times to failure of units censored in multiple stages in a progressively censored sample from Pareto distribution. The best linear unbiased predictors, maximum likelihood predictors and approximate maximum likelihood predictors are considered. We also present two methods for obtaining prediction intervals for the times to failure of units. A numerical simulation study involving two data sets is presented to illustrate the methods of prediction.

62 citations


Journal Article
TL;DR: In this paper, a hierarchical model for bias with common mean across treatment comparisons of active treatment versus control is presented, where bias adjustment reduces the estimated relative efficacy of the treatments and the extent of between-trial heterogeneity.
Abstract: Summary. There is good empirical evidence that specific flaws in the conduct of randomized controlled trials are associated with exaggeration of treatment effect estimates. Mixed treatment comparison meta-analysis, which combines data from trials on several treatments that form a network of comparisons, has the potential both to estimate bias parameters within the synthesis and to produce bias-adjusted estimates of treatment effects. We present a hierarchical model for bias with common mean across treatment comparisons of active treatment versus control. It is often unclear, from the information that is reported, whether a study is at risk of bias or not. We extend our model to estimate the probability that a particular study is biased, where the probabilities for the ‘unclear’ studies are drawn from a common beta distribution. We illustrate these methods with a synthesis of 130 trials on four fluoride treatments and two control interventions for the prevention of dental caries in children. Whether there is adequate allocation concealment and/or blinding are considered as indicators of whether a study is at risk of bias. Bias adjustment reduces the estimated relative efficacy of the treatments and the extent of between-trial heterogeneity.


Journal Article
TL;DR: In this paper, the ability of cumulative sum control and exponentially weighted moving average (EWMA) control charts to detect increases in the Poisson rate is evaluated by calculating the steady-state average run length performance for the charts.
Abstract: The ability of cumulative sum control (CUSUM) and exponentially weighted moving average (EWMA) control charts to detect increases in the Poisson rate is evaluated by calculating the steady-state average run length performance for the charts. Results ind..
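A minimal simulation of an upper one-sided Poisson CUSUM is sketched below; the reference value k is the usual one for a target rate shift, while the decision interval h and both rates are illustrative. The simulation estimates zero-start run lengths rather than the steady-state ARL studied in the paper.

```python
# Upper one-sided Poisson CUSUM, run lengths estimated by simulation (zero start).
# Rates and the decision interval h are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
lam0, lam1 = 4.0, 6.0                           # in-control and target out-of-control rates
k = (lam1 - lam0) / np.log(lam1 / lam0)         # standard Poisson CUSUM reference value
h = 8.0                                         # decision interval (illustrative)

def run_length(rate):
    c, t = 0.0, 0
    while True:
        t += 1
        c = max(0.0, c + rng.poisson(rate) - k) # CUSUM recursion
        if c > h:
            return t

arl0 = np.mean([run_length(lam0) for _ in range(1000)])
arl1 = np.mean([run_length(lam1) for _ in range(1000)])
print(f"estimated ARL at lam0: {arl0:.0f}, at lam1: {arl1:.1f}")
```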

Journal Article
TL;DR: In this paper, the authors provide a review on the reported methodologies by comparing these two categories of methodologies in terms of their variation propagation modeling, process monitoring and diagnostic capability with an illustrative case study.
Abstract: The performance of Multistage Manufacturing Processes (MMPs) can be measured by quality, productivity and cost, which are inversely related to the variation of key product characteristics (KPCs). Therefore, it is crucial to reduce KPCs' variations by not only detecting the changes of process parameters, but also identifying the variation sources and eliminating them with corrective actions. Recent developments in the Stream-of-Variation (SoV) and Statistical-Process-Control (SPC) methodologies significantly improve the variation reduction for MMPs. This paper provides a review on the reported methodologies by comparing these two categories of methodologies in terms of their variation propagation modeling, process monitoring and diagnostic capability. With an illustrative case study, it is concluded that the recent advancements of SoV and SPC methodologies significantly improve the effectiveness of variation reduction. The discussion on the drawbacks of both methodologies also suggests the future research directions. Copyright © 2010 John Wiley & Sons, Ltd

Journal Article
TL;DR: The effects of parameter estimation on the performance measures of the exponential EWMA control chart are investigated to provide explicit sample-size recommendations for constructing these charts so that they reach satisfactory performance in both the in-control and the out-of-control situations.

Journal Article
TL;DR: An infinite-horizon Markov decision process model is formulated and key structural properties of the resulting optimal cost function are derived that are sufficient to establish the existence of an optimal threshold-type policy with respect to the system's deterioration level and cumulative number of repairs.
Abstract: We consider the problem of optimally maintaining a periodically inspected system that deteriorates according to a discrete-time Markov process and has a limit on the number of repairs that can be performed before it must be replaced. After each inspection, a decision maker must decide whether to repair the system, replace it with a new one, or leave it operating until the next inspection, where each repair makes the system more susceptible to future deterioration. If the system is found to be failed at an inspection, then it must be either repaired or replaced with a new one at an additional penalty cost. The objective is to minimize the total expected discounted cost due to operation, inspection, maintenance, replacement and failure. We formulate an infinite-horizon Markov decision process model and derive key structural properties of the resulting optimal cost function that are sufficient to establish the existence of an optimal threshold-type policy with respect to the system’s deterioration level and cumulative number of repairs. We also explore the sensitivity of the optimal policy to inspection, repair and replacement costs. Numerical examples are presented to illustrate the structure and the sensitivity of the optimal policy.
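The toy value-iteration sketch below solves a small instance of this kind of problem: discrete deterioration levels, a cap on the number of repairs, and repairs that speed up future deterioration. The transition law, costs, and discount factor are invented for illustration and are not the authors' model; the point is only to show how a threshold structure can be read off the optimal cost-to-go.

```python
# Toy value iteration for a repair-limited, condition-based maintenance problem.
# Deterioration levels 0..D (D = failed); repairs reset the level but make future
# deterioration faster; at most R repairs before replacement. Illustrative numbers.
import numpy as np

D, R, beta = 5, 3, 0.95
C_REPAIR, C_REPLACE, C_PENALTY = 40.0, 100.0, 60.0
op_cost = 2.0 * np.arange(D + 1)                 # per-period operating cost by level

def p_up(r):
    """Per-period probability of deteriorating one level, higher after each repair."""
    return min(0.2 + 0.1 * r, 0.9)

V = np.zeros((D + 1, R + 1))                     # cost-to-go by (level, repairs used)
for _ in range(1000):
    V_new = np.empty_like(V)
    for d in range(D + 1):
        for r in range(R + 1):
            failed = (d == D)
            penalty = C_PENALTY if failed else 0.0
            actions = [C_REPLACE + penalty + beta * V[0, 0]]            # replace
            if r < R:                                                   # repair, if any remain
                actions.append(C_REPAIR + penalty + beta * V[0, r + 1])
            if not failed:                                              # keep operating
                up = p_up(r)
                cont = (1 - up) * V[d, r] + up * V[d + 1, r]
                actions.append(op_cost[d] + beta * cont)
            V_new[d, r] = min(actions)
    diff = np.max(np.abs(V_new - V))
    V = V_new
    if diff < 1e-8:
        break

print(np.round(V, 1))    # rows: deterioration level, columns: repairs already used
```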

Journal Article
TL;DR: In this article, an approach for detecting and identifying faults in railway infrastructure components is presented based on pattern recognition and data analysis algorithms; principal component analysis is employed to reduce the complexity of the data to two and three dimensions.
Abstract: This paper presents an approach for detecting and identifying faults in railway infrastructure components. The method is based on pattern recognition and data analysis algorithms. Principal component analysis (PCA) is employed to reduce the complexity of the data to two and three dimensions. PCA involves a mathematical procedure that transforms a number of variables, which may be correlated, into a smaller set of uncorrelated variables called ‘principal components’. In order to improve the results obtained, the signal was filtered. The filtering was carried out employing a state–space system model, estimated by maximum likelihood with the help of the well-known recursive algorithms such as Kalman filter and fixed interval smoothing. The models explored in this paper to analyse system data lie within the so-called unobserved components class of models. Copyright © 2009 John Wiley & Sons, Ltd.
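The sketch below shows the PCA step in isolation on simulated measurements: centre the variables, eigendecompose the covariance matrix, and project onto the first two principal components. The Kalman-filter and fixed-interval-smoothing preprocessing described in the abstract is omitted, and the data are simulated rather than railway measurements.

```python
# PCA on simulated condition-monitoring data: centre, eigendecompose the covariance,
# keep the first two components. The Kalman-filter preprocessing is omitted.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))                 # 200 inspections, 8 measured variables
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]       # induce some correlation between variables

Xc = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigval)[::-1]              # components sorted by explained variance
scores = Xc @ eigvec[:, order[:2]]            # coordinates on the first two PCs

print("variance explained by PC1, PC2:", np.round(eigval[order][:2] / eigval.sum(), 3))
print("first five 2-D scores:\n", np.round(scores[:5], 3))
```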

Journal Article
TL;DR: It is found that under normal survey conditions, specific information about the risk of identity or attribute disclosure influences neither respondents' expressed willingness to participate in a hypothetical survey nor their actual participation in a real survey, but when the possible harm resulting from disclosure is made explicit, the effect on response becomes significant.
Abstract: This article extends earlier work (Couper et al. 2008) that explores how survey topic and risk of identity and attribute disclosure, along with mention of possible harms resulting from such disclosure, affect survey participation. The first study uses web-based vignettes to examine respondents' expressed willingness to participate in the hypothetical surveys described, whereas the second study uses a mail survey to examine actual participation. Results are consistent with the earlier experiments. In general, we find that under normal survey conditions, specific information about the risk of identity or attribute disclosure influences neither respondents' expressed willingness to participate in a hypothetical survey nor their actual participation in a real survey. However, when the possible harm resulting from disclosure is made explicit, the effect on response becomes significant. In addition, sensitivity of the survey topic is a consistent and strong predictor of both expressed willingness to participate and actual participation.

Journal ArticleDOI
TL;DR: It is shown that the optimal design approach is applicable to any design problem and is necessary in situations involving resource constraints or nonstandard design regions or models.
Abstract: There are many situations in which the requirements of a standard experimental design do not fit the research requirements of the problem. This article provides an introduction to optimal design.
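As a small illustration of the optimal-design idea, the sketch below compares candidate six-run designs for a quadratic model in one factor by the D-criterion det(X'X); the designs and model are invented examples, not ones taken from the article. The endpoints-only design is singular for this model, which is exactly the kind of situation where a design criterion makes the trade-offs visible.

```python
# Comparing candidate six-run designs for a quadratic model in one factor by the
# D-criterion det(X'X); designs and model are invented for illustration.
import numpy as np

def model_matrix(x):
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones_like(x), x, x**2])   # intercept, linear, quadratic

designs = {
    "three equal levels": [-1, -1, 0, 0, 1, 1],
    "endpoints only":     [-1, -1, -1, 1, 1, 1],
    "ad hoc spread":      [-1, -0.5, 0, 0.25, 0.6, 1],
}
for name, points in designs.items():
    X = model_matrix(points)
    print(f"{name:>18}: det(X'X) = {np.linalg.det(X.T @ X):.2f}")
```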

Journal Article
TL;DR: In this article, a new method for determining the membership functions of parameter estimates and the reliability functions of multi-parameter lifetime distributions is proposed, and a preventive maintenance policy is formulated using a fuzzy reliability framework.
Abstract: Reliability assessment is an important issue in reliability engineering. Classical reliability-estimating methods are based on precise (also called “crisp”) lifetime data. It is usually assumed that the observed lifetime data take precise real numbers. Due to the lack, inaccuracy, and fluctuation of data, some collected lifetime data may be in the form of fuzzy values. Therefore, it is necessary to characterize estimation methods along a continuum that ranges from crisp to fuzzy. Bayesian methods have proved to be very useful for small data samples. There is limited literature on Bayesian reliability estimation based on fuzzy reliability data. Most reported studies in this area deal with single-parameter lifetime distributions. This article, however, proposes a new method for determining the membership functions of parameter estimates and the reliability functions of multi-parameter lifetime distributions. Also, a preventive maintenance policy is formulated using a fuzzy reliability framework. An artifici...

Journal Article
TL;DR: In this article, a mixture model is proposed to estimate the prevalence of sensitive behavior and cheating in the case of a dual sampling scheme with direct questioning and randomized response, where cheating is modelled separately for direct questions and randomized responses.
Abstract: Randomized response is a misclassification design to estimate the prevalence of sensitive behaviour. Respondents who do not follow the instructions of the design are considered to be cheating. A mixture model is proposed to estimate the prevalence of sensitive behaviour and cheating in the case of a dual sampling scheme with direct questioning and randomized response. The mixing weight is the probability of cheating, where cheating is modelled separately for direct questioning and randomized response. For Bayesian inference, Markov chain Monte Carlo sampling is applied to sample parameter values from the posterior. The model makes it possible to analyse dual sample scheme data in a unified way and to assess cheating for direct questions as well as for randomized response questions. The research is illustrated with randomized response data concerning violations of regulations for social benefit.
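For background, the sketch below simulates the classical Warner randomized-response design and its moment estimator of the sensitive prevalence. The paper's contribution, a mixture model with a cheating component for a dual scheme of direct questioning plus randomized response, fitted by Markov chain Monte Carlo, is not reproduced here.

```python
# Classical Warner randomized-response design and estimator, for background only;
# the cheating/mixture model and MCMC inference of the paper are not implemented.
import numpy as np

rng = np.random.default_rng(11)
pi_true, p_design, n = 0.15, 0.7, 2000        # true prevalence, design probability, sample size

sensitive = rng.random(n) < pi_true                        # latent sensitive attribute
asked_sensitive_statement = rng.random(n) < p_design       # randomizing device outcome
answers_yes = np.where(asked_sensitive_statement, sensitive, ~sensitive)

lam_hat = answers_yes.mean()                               # observed proportion of "yes"
pi_hat = (lam_hat - (1 - p_design)) / (2 * p_design - 1)   # Warner moment estimator
print(f"true prevalence {pi_true}, estimate {pi_hat:.3f}")
```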

Journal Article
TL;DR: In this paper, the authors demonstrate the use of maxima nomination sampling (MNS) technique in design and evaluation of single AQL, LTPD, and EQL acceptance sampling plans for attributes.
Abstract: This paper demonstrates the use of maxima nomination sampling (MNS) technique in design and evaluation of single AQL, LTPD, and EQL acceptance sampling plans for attributes. We exploit the effect of sample size and acceptance number on the performance of our proposed MNS plans using operating characteristic (OC) curve. Among other results, we show that MNS acceptance sampling plans with smaller sample size and bigger acceptance number perform better than commonly used acceptance sampling plans for attributes based on simple random sampling (SRS) technique. Indeed, MNS acceptance sampling plans result in OC curves which, compared to their SRS counterparts, are much closer to the ideal OC curve. A computer program is designed which can be used to specify the optimum MNS acceptance sampling plan and to show, visually, how the shape of the OC curve changes when parameters of the acceptance sampling plan vary. Theoretical results and numerical evaluations are given.
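The sketch below computes the OC curve of a conventional single sampling plan under simple random sampling, i.e. the probability of accepting a lot as a function of its fraction defective; the plan parameters are illustrative. The MNS plans in the paper change the sampling mechanism itself, which is not reproduced.

```python
# OC curve of a single sampling plan (n, c) under simple random sampling:
# accept when at most c defectives appear in a sample of n. Illustrative n and c.
import numpy as np
from scipy.stats import binom

n, c = 80, 2
p = np.linspace(0.0, 0.10, 11)                 # lot fraction defective
p_accept = binom.cdf(c, n, p)                  # OC curve: P(accept | p)

for frac, pa in zip(p, p_accept):
    print(f"p = {frac:.2f}  P(accept) = {pa:.3f}")
```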

Journal Article
TL;DR: In this article, the authors proposed a geoadditive sample selection model to correct for non-randomly selected data in a two-model hierarchy where, on the first level, a binary selection equation determines whether a particular observation will be available for the second level, i.e. in the outcome equation.
Abstract: Summary. Sample selection models attempt to correct for non-randomly selected data in a two-model hierarchy where, on the first level, a binary selection equation determines whether a particular observation will be available for the second level, i.e. in the outcome equation. Ignoring the non-random selection mechanism that is induced by the selection equation may result in biased estimation of the coefficients in the outcome equation. In the application that motivated this research, we analyse relief supply in earthquake-affected communities in Pakistan, where the decision to deliver goods represents the dependent variable in the selection equation whereas factors that determine the amount of goods supplied are analysed in the outcome equation. In this application, the inclusion of spatial effects is necessary since the available covariate information on the community level is rather scarce. Moreover, the high temporal dynamics underlying the immediate delivery of relief supply after a natural disaster calls for non-linear, time varying effects. We propose a geoadditive sample selection model that allows us to address these issues in a general Bayesian framework with inference being based on Markov chain Monte Carlo simulation techniques. The model proposed is studied in simulations and applied to the relief supply data from Pakistan.

Journal Article
TL;DR: The studies show that the TC-CUSUM scheme is several times more effective than many existing charts for event monitoring, so that cost or loss incurred by an event can be reduced by using this scheme.
Abstract: This article proposes a Cumulative Sum (CUSUM) scheme, called the TC-CUSUM scheme, for monitoring a negative or hazardous event. This scheme is developed using a two-dimensional Markov model. It is able to check both the time interval (T) between occurrences of the event and the size (C) of each occurrence. For example, a traffic accident may be defined as an event, and the number of injured victims in each case is the event size. Our studies show that the TC-CUSUM scheme is several times more effective than many existing charts for event monitoring, so that cost or loss incurred by an event can be reduced by using this scheme. Moreover, the TC-CUSUM scheme performs more uniformly than other charts for detecting both T shift and C shift, as well as the joint shift in T and C. The improvement in the performance is achieved because of the use of the CUSUM feature and the simultaneous monitoring of T and C. The TC-CUSUM scheme can be applied in manufacturing systems, and especially in non-manufacturing sectors (e.g. supply chain management, health-care industry, disaster management, and security control). Copyright © 2009 John Wiley & Sons, Ltd.

Journal Article
TL;DR: In this article, a control chart for nonparametric profile monitoring when within-profile data are correlated is proposed, which incorporates local linear kernel smoothing into the exponentially weighted moving average (EWMA) control scheme.
Abstract: In some applications, the quality of a process is characterized by the functional relationship between a response variable and one or more explanatory variables. Profile monitoring is for checking the stability of this relationship over time. Control charts for monitoring nonparametric profiles are useful when the relationship is too complicated to be described parametrically. Most existing control charts in the literature are for monitoring parametric profiles. They require the assumption that within-profile measurements are independent of each other, which is often invalid in practice. This article focuses on nonparametric profile monitoring when within-profile data are correlated. A novel control chart is suggested, which incorporates local linear kernel smoothing into the exponentially weighted moving average (EWMA) control scheme. In this method, within-profile correlation is described by a nonparametric mixed-effects model. Our proposed control chart is fast to compute and convenient to use. Numeric...
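The sketch below isolates the local linear kernel smoothing building block on one simulated noisy profile; the EWMA charting scheme and the nonparametric mixed-effects model for within-profile correlation are not shown, and the bandwidth is an arbitrary illustrative choice.

```python
# Local linear kernel smoothing of one simulated noisy profile (Gaussian kernel,
# arbitrary bandwidth); the EWMA scheme and mixed-effects model are not shown.
import numpy as np

def local_linear(x, y, x_eval, bandwidth):
    """Local linear kernel estimate of E[y|x] at the points in x_eval."""
    fits = []
    for x0 in x_eval:
        d = x - x0
        w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian kernel weights
        X = np.column_stack([np.ones_like(d), d])      # local intercept and slope
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        fits.append(beta[0])                           # intercept = fitted value at x0
    return np.array(fits)

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)   # one noisy profile
print(np.round(local_linear(x, y, x, bandwidth=0.08)[:5], 3))
```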

Journal Article
TL;DR: The J divergence is shown to be helpful for choosing auxiliary densities that minimize the error of the PS estimators, and these results appreciably increase the usefulness of PS.
Abstract: Estimating normalizing constants is a common and often difficult problem in statistics, and path sampling (PS) is among the most powerful methods that have been put forward to this end. Using an identity that arises in the formulation of PS, we derive expressions for the Kullback-Leibler (KL) and J divergences between two distributions from possibly different parametric families. These expressions naturally stem from PS when the geometric path is used to link the two extreme densities. We examine the use of the KL and J divergence measures in PS in a variety of model selection examples. In this context, one challenging aspect of PS is that of selecting an appropriate auxiliary density that will yield a high quality estimate of the marginal likelihood without incurring excessive computational effort. The J divergence is shown to be helpful for choosing auxiliary densities that minimize the error of the PS estimators. These results increase appreciably the usefulness of PS.
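The sketch below checks the KL and J divergences for two univariate normal densities, comparing numerical integration with the closed form for the KL divergence; the path-sampling identity and the auxiliary-density selection discussed in the paper are not implemented.

```python
# KL and J divergences for two univariate normals: numerical integration against
# the closed form for the KL divergence. Path sampling itself is not implemented.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu0, s0, mu1, s1 = 0.0, 1.0, 1.5, 2.0
p, q = norm(mu0, s0), norm(mu1, s1)

def kl_numeric(a, b):
    """KL(a || b) by numerical integration."""
    return quad(lambda x: a.pdf(x) * (a.logpdf(x) - b.logpdf(x)), -30, 30)[0]

kl_pq, kl_qp = kl_numeric(p, q), kl_numeric(q, p)
kl_closed = np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

print("KL(p||q): numeric", round(kl_pq, 4), " closed form", round(kl_closed, 4))
print("J divergence (symmetrised KL):", round(kl_pq + kl_qp, 4))
```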


Journal ArticleDOI
TL;DR: The phases, steps and tools of Design For Six Sigma, a methodology to apply the basic principles of Six Sigma in product and process development, are explained.
Abstract: This article explains the phases, steps and tools of Design For Six Sigma, a methodology to apply the basic principles of Six Sigma in product and process development.

Journal ArticleDOI
TL;DR: A Bayesian procedure to calculate posterior probabilities of active effects for unreplicated two-level factorials is proposed where the principles of effects sparsity, hierarchy, and heredity are successively considered.
Abstract: This article proposes a Bayesian procedure to calculate posterior probabilities of active effects for unreplicated two-level factorials. The results from a literature survey are used to specify individual prior probabilities for the activity of effects and the posterior probabilities are then calculated in a three-step procedure where the principles of effects sparsity, hierarchy, and heredity are successively considered. We illustrate our approach by reanalyzing experiments found in the literature.
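For context, the sketch below computes the usual effect estimates from a simulated unreplicated 2^3 factorial, which are the quantities such a Bayesian analysis starts from; the prior specification and the three-step posterior-probability calculation of the article are not reproduced.

```python
# Effect estimates from a simulated unreplicated 2^3 factorial (standard contrasts);
# the Bayesian posterior-probability calculation of the article is not reproduced.
import numpy as np
from itertools import product

levels = np.array(list(product([-1, 1], repeat=3)))      # 8 runs for factors A, B, C
A, B, C = levels.T
X = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])
names = ["A", "B", "C", "AB", "AC", "BC", "ABC"]

rng = np.random.default_rng(7)
y = 5 + 3*A + 2*B + 1.5*A*B + rng.normal(scale=0.5, size=8)   # A, B, AB truly active

effects = 2 * (X.T @ y) / len(y)              # usual 2^k effect estimates (contrast / (n/2))
for name, effect in zip(names, effects):
    print(f"{name:>4}: {effect:+.2f}")
```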

Journal Article
TL;DR: The several-sample case of the so-called nonparametric Behrens-Fisher problem in repeated measures designs is considered; even under the null hypothesis, the marginal distribution functions in the different groups may have different shapes and are not assumed to be equal.