Showing papers in "Quality and Reliability Engineering International in 2016"


Journal ArticleDOI
TL;DR: Three goodness metrics (correlation, monotonicity, and robustness) are defined and combined in this paper to automatically select the most relevant degradation features; the effectiveness of the proposed method is verified by rolling element bearing degradation experiments.
Abstract: Rolling element bearings are among the most widely used and also most vulnerable components in rotating machinery. Recently, prognostics and health management of rolling element bearings has attracted growing attention in both academia and industry. However, most studies have focused on the prognostic side of bearing prognostics and health management, and little effort has been devoted to the optimal selection of degradation features. For more effective and efficient remaining useful life prediction, three goodness metrics (correlation, monotonicity, and robustness) are defined and combined in this paper to automatically select the most relevant degradation features. The effectiveness of the proposed method is verified by rolling element bearing degradation experiments. Copyright © 2015 John Wiley & Sons, Ltd.
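
As an illustration of how such goodness metrics can be computed, the sketch below uses forms commonly adopted in the prognostics literature (absolute correlation with time, sign-based monotonicity, and closeness to a smoothed trend); the paper's exact definitions and weighting may differ.

```python
# Sketch of three commonly used goodness metrics for a candidate degradation
# feature x(t). These are typical forms from the prognostics literature and
# may differ in detail from the definitions used in the paper.
import numpy as np

def goodness_metrics(t, x):
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    # Correlation: absolute Pearson correlation between feature and time.
    corr = abs(np.corrcoef(t, x)[0, 1])
    # Monotonicity: imbalance between positive and negative increments.
    dx = np.diff(x)
    mon = abs(np.sum(dx > 0) - np.sum(dx < 0)) / (len(x) - 1)
    # Robustness: closeness of the feature to its smoothed trend (a simple
    # moving average stands in for the trend extraction step here).
    trend = np.convolve(x, np.ones(5) / 5, mode="same")
    rob = np.mean(np.exp(-np.abs((x - trend) / x)))
    # Combined criterion: equal weights assumed for illustration.
    score = (corr + mon + rob) / 3.0
    return corr, mon, rob, score

# Example: a noisy, monotonically increasing feature scores highly.
rng = np.random.default_rng(0)
t = np.arange(1, 201)
x = 1.0 + 0.01 * t + 0.05 * rng.standard_normal(200)
print(goodness_metrics(t, x))
```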

154 citations


Journal ArticleDOI
TL;DR: This paper identifies barriers to SME uptake of big data analytics and recognises their complex challenge to all stakeholders, including national and international policy makers, IT, business management and data science communities.
Abstract: Big data is big news, and large companies in all sectors are making significant advances in their customer relations, product selection and development and consequent profitability through using this valuable commodity. Small and medium enterprises (SMEs) have proved themselves to be slow adopters of the new technology of big data analytics and are in danger of being left behind. In Europe, SMEs are a vital part of the economy, and the challenges they encounter need to be addressed as a matter of urgency. This paper identifies barriers to SME uptake of big data analytics and recognises their complex challenge to all stakeholders, including national and international policy makers, IT, business management and data science communities. The paper proposes a big data maturity model for SMEs as a first step towards an SME roadmap to data analytics. It considers the ‘state-of-the-art’ of IT with respect to usability and usefulness for SMEs and discusses how SMEs can overcome the barriers preventing them from adopting existing solutions. The paper then considers management perspectives and the role of maturity models in enhancing and structuring the adoption of data analytics in an organisation. The history of total quality management is reviewed to inform the core aspects of implanting a new paradigm. The paper concludes with recommendations to help SMEs develop their big data capability and enable them to continue as the engines of European industrial and business success. Copyright © 2016 John Wiley & Sons, Ltd.

138 citations


Journal ArticleDOI
TL;DR: This paper provides a tutorial on Latin hypercube design of experiments, highlighting potential reasons for its widespread use and going all the way to the pitfalls of the indiscriminate use of Latin hypercube designs.
Abstract: The growing power of computers has enabled techniques created for the design and analysis of simulations to be applied to a large spectrum of problems and to reach a high level of acceptance among practitioners. Generally, when simulations are time consuming, a surrogate model replaces the computer code in further studies (e.g., optimization, sensitivity analysis, etc.). The first step toward successful surrogate modeling and statistical analysis is the planning of the input configurations used to exercise the simulation code. Among the strategies devised for computer experiments, Latin hypercube designs have become particularly popular. This paper provides a tutorial on Latin hypercube design of experiments, highlighting potential reasons for its widespread use. The discussion starts with the early developments in optimization of the point selection and goes all the way to the pitfalls of the indiscriminate use of Latin hypercube designs. Final thoughts are given on opportunities for future research. Copyright © 2015 John Wiley & Sons, Ltd.
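
For readers unfamiliar with the construction, a minimal (non-optimized) Latin hypercube sample can be generated as below; the one-point-per-stratum property in every dimension is the defining feature the tutorial builds on.

```python
# Minimal Latin hypercube sample in [0, 1]^d: each of the n strata in every
# dimension contains exactly one point. This is the basic, non-optimized
# construction discussed at the start of the tutorial.
import numpy as np

def latin_hypercube(n, d, seed=None):
    rng = np.random.default_rng(seed)
    # One random permutation of the n strata per dimension.
    perms = np.array([rng.permutation(n) for _ in range(d)]).T  # shape (n, d)
    # One uniform draw inside the assigned stratum per point and dimension.
    u = rng.uniform(size=(n, d))
    return (perms + u) / n

X = latin_hypercube(n=10, d=3, seed=42)
print(X)   # 10 points in [0, 1]^3, one per stratum in each dimension
```

Recent SciPy versions also ship scipy.stats.qmc.LatinHypercube, which provides the same construction plus optimized variants.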

93 citations


Journal ArticleDOI
TL;DR: Time-between-events control charts detect an out-of-control situation without great loss of sensitivity compared with existing charts; this article reviews recent research in a unifying framework and provides cross-tabulations to draw precise conclusions from the statistical point of view.
Abstract: Major difficulties in the study of high-quality processes with traditional process monitoring techniques are a high false alarm rate and a negative lower control limit. The purpose of time-between-events control charts is to overcome these problems in the high-quality process monitoring setup. Time-between-events charts detect an out-of-control situation without great loss of sensitivity compared with existing charts. High-quality control charts have gained much attention over the last decade because of the technological revolution. This article is dedicated to providing an overview of recent research and presenting it in a unifying framework. To summarize results and draw precise conclusions from the statistical point of view, cross-tabulations are also given in this article. Copyright © 2016 John Wiley & Sons, Ltd.
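
As one concrete example of the charts reviewed, the probability-limit t-chart for exponentially distributed times between events yields a strictly positive lower control limit, which is exactly the difficulty with 3-sigma limits that the abstract alludes to; the limits below assume a known in-control mean θ0.

```python
# Probability-limit t-chart for exponential times between events, a common
# chart in this review area. With in-control mean time between events theta0
# and overall false alarm probability alpha, both limits are positive, unlike
# 3-sigma limits, which can go negative for highly skewed TBE data.
import math

def exponential_t_chart_limits(theta0, alpha=0.0027):
    lcl = -theta0 * math.log(1.0 - alpha / 2.0)   # P(T <= LCL) = alpha/2
    ucl = -theta0 * math.log(alpha / 2.0)         # P(T >= UCL) = alpha/2
    return lcl, ucl

lcl, ucl = exponential_t_chart_limits(theta0=100.0)
print(lcl, ucl)   # roughly 0.135 and 661 time units
```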

90 citations


Journal ArticleDOI
TL;DR: A real-world implementation in an international food company is presented, and the results of the application reveal that the proposed maintenance planning framework can effectively and efficiently be used in practice.
Abstract: A maintenance planning framework is developed in this study to reduce and stabilize the maintenance costs of manufacturing companies. The framework is based on the fuzzy technique for order preference by similarity to ideal solution (TOPSIS) and failure mode and effects analysis (FMEA) and supports maintenance planning decisions in a dynamic way. The proposed framework is general and can easily be adapted to a host of manufacturing environments in a variety of sectors. To determine the maintenance priorities of the machines, the fuzzy TOPSIS technique is employed. In this regard, the ‘risk priority number’ obtained by FMEA, together with ‘current technology’, ‘substitutability’, ‘capacity utilization’, and ‘contribution to profit’, is used as the set of criteria. The performance of the resulting maintenance plan is monitored, and the maintenance priorities of the machines are updated by the framework. To confirm the viability of the proposed framework, a real-world implementation in an international food company is presented. The results of the application reveal that the proposed maintenance planning framework can effectively and efficiently be used in practice. Copyright © 2015 John Wiley & Sons, Ltd.
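
The ranking step can be illustrated with a crisp TOPSIS sketch; the paper itself uses a fuzzy TOPSIS variant, and the machines, weights, scores, and benefit/cost directions below are purely illustrative.

```python
# Crisp TOPSIS sketch for ranking machines by maintenance priority. The paper
# uses a fuzzy TOPSIS variant; this simplified crisp version only illustrates
# the ranking mechanics. All numbers and direction choices are illustrative.
import numpy as np

# Rows = machines, columns = criteria: RPN (from FMEA), current technology,
# substitutability, capacity utilization, contribution to profit.
X = np.array([
    [320.0, 3.0, 2.0, 0.85, 0.30],
    [180.0, 4.0, 4.0, 0.60, 0.10],
    [450.0, 2.0, 1.0, 0.95, 0.45],
])
w = np.array([0.35, 0.15, 0.15, 0.15, 0.20])          # criteria weights
# True = larger value means higher maintenance priority (illustrative choices).
benefit = np.array([True, False, False, True, True])

R = X / np.linalg.norm(X, axis=0)          # vector normalization
V = R * w                                  # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus  = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti,  axis=1)
closeness = d_minus / (d_plus + d_minus)   # higher = closer to the ideal priority profile
print(np.argsort(-closeness))              # machines in maintenance priority order
```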

88 citations


Journal ArticleDOI
TL;DR: An evidential downscaling method for risk evaluation in FMEA is proposed and the results and comparison show that the proposed method is more flexible and reasonable for real applications.
Abstract: Failure mode and effects analysis (FMEA) is an engineering and management technique that is widely used to define, identify, and eliminate known or potential failures, problems, errors, and risks from the design, process, service, and so on. In a typical FMEA, the risk evaluation is determined by using the risk priority number (RPN), which is obtained by multiplying the scores of the occurrence, severity, and detection. However, because of the uncertainty in FMEA, the traditional RPN has been criticized for several shortcomings. In this paper, an evidential downscaling method for risk evaluation in FMEA is proposed. In the FMEA model, the evidential reasoning approach is utilized to express the assessments from different experts. Multi-expert assessments are transformed to a crisp value with a weighted average method. Then, the Euclidean distance from the multi-scale is applied to construct the basic belief assignments in the Dempster–Shafer evidence theory application. With the proposed method, the number of ratings is decreased from 10 to 3, and the frame of discernment is decreased from 2^10 to 2^3, which greatly decreases the computational complexity. Dempster's combination rule is utilized to aggregate the assessments of the risk factors. A numerical example illustrates how the proposed method handles risk priority evaluation in FMEA. The results and comparison show that the proposed method is more flexible and reasonable for real applications. Copyright © 2014 John Wiley & Sons, Ltd.
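
Dempster's combination rule, used in the paper to aggregate expert assessments, can be sketched as follows on the downscaled 3-rating frame; the mass values are illustrative.

```python
# Dempster's rule of combination for two basic belief assignments defined on
# a 3-rating frame {L, M, H}, matching the downscaled frame described in the
# abstract. The mass values below are illustrative.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

L, M, H = frozenset("L"), frozenset("M"), frozenset("H")
expert1 = {L: 0.1, M: 0.6, H: 0.2, frozenset("LMH"): 0.1}
expert2 = {M: 0.5, H: 0.4, frozenset("LMH"): 0.1}
print(dempster_combine(expert1, expert2))
```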

71 citations


Journal ArticleDOI
TL;DR: The distributional properties of the sample CV for multivariate data and the procedures to implement the chart are presented, and the usefulness and applicability of the proposed chart on real data are demonstrated.
Abstract: Existing charts in the literature usually monitor either the mean or the variance of the process. However, in certain scenarios, the practitioner is not interested in the changes in the mean or the variance but is instead interested in monitoring the relative variability compared with the mean. This relative variability is called the coefficient of variation (CV). In the existing literature, none of the control charts that monitor the CV are applied for multivariate data. To fill this gap in research, this paper proposes a CV chart that monitors the CV for multivariate data. To the best of the authors' knowledge, this proposed chart is the first control chart for this purpose. The distributional properties of the sample CV for multivariate data and the procedures to implement the chart are presented in this paper. Formulae to compute the control limits, the average run length, the standard deviation of the run length, and the expected average run length for the case of unknown shift size are derived. From the numerical examples provided, the effects of the number of variables, the sample size, the shift size and the in-control value of the CV are studied. Finally, we demonstrate the usefulness and applicability of the proposed chart on real data. Copyright © 2015 John Wiley & Sons, Ltd.
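
A sketch of the monitored statistic is given below, using one common definition of the multivariate sample coefficient of variation; whether this matches the paper's exact formulation is an assumption.

```python
# Sample coefficient of variation for multivariate data, using one common
# definition, gamma = (xbar' S^{-1} xbar)^(-1/2). Whether this matches the
# paper's exact formulation is an assumption.
import numpy as np

def multivariate_cv(X):
    """X: (n, p) array holding one rational subgroup of p-variate observations."""
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)            # sample covariance matrix
    return 1.0 / np.sqrt(xbar @ np.linalg.solve(S, xbar))

rng = np.random.default_rng(1)
sample = rng.multivariate_normal([10.0, 20.0], [[4.0, 1.0], [1.0, 9.0]], size=5)
print(multivariate_cv(sample))
```

In a chart of this kind, the statistic would be computed for each subgroup and plotted against limits derived from its sampling distribution.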

60 citations


Journal ArticleDOI
TL;DR: This paper presents the features and functionality of a railway transportation system, together with the principles and rules of TFTs, and demonstrates the applicability of the framework through a case study on a simple railway transportation system.
Abstract: Safety is an essential requirement for railway transportation. Many methods have been developed to predict, prevent, and mitigate accidents in this context, each with its own purpose and limitations. This paper presents a new and useful analysis technique: timed fault tree analysis. This method extends traditional fault tree analysis with temporal events and fault characteristics. Timed fault trees (TFTs) can determine which faults need to be eliminated most urgently, and they can also indicate how much time remains, at a minimum, to eliminate the root failure and so prevent an accident. They can also be used to determine the time required for railway maintenance, thereby improving maintenance efficiency and reducing risk. In this paper, we present the features and functionality of a railway transportation system together with the principles and rules of TFTs. We demonstrate the applicability of our framework through a case study on a simple railway transportation system. Copyright © 2014 John Wiley & Sons, Ltd.

51 citations


Journal ArticleDOI
Olexandr Yevkin
TL;DR: An approximate Markov chain method for dynamic fault tree analysis is suggested for both repairable and non-repairable systems; for repairable systems the approximation holds if the mean time to repair is much less than the mean time to failure.
Abstract: An approximate Markov chain method for dynamic fault tree analysis is suggested for both repairable and non-repairable systems. The approximation is based on truncation, aggregation, and elimination of Markov chain states during the transformation of the dynamic fault tree into the corresponding Markov chain. The method is valid for small probabilities; for repairable systems, this holds if the mean time to repair is much less than the mean time to failure. Several examples are studied. An additional simplification is considered for the case where the system is in a steady state. Copyright © 2015 John Wiley & Sons, Ltd.
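
The small-probability condition can be checked on the simplest possible case, a two-state repairable unit, where the exact steady-state unavailability λ/(λ+μ) is well approximated by λ/μ when the mean time to repair is much shorter than the mean time to failure:

```python
# Two-state repairable unit: a quick numeric check of the small-probability
# assumption behind the approximate Markov chain method. With failure rate
# lam = 1/MTTF and repair rate mu = 1/MTTR, the exact steady-state
# unavailability lam/(lam + mu) is close to the simpler lam/mu whenever
# MTTR << MTTF. Numbers are illustrative.
lam = 1.0 / 1000.0   # one failure per 1000 h (MTTF = 1000 h)
mu = 1.0 / 10.0      # repairs take 10 h on average (MTTR = 10 h)

exact = lam / (lam + mu)
approx = lam / mu
print(exact, approx)                 # 0.00990... vs 0.01
print(abs(exact - approx) / exact)   # relative error of about 1%
```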

49 citations


Journal ArticleDOI
TL;DR: In this work, a simulation model of the maintenance process for a wind turbine has been produced, with the aim of developing a procedure that can be used to optimise the process.
Abstract: With large expansion plans for the offshore wind turbine industry, there has never been a greater need for effective operations and maintenance. The two main problems with the current operations and maintenance of an offshore wind turbine are cost and availability. In this work, a simulation model of the maintenance process for a wind turbine has been produced, with the aim of developing a procedure that can be used to optimise the process. This initial model considers three types of maintenance (periodic, conditional, and corrective) and also considers the weather in order to determine the accessibility of the turbine. Petri nets have been designed to simulate each type of maintenance and the weather conditions. Petri nets have proved to be a very good method for modelling the maintenance process because of their dynamic modelling capability, their adaptability, and their ability to test optimisation techniques. Because of their versatility, Petri net models are developed for both the system hardware and the maintenance processes, and these are combined in an efficient and concise manner.

45 citations


Journal ArticleDOI
TL;DR: This paper presents a study of systems consisting of multi-state units connected as two performance sharing groups, and an algorithm based on the universal generating function technique is proposed to evaluate the system reliability and the expected system performance deficiency.
Abstract: Performance sharing can be widely seen in different kinds of engineering systems, such as meshed power distribution systems and interconnected data transmission systems. This paper presents a study of systems consisting of multi-state units connected as two performance sharing groups, and the suggested methodology can be adapted to the case of three or more performance sharing groups. To be more general, a system unit is allowed to belong to one single performance sharing group or to both. Each unit has a random demand to satisfy, and the units can transmit capacity to each other provided that the total performance transmitted in each performance sharing group does not surpass its maximum transmission capacity. An algorithm based on the universal generating function technique is proposed to evaluate the system reliability and the expected system performance deficiency. Copyright © 2016 John Wiley & Sons, Ltd.
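
The basic universal generating function operations underlying such algorithms can be sketched as follows; this shows only the representation and composition of multi-state units, not the paper's full two-group performance-sharing procedure.

```python
# Basic universal generating function (UGF) operations: each multi-state unit
# is represented as u(z) = sum_i p_i * z^{g_i}, stored here as a dict mapping
# performance level g to probability p. Composing two units whose capacities
# add is a convolution of these dictionaries. This illustrates only the UGF
# building blocks, not the paper's two-group performance-sharing algorithm.
from collections import defaultdict

def compose(u1, u2, op=lambda g1, g2: g1 + g2):
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

def reliability(u, demand):
    """P(total performance >= demand)."""
    return sum(p for g, p in u.items() if g >= demand)

unit1 = {0: 0.05, 50: 0.15, 100: 0.80}   # performance levels and probabilities
unit2 = {0: 0.10, 80: 0.90}
system = compose(unit1, unit2)           # capacities add
print(system)
print(reliability(system, demand=120))
```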

Journal ArticleDOI
TL;DR: Comparisons revealed that the newly proposed charts (making dual use of auxiliary information) performed significantly better compared with the existing location charts.
Abstract: In this study, a new idea has been proposed in statistical process control by making dual use of auxiliary information, that is, for ranking as well as at the estimation stage. The control charts are proposed based on regression estimator in three basic ranked set schemes, that is, ranked-set sampling (RSS), median RSS, and extreme RSS. The power of detection is used as a performance measure for evaluation and comparison of control charts. Comparisons revealed that the newly proposed charts (making dual use of auxiliary information) performed significantly better compared with the existing location charts. An illustrative example is also provided for the application of the proposed charts. Copyright © 2015 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The generalized structure mainly depends on three auxiliary information-based estimators that make dual use of auxiliary information under different sampling strategies and runs rules, three bivariate process distributions, and a variety of sampling schemes.
Abstract: During the last decade, variance control charts based on different sampling schemes have attracted research interest in the field of statistical process control. These charts used extra (auxiliary) information either for ranking of units or for estimation, rather than for both. The effectiveness of a control chart can be increased by utilizing the auxiliary information for both purposes. This article focuses on developing a generalized structure of variance control charts based on dual use of auxiliary information under different sampling strategies and runs rules. The generalized structure mainly depends on three auxiliary information-based estimators with dual use of auxiliary information, three bivariate process distributions, and a variety of sampling schemes. The performance of the proposed control charts is investigated by assessing the power curve. We observe that the proposals of the study perform better than their counterparts. An application example is also provided to address practitioners' concerns about monitoring the stability of a physicochemical parameter of groundwater. Copyright © 2015 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The simulation results in this paper show that estimating the parameters leads to excessively high false alarm rates, and as a result a large number of Phase I samples is needed to achieve the desired in-control performance of the MAEWMA chart.
Abstract: The multivariate adaptive exponentially weighted moving average (MAEWMA) control chart can detect shifts of different sizes while diminishing the inertia problem to a large extent. Although it has several advantages compared with various multivariate charts, the previous literature has not considered its performance when the parameters are estimated. In this study, the performance of the MAEWMA chart with estimated parameters is studied while accounting for practitioner-to-practitioner variation. This kind of variation occurs because different practitioners use different Phase I samples to estimate the unknown parameters. The simulation results in this paper show that estimating the parameters leads to excessively high false alarm rates, and as a result a large number of Phase I samples is needed to achieve the desired in-control performance. Using a small number of Phase I samples to estimate the parameters may result in an in-control ARL distribution that lies almost entirely below the desired value. To handle this problem, we strongly recommend the use of a bootstrap-based algorithm to adjust the control limit of the MAEWMA chart. This algorithm enables practitioners to achieve, with a certain probability, an in-control ARL that is greater than or equal to the desired value while using the available number of Phase I samples. Copyright © 2015 John Wiley & Sons, Ltd.
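
The general idea of a bootstrap-adjusted control limit can be sketched as follows; for brevity the sketch uses a plain univariate EWMA chart with an estimated mean and standard deviation rather than the MAEWMA chart, and deliberately small Monte Carlo sizes.

```python
# Sketch of a bootstrap-style control limit adjustment under estimated
# parameters. The paper applies this idea to the MAEWMA chart; this sketch
# uses a simple univariate EWMA with estimated mean and standard deviation,
# and small Monte Carlo sizes so that it runs quickly.
import numpy as np

rng = np.random.default_rng(0)
phase1 = rng.normal(10.0, 2.0, size=200)     # available Phase I sample
lam = 0.2                                    # EWMA smoothing constant
target_arl, guarantee = 200.0, 0.9           # want P(conditional ARL0 >= 200) >= 0.9

# Phase I point estimates stand in for the unknown in-control process.
mu0, sd0 = phase1.mean(), phase1.std(ddof=1)

def conditional_arl(mu_hat, sd_hat, L, n_runs=100, cap=2000):
    """In-control ARL of an EWMA chart designed with (mu_hat, sd_hat) when the
    process actually behaves like N(mu0, sd0)."""
    sd_ewma = sd_hat * np.sqrt(lam / (2.0 - lam))
    total = 0
    for _ in range(n_runs):
        z, t = mu_hat, 0
        while t < cap:
            t += 1
            z = (1 - lam) * z + lam * rng.normal(mu0, sd0)
            if abs(z - mu_hat) > L * sd_ewma:
                break
        total += t
    return total / n_runs

def adjusted_limit(grid, n_boot=20):
    """Smallest limit multiplier L on the grid meeting the ARL guarantee."""
    for L in grid:
        hits = 0
        for _ in range(n_boot):
            boot = rng.choice(phase1, size=phase1.size, replace=True)
            if conditional_arl(boot.mean(), boot.std(ddof=1), L) >= target_arl:
                hits += 1
        if hits / n_boot >= guarantee:
            return L
    return grid[-1]

print(adjusted_limit([2.8, 3.0, 3.2, 3.4]))
```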

Journal ArticleDOI
TL;DR: A model selection framework for analysing the failure data of multiple repairable units when they are working in different operational and environmental conditions is proposed.
Abstract: This paper proposes a model selection framework for analysing the failure data of multiple repairable units when they are working in different operational and environmental conditions. The paper pr ...

Journal ArticleDOI
TL;DR: The results show that the descriptive ability of PCA may be seriously affected by autocorrelation causing a need to incorporate additional principal components to maintain the model's explanatory ability.
Abstract: A basic assumption when using principal component analysis (PCA) for inferential purposes, such as in statistical process control (SPC), is that the data are independent in time. In many industrial processes, frequent sampling and process dynamics make this assumption unrealistic rendering sampled data autocorrelated (serially dependent). PCA can be used to reduce data dimensionality and to simplify multivariate SPC. Although there have been some attempts in the literature to deal with autocorrelated data in PCA, we argue that the impact of autocorrelation on PCA and PCA-based SPC is neither well understood nor properly documented. This article illustrates through simulations the impact of autocorrelation on the descriptive ability of PCA and on the monitoring performance using PCA-based SPC when autocorrelation is ignored. In the simulations, cross-correlated and autocorrelated data are generated using a stationary first-order vector autoregressive model. The results show that the descriptive ability of PCA may be seriously affected by autocorrelation causing a need to incorporate additional principal components to maintain the model's explanatory ability. When all variables have equal coefficients in a diagonal autoregressive coefficient matrix, the descriptive ability is intact, while a significant impact occurs when the variables have different degrees of autocorrelation. We also illustrate that autocorrelation may impact PCA-based SPC and cause lower false alarm rates and delayed shift detection, especially for negative autocorrelation. However, for larger shifts, the impact of autocorrelation seems rather small. © 2015 The Authors. Quality and Reliability Engineering International published by John Wiley & Sons Ltd.
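
A minimal version of such a simulation, generating cross-correlated and autocorrelated data from a stationary VAR(1) model and examining the explained variance of the principal components, might look as follows (coefficients are illustrative):

```python
# Generate cross-correlated, autocorrelated data from a stationary first-order
# vector autoregressive (VAR(1)) model and inspect how much variance the
# leading principal components explain. All coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 2000
Phi = np.diag([0.0, 0.3, 0.5, 0.7, 0.9])          # different autocorrelation per variable
Sigma = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)   # cross-correlated innovations

X = np.zeros((n, p))
for t in range(1, n):
    X[t] = Phi @ X[t - 1] + rng.multivariate_normal(np.zeros(p), Sigma)

Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
print(np.cumsum(eigvals) / eigvals.sum())   # cumulative explained variance ratio
```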

Journal ArticleDOI
TL;DR: An efficient and rational working process and a model resulting in a hierarchy of assets, based on risk analysis and cost–benefit principles, which will be ranked according to their importance for the business to meet specific goals are described.
Abstract: The purpose of this paper is to establish a basis for a criticality analysis, considered here as a prerequisite, a first required step to review the current maintenance programs, of complex in-service engineering assets. Review is understood as a reality check, a testing of whether the current maintenance activities are well aligned to actual business objectives and needs. This paper describes an efficient and rational working process and a model resulting in a hierarchy of assets, based on risk analysis and cost–benefit principles, which will be ranked according to their importance for the business to meet specific goals. Starting from a multicriteria analysis, the proposed model converts relevant criteria impacting equipment criticality into a single score presenting the criticality level. Although detailed implementation of techniques like Root Cause Failure Analysis and Reliability Centered Maintenance will be recommended for further optimization of the maintenance activities, the reasons why criticality analysis deserves the attention of engineers and maintenance and reliability managers are precisely explained here. A case study is presented to help the reader understand the process and to operationalize the model. Copyright © 2015 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Results show that the proposed PCIs could be used as the standard Cp, Cpu, Cpl, and Cpk if a short-term variance is analyzed.
Abstract: The normal process capability indices (PCIs) Cp, Cpu, Cpl, and Cpk represent the number of times that the process standard deviation fits within the specification limits. Based on this interpretation, and by using the direct relations among the parameters of the Weibull, Gumbel (minimum extreme value type I), and lognormal distributions, the Weibull and lognormal PCIs are derived in this paper. On the other hand, because the proposed PCIs Pp, Ppu, Ppl, and Ppk are derived as functions of the mean and standard deviation of the analyzed process, they have the same practical meaning as those of the normal distribution. Results show that the proposed PCIs can be used as the standard Cp, Cpu, Cpl, and Cpk when a short-term variance is analyzed. An application to a set of simulated data is presented. Copyright © 2015 John Wiley & Sons, Ltd.
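
For reference, the standard normal-theory indices that the proposed Weibull and lognormal PCIs mimic are computed as follows; the paper's distribution-specific derivations are not reproduced here.

```python
# Standard normal-theory capability indices, shown for reference. The paper
# derives analogous indices for Weibull and lognormal processes through the
# relations among their parameters; those derivations are not reproduced here.
import numpy as np

def capability(x, lsl, usl):
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    cp  = (usl - lsl) / (6.0 * sigma)
    cpu = (usl - mu) / (3.0 * sigma)
    cpl = (mu - lsl) / (3.0 * sigma)
    cpk = min(cpu, cpl)
    return cp, cpu, cpl, cpk

rng = np.random.default_rng(3)
data = rng.normal(loc=50.0, scale=1.5, size=200)
print(capability(data, lsl=44.0, usl=56.0))   # Cp near 12 / (6 * 1.5) = 1.33
```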

Journal ArticleDOI
TL;DR: A new vulnerability evaluation based on fuzzy logic is proposed; to obtain the vulnerability of a network, fuzzy logic is utilized to model the uncertain environment.
Abstract: The vulnerability of networks is not only associated with their ability to resist disturbances but also affects the stable development of the networks in the long run. In this paper, a new vulnerability evaluation based on fuzzy logic is proposed. To obtain the vulnerability of a network, fuzzy logic is utilized to model the uncertain environment. The evaluation is divided into two steps. The first is to represent the network as a graph and analyze its main properties, including average path length, edge betweenness, degree, and clustering coefficient. The second is to apply fuzzy logic to these properties, that is, to calculate deviations, design the rule database, and obtain the vulnerability. Two examples are given at the end to show the efficiency and practicability of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
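
The four graph properties named in the abstract can be computed directly, for example with networkx; the fuzzy inference step that maps their deviations to a vulnerability score is not reproduced here.

```python
# The four graph properties named in the abstract, computed with networkx on
# a small illustrative random network. The fuzzy inference step that turns
# their deviations into a vulnerability score is not reproduced here.
import networkx as nx
import numpy as np

G = nx.erdos_renyi_graph(n=30, p=0.15, seed=1)   # illustrative network

if nx.is_connected(G):
    print("average path length:", nx.average_shortest_path_length(G))
eb = nx.edge_betweenness_centrality(G)
print("max edge betweenness:", max(eb.values()))
print("mean degree:", np.mean([d for _, d in G.degree()]))
print("mean clustering coefficient:", nx.average_clustering(G))
```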

Journal ArticleDOI
TL;DR: An approach based on the U statistic is first proposed to eliminate the effect of between-profile autocorrelation of error terms in Phase-II monitoring of general linear profiles and provides significantly better results than the competing methods to detect shifts in the regression parameters.
Abstract: In this paper, an approach based on the U statistic is first proposed to eliminate the effect of between-profile autocorrelation of error terms in Phase-II monitoring of general linear profiles. Then, a control chart based on the adjusted parameter estimates is designed to monitor the parameters of the model. The performance of the proposed method is compared with the ones of some existing methods in terms of average run length for weak, moderate, and strong autocorrelation coefficients under different shift scenarios. The results show that the proposed method provides significantly better results than the competing methods to detect shifts in the regression parameters, while the competing methods perform better in detecting shifts in the standard deviation. At the end, the applicability of the proposed method is illustrated by an example. Copyright © 2015 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A new combination of GO operator, which is composed of a new logical GO operator and a new auxiliary GO operator named Type 20 operator, is created to represent standby mode and the results show that this new GO method is applicable and reasonable for reliability analysis of repairable system with standby structure.
Abstract: This paper presents a reliability analysis method for repairable systems with standby structure based on the goal oriented (GO) methodology. Firstly, a new combination of GO operators, composed of a new logical GO operator named the Type 18A operator and a new auxiliary GO operator named the Type 20 operator, is created to represent standby mode, and the availability formula of standby equipment with translation exception is deduced based on Markov process theory. Secondly, the application method of the combined GO operators for standby mode and the analysis process for repairable systems with standby structure based on the GO method are proposed. Thirdly, this new combination of GO operators is applied to the availability analysis of the hydraulic oil supply system of a power-shift steering transmission. Finally, the results obtained by the new GO method are compared with those of fault tree analysis, Monte Carlo simulation, and GO methods using the Type 2 operator and the Type 18 operator, respectively, to represent the standby mode. The comparison results show that this new GO method is applicable and reasonable for reliability analysis of repairable systems with standby structure. All in all, this paper provides guidance for reliability analysis of repairable systems with standby structure. Copyright © 2015 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A hybrid autoregressive integrated moving average–linear regression (ARIMA–LR) approach, which combines ARIMA and LR in a sequential manner, is developed because of its ability to capture seasonal trend and effects of predictors.
Abstract: An accurate forecast of patient visits in emergency departments (EDs) is one of the key challenges for health care policy makers to better allocate medical resources and service providers. In this paper, a hybrid autoregressive integrated moving average–linear regression (ARIMA–LR) approach, which combines ARIMA and LR in a sequential manner, is developed because of its ability to capture seasonal trend and effects of predictors. The forecasting performance of the hybrid approach is compared with several widely used models, generalized linear model (GLM), ARIMA, ARIMA with explanatory variables (ARIMAX), and ARIMA–artificial neural network (ANN) hybrid model, using two real-world data sets collected from hospitals in DaLian, LiaoNing Province, China. The hybrid ARIMA–LR model is shown to outperform existing models in terms of forecasting accuracy. Moreover, involving a smoothing process is found helpful in reducing the interference by holiday outliers. The proposed approach can be a competitive alternative to forecast short-term daily ED volume. Copyright © 2016 John Wiley & Sons, Ltd.
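
A sequential ARIMA-plus-regression hybrid of the kind described can be sketched as follows; the data, ARIMA order, predictors, and the exact way the regression stage is chained to ARIMA are assumptions for illustration.

```python
# Sequential ARIMA + linear regression hybrid of the kind described in the
# abstract: fit ARIMA to the daily volume series, then regress its residuals
# on calendar predictors. The data, ARIMA order, predictors, and chaining of
# the two stages are illustrative assumptions, not the paper's exact design.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 400
dates = pd.date_range("2015-01-01", periods=n, freq="D")
weekend = (dates.dayofweek >= 5).astype(float)
volume = (200 + 15 * weekend + 10 * np.sin(2 * np.pi * np.arange(n) / 7)
          + rng.normal(0, 8, n))
y = pd.Series(volume, index=dates)

# Step 1: ARIMA captures the autocorrelated/seasonal structure.
arima = ARIMA(y, order=(2, 0, 1)).fit()
resid = arima.resid

# Step 2: linear regression explains what ARIMA left over, using predictors.
X = np.column_stack([weekend])
lr = LinearRegression().fit(X, resid)

# Hybrid fitted values = ARIMA fitted values + regression correction.
hybrid_fit = arima.fittedvalues + lr.predict(X)
print(np.sqrt(np.mean((y.values - hybrid_fit) ** 2)))   # in-sample RMSE
```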

Journal ArticleDOI
TL;DR: Research of the first author was supported in part by STATOMET at the University of Pretoria, South Africa, and by the National Research Foundation through the SARChI Chair at the University of Pretoria, South Africa.
Abstract: Research of the first author was supported in part by STATOMET at the University of Pretoria, South Africa and National Research Foundation through the SARChI Chair at the University of Pretoria, South Africa.

Journal ArticleDOI
TL;DR: An exponentially weighted moving average (EWMA) control chart is proposed for monitoring exponentially distributed quality characteristics, and it is shown that the proposed control chart outperforms the MA control chart for all shift parameters.
Abstract: We propose an exponentially weighted moving average (EWMA) control chart for monitoring exponentially distributed quality characteristics. The proposed control chart first transforms the sample data to approximately normal variables, then calculates the moving average (MA) statistic for each subgroup, and finally constructs the EWMA statistic based on the current and the previous MA statistics. The upper and lower control limits are derived using the mean and the variance of the EWMA statistic. The in-control and out-of-control average run lengths are derived and tabulated according to process shift parameters and smoothing constants. It is shown that the proposed control chart outperforms the MA control chart for all shift parameters. Copyright © 2015 John Wiley & Sons, Ltd.
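
The construction described above can be sketched as follows; the power transform X^(1/3.6) is a common normalizing choice for exponential data and is assumed here, and the variance expression used for the limits is only a rough approximation (the paper derives it exactly).

```python
# Sketch of the chart construction described in the abstract: transform the
# exponential observations to near-normality (the power transform X**(1/3.6)
# is a common choice and is assumed here), form moving averages, then apply
# EWMA smoothing. The limit variance below ignores the correlation between
# overlapping moving averages, so it is only a rough stand-in for the paper's
# exact derivation.
import numpy as np

rng = np.random.default_rng(0)
theta0, lam, span, L = 1.0, 0.2, 3, 3.0

x = rng.exponential(theta0, size=300)
y = x ** (1.0 / 3.6)                      # approximate normalizing transform

# Moving average of the current and previous (span - 1) transformed values.
ma = np.convolve(y, np.ones(span) / span, mode="valid")

# EWMA of the moving averages.
z = np.zeros_like(ma)
z[0] = ma[0]
for t in range(1, len(ma)):
    z[t] = lam * ma[t] + (1 - lam) * z[t - 1]

mu_y, sd_y = y.mean(), y.std(ddof=1)             # in-control estimates
sd_z = sd_y * np.sqrt(lam / (2 - lam) / span)    # rough asymptotic std dev
print(mu_y - L * sd_z, mu_y + L * sd_z)          # approximate control limits
print(int(((z < mu_y - L * sd_z) | (z > mu_y + L * sd_z)).sum()), "signals")
```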

Journal ArticleDOI
TL;DR: A Monte Carlo simulation is developed to assess the average run lengths for in-control and out-of-control processes, and it reveals that the proposed mixed chart is more sensitive in detecting a small shift in the process.
Abstract: Control charts are widely used for process monitoring. The Shewhart control chart is able to detect large disturbances in the process, whereas the EWMA and CUSUM charts are able to detect small changes in the manufacturing process. In this paper, we design a mixed EWMA–CUSUM control chart for Weibull-distributed quality characteristics. The Weibull distribution is one of the most popular distributions used to model failure mechanisms because of the flexible selection of its shape and scale parameters. A Monte Carlo simulation is developed to assess the average run lengths for in-control and out-of-control processes. The comparison reveals that the proposed mixed chart is more sensitive in detecting a small shift in the process. Copyright © 2016 John Wiley & Sons, Ltd.
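
A Monte Carlo ARL estimator of the kind described can be sketched as follows; for brevity a one-sided CUSUM on standardized Weibull observations stands in for the mixed EWMA–CUSUM statistic, whose exact definition is not reproduced here.

```python
# Generic Monte Carlo average run length (ARL) estimator of the type used in
# the paper: generate Weibull observations, update a chart statistic, and
# record how long the chart takes to signal. A one-sided CUSUM on standardized
# data stands in for the mixed EWMA-CUSUM statistic described in the paper.
import numpy as np
from math import gamma, sqrt

# In-control Weibull parameters assumed for standardization (illustrative).
SHAPE0, SCALE0 = 1.5, 1.0
M0 = SCALE0 * gamma(1 + 1 / SHAPE0)
S0 = SCALE0 * sqrt(gamma(1 + 2 / SHAPE0) - gamma(1 + 1 / SHAPE0) ** 2)

def arl(shape, scale, k=0.5, h=5.0, runs=1000, cap=10000, seed=0):
    """Average run length of a one-sided CUSUM on standardized Weibull data."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(runs):
        c, t = 0.0, 0
        while t < cap:
            t += 1
            z = (scale * rng.weibull(shape) - M0) / S0
            c = max(0.0, c + z - k)
            if c > h:
                break
        total += t
    return total / runs

print("in-control ARL:    ", arl(SHAPE0, SCALE0))
print("scale-shifted ARL: ", arl(SHAPE0, 1.3 * SCALE0))   # shift detected sooner
```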

Journal ArticleDOI
TL;DR: This work proposes estimating the parameter from a phase I (reference) sample and studies the effects of estimation on the design and performance of the charts, focusing on the conditional run length distribution so as to incorporate the ‘practitioner-to-practitioner’ variability inherent in the estimates.
Abstract: Monitoring times between events (TBE) is an important aspect of process monitoring in many areas of application. This is especially true in the context of high-quality processes, where the defect rate is very low, and in this context control charts that monitor the TBE have been recommended in the literature as alternatives to the attribute charts that monitor the proportion of defective items produced. The Shewhart-type t-chart assuming an exponential distribution is one chart available for monitoring the TBE. The t-chart was later generalized to the tr-chart, which is based on the times between the occurrences of r (≥1) events, to improve its performance. In these charts, the in-control (IC) parameter of the distribution is assumed known. This is often not the case in practice, and the parameter has to be estimated before process monitoring and control can begin. We propose estimating the parameter from a phase I (reference) sample and study the effects of estimation on the design and performance of the charts. To this end, we focus on the conditional run length distribution so as to incorporate the ‘practitioner-to-practitioner’ variability (inherent in the estimates) that arises from different reference samples, which leads to different control limits (and hence to different IC average run length [ARL] values) and false alarm rates that can be far different from their nominal values. It is shown that the required phase I sample size needs to be considerably larger than what has typically been recommended in the literature in order to expect known-parameter performance in phase II. We also find the minimum number of phase I observations that guarantees, with a specified high probability, that the conditional IC ARL will be at least equal to a given small percentage of a nominal IC ARL. Along the same line, a lower prediction bound on the conditional IC ARL is also obtained to ensure that, for a given phase I sample, the smallest IC ARL is attained with a certain (high) probability. A summary and recommendations are given. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The SSGR CV chart surpasses the other charts under comparison, for most upward and downward CV shifts, and is compared with the Shewhart CV, runs rules CV, synthetic CV and exponentially weighted moving average CV charts by means of ARLs and standard deviation of the run lengths.
Abstract: Many quality characteristics have means and standard deviations that are not independent. Instead, the standard deviations of these quality characteristics are proportional to their corresponding means. Thus, monitoring the coefficient of variation (CV), for these quality characteristics, using a control chart has gained remarkable attention in recent years. This paper presents a side sensitive group runs chart for the CV (called the SSGR CV chart). The implementation and optimization procedures of the proposed chart are presented. Two optimization procedures are developed, i.e. (i) by minimizing the average run length (ARL) when the shift size is deterministic and (ii) by minimizing the expected average run length (EARL) when the shift size is unknown. An application of the SSGR CV chart using a real dataset is also demonstrated. Additionally, the SSGR CV chart is compared with the Shewhart CV, runs rules CV, synthetic CV and exponentially weighted moving average CV charts by means of ARLs and standard deviation of the run lengths. The performance comparison is also conducted using EARLs when the shift size is unknown. In general, the SSGR CV chart surpasses the other charts under comparison, for most upward and downward CV shifts. Copyright © 2015 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: For the phase I monitoring, a new ATS-unbiased design with unknown parameters is developed, and a sequential sampling scheme is adopted to start process monitoring as soon as possible.
Abstract: Gamma charts for time between events, which monitor the time until the rth event, are very useful for high-quality processes. The average time to signal (ATS) is adopted to evaluate the performance of Gamma charts, because it reflects both the number of samples inspected and the sampling interval until an out-of-control signal occurs. An ATS-unbiased design for Gamma charts with known parameters is proposed based on the hypothesis test of the scale parameter. For phase I monitoring, a new ATS-unbiased design with unknown parameters is developed, and a sequential sampling scheme is adopted to start process monitoring as soon as possible. Some specific guidelines for stopping the updating of the control limits are suggested based on the convergence of the width between the control limits for different phase I sample sizes. Finally, a real example is presented to demonstrate the implementation of the proposed approach. Copyright © 2015 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The origins of quality engineering are in manufacturing, where quality engineers apply basic statistical methodologies to improve the quality and productivity of products and processes.
Abstract: The origins of quality engineering are in manufacturing, where quality engineers apply basic statistical methodologies to improve the quality and productivity of products and processes. In the past ...

Journal ArticleDOI
TL;DR: In the proposed MDS sampling plan, the operating characteristic curve is derived based on the exact sampling distribution, and the performance of the proposed plan is investigated and compared with the existing variables single sampling plan.
Abstract: A good acceptance sampling plan is necessary for quality control, and different acceptance sampling plans have been developed to meet different purposes. Under a multiple dependent state (MDS) sampling plan, the lot disposition is not only based on the current sample from the lot but also on the sample results from preceding lots. This paper aims to develop a variables MDS sampling plan for normally distributed processes with two-sided specification limits. In the proposed MDS sampling plan, the operating characteristic curve is derived based on the exact sampling distribution. The performance of the proposed plan is investigated and compared with the existing variables single sampling plan. Copyright © 2015 John Wiley & Sons, Ltd.