
Showing papers in "Quality Engineering in 2003"


Journal Article
TL;DR: The posterior mean deviance is suggested as a Bayesian measure of fit or adequacy, and the contributions of individual observations to the fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages.
Abstract: Summary. We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. Using an information theoretic argument we derive a measure pD for the effective number of parameters in a model as the difference between the posterior mean of the deviance and the deviance at the posterior means of the parameters of interest. In general pD approximately corresponds to the trace of the product of Fisher's information and the posterior covariance, which in normal models is the trace of the ‘hat’ matrix projecting observations onto fitted values. Its properties in exponential families are explored. The posterior mean deviance is suggested as a Bayesian measure of fit or adequacy, and the contributions of individual observations to the fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages. Adding pD to the posterior mean deviance gives a deviance information criterion for comparing models, which is related to other information criteria and has an approximate decision theoretic justification. The procedure is illustrated in some examples, and comparisons are drawn with alternative Bayesian and classical proposals. Throughout it is emphasized that the quantities required are trivial to compute in a Markov chain Monte Carlo analysis.

763 citations
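
As a quick illustration of how these quantities fall out of MCMC output, here is a minimal sketch for a normal model with known variance; the deviance function, the array of posterior draws, and the toy data are assumptions for illustration, not part of the paper.

```python
import numpy as np

def deviance(y, mu, sigma):
    """-2 * log-likelihood for a normal model with known sigma."""
    return -2.0 * np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                         - 0.5 * ((y - mu) / sigma) ** 2)

def dic(y, mu_draws, sigma=1.0):
    """Posterior mean deviance, effective number of parameters pD, and DIC.

    mu_draws: posterior draws of the mean parameter (e.g. from MCMC).
    """
    d_draws = np.array([deviance(y, mu, sigma) for mu in mu_draws])
    d_bar = d_draws.mean()                       # posterior mean deviance
    d_hat = deviance(y, mu_draws.mean(), sigma)  # deviance at the posterior mean
    p_d = d_bar - d_hat                          # effective number of parameters
    return d_bar, p_d, d_bar + p_d               # DIC = Dbar + pD

# toy usage with stand-in posterior draws
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=2000)
print(dic(y, mu_draws))
```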


Journal Article
TL;DR: In this article, an easy-to-implement cluster-based method for identifying groups of nonhomogeneous means is proposed, which overcomes the common problem of the classical multiple-comparison methods that lead to the construction of groups that often have substantial overlap.
Abstract: This article proposes an easy-to-implement cluster-based method for identifying groups of nonhomogeneous means. The method overcomes the common problem of the classical multiple-comparison methods that lead to the construction of groups that often have substantial overlap. In addition, it solves the problem of other cluster-based methods that do not have a known level of significance and are not easy to apply. The new procedure is compared by simulation with a set of classical multiple-comparison methods and a cluster-based one. Results show that the new procedure compares quite favorably with those included in this article.

319 citations



Journal Article
TL;DR: In this article, the effects of non-normality on the statistical performance of the multivariate exponentially weighted moving average (MEWMA) control chart, and the Hotelling chi-squared chart in particular, are investigated when the charts are applied to individual observations to monitor the process mean.
Abstract: The effects of non-normality on the statistical performance of the multivariate exponentially weighted moving average (MEWMA) control chart, and the Hotelling chi-squared chart in particular, are investigated when the charts are applied to individual observations to monitor the process mean.

177 citations
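
For reference, a minimal sketch of the MEWMA statistic for individual observations in its usual form, Z_i = λ(x_i − μ0) + (1 − λ)Z_{i−1} with T²_i = Z_iᵀΣ_Z⁻¹Z_i; the smoothing constant, in-control parameters, and data below are illustrative assumptions, and the non-normality study itself is not reproduced.

```python
import numpy as np

def mewma_statistics(X, mu0, Sigma, lam=0.1):
    """T^2 statistics of the MEWMA chart for individual p-variate observations,
    using the asymptotic covariance Sigma_Z = lam/(2-lam) * Sigma."""
    Sigma_Z_inv = np.linalg.inv(lam / (2.0 - lam) * Sigma)
    z = np.zeros(X.shape[1])
    t2 = []
    for x in X:
        z = lam * (x - mu0) + (1.0 - lam) * z
        t2.append(z @ Sigma_Z_inv @ z)
    return np.array(t2)

# toy usage: bivariate in-control data; a signal occurs when T^2 exceeds a chosen limit h
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], np.eye(2), size=100)
print(mewma_statistics(X, np.zeros(2), np.eye(2))[:5])
```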


Journal Article
TL;DR: Adopting a Bayesian perspective, the Bayesian SSD problem is moved from the rather elementary models addressed in the literature to date in the direction of the wide range of hierarchical models which dominate the current Bayesian landscape.
Abstract: Sample size determination (SSD) is a crucial aspect of experimental design. Two SSD problems are considered here. The first concerns how to select a sample size to achieve specified performance with regard to one or more features of a model. Adopting a Bayesian perspective, we move the Bayesian SSD problem from the rather elementary models addressed in the literature to date in the direction of the wide range of hierarchical models which dominate the current Bayesian landscape. Our approach is generic and thus, in principle, broadly applicable. However, it requires full model specification and computationally intensive simulation, perhaps limiting it practically to simple instances of such models. Still, insight from such cases is of useful design value. In addition, we present some theoretical tools for studying performance as a function of sample size, with a variety of illustrative results. Such results provide guidance with regard to what is achievable. We also offer two examples, a survival model with censoring and a logistic regression model. The second problem concerns how to select a sample size to achieve specified separation of two models. We approach this problem by adopting a screening criterion which in turn forms a model choice criterion. This criterion is set up to choose model 1 when the value is large, model 2 when the value is small. The SSD problem then requires choosing $n_{1}$ to make the probability of selecting model 1 when model 1 is true sufficiently large and choosing $n_{2}$ to make the probability of selecting model 2 when model 2 is true sufficiently large. The required n is $\max(n_{1}, n_{2})$. Here, we again provide two illustrations. One considers separating normal errors from t errors, the other separating a common growth curve model from a model with individual growth curves.

173 citations
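
A minimal sketch of the simulation-based idea behind the first SSD problem, assuming a deliberately simple normal model with the standard noninformative prior and an interval-width performance criterion; the criterion, the design distribution for sigma, and the grid of sample sizes are illustrative assumptions, far simpler than the hierarchical examples in the paper.

```python
import numpy as np
from scipy import stats

def prob_interval_narrow_enough(n, target_width=1.0, n_sim=2000, seed=0):
    """Probability that the 95% posterior interval for a normal mean
    (noninformative prior, unknown variance) is narrower than target_width
    when the sample size is n."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        sigma = rng.gamma(2.0, 0.5)          # illustrative design draw for the noise level
        y = rng.normal(0.0, sigma, size=n)   # data simulated at that sigma
        s = y.std(ddof=1)
        width = 2 * stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)  # posterior interval width
        hits += width <= target_width
    return hits / n_sim

# choose the smallest n on a grid meeting the criterion with probability >= 0.9
for n in range(10, 301, 10):
    if prob_interval_narrow_enough(n) >= 0.9:
        print("smallest n on the grid:", n)
        break
```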


Journal ArticleDOI
TL;DR: In this paper, a generalized control chart called the generally weighted moving average (GWMA) control chart was proposed and analyzed for detecting small shifts in the mean of a process, with time varying control limits to detect start-up shifts more sensitively.
Abstract: A generalization of the exponentially weighted moving average (EWMA) control chart is proposed and analyzed. The generalized control chart we have proposed is called the generally weighted moving average (GWMA) control chart. The GWMA control chart, with time‐varying control limits to detect start‐up shifts more sensitively, performs better in detecting small shifts of the process mean. We use simulation to evaluate the average run length (ARL) properties of the EWMA control chart and GWMA control chart. An extensive comparison reveals that the GWMA control chart is more sensitive than the EWMA control chart for detecting small shifts in the mean of a process. To enhance the detection ability of the GWMA control chart, we submit the composite Shewhart‐GWMA scheme to monitor process mean. The composite Shewhart‐GWMA control chart with/without runs rules is more sensitive than the GWMA control chart in detecting small shifts of the process mean. The resulting ARLs obtained by the GWMA control chart...

148 citations
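
A minimal sketch of the GWMA statistic under its commonly cited parameterization, with weights q^((j−1)^α) − q^(j^α); the values of q, α, and the in-control mean are illustrative assumptions, and the time-varying control limits discussed above are omitted.

```python
import numpy as np

def gwma_statistics(x, mu0=0.0, q=0.9, alpha=0.7):
    """GWMA chart statistics under one common parameterization:
    G_t = sum_{j=1..t} (q**((j-1)**alpha) - q**(j**alpha)) * x[t-j] + q**(t**alpha) * mu0.
    With alpha = 1 and q = 1 - lambda this reduces to the EWMA statistic."""
    g = []
    for t in range(1, len(x) + 1):
        j = np.arange(1, t + 1)
        w = q ** ((j - 1) ** alpha) - q ** (j ** alpha)   # time-varying weights
        g.append(np.dot(w, x[t - 1::-1]) + q ** (t ** alpha) * mu0)
    return np.array(g)

# toy usage: small sustained mean shift after observation 50
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(0.5, 1, 50)])
print(gwma_statistics(x)[:5])
```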


Journal Article
TL;DR: In designing exponentially weighted moving average (EWMA) control charts, it is generally assumed that the parameters are known; in most industrial and service applications, however, the parameters are unknown, as discussed by the authors.
Abstract: [This abstract is based on the author's abstract.] In designing exponentially weighted moving average (EWMA) control charts it is generally assumed that the parameters are known. In most industrial and service applications, however, the parameters are unknown.

128 citations
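
A minimal sketch of how the estimated-parameter issue can be explored by simulation: build the EWMA chart from a Phase I estimate of the mean and standard deviation and record in-control run lengths. The Phase I sizes, smoothing constant, and limit multiplier are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ewma_run_length(mu_hat, sigma_hat, lam=0.2, L=2.86, max_n=20000, rng=None):
    """Run length of a two-sided EWMA chart built from estimated parameters,
    while the process actually stays in control at N(0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    z = mu_hat
    sigma_z = sigma_hat * np.sqrt(lam / (2 - lam))   # asymptotic EWMA standard deviation
    for t in range(1, max_n + 1):
        z = lam * rng.normal(0.0, 1.0) + (1 - lam) * z
        if abs(z - mu_hat) > L * sigma_z:
            return t
    return max_n

def in_control_arl(phase1_size=30, n_sim=1000, seed=3):
    rng = np.random.default_rng(seed)
    rls = []
    for _ in range(n_sim):
        phase1 = rng.normal(0.0, 1.0, size=phase1_size)   # Phase I reference sample
        rls.append(ewma_run_length(phase1.mean(), phase1.std(ddof=1), rng=rng))
    return np.mean(rls)

# effect of the Phase I sample size on the realized in-control ARL
print(in_control_arl(30), in_control_arl(200))
```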


Journal Article
TL;DR: In this article, a survey summarizes, classifies, and compares various existing maintenance policies for both single-unit and multi-unit systems, and relations among different maintenance policies are also addressed.
Abstract: In the past several decades, maintenance and replacement problems of deteriorating systems have been extensively studied in the literature. Thousands of maintenance and replacement models have been created. However, all these models can fall into some categories of maintenance policies: age replacement policy, random age replacement policy, block replacement policy, periodic preventive maintenance policy, failure limit policy, sequential preventive maintenance policy, repair cost limit policy, repair time limit policy, repair number counting policy, reference time policy, mixed age policy, preparedness maintenance policy, group maintenance policy, opportunistic maintenance policy, etc. Each kind of policy has different characteristics, advantages and disadvantages. This survey summarizes, classifies, and compares various existing maintenance policies for both single-unit and multi-unit systems. The emphasis is on single-unit systems. Relationships among different maintenance policies are also addressed.

123 citations
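
To make one of the policies above concrete, the classical age replacement policy picks the age T that minimizes the long-run cost rate C(T) = [c_f F(T) + c_p (1 − F(T))] / ∫₀ᵀ (1 − F(t)) dt. A minimal sketch, assuming an illustrative Weibull lifetime and cost ratio:

```python
import numpy as np
from scipy import integrate, optimize
from scipy.stats import weibull_min

cf, cp = 10.0, 1.0                    # illustrative failure vs. preventive replacement costs
life = weibull_min(c=2.5, scale=100)  # illustrative increasing-failure-rate lifetime

def cost_rate(T):
    """Long-run expected cost per unit time of replacing at age T or at failure."""
    expected_cycle_cost = cf * life.cdf(T) + cp * life.sf(T)
    expected_cycle_length, _ = integrate.quad(life.sf, 0, T)
    return expected_cycle_cost / expected_cycle_length

res = optimize.minimize_scalar(cost_rate, bounds=(1.0, 300.0), method="bounded")
print("optimal replacement age:", round(res.x, 1), "cost rate:", round(res.fun, 4))
```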


Journal Article
TL;DR: An effective approach for evaluating service quality of domestic passenger airlines by customer surveys is presented, which provides airlines with their internal and external competitive advantages, relative to competitors in terms of customer-perceived quality levels of service.
Abstract: This paper presents an effective approach for evaluating service quality of domestic passenger airlines by customer surveys. To reflect the inherent subjectiveness and imprecision of the customers' perceptions to the quality levels provided by airlines with respect to multiple service attributes, crisp survey results are represented and processed as fuzzy sets. A fuzzy multicriteria analysis (MA) model is used to formulate the evaluation problem. The model is solved by an effective algorithm which incorporates the decision maker's attitude or preference for customers' assessments on criteria weights and performance ratings. An empirical study of domestic airlines on a highly competitive route in Taiwan is conducted to demonstrate the effectiveness of the approach. The evaluation outcome provides airlines with their internal and external competitive advantages, relative to competitors in terms of customer-perceived quality levels of service.

123 citations
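
One simple way to see the fuzzy-set idea in such an evaluation is to aggregate triangular fuzzy ratings with criteria weights and defuzzify by the centroid; the linguistic scale, weights, and ratings below are illustrative assumptions and not the paper's algorithm, which also models the decision maker's attitude.

```python
import numpy as np

# triangular fuzzy numbers (l, m, u) for linguistic ratings -- illustrative scale
SCALE = {"poor": (0, 1, 3), "fair": (2, 4, 6), "good": (5, 7, 9), "excellent": (8, 9, 10)}

def fuzzy_weighted_score(ratings, weights):
    """Weighted average of triangular fuzzy ratings, defuzzified by the centroid."""
    tri = np.array([SCALE[r] for r in ratings], dtype=float)   # one row per criterion
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    agg = (w[:, None] * tri).sum(axis=0)                        # aggregated (l, m, u)
    return agg, agg.mean()                                      # centroid of a triangle = (l+m+u)/3

# illustrative airline rated on reliability, comfort, staff service, price
agg, score = fuzzy_weighted_score(["good", "fair", "excellent", "fair"],
                                  [0.4, 0.2, 0.3, 0.1])
print(agg, round(score, 2))
```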



Journal Article
TL;DR: In this paper, the authors compared ratings of banks, medical care, retail clothing stores, postal facilities, and restaurants in Germany and the United States using items from established measures of service quality, and found that German respondents had lower service expectations, and generally lower perceived service outcomes than did the American subjects.
Abstract: Marketing services internationally requires that companies become familiar with consumer attitudes in different service settings across different cultures. Using items from established measures of service quality, this study compared ratings of banks, medical care, retail clothing stores, postal facilities, and restaurants in Germany and the United States. The German respondents had lower service expectations, and generally lower perceived service outcomes, than did the American subjects. Five dimensions of service — reliability, empathy, responsiveness, assurance, and tangibles — explained 56% of overall service quality in the German sample and 69% in the American sample. Other important differences and some similarities between the samples appeared when service factors were examined across settings.

Journal Article
TL;DR: The CUSUM chart is proposed as a tool to monitor emissions data so that abnormal changes can be detected in a timely manner, and the process capability indices are proposed to evaluate environmental performance in terms of the risk of non-compliance situations arising.
Abstract: This paper builds on recent work on measuring and evaluating environmental performance of a process using statistical process control (SPC) techniques. We propose the CUSUM chart as a tool to monitor emissions data so that abnormal changes can be detected in a timely manner, and we propose using process capability indices to evaluate environmental performance in terms of the risk of non-compliance situations arising. In doing so, the paper fills an important gap in the ISO 14000 and TQEM literatures, which have focused more on environmental management systems and qualitative aspects rather than on quantitative tools. We explore how process capability indices have the potential to be useful as a risk management tool for practitioners and to help regulators execute and prioritize their enforcement efforts. Together, this should help in setting up useful guidelines for evaluating actual environmental performance against the firm's environmental objectives and targets and regulatory requirements, as well as encouraging further development and application of SPC techniques to the field of environmental quality management and data analysis.
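
A minimal sketch of the two tools proposed above: a one-sided tabular CUSUM for drift toward an upper emissions limit, and a capability index against that limit. The target, reference value k, decision interval h, and regulatory limit are illustrative assumptions.

```python
import numpy as np

def upper_cusum(x, target, k, h):
    """One-sided tabular CUSUM: C_t = max(0, C_{t-1} + x_t - (target + k)).
    Returns the CUSUM path and the indices where it exceeds the decision interval h."""
    c, path = 0.0, []
    for xi in x:
        c = max(0.0, c + xi - (target + k))
        path.append(c)
    path = np.array(path)
    return path, np.where(path > h)[0]

def capability_upper_only(x, upper_limit):
    """Capability against a one-sided upper (regulatory) limit: (USL - mean) / (3 s)."""
    return (upper_limit - np.mean(x)) / (3 * np.std(x, ddof=1))

rng = np.random.default_rng(4)
emissions = np.concatenate([rng.normal(10, 1, 60), rng.normal(11, 1, 40)])  # upward drift
path, signals = upper_cusum(emissions, target=10, k=0.5, h=5)
print("first signal at observation:", signals[0] if signals.size else None)
print("capability vs. a limit of 14:", round(capability_upper_only(emissions[:60], 14), 2))
```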

Journal Article
TL;DR: A Bayesian method based on the idea of model discrimination that uncovers the active factors is developed for designing a follow-up experiment to resolve ambiguity in fractional experiments.
Abstract: Fractional factorial, Plackett-Burman, and other multifactor designs are often effective in practice due to factor sparsity. That is, just a few of the many factors studied will have major effects. In those active factors, these designs can have high resolution. We have previously developed a Bayesian method based on the idea of model discrimination that uncovers the active factors. Sometimes, the results of a fractional experiment are ambiguous due to confounding among the possible effects, and more than one model may be consistent with the data. Within the Bayesian construct, we have developed a method for designing a follow-up experiment to resolve this ambiguity. The idea is to choose runs that allow maximum discrimination among the plausible models. This method is more general than methods that algebraically decouple aliased interactions and more appropriate than optimal design methods that require specification of a single model. The method is illustrated through examples of fractional experiments.

Journal Article
TL;DR: A two-stage Bayesian model selection strategy, able to keep all possible models under consideration while providing a level of robustness akin to Bayesian analyses incorporating noninformative priors, is proposed.
Abstract: In early stages of experimentation, one often has many candidate factors of which only few have significant influence on the response. Supersaturated designs can offer important advantages. However, standard regression techniques of fitting a prediction line using all candidate variables fail to analyze data from such designs. Stepwise regression may be used but has drawbacks as reported in the literature. A two-stage Bayesian model selection strategy, able to keep all possible models under consideration while providing a level of robustness akin to Bayesian analyses incorporating noninformative priors, is proposed. The strategy is demonstrated on a well-known dataset and compared to competing methods via simulation.

Journal Article
TL;DR: A general strategy for constructing response surface designs in multistratum unit structures and three examples are given to show the applicability of the method and to check the relationship of the final design to the choice of treatment set.
Abstract: Response surface designs are usually described as if the treatments have been completely randomized to the experimental units. However, in practice there is often a structure to the units, implying the need for blocking. If, in addition, some factors are more difficult to vary between units than others, a multistratum structure arises naturally. We present a general strategy for constructing response surface designs in multistratum unit structures. Designs are constructed stratum by stratum, starting in the highest stratum. In each stratum a prespecified treatment set for the factors applied in that stratum is arranged to be nearly orthogonal to the units in the higher strata, allowing for all the effects that have to be estimated. Three examples are given to show the applicability of the method and are also used to check the relationship of the final design to the choice of treatment set. Finally, some practical considerations in randomization are discussed.

Journal Article
TL;DR: In this article, a procedure for measuring the effect of each stage's performance on the output quality of subsequent stages including the quality of the signal product, and identifying stages in a manufacturing system where management should concentrate investments in process quality improvement is presented.
Abstract: Manufacturing systems typically contain processing and assembly stages whose output quality is significantly affected by the output quality of preceding stages in the system. This study offers and empirically validates a procedure for 1 measuring the effect of each stage's performance on the output quality of subsequent stages including the quality of the signal product, and 2 identifying stages in a manufacturing system where management should concentrate investments in process quality improvement. Our proposed procedure builds on the precedence ordering of the stages in the system and uses the information provided by correlations between the product quality measurements across stages. The starting point of our procedure is a computer executable network representation of the statistical relationships between the product quality measurements; execution automatically converts the network to a simultaneous-equations model and estimates the model parameters by the method of least squares. The parameter estimates are used to measure and rank the impact of each stage's performance on variability in intermediate stage and final product quality. We extend our work by presenting an economic model, which uses these results, to guide management in deciding on the amount of investment in process quality improvement for each stage. We report some of the findings from an extensive empirical validation of our procedure using circuit board production line data from a major electronics manufacturer. The empirical evidence presented here highlights the importance of accounting for quality linkages across stages in a identifying the sources of variation in product quality and b allocating investments in process quality improvement.
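
The core of the procedure, turning the precedence ordering into one least-squares equation per downstream stage and ranking upstream impact, can be sketched as follows; the three-stage layout and simulated measurements are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
q1 = rng.normal(0, 1, n)                          # stage 1 quality measurement
q2 = 0.8 * q1 + rng.normal(0, 0.6, n)             # stage 2 depends on stage 1
q3 = 0.5 * q1 + 0.7 * q2 + rng.normal(0, 0.5, n)  # final quality depends on both

def ols(y, X):
    """Least-squares coefficients, with an intercept column added."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]                                # drop the intercept

# one equation per downstream stage, following the precedence ordering
b2 = ols(q2, np.column_stack([q1]))
b3 = ols(q3, np.column_stack([q1, q2]))
print("stage-1 -> stage-2 effect:", b2.round(2))
print("stage-1, stage-2 -> final effects:", b3.round(2))
# scaling each coefficient by the upstream standard deviation gives a rough impact ranking
```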

Journal ArticleDOI
TL;DR: In this article, the authors present a quality function deployment (QFD) analysis of the design of school furniture in developing countries, using Costa Rica as the baseline, using a dynamic hierarchy process model for QFD to help the product development team make effective decisions in satisfying the requirements of the customer constrained by limited resources.
Abstract: This paper presents a quality function deployment (QFD) analysis of the design of school furniture in developing countries, using Costa Rica as the baseline. The dynamic hierarchy process model for QFD was used to help the product development team make effective decisions in satisfying the requirements of the customer constrained by limited resources. A number of total quality management (TQM) tools were employed during the development of the school furniture solution. A dynamic, cross-functional team organization was used. A simple form of quality function deployment was used to identify the desirable product design, safety, and service features.
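
The basic house-of-quality arithmetic behind a QFD exercise, weighting each technical characteristic by the customer-importance ratings it serves, can be sketched as follows; the requirements, weights, and 9-3-1 relationship scores are illustrative assumptions rather than the furniture study's actual matrix.

```python
import numpy as np

customer_weights = np.array([5, 4, 3, 2])          # e.g. safety, durability, comfort, cost
# rows: customer requirements, columns: technical characteristics (9 strong, 3 medium, 1 weak)
relationships = np.array([
    [9, 3, 0, 1],
    [3, 9, 1, 0],
    [1, 3, 9, 0],
    [0, 1, 3, 9],
])
technical_importance = customer_weights @ relationships
ranking = np.argsort(technical_importance)[::-1]
print("raw importance:", technical_importance)
print("priority order of technical characteristics:", ranking)
```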

Journal ArticleDOI
TL;DR: In this article, plots of various runs rules schemes are given to simplify the determination of control limits based on a desired in-control average run length (ARL0) in a Shewhart control chart.
Abstract: Runs rules are often used to increase the sensitivity of a Shewhart control chart. In this work, plots of various runs rules schemes are given to simplify the determination of control limits based on a desired in-control average run length (ARL0).
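
When plots or tables are not at hand, the in-control ARL of a Shewhart chart with an added runs rule is easy to estimate by simulation. A minimal sketch for the rule "one point beyond k1 sigma, or m consecutive points beyond k2 sigma on the same side"; the rule constants are illustrative assumptions.

```python
import numpy as np

def run_length(k1=3.0, k2=1.0, m=8, max_n=10**6, rng=None):
    """In-control run length of a Shewhart chart with an added same-side runs rule."""
    if rng is None:
        rng = np.random.default_rng()
    run_hi = run_lo = 0
    for t in range(1, max_n + 1):
        z = rng.normal()
        if abs(z) > k1:
            return t                              # ordinary Shewhart signal
        run_hi = run_hi + 1 if z > k2 else 0      # consecutive points above +k2*sigma
        run_lo = run_lo + 1 if z < -k2 else 0     # consecutive points below -k2*sigma
        if run_hi >= m or run_lo >= m:
            return t                              # runs-rule signal
    return max_n

rng = np.random.default_rng(6)
arl0 = np.mean([run_length(rng=rng) for _ in range(5000)])
print("estimated in-control ARL:", round(arl0))
```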

Journal Article
TL;DR: In this paper, a robust estimator of the mode based on densest half ranges is proposed; it has a much lower bias than shortest-half location measures while having similar robustness, which is quantified by the rejection point, the largest absolute value that is not rejected.
Abstract: Measures of location based on the shortest half sample, including the shorth and the location of the least median of squares, are more robust than the median to outliers, but less robust to contamination near the location. Although such measures can estimate the mode, the proposed estimator of the mode, based on densest half ranges, has a much lower bias while having similar robustness. Like the median, this mode estimator has the highest breakdown point possible: the estimator has meaning if less than half the sample consists of outliers. The mode is more robust than the median in that the mode estimates are unaffected by outliers, whereas the median is influenced by each outlier. Robustness in this sense is quantified by the rejection point, the largest absolute value that is not rejected, which is low for the mode but infinite for the median. Even though the median is changed less by contamination near the location than is the mode, outliers generally pose more of a problem to estimation than contamination near the location, so the mode is more robust for data that may have a large number of outliers. A robust estimator of skewness is based on this mode estimator.
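
A minimal sketch of a densest-half-range mode estimator in the spirit described above: repeatedly keep the half of the sorted sample with the smallest range. The recursion and tie handling are simplifications, not the paper's exact definition.

```python
import numpy as np

def half_range_mode(x):
    """Mode estimate via repeated densest-half selection.
    At each step keep the half of the (sorted) sample with the smallest range."""
    x = np.sort(np.asarray(x, dtype=float))
    while len(x) > 2:
        h = int(np.ceil(len(x) / 2))
        ranges = x[h - 1:] - x[:len(x) - h + 1]   # range of every window of h consecutive points
        i = int(np.argmin(ranges))                # densest half (first one on ties)
        x = x[i:i + h]
    return x.mean()

# toy usage: skewed data where gross outliers barely move the estimate
rng = np.random.default_rng(7)
data = np.concatenate([rng.lognormal(0, 0.5, 200), [50, 80, 120]])
print(round(half_range_mode(data), 3), "vs. median", round(float(np.median(data)), 3))
```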

Journal Article
TL;DR: An empirically grounded model of technology and capability transfer during acquisition implementation is developed and proposals are developed to help guide further inquiry into the dynamics of acquisition implementation processes in general and, more specifically, the process of acquiring new technologies and capabilities from other firms.
Abstract: In this study, we explore seven in-depth cases of high-technology acquisitions and develop an empirically grounded model of technology and capability transfer during acquisition implementation. We assess how the nature of the acquired firms' knowledge-based resources, as well as multiple dimensions of acquisition implementation, have both independent and interactive effects on the successful appropriation of technologies and capabilities by the acquirer. Our inquiry contributes to the growing body of research examining the transfer of knowledge both between and within organizations. Propositions are developed to help guide further inquiry into the dynamics of acquisition implementation processes in general and, more specifically, the process of acquiring new technologies and capabilities from other firms.

Journal ArticleDOI
TL;DR: In this article, some alternative techniques are described for the monitoring and control of a process that has been successfully improved; the techniques are particularly useful to Six Sigma Black Belts in dealing with high-quality processes.
Abstract: Six Sigma as a methodology for quality improvement is often presented and deployed in terms of the dpmo metric, i.e., defects per million opportunities. As the sigma level of a process improves beyond three, practical interpretation problems could arise when conventional Shewhart control charts are applied during the Control phase of the define-measure-analyze-improve-control framework. In this article, some alternative techniques are described for the monitoring and control of a process that has been successfully improved; the techniques are particularly useful to Six Sigma Black Belts in dealing with high-quality processes. The approach used would thus ensure a smooth transition from a low-sigma process management to maintenance of a high-sigma performance in the closing phase of a Six Sigma project.
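
The dpmo metric mentioned above maps to a sigma level through a normal tail probability, conventionally including the 1.5 sigma shift. A minimal sketch of the conversion:

```python
from scipy.stats import norm

def sigma_level(dpmo, shift=1.5):
    """Short-term sigma level implied by defects per million opportunities."""
    return norm.ppf(1.0 - dpmo / 1_000_000.0) + shift

def dpmo_from_sigma(sigma, shift=1.5):
    """Defects per million opportunities implied by a sigma level."""
    return 1_000_000.0 * norm.sf(sigma - shift)

print(round(sigma_level(3.4), 2))    # 3.4 dpmo is the textbook "six sigma" level
print(round(dpmo_from_sigma(3.0)))   # roughly 66,800 dpmo at three sigma
```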

Journal Article
TL;DR: In this paper, the authors argue that there may be a deeper underlying cause of the problems with teams to be found in a mental model, one of command-and-control, that has prevailed in managerial thought for years.
Abstract: If the use of teams in the workplace makes such good intuitive sense, then why do managers and workers often find team experiences frustrating and their results disappointing? Fortune, Business Week, and Industry Week all report the struggles that managers have with team implementation (Dumaine, 1994; Tully, 1995; Verespeg, 1990; Zellner, 1994). The Economist reports a study indicating that as many as seven out of ten U.S. teams fail to produce desired results (1995). Researchers, too, experience difficulty showing that people working together perform better than people working alone (Erez and Somech, 1996; Nahavandi and Aranda, 1994).

America's team troubles have been blamed on a variety of factors. Nahavandi and Aranda (1994), for example, suggest that in collectivist cultures such as in Japan, teams have been successful because individual performance is less important than group performance, conflict is avoided and conformity is expected, and workers are more accepting of management's authority than in individualistic, western cultures. Other explanations for team woes focus on inadequate team leadership and team structure.

While these explanations have merit, there may be an even more fundamental explanation for the difficulties of teams in western cultures. In our view teams often run into trouble because their members subscribe to a prevailing view of organizations that sharply limits their ability to maximize team contributions. This prevailing view, command and control, presents organizations as machine-like systems that are designed with an emphasis on order, predictability and control. The assumptions inherent in these organizations about how to think about work, how to get work done, and about the roles of managers and employees often run counter to the requirements for successful teamwork today. In particular, the unspoken image of good managers as ones that create order and exercise control may in fact thwart team performance. While managers often talk about empowerment and the use of teams as enabling individuals to influence the organization, teams are instead used as just another way to help restore order, improve predictability, and achieve specific outcomes. In one semi-conductor supplier firm, for example, managers using quality teams experienced pressure from upper management to continually identify quality barriers; this sometimes led team members to make up new barriers just to have something to report (Beyer et al., 1997). Evidence also shows that another type of workplace team, the self-managed team, can exercise concertive control, a form of group control even more powerful than the hierarchical, bureaucratic control systems that may be found elsewhere in the organization. The powerful combination of peer pressure and rational rules in the team can create a new iron cage with bars that are almost invisible to the workers (Barker, 1993).

When teams don't increase organizational productivity, we often assume that it is because of poor leadership or inappropriate team composition. In this article we argue that there may be a deeper underlying cause of the problems with teams to be found in a mental model, one of command-and-control, that has prevailed in managerial thought for years. This model, appropriate to previous times, continues to persist despite the fact that today's social and business environment becomes only more staggeringly complex, rapidly changing, and unpredictable (Bogner and Barr, 2000; Wheatley, 1992).
What is badly needed is a means to open up our thinking about teams to incorporate a model that is capable of embracing turbulence, complexity, equivocality, rapid change, and increasingly unknowable futures (Ashmos, 1997). Current literature suggests that turbulence, a perpetual state of change and ferment, has become a relatively permanent situation for organizations (Stacey, 1992; Wheatley, 1992; Bogner and Barr, 2000). …

Journal Article
TL;DR: It is found that the new model can provide significantly improved goodness-of-fit and estimation power, and optimal release policies that minimize the expected total cost subject to the reliability requirement are developed.
Abstract: This paper proposes a software reliability model that incorporates testing coverage information. Testing coverage is very important for both software developers and customers of software products. For developers, testing coverage information helps them to evaluate how much effort has been spent and how much more is needed. For customers, this information estimates the confidence of using the software product. Although research has been conducted and software reliability models have been developed, some practical issues have not been addressed. Testing coverage is one of these issues. The model is developed based on a nonhomogeneous Poisson process (NHPP) and can be used to estimate and predict the reliability of software products quantitatively. We examine the goodness-of-fit of this proposed model and present the results using several sets of software testing data. Comparisons of this model and other existing NHPP models are made. We find that the new model can provide significantly improved goodness-of-fit and estimation power. A software cost model incorporating testing coverage is also developed. Besides some traditional cost items such as testing cost and error removal cost, a risk cost due to potential faults in the uncovered code, associated with the number of demands from customers, is also included. Optimal release policies that minimize the expected total cost subject to the reliability requirement are developed.
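
The NHPP framework is easy to illustrate with the simplest mean value function, m(t) = a(1 − e^(−bt)), fitted to cumulative failure counts; the weekly data below are made-up placeholders, and the paper's own model additionally incorporates a testing-coverage function.

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative failures by time t."""
    return a * (1.0 - np.exp(-b * t))

# placeholder weekly test data: time (weeks) and cumulative failures observed
t = np.arange(1, 13, dtype=float)
cum_failures = np.array([8, 15, 21, 26, 30, 33, 36, 38, 40, 41, 42, 43], dtype=float)

(a_hat, b_hat), _ = curve_fit(mean_value, t, cum_failures, p0=(50.0, 0.2))
remaining = a_hat - mean_value(t[-1], a_hat, b_hat)
print("estimated total faults:", round(a_hat, 1), "rate:", round(b_hat, 3))
print("expected remaining faults after week 12:", round(remaining, 1))
```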

Journal ArticleDOI
TL;DR: In this paper, simulation results indicate that sample mean and variance may not be the best choice when one or both assumptions are not met, and the results further show that sample median and median absolute deviation are indeed more resistant to departures from normality and to contaminated data.
Abstract: The usual assumptions behind robust design are that the distribution of experimental data is approximately normal and that there is no major contamination due to outliers in the data. Under these assumptions, sample mean and variance are often used to estimate process mean and variance. In this article, we first show simulation results indicating that sample mean and variance may not be the best choice when one or both assumptions are not met. The results further show that sample median and median absolute deviation or sample median and interquartile range are indeed more resistant to departures from normality and to contaminated data. We then show how to incorporate this observation into robust design modeling and optimization. A case study is presented.
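
A minimal sketch of the comparison described above, classical versus robust location and spread estimates under contamination, using the usual 1.4826 and 1.349 consistency factors; the contamination level is an illustrative assumption.

```python
import numpy as np

def estimates(x):
    """Classical and robust location/spread estimates for one sample."""
    mad = 1.4826 * np.median(np.abs(x - np.median(x)))        # consistent with sigma under normality
    iqr_sd = np.subtract(*np.percentile(x, [75, 25])) / 1.349  # IQR-based sigma estimate
    return {"mean": np.mean(x), "sd": np.std(x, ddof=1),
            "median": np.median(x), "mad": mad, "iqr_sd": iqr_sd}

rng = np.random.default_rng(8)
clean = rng.normal(10, 1, 100)
contaminated = np.concatenate([clean[:95], rng.normal(25, 1, 5)])  # 5% gross outliers
for name, x in [("clean", clean), ("contaminated", contaminated)]:
    print(name, {k: round(v, 2) for k, v in estimates(x).items()})
```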

Journal Article
TL;DR: The purpose of this paper is to articulate on aspects of infrastructure reliability, in particular the notions of chance, interaction, cause and cascading, and to make the important claim that causal failures are more deleterious to infrastructure reliability than cascading failures.
Abstract: This paper is addressed to engineers and statisticians working on topics in reliability and survival analysis. It is also addressed to designers of network systems. The material here is prompted by problems of infrastructure assurance and protection. Infrastructure systems, like the internet and the power grid, comprise a web of interconnected components experiencing interacting (or dependent) failures. Such systems are prone to a paralyzing collapse caused by a succession of rapid failures; this phenomenon is referred to as "cascading failures." Assessing the reliability of an infrastructure system is a key step in its design. The purpose of this paper is to articulate on aspects of infrastructure reliability, in particular the notions of chance, interaction, cause and cascading. Following a commentary on how the term "reliability" is sometimes interpreted, the paper begins by making the argument that exchangeability is a meaningful setting for discussing interaction. We start by considering individual components and describe what it means to say that they are exchangeable. We then show how exchangeability leads us to distinguish between chance and probability. We then look at how entire networks can be exchangeable and how components within a network can be dependent. The above material, though expository, serves the useful purpose of enabling us to introduce and make precise the notions of causal and cascading failures. Classifying dependent failures as being either causal or cascading and characterizing these notions is a contribution of this paper. The others are a focus on networks and their setting in the context of exchangeability. A simple model for cascading failures closes the paper. A virtue of this model is that it enables us to make the important claim that causal failures are more deleterious to infrastructure reliability than cascading failures. This claim, being contrary to a commonly held perception of network designers and operators, is perhaps the key contribution of this paper.

Journal Article
TL;DR: In this paper, the authors introduce a new class of monitoring procedures based on the relationship between a proportional integral derivative (PID) feedback control scheme and the corresponding prediction scheme, which is obtained by applying the PID predictor to the autocorrelated data to get residuals and then monitoring the residuals.
Abstract: We introduce a new class of monitoring procedures based on the relationship between a proportional integral derivative (PID) feedback control scheme and the corresponding prediction scheme. The charts are obtained by applying the PID predictor to the autocorrelated data to get residuals and then monitoring the residuals. This class of procedures includes as special cases several charts that have been recently proposed in the literature and thus provides a unifying framework. The PID charts have three parameters that can be suitably tuned to achieve good average run length (ARL) performance for large or small mean shifts. Methods for determining chart parameters to obtain good ARL performance are discussed. Simulation studies for autoregressive moving average (1, 1) models show that PID charts are competitive with the special cause charts of Alwan and Roberts for detecting large shifts and perform better in detecting small to moderate shifts. The effects of model parameter misspecification and bias in esti...

Journal ArticleDOI
TL;DR: In this paper, the authors reported that the machine produced excessive flash on the molded part, the Bulldog, Kettering University's mascot, and that the investigation of the manufacturing process problem required an understanding of the many factors that influenced excessive flashing.
Abstract: This paper was originated by a problem that occurred in an injection-molding project. The mold design team reported that the machine produced excessive flash on the molded part, the Bulldog, Kettering University's mascot. The investigation of the manufacturing process problem required an understanding of the many factors that influenced excessive flashing. After discussion and input from the team, a simple four-factor full-factorial design with duplicate measurements was used for the experiment. The analysis revealed that factors A (pack pressure), C (injection speed), and D (screw RPM), as well as the interactions AC and CD, were significant. The settings for A, C, and D were obtained. The confirmation runs showed that the setting of A at the low level (150 psi), C at the low level (0.5 in./sec), and D at the high level (200 rpm) produced Bulldogs with zero flash.
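
The effect estimates in a 2^4 full factorial with duplicates come from simple contrasts of the run means. A minimal sketch of that computation; the response values are made-up placeholders, not the Bulldog flash data.

```python
import itertools
import numpy as np

factors = ["A", "B", "C", "D"]
runs = np.array(list(itertools.product([-1, 1], repeat=4)))   # 16 coded treatment combinations
rng = np.random.default_rng(9)
# placeholder duplicate responses (flash measurements); replace with real data
y = rng.normal(5, 1, size=(16, 2)) - 1.5 * runs[:, [0]] - 1.0 * runs[:, [2]] * runs[:, [3]]
ybar = y.mean(axis=1)

def effect(columns):
    """Estimated effect = contrast of run means for the product of the coded columns."""
    contrast = np.prod(runs[:, columns], axis=1)
    return (contrast * ybar).sum() / 8.0          # 2^(k-1) = 8 runs at each level

for name, cols in [("A", [0]), ("C", [2]), ("D", [3]), ("AC", [0, 2]), ("CD", [2, 3])]:
    print(name, round(effect(cols), 2))
```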

Journal ArticleDOI
TL;DR: In this article, the authors used Taguchi's robust design technique to optimize the gear blank casting process by using six control factors, namely, clay content, moisture content, ramming, sand particle size, metal fluidity, and gating design.
Abstract: This study demonstrates optimization of the gear blank casting process by using Taguchi's Robust Design technique. The metal casting process involves a large number of parameters affecting the various casting quality features of the product. Some of the parameters are controllable and some are uncontrollable, e.g., noise factors. In order to optimize the process, six control factors—namely, clay content, moisture content, ramming, sand particle size, metal fluidity, and gating design—were selected. Each process factor was considered at three levels. The quality characteristic selected was casting defects. The reduction in the weight of casting as compared to the target weight was taken to be proportional to the casting defects. An orthogonal array was constructed for the six factors undertaken, and performing 18 sets of experiments generated the data. The weights of the finished castings were obtained and signal-to-noise ratios were calculated by using the nominal best approach of parameter design. The av...
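
The nominal-the-best signal-to-noise ratio used in this kind of analysis is computed per run from the replicate measurements as 10*log10(ybar^2 / s^2). A minimal sketch; the replicate weights are placeholders, not the study's data.

```python
import numpy as np

def sn_nominal_the_best(y):
    """Taguchi nominal-the-best S/N ratio for one run's replicates: 10*log10(ybar^2 / s^2)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# placeholder: three replicate casting weights (kg) for two of the 18 runs
print(round(sn_nominal_the_best([4.96, 5.01, 4.98]), 2))
print(round(sn_nominal_the_best([4.80, 5.10, 4.95]), 2))
# averaging the run S/N ratios at each factor level identifies the most robust setting
```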

Journal ArticleDOI
TL;DR: In this paper, a simple way for monitoring shifts in the covariance matrix of a p-dimensional multivariate normal process distribution, $N_p(\mu, \Sigma)$, is discussed.
Abstract: In this paper, we discuss a simple way of monitoring shifts in the covariance matrix of a p-dimensional multivariate normal process distribution, $N_p(\mu, \Sigma)$. An exact method based on the chi-square distribution for constructing multivariate control limits is also shown. We illustrate the proposed procedure with an example.
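
One standard chi-square construction for covariance monitoring, not necessarily the authors' exact statistic, uses the fact that (n − 1) tr(Σ0⁻¹S) is exactly chi-square with p(n − 1) degrees of freedom when the process covariance equals Σ0. A minimal sketch with illustrative dimensions and false-alarm rate:

```python
import numpy as np
from scipy.stats import chi2

def cov_chart_statistic(subgroup, Sigma0_inv):
    """(n-1) * trace(Sigma0^{-1} S): chi-square with p*(n-1) d.f. when Sigma = Sigma0."""
    n = len(subgroup)
    S = np.cov(subgroup, rowvar=False, ddof=1)
    return (n - 1) * np.trace(Sigma0_inv @ S)

p, n, alpha = 2, 10, 0.0027                  # illustrative dimension, subgroup size, false-alarm rate
Sigma0 = np.eye(p)
ucl = chi2.ppf(1 - alpha, df=p * (n - 1))

rng = np.random.default_rng(10)
subgroup = rng.multivariate_normal(np.zeros(p), 1.5 * Sigma0, size=n)   # inflated covariance
stat = cov_chart_statistic(subgroup, np.linalg.inv(Sigma0))
print(round(stat, 1), "UCL:", round(ucl, 1), "signal:", stat > ucl)
```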

Journal Article
TL;DR: In this article, optimal sampling designs for 2- and 3-stage Bernoulli sampling are presented for experiments in which sampling is carried out in stages and the outcomes of the previous stage are available before the sampling design for the next stage is determined.
Abstract: Optimal designs are presented for experiments in which sampling is carried out in stages. There are two Bernoulli populations and it is assumed that the outcomes of the previous stage are available before the sampling design for the next stage is determined. At each stage, the design specifies the number of observations to be taken and the relative proportion to be sampled from each population. Of particular interest are 2- and 3-stage designs. To illustrate that the designs can be used for experiments of useful sample sizes, they are applied to estimation and optimization problems. Results indicate that, for problems of moderate size, published asymptotic analyses do not always represent the true behavior of the optimal stage sizes, and efficiency may be lost if the analytical results are used instead of the true optimal allocation. The exactly optimal few stage designs discussed here are generated computationally, and the examples presented indicate the ease with which this approach can be used to solve problems that present analytical difficulties. The algorithms described are flexible and provide for the accurate representation of important characteristics of the problem.