
Showing papers in "Quality Engineering in 2008"


Journal Article
TL;DR: In this paper, the authors discuss in the context of several ongoing public health and social surveys how to develop general families of multilevel probability models that yield reasonable Bayesian inferences.
Abstract: The general principles of Bayesian data analysis imply that models for survey responses should be constructed conditional on all variables that affect the probability of inclusion and nonresponse, which are also the variables used in survey weighting and clustering. However, such models can quickly become very complicated, with potentially thousands of poststratification cells. It is then a challenge to develop general families of multilevel probability models that yield reasonable Bayesian inferences. We discuss these issues in the context of several ongoing public health and social surveys. This work is currently open-ended, and we conclude with thoughts on how research could proceed to solve these problems.

425 citations


Journal Article
TL;DR: In this article, the authors examined whether and how the quality of the employee-organization relationship (EOR) influences the relationship between employee perception of developmental human resource (HR) practices and employee outcomes.
Abstract: The purpose of the present study was to examine whether and how the quality of the employee–organization relationship (EOR) influences the relationship between employee perception of developmental human resource (HR) practices and employee outcomes. Analyses of 593 employees representing 64 local savings banks in Norway showed that four indicators of the EOR (perceived organizational support, affective organizational commitment, and procedural and interactional justice) moderated the relationship between perception of developmental HR practices and individual work performance. A strong and direct negative relationship was found between perception of developmental HR practices and turnover intention, but perceived procedural and interactional justice moderated this linkage. No support was found for a mediating role of the EOR indicators in the relationship between perception of developmental HR practices and employee outcomes. Implications and directions for future research are discussed.

391 citations




Journal Article
TL;DR: It is found that the diffusion rate of ISO standards is higher for later-adopting countries and for the later ISO 14000 standard, and accounting for cross-country influences improves both the fit and the prediction accuracy of the models.
Abstract: We study the global diffusion of ISO 9000 and ISO 14000 certification using a network diffusion framework. We start by investigating the presence and nature of contagion effects by defining alternative cross-country networks and testing their relative strength. Second, we study how the rate of diffusion differs between the two standards and between early- and later-adopting countries. Third, we identify which countries had more influence on diffusion than others. Empirically, we build a diffusion model which includes several possible cross-country contagion effects and then use Bayesian methods for estimation and model selection. Using country by year data for 56 countries and nine years, we find that accounting for cross-country influences improves both the fit and the prediction accuracy of our models. However, the specific cross-country contagion mechanism is different across the two standards. Diffusion of ISO 9000 is driven primarily by geography and bilateral trade relations, whereas that of ISO 14000 is driven primarily by geography and cultural similarity. We also find that the diffusion rate of ISO standards is higher for later-adopting countries and for the later ISO 14000 standard. We discuss several implications of our findings for the global diffusion of management standards.

185 citations


Journal Article
TL;DR: In this article, the authors present a framework where the observed events are modeled as marked point processes, with marks labeling the types of events, and the emphasis is more on modeling than on statistical inference.
Abstract: We review basic modeling approaches for failure and maintenance data from repairable systems. In particular we consider imperfect repair models, defined in terms of virtual age processes, and the trend-renewal process which extends the nonhomogeneous Poisson process and the renewal process. In the case where several systems of the same kind are observed, we show how observed covariates and unobserved heterogeneity can be included in the models. We also consider various approaches to trend testing. Modern reliability data bases usually contain information on the type of failure, the type of maintenance and so forth in addition to the failure times themselves. Basing our work on recent literature we present a framework where the observed events are modeled as marked point processes, with marks labeling the types of events. Throughout the paper the emphasis is more on modeling than on statistical inference.

177 citations


Journal Article
TL;DR: In this paper, the authors proposed a multiplicative model for modeling and forecasting within-day arrival rates to a U.S. commercial bank's call center, where Markov chain Monte Carlo sampling methods were used to estimate both latent states and model parameters.
Abstract: A call center is a centralized hub where customer and other telephone calls are handled by an organization. In today's economy, call centers have become the primary points of contact between customers and businesses. Thus accurate predictions of call arrival rates are indispensable to help call center practitioners staff their call centers efficiently and cost-effectively. This article proposes a multiplicative model for modeling and forecasting within-day arrival rates to a U.S. commercial bank's call center. Markov chain Monte Carlo sampling methods are used to estimate both latent states and model parameters. One-day-ahead density forecasts for the rates and counts are provided. The calibration of these predictive distributions is evaluated through probability integral transforms. Furthermore, 1-day-ahead forecasts comparisons with classical statistical models are given. Our predictions show significant improvements of up to 25% over these standards. A sequential Monte Carlo algorithm is also proposed ...

156 citations
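The calibration check described in this abstract can be sketched generically. The code below is an illustration of evaluating density forecasts via probability integral transforms (PITs), not the paper's model: the predictive distribution, parameter values, and sample size are all hypothetical. If the predictive density is correctly specified, the PIT values should be approximately uniform on (0, 1).

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(42)

# Hypothetical setup: the forecaster issues a N(100, 15^2) predictive
# density for each period's arrival rate, and the realized rates
# actually follow that distribution (a well-calibrated forecaster).
mu, sigma = 100.0, 15.0
realized = rng.normal(mu, sigma, size=500)

# PIT: evaluate each outcome under its predictive CDF.
pit = norm.cdf(realized, loc=mu, scale=sigma)

# A Kolmogorov-Smirnov test against uniform(0, 1) gauges calibration;
# a tiny p-value would indicate miscalibrated density forecasts.
ks_stat, p_value = kstest(pit, "uniform")
```

A miscalibrated forecaster (say, one that understates the variance) would produce PIT values piling up near 0 and 1, which the same test would flag.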


Journal ArticleDOI
TL;DR: An overview of the literature on Phase I parametric control charts for univariate variables data is presented and the joint distribution of the charting statistics is used to control the false alarm probability while designing the charts.
Abstract: An overview is given of the literature on Phase I parametric control charts for univariate variables data. When designing the charts, the joint distribution of the charting statistics is used to control the false alarm probability. An example is given…

150 citations


Journal Article
TL;DR: An effective sampling plan based on process capability index Cpk is introduced to deal with product acceptance determination for low fraction of defectives and can be used to determine the number of required inspection units, the critical acceptance value, and make reliable decisions in product acceptance.
Abstract: Acceptance sampling plans are practical tools for quality assurance applications involving quality contract on product orders. The sampling plans provide the vendor and buyer decision rules for product acceptance to meet the preset product quality requirement. With the rapid advancement of manufacturing technology, suppliers require their products to be of high quality with a very low fraction of defectives, often measured in parts per million. Unfortunately, traditional methods for calculating fraction of defectives no longer work since any sample of reasonable size probably contains no defective product items. In this paper, we introduce an effective sampling plan based on process capability index Cpk to deal with product acceptance determination for low fraction of defectives. The proposed new sampling plan is developed based on the exact sampling distribution rather than approximation. Practitioners can use the proposed method to determine the number of required inspection units, the critical acceptance value, and make reliable decisions in product acceptance.

108 citations
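As a rough illustration of the capability index at the heart of this plan, the sketch below estimates Cpk from a sample and applies a simple accept/reject rule. The specification limits, sample size, and critical value c0 are hypothetical placeholders; the paper derives the actual critical acceptance value and required number of inspection units from the exact sampling distribution, not from this naive rule.

```python
import numpy as np

def cpk(x, lsl, usl):
    """Estimate Cpk = min(USL - xbar, xbar - LSL) / (3 s)."""
    xbar, s = x.mean(), x.std(ddof=1)
    return min(usl - xbar, xbar - lsl) / (3.0 * s)

rng = np.random.default_rng(0)
sample = rng.normal(10.0, 0.1, size=100)   # simulated measurements

# Hypothetical specification limits and critical acceptance value.
c_hat = cpk(sample, lsl=9.0, usl=11.0)
c0 = 1.33
accept = c_hat >= c0                       # accept the lot if capability is high enough
```

With the simulated data above, the process sits well inside the specification limits, so the estimated Cpk comfortably exceeds the placeholder critical value.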


Journal Article
TL;DR: In this article, a self-starting control chart based on recursive residuals is proposed for monitoring linear profiles when the nominal values of the process parameters are unknown, which can detect a shift in the intercept, the slope, or the standard deviation.
Abstract: A self-starting control chart based on recursive residuals is proposed for monitoring linear profiles when the nominal values of the process parameters are unknown. This chart can detect a shift in the intercept, the slope, or the standard deviation. Because of the good properties of the plot statistics, the proposed chart can be easily designed to match any desired in-control average run length. Simulated results show that our approach has good charting performance across a range of possible shifts when the process parameters are unknown and that it is particularly useful during the start-up stage of a process.

107 citations
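A minimal numpy sketch of the recursive residuals underlying such a self-starting chart, for a simple linear profile y = b0 + b1*x (this is an illustration of the general construction, not the paper's exact charting statistic). Each residual standardizes the one-step-ahead prediction error from the least-squares fit to all earlier observations, which is what lets the chart start without known nominal parameters.

```python
import numpy as np

def recursive_residuals(x, y, warmup=2):
    """Recursive residuals for the simple linear model y = b0 + b1*x."""
    X = np.column_stack([np.ones_like(x), x])
    res = []
    for t in range(warmup, len(y)):
        Xp, yp = X[:t], y[:t]                 # history up to time t-1
        XtX_inv = np.linalg.inv(Xp.T @ Xp)
        beta = XtX_inv @ Xp.T @ yp            # least-squares fit so far
        h = X[t] @ XtX_inv @ X[t]             # leverage of the new point
        res.append((y[t] - X[t] @ beta) / np.sqrt(1.0 + h))
    return np.array(res)

x = np.arange(10, dtype=float)
y = 2.0 + 0.5 * x                  # a perfectly linear, in-control profile
r = recursive_residuals(x, y)      # all residuals are (numerically) zero
```

A shift in the intercept, slope, or noise level would show up as nonzero, inflated, or drifting recursive residuals, which is what the proposed chart monitors.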


Journal Article
TL;DR: The capability maturity model (CMM) is part of several software process improvement (SPI), six sigma, and total quality management (TQM) initiatives in organizations as discussed by the authors.
Abstract: The capability maturity model (CMM) is part of several software process improvement (SPI), six sigma, and total quality management (TQM) initiatives in organizations. SPI and continuous quality improvements are associated with better return on investment (ROI) for organizations. The purpose of this empirical research is to study the impact of the CMM on certain critical factors in information systems implementation strategy, software quality and software project performance. Our findings are that CMM levels do associate with IS implementation strategies and higher CMM levels relate to higher software quality and project performance. We also conclude that information systems (IS) implementation strategies have a significant impact on software quality and project performance. While certain IS implementation strategies - executive commitment and prototyping - have a significant impact on both software quality and project performance, training has a significant effect only on software quality, and simplicity has a significant effect only on project performance.

Journal ArticleDOI
TL;DR: An overview and review is presented of the general issues involved in healthcare, public health, and syndromic surveillance, covering existing data collection and surveillance systems, popular surveillance methods, and appropriate performance measures in healthcare and disease surveillance.
Abstract: The threats of global epidemics and bioterrorism have created a demand for research in healthcare and disease surveillance. An overview and review is given of existing data collection and surveillance systems, as well as popular methods and performance…

Journal Article
TL;DR: The authors assess the prevalence of workplace bullying in a sample of US workers, using a standardized measure of bullying (Negative Acts Questionnaire, NAQ), and compare the current study's prevalence rates with those from other bullying and aggression studies.
Abstract: This study assesses the prevalence of workplace bullying in a sample of US workers, using a standardized measure of workplace bullying (Negative Acts Questionnaire, NAQ), and compares the current study's prevalence rates with those from other bullying and aggression studies. The article opens by defining bullying as a persistent, enduring form of abuse at work and contrasting it with other negative workplace actions and interactions. Through a review of the current literature, we propose and test hypotheses regarding bullying prevalence and dynamics relative to a sample of US workers. After discussing research methods, we report on the rates of bullying in a US sample, compare these to similar studies, and analyse the negative acts that might lead to perceptions of being bullied. Based upon past conceptualizations, as well as research that suggests bullying is a phenomenon that occurs in gradations, we introduce and provide statistical evidence for the construct and impact of bullying degree. Finally, the study explores the impact of bullying on persons who witnessed but did not directly experience bullying in their jobs.

Journal ArticleDOI
TL;DR: In this paper, a gamma model with a constant coefficient of variation and a log-normal model with constant variance were compared in the analysis of data from quality improvement experiments, but neither the coefficient of variation nor the variance was analyzed.
Abstract: A gamma model with a constant coefficient of variation and a log-normal model with constant variance often give similar analyses of data. In the analysis of data from quality improvement experiments, however, neither the coefficient of variation nor the…

Journal Article
TL;DR: In this paper, a two-stage procedure is proposed for comparing two hazard rates that considers all possible alternatives, including crossing and parallel hazard rates, whereas most existing procedures consider only the alternative hypothesis with crossing hazard rates.
Abstract: Comparison of two hazard rates is important in applications that are related to times to occurrence of a specific event. Conventional comparison procedures, such as the log-rank, Gehan–Wilcoxon and Peto–Peto tests, are powerful only when the two hazard rates do not cross each other. Because crossing hazard rates are common in practice, several procedures have been proposed in the literature for comparing such rates. However, most of these procedures consider only the alternative hypothesis with crossing hazard rates; many other realistic cases, including those when the two hazard rates run parallel to each other, are excluded from consideration. We propose a two-stage procedure that considers all possible alternatives, including ones with crossing or running parallel hazard rates. To define its significance level and p-value properly, a new procedure for handling the crossing hazard rates problem is suggested, which has the property that its test statistic is asymptotically independent of the test statistic of the log-rank test. We show that the two-stage procedure, with the log-rank test and the suggested procedure for handling the crossing hazard rates problem used in its two stages, performs well in applications in comparing two hazard rates.

Journal ArticleDOI
TL;DR: The recurrence interval and measures based on the time-to-signal properties for the temporal monitoring case are compared using exponentially weighted moving average charts, cumulative sum charts, and Markov dependent signaling processes, and it is recommended that measures based on the time-to-signal properties be used when possible to evaluate the performance of surveillance schemes for ongoing monitoring.
Abstract: A review is given of various statistical performance metrics that have been used with prospective surveillance schemes, giving consideration to situations under which the metrics are most useful. Approaches and metrics used in industrial process monitoring…
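One of the metrics compared in this review, the time to signal, can be illustrated with a basic EWMA chart. The sketch below is generic, with illustrative parameter values (lambda, L, and the shift size are not taken from the paper); it uses the standard time-varying EWMA control limits and reports when the chart first signals.

```python
import numpy as np

def ewma_signal_time(x, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """Return the 1-based index of the first out-of-control signal, or None."""
    z = mu0
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1.0 - lam) * z
        # Time-varying variance of the EWMA statistic under control.
        var_z = sigma**2 * lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t))
        if abs(z - mu0) > L * np.sqrt(var_z):
            return t
    return None

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 20),    # in control
                       rng.normal(1.5, 1.0, 30)])   # mean shift at obs 21
t_signal = ewma_signal_time(data)
```

Averaging such signal times over many simulated runs gives time-to-signal summaries (e.g., conditional expected delay), the kind of metric the review recommends over recurrence intervals for ongoing monitoring.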

Journal Article
TL;DR: This paper considers workforce management in repair/maintenance environments in which repairmen are cross-trained to attend more than one type of machine and introduces simple repairman-assignment rules as well as machine-priority rules that are effective in minimizing the machine downtime costs, or balancing the percentage of working machines of different types.
Abstract: In this paper we consider workforce management in repair/maintenance environments in which repairmen are cross-trained to attend more than one type of machine. In this context, we study the machine-repairman problem with heterogeneous machines but with partially cross-trained repairmen. We introduce simple repairman-assignment rules as well as machine-priority rules that are effective in minimizing the machine downtime costs, or balancing the percentage of working machines of different types. We show that static machine priority rules are effective in minimizing systems downtime costs, while a generalized version of the longest queue policy is effective in balancing the percentage of working machines. We also introduce the concept of hidden symmetry in repair environments, and show that the well-known chain repairman skill set structure performs very well in repair environments with hidden symmetry. Finally, we provide insights into the design and control issues of repair/maintenance systems with cross-trained repairmen.

Journal Article
TL;DR: In this paper, an empirical study using the means-end approach and two laddering techniques (personal interviews and laddering questionnaires) gives a valuable first insight into the desired qualities of lecturers.
Abstract: The study aims to develop a deeper understanding of the teaching qualities of effective lecturers that students desire and to uncover the constructs that underlie these desire expectations to reveal the underlying benefits that students look for. An empirical study using the means–end approach and two laddering techniques (personal interviews and laddering questionnaires) gives a valuable first insight into the desired qualities of lecturers. While the personal laddering interviews produced more depth in understanding, the results of the two laddering methods are broadly similar. The study results indicate that students want lecturers to be knowledgeable, enthusiastic, approachable, and friendly. Students predominantly want to encounter valuable teaching experiences to be able to pass tests and to be prepared for their profession. This study also shows that students' academic interests motivate them less than the vocational aspects of their studies. © 2007 Published by Elsevier Inc.

Journal ArticleDOI
TL;DR: This paper compares the performance of two new directionally sensitive multivariate methods, based on the multivariate CUSUM (MCUSUM) and theMultivariate exponentially weighted moving average (MEWMA) for biosurveillance, and finds that they perform very similarly.
Abstract: The performance of two directionally sensitive multivariate methods not currently used for biosurveillance is compared. The analysis is based on simulations using synthetic biosurveillance data mimicking disease incidence and outbreaks. Multivariate CUSUM…

Journal Article
TL;DR: In this paper, a four-part decomposition of the key estimation errors in causal inference is presented, which can help scholars from different experimental and observational research traditions to understand better each other's inferential problems and attempted solutions.
Abstract: We attempt to clarify, and suggest how to avoid, several serious misunderstandings about and fallacies of causal inference. These issues concern some of the most fundamental advantages and disadvantages of each basic research design. Problems include improper use of hypothesis tests for covariate balance between the treated and control groups, and the consequences of using randomization, blocking before randomization and matching after assignment of treatment to achieve covariate balance. Applied researchers in a wide range of scientific disciplines seem to fall prey to one or more of these fallacies and as a result make suboptimal design or analysis choices. To clarify these points, we derive a new four-part decomposition of the key estimation errors in making causal inferences. We then show how this decomposition can help scholars from different experimental and observational research traditions to understand better each other's inferential problems and attempted solutions.

Journal Article
TL;DR: It is demonstrated that the inclusion of this and other forms of expert judgment can improve the performance of the DEA tool in the sense that the efficiency scores are more in line with expert/management beliefs.
Abstract: This paper presents an improved efficiency measurement tool by modifying the existing data envelopment analysis methodology to permit the incorporation of expert knowledge. A previous paper examined the inclusion of such knowledge within the additive model. This information appeared in the form of a binary classification of a subset of the decision making units under study (e.g. good versus poor performers). In the current paper, we extend this logic to the input-oriented radial projection model. We demonstrate that the inclusion of this and other forms of expert judgment can improve the performance of the DEA tool in the sense that the efficiency scores are more in line with expert/management beliefs.

Journal Article
TL;DR: In this paper, three test statistics, Nk, Sk and Ak, based on one-cycle ranked set sampling (RSS), are defined, all of which are associated with the ordered ranked set sample (ORSS).
Abstract: A lot of research on ranked set sampling (RSS) is based on the assumption that the ranking is perfect. Hence, it is necessary to develop some tests that could be used to validate this assumption of perfect ranking. In this paper, we introduce some simple nonparametric methods for this purpose. We specifically define three test statistics, Nk, Sk and Ak, based on one-cycle RSS, which are all associated with the ordered ranked set sample (ORSS). We then derive the exact null distributions and exact power functions of all these tests. Next, by using the sum or the maximum of each statistic over all cycles, we propose six test statistics for the case of multi-cycle RSS. We compare the performance of all these tests with that of the Kolmogorov–Smirnov test statistic proposed earlier by Stokes and Sager [1988. Characterization of a ranked-set sample with application to estimating distribution functions. J. Amer. Statist. Assoc. 83, 35–42] and show that all proposed test statistics are more powerful. Finally, we present an example to illustrate the test procedures discussed here.

Journal ArticleDOI
TL;DR: In this article, the authors describe Quality Function Deployment (QFD) as a Six Sigma tool that proactively translates customers' needs into technical design requirements, which is used to address design problems.
Abstract: Many organizations today implement Design for Six Sigma (DFSS) to address design problems. Quality function deployment (QFD) is a Six Sigma tool that proactively translates customers' needs into technical design requirements. As yet, few researchers have…



Journal Article
TL;DR: Long-term experiments are commonly used tools in agronomy, soil science and other disciplines for comparing the effects of different treatment regimes over an extended length of time and recommendations are made for improving statistical analysis and interpretation in the presence of extra random variations.
Abstract: Long-term experiments are commonly used tools in agronomy, soil science and other disciplines for comparing the effects of different treatment regimes over an extended length of time. Periodic measurements, typically annual, are taken on experimental units and are often analysed by using customary tools and models for repeated measures. These models contain nothing that accounts for the random environmental variations that typically affect all experimental units simultaneously and can alter treatment effects. This added variability can dominate that from all other sources and can adversely influence the results of a statistical analysis and interfere with its interpretation. The effect that this has on the standard repeated measures analysis is quantified by using an alternative model that allows for random variations over time. This model, however, is not useful for analysis because the random effects are confounded with fixed effects that are already in the repeated measures model. Possible solutions are reviewed and recommendations are made for improving statistical analysis and interpretation in the presence of these extra random variations. Copyright 2007 Royal Statistical Society.

Journal Article
TL;DR: In this article, the authors explore the nature of optimal investments in the security of simple series and parallel systems and find closed-form results for systems with moderately general structures, under the assumption that the cost of an attack against any given component increases linearly in the amount of defensive investment in that component.
Abstract: Recent results have used game theory to explore the nature of optimal investments in the security of simple series and parallel systems. However, it is clearly important in practice to extend these simple security models to more complicated system structures with both parallel and series subsystems (and, eventually, to more general networked systems). The purpose of this paper is to begin to address this challenge. While achieving fully general results is likely to be difficult, and may require heuristic approaches, we are able to find closed-form results for systems with moderately general structures, under the assumption that the cost of an attack against any given component increases linearly in the amount of defensive investment in that component. These results have interesting and sometimes counterintuitive implications for the nature of optimal investments in security.


Journal ArticleDOI
TL;DR: In this paper, the authors revisited and extended Fisher's original argument that the need for statistical control as a prerequisite for conducting industrial experiments is misconceived, and they demonstrated that this issue may help quality practitioners identify new and wider opportunities for the use of designed experiments in industrial practice.
Abstract: Fisher demonstrated three quarters of a century ago that the three key concepts of randomization, blocking, and replication make it possible to conduct experiments on processes that are not necessarily in a state of statistical control. However, even today there persists confusion about whether statistical control is a necessary prerequisite for conducting valid experiments in industry. In this article we revisit and extend Fisher's original argument. Reusing his 1925 examples, we demonstrate that the need for statistical control as a prerequisite for conducting industrial experiments is misconceived. Clarifying this issue may help quality practitioners identify new and wider opportunities for the use of designed experiments in industrial practice.
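Fisher's three concepts can be made concrete with a toy randomized complete block design: replication (several blocks), blocking (each treatment appears once per block), and randomization (run order shuffled independently within each block). The treatment labels and block count below are hypothetical.

```python
import random

def randomized_block_design(treatments, n_blocks, rng):
    """Assign each treatment once per block, in independently random order."""
    design = {}
    for b in range(1, n_blocks + 1):
        order = list(treatments)
        rng.shuffle(order)               # randomization, restricted to the block
        design[f"block_{b}"] = order
    return design

plan = randomized_block_design(["A", "B", "C"], n_blocks=4,
                               rng=random.Random(2008))
```

Because randomization happens within each block, slow drifts between blocks (a process not in statistical control) are absorbed by the block effects rather than biasing treatment comparisons, which is the crux of the article's argument.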

Journal ArticleDOI
TL;DR: In this paper, a Shewhart-type control chart is proposed for monitoring changes in the process variability of a bivariate process, based on the generalized Gini mean differences.
Abstract: A Shewhart-type control chart is proposed for monitoring changes in the process variability of a bivariate process. The chart is based on the generalized Gini mean differences. The design structure of the proposed chart is developed assuming bivariate normality…
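The univariate building block of this chart can be illustrated directly: the Gini mean difference is the average absolute difference over all pairs of observations. The sketch below shows only this basic statistic; the paper's generalized bivariate version is not reproduced here.

```python
from itertools import combinations

def gini_mean_difference(x):
    """Average |xi - xj| over all unordered pairs of observations."""
    pairs = list(combinations(x, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

g = gini_mean_difference([1.0, 2.0, 3.0])
```

Like the sample range or standard deviation, this statistic grows with process dispersion, so plotting it per subgroup against suitable control limits yields a Shewhart-type variability chart.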