
Showing papers in "Quality Engineering in 2005"


Journal Article
TL;DR: This analysis shows a tendency for the data to lie deterministically at the vertices of a regular simplex, which means all the randomness in the data appears only as a random rotation of this simplex.
Abstract: High dimension, low sample size data are emerging in various areas of science. We find a common structure underlying many such data sets by using a non-standard type of asymptotics: the dimension tends to ∞ while the sample size is fixed. Our analysis shows a tendency for the data to lie deterministically at the vertices of a regular simplex. Essentially all the randomness in the data appears only as a random rotation of this simplex. This geometric representation is used to obtain several new statistical insights. Copyright 2005 Royal Statistical Society.
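For orientation, the kind of limit behind this geometric representation can be sketched in a standard special case (i.i.d. observations with independent standardized coordinates of variance sigma^2; this is illustrative, not the paper's most general conditions):

    \[
      \frac{\lVert X_i \rVert}{\sqrt{d}} \xrightarrow{P} \sigma,
      \qquad
      \frac{\lVert X_i - X_j \rVert}{\sqrt{d}} \xrightarrow{P} \sigma\sqrt{2}
      \quad (i \neq j),
      \qquad d \to \infty,\ n \text{ fixed.}
    \]

After rescaling by sqrt(d) the n observations are, in the limit, mutually equidistant and orthogonal as seen from the origin, i.e. they sit at the vertices of a regular simplex whose orientation is random.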

421 citations


Journal Article
TL;DR: In this paper, the current state of nonparametric Bayesian inference is reviewed, and important statistical inference problems, including density estimation, regression, survival analysis, hierarchical models, and model validation, are discussed.
Abstract: We review the current state of nonparametric Bayesian inference. The discussion follows a list of important statistical inference problems, including density estimation, regression, survival analysis, hierarchical models and model validation. For each inference problem we review relevant nonparametric Bayesian models and approaches including Dirichlet process (DP) models and variations, Polya trees, wavelet based models, neural network models, spline regression, CART, dependent DP models and model validation with DP and Polya tree extensions of parametric models.
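As a concrete illustration of one building block mentioned here, the following is a minimal sketch of the truncated stick-breaking construction of a Dirichlet process (the standard normal base measure and truncation level K are illustrative choices, not taken from the review):

    # Truncated stick-breaking sketch of a draw from DP(alpha, G0), G0 = N(0, 1).
    import numpy as np

    def stick_breaking_dp(alpha=1.0, base_draw=np.random.standard_normal, K=1000, rng=None):
        rng = rng or np.random.default_rng()
        betas = rng.beta(1.0, alpha, size=K)                    # stick-breaking proportions
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
        weights = betas * remaining                             # mixture weights (sum close to 1)
        atoms = base_draw(K)                                    # atom locations drawn from G0
        return weights, atoms

    w, theta = stick_breaking_dp(alpha=2.0)
    print(w[:5], theta[:5])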

413 citations


Journal Article
TL;DR: Results suggest that both service and merchandise quality exert significant influence on store performance, measured by sales growth and customer growth, and their impact is mediated by customer satisfaction.

259 citations


Journal Article
TL;DR: In this paper, the authors developed methods for fitting spatial models to line transect data, allowing animal density to be related to topographical, environmental, habitat, and other spatial variables, helping wildlife managers to identify the factors that affect abundance.
Abstract: This article develops methods for fitting spatial models to line transect data. These allow animal density to be related to topographical, environmental, habitat, and other spatial variables, helping wildlife managers to identify the factors that affect abundance. They also enable estimation of abundance for any subarea of interest within the surveyed region, and potentially yield estimates of abundance from sightings surveys for which the survey design could not be randomized, such as surveys conducted from platforms of opportunity. The methods are illustrated through analyses of data from a shipboard sightings survey of minke whales in the Antarctic.
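A bare-bones sketch of how such a spatial density model is often set up in practice: a Poisson count model for detections per transect segment with the searched area as an offset. The covariate names, toy data, and use of statsmodels are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # One row per transect segment: detection count, effective searched area,
    # and hypothetical spatial covariates.
    seg = pd.DataFrame({
        "count": [0, 2, 1, 0, 3, 1],
        "area":  [1.2, 1.0, 0.8, 1.1, 0.9, 1.0],
        "depth": [200, 350, 500, 150, 420, 380],
        "sst":   [1.5, 0.8, 0.2, 1.9, 0.5, 0.7],
    })

    X = sm.add_constant(seg[["depth", "sst"]])
    fit = sm.GLM(seg["count"], X, family=sm.families.Poisson(),
                 offset=np.log(seg["area"])).fit()
    # Abundance for any sub-region follows by summing predicted densities over a grid.
    print(fit.params)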

249 citations


Journal Article
TL;DR: In this article, the authors proposed and tested a model of library success that shows how information service quality relates to other variables associated with success, and found that service quality is an important factor in success.
Abstract: This study proposes and tests a model of library success that shows how information service quality relates to other variables associated with success. If service quality affects success, then it should be possible to compare service quality to other variables believed to affect success. A modified version of the SERVQUAL instrument was evaluated to determine how effectively it measures service quality within the information service industry. Instruments designed to measure information center success and information system success were evaluated to determine how effectively they measure success in the library system application and how they relate to SERVQUAL. Responses from 385 end users at two US Army Corps of Engineers libraries were obtained through a mail survey. Results indicate that service quality is best measured with a performance-based version of SERVQUAL, and that measuring importance may be as critical as measuring expectations for management purposes. Results also indicate that service quality is an important factor in success. The findings have implications for the development of new instruments to more effectively measure information service quality and information service success as well as for the development of new models that better show the relationship between information service quality and information service success.
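For readers unfamiliar with the scoring, the distinction between gap-based SERVQUAL and a performance-based version can be shown in a few lines (item names and ratings are made up; a sketch, not the study's instrument):

    # Gap-based SERVQUAL scores an item as perception minus expectation;
    # a performance-based version (often called SERVPERF) uses perception alone.
    expectations = [6.5, 6.8, 6.2]     # hypothetical 7-point ratings per item
    perceptions  = [5.9, 6.4, 6.5]

    gap_scores  = [p - e for p, e in zip(perceptions, expectations)]
    perf_scores = perceptions
    print(gap_scores, perf_scores)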

209 citations


Journal Article
TL;DR: In this article, a diffusion-neural network (DNN) is proposed for learning from a small sample consisting of only a few patterns; the network is trained on derived patterns instead of the original patterns.
Abstract: Neural information processing models largely assume that the patterns available for training a neural network are sufficient. Otherwise, there must exist a non-negligible error between the real function and the function estimated from a trained network. To reduce this error, we suggest a diffusion-neural-network (DNN) that can learn from a small sample consisting of only a few patterns. A DNN with more nodes in the input and hidden layers is trained on derived patterns instead of the original patterns. We give an example showing how to construct a DNN for recognizing a non-linear function; in this case, the DNN's error is less than that of the conventional BP network by about 48%. To substantiate the arguments beyond this special case, we also study two other non-linear functions by simulation. The results show that the DNN model is very effective when the target function is strongly non-linear or the given sample is very small.
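The "derived patterns" idea can be illustrated with a toy sketch in which each original pattern is diffused with a Gaussian kernel to create extra training pairs. The bandwidth h, the one-dimensional inputs, and the reuse of the original target are illustrative assumptions, not the authors' exact information-diffusion construction.

    import numpy as np

    def derive_patterns(x, y, n_derived=10, h=0.1, rng=None):
        # x, y: 1-D arrays of inputs and targets from the small original sample.
        rng = rng or np.random.default_rng(0)
        xd, yd = [], []
        for xi, yi in zip(x, y):
            eps = rng.normal(0.0, h, size=n_derived)   # diffuse each input pattern
            xd.append(xi + eps)
            yd.append(np.full(n_derived, yi))          # reuse the original target
        return np.concatenate(xd), np.concatenate(yd)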

142 citations


Journal Article
TL;DR: Object-oriented design metrics covering inheritance, complexity, cohesion, coupling, and memory allocation measures are used as the independent variables; the GRNN model is found to predict more accurately than the Ward network model.
Abstract: This paper presents the application of neural networks to software quality estimation using object-oriented metrics. Two investigations are performed: the first on predicting the number of defects in a class, and the second on predicting the number of lines changed per class. Two neural network models are used: the Ward neural network and the general regression neural network (GRNN). Object-oriented design metrics covering inheritance-related measures, complexity measures, cohesion measures, coupling measures, and memory allocation measures are used as the independent variables. The GRNN model is found to predict more accurately than the Ward network model.
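For context, a general regression neural network amounts to kernel-weighted averaging of the training outputs. A minimal sketch of the prediction step, with an assumed smoothing parameter sigma, is:

    import numpy as np

    def grnn_predict(X_train, y_train, X_new, sigma=0.5):
        preds = []
        for x in np.atleast_2d(X_new):
            d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to training metrics
            w = np.exp(-d2 / (2.0 * sigma ** 2))          # pattern-layer activations
            preds.append(np.dot(w, y_train) / np.sum(w))  # weighted average of training outputs
        return np.array(preds)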

133 citations


Journal Article
TL;DR: In this article, a control chart for detecting shifts in the variance of a process is developed for the case where the nominal value of the variance is unknown, which avoids the need for a lengthy Phase I data-gathering step before charting can begin.
Abstract: A control chart for detecting shifts in the variance of a process is developed for the case where the nominal value of the variance is unknown. As our approach does not require that the in-control variance be known a priori, it avoids the need for a lengthy Phase I data-gathering step before charting can begin. The method is a variance-change-point model, based on the likelihood ratio test for a change in variance with the conventional Bartlett correction, adapted for repeated sequential use. The chart may be used alone in settings where one wishes to monitor one-degree-of-freedom chi-squared variates for departure from control; or it may be used together with a parallel change-point methodology for the mean to monitor process data for shifts in mean and/or variance. In both the solo use and as the scale portion of a combined scheme for monitoring changes in mean and/or variance, the approach has good performance across the range of possible shifts.
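For reference, the two-sample building block is Bartlett's test for equality of variances: splitting the observations at a candidate change point into groups of sizes n1 and n2 with sample variances S1^2, S2^2 and pooled variance Sp^2 gives (standard form; the paper's sequential adaptation and control limits are not reproduced here)

    \[
      B \;=\;
      \frac{(n_1+n_2-2)\ln S_p^2 \;-\; (n_1-1)\ln S_1^2 \;-\; (n_2-1)\ln S_2^2}
           {1 + \dfrac{1}{3}\left(\dfrac{1}{n_1-1} + \dfrac{1}{n_2-1} - \dfrac{1}{n_1+n_2-2}\right)},
    \]

which is approximately chi-squared with one degree of freedom when the variance is constant. Roughly speaking, a change-point chart of this kind signals when the maximum of such statistics over candidate change points exceeds a suitable limit.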

131 citations


Journal Article
TL;DR: In this paper, a simple example that illustrates the key differences and similarities between the Fisherian, Neyman-Pearson and Bayesian approaches to testing is presented, and implications for more complex situations are also discussed.
Abstract: This article presents a simple example that illustrates the key differences and similarities between the Fisherian, Neyman-Pearson, and Bayesian approaches to testing. Implications for more complex situations are also discussed.
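A toy numerical comparison in the same spirit (not the article's example; the data, alpha = 0.05, and the N(0, tau^2) prior under H1 are all assumptions):

    # Testing H0: mu = 0 for a normal mean with known sigma, viewed three ways.
    import numpy as np
    from scipy import stats

    x = np.array([0.8, 1.2, 0.3, 1.5, 0.9])          # hypothetical data
    sigma, n, xbar = 1.0, len(x), x.mean()
    z = xbar / (sigma / np.sqrt(n))

    p_value = 2 * (1 - stats.norm.cdf(abs(z)))        # Fisher: evidence measured by the p-value
    reject = abs(z) > stats.norm.ppf(0.975)           # Neyman-Pearson: fixed alpha = 0.05 decision

    tau = 1.0                                         # assumed prior sd under H1: mu ~ N(0, tau^2)
    m0 = stats.norm.pdf(xbar, 0, sigma / np.sqrt(n))            # marginal of xbar under H0
    m1 = stats.norm.pdf(xbar, 0, np.sqrt(sigma**2 / n + tau**2))  # marginal of xbar under H1
    bayes_factor_01 = m0 / m1                         # Bayes: Bayes factor in favor of H0

    print(p_value, reject, bayes_factor_01)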

99 citations


Journal ArticleDOI
TL;DR: In this article, the EFQM Excellence Model is evaluated for decision-making on organizational improvement activities, and some methodological issues related to the use of the model are discussed, such as whether organizational excellence is appropriately reflected in the sub-criteria used to measure excellence.
Abstract: This paper assesses the usefulness of the EFQM Excellence Model for decision-making on organizational improvement activities. It does so by studying the procedures of the EFQM model in practice and, based on definitions of the object, the criteria for decision-making, the object goals, and the levers of the EFQM Excellence Model and their relationships, discusses some methodological issues related to the use of the model. These procedures are studied in order to analyse their appropriateness for identifying problematic situations and, based on that, for identifying problems. The paper concludes that the EFQM Excellence Model is appropriately structured to perform the first phase of the analysis, i.e. identification of a problematic situation, but that the model does not offer any specific guidelines for the second phase, i.e. problem identification. The model offers no structured approach for exploiting strengths or for classifying and prioritizing areas of improvement. From these methodological questions about the applicability of the EFQM model, two important conceptual issues arise. The first is whether organizational excellence is appropriately reflected in the sub-criteria used to measure excellence. The second concerns clarification of the relationship between decisions made on the basis of EFQM self-assessment results and other strategic, business, organizational, and similar decisions.

88 citations


Journal Article
TL;DR: In this paper, a modification of Wright's learning curve is presented for processes that generate defects that can be reworked, and a plausible explanation of the plateauing phenomenon is provided.
Abstract: The earliest learning-curve representation, that of Wright [J. Aeronaut. Sci. 3 (1936) 122], is a geometric progression that expresses the decreasing time required to accomplish a repetitive operation. The theory in its most popular form states that as the total quantity of units produced doubles, the time per unit declines by some constant percentage. Wright's model assumes that all units produced are of acceptable quality. However, in many practical situations there is a possibility that the process goes out of control, producing defective items that need to be reworked; the rework time per unit must then be accounted for when measuring the learning curve. This paper does so, and a modification of Wright's learning curve is presented for processes that generate defects that can be reworked. It is worth mentioning that the work presented herein has two limitations. First, it does not apply to cases where defects are discarded. Second, it assumes that the rate of generating defects is constant, meaning that the production process does not benefit from any changes aimed at eliminating defects. Analytical results show that the learning curve, under some conditions, can be of a convex form. Furthermore, the paper provides a plausible explanation of the plateauing phenomenon that has intrigued several researchers.
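For orientation, Wright's original curve and one plausible way a rework term enters (an illustrative form only; the paper's exact modified model is not reproduced here):

    \[
      T(x) \;=\; T_1\, x^{b}, \qquad b = \frac{\ln(\text{learning rate})}{\ln 2},
    \]

so an 80% curve has b = ln(0.8)/ln 2, roughly -0.322. If a constant fraction p of units requires rework taking R(x) per defective unit, the observed time per unit becomes roughly T_1 x^b + p R(x), and it is the shape of such a composite curve, including possible convexity and plateauing, that the paper analyses.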

Journal Article
TL;DR: A slightly more general model is proposed under which the observed response is strongly related but not equal to the unobservable true response, and the maximum estimated likelihood estimator is introduced for this model.
Abstract: The logistic regression model is commonly used to describe the effect of one or several explanatory variables on a binary response variable. It suffers from the problem that its parameters are not identifiable when there is separation in the space of the explanatory variables. In that case, existing fitting techniques fail to converge or give the wrong answer. To remedy this, a slightly more general model is proposed under which the observed response is strongly related but not equal to the unobservable true response. This model will be called the hidden logistic regression model because the unobservable true responses are comparable to a hidden layer in a feedforward neural net. The maximum estimated likelihood estimator is proposed in this model. It is robust against separation, always exists, and is easy to compute. Outlier-robust estimation is also studied in this setting, yielding the weighted maximum estimated likelihood estimator.
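A rough sketch of the underlying idea, namely keeping the responses strictly inside (0, 1) so that the likelihood always has a finite maximizer, is below. The choice delta = 0.01 and the exact form of the pseudo-responses are illustrative assumptions, not the estimator defined in the paper.

    import numpy as np
    from scipy.optimize import minimize

    def mel_like_logistic(X, y, delta=0.01):
        X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
        y_tilde = delta + (1 - 2 * delta) * y           # pseudo-responses pulled away from 0/1

        def neg_loglik(beta):
            p = 1.0 / (1.0 + np.exp(-(X1 @ beta)))
            p = np.clip(p, 1e-12, 1 - 1e-12)            # numerical safety
            return -np.sum(y_tilde * np.log(p) + (1 - y_tilde) * np.log(1 - p))

        # With separated data the ordinary MLE diverges; this optimum stays finite.
        return minimize(neg_loglik, np.zeros(X1.shape[1]), method="BFGS").x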

Journal Article
TL;DR: In this paper, an attempt to cover areas beyond Bayes's scientific work is made, including his family background and education, as well as his scientific and theological work, including the Bayes Theorem.
Abstract: Thomas Bayes, from whom Bayes Theorem takes its name, was probably born in 1701 so that the year 2001 would mark the 300 th anniversary of his birth. A sketch of his life will include his family background and education, as well as his scientific and theological work. In contras t to some, but not all, biographies of Bayes, the current biography is an attempt to cover areas beyond Bayes’s scientific work. When commenting on the writing of scientific biography, Pearson (1978) stated, “it is impossible to understand a man’s work unless you understand something of his character and unless you understand something of his environment. And his environment means the state of affairs social and political of his own age.” The intention here is to follow this general approach to biography. There is very little primary source material on Bayes and his work. For example, only three of his letters and a notebook containing some sketches of his own work, almost all unpublished, as well as notes on the work of others were known to have survived. Neither the letters, nor the notebook, are dated, and only one of the letters can be dated accurately from internal evidence. This biography will contain new information about Bayes. In particular, among the papers of the 2 nd Earl Stanhope, letters and papers of Bayes have been uncovered that were previously not known to exist. The letters indirectly confirm the centrality of Stanhope in Bayes’s election to the Royal Society. They also provide evidence that Bayes was part of a network of mathematicians initially centered on Stanhope. In addition, the letters shed light on Bayes’s work in infinite series.

Journal Article
TL;DR: In this article, a new confidence-interval approach for the difference of two binomial proportions is proposed and compared with existing approaches under several generally used criteria, and recommendations on which approach is applicable in different situations are given.
Abstract: This paper considers confidence intervals for the difference of two binomial proportions. Some currently used approaches are discussed and a new approach is proposed. These approaches are thoroughly compared under several generally used criteria. The widely used Wald confidence interval (CI) is far from satisfactory, while Newcombe's CI, the new recentered CI, and the score CI perform very well. Recommendations on which approach is applicable in different situations are given.
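For concreteness, two of the intervals being compared can be computed as follows (standard textbook forms of the Wald and Newcombe hybrid-score intervals; the paper's newly proposed recentered CI is not reproduced here):

    import numpy as np
    from scipy import stats

    def wilson(x, n, z):
        p = x / n
        center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
        half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
        return center - half, center + half

    def wald_diff(x1, n1, x2, n2, alpha=0.05):
        z = stats.norm.ppf(1 - alpha / 2)
        p1, p2 = x1 / n1, x2 / n2
        se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return p1 - p2 - z * se, p1 - p2 + z * se

    def newcombe_diff(x1, n1, x2, n2, alpha=0.05):
        z = stats.norm.ppf(1 - alpha / 2)
        p1, p2 = x1 / n1, x2 / n2
        l1, u1 = wilson(x1, n1, z)                 # Wilson score interval for p1
        l2, u2 = wilson(x2, n2, z)                 # Wilson score interval for p2
        d = p1 - p2
        return (d - np.sqrt((p1 - l1)**2 + (u2 - p2)**2),
                d + np.sqrt((u1 - p1)**2 + (p2 - l2)**2))

    print(wald_diff(8, 20, 3, 25), newcombe_diff(8, 20, 3, 25))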

Journal Article
TL;DR: In this article, the authors describe three of the best-known paradoxes (Simpson's Paradox, Kelley's Paradox, and Lord's Paradox) and illustrate them in a single data set.
Abstract: Interpreting group differences observed in aggregated data is a practice that must be done with enormous care. Often the truth underlying such data is quite different from what a naive first look would indicate. The confusions that can arise are so perplexing that some of the more frequently occurring ones have been dubbed paradoxes. In this paper we describe three of the best known of these paradoxes -- Simpson's Paradox, Kelley's Paradox, and Lord's Paradox -- and illustrate them in a single data set. The data set contains the score distributions, separated by race, on the biological sciences component of the Medical College Admission Test (MCAT) and Step 1 of the United States Medical Licensing Examination™ (USMLE). Our goal in examining these data was to move toward a greater understanding of race differences in admissions policies in medical schools. As we demonstrate, the path toward this goal is hindered by differences in the score distributions, which give rise to these three paradoxes. The ease with which we were able to illustrate all of these paradoxes within a single data set is indicative of how widespread they are likely to be in practice.
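A toy numerical illustration of the first of these paradoxes, with made-up numbers rather than the MCAT/USMLE data: group A has the higher success rate within each stratum yet the lower rate overall, because the groups are distributed differently across the strata.

    # (successes, attempts) in two strata for each group
    data = {"A": [(81, 87), (192, 263)],
            "B": [(234, 270), (55, 80)]}

    for g, strata in data.items():
        within = [p / n for p, n in strata]
        overall = sum(p for p, _ in strata) / sum(n for _, n in strata)
        print(g, [round(r, 2) for r in within], "overall:", round(overall, 2))
    # A: 0.93 and 0.73 within strata, 0.78 overall; B: 0.87 and 0.69 within, 0.83 overall.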

Journal Article
TL;DR: In this paper, the authors provide some examples of how bad data can arise, what kinds of bad data exist, how to detect and measure bad data, and how to improve the quality of data that have already been collected.
Abstract: As Huff's landmark book made clear, lying with statistics can be accomplished in many ways. Distorting graphics, manipulating data or using biased samples are just a few of the tried and true methods. Failing to use the correct statistical procedure or failing to check the conditions for when the selected method is appropriate can distort results as well, whether the motives of the analyst are honorable or not. Even when the statistical procedure and motives are correct, bad data can produce results that have no validity at all. This article provides some examples of how bad data can arise, what kinds of bad data exist, how to detect and measure bad data, and how to improve the quality of data that have already been collected.

Journal Article
TL;DR: In this article, the authors examined the global properties of the T 2 test when shift information is unavailable, and proposed a control chart with an intrinsic relationship with the residuals-based generalized likelihood ratio test (GLRT) procedure.
Abstract: Hotelling's T 2 chart is one of the most popular control charts for monitoring identically and independently distributed random vectors. Recently, Alwan and Alwan (1994) and Apley and Tsung (2002) studied the use of the T 2 chart for detecting mean shifts of a univariate autocorrelated process by transforming the univariate variables into multivariate vectors. This paper examines the global properties of the T 2 test when shift information is unavailable. When shift directions are known a priori, the efficiency of the T 2 test can be improved by a generalized likelihood ratio test (GLRT) taking into consideration the special shift patterns. The proposed control chart has an intrinsic relationship with the residuals-based GLRT procedure studied in Vander Wiel (1996) and Apley and Shi (1999). Retaining the T 2 chart's advantages of a wide range sensitivity for mean shift detection, the proposed control chart is shown to outperform the T 2 chart and the residual-based GLRT procedure when monitoring the mean of a univariate autocorrelated process. Performance enhancement and deterioration in the face of gradual shifts are also discussed.
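For reference, the statistic plotted by the chart discussed here is the usual Hotelling form (known-parameter version; the GLRT refinements of the paper are not reproduced):

    \[
      T^2_t \;=\; (\mathbf{x}_t - \boldsymbol{\mu}_0)^{\mathsf T}
                  \boldsymbol{\Sigma}^{-1}
                  (\mathbf{x}_t - \boldsymbol{\mu}_0),
    \]

compared with a chi-squared(p) control limit when the in-control mean and covariance of the p-dimensional vector are treated as known; for a univariate autocorrelated process, x_t is formed by stacking p consecutive observations.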

Journal ArticleDOI
TL;DR: The X-bar chart is frequently used to monitor the process mean quality level of a quality characteristic, as mentioned in this paper; however, the sample average is sensitive to outliers that may be due to the presence o
Abstract: [This abstract is based on the author's abstract.] The X-bar chart is frequently used to monitor the process mean quality level of a quality characteristic. However, the sample average, or X-bar, is sensitive to outliers that may be due to the presence o..
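For reference, the chart in question plots subgroup means against the familiar limits (standard form; the alternative developed in the article is not shown in this truncated abstract):

    \[
      \mathrm{UCL},\ \mathrm{LCL} \;=\; \mu_0 \pm 3\,\frac{\sigma}{\sqrt{n}},
    \]

and because the subgroup mean gives every observation equal weight, a single outlier can push X-bar outside these limits even when the process mean has not shifted.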

Journal ArticleDOI
TL;DR: In this paper, a new approach to selective assembly is proposed, where the dimensional distributions of the mating parts are not the same and will result in surplus parts, which will be used to improve the performance of selective assembly.
Abstract: Selective assembly is a method of obtaining high-precision assemblies from relatively low-precision components. However, the dimensional distributions of the mating parts are not the same and will result in surplus parts. A new approach to selective ass..
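The basic mechanism of selective assembly can be sketched in a few lines: sort mating components into corresponding dimensional bins and assemble within bins. The bin edges and distributions here are made up, and the paper's new partitioning approach is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    shafts = rng.normal(10.00, 0.010, 500)      # hypothetical shaft diameters
    holes  = rng.normal(10.02, 0.015, 500)      # hypothetical hole diameters, wider spread

    bins = np.linspace(9.97, 10.07, 6)          # 5 selective groups
    shaft_groups = np.digitize(shafts, bins)
    hole_groups  = np.digitize(holes, bins)

    # Unequal counts per group are exactly what produces surplus (unmatched) parts.
    for g in range(1, 6):
        print(g, np.sum(shaft_groups == g), np.sum(hole_groups == g))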

Journal ArticleDOI
TL;DR: In this article, the authors use a Bayesian approach for analyzing degradation data to assess reliability and demonstrate the advantages of using degradation data, illustrated with a laser degradation example.
Abstract: For highly reliable products, little information about reliability is provided by life tests of practical duration, in which few or no failures are typically observed. In this article, we illustrate the advantages of using degradation data to assess reliability. We use a Bayesian approach for analyzing the degradation data because of the advantages it offers: the uncertainty of reliability and lifetime-distribution quantile estimates can be determined in a straightforward manner, and the reliability at a specified time of a population of units with varying ages is easily calculated. We illustrate these advantages with a laser degradation example.
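A generic sketch of how reliability is read off a degradation model, assuming linear degradation paths with lognormally distributed rates and a hypothetical failure threshold; this is not the paper's laser model or its Bayesian machinery.

    import numpy as np

    rng = np.random.default_rng(1)
    Df = 10.0                                   # failure threshold for the degradation measure
    mu, sigma = np.log(0.002), 0.5              # assumed lognormal degradation-rate parameters

    def reliability(t, n_draws=100_000):
        rates = rng.lognormal(mu, sigma, n_draws)    # posterior-predictive-style rate draws
        return np.mean(rates * t < Df)               # P(degradation still below threshold at t)

    print([round(reliability(t), 3) for t in (1000, 3000, 5000)])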

Journal ArticleDOI
TL;DR: A new coordinate-exchange algorithm applicable for constrained mixture experiments implemented in JMP® was used to D-optimally select design points without candidate points.
Abstract: This article describes the solution to a unique and challenging mixture experiment design problem involving (1) 19 and 21 components for two different parts of the design, (2) many single-component and multicomponent constraints, (3) augmentation of existing data, (4) a layered design developed in stages, and (5) a no-candidate-point optimal design approach. The problem involved studying the liquidus temperature of spinel crystals as a function of nuclear waste glass composition. A D-optimal approach was used to augment existing glasses with new nonradioactive and radioactive glasses chosen to cover the designated nonradioactive and radioactive experimental regions. The traditional approach to building D-optimal mixture experiment designs is to generate a set of candidate points from which design points are D-optimally selected. The large number of mixture components (19 or 21) and many constraints defining each layer of the waste glass experimental region made it impossible to generate and store the huge...
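To show the candidate-free idea in its simplest form, here is a bare-bones coordinate exchange for a D-optimal main-effects design in two unconstrained factors; the constrained 19- and 21-component mixture setting of the article is far more involved and is not attempted here.

    import numpy as np

    def model_matrix(D):
        return np.column_stack([np.ones(len(D)), D])     # intercept plus main effects

    def coordinate_exchange(n_runs=6, n_factors=2, levels=np.linspace(-1, 1, 21),
                            n_passes=20, rng=None):
        rng = rng or np.random.default_rng(0)
        D = rng.uniform(-1, 1, size=(n_runs, n_factors))
        best = np.linalg.det(model_matrix(D).T @ model_matrix(D))
        for _ in range(n_passes):
            for i in range(n_runs):
                for j in range(n_factors):
                    for v in levels:                     # try each value for one coordinate
                        old = D[i, j]
                        D[i, j] = v
                        X = model_matrix(D)
                        d = np.linalg.det(X.T @ X)
                        if d > best:
                            best = d                     # keep the improving exchange
                        else:
                            D[i, j] = old                # otherwise revert
        return D, best

    design, det_value = coordinate_exchange()
    print(det_value)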

Journal ArticleDOI
TL;DR: Simple corrections are presented to address the fact that, for the usual sample sizes, estimating several parameters affects the performance of standard control charts; the corrections build on existing factors that are widely used for the traditional charts.
Abstract: When using standard control charts, typically several parameters need to be estimated. For the usual sample sizes, this is known to affect the performance of the chart. Here we present simple corrections to solve this problem. As a basis we use existing factors which are widely used for the traditional charts. The advantage of the new proposals is that a clear link is made to the actual performance characteristics of the chart.
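For context, these are corrections to limits of the familiar factor-based form, for example an X-bar chart with estimated center line and standard deviation (standard expression; the paper's correction terms themselves are not reproduced here):

    \[
      \widehat{\mathrm{UCL}},\ \widehat{\mathrm{LCL}}
      \;=\; \bar{\bar{x}} \;\pm\; 3\,\frac{\bar{s}}{c_4\sqrt{n}},
    \]

where c4 is the usual unbiasing constant for the sample standard deviation; the corrections adjust such factors so that the chart's performance with estimated parameters stays close to the intended nominal behaviour.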

Journal ArticleDOI
TL;DR: In this article, the authors proposed a strategy for the implementation of the Six Sigma method as an improvement solution for the ISO 9000:2000 Quality Standard, focusing on integrating the DMAIC cycle of the six Sigma method with the PDCA process approach.
Abstract: We propose a strategy for the implementation of the Six Sigma method as an improvement solution for the ISO 9000:2000 Quality Standard. Our approach is focused on integrating the DMAIC cycle of the Six Sigma method with the PDCA process approach, highly recommended by the standard ISO 9000:2000. The Six Sigma steps applied to each part of the PDCA cycle are presented in detail, along with some tools and training examples. Based on this analysis, the authors conclude that applying Six Sigma philosophy to the quality standard implementation process is the best way to achieve the optimal results in quality progress and therefore in customer satisfaction.

Journal Article
TL;DR: In this article, an integrated view of quality and knowledge using Nonaka's theory of knowledge creation is proposed to illuminate how quality practices can lead to knowledge creation and retention, and the knowledge perspective also provides insight into what it means to effectively deploy quality management practices.
Abstract: Several quality thought leaders have considered the role of knowledge in quality management practices. For example, Deming proposed The Deming System of Profound Knowledge™ that dealt explicitly with knowledge. However, various authors in the quality field diverge considerably when contemplating knowledge. We propose an integrated view of quality and knowledge using Nonaka's theory of knowledge creation. This integrated view helps illuminate how quality practices can lead to knowledge creation and retention. The knowledge perspective also provides insight into what it means to effectively deploy quality management practices. Previous empirical research noted the importance of effective deployment, but provided little insight into what effective deployment means. This research argues that quality management practices create knowledge, which leads to organizational performance. Taking a knowledge-based view (KBV) of the firm provides a deeper understanding of why some organizations are more successful at deploying quality management practices than others.

Journal ArticleDOI
TL;DR: In this paper, a non-contact measurement approach is proposed for characterizing manufactured surfaces using multiresolution wavelet decomposition, and the relationship between wavelet signatures and the surface roughness parameters Ra and Rq is established by response surface methodology.
Abstract: This paper describes a new non-contact measurement approach to characterizing manufactured surfaces. Computer vision is applied to capture digital images of three types of anisotropic steel specimen surfaces produced by shaping, grinding, and polishing. Multiresolution wavelet decomposition is used to obtain signatures of surface profiles from the digital images. Relationships between these signatures and the surface roughness parameters Ra and Rq are established by response surface methodology (RSM). The models thus developed are suitable for predicting roughness in terms of these parameters. Experimental results show that the proposed approach successfully correlates the wavelet signatures with Ra and Rq values and also demonstrates repeatable gage capability. The proposed method is a good candidate for on-line, real-time surface roughness inspection when specimens of known surface roughness are available.
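A small sketch of the signature-extraction step, with assumed choices of a 'db4' wavelet, three decomposition levels, and energy-based signatures; the paper's exact wavelet and RSM model are not reproduced.

    import numpy as np
    import pywt

    def wavelet_signatures(image, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
        # energy of the detail coefficients at each decomposition level
        return [sum(float(np.sum(d ** 2)) for d in details) for details in coeffs[1:]]

    def roughness(profile):
        z = profile - np.mean(profile)
        return np.mean(np.abs(z)), np.sqrt(np.mean(z ** 2))   # Ra, Rq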

Journal ArticleDOI
TL;DR: In this article, the authors discuss methods that could serve as an MSA for binary data and argue that a latent class model is the most promising candidate for binary quality analysis, which can guarantee the reliability of the acquired data and serve as the basis for drawing conclusions with respect to the behavior of the key quality characteristics.
Abstract: Many quality programs prescribe a measurement system analysis (MSA) to be performed on the key quality characteristics. This guarantees the reliability of the acquired data, which serve as the basis for drawing conclusions with respect to the behavior of the key quality characteristics. When dealing with continuous characteristics, the Gauge R&R is regarded as the statistical technique in MSA. For binary characteristics, no such universally accepted equivalent is available. We discuss methods that could serve as an MSA for binary data. We argue that a latent class model is the most promising candidate.
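A minimal sketch of a two-class latent class model for binary appraiser data, fitted by EM (generic textbook formulation with arbitrary starting values; not necessarily the exact model or fitting method advocated in the article):

    import numpy as np

    def latent_class_em(Y, n_iter=200, rng=None):
        # Y: (items x appraisers) matrix of 0/1 ratings
        rng = rng or np.random.default_rng(0)
        n, J = Y.shape
        pi = np.array([0.5, 0.5])                      # class prevalences
        p = rng.uniform(0.3, 0.7, size=(J, 2))         # P(rating = 1 | latent class)
        for _ in range(n_iter):
            # E-step: responsibility of each latent class for each item
            like = np.ones((n, 2))
            for c in range(2):
                like[:, c] = pi[c] * np.prod(p[:, c] ** Y * (1 - p[:, c]) ** (1 - Y), axis=1)
            r = like / like.sum(axis=1, keepdims=True)
            # M-step: update prevalences and conditional response probabilities
            pi = r.mean(axis=0)
            p = np.clip((Y.T @ r) / r.sum(axis=0), 1e-6, 1 - 1e-6)
        return pi, p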

Journal Article
TL;DR: A variant of Link-Tracing Sampling which avoids the ordinary assumption of an initial Bernoulli sample of members of the target population and traces only the links between the sampled sites and the nominees.
Abstract: We present a variant of Link-Tracing Sampling which avoids the ordinary assumption of an initial Bernoulli sample of members of the target population. Instead of that, we assume that a portion of the target population is covered by a sampling frame of accessible sites, such as households, street blocks, or block venues, and that a simple random sample of sites is selected from the frame. As in ordinary Link-Tracing sampling, the people in the initial sample are asked to nominate other members of the population, but in this case we trace only the links between the sampled sites and the nominees. Maximum likelihood estimators of the population size are presented, and estimators of their variances that incorporate the initial sampling design are suggested. The results of a simulation study carried out in this research indicate that our proposed design is effective provided that the nomination probabilities are not too small.

Journal Article
TL;DR: In this paper, two models are introduced that establish a conceptual framework linking firm profit to attributes of the IT-worker system, considering the impact of IT capabilities (such as functionality and ease-of-use) and worker skill.
Abstract: Information technology has profoundly impacted the operations of firms in the service industry and service environments within manufacturing. Two models are introduced that establish a conceptual framework linking firm profit to attributes of the IT-worker system. The framework considers the impact of IT capabilities (such as functionality and ease of use) and worker skill as drivers of output volume and quality. The framework contrasts attributes of IT-worker systems when services are mass-produced (flow shop) versus customized (job shop). Mathematical models are introduced to formalize the conceptual framework. Numerical examples are presented that illustrate the types of insights that can be obtained from the models.

Journal ArticleDOI
TL;DR: In this article, the effect of autocorrelation on statistical process control procedures is investigated.
Abstract: Quality Quandaries: The Effect of Autocorrelation on Statistical Process Control Procedures
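A quick simulation of the kind of effect the column discusses: positively autocorrelated data inflate the false-alarm rate of an individuals chart whose limits come from the average moving range. The AR(1) parameters below are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def false_alarm_rate(phi, n=100_000):
        e = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):                   # AR(1): x_t = phi * x_{t-1} + e_t
            x[t] = phi * x[t - 1] + e[t]
        mr_bar = np.mean(np.abs(np.diff(x)))    # average moving range
        sigma_hat = mr_bar / 1.128              # usual individuals-chart sigma estimate
        return np.mean(np.abs(x - x.mean()) > 3 * sigma_hat)

    print([round(false_alarm_rate(phi), 4) for phi in (0.0, 0.5, 0.9)])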

Journal Article
TL;DR: This paper presents an extension of the Bayesian theory that can be used to perform probabilistic inference with uncertain evidence, based on an idealized view of inference in which observations are used to rule out possible valuations of the variables in a modeling space.
Abstract: The application of formal inference procedures, such as Bayes Theorem, requires that a judgment be made by which the evidential meaning of physical observations is stated within the context of a formal model. Uncertain evidence is defined as the class of observations for which this statement cannot be made in certain terms. It is a significant class of evidence, since it cannot be treated using Bayes Theorem in its conventional form [G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ, 1976]. In this paper, we present an extension of the Bayesian theory that can be used to perform probabilistic inference with uncertain evidence. The extension is based on an idealized view of inference in which observations are used to rule out possible valuations of the variables in a modeling space, and it differs from earlier probabilistic approaches such as Jeffrey's rule of probability kinematics and Cheeseman's rule of distributed meaning. Two forms of evidential meaning representation are presented, for which non-probabilistic analogues are found in theories such as Evidence Theory and Possibility Theory. By viewing the statement of evidential meaning as a separate step in the inference process, a clear probabilistic interpretation can be given to these forms of representation, and a generalization of Bayes Theorem can be derived. This generalized rule of inference allows uncertain evidence to be incorporated into probabilistic inference procedures.
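For orientation, Jeffrey's rule mentioned above updates a probability model when the evidence only shifts the probabilities of a partition {B_i} rather than fixing one B_i with certainty (standard statement; the paper's generalization of Bayes Theorem differs from this):

    \[
      P_{\text{new}}(A) \;=\; \sum_i P(A \mid B_i)\, P_{\text{new}}(B_i),
    \]

which reduces to ordinary Bayesian conditioning when P_new(B_k) = 1 for a single k.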