Author

Kwok-Leung Tsui

Bio: Kwok-Leung Tsui is an academic researcher from Virginia Tech. The author has contributed to research in topics: control charts and statistical process control. The author has an h-index of 54 and has co-authored 379 publications receiving 12,245 citations. Previous affiliations of Kwok-Leung Tsui include the University of Hong Kong and the University of Wisconsin-Madison.


Papers
Journal ArticleDOI
TL;DR: This variation embodies the integration of the Response Surface Methodology (RSM) with the compromise Decision Support Problem (DSP) and is especially useful for design problems where there are no closed-form solutions and system performance is computationally expensive to evaluate.
Abstract: In this paper, we introduce a small variation to current approaches broadly called Taguchi Robust Design Methods. In these methods, there are two broad categories of problems associated with simultaneously minimizing performance variations and bringing the mean on target: Type I, minimizing variations in performance caused by variations in noise factors (uncontrollable parameters), and Type II, minimizing variations in performance caused by variations in control factors (design variables). The variation we introduce solves both types of problems, and it embodies the integration of the Response Surface Methodology (RSM) with the compromise Decision Support Problem (DSP). Our approach is especially useful for design problems where there are no closed-form solutions and system performance is computationally expensive to evaluate. The design of a solar-powered irrigation system is used as an example.

584 citations
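The Type I robust-design idea above can be sketched in a few lines: fit a response surface from a small designed experiment, then choose the control setting whose response is both close to target and insensitive to noise. Everything below (the system function, the target, the noise distribution) is a made-up illustration, not the paper's solar-irrigation example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expensive system: response depends on a control factor x
# (design variable) and a noise factor z (uncontrollable in the field).
def system(x, z):
    return 10 + 2 * x + 3 * z + 1.5 * x * z + 0.5 * x**2

# 1. Run a small designed experiment over (x, z) and fit a quadratic
#    response surface -- the RSM step.
X = np.array([(x, z) for x in (-1, 0, 1) for z in (-1, 0, 1)], float)
y = np.array([system(x, z) for x, z in X])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

def surface(x, z):
    z = np.asarray(z, float)
    return beta[0] + beta[1] * x + beta[2] * z + beta[3] * x * z + beta[4] * x**2

# 2. For candidate control settings, estimate target deviation and
#    variance of the response over the noise distribution, and pick the
#    compromise setting -- a crude stand-in for the compromise DSP.
target = 12.0
z_samples = rng.normal(0.0, 1.0, 2000)
best = min(
    (np.mean((surface(x, z_samples) - target) ** 2)
     + np.var(surface(x, z_samples)), x)
    for x in np.linspace(-1, 1, 41)
)
print("chosen control setting:", round(best[1], 2))
```

The cheap fitted surface stands in for the expensive simulation, which is exactly why the RSM step pays off when each true evaluation is costly.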

Journal ArticleDOI
TL;DR: In this article, unscented Kalman filtering (UKF) was applied to tune the model parameters at each sampling step to cope with various uncertainties arising from the operating environment, cell-to-cell variation, and modeling inaccuracy.

582 citations
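The core of the UKF parameter-tuning idea is the unscented transform: propagate a few deterministically chosen sigma points through the nonlinear measurement model instead of linearizing it. A minimal scalar sketch follows, with a hypothetical measurement function standing in for the paper's battery model.

```python
import numpy as np

# Minimal scalar unscented Kalman update: estimate one model parameter
# theta from noisy measurements y = h(theta) + v. Illustrative only;
# h() is NOT the battery model from the paper.
def h(theta):
    return theta**2 + 0.5 * theta

def ukf_step(mean, var, y_meas, q=1e-4, r=0.05, kappa=2.0):
    var = var + q                      # predict: parameter ~ constant
    s = np.sqrt((1 + kappa) * var)     # sigma-point spread (n = 1)
    sigmas = np.array([mean, mean + s, mean - s])
    w = np.array([kappa / (1 + kappa), 0.5 / (1 + kappa), 0.5 / (1 + kappa)])
    ys = h(sigmas)                     # propagate through nonlinearity
    y_pred = w @ ys
    p_yy = w @ (ys - y_pred) ** 2 + r  # innovation covariance
    p_xy = w @ ((sigmas - mean) * (ys - y_pred))
    k = p_xy / p_yy                    # Kalman gain
    return mean + k * (y_meas - y_pred), var - k * p_yy * k

rng = np.random.default_rng(1)
true_theta = 1.3
mean, var = 0.5, 1.0
for _ in range(200):
    y_meas = h(true_theta) + rng.normal(0, np.sqrt(0.05))
    mean, var = ukf_step(mean, var, y_meas)
print("estimated theta:", round(mean, 2))
```

Re-estimating the parameter at every sampling step is what lets the filter track cell-to-cell variation and slow drift in the operating environment.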

Journal ArticleDOI
TL;DR: In this article, a more general transformation approach is introduced for other commonly met kinds of dependence between σ y and μ y (including no dependence), and a lambda plot is presented that uses the data to suggest an appropriate transformation.
Abstract: For the analysis of designed experiments, Taguchi uses performance criteria that he calls signal-to-noise (SN) ratios. Three such criteria are here denoted by SN_T, SN_L, and SN_S. The criterion SN_T was to be used in preference to the standard deviation for the problem of achieving, for some quality characteristic y, the smallest mean squared error about an operating target value. Leon, Shoemaker, and Kacker (1987) showed how SN_T was appropriate to solve this problem only when σ_y was proportional to μ_y. On that assumption, the same result could be obtained more simply by conducting the analysis in terms of log y rather than y. A more general transformation approach is here introduced for other, commonly met kinds of dependence between σ_y and μ_y (including no dependence), and a lambda plot is presented that uses the data to suggest an appropriate transformation. The criteria SN_L and SN_S were for problems in which the objective was to make the response as large or as small as possible. It is arg...

495 citations
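The lambda-plot idea, letting the data suggest a variance-stabilizing power transformation, is closely related to the Box-Cox family, for which SciPy provides a maximum-likelihood estimate. A small sketch, using simulated data (the group means and noise level below are arbitrary) where σ_y is proportional to μ_y, the case in which λ ≈ 0 (i.e. analyze log y) is appropriate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate a response whose standard deviation is proportional to its
# mean -- the situation where SN_T (equivalently, analyzing log y) applies.
mu = np.repeat([5.0, 10.0, 20.0, 40.0], 50)
y = mu * np.exp(rng.normal(0.0, 0.2, mu.size))   # sigma_y ∝ mu_y, y > 0

# Maximum-likelihood Box-Cox lambda; lambda near 0 means "use log y".
_, lam = stats.boxcox(y)
print("estimated lambda:", round(lam, 2))
```

Sweeping λ over a grid and plotting the profile log-likelihood gives exactly the kind of lambda plot the abstract describes.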

Journal ArticleDOI
TL;DR: An ensemble model fuses an empirical exponential model and a polynomial regression model to track a battery's degradation trend over its cycle life, based on experimental data analysis; the limitations of the model are also discussed.

379 citations
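The fusion idea can be sketched as fitting both component models to capacity-versus-cycle data and weighting them by fit quality. The data, parameter values, and inverse-residual-variance weighting below are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Hypothetical capacity-fade data: normalized capacity vs. cycle number.
cycles = np.arange(1, 301)
capacity = (1.0 - 0.15 * (1 - np.exp(-cycles / 200))
            + rng.normal(0, 0.003, cycles.size))

# Model 1: empirical exponential a * exp(b * k) + c.
def expo(k, a, b, c):
    return a * np.exp(b * k) + c

p_exp, _ = curve_fit(expo, cycles, capacity, p0=(0.1, -0.005, 0.85))

# Model 2: low-order polynomial regression.
p_poly = np.polyfit(cycles, capacity, deg=2)

# Fuse: weight each model inversely to its in-sample residual variance.
r_exp = np.var(capacity - expo(cycles, *p_exp))
r_poly = np.var(capacity - np.polyval(p_poly, cycles))
w_exp = (1 / r_exp) / (1 / r_exp + 1 / r_poly)

def ensemble(k):
    return w_exp * expo(k, *p_exp) + (1 - w_exp) * np.polyval(p_poly, k)

print("predicted capacity at cycle 350:", round(float(ensemble(350)), 3))
```

Extrapolating the fused trend forward until it crosses an end-of-life threshold (e.g. 80% of rated capacity) is the usual route from such a model to a remaining-useful-life estimate.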

Journal ArticleDOI
TL;DR: A thorough review of vibration-based bearing and gear health indicators constructed from mechanical signal processing, modeling, and machine learning is presented and provides a basis for predicting the remaining useful life of bearings and gears.
Abstract: Prognostics and health management is an emerging discipline to scientifically manage the health condition of engineering systems and their critical components. It consists of three main aspects: construction of health indicators, remaining useful life prediction, and health management. Construction of health indicators aims to evaluate the current health condition of a system and its critical components. Given the observations of a health indicator, prediction of the remaining useful life is used to infer the time when an engineering system or a critical component will no longer perform its intended function. Health management involves planning the optimal maintenance schedule according to the current and future health condition of the system and its critical components, and the replacement costs. Construction of health indicators is the key to predicting the remaining useful life. Bearings and gears are the most common mechanical components in rotating machines, and their health conditions are of great concern in practice. Because it is difficult to measure and quantify the health conditions of bearings and gears in many cases, numerous vibration-based methods have been proposed to construct bearing and gear health indicators. This paper presents a thorough review of vibration-based bearing and gear health indicators constructed from mechanical signal processing, modeling, and machine learning. This review will be helpful for designing further advanced bearing and gear health indicators and provides a basis for predicting the remaining useful life of bearings and gears. Most of the bearing and gear health indicators reviewed in this paper are highly relevant to simulated and experimental run-to-failure data rather than artificially seeded bearing and gear fault data. Finally, some problems in the literature are highlighted and areas for future study are identified.

326 citations
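Two of the simplest signal-processing-based health indicators the review covers are RMS and kurtosis: RMS tracks overall vibration energy, while kurtosis is sensitive to the impulsive spikes produced by localized bearing or gear faults. A small sketch on a synthetic signal (the sampling rate, tone, and fault-impact pattern are invented for illustration):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)

fs = 10_000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
healthy = 0.5 * np.sin(2 * np.pi * 50 * t) + rng.normal(0, 0.05, t.size)

faulty = healthy.copy()
faulty[::fs // 20] += 2.0                     # periodic fault impacts

def health_indicators(x):
    return {"rms": float(np.sqrt(np.mean(x**2))),
            "kurtosis": float(kurtosis(x, fisher=False))}

hi_h = health_indicators(healthy)
hi_f = health_indicators(faulty)
print("healthy:", hi_h)
print("faulty :", hi_f)
```

Trending such indicators over run-to-failure data, rather than computing them once, is what turns them into inputs for remaining-useful-life prediction.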


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
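The mail-filtering example in the fourth category can be made concrete with a few lines of Naive Bayes over word counts. The tiny "mailbox" below is invented, and real filters use far richer features, but it shows how the filter is learned from examples rather than hand-written rules.

```python
import math
from collections import Counter

# Made-up training examples of the user's rejected (spam) and kept (ham) mail.
spam = ["win money now", "free money offer", "claim your free prize"]
ham = ["meeting moved to noon", "draft of the paper attached", "lunch at noon?"]

def train(docs):
    counts = Counter(w for d in docs for w in d.lower().split())
    return counts, sum(counts.values()), len(counts) + 1

def log_prob(msg, model):
    counts, total, v = model
    # Laplace smoothing so unseen words don't zero out the product.
    return sum(math.log((counts[w] + 1) / (total + v))
               for w in msg.lower().split())

spam_model, ham_model = train(spam), train(ham)

def is_spam(msg):
    return log_prob(msg, spam_model) > log_prob(msg, ham_model)

print(is_spam("free money"))        # classified as spam
print(is_spam("meeting at noon"))   # classified as ham
```

Retraining on each message the user subsequently rejects or keeps is what lets the filter stay personalized without a software engineer updating rules by hand.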

Journal ArticleDOI
TL;DR: This review of The Mahalanobis-Taguchi Strategy finds the MTS/MTGS methods intriguing but ad hoc, lacking underlying statistical theory, and argues that readers would be interested in a detailed comparison with other diagnostic tools, such as logistic regression and tree-based methods.
Abstract: Chapter 11 includes more case studies in other areas, ranging from manufacturing to marketing research. Chapter 12 concludes the book with some commentary about the scientific contributions of MTS. The Taguchi method for design of experiments has generated considerable controversy in the statistical community over the past few decades, and the MTS/MTGS method seems set to become another source of discussion on the methodology it advocates (Montgomery 2003). As pointed out by Woodall et al. (2003), the MTS/MTGS methods are considered ad hoc in the sense that they have not been developed using any underlying statistical theory. Because the "normal" and "abnormal" groups form the basis of the theory, some sampling restrictions are fundamental to the applications. First, it is essential that the "normal" sample be uniform, unbiased, and/or complete so that a reliable measurement scale is obtained. Second, the selection of "abnormal" samples is crucial to the success of dimensionality reduction when OAs are used. For example, if each abnormal item is really unique in the medical example, then it is unclear how the statistical distance MD can be guaranteed to give a consistent diagnostic measure of severity on a continuous scale when the larger-the-better type S/N ratio is used. Multivariate diagnosis is not new to Technometrics readers and is now becoming increasingly popular in statistical analysis and data mining for knowledge discovery. As a promising alternative that assumes no underlying data model, The Mahalanobis-Taguchi Strategy does not provide sufficient evidence of gains achieved by using the proposed method over existing tools. Readers may be very interested in a detailed comparison with other diagnostic tools, such as logistic regression and tree-based methods. Overall, although the idea of MTS/MTGS is intriguing, this book would be more valuable had it been written in a rigorous fashion as a technical reference. There is some lack of precision even in several mathematical notations. Perhaps a follow-up with additional theoretical justification and careful case studies would answer some of the lingering questions.

11,507 citations
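The statistical distance MD at the heart of the review is the classical Mahalanobis distance: the "normal" group's mean and covariance define the measurement scale, and new items are scored against it. A minimal sketch with simulated data (the 2-D correlated normal group below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated "normal" group: correlated 2-D measurements.
normal = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 500)
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis(x):
    d = np.asarray(x, float) - mu
    return float(np.sqrt(d @ cov_inv @ d))

# A point that violates the group's correlation structure scores much
# higher than one at the same Euclidean distance that respects it.
print("MD of (2, -2):", round(mahalanobis([2, -2]), 2))
print("MD of (2,  2):", round(mahalanobis([2, 2]), 2))
```

This correlation-awareness is exactly why the review stresses that the "normal" sample must be uniform and unbiased: a poor normal sample distorts the covariance and hence the whole measurement scale.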

Christopher M. Bishop
01 Jan 2006
TL;DR: A textbook treatment of pattern recognition and machine learning, covering probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, latent variables, sequential data, and combining models.
Abstract: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal ArticleDOI

6,278 citations