
Showing papers by "Ron S. Kenett" published in 2008


Journal ArticleDOI
TL;DR: Quality by Design (QbD) is a systematic approach to product development and process control that begins with predefined objectives, emphasizes product and process understanding, and sets up process control based on sound science and quality risk management as mentioned in this paper.
Abstract: A process is well understood when all critical sources of variability are identified and explained, variability is managed by the process design and monitoring, and product quality attributes are accurately and reliably predicted over the design space. Quality by Design (QbD) is a systematic approach to product development and process control that begins with predefined objectives, emphasizes product and process understanding, and sets up process control based on sound science and quality risk management. The Food and Drug Administration (FDA) and the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) have recently started promoting QbD in an attempt to curb rising development costs and regulatory barriers to innovation and creativity. QbD is partially based on the application of multivariate statistical methods and a statistical Design of Experiments strategy to the development of both analytical methods and pharmaceutical formulations. In this paper, we review the basics of QbD and their impact on the innovative, generic, and biosimilar pharmaceutical industry. In particular, we consider the challenge of mapping the control space in biotechnological processes and how advances in statistical methods can contribute to QbD.

43 citations
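
The abstract above names a statistical Design of Experiments strategy as one building block of QbD. As an illustration only, here is a minimal Python sketch of a 2^3 full factorial design with a main-effects fit; the factor names (temperature, pH, feed rate) and the simulated response are hypothetical and are not taken from the paper.

```python
# Minimal sketch: a 2^3 full factorial design with a main-effects fit,
# illustrating the kind of DoE strategy the abstract associates with QbD.
# Factor names and the simulated response are hypothetical.
import itertools
import numpy as np

factors = ["temperature", "pH", "feed_rate"]   # assumed critical process parameters
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))  # coded levels

rng = np.random.default_rng(0)
# Simulated response: yield driven mainly by temperature and pH (illustration only)
y = 70 + 5 * design[:, 0] + 3 * design[:, 1] + rng.normal(0, 1, len(design))

X = np.column_stack([np.ones(len(design)), design])   # intercept + main effects
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["intercept"] + factors, coef):
    print(f"{name:12s} effect estimate: {b:6.2f}")
```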


Book ChapterDOI
02 Jan 2008
TL;DR: This chapter presents a method for usability diagnosis of webpages that is based on time analysis of clickstream data and on the integration of stochastic Bayesian and Markov models with models for estimating and analyzing visitors’ mental activities during their interaction with a website.
Abstract: This chapter presents a method for usability diagnosis of webpages based on time analysis of clickstream data. The resulting diagnostic reports enable website managers to learn about possible usability barriers. Different website design deficiencies are associated with different patterns of exceptional navigation. This chapter presents a method based on the integration of stochastic Bayesian and Markov models with models for estimating and analyzing visitors’ mental activities during their interaction with a website. Based on this approach, a seven-layer model for data analysis is proposed and an example of a log analyzer that implements this model is presented. The chapter describes state-of-the-art techniques and tools implementing these methods and maps areas for future research. We begin with some definitions and …

22 citations
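
As a companion to the entry above, the following is a minimal Python sketch of one ingredient of such an analysis: estimating a first-order Markov transition matrix from clickstream sessions and flagging pages with unusually long dwell times. It is not the authors' seven-layer model; the sessions, page names, and threshold are invented for illustration.

```python
# Minimal sketch (not the authors' seven-layer model): estimate a first-order
# Markov transition matrix from clickstream sessions and flag pages whose dwell
# times look exceptional. Sessions, page names, and the threshold are hypothetical.
from collections import defaultdict
import numpy as np

sessions = [  # each session: list of (page, seconds spent on page)
    [("home", 5), ("search", 40), ("product", 12), ("checkout", 30)],
    [("home", 4), ("search", 95), ("product", 15)],
    [("home", 6), ("product", 10), ("checkout", 25)],
]

counts = defaultdict(lambda: defaultdict(int))
dwell = defaultdict(list)
for s in sessions:
    for (page, _), (nxt, _) in zip(s, s[1:]):
        counts[page][nxt] += 1                # observed page-to-page transitions
    for page, t in s:
        dwell[page].append(t)                 # dwell times per page

# Row-normalised transition probabilities
for page, nxts in counts.items():
    total = sum(nxts.values())
    probs = {nxt: round(c / total, 2) for nxt, c in nxts.items()}
    print(page, "->", probs)

# Flag pages whose mean dwell time exceeds an (assumed) threshold,
# as a crude proxy for exceptional navigation patterns
THRESHOLD = 60  # seconds, illustrative
for page, times in dwell.items():
    if np.mean(times) > THRESHOLD:
        print(f"possible usability barrier on '{page}': mean dwell {np.mean(times):.0f}s")
```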


Journal Article
TL;DR: The strength of Relative Linkage Disequilibrium is demonstrated by applying it to two large data sets: 2008 aircraft accident and incident occurrences recorded in the FAA data base, and operational risks captured by a large financial institution operating under Basel II regulations.
Abstract: Association rules are one of the most popular unsupervised data mining methods. Once obtained, the association rules extractable from a given dataset are compared in order to evaluate their importance level. The measures commonly used to assess the strength of an association rule are the indexes of support, confidence, and lift. Relative Linkage Disequilibrium (RLD) was proposed as an approach to analyse association rules both quantitatively and graphically. RLD can be considered an adaptation of the lift measure with the advantage that it presents more effectively the deviation of the support of the whole rule from the support expected under independence. Moreover, RLD can be interpreted graphically using a simplex representation, leading to a powerful graphical display of association relationships. In this paper we demonstrate the strength of RLD by applying it to two large data sets. One data set consists of 2008 aircraft accident and incident occurrences recorded in the FAA data base. The other data set consists of operational risks captured by a large financial institution operating under Basel II regulations.

20 citations
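
For readers unfamiliar with the standard indexes mentioned in the abstract, here is a minimal Python sketch that computes support, confidence, and lift for a single rule A → B over a toy transaction set; the items and transactions are hypothetical and unrelated to the FAA or Basel II data.

```python
# Minimal sketch: support, confidence, and lift for a single rule A -> B,
# the three standard indexes named in the abstract. The toy transactions
# and item names are hypothetical.
transactions = [
    {"engine_warning", "hard_landing"},
    {"engine_warning"},
    {"hard_landing"},
    {"engine_warning", "hard_landing"},
    {"bird_strike"},
]

A, B = {"engine_warning"}, {"hard_landing"}
n = len(transactions)

supp_A  = sum(A <= t for t in transactions) / n          # support of the LHS
supp_B  = sum(B <= t for t in transactions) / n          # support of the RHS
supp_AB = sum((A | B) <= t for t in transactions) / n    # support of the whole rule

confidence = supp_AB / supp_A
lift       = supp_AB / (supp_A * supp_B)

print(f"support={supp_AB:.2f}  confidence={confidence:.2f}  lift={lift:.2f}")
```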


Journal ArticleDOI
TL;DR: This paper focuses on the application of multivariate methods in comparing risk profiles and readiness assessments at various stages of an ESI project and relies on ESI theory developed by the Better Enterprise SysTem project and more conventional risk management methodology.
Abstract: This work is a first step towards the application of multivariate methods in risk management and change management in Enterprise System Implementation (ESI). ESI is characterised by concentrated efforts to integrate an IT system. Such projects typically experience unplanned problems and events, which may lead to major restructuring of the process. In this work, we rely on ESI theory developed by the Better Enterprise SysTem (BEST) project and more conventional risk management methodology. Both change management and risk management consist of data collection and data analysis. We will show that the data structures of both efforts are similar so that similar data analysis techniques are applicable. In fact one can consider change management as a special case of risk management. This paper focuses on the application of multivariate methods in comparing risk profiles and readiness assessments at various stages of an ESI project. The techniques are correspondence analysis and partial order mapping, which help to characterise and compare ESI readiness across different parts of a company and compare risk profiles of different ESI components.

18 citations
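
Of the two techniques named above, correspondence analysis can be sketched compactly. The following Python example performs correspondence analysis from scratch via an SVD of the standardized residuals of a contingency table; the table of risk ratings by assessment stage is invented for illustration, and partial order mapping is not shown.

```python
# Minimal sketch of correspondence analysis from scratch (SVD of standardised
# residuals), one of the two techniques named in the abstract; partial order
# mapping is not shown. The contingency table of risk ratings is hypothetical.
import numpy as np

# rows: assumed ESI risk categories, columns: assessment stages (illustrative counts)
N = np.array([[20,  5,  2],
              [10, 15,  6],
              [ 4, 12, 18]], dtype=float)

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses
S = np.diag(r**-0.5) @ (P - np.outer(r, c)) @ np.diag(c**-0.5)

U, sing, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = np.diag(r**-0.5) @ U * sing      # principal coordinates of rows
col_coords = np.diag(c**-0.5) @ Vt.T * sing   # principal coordinates of columns

inertia = sing**2
print("share of inertia per dimension:", np.round(inertia / inertia.sum(), 3))
print("row coordinates (first 2 dims):\n", np.round(row_coords[:, :2], 3))
print("column coordinates (first 2 dims):\n", np.round(col_coords[:, :2], 3))
```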


Book ChapterDOI
03 Mar 2008
TL;DR: In this paper, the authors demonstrate the impact of statistical methods on process and product improvements and the competitive position of organizations and describe a systematic approach to the evaluation of benefits from process improvement and quality by design that can be implemented within and across organizations.
Abstract: Modern industrial organizations in manufacturing and services are subject to increasing competitive pressures and rising customer expectations. Management teams on all five continents are striving to satisfy and delight their customers while simultaneously improving efficiencies and cutting costs. In tackling this complex management challenge, an increasing number of organizations have shown that the apparent conflict between high productivity and high quality can be resolved through improvements in work processes and quality of design. In this chapter we attempt to demonstrate the impact of statistical methods on process and product improvements and the competitive position of organizations. We describe a systematic approach to the evaluation of benefits from process improvement and quality by design (QbD) that can be implemented within and across organizations. We then formulate and validate the statistical efficiency conjecture that links management maturity with the impact level of problem solving and improvements driven by statistical methods. The different approaches to the management of industrial organizations can be summarized and classified using a four-step quality ladder (Kenett and Zacks, 1998). The four approaches are: (1) fire fighting; (2) inspection; (3) process control; and (4) strategic management. To each management approach there corresponds …

17 citations


Book ChapterDOI
16 Jul 2008
TL;DR: Relative Linkage Disequilibrium (RLD), as discussed by the authors, is an approach to analyse general two-way contingency tables both quantitatively and graphically, and it can be interpreted using a simplex representation leading to a powerful graphical display of association relationships.
Abstract: Association rules are one of the most popular unsupervised data mining methods. Once obtained, the association rules extractable from a given dataset are compared in order to evaluate their importance level. The measures commonly used to assess the strength of an association rule are the indexes of support, confidence, and lift. Relative Linkage Disequilibrium (RLD) was originally proposed as an approach to analyse general two-way contingency tables both quantitatively and graphically. RLD can be considered an adaptation of the lift measure with the advantage that it presents more effectively the deviation of the support of the whole rule from the support expected under independence, given the supports of the LHS (A) and the RHS (B). RLD can be interpreted graphically using a simplex representation, leading to a powerful graphical display of association relationships. Moreover, the statistical properties of RLD are known, so that confirmatory statistical tests of significance or basic confidence intervals can be applied. This paper presents the properties of RLD in the context of association rules and provides several application examples to demonstrate its practical advantages.

12 citations
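
The following Python sketch computes the quantity underlying RLD: the deviation D of the rule's support from the support expected under independence, given the supports of the LHS (A) and the RHS (B). The normalisation of D by its maximum attainable value uses a conventional D′-style bound as an assumption; the exact Dmax used in Kenett's RLD is defined in the paper, and the cell counts here are hypothetical.

```python
# Minimal sketch of the quantity underlying RLD: the deviation D of the rule's
# support from the support expected under independence, computed from a 2x2
# table of the LHS (A) and RHS (B). The normalisation below uses a conventional
# D'-style bound as an assumption; the exact Dmax in Kenett's RLD is defined in
# the paper. Cell counts are hypothetical.
import numpy as np

# rows: A present / absent, columns: B present / absent (illustrative counts)
table = np.array([[40, 10],
                  [20, 30]], dtype=float)
p = table / table.sum()

pA, pB = p[0].sum(), p[:, 0].sum()      # marginal supports of A and B
D = p[0, 0] - pA * pB                   # deviation from independence

# Assumed normalisation: largest |D| attainable given the margins
if D >= 0:
    D_max = min(pA * (1 - pB), (1 - pA) * pB)
else:
    D_max = min(pA * pB, (1 - pA) * (1 - pB))

rld_like = D / D_max if D_max > 0 else 0.0
lift = p[0, 0] / (pA * pB)
print(f"D={D:.3f}  normalised={rld_like:.3f}  lift={lift:.3f}")
```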



Proceedings ArticleDOI
28 Jul 2008
TL;DR: This paper motivates the role and contributions of DSUID and demonstrates its implementation for usability diagnosis of Web pages based on time analysis of clickstream data.
Abstract: This paper presents a methodology for setting up a decision support system for user interface design (DSUID). We first motivate the role and contributions of DSUID and then demonstrate its implementation in the case of usability diagnosis of Web pages, based on time analysis of clickstream data. The resulting DSUID diagnostic reports enable website managers to learn about possible sources of usability barriers. The proposed DSUID analytic method is based on the integration of stochastic Bayesian and Markov models with models for estimating and analyzing the visitors' mental activities during their interaction with a Website. Based on this approach, a seven-layer model for data analysis is suggested and an example of a log analyzer that implements this model is presented. We demonstrate the approach with an example of a Bayesian network applied to clickstream data and conclude with general observations on the generic role of DSUID and the implementation framework we propose.

10 citations
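
As one ingredient of the kind of Bayesian reasoning described above, here is a minimal Beta-Binomial sketch in Python: updating the probability that a visit to a given page shows exceptional navigation (for example, an unusually long dwell time). It is not the authors' Bayesian network; the prior, counts, and decision threshold are assumptions.

```python
# Minimal sketch (not the authors' Bayesian network): a Beta-Binomial update of
# the probability that a visit to a given page shows "exceptional" navigation,
# e.g. an unusually long dwell time. Prior parameters and counts are hypothetical.
from scipy import stats

alpha0, beta0 = 1, 9            # assumed prior: exceptional visits are rare (~10%)
visits, exceptional = 200, 46   # observed clickstream summary for one page (illustrative)

alpha1, beta1 = alpha0 + exceptional, beta0 + visits - exceptional
posterior = stats.beta(alpha1, beta1)
lo, hi = posterior.interval(0.95)

print(f"posterior mean rate of exceptional visits: {posterior.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
# A DSUID-style rule might flag the page when the lower credible bound exceeds
# an assumed baseline rate, e.g. 0.15.
print("flag page for review:", lo > 0.15)
```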