An inseparability metric to identify a small number of key variables for improved process monitoring
01 Aug 2013, pp. 740-745
TL;DR: A metric is proposed to identify the key variables of a fault, and its ability to identify the right key variables is demonstrated through the benchmark Tennessee Eastman Challenge problem.
Abstract: In a large-scale complex chemical process, hundreds of variables are measured. Since statistical process monitoring techniques such as PCA typically involve dimensionality reduction, all measured variables are often provided as input without pre-selection. In our previous work [1], we demonstrated that reduced models based on only a small number of important variables, called key variables, which carry useful information about a fault, can significantly improve monitoring performance. This set of key variables is fault-specific. In this paper, we propose a metric to identify the key variables of a fault. The metric measures the extent of inseparability in the subspace of a variable subset and thus provides a reasonable estimate of the monitoring performance achievable with that subset. The excellent ability of the proposed metric to identify the right key variables is demonstrated on the benchmark Tennessee Eastman Challenge problem.
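The paper's metric itself is not reproduced here. As a rough, hypothetical sketch of the idea, the following scores a candidate variable subset by the fraction of faulty samples that remain inside the normal-operation Hotelling T^2 control limit when monitoring is restricted to that subset; a lower score means the fault is more separable in that subspace. The function name, the T^2 formulation, and the chi-square limit are illustrative assumptions, not the authors' definition.

```python
import numpy as np
from scipy.stats import chi2

def inseparability_score(normal, faulty, subset, alpha=0.99):
    """Fraction of faulty samples inside the normal-operation T^2 limit
    when only the variables in `subset` are monitored (lower is better).
    Illustrative proxy only; not the metric defined in the paper."""
    Xn, Xf = normal[:, subset], faulty[:, subset]
    mu = Xn.mean(axis=0)
    S_inv = np.linalg.pinv(np.atleast_2d(np.cov(Xn, rowvar=False)))
    limit = chi2.ppf(alpha, df=len(subset))     # approximate T^2 control limit
    d = Xf - mu
    t2 = np.einsum('ij,jk,ik->i', d, S_inv, d)  # Hotelling T^2 per sample
    return float(np.mean(t2 <= limit))          # estimated missed-detection rate
```

Ranking small candidate subsets (for example, all pairs of variables) by this score and keeping the lowest-scoring ones mimics the fault-specific key-variable selection the abstract describes.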
References
TL;DR: In this article, a model of an industrial chemical process is presented for the purpose of developing, studying, and evaluating process control technology; it is well suited to a wide variety of studies, including both plant-wide control and multivariable control problems.
Abstract: This paper describes a model of an industrial chemical process for the purpose of developing, studying and evaluating process control technology. This process is well suited for a wide variety of studies including both plant-wide control and multivariable control problems. It consists of a reactor/separator/recycle arrangement involving two simultaneous gas-liquid exothermic reactions of the following form: A(g) + C(g) + D(g) → G(liq), Product 1; A(g) + C(g) + E(g) → H(liq), Product 2. Two additional byproduct reactions also occur. The process has 12 valves available for manipulation and 41 measurements available for monitoring or control. The process equipment, operating objectives, process control objectives and process disturbances are described. A set of FORTRAN subroutines which simulate the process are available upon request. The chemical process model presented here is a challenging problem for a wide variety of process control technology studies. Even though this process has only a few unit operations, it is much more complex than it appears on first examination. We hope that this problem will be useful in the development of the process control field. We are also interested in hearing about applications of the problem.
2,603 citations
TL;DR: It is shown that the proposed algorithm outperformed PCA and DPCA in terms of both fault detection and fault diagnosis.
Abstract: In this paper, a new approach for fault detection and diagnosis based on One-Class Support Vector Machines (1-class SVM) has been proposed. The approach is based on a non-linear distance metric measured in a feature space. Just as in principal components analysis (PCA) and dynamic principal components analysis (DPCA), appropriate distance metrics and thresholds have been developed for fault detection. Fault diagnosis is then carried out using the SVM recursive feature elimination (SVM-RFE) feature selection method. The efficacy of this method is demonstrated by applying it to the benchmark Tennessee Eastman problem and to an industrial real-time semiconductor etch process dataset. The algorithm has been compared with conventional techniques such as PCA and DPCA in terms of performance measures such as false alarm rates, detection latency and fault detection rates. It is shown that the proposed algorithm outperformed PCA and DPCA in terms of both detection and diagnosis of faults.
325 citations
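As a hedged sketch of the approach described above, assuming scikit-learn and illustrative settings (RBF kernel, nu=0.05) rather than the authors' configuration: detection trains a one-class SVM on normal data only, and diagnosis ranks variables with a hand-rolled SVM-RFE loop using a linear SVM trained on normal versus faulty samples.

```python
import numpy as np
from sklearn.svm import OneClassSVM, LinearSVC

def detect_faults(normal, test, nu=0.05):
    """Flag test samples outside the boundary learned from normal data."""
    clf = OneClassSVM(kernel='rbf', gamma='scale', nu=nu).fit(normal)
    return clf.decision_function(test) < 0      # True where a fault is flagged

def svm_rfe_ranking(normal, faulty):
    """SVM-RFE: repeatedly drop the variable with the smallest squared
    weight in a linear SVM separating normal from faulty samples."""
    X = np.vstack([normal, faulty])
    y = np.r_[np.zeros(len(normal)), np.ones(len(faulty))]
    remaining = list(range(X.shape[1]))
    eliminated = []
    while remaining:
        w = LinearSVC(dual=False).fit(X[:, remaining], y).coef_.ravel()
        eliminated.append(remaining.pop(int(np.argmin(w ** 2))))
    return eliminated[::-1]                     # most fault-relevant first
```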
TL;DR: A novel procedure to find cost-optimal sensor networks is proposed, in which cost is minimized subject to qualifying constraints related to certain requirements of data reconciliation.
Abstract: A novel procedure to find cost-optimal sensor networks is proposed. Cost is minimized subject to qualifying constraints that are related to certain requirements of data reconciliation. One basic qualifying constraint is a desired level of precision of reconciled values for a selected set of variables. Since precision requirements lead to multiple solutions, other qualifying constraints are proposed. These constraints are availability, resilience, and error detectability. Definitions for these terms are given and their impact on the results is presented.
129 citations
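The paper's selection procedure is not reproduced here. The brute-force sketch below states the optimization problem only: find the lowest-cost sensor subset satisfying the qualifying constraints, with `feasible` standing in as a placeholder for the precision, availability, resilience, and error-detectability checks the abstract lists.

```python
from itertools import combinations

def cheapest_feasible_network(costs, feasible):
    """Enumerate subsets (viable for small networks) and return the
    cheapest sensor set passing the qualifying-constraint check."""
    n = len(costs)
    best, best_cost = None, float('inf')
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            c = sum(costs[i] for i in subset)
            if c < best_cost and feasible(subset):
                best, best_cost = subset, c
    return best, best_cost
```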
TL;DR: In this paper, the effects of adding and removing single measurements on estimation accuracy are derived for a steady-state process, and evolutionary strategies are developed for selecting an optimal measurement structure.
Abstract: For a steady-state process, the accuracy of reconciled data may be measured by the trace of its covariance matrix of estimation errors. Quantitative relations are derived for the effects of adding and removing single measurements on estimation accuracy. It is proved that redundancy will never adversely affect estimation accuracy, and will always enhance it if the measurements relate the process variables in a different way from the constraints. These relations are utilized to develop evolutionary strategies for selecting an optimal measurement structure.
76 citations
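A minimal numerical illustration of the accuracy measure, assuming the standard linear data-reconciliation formulas for constraints Ax = 0 with measurement error covariance Sigma (the evolutionary strategies themselves are not reproduced): imposing a redundant flow balance reduces the trace of the estimation-error covariance, consistent with the claim that redundancy never hurts.

```python
import numpy as np

# Flow balance x1 = x2 + x3 written as A x = 0; all three flows measured
# with independent error variances on the diagonal of Sigma.
A = np.array([[1.0, -1.0, -1.0]])
Sigma = np.diag([1.0, 0.5, 0.8])

# Reconciled estimate x_hat = y - Sigma A' (A Sigma A')^{-1} A y has
# estimation-error covariance Sigma - Sigma A' (A Sigma A')^{-1} A Sigma.
G = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)
Cov_hat = Sigma - G @ A @ Sigma

# Trace drops from 2.3 (raw measurements) to about 1.48 (reconciled).
print(np.trace(Sigma), np.trace(Cov_hat))
```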