Other affiliations: Research Institute for Advanced Computer Science, University of Maryland, College Park
Bio: Sankalita Saha is an academic researcher from Ames Research Center. The author has contributed to research on the topics of prognostics and dataflow, has an h-index of 20, and has co-authored 44 publications receiving 1840 citations. Previous affiliations of Sankalita Saha include the Research Institute for Advanced Computer Science and the University of Maryland, College Park.
12 Dec 2008
TL;DR: Surveys the metrics already used for prognostics in a variety of domains, including medicine, nuclear, automotive, aerospace, and electronics, and analyzes the differences and similarities between these domains and health maintenance to clarify which performance evaluation methods may or may not be borrowed.
Abstract: Prognostics is an emerging concept in condition based maintenance (CBM) of critical systems. Along with developing the fundamentals of being able to confidently predict Remaining Useful Life (RUL), the technology calls for fielded applications as it inches towards maturation. This requires stringent performance evaluation so that the significance of the concept can be fully exploited. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is due in part to varied end-user requirements for different applications, time scales, available information, and domain dynamics, among other issues. Instead, the research community has used a variety of metrics chosen largely for convenience with respect to their respective requirements. Very little attention has been paid to establishing a common ground for comparing different efforts. This paper surveys the metrics already used for prognostics in a variety of domains, including medicine, nuclear, automotive, aerospace, and electronics. It also considers other domains that involve prediction-related tasks, such as weather and finance. Differences and similarities between these domains and health maintenance are analyzed to help understand which performance evaluation methods may or may not be borrowed. Further, these metrics are categorized in several ways that may be useful in deciding on a suitable subset for a specific application. Some important prognostic concepts are defined using a notational framework that enables coherent interpretation of different metrics. Last but not least, a list of metrics is suggested to assess critical aspects of RUL predictions before they are fielded in real applications.
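A minimal sketch of the kind of RUL-prediction metrics this survey discusses: the signed prediction error and an asymmetric timeliness score that penalizes late predictions (overestimated RUL) more steeply than early ones, in the style of PHM data-competition scoring. The function names and weighting constants are illustrative assumptions, not definitions taken from the paper.

```python
import math

def rul_errors(true_rul, predicted_rul):
    """Signed prediction errors; positive means RUL was overestimated (late)."""
    return [p - t for t, p in zip(true_rul, predicted_rul)]

def timeliness_score(errors, a_early=13.0, a_late=10.0):
    """Asymmetric exponential penalty: late predictions (error > 0) are
    riskier, so a_late < a_early makes their penalty grow faster."""
    total = 0.0
    for e in errors:
        if e < 0:
            total += math.exp(-e / a_early) - 1.0   # early prediction
        else:
            total += math.exp(e / a_late) - 1.0     # late prediction
    return total
```

Lower scores are better; a perfect prediction contributes zero penalty.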
22 Mar 2021
TL;DR: This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics.
Abstract: Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is due in part to varied end-user requirements for different applications, time scales, available information, and domain dynamics, among other factors. The research community has used a variety of metrics chosen largely for convenience and their respective requirements. Very little attention has been paid to establishing a standardized approach for comparing different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion of how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment, they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications, and guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.
27 Sep 2009
TL;DR: Presents a detailed discussion of how these metrics should be interpreted and used; shortcomings identified while applying them to a variety of real applications are also summarized, along with discussions that attempt to alleviate these problems.
Abstract: Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper is in continuation of previous efforts where several new evaluation metrics tailored for prognostics were introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified, while applying these metrics to a variety of real applications, are also summarized along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to include the capability of incorporating probability distribution information from prognostic algorithms as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics like prognostic horizon and alpha-lambda performance, and also quantify the corresponding performance while incorporating the uncertainty information.
07 Mar 2009
TL;DR: This paper introduces several new evaluation metrics tailored for prognostics and shows that they can effectively evaluate various algorithms as compared to other conventional metrics.
Abstract: Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of a system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Four prognostic algorithms, Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms differently; suitable metrics may be chosen depending on the requirements and constraints. Beyond these results, this paper offers ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.
23 Jan 2012
TL;DR: In this article, an Extended Kalman filter is used as a model-based prognostics technique based on the Bayesian tracking framework to predict the remaining life of power MOSFETs.
Abstract: The prognostic technique for a power MOSFET presented in this paper is based on accelerated aging of MOSFET IRF520Npbf in a TO-220 package. The methodology utilizes thermal and power cycling to accelerate the life of the devices. The major failure mechanism for the stress conditions is die-attachment degradation, typical for discrete devices with lead-free solder die attachment. It has been determined that die-attach degradation results in an increase in ON-state resistance due to its dependence on junction temperature. Increasing resistance can thus be used as a precursor of failure for the die-attach failure mechanism under thermal stress. A feature based on normalized ON-resistance is computed from in-situ measurements of the electro-thermal response. An Extended Kalman filter is used as a model-based prognostics technique within the Bayesian tracking framework. The proposed prognostics technique reports on preliminary work that serves as a case study on the prediction of remaining life of power MOSFETs and builds upon previously presented work. The algorithm considered in this study has been used as a prognostics algorithm in different applications and is regarded as a suitable candidate for component-level prognostics. This work attempts to further the validation of the algorithm by presenting it with real degradation data, including measurements from real sensors with all the complications (noise, bias, etc.) that are typically not captured in simulated degradation data. The algorithm is developed and tested on the accelerated-aging timescale. In real-world operation, the timescale of the degradation process, and therefore of the RUL predictions, will be considerably larger. It is hypothesized that although the timescale will be larger, it remains constant through the degradation process, and the algorithm and model would still apply under the slower degradation.
By using accelerated aging data with actual device measurements and real sensors (no simulated behavior), we attempt to assess how the algorithm behaves under realistic conditions.
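A minimal scalar sketch of the Bayesian tracking idea described above: an extended Kalman filter following an assumed exponential growth model for the normalized ON-resistance feature, with RUL obtained by extrapolating the tracked state to a failure threshold. The dynamics, noise covariances, and threshold are illustrative assumptions, not the paper's calibrated model.

```python
import math

def ekf_track(measurements, dt=1.0, b=0.05, x0=0.01, q=1e-6, r=1e-4):
    """Scalar EKF for state x = normalized ON-resistance increase with
    assumed dynamics x_{k+1} = x_k * exp(b * dt) and direct measurement."""
    x, P = x0, 1e-2
    F = math.exp(b * dt)                     # state-transition Jacobian
    estimates = []
    for z in measurements:
        x, P = F * x, F * P * F + q          # predict
        K = P / (P + r)                      # Kalman gain (H = 1)
        x, P = x + K * (z - x), (1.0 - K) * P  # update
        estimates.append(x)
    return estimates

def rul_from_state(x, b=0.05, threshold=0.05):
    """Time until the assumed exponential trend reaches the threshold."""
    if x <= 0.0 or x >= threshold:
        return 0.0
    return math.log(threshold / x) / b
```

On noisy real measurements the filter's estimate smooths the feature before the threshold extrapolation, which is the practical point of tracking rather than using raw data.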
01 Jan 2011
TL;DR: In this paper, a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions is presented.
Abstract: This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol’s method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent. Mathematical modeling of complex systems often requires sensitivity analysis to determine how an output variable of interest is influenced by individual or subsets of input variables. A traditional local sensitivity analysis entails gradients or derivatives, often invoked in design optimization, describing changes in the model response due to the local variation of input. Depending on the model output, obtaining gradients or derivatives, if they exist, can be simple or difficult. 
In contrast, a global sensitivity analysis (GSA), increasingly becoming mainstream, characterizes how the global variation of input, due to its uncertainty, impacts the overall uncertain behavior of the model. In other words, GSA constitutes the study of how the output uncertainty of a mathematical model is divvied up, qualitatively or quantitatively, among distinct sources of input variation in the model.
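The variance-based indices that GSA (and the PDD method above) targets, S_i = Var(E[Y|X_i]) / Var(Y), can be estimated by a brute-force pick-freeze Monte Carlo scheme. This naive sketch is a baseline for intuition, not the PDD algorithm itself, which obtains the same indices analytically from expansion coefficients:

```python
import random

def sobol_first_order(model, dim, n=20000, seed=0):
    """First-order Sobol indices for uniform [0,1] inputs via the
    pick-freeze (Saltelli-style) estimator: two independent sample
    matrices A and B, with column i of A swapped in from B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA + yB) / (2 * n)
    var = sum((y - mean) ** 2 for y in yA + yB) / (2 * n)
    S = []
    for i in range(dim):
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Monte Carlo estimate of Var(E[Y | X_i])
        Vi = sum(yb * (yab - ya)
                 for ya, yab, yb in zip(yA, yABi, yB)) / n
        S.append(Vi / var)
    return S
```

For the linear test model Y = X1 + 2*X2 with independent uniform inputs, the exact indices are 0.2 and 0.8, which the estimator approaches as n grows.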
TL;DR: A review on machinery prognostics following its whole program, i.e., from data acquisition to RUL prediction, which provides discussions on current situation, upcoming challenges as well as possible future trends for researchers in this field.
Abstract: Machinery prognostics is one of the major tasks in condition based maintenance (CBM), which aims to predict the remaining useful life (RUL) of machinery based on condition information. A machinery prognostic program generally consists of four technical processes, i.e., data acquisition, health indicator (HI) construction, health stage (HS) division, and RUL prediction. Over recent years, a significant amount of research work has been undertaken in each of the four processes, and much of the literature provides excellent overviews of the last process, i.e., RUL prediction. However, there has not been a systematic review that covers all four technical processes comprehensively. To fill this gap, this paper provides a review of machinery prognostics following its whole program, i.e., from data acquisition to RUL prediction. First, in data acquisition, several prognostic datasets widely used in the academic literature are introduced systematically. Then, commonly used HI construction approaches and metrics are discussed. After that, the HS division process is summarized by introducing its major tasks and existing approaches. Afterwards, the advancements of RUL prediction are reviewed, including the popular approaches and metrics. Finally, the paper provides discussions on the current situation, upcoming challenges, and possible future trends for researchers in this field.
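One common HI-construction step from the pipeline surveyed above can be sketched as an RMS feature over non-overlapping vibration windows followed by moving-average smoothing. The window size and smoothing span below are arbitrary assumptions, not recommendations from the review:

```python
import math

def rms(window):
    """Root-mean-square of one vibration window."""
    return math.sqrt(sum(v * v for v in window) / len(window))

def health_indicator(signal, window=1024, span=5):
    """RMS feature per non-overlapping window, then a trailing
    moving average to suppress measurement noise."""
    feats = [rms(signal[i:i + window])
             for i in range(0, len(signal) - window + 1, window)]
    smoothed = []
    for k in range(len(feats)):
        lo = max(0, k - span + 1)
        smoothed.append(sum(feats[lo:k + 1]) / (k + 1 - lo))
    return smoothed
```

A monotonically trending HI like this is what the subsequent HS-division and RUL-prediction stages consume.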
12 Dec 2008
TL;DR: In this article, the authors describe how damage propagation can be modeled within the modules of aircraft gas turbine engines and generate response surfaces of all sensors via a thermodynamic simulation model.
Abstract: This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermodynamic simulation model for the engine as a function of variations of flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant, and the failure criterion is reached when the health index reaches zero. Output of the model was the time series (cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the prognostics and health management (PHM) data competition at PHM'08.
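A toy reenactment of the data-generation recipe described in this abstract: an exponential flow/efficiency loss erodes superimposed margins, the health index is their minimum, and the run ends at the failure criterion. The margin functions, rate constant, and noise level are invented for illustration and do not reproduce the engine simulation:

```python
import math
import random

def simulate_run(rate=0.002, max_cycles=1000, seed=0):
    """Per-cycle health index for one degradation run; stops once
    the health index (minimum of the margins) reaches zero."""
    rng = random.Random(seed)
    his = []
    for t in range(max_cycles):
        loss = math.exp(rate * t) - 1.0          # exponential degradation
        margin_a = 1.0 - 1.5 * loss              # hypothetical margin 1
        margin_b = 1.0 - 2.0 * loss + rng.gauss(0.0, 0.001)  # margin 2, noisy
        hi = min(margin_a, margin_b)             # health index
        his.append(hi)
        if hi <= 0.0:                            # failure criterion
            break
    return his
```

Varying the seed and rate per run mimics the randomly chosen initial wear and fault magnitudes that give the challenge data its run-to-run variability.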
TL;DR: Experimental results demonstrate the effectiveness of the proposed hybrid prognostics approach in improving the accuracy and convergence of RUL prediction of rolling element bearings.
Abstract: Remaining useful life (RUL) prediction of rolling element bearings plays a pivotal role in reducing costly unplanned maintenance and increasing the reliability, availability, and safety of machines. This paper proposes a hybrid prognostics approach for RUL prediction of rolling element bearings. First, degradation data of bearings are sparsely represented using relevance vector machine regressions with different kernel parameters. Then, exponential degradation models coupled with the Frechet distance are employed to estimate the RUL adaptively. The proposed approach is evaluated using the vibration data from accelerated degradation tests of rolling element bearings and the public PRONOSTIA bearing datasets. Experimental results demonstrate the effectiveness of the proposed approach in improving the accuracy and convergence of RUL prediction of rolling element bearings.
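The exponential-degradation step of the hybrid approach can be sketched as a log-linear least-squares fit extrapolated to a failure threshold. The paper's adaptive RVM and Frechet-distance machinery is not reproduced here; the function names and threshold are assumptions:

```python
import math

def fit_exponential(ts, ys):
    """Least-squares fit of y(t) = a * exp(b * t) via log(y) = log(a) + b*t.
    Requires strictly positive degradation values ys."""
    n = len(ts)
    ls = [math.log(y) for y in ys]
    tbar, lbar = sum(ts) / n, sum(ls) / n
    b = (sum((t - tbar) * (l - lbar) for t, l in zip(ts, ls)) /
         sum((t - tbar) ** 2 for t in ts))
    a = math.exp(lbar - b * tbar)
    return a, b

def predict_rul(ts, ys, threshold):
    """Extrapolate the fitted curve to the failure threshold and
    return the remaining time past the last observation."""
    a, b = fit_exponential(ts, ys)
    t_fail = math.log(threshold / a) / b     # solve a * exp(b*t) = threshold
    return max(0.0, t_fail - ts[-1])
```

Refitting at each inspection time makes the prediction adaptive, which is the spirit (though not the mechanics) of the paper's approach.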
TL;DR: A multiobjective deep belief networks ensemble (MODBNE) method is proposed that employs a multiobjective evolutionary algorithm integrated with the traditional DBN training technique to evolve multiple DBNs simultaneously, subject to accuracy and diversity as two conflicting objectives.
Abstract: In numerous industrial applications where safety, efficiency, and reliability are among primary concerns, condition-based maintenance (CBM) is often the most effective and reliable maintenance policy. Prognostics, as one of the key enablers of CBM, involves the core task of estimating the remaining useful life (RUL) of the system. Neural networks-based approaches have produced promising results on RUL estimation, although their performances are influenced by handcrafted features and manually specified parameters. In this paper, we propose a multiobjective deep belief networks ensemble (MODBNE) method. MODBNE employs a multiobjective evolutionary algorithm integrated with the traditional DBN training technique to evolve multiple DBNs simultaneously subject to accuracy and diversity as two conflicting objectives. The eventually evolved DBNs are combined to establish an ensemble model used for RUL estimation, where combination weights are optimized via a single-objective differential evolution algorithm using a task-oriented objective function. We evaluate the proposed method on several prognostic benchmarking data sets and also compare it with some existing approaches. Experimental results demonstrate the superiority of our proposed method.
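The final combination step described above, in miniature: base-model RUL predictions are blended with weights chosen to minimize held-out squared error. A simple random search stands in for the paper's single-objective differential evolution, and all names are illustrative:

```python
import random

def ensemble_rul(preds, weights):
    """Weighted-average RUL from several base models' predictions."""
    s = sum(weights)
    return sum(w * p for w, p in zip(weights, preds)) / s

def fit_weights(model_preds, true_rul, iters=2000, seed=0):
    """Random-search stand-in for the differential-evolution weight
    optimization: keep the weights with lowest squared RUL error."""
    rng = random.Random(seed)
    m, n = len(model_preds), len(true_rul)
    best_w, best_err = [1.0] * m, float("inf")
    for _ in range(iters):
        w = [rng.random() + 1e-9 for _ in range(m)]
        err = sum((ensemble_rul([model_preds[j][k] for j in range(m)], w)
                   - true_rul[k]) ** 2 for k in range(n))
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

With one unbiased and one biased base model, the search should push nearly all weight onto the unbiased one, beating the equal-weight baseline.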