
Showing papers in "IEEE Transactions on Reliability in 2017"


Journal ArticleDOI
TL;DR: In this paper, a parametric inverse Gaussian process model is proposed to model degradation processes with constant, monotonic, and S-shaped degradation rates, where the physical meaning of the model parameters for time-varying degradation rates is highlighted.
Abstract: Degradation observations of modern engineering systems, such as manufacturing systems, turbine engines, and high-speed trains, often demonstrate various patterns of time-varying degradation rates. General degradation process models are mainly introduced for constant degradation rates, which cannot be used for time-varying situations. Moreover, sparse degradation observations and evolving degradation observations are both practical challenges for the degradation analysis of modern engineering systems. In this paper, parametric inverse Gaussian process models are proposed to model degradation processes with constant, monotonic, and S-shaped degradation rates, where the physical meaning of the model parameters for time-varying degradation rates is highlighted. Random effects are incorporated into the degradation process models to capture the unit-to-unit variability within the product population. A general Bayesian framework is extended to deal with the degradation analysis of sparse and evolving degradation observations. An illustrative example derived from the reliability analysis of a heavy-duty machine tool's spindle system is presented, which is characterized by sparse and evolving degradation observations under time-varying degradation rates.
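As a rough illustration of the class of model discussed above (not the authors' exact formulation), the sketch below simulates an inverse Gaussian degradation process whose increments follow IG(mu*dLambda(t), lambda*dLambda(t)^2) for a hypothetical S-shaped mean function Lambda(t); the logistic form and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

def s_shaped_mean(t, a=10.0, b=0.1, c=50.0):
    """Hypothetical S-shaped (logistic) mean degradation function Lambda(t)."""
    return a / (1.0 + np.exp(-b * (t - c)))

def simulate_ig_process(t_grid, mu=1.0, lam=4.0, rng=None):
    """Simulate one inverse Gaussian process path on t_grid.

    Increments: dY ~ IG(mean = mu * dLambda, shape = lam * dLambda**2),
    sampled with numpy's Wald (inverse Gaussian) generator.
    """
    rng = np.random.default_rng(rng)
    lam_t = s_shaped_mean(np.asarray(t_grid, dtype=float))
    d_lam = np.diff(lam_t)
    increments = rng.wald(mean=mu * d_lam, scale=lam * d_lam**2)
    return np.concatenate([[0.0], np.cumsum(increments)])

if __name__ == "__main__":
    t = np.linspace(0, 100, 201)
    paths = np.array([simulate_ig_process(t, rng=i) for i in range(5)])
    print("final degradation levels:", np.round(paths[:, -1], 3))
```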

148 citations


Journal ArticleDOI
TL;DR: A framework integrating cloud model, a new cognitive model for coping with fuzziness and randomness, and preference ranking organization method for enrichment evaluation (PROMETHEE) method, a powerful and flexible outranking decision making method, is developed for managing the group behaviors in FMEA.
Abstract: Failure mode and effect analysis (FMEA) is a well-known engineering technique to recognize and reduce possible failures for quality and reliability improvement in products and services. It is a group-oriented method usually conducted by a multidisciplinary and cross-functional expert panel. In this paper, we explore two key issues inherent to the FMEA practice: the representation of diversified risk assessments of FMEA team members and the determination of priority ranking of failure modes. Specifically, a framework integrating cloud model, a new cognitive model for coping with fuzziness and randomness, and preference ranking organization method for enrichment evaluation (PROMETHEE) method, a powerful and flexible outranking decision making method, is developed for managing the group behaviors in FMEA. Moreover, FMEA team members’ weights are objectively derived taking advantage of the risk assessment information. Finally, we illustrate the new risk priority model with a healthcare risk analysis case, and further validate its effectiveness via sensitivity and comparison discussions.

122 citations


Journal ArticleDOI
TL;DR: This paper presents a transmission line failure model that is enhanced with the dynamic thermal rating (DTR) system and is compared with the normal distribution model that considers only the end-of-life failure effect of the transmission line.
Abstract: This paper presents a transmission line failure model that is enhanced with the dynamic thermal rating (DTR) system. The failure model consists of two parts. The first part is the Arrhenius model and it considers the loading effect of the DTR system as a result of operating at a higher temperature than the static thermal rating system. The second part is the Weibull model and it considers the end-of-life (natural ageing) failure effect of the transmission line. The proposed model is compared with the normal distribution model that considers only the end-of-life failure effect of the transmission line. This paper also investigates the uncertainty effects of the line failure model parameters, effects of the DTR system reliability, and the effects of the weather data correlation on the reliability performance of the power system. The proposed methodology and case studies were performed on the IEEE reliability test network.
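A minimal sketch of how a two-part failure model of this kind could be evaluated numerically, assuming (purely for illustration) an Arrhenius-type rate for the loading part and a Weibull hazard for the ageing part; the functional forms and parameter values below are not taken from the paper.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_rate(temp_k, ea=0.5, a0=1.0):
    """Illustrative Arrhenius loading-related failure rate (per hour) at temperature temp_k."""
    return a0 * np.exp(-ea / (K_B * temp_k))

def weibull_hazard(age_h, beta=3.0, eta=4.0e5):
    """Illustrative Weibull end-of-life hazard (per hour) at age age_h hours."""
    return (beta / eta) * (age_h / eta) ** (beta - 1)

def failure_probability(temps_k, start_age_h, dt_h=1.0):
    """Probability of line failure over an hourly conductor-temperature profile.

    Total hazard = loading (Arrhenius) part + ageing (Weibull) part;
    P(fail) = 1 - exp(-integral of total hazard).
    """
    ages = start_age_h + dt_h * np.arange(len(temps_k))
    total_hazard = arrhenius_rate(np.asarray(temps_k)) + weibull_hazard(ages)
    return 1.0 - np.exp(-np.sum(total_hazard) * dt_h)

if __name__ == "__main__":
    hours = 24 * 365
    temps_dtr = 273.15 + 80 + 10 * np.random.default_rng(1).random(hours)  # hotter under DTR
    temps_str = 273.15 + 60 + 10 * np.random.default_rng(1).random(hours)
    print("P(fail) under DTR-like loading:", round(failure_probability(temps_dtr, 20 * 8760), 4))
    print("P(fail) under static rating   :", round(failure_probability(temps_str, 20 * 8760), 4))
```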

80 citations


Journal ArticleDOI
TL;DR: Given observations of a health indicator, a statistical model of bearing degradation signals is proposed to describe the two distinct stages in bearing degradation: the model aims to detect the first change point caused by an early bearing defect and to predict the bearing's remaining useful life.
Abstract: Bearings are the most common mechanical components used in machinery to support rotating shafts. Due to harsh working conditions, bearing performance deteriorates over time. To prevent any unexpected machinery breakdowns caused by bearing failures, statistical modeling of bearing degradation signals should be immediately conducted. In this paper, given observations of a health indicator, a statistical model of bearing degradation signals is proposed to describe two distinct stages existing in bearing degradation. More specifically, statistical modeling of Stage I aims to detect the first change point caused by an early bearing defect, and then statistical modeling of Stage II aims to predict bearing remaining useful life. More importantly, an underlying assumption used in the early work of Gebraeel et al. is discovered and reported in this paper. The work of Gebraeel et al. is extended to a more general prognostic method. Simulation and experimental case studies are investigated to illustrate how the proposed model works. Comparisons with the statistical model proposed by Gebraeel et al. for bearing remaining useful life prediction are conducted to highlight the superiority of the proposed statistical model.
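The sketch below illustrates the general two-stage idea in simplified form, not the authors' specific model: a control-limit rule flags the first change point in a health indicator, after which an exponential trend is fitted to the post-change data and extrapolated to a failure threshold to estimate remaining useful life. The thresholds, trend form, and synthetic data are illustrative assumptions.

```python
import numpy as np

def detect_change_point(signal, baseline_len=200, k=6.0):
    """Stage I: flag the first index where the indicator leaves the healthy band.

    The healthy band is mean + k * std of an initial baseline segment.
    """
    base = signal[:baseline_len]
    upper = base.mean() + k * base.std()
    exceed = np.flatnonzero(signal > upper)
    return int(exceed[0]) if exceed.size else None

def estimate_rul(signal, cp, threshold, dt=1.0):
    """Stage II: fit log-linear (exponential) growth after the change point and
    extrapolate to the failure threshold."""
    t = dt * np.arange(len(signal) - cp)
    y = np.log(signal[cp:])
    slope, intercept = np.polyfit(t, y, 1)
    t_fail = (np.log(threshold) - intercept) / slope
    return max(t_fail - t[-1], 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    healthy = 1.0 + 0.02 * rng.standard_normal(300)
    degrading = np.exp(0.01 * np.arange(200)) + 0.02 * rng.standard_normal(200)
    hi = np.concatenate([healthy, degrading])
    cp = detect_change_point(hi)
    print("change point index:", cp)
    print("estimated RUL (samples):", round(estimate_rul(hi, cp, threshold=20.0), 1))
```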

68 citations


Journal ArticleDOI
TL;DR: Experimental results indicate that the presented method can improve the accuracy of lifetime and RUL estimation for systems with state recovery; the unknown parameters in the presented model are estimated from the observed condition monitoring data.
Abstract: Many industrial systems inevitably suffer performance degradation. Thus, predicting the remaining useful life (RUL) for such degrading systems has attracted significant attention in the prognostics community. For some systems like batteries, one commonly encountered phenomenon is that the system performance degrades with usage and recovers in storage. However, almost all current prognostic studies do not consider such a recovery phenomenon in stochastic degradation modeling. In this paper, we present a prognostic model for deteriorating systems experiencing a switching operating process between usage and storage, where the system degradation state recovers randomly after the storage process. The possible recovery from the current time to the predicted future failure time is incorporated in the prognosis. First, the degradation state evolution of the system is modeled through a diffusion process with piecewise but time-dependent drift coefficient functions. Under the concept of first hitting time, we derive the lifetime and RUL distributions for systems with a specific constant working mode. Further, we extend the results of the RUL distribution in a specific constant working mode to the case of a stochastic working mode, which is modeled through a flexible two-state semi-Markov model (SMM) with phase-type distributed interval times. The unknown parameters in the presented model are estimated based on the observed condition monitoring data of the system, and the SMM is identified on the basis of the operating data. A numerical study and a case study of Li-ion batteries are carried out to illustrate and demonstrate the proposed prognostic method. Experimental results indicate that the presented method can improve the accuracy of lifetime and RUL estimation for systems with state recovery.

58 citations


Journal ArticleDOI
TL;DR: A convex quadratic formulation is developed that combines information from the degradation profiles of historical units with the in-situ sensory data of an operating unit to estimate, online, the failure threshold of that particular unit in the field, from which a better remaining useful life prediction is expected.
Abstract: The rapid development of sensor and computing technology has created an unprecedented opportunity for condition monitoring and prognostic analysis in various manufacturing and healthcare industries. With the massive amount of sensor information available, important research efforts have been made in modeling the degradation signals of a unit and estimating its remaining useful life distribution. In particular, a unit is often considered to have failed when its degradation signal crosses a predefined failure threshold, which is assumed to be known a priori. Unfortunately, such a simplified assumption may not be valid in many applications given the stochastic nature of the underlying degradation mechanism. While there are some extended studies considering the variability in the estimated failure threshold via data-driven approaches, they focus on the failure threshold distribution of the population instead of that of an individual unit. Currently, the existing literature still lacks an effective approach to accurately estimate the failure threshold distribution of an operating unit based on its in-situ sensory data during condition monitoring. To fill this gap, this paper develops a convex quadratic formulation that combines the information from the degradation profiles of historical units and the in-situ sensory data from an operating unit to estimate online the failure threshold of this particular unit in the field. With a more accurate estimate of the failure threshold of the operating unit in real time, a better remaining useful life prediction is expected. Simulations as well as a case study involving a degradation dataset of aircraft turbine engines were used to numerically evaluate and compare the performance of the proposed methodology with the existing literature in the context of failure threshold estimation and remaining useful life prediction.

57 citations


Journal ArticleDOI
TL;DR: A dynamic Bayesian network (DBN) approach for the modeling and predictive resilience analysis of dynamic engineered systems is presented, to aid in realizing resiliency in system designs and to pave the way toward developing resilient engineered systems.
Abstract: Uncertain and potentially harsh operating environments are often known to alter the operational performance of a system. In order to maintain system performance while coping with varying operating environments and potential disruptions, the resilience of engineered systems is desirable. Engineering systems are often inherently interconnected across dimensions, from basic components to subsystems to the system of systems, which poses a grand challenge for system designers analyzing the resilience of such complex systems. Moreover, further complications in the assessment of resilience in the engineering domain are attributed to time-varying system performances, random perturbation occurrences, and probable failures caused by adverse events. This paper presents a dynamic Bayesian network (DBN) approach for the modeling and predictive resilience analysis of dynamic engineered systems. With the inter-time-slice links and the conditional probability tables in a DBN, the system performance can be modeled as changing across discrete time slices, while capturing the temporal probabilistic dependencies between the variables. An industrial case study of an electricity distribution system is further studied to demonstrate the effectiveness of the DBN approach for resilience analysis. The approach presented in this paper aims to aid in realizing resiliency in system designs and to pave the way toward enhancements in developing resilient engineered systems.
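As a toy illustration of the DBN mechanics described above (a two-slice model with a transition table and a noisy observation table, not the case-study network from the paper), the sketch below propagates a belief over a discrete system-performance state through time and updates it with observations; all probability tables are made-up assumptions.

```python
import numpy as np

# Hypothetical performance states of the system.
STATES = ["nominal", "degraded", "failed"]

# Inter-time-slice transition table P(X_t | X_{t-1}) (rows sum to 1).
TRANSITION = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.80, 0.10],
    [0.00, 0.05, 0.95],
])

# Observation table P(obs | X_t) for a noisy condition-monitoring sensor
# (rows are observed labels, columns are true states; columns sum to 1).
OBSERVATION = np.array([
    [0.85, 0.10, 0.05],   # observed "nominal"
    [0.10, 0.80, 0.10],   # observed "degraded"
    [0.05, 0.10, 0.85],   # observed "failed"
])

def dbn_filter(obs_sequence, prior):
    """Forward filtering: predict with TRANSITION, then correct with OBSERVATION."""
    belief = np.asarray(prior, dtype=float)
    history = []
    for obs in obs_sequence:
        belief = TRANSITION.T @ belief          # prediction step
        belief = belief * OBSERVATION[obs]      # correction step (elementwise likelihood)
        belief /= belief.sum()
        history.append(belief.copy())
    return history

if __name__ == "__main__":
    observed = [0, 0, 1, 1, 2]  # indices into STATES as reported by the sensor
    for t, b in enumerate(dbn_filter(observed, prior=[1.0, 0.0, 0.0]), start=1):
        print(f"t={t}:", dict(zip(STATES, np.round(b, 3))))
```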

56 citations


Journal ArticleDOI
TL;DR: A stochastic process-based degradation model, governed by a linear Wiener process, is constructed to interpret the jump at the change point in the degradation process; the results reveal that accounting for this jump can improve the accuracy of estimation in real applications.
Abstract: Observations on degradation performance are often used to analyze the underlying degradation process of highly reliable products. From the two-phase degradation path of the bearing performance observations, we observed that there exists an abrupt increase in the degradation measurement at a change point. The subsequent degradation process, starting from this abrupt measurement, then degrades at a higher rate. Here, a stochastic process-based degradation model is constructed to interpret the jump at the change point in a degradation process governed by the linear Wiener process. Meanwhile, the distribution of the first passage time over a prespecified threshold for the process is discussed. In addition, to obtain the estimates of the model parameters, the expectation-maximization algorithm is utilized since the change points are unobservable. Furthermore, to demonstrate the model's advantages in estimation, a comparison is made between the proposed model and existing models from the literature. The results reveal that considering the jump in the degradation process can improve the accuracy of estimation in real applications.
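A small simulation sketch of the two-phase idea, assuming (for illustration only) a Wiener path with a fixed change point, a jump of known size, and a higher drift afterwards, with the first-passage-time distribution estimated by Monte Carlo rather than the paper's analytical derivation; all parameter values are assumptions.

```python
import numpy as np

def simulate_path(t_end=100.0, dt=0.1, mu1=0.05, mu2=0.15, sigma=0.2,
                  tau=40.0, jump=2.0, rng=None):
    """One two-phase Wiener degradation path with a jump of size `jump` at change point tau."""
    rng = np.random.default_rng(rng)
    n = int(t_end / dt)
    t = dt * np.arange(1, n + 1)
    drift = np.where(t <= tau, mu1, mu2)
    increments = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    x = np.cumsum(increments)
    x[t > tau] += jump                      # abrupt increase at the change point
    return t, x

def first_passage_times(threshold=10.0, n_paths=2000, **kwargs):
    """Monte Carlo estimate of first passage times over `threshold`."""
    fpts = []
    for i in range(n_paths):
        t, x = simulate_path(rng=i, **kwargs)
        hit = np.flatnonzero(x >= threshold)
        if hit.size:
            fpts.append(t[hit[0]])
    return np.array(fpts)

if __name__ == "__main__":
    fpt = first_passage_times()
    print(f"{len(fpt)} of 2000 paths crossed the threshold")
    print("mean / 10th / 90th percentile FPT:",
          round(fpt.mean(), 1), round(np.percentile(fpt, 10), 1), round(np.percentile(fpt, 90), 1))
```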

53 citations


Journal ArticleDOI
TL;DR: A stress-strength time-varying correlation interference model for structural reliability analysis using copulas is proposed, and the lower confidence limit of structural reliability is given based on the survival coefficient.
Abstract: This paper proposes a stress-strength time-varying correlation interference model for structural reliability analysis using copulas. First, the stochastic stress is developed by incorporating the interaction between the basic variables and the time variable into the quadratic response surface method, and the stochastic strength is characterized by a linear or exponential degradation model. Second, a copula selection method is given, and we propose time-varying stable (unstable) models for Kendall's tau to describe the time-varying correlation characteristic. Third, a structural reliability estimation method is developed; in particular, the method for nondifferentiable copulas can be used to calculate the probability over any region for any type of copula when the marginal distributions are continuous. Finally, the lower confidence limit of structural reliability is given based on the survival coefficient. Comparison results for high-temperature structural reliability estimation in different situations are illustrated in a simulation example to demonstrate the applicability of the proposed model.
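A minimal Monte Carlo sketch of a copula-based stress-strength interference calculation, assuming a Gaussian copula whose parameter is derived from a Kendall's tau value, with lognormal strength and Weibull stress marginals chosen purely for illustration; this is not the paper's time-varying formulation.

```python
import numpy as np
from scipy import stats

def reliability_gaussian_copula(tau=0.3, n=200_000, seed=0):
    """P(strength > stress) under a Gaussian copula with Kendall's tau = tau.

    Marginals (illustrative): strength ~ lognormal, stress ~ Weibull.
    For a Gaussian copula, rho = sin(pi * tau / 2).
    """
    rho = np.sin(np.pi * tau / 2.0)
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    u = stats.norm.cdf(z)                                   # copula samples in [0, 1]^2
    strength = stats.lognorm(s=0.1, scale=500.0).ppf(u[:, 0])
    stress = stats.weibull_min(c=3.0, scale=350.0).ppf(u[:, 1])
    return np.mean(strength > stress)

if __name__ == "__main__":
    for tau in (0.0, 0.3, 0.6):
        print(f"tau = {tau:.1f}: reliability = {reliability_gaussian_copula(tau):.4f}")
```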

53 citations


Journal ArticleDOI
TL;DR: The results show that in the case of software metrics, a dimensionality reduction technique based on confirmatory factor analysis provided an advantage when performing cross-project prediction, yielding the best F-measure for the predictions in five out of six cases.
Abstract: Statistical prediction models can be an effective technique to identify vulnerable components in large software projects. Two aspects of vulnerability prediction models have a profound impact on their performance: 1) the features (i.e., the characteristics of the software) that are used as predictors and 2) the way those features are used in the setup of the statistical learning machinery. In a previous work, we compared models based on two different types of features: software metrics and term frequencies (text mining features). In this paper, we broaden the set of models we compare by investigating an array of techniques for the manipulation of said features. These techniques fall under the umbrella of dimensionality reduction and have the potential to improve the ability of a prediction model to localize vulnerabilities. We explore the role of dimensionality reduction through a series of cross-validation and cross-project prediction experiments. Our results show that in the case of software metrics, a dimensionality reduction technique based on confirmatory factor analysis provided an advantage when performing cross-project prediction, yielding the best F -measure for the predictions in five out of six cases. In the case of text mining, feature selection can make the prediction computationally faster, but no dimensionality reduction technique provided any other notable advantage.

47 citations


Journal ArticleDOI
TL;DR: A set of two-stage recursive Bayesian formulations has been put forth to dynamically update the reliability function of a specific MSS over time by utilizing imperfect inspection data collected simultaneously or asynchronously from multiple levels of the system.
Abstract: Traditional time-based reliability assessment methods compute reliability measures of a multistate system (MSS) purely based upon historical time-to-failure data collected from a large population of identical systems. Using these methods, one can only assess the reliability of a system from a population or statistical perspective. Moreover, these methods fail to characterize the stochastic behavior of a specific individual MSS over time. Accordingly, in this paper, a dynamic reliability assessment method that can aggregate inspection data across multiple levels (such as component level, subsystem level, and system level) of a nonrepairable MSS has been studied. In general, inspection data collected from multiple levels of a system can be imperfect, but they are stochastically correlated with the actual states of the inspected system and components. A set of two-stage recursive Bayesian formulations has been put forth to dynamically update the reliability function of a specific MSS over time by utilizing imperfect inspection data collected simultaneously or asynchronously from multiple levels of the system. The proposed method is exemplified via an illustrative example of an underground flow transmission system. The impact of the probability of detection on the accuracy of the remaining useful life prediction is also examined.
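A toy sketch of the core idea of recursively updating a belief over a component's multistate condition from imperfect inspections, using a made-up probability-of-detection (confusion) matrix and degradation transition matrix; this is a simplification, not the paper's two-stage multilevel formulation.

```python
import numpy as np

# Hypothetical component states: 0 = failed, 1 = partially working, 2 = fully working.
N_STATES = 3

# P(observed state | true state): imperfect inspection (columns sum to 1).
POD = np.array([
    [0.90, 0.10, 0.02],
    [0.08, 0.80, 0.08],
    [0.02, 0.10, 0.90],
])

# One-step degradation transition P(state_t | state_{t-1}) for a nonrepairable component.
TRANS = np.array([
    [1.00, 0.00, 0.00],
    [0.15, 0.85, 0.00],
    [0.05, 0.20, 0.75],
])

def update_belief(belief, observed_state):
    """One recursive Bayesian step: degrade (predict), then condition on the inspection."""
    predicted = TRANS.T @ belief
    posterior = POD[observed_state] * predicted
    return posterior / posterior.sum()

if __name__ == "__main__":
    belief = np.array([0.0, 0.0, 1.0])         # start as-good-as-new
    for obs in [2, 2, 1, 1, 0]:                # noisy inspection results over time
        belief = update_belief(belief, obs)
        print("belief P(failed, partial, full) =", np.round(belief, 3))
```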

Journal ArticleDOI
TL;DR: A novel local subspace model for performance degradation assessment, termed locally linear embedding on the Grassmann manifold (GM-LLE), is proposed, in which subspaces are treated as points on the Grassmann manifold; results show that the proposed method can assess bearing degradation effectively and performs better than locally linear embedding.
Abstract: In recent years, a significant amount of research has been undertaken to address problems in prognostic and health management (PHM) systems. Performance degradation assessment, an essential part of PHM systems, is still a challenge. Subspaces, forming a non-Euclidean and curved manifold known as the Grassmann manifold, are able to capture dynamic behaviors and accommodate the effects of variations. In this paper, we propose a novel local subspace model for performance degradation assessment termed locally linear embedding on the Grassmann manifold (GM-LLE), where subspaces are treated as points on the Grassmann manifold. Due to the nonstationary nature of vibration signals, the second generation wavelet package is used to decompose the vibration signal into different levels. Subspaces are modeled by optimal statistical features of different frequency bands, and then GM-LLE is used to assess bearing performance degradation by embedding the subspaces into reproducing kernel Hilbert spaces. Finally, simulated and experimental vibration signals are used to validate the effectiveness of the proposed method. The results show that the proposed method can assess the bearing's degradation effectively and performs better compared with locally linear embedding.

Journal ArticleDOI
TL;DR: A model-observer-based scheme is proposed to monitor the states of buck converters and to estimate their component parameters, such as capacitance and inductance, and it is demonstrated that the proposed scheme performs online estimation of key parameters.
Abstract: DC–DC power converters such as buck converters are susceptible to degradation and failure due to operating under conditions of electrical stress and variable power sources in power conversion applications, such as electric vehicles and renewable energy. Some key components such as electrolytic capacitors degrade over time due to evaporation of the electrolyte. In this paper, a model-observer-based scheme is proposed to monitor the states of buck converters and to estimate their component parameters, such as capacitance and inductance. First, a diagnosis observer is proposed, and the generated residual vectors are applied for fault detection and isolation. Second, component condition parameters, such as capacitance and inductance, are reconstructed using another novel observer with an adaptive feedback law. Additionally, the observer structures and their theoretical performance are analyzed and proven. In contrast to existing reliability approaches applied to buck converters, the proposed scheme performs online estimation of key parameters. Finally, buck converters in conventional dc–dc step-down and photovoltaic applications are investigated to test and validate the effectiveness of the proposed scheme in both simulation and laboratory experiments. Results demonstrate the feasibility, performance, and superiority of the proposed component parameter estimation scheme.

Journal ArticleDOI
TL;DR: Based on the fractional Brownian motion, a degradation process with long-range dependence is adopted to predict the remaining useful life of batteries and blast furnace walls and unknown parameters in the degradation model can be identified using discrete dyadic wavelet transform and maximum likelihood estimation.
Abstract: A prerequisite for the existing remaining useful life prediction methods based on stochastic processes is the assumption of independent increments. However, this is in sharp contrast to some practical systems including batteries and blast furnace walls, in which the degradation processes have the property of long-range dependence. Based on the fractional Brownian motion, we adopt a degradation process with long-range dependence to predict the remaining useful life of the above systems. Because the degradation process with long-range dependence is neither a Markovian process nor a semimartingale, the exact analytical first passage time is difficult to derive directly. To address this problem, a weak convergence theorem is first adopted to approximately transform a fractional Brownian motion-based degradation process into a Brownian motion-based one with a time-varying coefficient. Then, with a space-time transformation, the first passage time of the degradation process with long-range dependence can be obtained in a closed form. Unknown parameters in the degradation model can be identified using discrete dyadic wavelet transform and maximum likelihood estimation. Numerical simulations and a practical example of a blast furnace wall are given to verify the effectiveness of the proposed method.
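A small sketch of the long-range-dependence idea: exact simulation of fractional Gaussian noise via a Cholesky factor of its covariance (adequate for short paths), cumulated into a fractional-Brownian-motion-driven degradation path, with the first passage time over a threshold estimated by Monte Carlo. The Hurst exponent, drift, and threshold are illustrative, and this is not the paper's weak-convergence and time-transformation derivation.

```python
import numpy as np

def fgn_cholesky(n, hurst, dt=1.0):
    """Cholesky factor of the covariance matrix of fractional Gaussian noise increments."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst)) * dt ** (2 * hurst)
    cov = gamma[np.abs(k[:, None] - k[None, :])]   # stationary: depends only on |i - j|
    return np.linalg.cholesky(cov)

def fbm_degradation_paths(n_paths, n_steps, hurst=0.75, drift=0.1, sigma=0.2,
                          dt=1.0, seed=0):
    """Degradation X(t) = drift * t + sigma * B_H(t), simulated on a grid of size n_steps."""
    rng = np.random.default_rng(seed)
    chol = fgn_cholesky(n_steps, hurst, dt)
    noise = rng.standard_normal((n_paths, n_steps)) @ chol.T   # correlated fGn increments
    t = dt * np.arange(1, n_steps + 1)
    return t, drift * t + sigma * np.cumsum(noise, axis=1)

if __name__ == "__main__":
    t, paths = fbm_degradation_paths(n_paths=500, n_steps=200, hurst=0.75)
    threshold = 15.0
    crossed = paths >= threshold
    fpt = np.array([t[row.argmax()] for row in crossed if row.any()])
    print(f"{len(fpt)}/500 paths crossed; mean first passage time = {fpt.mean():.1f}")
```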

Journal ArticleDOI
TL;DR: A unified framework for evaluating SCRR is proposed that internalizes design inputs and is flexible to varying degrees of data availability and contributes to the literature by relating SCRR to the supply chain's risk-mitigating capabilities prior to or postdisruptions.
Abstract: As the complexity of supply chain structures and the frequency of their disruptions rise, businesses are increasingly recognizing the value of supply chain reliability and resilience (SCRR). A number of efforts have been devoted to define SCRR or to develop risk mitigation strategies for their improvement. However, a key question remaining to be answered is how SCRR should be quantified. Existing methods are insufficient in addressing this question on two important fronts: First, they do not adequately represent the interdependencies between different supply chain nodes; second, they do not allow for the modeling of actionable decisions and thus cannot be used to guide improvement strategies. This paper aims to fill this gap by proposing a unified framework for evaluating SCRR that internalizes design inputs and is flexible to varying degrees of data availability. The proposed framework captures risks involved in the supply, the demand, the firm itself, and the external environment. It also contributes to the literature by relating SCRR to the supply chain's risk-mitigating capabilities prior to or postdisruptions. A novel method using the supply chain's inherent buyer–supplier relationships is designed to model node interdependencies. Two example applications are discussed to demonstrate how this framework can be utilized to assist in reliable and resilient supply chain design.

Journal ArticleDOI
TL;DR: A flexible Bayesian multiple-phase modeling approach to characterize degradation signals for prognosis and a particle filtering algorithm with stratified sampling and partial Gibbs resample-move strategy is developed for online model updating and residual life prediction.
Abstract: Remaining useful life prediction plays an important role in ensuring the safety, availability, and efficiency of various engineering systems. In this paper, we propose a flexible Bayesian multiple-phase modeling approach to characterize degradation signals for prognosis. The priors are specified with a novel stochastic process and the multiple-phase model is formulated to a novel state-space model to facilitate online monitoring and prediction. A particle filtering algorithm with stratified sampling and partial Gibbs resample-move strategy is developed for online model updating and residual life prediction. The advantages of the proposed method are demonstrated through extensive numerical studies and real case studies.
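A compact sketch of a plain bootstrap particle filter tracking the state of a simple linear-drift degradation model from noisy measurements; it omits the paper's multiple-phase priors, stratified sampling, and partial Gibbs resample-move refinements, and all model settings are assumptions.

```python
import numpy as np

def particle_filter(observations, n_particles=1000, drift=0.1,
                    process_std=0.05, obs_std=0.3, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + drift + w_t, y_t = x_t + v_t."""
    rng = np.random.default_rng(seed)
    particles = np.zeros(n_particles)
    estimates = []
    for y in observations:
        # Propagate particles through the state equation.
        particles = particles + drift + process_std * rng.standard_normal(n_particles)
        # Weight by the measurement likelihood and normalize.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Multinomial resampling (the "bootstrap" step).
        particles = rng.choice(particles, size=n_particles, replace=True, p=weights)
        estimates.append(particles.mean())
    return np.array(estimates)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    true_x = np.cumsum(0.1 + 0.05 * rng.standard_normal(100))
    y = true_x + 0.3 * rng.standard_normal(100)
    est = particle_filter(y)
    print("RMSE of filtered state:", round(float(np.sqrt(np.mean((est - true_x) ** 2))), 3))
```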

Journal ArticleDOI
TL;DR: The results show that degradation model uncertainty has significant effects on the quantile lifetime at the use conditions, especially for extreme quantiles, and the BMA can well capture this uncertainty and compute credibility intervals with the highest coverage probability at each quantile.
Abstract: In accelerated degradation testing (ADT), test data from higher than normal stress conditions are used to find stochastic models of degradation, e.g., Wiener process, Gamma process, and inverse Gaussian process models. In general, the selection of the degradation model is made with reference to one specific product and no consideration is given to model uncertainty. In this paper, we address this issue and apply the Bayesian model averaging (BMA) method to constant stress ADT. For illustration, stress relaxation ADT data are analyzed. We also make a simulation study to compare the s-credibility intervals for a single model and BMA. The results show that degradation model uncertainty has significant effects on the p-quantile lifetime at the use conditions, especially for extreme quantiles. The BMA can well capture this uncertainty and compute compromise s-credibility intervals with the highest coverage probability at each quantile.
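A rough sketch of the model-averaging idea: fit two candidate increment distributions by maximum likelihood, convert BIC values into approximate posterior model weights, and average a quantile estimate. The candidate models, synthetic data, and BIC-based weighting are illustrative simplifications, not the paper's full Bayesian treatment of ADT data.

```python
import numpy as np
from scipy import stats

def bic(logl, n_params, n_obs):
    return n_params * np.log(n_obs) - 2.0 * logl

def bma_quantile(increments, q=0.1):
    """Average the q-quantile of degradation increments over two candidate models."""
    n = len(increments)
    # Candidate 1: normal increments (Wiener-like). Candidate 2: gamma increments.
    mu, sd = np.mean(increments), np.std(increments, ddof=1)
    a, loc, scale = stats.gamma.fit(increments, floc=0.0)
    bics = np.array([
        bic(np.sum(stats.norm.logpdf(increments, mu, sd)), 2, n),
        bic(np.sum(stats.gamma.logpdf(increments, a, loc, scale)), 2, n),
    ])
    # BIC-based approximate posterior model weights.
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()
    quantiles = np.array([stats.norm.ppf(q, mu, sd),
                          stats.gamma.ppf(q, a, loc, scale)])
    return w, float(w @ quantiles)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    data = rng.gamma(shape=2.0, scale=0.5, size=80)   # synthetic degradation increments
    weights, q10 = bma_quantile(data)
    print("model weights (normal, gamma):", np.round(weights, 3))
    print("model-averaged 10% quantile:", round(q10, 3))
```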

Journal ArticleDOI
TL;DR: A novel reliability evaluation approach based on the multistate decision diagram for the DB-WSS is proposed that can handle arbitrary distributions of the degradation processes of multistate components or systems.
Abstract: Warm standby redundancy is a fault-tolerant technique balancing the low economical efficiency of hot standby and the long recovery time of cold standby. In this paper, motivated by practical engineering systems, a general demand-based warm standby system (DB-WSS) considering component degradation processes is studied. A series of intermediate states exists between perfect functionality and complete failure because of degradation processes. Many existing analytical reliability assessment techniques focus on conventional binary-state models or exponential state transition distributions for a system or its components. In this paper, a novel reliability evaluation approach based on the multistate decision diagram for the DB-WSS is proposed. The proposed technique can handle arbitrary distributions of degradation processes for multistate components or systems. Moreover, considering the imperfect switch of the warm standby component, the start failure probability is taken into account in the warm standby system. Numerical studies are given to illustrate the proposed approach.

Journal ArticleDOI
TL;DR: Performance of the base interval approach is analyzed, and the result shows that the proposed policy can approximate the optimal policy within a small factor.
Abstract: This paper develops a maintenance policy for a multicomponent system subject to hidden failures. Components of the system are assumed to suffer from hidden failures, which can only be detected at inspection. The objective of the maintenance policy is to determine the inspection intervals for each component such that the long-run cost rate is minimized. Due to the dependence among components, an exact optimal solution is difficult to obtain. Concerned with the intractability of the problem, a heuristic method named “base interval approach” is adopted to reduce the computational complexity. Performance of the base interval approach is analyzed, and the result shows that the proposed policy can approximate the optimal policy within a small factor. Two numerical examples are presented to illustrate the effectiveness of the policy.
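A simplified cost-rate sketch in the spirit of a base-interval policy: each component is inspected every k_i multiples of a common base interval T, hidden failures are exponential, and the long-run cost rate sums inspection cost and expected undetected-downtime cost per cycle. The cost model, the independent optimization per component, and the parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def component_cost_rate(base_T, k, lam, c_insp, c_down):
    """Approximate long-run cost rate of one component inspected every k * base_T.

    Expected undetected downtime per cycle of length L = k * base_T is
    L - (1 - exp(-lam * L)) / lam for an exponential hidden failure.
    """
    L = k * base_T
    downtime = L - (1.0 - np.exp(-lam * L)) / lam
    return (c_insp + c_down * downtime) / L

def best_policy(lams, c_insps, c_downs, base_grid, k_max=8):
    """Search over the base interval and integer multipliers k_i for each component."""
    best = (np.inf, None, None)
    for base_T in base_grid:
        ks, total = [], 0.0
        for lam, ci, cd in zip(lams, c_insps, c_downs):
            rates = [component_cost_rate(base_T, k, lam, ci, cd) for k in range(1, k_max + 1)]
            k_star = int(np.argmin(rates)) + 1
            ks.append(k_star)
            total += rates[k_star - 1]
        if total < best[0]:
            best = (total, base_T, ks)
    return best

if __name__ == "__main__":
    cost, base_T, ks = best_policy(
        lams=[0.01, 0.002, 0.005],        # hidden failure rates (per day)
        c_insps=[50.0, 80.0, 60.0],       # inspection costs
        c_downs=[200.0, 500.0, 300.0],    # downtime cost per day undetected
        base_grid=np.linspace(5, 60, 56))
    print(f"base interval = {base_T:.1f} days, multipliers = {ks}, cost rate = {cost:.2f}/day")
```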

Journal ArticleDOI
TL;DR: It is found that none of the mutation reduction strategies evaluated—many forms of operator selection, and stratified sampling (on operators or program elements)—produced an effectiveness advantage larger than 5% in comparison with random sampling.
Abstract: Mutation analysis is a well known yet unfortunately costly method for measuring test suite quality. Researchers have proposed numerous mutation reduction strategies in order to reduce the high cost of mutation analysis, while preserving the representativeness of the original set of mutants. As mutation reduction is an area of active research, it is important to understand the limits of possible improvements. We theoretically and empirically investigate the limits of improvement in effectiveness from using mutation reduction strategies compared to random sampling. Using real-world open source programs as subjects, we find an absolute limit in improvement of effectiveness over random sampling—13.078%. Given our findings with respect to absolute limits, one may ask: How effective are the extant mutation reduction strategies? We evaluate the effectiveness of multiple mutation reduction strategies in comparison to random sampling. We find that none of the mutation reduction strategies evaluated—many forms of operator selection, and stratified sampling (on operators or program elements)—produced an effectiveness advantage larger than 5% in comparison with random sampling. Given the poor performance of mutation selection strategies—they may have a negligible advantage at best, and often perform worse than random sampling—we caution practicing testers against applying mutation reduction strategies without adequate justification.

Journal ArticleDOI
TL;DR: A new signature monitoring technique called random additive signature monitoring (RASM) is proposed, which uses signature updates with random values and optimally placed validity checks to detect interblock control flow errors; RASM has a higher detection ratio, lower execution time overhead, and lower code size overhead than the studied techniques.
Abstract: Due to harsher working environments, soft errors or erroneous bit-flips occur more frequently in microcontrollers during execution. Without mitigation, such errors result in data corruption and control flow errors. Multiple software-implemented mitigation techniques have already been proposed. In this paper, we evaluate seven signature monitoring techniques in seven different test cases. We measure and compare their detection ratios, execution time overhead, and code size overhead. From the gathered results, we derive five requirements to develop an optimal signature monitoring technique. Based on these requirements, we propose a new signature monitoring technique called random additive signature monitoring (RASM). RASM uses signature updates with random values and optimally placed validity checks to detect interblock control flow errors. RASM has a higher detection ratio, lower execution time overhead, and lower code size overhead than the studied techniques.

Journal ArticleDOI
TL;DR: Results show that coverage has an insignificant correlation with the number of bugs that are found after the release of the software at the project level, and no such correlation at the file level.
Abstract: Testing is a pivotal activity in ensuring the quality of software. Code coverage is a common metric used as a yardstick to measure the efficacy and adequacy of testing. However, does higher coverage actually lead to a decline in postrelease bugs? Do files that have higher test coverage actually have fewer bug reports? The direct relationship between code coverage and actual bug reports has not yet been analyzed via a comprehensive empirical study on real bugs. Past studies only involve a few software systems or artificially injected bugs (mutants). In this empirical study, we examine these questions in the context of open-source software projects based on their actual reported bugs. We analyze 100 large open-source Java projects and measure the code coverage of the test cases that come along with these projects. We collect real bugs logged in the issue tracking system after the release of the software and analyze the correlations between code coverage and these bugs. We also collect other metrics such as cyclomatic complexity and lines of code, which are used to normalize the number of bugs and coverage to correlate with other metrics as well as use these metrics in regression analysis. Our results show that coverage has an insignificant correlation with the number of bugs that are found after the release of the software at the project level, and no such correlation at the file level.
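The kind of correlation analysis described above can be reproduced in a few lines; the sketch below computes Spearman correlations between coverage and (size-normalized) bug counts for synthetic project-level data, with made-up numbers standing in for the mined metrics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_projects = 100

# Synthetic project-level metrics standing in for the mined data.
coverage = rng.uniform(0.2, 0.95, n_projects)              # statement coverage
kloc = rng.uniform(10, 500, n_projects)                    # size in KLOC
bugs = rng.poisson(lam=0.4 * kloc)                         # post-release bugs (size-driven)

rho_raw, p_raw = stats.spearmanr(coverage, bugs)
rho_norm, p_norm = stats.spearmanr(coverage, bugs / kloc)  # normalize by size

print(f"coverage vs bugs:          rho = {rho_raw:+.3f}, p = {p_raw:.3f}")
print(f"coverage vs bugs per KLOC: rho = {rho_norm:+.3f}, p = {p_norm:.3f}")
```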

Journal ArticleDOI
Zhiliang Huang, Chao Jiang, Xiao Ming Li, Xinpeng Wei, T. Fang, Xu Han
TL;DR: A single-loop approach (SLA) is proposed to convert the nested optimization in TRBDO into a sequential iterative process composed of time-variant reliability analysis (TRA), constraint discretization, and design optimization.
Abstract: In the process of long-term use, the uncertainty of an engineering structure often presents time-variant or dynamic characteristics due to the influence of stochastic loads and material performance degradation. In such a situation, structural design optimization involves the important problem of time-variant reliability-based design optimization (TRBDO). Performing TRBDO involves a nested optimization, which leads to extremely low computational efficiency. In this paper, a single-loop approach (SLA) is proposed to convert the nested optimization in TRBDO into a sequential iterative process composed of time-variant reliability analysis (TRA), constraint discretization, and design optimization. In each iteration step, the TRA method based on stochastic process discretization is first used to calculate the time-variant reliability of the constraints; second, by introducing the concept of the target reliability index of the discretized time period and proposing the corresponding algorithm, each time-variant constraint is discretized into a series of time-invariant constraints to formulate a conventional reliability-based design optimization problem. The approach exhibits good overall performance in terms of efficiency and convergence. The validity and practicality of the SLA are validated by two numerical examples and a design problem for the chassis of a self-balancing vehicle.

Journal ArticleDOI
TL;DR: Dynamic uncertain causality graph (DUCG) is a newly presented approach for uncertain causality representation and probabilistic reasoning that has been successfully applied to online fault diagnosis of large complex industrial systems; the results reveal the effectiveness and feasibility of this methodology.
Abstract: Probabilistic safety assessment (PSA) has been widely applied to large complex industrial systems like nuclear power plants, chemical plants, etc. Event trees (ETs) and fault trees (FTs) are the major tools, but dependences and logic cycles may exist among and within them, and are not well addressed, leading to potentially optimistic estimates. Repeated representations and calculations exist. Causalities are assumed deterministic, while sometimes they are uncertain. This paper applies the dynamic uncertain causality graph (DUCG) in PSA to overcome these problems. DUCG is a newly presented approach for uncertain causality representation and probabilistic reasoning, and has been successfully applied to online fault diagnosis of large complex industrial systems. This paper suggests modeling all ETs and FTs of a target system as a single DUCG, allowing uncertain causalities and avoiding repeated representations, and calculating the probabilities/frequencies of the undesired events by using the DUCG algorithm. In the calculation, the problems of dependencies and circular loops are solved. The suggested DUCG representation mode and calculation algorithm are presented and illustrated with examples. The results reveal the effectiveness and feasibility of this methodology.

Journal ArticleDOI
TL;DR: This paper advances the state-of-the-art by presenting a solution methodology to determine combined optimal design configuration and optimal operation of heterogeneous warm standby series-parallel systems.
Abstract: Existing works on redundancy allocation problems have typically focused on active or cold standby redundancies or a mix of them; little research is dedicated to warm standby systems but with an assumption of allocating the same choice of components within each subsystem. Motivated by the fact that components with different costs and failure time distributions from different vendors can be available for the design of the same subsystem in practice, this paper advances the state-of-the-art by presenting a solution methodology to determine combined optimal design configuration and optimal operation of heterogeneous warm standby series-parallel systems. Particularly, based on a proposed numerical reliability evaluation algorithm, two combined optimization problems (component allocation and sequencing problem, and component distribution and sequencing problem) are formulated and solved. Necessity and significance of the proposed methodology are illustrated through examples. Efficiency of the methodology is also successfully demonstrated on large warm standby series-parallel systems containing 14 subsystems of different choices of components.

Journal ArticleDOI
TL;DR: A simple novel algorithm is presented to solve a Diophantine system that arises in the d-MP problem, and an improved algorithm is then proposed for the d-MP problem itself.
Abstract: The system reliability of a multistate flow network can be computed in terms of all the lower boundary points, called d-minimal paths (d-MPs). Although several algorithms have been proposed in the literature for the d-MP problem, there is still room for improvement upon its solution. Here, some new results are presented to improve the solution of the problem. A simple novel algorithm is presented to solve a Diophantine system that arises in the d-MP problem. Then, an improved algorithm is proposed for the d-MP problem. It is also explained how the proposed algorithm can be used to assess the reliability of some smart grid communication networks. We provide complexity results and show the main algorithm to be more efficient than the existing ones in terms of execution times through some benchmark networks and a general network example. Moreover, we compare the algorithms through one thousand randomly generated test problems using Dolan and Moré's performance profile.

Journal ArticleDOI
TL;DR: This paper evaluates the network reliability from a game theory perspective by forming a network game consisting of two players—router and attacker, where the router seeks to minimize his total expected trip cost while the attacker attempts to maximize the expected trip cost by undermining some of the network links.
Abstract: This paper evaluates the network reliability from a game theory perspective. We formulate a network game consisting of two players—router and attacker, where the router seeks to minimize his total expected trip cost, while the attacker attempts to maximize the expected trip cost by undermining some of the network links. Each link has a probabilistic cost in accordance with its state (normal or damaged). Two different scenarios are considered: link cost independent of the flow and link cost dependent on the flow. We are interested in the link use and damage probabilities at system equilibrium for both cases, and these probabilities are derived in a four-step procedure. First, for the router, the Dijkstra and Frank–Wolfe (FW) algorithms are used to optimize his strategy under the two scenarios, respectively. Second, we model the attacker's problem as a constrained optimization problem, in which all the decision variables are binary. A probabilistic solution discovery algorithm (PSDA) is integrated with stochastic ranking to determine the attacker's optimal strategy. Third, we leverage the method of successive averages (MSA) to approximate the router's link use probabilities and the attacker's link damage probabilities at the mixed Nash equilibrium of the game. Finally, given the router's probability of traveling through each link and the attacker's probability of undermining each link, we use Monte Carlo simulation (MCS) to estimate the network reliability as the probability that the router arrives at the destination node within a prescribed time. Two numerical examples are used to illustrate the procedures and effectiveness of the proposed method.
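Given link damage probabilities (which the paper obtains at the game equilibrium), the final Monte Carlo step can be sketched as below: sample damaged links, recompute the cheapest trip with Dijkstra, and count the fraction of trials whose cost stays within a prescribed limit. The small example graph, costs, and damage probabilities are made up for illustration.

```python
import heapq
import numpy as np

# Illustrative graph: edge -> (normal cost, damaged cost, damage probability).
EDGES = {
    ("s", "a"): (2.0, 8.0, 0.2), ("s", "b"): (3.0, 10.0, 0.1),
    ("a", "b"): (1.0, 5.0, 0.1), ("a", "t"): (4.0, 12.0, 0.3),
    ("b", "t"): (2.0, 9.0, 0.2),
}

def dijkstra(costs, source, target):
    """Cheapest trip cost on an undirected graph given per-edge costs."""
    adj = {}
    for (u, v), c in costs.items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, np.inf):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, np.inf):
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return np.inf

def network_reliability(max_cost=8.0, n_trials=20_000, seed=0):
    """Fraction of trials in which the cheapest s->t trip stays within max_cost."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(n_trials):
        costs = {e: (dmg_cost if rng.random() < p else normal)
                 for e, (normal, dmg_cost, p) in EDGES.items()}
        ok += dijkstra(costs, "s", "t") <= max_cost
    return ok / n_trials

if __name__ == "__main__":
    print("estimated network reliability:", round(network_reliability(), 4))
```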

Journal ArticleDOI
TL;DR: The results indicate that the proposed model and associated indicator can be used with significant confidence to predict the Remaining Useful Life during the early stages of operation and have distinct advantages over the standard linear model (Moving Average Model), both in terms of accuracy and robustness.
Abstract: Since failure of mechanical components can lead to catastrophic failure of the entire system, significant efforts have been made to monitor system behavior and try to predict the end of useful life of a component. A method to assess the process of mechanical wear in real time is based on monitoring the amount of debris in the lubricant. Although this approach has shown some potential in application, the nonlinearly cumulative damage in the late stage of mechanical life presents significant challenge to early prediction of the Remaining Useful Life. This paper considers continuous wear (devoid of sudden large particle dislodging or catastrophic failure) and assumes that it is a positive feedback physical process. This assumption serves as a basis of a dynamic model developed to describe the nonlinear behavior of wear in the late stage of useful mechanical life. Based on this model, it was discovered that the inflection point in the cumulative debris, during continuous wear process, presents a more accurate indicator of pending mechanical failure compared to the existing indicators. The peak in generation rate is considered as the end of useful mechanical life. The model is validated based on data from four wind turbine gearboxes. The results indicate that the proposed model and associated indicator can be used with significant confidence to predict the Remaining Useful Life during the early stages of operation and have distinct advantages over the standard linear model (Moving Average Model), both in terms of accuracy and robustness.
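One simple way to operationalize the inflection-point idea described above (not the paper's dynamic wear model) is to fit a logistic curve to cumulative debris counts and read off the inflection time, where the debris generation rate peaks; the synthetic data and parameters below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, k, t0):
    """Cumulative debris model: a / (1 + exp(-k (t - t0))); t0 is the inflection time."""
    return a / (1.0 + np.exp(-k * (t - t0)))

def inflection_time(t, cumulative_debris):
    """Fit the logistic curve and return the inflection point (peak debris generation rate)."""
    p0 = [cumulative_debris.max(), 0.1, np.median(t)]
    (a, k, t0), _ = curve_fit(logistic, t, cumulative_debris, p0=p0, maxfev=10_000)
    return t0

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    t = np.linspace(0, 300, 301)                       # operating hours
    truth = logistic(t, a=1000.0, k=0.05, t0=220.0)    # "true" cumulative debris
    observed = truth + 10.0 * rng.standard_normal(t.size)
    print("estimated inflection time (h):", round(inflection_time(t, observed), 1))
```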

Journal ArticleDOI
TL;DR: An empirical framework of accuracy graphs and their construction that reveal the relative accuracy of formulas is proposed and a list of formula pairs in which a formula is consistently statistically more accurate than or similar in accuracy to another is identified, enlightening directions for further theoretical analysis.
Abstract: The effectiveness of spectrum-based fault localization techniques primarily relies on the accuracy of their fault localization formulas. Theoretical studies prove the relative accuracy orders of selected formulas under certain assumptions, forming a graph of their theoretical accuracy relations. However, it is unclear whether in such a graph the relative positions of these formulas may change when some assumptions are relaxed. On the other hand, empirical studies can measure the actual accuracy of any formula in controlled settings that more closely approximate practical scenarios but in less general contexts. In this paper, we propose an empirical framework of accuracy graphs and their construction that reveal the relative accuracy of formulas. Our work not only evaluates the association between certain assumptions and the theoretical relations among formulas, but also expands our knowledge to reveal new potential accuracy relationships of other formulas which have not been discovered by theoretical analysis. Using our proposed framework, we identified a list of formula pairs in which a formula is consistently statistically more accurate than or similar in accuracy to another, enlightening directions for further theoretical analysis.
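To make the notion of formula accuracy concrete, the sketch below computes two classic spectrum-based suspiciousness formulas (Tarantula and Ochiai) from per-statement pass/fail execution counts and compares the rank each assigns to a known faulty statement; the tiny coverage matrix is fabricated and is not from the paper's experiments.

```python
import numpy as np

def tarantula(ef, ep, nf, np_):
    fail_ratio = ef / (ef + nf)
    pass_ratio = ep / (ep + np_)
    return fail_ratio / (fail_ratio + pass_ratio + 1e-12)

def ochiai(ef, ep, nf, np_):
    return ef / (np.sqrt((ef + nf) * (ef + ep)) + 1e-12)

def rank_of_fault(scores, fault_idx):
    """1-based rank of the faulty statement (ties broken pessimistically)."""
    return int(np.sum(scores >= scores[fault_idx]))

if __name__ == "__main__":
    # Fabricated spectra for 6 statements over 10 failing and 40 passing tests:
    # ef/ep = times executed by failing/passing tests; nf/np_ = times not executed.
    ef = np.array([10, 9, 4, 10, 2, 1], dtype=float)
    ep = np.array([35, 10, 20, 40, 30, 5], dtype=float)
    nf, np_ = 10 - ef, 40 - ep
    fault_idx = 1   # statement 1 is the (known) fault in this toy example
    for name, formula in [("Tarantula", tarantula), ("Ochiai", ochiai)]:
        scores = formula(ef, ep, nf, np_)
        print(f"{name:9s} scores = {np.round(scores, 3)}  rank of fault = "
              f"{rank_of_fault(scores, fault_idx)}")
```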

Journal ArticleDOI
TL;DR: A new type of degradation model is developed, in which the diffusion is represented as a fractional Brownian motion (FBM), which is actually a special non-Markovian process with long-term dependencies.
Abstract: Some practical systems such as blast furnaces and turbofan engines have degradation processes with memory effects. The term of memory effects implies that the future states of the degradation processes depend on both the current state and the past states because of the interaction with environments. However, most works generally used a memoryless Markovian process to model the degradation processes. To characterize the memory effects in practical systems, we develop a new type of degradation model, in which the diffusion is represented as a fractional Brownian motion (FBM). FBM is actually a special non-Markovian process with long-term dependencies. Based on the monitored data, a Monte Carlo method is used to predict the remaining useful life (RUL). The unknown parameters in the proposed model can be estimated by the maximum likelihood algorithm, and then the distribution of the RUL is predicted. The effectiveness of the proposed model is fully verified by a numerical example and a practical case study.