
Showing papers in "IEEE Transactions on Reliability in 2018"


Journal ArticleDOI
TL;DR: This paper addresses the problem of delay-dependent robust and reliable static output feedback (SOF) control for uncertain discrete-time piecewise-affine (PWA) systems with time-delay and actuator failure in a singular system setup.
Abstract: This paper addresses the problem of delay-dependent robust and reliable $\mathscr{H}_{\infty}$ static output feedback (SOF) control for uncertain discrete-time piecewise-affine (PWA) systems with time-delay and actuator failure in a singular system setup. A Markov chain is applied to describe the actuator fault behaviors. In particular, by utilizing a system augmentation approach, the conventional closed-loop system is converted into a singular PWA system. By constructing a mode-dependent piecewise Lyapunov–Krasovskii functional, a new $\mathscr{H}_{\infty}$ performance analysis criterion is then presented, where a novel summation inequality and the S-procedure are successively employed. Subsequently, thanks to the special structure of the singular system formulation, the PWA SOF controller design is proposed via a convex program. Illustrative examples are finally given to show the efficacy and reduced conservatism of the presented approach.

101 citations


Journal ArticleDOI
TL;DR: Numerical simulation examples indicate that the proposed method performs better at analyzing the conflict between different pieces of evidence, especially highly conflicting evidence; therefore, compared with existing methods, it has better applicability.
Abstract: Fault diagnosis is a typical multisensor information fusion problem. The information obtained from different sensors, such as sound, pressure, vibration, and temperature, can be considered as a piece of evidence. From the viewpoint of evidence theory, the problem of multisensor fault diagnosis can be viewed as a problem of evidence fusion and decision. However, the information obtained from different sensors may be inaccurate, uncertain, fuzzy, or even conflicting, so how to set up the fault diagnosis architecture of a distributed multisensor system and how to combine conflicting evidence must be taken into consideration. In this paper, the classical Dempster–Shafer evidence theory is described and the disadvantage of the classical Dempster's combination rule is discussed. In order to resolve the counter-intuitive result obtained when using the classical Dempster's combination rule, the Euclidean distance is proposed to characterize the differences between different pieces of evidence; the support degree of each piece of evidence is then generated, and the weighted pieces of evidence can be combined directly using the classical Dempster's combination rule. Numerical simulation examples indicate that the proposed method performs better at analyzing the conflict between different pieces of evidence, especially for highly conflicting evidence. Therefore, compared with the existing methods, it has better applicability. According to the requirements of the Dempster–Shafer evidence theory, the fault diagnosis architecture of a distributed multisensor system is analyzed in detail, and a fault case of a rotating machine is used to illustrate that the proposed model is effective, superior, and usable in practice.
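
As a rough illustration of the distance-weighted fusion idea described above (a minimal sketch, not the authors' implementation), the snippet below weights each piece of evidence by a support degree derived from its Euclidean distance to the other pieces, averages the weighted basic probability assignments, and then fuses the result with the classical Dempster's rule; the fault hypotheses and sensor reports are made-up values.

```python
from itertools import product

# Frame of discernment: three possible fault hypotheses (illustrative only).
F1, F2, F3 = frozenset({"F1"}), frozenset({"F2"}), frozenset({"F3"})

def dempster_combine(m1, m2):
    """Classical Dempster's rule for two basic probability assignments (BPAs)."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

def euclidean_distance(m1, m2):
    """Euclidean distance between two BPAs viewed as vectors over the focal elements."""
    keys = set(m1) | set(m2)
    return sum((m1.get(k, 0.0) - m2.get(k, 0.0)) ** 2 for k in keys) ** 0.5

def weighted_average_evidence(bodies):
    """Weight each piece of evidence by a support degree (evidence far from the
    others, i.e., conflicting, gets a low weight), then average the BPAs."""
    n = len(bodies)
    dist = [sum(euclidean_distance(bodies[i], bodies[j]) for j in range(n) if j != i)
            for i in range(n)]
    support = [1.0 / (d + 1e-12) for d in dist]        # closer to the rest -> more support
    weights = [s / sum(support) for s in support]
    keys = set().union(*bodies)
    return {k: sum(w * m.get(k, 0.0) for w, m in zip(weights, bodies)) for k in keys}

# Four sensor reports; the second one is highly conflicting with the rest.
evidence = [
    {F1: 0.6, F2: 0.3, F3: 0.1},
    {F2: 0.9, F3: 0.1},            # conflicting sensor
    {F1: 0.7, F2: 0.2, F3: 0.1},
    {F1: 0.65, F2: 0.25, F3: 0.1},
]

avg = weighted_average_evidence(evidence)
fused = avg
for _ in range(len(evidence) - 1):     # combine the averaged evidence n-1 times
    fused = dempster_combine(fused, avg)
print({next(iter(k)): round(v, 3) for k, v in fused.items()})
```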

100 citations


Journal ArticleDOI
TL;DR: A unified and effective solution for both CSDP and WSDP problems is provided through a proposed cost-sensitive kernelized semisupervised dictionary learning (CKSDL) approach; CKSDL outperforms state-of-the-art WSDP methods, using unlabeled cross-project defect data can help improve WSDP performance, and CKSDL generally obtains significantly better prediction performance than related SSDP methods in the CSDP scenario.
Abstract: When there are not enough historical defect data for building an accurate prediction model, semisupervised defect prediction (SSDP) and cross-project defect prediction (CPDP) are two feasible solutions. Existing CPDP methods assume that the available source data are well labeled. However, due to the expensive human effort required for labeling a large amount of defect data, we can usually only utilize suitable unlabeled source data. We refer to CPDP in this scenario as cross-project semisupervised defect prediction (CSDP). Although some within-project semisupervised defect prediction (WSDP) methods have been developed in recent years, there is still much room for improvement in prediction performance. In this paper, we aim to provide a unified and effective solution for both CSDP and WSDP problems. We introduce the semisupervised dictionary learning technique and propose a cost-sensitive kernelized semisupervised dictionary learning (CKSDL) approach. CKSDL can make full use of the limited labeled defect data and a large amount of unlabeled data in the kernel space. In addition, CKSDL considers the misclassification costs in the dictionary learning process. Extensive experiments on 16 projects indicate that CKSDL outperforms state-of-the-art WSDP methods, that using unlabeled cross-project defect data can help improve WSDP performance, and that CKSDL generally obtains significantly better prediction performance than related SSDP methods in the CSDP scenario.

94 citations


Journal ArticleDOI
TL;DR: This paper focuses on surveying state-of-the-art condition monitoring, diagnostic and prognostic techniques using performance parameters acquired from gas-path data that are mostly available from the operating systems of gas turbines.
Abstract: Health monitoring is an essential part of condition-based maintenance and prognostics and health management for gas turbines. Various health monitoring systems have been developed based on the measurement and observation of fault symptoms, including turbine performance parameters such as heat rate and nonperformance symptoms such as structural vibration. This paper focuses on surveying state-of-the-art condition monitoring, diagnostic, and prognostic techniques using performance parameters acquired from gas-path data that are mostly available from the operating systems of gas turbines. Performance parameters and the corresponding effective factors are presented at the beginning. The structure of performance monitoring and diagnostic systems is systematically laid out next, and the recent developments in each section are surveyed and discussed. Observing the importance of prognostics in the recent trend of health monitoring research, an emphasis is placed on prognostic frameworks and their implementation for remaining useful life prediction. A conclusion, along with a brief discussion on the current state and potential future directions, is provided at the end.

85 citations


Journal ArticleDOI
TL;DR: This paper aims to study the optimal inspection/replacement CBM strategy for a multi-unit system and casts the problem into a Markov decision framework and derives the optimal maintenance decisions that minimize the maintenance cost.
Abstract: Condition-based maintenance (CBM) is proved to be effective in reducing the long-run operational cost for a system subject to degradation failure. Most existing research on CBM focuses on single-unit systems where the whole system is treated as a black box. However, a system usually consists of a number of components and each component has its failure behavior. When degradation of the components is observable, CBM can be applied to the component level to improve the maintenance efficiency. This paper aims to study the optimal inspection/replacement CBM strategy for a multi-unit system. Degradation of each component is assumed to follow a Wiener process and periodic inspection is considered. We cast the problem into a Markov decision framework and derive the optimal maintenance decisions that minimize the maintenance cost. To better illustrate the optimal maintenance strategy, we start from a 1-out-of-2: G system and show that the optimal maintenance policy is a two-dimensional control limit policy. The argument used in the 1-out-of-2: G system can be readily extended to general cases in a similar way. The value iteration algorithm is used to find the optimal control limits, and the optimal inspection interval is subsequently determined through a one-dimensional search. A numerical study and a comprehensive sensitivity analysis are provided to illustrate the optimal maintenance strategy.
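
To make the Markov decision framing concrete, here is a minimal value-iteration sketch for a single periodically inspected component with discretized degradation levels; the paper treats a multi-unit system and also optimizes the inspection interval, and the transition probabilities, costs, and discount factor below are illustrative assumptions. The policy that emerges is a control-limit rule, echoing the paper's result.

```python
import numpy as np

# States 0..K are degradation levels observed at a periodic inspection; K means failed.
K = 20
C_PREV, C_CORR = 1.0, 5.0             # preventive / corrective replacement costs (assumed)
GAMMA = 0.95                          # discount factor per inspection interval
INC_P = {0: 0.3, 1: 0.5, 2: 0.2}      # discretized degradation increment per interval

def step_dist(s):
    """Distribution of the degradation level at the next inspection, starting from s."""
    dist = {}
    for d, p in INC_P.items():
        s2 = min(s + d, K)
        dist[s2] = dist.get(s2, 0.0) + p
    return dist

def action_values(s, V):
    """Expected discounted cost of 'continue' and 'replace' in state s."""
    cont = GAMMA * sum(p * V[s2] for s2, p in step_dist(s).items())
    repl = C_PREV + GAMMA * sum(p * V[s2] for s2, p in step_dist(0).items())
    return cont, repl

V = np.zeros(K + 1)
for _ in range(1000):                 # value iteration
    V_new = np.empty_like(V)
    for s in range(K + 1):
        if s == K:                    # failed at inspection: corrective replacement
            V_new[s] = C_CORR + GAMMA * sum(p * V[s2] for s2, p in step_dist(0).items())
        else:
            V_new[s] = min(action_values(s, V))
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

policy = ["replace" if action_values(s, V)[1] < action_values(s, V)[0] else "continue"
          for s in range(K)]
print("first level at which replacement is optimal (control limit):",
      policy.index("replace") if "replace" in policy else None)
```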

78 citations


Journal ArticleDOI
TL;DR: This paper develops a numerical methodology to model and evaluate mission success probability and system survivability of 1-out-of-N warm standby systems subject to constant or adaptive mission abort policies and proposes a tradeoff analysis that can help identify optimal decisions on system mission abort and standby policies.
Abstract: Many real-world critical systems, such as aircraft and human space flight systems, utilize mission aborts to enhance the survivability of the system. Specifically, the mission objectives of these systems can be aborted in cases where a certain malfunction condition is met, and a rescue or recovery procedure is then initiated for system survival. Traditional system reliability models typically cannot address the effects of mission aborts, and thus are not applicable to analyzing systems subject to mission abort requirements. In this paper, we first develop a numerical methodology to model and evaluate mission success probability and system survivability of 1-out-of-N warm standby systems subject to constant or adaptive mission abort policies. The system components are heterogeneous, characterized by different performances and different types of time-to-failure distributions. Based on the proposed evaluation method, we make another new contribution by formulating and solving the optimal mission abort problem, as well as a combined optimization problem that identifies the mission abort policy and component activation sequence maximizing mission success probability while achieving the desired level of system survivability. Efficiencies of constant and adaptive mission abort policies are compared through examples. Examples also demonstrate the tradeoff between system survivability and mission success probability due to the utilization of a mission abort policy. Such a tradeoff analysis can help identify optimal decisions on system mission abort and standby policies, promoting safe and reliable operation of warm standby systems.
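
A crude Monte Carlo sketch of this setting (not the paper's numerical method) can illustrate the tradeoff: a 1-out-of-N warm standby system with exponential lifetimes and a constant abort policy that triggers a fixed-length rescue after a given number of operating-unit failures. All rates, times, and the abort threshold are assumed values, and only failures of the operating unit count toward the abort rule in this simplification.

```python
import random

random.seed(0)
N = 4                            # 1 operating unit + 3 warm standbys
TAU = 100.0                      # mission time
T_RESCUE = 20.0                  # rescue duration after an abort (assumed constant)
ABORT_AFTER = 2                  # constant abort policy: abort after this many operating failures
L_OP, L_SB = 1 / 80, 1 / 400     # operating / warm-standby failure rates (illustrative)

def simulate():
    """One mission; returns (mission_success, system_survived)."""
    sb_fail = [random.expovariate(L_SB) for _ in range(N - 1)]  # standby-mode failure times
    t, failures, horizon, aborted = 0.0, 0, TAU, False
    while True:
        t += random.expovariate(L_OP)        # operating unit runs until it fails
        if t >= horizon:                     # it outlives the remaining horizon
            return (not aborted), True       # mission ok unless aborted; system survives
        failures += 1
        if not aborted and failures >= ABORT_AFTER:
            aborted, horizon = True, t + T_RESCUE   # give up the mission, start the rescue
        # activate the next standby unit that has not already failed in standby mode
        idx = next((i for i, s in enumerate(sb_fail) if s > t), None)
        if idx is None:
            return False, False              # nothing left to activate: system is lost
        sb_fail[idx] = -1.0                  # mark the activated unit as consumed

runs = 200_000
outcomes = [simulate() for _ in range(runs)]
print("mission success probability:", sum(m for m, _ in outcomes) / runs)
print("system survivability       :", sum(s for _, s in outcomes) / runs)
```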

74 citations


Journal ArticleDOI
TL;DR: A hybrid cutting tool remaining useful life prediction approach is proposed by combining a data-driven model and a physics-based model, so that cutting tool remaining useful life can be predicted more accurately during the machining process.
Abstract: Accurate remaining useful life prediction is meaningful for cutting tool usability evaluation. Over the years, experience-based models, data-driven models, and physics-based models have been used individually to predict cutting tool remaining useful lives. In order to improve prediction performance, different prognostics models can be combined to leverage their advantages. In this paper, a hybrid cutting tool remaining useful life prediction approach is proposed by combining a data-driven model and a physics-based model. By using force, vibration, and acoustic emission signals, the data-driven model monitors cutting tool wear conditions based on empirical mode decomposition and a back propagation neural network. On the basis of the Wiener process, the physics-based model builds a cutting tool condition degradation model to predict cutting tool remaining useful lives. An experimental study verifies the approach's effectiveness, accuracy, and robustness, showing that cutting tool remaining useful lives can be predicted more accurately during the machining process.
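
The physics-based half of such a hybrid scheme can be sketched as follows: once the data-driven monitor has estimated the current wear level, a drifted Wiener degradation model gives the remaining useful life (RUL) as the first passage time to the wear threshold, which follows an inverse Gaussian distribution. The drift, diffusion, threshold, and current wear values below are illustrative, not the paper's estimates.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rul_cdf(t, x, mu, sigma, D):
    """P(RUL <= t): inverse Gaussian CDF with mean (D-x)/mu and shape ((D-x)/sigma)^2,
    the first-passage-time law of X(t) = x + mu*t + sigma*B(t) hitting threshold D."""
    d = D - x
    m, lam = d / mu, (d / sigma) ** 2
    a = math.sqrt(lam / t)
    return norm_cdf(a * (t / m - 1.0)) + math.exp(2.0 * lam / m) * norm_cdf(-a * (t / m + 1.0))

mu, sigma, D, x = 0.02, 0.05, 1.0, 0.55      # drift, diffusion, wear threshold, current wear
mean_rul = (D - x) / mu
print(f"mean RUL   : {mean_rul:.1f} time units")
print(f"P(RUL<=15) : {rul_cdf(15.0, x, mu, sigma, D):.3f}")
print(f"P(RUL<=30) : {rul_cdf(30.0, x, mu, sigma, D):.3f}")
```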

74 citations


Journal ArticleDOI
TL;DR: An overview of fuzzing is presented that concentrates on its general process, as well as classifications, followed by detailed discussion of the key obstacles and some state-of-the-art technologies which aim to overcome or mitigate these obstacles.
Abstract: As one of the most popular software testing techniques, fuzzing can find a variety of weaknesses in a program, such as software bugs and vulnerabilities, by generating numerous test inputs. Due to its effectiveness, fuzzing is regarded as a valuable bug hunting method. In this paper, we present an overview of fuzzing that concentrates on its general process, as well as classifications, followed by detailed discussion of the key obstacles and some state-of-the-art technologies which aim to overcome or mitigate these obstacles. We further investigate and classify several widely used fuzzing tools. Our primary goal is to equip the stakeholder with a better understanding of fuzzing and the potential solutions for improving fuzzing methods in the spectrum of software testing and security. To inspire future research, we also predict some future directions with regard to fuzzing.

74 citations


Journal ArticleDOI
Zhen Wang1, Jianmin Gao1, Rongxi Wang1, Kun Chen1, Zhiyong Gao1, Wei Zheng1 
TL;DR: A new risk priority model is presented for FMEA by using the house of reliability (HoR)-based rough VIsekriterijumska optimizacija i KOmpromisno Resenje (VIKOR) approach, and an illustrative case on the transmission system of a vertical machining center demonstrates the effectiveness and practicality of the proposed model.
Abstract: Failure mode and effects analysis (FMEA) is a widely used reliability analysis tool for identifying and eliminating known or potential failures in systems, designs, and processes. In traditional FMEA, failure modes are evaluated by FMEA team members with respect to three risk factors: severity (S), occurrence (O), and detectability (D), and ranked via their risk priority number (RPN), which is obtained by multiplying the crisp values of S, O, and D. However, the traditional RPN has been considerably criticized for the following shortcomings: not considering the different weights of the risk factors; identical RPN values for different combinations of S, O, and D; the diversity and uncertainty of the evaluation information given by FMEA team members; and not considering the dependence among different failure modes. Although significant efforts have been made in the FMEA literature to overcome these shortcomings, some deficiencies remain. In this paper, a new risk priority model is presented for FMEA by using the house of reliability (HoR)-based rough VIsekriterijumska optimizacija i KOmpromisno Resenje (VIKOR) approach. In the proposed model, the HoR is introduced to identify the dependence among different failure modes and the link between failure modes and the risk factors of O and D and the subcriteria of S. The rough number is introduced to handle the subjectivity and vagueness in decision making, and the VIKOR approach is used to determine the risk priority order of failure modes in a comprehensive way. Finally, an illustrative case on the transmission system of a vertical machining center demonstrates the effectiveness and practicality of the proposed model.
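
A tiny numeric illustration of the traditional RPN and the identical-value shortcoming mentioned above (the failure modes and ratings are invented):

```python
# Traditional FMEA risk priority number: RPN = S * O * D.
failure_modes = {
    "bearing wear":        {"S": 8, "O": 3, "D": 5},
    "lubricant leakage":   {"S": 5, "O": 8, "D": 3},   # different risk profile, same RPN
    "gear tooth fracture": {"S": 9, "O": 2, "D": 4},
}

for name, r in failure_modes.items():
    rpn = r["S"] * r["O"] * r["D"]
    print(f"{name:<20} S={r['S']} O={r['O']} D={r['D']}  RPN={rpn}")
# "bearing wear" and "lubricant leakage" both get RPN = 120 even though a severity-8
# failure usually deserves higher priority; this is one motivation for the rough
# VIKOR-based model, which also weights the risk factors and models their dependence.
```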

74 citations


Journal ArticleDOI
TL;DR: A case study based on the degradation signals of aircraft gas turbine engines is conducted and shows that the health index developed with the proposed method is insensitive to missing data and leads to improved prognostic performance.
Abstract: To prevent unexpected failures of complex engineering systems, multiple sensors have been widely used to simultaneously monitor the degradation process and make inference about the remaining useful life in real time. As each of the sensor signals often contains partial and dependent information, data-level fusion techniques have been developed that aim to construct a health index via the combination of multiple sensor signals. While the existing data-level fusion approaches have shown promise for degradation modeling and prognostics, they are limited by only considering a linear fusion function. Such a linear assumption is usually insufficient to accurately characterize the complicated relations between multiple sensor signals and the underlying degradation process in practice, especially for the complex engineering systems considered in this study. To address this issue, this study fills the literature gap by integrating kernel methods into the data-level fusion approaches to construct a health index that better characterizes the degradation process of the system. Through selecting a proper kernel function, the nonlinear relation between multiple sensor signals and the underlying degradation process can be captured. As a result, the constructed health index is expected to perform better in prognosis than existing data-level fusion methods that are based on the linear assumption. In fact, the existing data-level fusion models turn out to be only a special case of the proposed method. A case study based on the degradation signals of aircraft gas turbine engines is conducted and shows that the health index developed with the proposed method is insensitive to missing data and leads to improved prognostic performance.

72 citations


Journal ArticleDOI
TL;DR: Results from this paper demonstrate that the joint analysis of both types of uncertainty is more useful, as it provides a range of reliability index values rather than a single crisp value as in the pure probabilistic approach.
Abstract: This paper proposes a joint methodology for quantifying both the aleatory and epistemic uncertainties of the transmission line end-of-life failure model. Aleatory uncertainty accounts for the well-known probabilistic or random failures. Epistemic uncertainty modeling accounts for the lack of knowledge and imprecision in the parameters of the failure model. Imprecision arises due to the lack of historical failure data, as transmission lines have long life cycles. The studied transmission lines are also equipped with the dynamic thermal rating system. This reflects the current realistic situation where system operators are able to up-rate their networks due to the advancement of sensors, which is normally preferred over the purchase of new transmission assets. The end-of-life failure is modeled using the Arrhenius–Weibull model from the authors' previous paper. Results from this paper demonstrate that the joint analysis of both types of uncertainty is more useful, as it provides a range of reliability index values rather than a single crisp value as in the pure probabilistic approach.

Journal ArticleDOI
TL;DR: Experimental results show that both RR and LAR models perform better than linear regression and negative binomial regression for cross-version defect prediction; compared with the two best methods in the previous study for sorting software modules according to the predicted number of defects, RR has comparable performance and less model construction time.
Abstract: Sorting software modules in order of defect count can help testers to focus on software modules with more defects. One of the most popular methods for sorting modules is generalized linear regression. However, our previous study showed the poor performance of these regression models, which might be caused by severe multicollinearity. Ridge regression (RR) can improve prediction performance for multicollinearity problems. Lasso regression (LAR) is a worthy competitor to RR. Therefore, we investigate both RR and LAR models for cross-version defect prediction. Cross-version defect prediction approximates real applications: it constructs prediction models from a previous version of a project and predicts defects in the next version. Experimental results based on 11 projects from the PROMISE repository consisting of 41 different versions show that: 1) there exist severe multicollinearity problems in the experimental datasets; 2) both RR and LAR models perform better than linear regression and negative binomial regression for cross-version defect prediction; and 3) compared with the two best methods in our previous study for sorting software modules according to the predicted number of defects, RR has comparable performance and less model construction time.
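
A minimal sketch of the cross-version workflow with ridge and lasso regression, using synthetic data with deliberate multicollinearity in place of the PROMISE metrics; module ranking by predicted defect count is the output of interest.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.preprocessing import StandardScaler

# Train on metrics of version v, predict defect counts for version v+1,
# and sort the modules of v+1 by the predicted count.
rng = np.random.default_rng(42)
n_old, n_new, n_metrics = 300, 250, 20
X_old = rng.normal(size=(n_old, n_metrics))
X_old[:, 1] = X_old[:, 0] + 0.01 * rng.normal(size=n_old)   # deliberate multicollinearity
true_w = rng.normal(size=n_metrics)
y_old = np.maximum(0, X_old @ true_w + rng.normal(size=n_old)).round()

X_new = rng.normal(size=(n_new, n_metrics))
X_new[:, 1] = X_new[:, 0] + 0.01 * rng.normal(size=n_new)

scaler = StandardScaler().fit(X_old)
for name, model in [("ridge", Ridge(alpha=1.0)), ("lasso", Lasso(alpha=0.1))]:
    model.fit(scaler.transform(X_old), y_old)
    pred = model.predict(scaler.transform(X_new))
    ranking = np.argsort(-pred)          # modules to test first
    print(name, "top-5 modules to inspect:", ranking[:5])
```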

Journal ArticleDOI
TL;DR: An automated malware detection system, MalPat, is implemented to fight against malware and assist Android app marketplaces in addressing unknown malicious apps.
Abstract: The dramatic rise of Android application (app) marketplaces has brought significant convenience to mobile users. Consequently, with the advantage of numerous Android apps, Android malware seizes the opportunity to steal privacy-sensitive data by pretending to provide the functionalities that benign apps do. To distinguish malware from millions of Android apps, researchers have proposed sophisticated static and dynamic analysis tools to automatically detect and classify malicious apps. Most of these tools, however, rely on manual configuration of lists of features based on permissions, sensitive resources, intents, etc., which are difficult to come by. To address this problem, we study real-world Android apps to mine hidden patterns of malware and are able to extract highly sensitive APIs that are widely used in Android malware. We also implement an automated malware detection system, MalPat, to fight against malware and assist Android app marketplaces in addressing unknown malicious apps. Comprehensive experiments are conducted on our dataset consisting of 31 185 benign apps and 15 336 malware samples. Experimental results show that MalPat is capable of detecting malware with a high $F_1$ score (98.24%) compared with state-of-the-art approaches.

Journal ArticleDOI
TL;DR: A system to detect vigilance level using not only a driver's EEG signals but also driving contexts as inputs is proposed, and a support vector machine is combined with particle swarm optimization methods to improve classification accuracy.
Abstract: Quantitative estimation of a driver's vigilance level has a great value for improving driving safety and preventing accidents. Previous studies have identified correlations between electroencephalogram (EEG) spectrum power and a driver's mental states such as vigilance and alertness. Studies have also built classification models that can estimate vigilance state changes based on data collected from drivers. In the present study, we propose a system to detect vigilance level using not only a driver's EEG signals but also driving contexts as inputs. We combined a support vector machine with particle swarm optimization methods to improve classification accuracy. A simulated driving task was conducted to demonstrate the reliability of the proposed system. Twenty participants were assigned a 2-h sustained-attention driving task to identify a lead car's brake events. Our system was able to account for 84.1% of experimental reaction times with 162-ms prediction errors. A newly introduced driving context factor, road curves, improved the prediction accuracy by 2–5% with 30–80 ms smaller errors. These findings demonstrated the potential value of the proposed system for estimating driver vigilance level on a time scale of seconds.
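
A small sketch of the SVM-plus-PSO idea, with a toy particle swarm tuning the RBF-SVM hyperparameters against cross-validated accuracy; the synthetic features stand in for EEG band powers and driving-context inputs, and all PSO settings are assumed values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=12, n_informative=6, random_state=0)

def fitness(pos):
    """Cross-validated accuracy of an RBF-SVM with C, gamma taken from the particle."""
    C, gamma = 10.0 ** pos[0], 10.0 ** pos[1]
    return cross_val_score(SVC(C=C, gamma=gamma, kernel="rbf"), X, y, cv=3).mean()

n_particles, n_iter = 10, 15
bounds = np.array([[-1.0, 3.0], [-4.0, 0.0]])          # ranges for log10(C), log10(gamma)
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                              # inertia and acceleration weights
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best log10(C), log10(gamma):", gbest, " cv accuracy:", pbest_val.max())
```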

Journal ArticleDOI
TL;DR: A change-point Wiener process with measurement errors (CPWPME) is proposed to fit two-phase degradation paths of organic light-emitting diodes (OLEDs) using a hierarchical Bayesian method, and it provides higher modeling flexibility and prediction power for future testing units than three existing degradation models.
Abstract: Degradation testing is an effective method for assessing product reliability when measurements of degradation leading to failure can be observed. The accuracy of reliability inference in degradation analysis highly depends on the model fitted to the observed degradation data. Sometimes, the observed degradation paths of the products exhibit a multiphase pattern over the testing period. In this paper, we propose a change-point Wiener process with measurement errors (CPWPME) to fit two-phase degradation paths of organic light-emitting diodes (OLEDs). The unit-specific parameters of the CPWPME model are modeled using a hierarchical Bayesian method. Based upon the proposed approach, the failure-time distribution and the remaining useful life distribution, along with the mean time to failure and the mean residual life function, are derived in closed form. A simulation study shows the utility of the proposed CPWPME model and the validity of the hierarchical Bayesian approach for degradation data possessing two-phase degradation characteristics. In the analysis of the OLED degradation data, the hierarchical Bayesian CPWPME model provides higher modeling flexibility and prediction power for future testing units than three existing degradation models.

Journal ArticleDOI
TL;DR: By introducing condition-monitoring data on the system degradation process, it is possible to capture the system-specific characteristics and, therefore, provide a more complete and accurate description of the risk of the target system.
Abstract: Traditional quantitative risk assessment methods (e.g., event tree analysis) are static in nature, i.e., the risk indexes are assessed before operation, which prevents capturing time-dependent variations as the components and systems operate, age, fail, and are repaired and changed. To address this issue, we develop a dynamic risk assessment (DRA) method that allows online estimation of risk indexes using data collected during operation. Two types of data are considered: statistical failure data, which refer to the counts of accidents or near misses from similar systems, and condition-monitoring data, which come from online monitoring of the degradation of the target system of interest. For this, a hierarchical Bayesian model is developed to compute the reliability of the safety barriers, and a Bayesian updating algorithm, which integrates particle filtering (PF) with Markov chain Monte Carlo, is developed to update the reliability evaluations based on both the statistical and condition-monitoring data. The updated safety barrier reliabilities are then used in an event tree (ET) for consequence analysis, and the risk indexes are updated accordingly. A case study on a high-flow safety system is conducted to demonstrate the developed methods. A comparison with the DRA method that only uses statistical failure data shows that, by introducing condition-monitoring data on the system degradation process, it is possible to capture the system-specific characteristics and, therefore, provide a more complete and accurate description of the risk of the target system.

Journal ArticleDOI
TL;DR: This paper presents a reliability modeling and analysis framework for load-sharing systems with identical components subject to continuous degradation, constructing maximum likelihood estimates (MLEs) for the unknown parameters and related reliability characteristics by combining analytical and numerical methods.
Abstract: This paper presents a reliability modeling and analysis framework for load-sharing systems with identical components subject to continuous degradation. It is assumed that the components in the system suffer from degradation through an additive impact under increased workload caused by consecutive failures. A log-linear link function is used to describe the relationship between the degradation rate and load stress levels. By assuming that the component degradation is well modeled by a step-wise drifted Wiener process, we construct maximum likelihood estimates (MLEs) for the unknown parameters and related reliability characteristics by combining analytical and numerical methods. Approximate initial guesses are proposed to lessen the computational burden in numerical estimation. The approximate distribution of the MLEs is given in the form of a multivariate normal distribution with the aid of the Fisher information. Alternative confidence intervals are provided by bootstrapping methods. A simulation study with various sample sizes and inspection intervals is presented to analyze the estimation accuracy. Finally, the proposed approach is illustrated by track degradation data from an application example.
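
For a single unit at one stress level, the Wiener-increment likelihood gives closed-form MLEs, which is the core of the estimation step described above (the paper additionally handles the load-sharing structure with a log-linear link and step-wise drift); the data below are simulated.

```python
import numpy as np

# Drifted Wiener degradation path observed at equally spaced inspections:
# increments are i.i.d. Normal(mu*dt, sigma^2*dt), so the MLEs have closed form.
rng = np.random.default_rng(1)
mu_true, sigma_true, dt, n = 0.5, 0.2, 1.0, 200
increments = rng.normal(mu_true * dt, sigma_true * np.sqrt(dt), size=n)

mu_hat = increments.mean() / dt
sigma2_hat = ((increments - mu_hat * dt) ** 2).mean() / dt
print(f"mu_hat={mu_hat:.3f}  sigma_hat={np.sqrt(sigma2_hat):.3f}")

# The Fisher information gives an approximate normal distribution of the MLEs,
# e.g., var(mu_hat) ~= sigma^2 / (n * dt).
print("approx std err of mu_hat:", np.sqrt(sigma2_hat / (n * dt)))
```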

Journal ArticleDOI
TL;DR: A new bivariate degradation model based on the Wiener process is proposed that can simultaneously describe the common factor affecting the degradation of the two performance characteristics and the unit-to-unit variation, and the reliability function of the system and the remaining useful life of the system have analytic forms.
Abstract: Modern products are usually designed with high reliability and have complex structures, and degradation analysis of the complex systems with two or multiple performance characteristics is still a challenge. In this paper, we propose a new bivariate degradation model based on the Wiener process. There are three main merits of the proposed model: it can describe the common factor affecting the degradation of the two performance characteristics and unit-to-unit variation simultaneously, the reliability functions of the system and the remaining useful life of the system have analytic forms, and the model parameters and the missing values can be estimated by the Bayesian method and data augmentation. The simulation study and data analysis show that the Bayesian method and the proposed model have satisfactory performance.

Journal ArticleDOI
TL;DR: ML-Driven, an approach based on machine learning and an evolutionary algorithm to automatically detect holes in WAFs that let SQL injection attacks bypass them, is presented.
Abstract: Web application firewalls (WAFs) are an essential protection mechanism for online software systems. Because of the relentless flow of new kinds of attacks as well as their increased sophistication, WAFs have to be updated and tested regularly to prevent attackers from easily circumventing them. In this paper, we focus on testing WAFs for SQL injection attacks, but the general principles and strategy we propose can be adapted to other contexts. We present ML-Driven, an approach based on machine learning and an evolutionary algorithm to automatically detect holes in WAFs that let SQL injection attacks bypass them. Initially, ML-Driven automatically generates a diverse set of attacks and submits them to the system being protected by the target WAF. Then, ML-Driven selects attacks that exhibit patterns (substrings) associated with bypassing the WAF and evolves them to generate new successful bypassing attacks. Machine learning is used to incrementally learn attack patterns from previously generated attacks according to their testing results, i.e., if they are blocked or bypass the WAF. We implemented ML-Driven in a tool and evaluated it on ModSecurity, a widely used open-source WAF, and a proprietary WAF protecting a financial institution. Our empirical results indicate that ML-Driven is effective and efficient at generating SQL injection attacks bypassing WAFs and identifying attack patterns.

Journal ArticleDOI
TL;DR: Results clearly show that the best solution depends on the deployment scenario and the class of vulnerability being detected, thereby highlighting the importance of these aspects in the design of the benchmark and of future static analysis tools.
Abstract: Static analysis tools are recurrently used by developers to search for vulnerabilities in the source code of web applications. However, distinct tools provide different results depending on factors such as the complexity of the code under analysis and the application scenario, thus missing some of the vulnerabilities while reporting false problems. Benchmarks can be used to assess and compare different systems or components; however, existing benchmarks have strong representativeness limitations, disregarding the specificities of the environment where the tools under benchmarking will be used. In this paper, we propose a benchmark for assessing and comparing static analysis tools in terms of their capability to detect security vulnerabilities. The benchmark considers four real-world development scenarios, including workloads composed of real web applications with different goals and constraints, ranging from low budget to high-end applications. Our benchmark was implemented and assessed experimentally using a set of 134 WordPress plugins, which served as the basis for the evaluation of five free PHP static analysis tools. Results clearly show that the best solution depends on the deployment scenario and the class of vulnerability being detected, thereby highlighting the importance of these aspects in the design of the benchmark and of future static analysis tools.

Journal ArticleDOI
TL;DR: A method to improve the robustness of gas turbine gas-path fault diagnosis against sensor faults is proposed for the typical nonlinear GPA method; it can effectively and accurately detect and isolate degraded gas-path components as well as sensors, and further quantify the component degradations.
Abstract: The gas-path analysis (GPA) method has been widely used to monitor gas turbine engine health status and has become one of the key techniques supporting a condition-oriented maintenance strategy. The GPA method (especially nonlinear GPA) can easily obtain the magnitudes of the gas-path component faults. Usually, it is essential to use correct measurement information to obtain a correct fault signature and thus produce accurate gas-path diagnostic results. However, gas-path components as well as sensors may degrade or even fail during gas turbine operations. The degraded sensors may produce significant measurement biases, which do not follow the Gaussian distribution, and misleading diagnostic results may be obtained. In order to solve this problem, a method to improve the robustness of gas turbine gas-path fault diagnosis against sensor faults is proposed for the typical nonlinear GPA method. The proposed method includes two steps: first, to locate suspicious degraded sensors based on the Gaussian data reconciliation principle for all the gas-path measurements, and second, to detect, isolate, and quantify the degradation rate of major gas-path components based on an extended nonlinear GPA method. The proposed method can effectively and accurately detect and isolate degraded gas-path components as well as sensors, and further quantify the component degradations.

Journal ArticleDOI
TL;DR: The relationship between the $g$-restricted connectivity and the $g$-good-neighbor fault diagnosability of general regular networks is established, first under the PMC model and second under the MM* model.
Abstract: The $g$-restricted connectivity ($g$-RC) is the minimum size of a vertex set of a network whose deletion disconnects the network such that each remaining vertex has at least $g$ neighbors in its respective component. The $g$-RC is a deterministic indicator of the tolerability of a network with failing processors. The $g$-good-neighbor fault diagnosability ($g$-GNFD) is the largest size of a set of correctly identified faulty vertices in a network such that any good vertex has no fewer than $g$ good neighbors. This paper establishes the relationship between the $g$-RC and the $g$-GNFD of general regular networks, first under the PMC model and second under the MM* model. Moreover, this paper directly gives the $g$-GNFD of some well-known special networks by their $g$-RC and the proposed relationship.
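
A small check of the $g$-restricted condition on a concrete regular network, using networkx and a 4-dimensional hypercube as the example; the candidate cut is only illustrative.

```python
import networkx as nx

def is_g_restricted_cut(G, cut, g):
    """True if deleting `cut` disconnects G and every remaining vertex keeps
    at least g neighbors inside its own component."""
    H = G.copy()
    H.remove_nodes_from(cut)
    if nx.is_connected(H):
        return False                       # not a vertex cut at all
    return all(H.degree(u) >= g for u in H)

Q4 = nx.hypercube_graph(4)                 # 4-regular network with 16 vertices
v = (0, 0, 0, 0)
cut = set(Q4.neighbors(v))                 # deleting all neighbors isolates v
print("0-restricted cut:", is_g_restricted_cut(Q4, cut, 0))   # True
print("1-restricted cut:", is_g_restricted_cut(Q4, cut, 1))   # False: v keeps no neighbor
```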

Journal ArticleDOI
TL;DR: This paper presents a new general method for evaluating the reliability function and the mean residual life of degrading systems subject to condition monitoring and random failure; the method is computationally efficient and embeddable to support real-time reliability assessment of systems subject to condition monitoring for developing the optimal maintenance policy.
Abstract: This paper presents a new general method for evaluating the reliability function and the mean residual life of degrading systems subject to condition monitoring and random failure. In the proposed method, the degradation process of the system is characterized by a continuous-time Markov chain, which is then incorporated into the proportional hazards model as a stochastic covariate process to describe the hazard rate of the time to system failure. Unlike the conventional method based on conditioning, which is applicable only for a small number of degradation states, the proposed method is capable of tackling the case with a general number of degradation states. Using the developed approximation techniques, closed-form formulas for related reliability characteristics are obtained in terms of the appropriate transition probability matrix. The proposed evaluation algorithm is computationally efficient and embeddable to support real-time reliability assessment of the system subject to condition monitoring for developing the optimal maintenance policy. The effectiveness and the accuracy of the method are validated by a numerical study and compared with the conventional method. A general case where the degradation path can be discretized up to ten states is also studied to illustrate the appealing general features.

Journal ArticleDOI
TL;DR: This work revisits the National Institute of Standards and Technology hash function competition, which was used to develop the SHA-3 standard, applies a new testing strategy to all available reference implementations, and develops four tests motivated by the cryptographic properties that a hash function should satisfy.
Abstract: Cryptographic hash functions are security-critical algorithms with many practical applications, notably in digital signatures. Developing an approach to test them can be particularly difficult, and bugs can remain unnoticed for many years. We revisit the National Institute of Standards and Technology hash function competition, which was used to develop the SHA-3 standard, and apply a new testing strategy to all available reference implementations. Motivated by the cryptographic properties that a hash function should satisfy, we develop four tests. The Bit-Contribution Test checks if changes in the message affect the hash value, and the Bit-Exclusion Test checks that changes beyond the last message bit leave the hash value unchanged. We develop the Update Test to verify that messages are processed correctly in chunks, and then use combinatorial testing methods to reduce the test set size by several orders of magnitude while retaining the same fault-detection capability. Our tests detect bugs in 41 of the 86 reference implementations submitted to the SHA-3 competition, including the rediscovery of a bug in all submitted implementations of the SHA-3 finalist BLAKE. This bug remained undiscovered for seven years, and is particularly serious because it provides a simple strategy to modify the message without changing the hash value returned by the implementation. We detect these bugs using a fully automated testing approach.
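
A minimal version of the Bit-Contribution Test, run against Python's built-in SHA-3 rather than a competition reference implementation: flipping any single message bit must change the digest. (The Bit-Exclusion and Update Tests need an interface that exposes bit lengths or chunked updates of the implementation under test, so they are omitted from this sketch.)

```python
import hashlib

def bit_contribution_test(hash_fn, msg: bytes) -> bool:
    """Every single-bit flip inside the message must change the hash value."""
    ref = hash_fn(msg).digest()
    for i in range(len(msg) * 8):
        flipped = bytearray(msg)
        flipped[i // 8] ^= 1 << (i % 8)          # flip bit i
        if hash_fn(bytes(flipped)).digest() == ref:
            return False                         # a bit that does not contribute: a bug
    return True

msg = b"IEEE Transactions on Reliability"
print("SHA3-256 passes bit-contribution test:", bit_contribution_test(hashlib.sha3_256, msg))
```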

Journal ArticleDOI
TL;DR: A novel anonymous ECC-based self-certified two-factor key management scheme is proposed that not only provides the desired security features, but also has much better efficiency than several recently published schemes, such as the one presented by Tseng et al.
Abstract: Investigating the literature reveals that key management protocols play a vital role in protecting the security and privacy of medical data in telecare medical information systems. Recently, Tseng et al. proposed an interesting elliptic curve cryptosystem (ECC) based self-certified key management scheme that can yield a secure channel for the communications of secure sensors (cluster members) and the access point (cluster head). After careful consideration, we found that their scheme suffers from cluster head impersonation, replay, and key replicating attacks. In addition, in the secure session phase, there are some errata and a point multiplication that is undefined in the ECC and hence mathematically incorrect. Finally, and importantly, their scheme has been adopted for a similar application by other authors with the same problem. Therefore, in this paper, we first elaborate on the existing errata and security threats. Second, we propose a modified version, which is free from the weaknesses of their scheme. Eventually, we propose a novel anonymous ECC-based self-certified two-factor key management scheme that not only provides the desired security features, but also has much better efficiency than several recently published schemes, such as the one presented by Tseng et al. Our formal security verification and proof, together with the efficiency analysis, support our claim.

Journal ArticleDOI
TL;DR: A new metric that combines network reliability with network resilience is presented to measure reliability/survivability effectively for capacitated networks.
Abstract: In telecommunication network design problems, survivability and reliability are often used to evaluate quality of service while usually ignoring link capacity. In this paper, a new metric that combines network reliability with network resilience is presented to measure reliability/survivability effectively for capacitated networks. Capacitated resilience is compared with well-known network reliability/survivability metrics ($k$-terminal reliability, all-terminal reliability, traffic efficiency, and $k$-connectivity), and its benefits and computational efficiency are discussed. An application is shown using heterogeneous wireless networks (HetNets). With the growing use of new telecommunication technologies such as 4G and wireless hotspots, HetNets are gaining more attention. The source of heterogeneity of a HetNet can either be the differences in nodes (such as transmission ranges, failure rates, and energy levels) or the differences in services offered in the network (such as GSM and WiFi).

Journal ArticleDOI
TL;DR: Metrics for performance evaluation are suggested, showing that the model can be used to schedule and optimize the cost of battery replacement, and the paper explains why it is a good choice not to add time-related variables to the model.
Abstract: Maintenance planning is important in the automotive industry as it allows fleet owners or regular customers to avoid unexpected failures of components. One cause of unplanned stops of heavy-duty trucks is failure of the lead–acid starter battery. High availability of the vehicles can be achieved by changing the battery frequently, but such an approach is expensive both due to the frequent visits to a workshop and due to the component cost. Here, a data-driven method based on the random survival forest (RSF) is proposed for predicting the reliability of the batteries. The dataset available for the study, covering more than 50 000 trucks, has two important properties. First, it does not contain measurements related directly to battery health; second, there are no time series of measurements for every vehicle. In this paper, the RSF method is used to predict the reliability function for a particular vehicle using data from the fleet of vehicles, given that only one set of measurements per vehicle is available. A theory of confidence bands for the RSF method is developed, which is an extension of an existing technique for variance estimation in the random forest method. Adding confidence bands to the RSF method gives an engineer the opportunity to evaluate the confidence of the model prediction. Some aspects of the confidence bands are considered: their asymptotic behavior and their usefulness in model selection. The problem of including time-related variables is addressed in this paper, with an argument for why it is a good choice not to add them to the model. Metrics for performance evaluation are suggested, which show that the model can be used to schedule and optimize the cost of battery replacement. The approach is illustrated extensively using a real-life truck data case study.
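
A sketch of the RSF idea on synthetic, censored battery-lifetime data with one feature vector per vehicle, using the third-party scikit-survival package (an assumption; the paper uses its own RSF implementation and adds confidence bands, which are not reproduced here).

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest   # scikit-survival package
from sksurv.util import Surv

rng = np.random.default_rng(0)
n, p = 1000, 8
X = rng.normal(size=(n, p))                         # usage/configuration features per truck
risk = np.exp(0.6 * X[:, 0] - 0.4 * X[:, 1])        # synthetic effect of two features
lifetime = rng.exponential(3.0 / risk)              # years until battery failure
censor = rng.uniform(0.5, 4.0, size=n)              # observation window per vehicle
time = np.minimum(lifetime, censor)
event = lifetime <= censor                          # True = failure observed

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15, random_state=0)
rsf.fit(X, Surv.from_arrays(event=event, time=time))

new_vehicle = rng.normal(size=(1, p))
surv_fn = rsf.predict_survival_function(new_vehicle)[0]
for t, s in zip(surv_fn.x[::25], surv_fn.y[::25]):  # predicted reliability curve
    print(f"R({t:4.2f} years) = {s:.3f}")
```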

Journal ArticleDOI
TL;DR: An ADT optimization approach for products suffering from both degradation failures and random shock failures is proposed, and the results show that the optimal ADT plans in the presence of random shocks differ significantly from the traditional ADT plans.
Abstract: Accelerated degradation tests (ADT) have been widely used to assess the reliability of products with long lifetimes. For many products, environmental stress not only accelerates their degradation rate but also elevates the probability of traumatic shocks. When random traumatic shocks occur during an ADT, it is possible that the degradation measurements cannot be taken afterward, which brings challenges to reliability assessment. In this paper, we propose an ADT optimization approach for products suffering from both degradation failures and random shock failures. The degradation path is modeled by a Wiener process. Under various stress levels, the arrival process of random shocks is assumed to follow a nonhomogeneous Poisson process. Parameters of the acceleration models for both failure modes need to be estimated from the ADT. Three common optimality criteria based on the Fisher information are considered and compared to optimize the ADT plan under a given number of test units and a predetermined test duration. Optimal two- and three-level ADT plans are obtained by numerical methods. We use the general equivalence theorems to verify the global optimality of the ADT plans. A numerical example is presented to illustrate the proposed methods. The results show that the optimal ADT plans in the presence of random shocks differ significantly from the traditional ADT plans. Sensitivity analysis is carried out to study the robustness of the optimal ADT plans with respect to changes in the planning inputs.

Journal ArticleDOI
TL;DR: The modeling of the effects of correlated, probabilistic competing failures in the reliability analysis of nonrepairable binary-state systems through a combinatorial procedure is demonstrated using a case study of a relay-assisted wireless body area network system in healthcare.
Abstract: A combinatorial system reliability modeling method is proposed to consider the effects of correlated probabilistic competing failures caused by the probabilistic–functional-dependence (PFD) behavior. PFD exists in many real-world systems, such as sensor networks and computer systems, where functions of some system components (referred to as dependent components) rely on functions of other components (referred to as triggers) with certain probabilities. Competitions exist in the time domain between a trigger failure and propagated failures of corresponding dependent components, causing a twofold effect. On one hand, if the trigger failure happens first, an isolation effect can take place preventing the system function from being compromised by further dependent component failures. On the other hand, if any propagated failure of the dependent components happens before the trigger failure, the propagation effect takes place and can cause the entire system to fail. In addition, correlations may exist due to the shared trigger or dependent components, which make system reliability modeling more challenging. This paper models effects of correlated, probabilistic competing failures in reliability analysis of nonrepairable binary-state systems through a combinatorial procedure. The proposed method is demonstrated using a case study of a relay-assisted wireless body area network system in healthcare. Correctness of the method is verified using Monte Carlo simulations.
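
A Monte Carlo cross-check of the isolation-versus-propagation competition for a single trigger and a single dependent component; all rates, the dependence probability, and the mission time are illustrative, and the paper's combinatorial procedure and case study are not reproduced.

```python
import random

random.seed(7)
LAMBDA_TRIG = 1 / 500.0      # trigger failure rate
LAMBDA_PROP = 1 / 800.0      # propagated-failure rate of the dependent component
P_FDEP = 0.9                 # probability that the functional dependence exists
TAU = 200.0                  # mission time

def system_fails_once():
    t_trig = random.expovariate(LAMBDA_TRIG)
    t_prop = random.expovariate(LAMBDA_PROP)
    depends = random.random() < P_FDEP
    if depends and t_trig < t_prop:
        return False                     # isolation effect: trigger failed first
    return t_prop <= TAU                 # propagation effect: system-level failure

n = 200_000
p_fail = sum(system_fails_once() for _ in range(n)) / n
print(f"P(system failure by propagation within mission) ~= {p_fail:.4f}")
```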

Journal ArticleDOI
Guoqi Xie1, Yuekun Chen1, Yan Liu1, Renfa Li1, Keqin Li1 
TL;DR: This study solves the problem of minimizing the development cost of a distributed automotive function while satisfying its reliability goal during the design phase by presenting two heuristic algorithms, reliability calculation of scheme (RCS) and minimizing development cost with reliability goal (MDCRG).
Abstract: ISO 26262 is a functional safety standard specifically made for automotive systems, in which the automotive safety integrity level (ASIL) is the representation of the criticality level. Recently, most studies have used ASIL decomposition to reduce the development cost of automotive functions. However, these studies have not paid special attention to the problem that the reliability goal may not be satisfied when ASIL decomposition is performed. In this study, we solve the problem of minimizing the development cost of a distributed automotive function while satisfying its reliability goal during the design phase by presenting two heuristic algorithms, reliability calculation of scheme (RCS) and minimizing development cost with reliability goal (MDCRG). We first use RCS to calculate the reliability value of each ASIL decomposition scheme; then, MDCRG is used to select the scheme with the minimum development cost while satisfying the reliability goal. Real-life benchmark and simulated functions based on real parameter values are used in experiments, and the results show the effectiveness of the proposed algorithms.