
Showing papers in "IEEE Transactions on Reliability in 2020"


Journal ArticleDOI
TL;DR: Experimental results demonstrate the effectiveness of the proposed hybrid prognostics approach in improving the accuracy and convergence of RUL prediction of rolling element bearings.
Abstract: Remaining useful life (RUL) prediction of rolling element bearings plays a pivotal role in reducing costly unplanned maintenance and increasing the reliability, availability, and safety of machines. This paper proposes a hybrid prognostics approach for RUL prediction of rolling element bearings. First, degradation data of bearings are sparsely represented using relevance vector machine regressions with different kernel parameters. Then, exponential degradation models coupled with the Frechet distance are employed to estimate the RUL adaptively. The proposed approach is evaluated using the vibration data from accelerated degradation tests of rolling element bearings and the public PRONOSTIA bearing datasets. Experimental results demonstrate the effectiveness of the proposed approach in improving the accuracy and convergence of RUL prediction of rolling element bearings.
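
The abstract describes the degradation-model half of the pipeline in words only; as a rough, hypothetical sketch (not the authors' RVM-based implementation), the snippet below fits an exponential degradation model to a synthetic health indicator and extrapolates it to a failure threshold to obtain an RUL estimate. The model form, threshold, and data are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): fit an exponential degradation
# model h(t) = a * exp(b * t) to a health indicator and extrapolate it to a
# failure threshold to estimate the remaining useful life (RUL).
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, a, b):
    return a * np.exp(b * t)

rng = np.random.default_rng(0)
t = np.arange(0, 100.0)                                   # hypothetical time stamps (hours)
h = 0.05 * np.exp(0.03 * t) * (1 + 0.05 * rng.standard_normal(t.size))  # synthetic indicator

(a, b), _ = curve_fit(exp_model, t, h, p0=(0.01, 0.01))

threshold = 2.0                                           # hypothetical failure threshold
t_now = t[-1]
t_fail = np.log(threshold / a) / b                        # solve a * exp(b * t) = threshold
rul = max(t_fail - t_now, 0.0)
print(f"Estimated RUL: {rul:.1f} hours")
```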

685 citations


Journal ArticleDOI
TL;DR: The main contribution of this paper is applying entropy-based fault classification methods to establish a benchmark analysis of entire CWRU datasets, aiming to provide a proper assessment of any new classification methods.
Abstract: Fault diagnosis of bearings using classification techniques plays an important role in industrial applications, and, hence, has received increasing attention. Recently, significant efforts have been made to develop various methods for bearing fault classification, and the application of Case Western Reserve University (CWRU) data for validation has become a standard reference to test fault classification algorithms. However, a systematic study for evaluating bearing fault classification performance using the CWRU data is still lacking. This paper aims to provide a comprehensive benchmark analysis of the CWRU data using various entropy and classification methods. The main contribution of this paper is applying entropy-based fault classification methods to establish a benchmark analysis of the entire CWRU dataset, aiming to provide a proper assessment of any new classification methods. Recommendations are provided for the selection of the CWRU data to aid in testing new fault classification algorithms, which will enable researchers to develop and evaluate various diagnostic algorithms. In the end, the comparison results and discussion are reported as a useful baseline for future research.
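
For readers who want a concrete starting point, the sketch below illustrates one entropy-based classification pipeline of the kind benchmarked here: a permutation-entropy feature fed to an SVM. It is not the paper's exact setup; the signals and labels are synthetic placeholders standing in for CWRU vibration segments.

```python
# Hedged sketch: permutation entropy as a single feature for fault
# classification, in the spirit of entropy-based benchmarks on CWRU-style
# vibration data. Signals and labels below are synthetic placeholders.
import numpy as np
from itertools import permutations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal."""
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))
        counts[patterns.index(pattern)] += 1
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p)) / np.log(len(patterns))

rng = np.random.default_rng(1)
healthy = [rng.standard_normal(2048) for _ in range(30)]
faulty = [rng.standard_normal(2048) + np.sin(0.5 * np.arange(2048)) for _ in range(30)]

X = np.array([[permutation_entropy(s)] for s in healthy + faulty])
y = np.array([0] * 30 + [1] * 30)
print("Cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```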

104 citations


Journal ArticleDOI
TL;DR: This paper focuses on improving reliable data transmission with high security in the MANET using an optimization technique and demonstrates that the MANET with optimization techniques achieves a high transmission rate and improves reliable data security.
Abstract: In recent years, the need for high security and reliability in wireless networks has increased tremendously. To provide high security in reliable networks, mobile ad hoc networks (MANETs) play a leading role, owing to characteristics such as an open network boundary, a distributed architecture, and fast, straightforward deployment. As the technology expands, the MANET faces a number of security challenges due to its self-configuration and maintenance capabilities. Moreover, traditional security solutions for wired networks are ineffective and inefficient because of the highly dynamic and resource-constrained nature of MANETs. In this paper, the researchers focus on improving reliable data transmission with high security in the MANET using an optimization technique. In the proposed MANET system, the nodes are clustered by utilizing an energy-efficient routing protocol. Then, the modified discrete particle swarm optimization is used to select the optimal cluster head. A secured routing protocol and a signcryption model can be used to improve the transmission security of the reliable MANET. The signcryption algorithm encrypts the digital signature, which can enhance the overall efficiency and confidentiality. The security-based analysis is performed on the basis of packet delivery ratio, energy consumption, network lifetime, and throughput. Finally, the result demonstrates that the MANET with optimization techniques achieves a high transmission rate and improves reliable data security.

93 citations


Journal ArticleDOI
TL;DR: The results demonstrate that the proposed PEM has a higher accuracy and efficiency to assess the positioning accuracy reliability of industrial robots.
Abstract: The uncertain variables of the link dimensions and joint clearances, whose deviation is caused by manufacturing and assembling errors, have a considerable influence on the positioning accuracy of industrial robots. Understanding how these uncertain variables affect the positioning accuracy of industrial robots is very important for selecting appropriate parameters during the design process. In this paper, the positioning accuracy reliability of industrial robots is analyzed considering the influence of uncertain variables. First, the kinematic models of industrial robots are established based on the Denavit–Hartenberg method, in which the link lengths and joint rotation angles are treated as uncertain variables. Second, the Sobol’ method is used to analyze the sensitivity of the uncertain variables with respect to the positioning accuracy of industrial robots, by which the sensitive variables are determined to perform the reliability analysis. Finally, in view of the sensitive variables, the first four moments and the probability density function of the manipulator's positioning point are assessed by the point estimation method (PEM) in three examples. The Monte Carlo simulation method, the maximum entropy with fractional order moments (ME-FM) method, and the experimental method are also performed as comparative methods. All the results demonstrate that the proposed PEM has higher accuracy and efficiency in assessing the positioning accuracy reliability of industrial robots.
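
As a hedged illustration of how DH-based kinematics and uncertain parameters combine, the sketch below propagates small random deviations of link lengths and joint angles of a hypothetical planar two-link arm to the end-effector position via plain Monte Carlo, which is the comparison baseline mentioned in the abstract (not the paper's PEM).

```python
# Hedged sketch: propagate uncertainty in link lengths and joint angles of a
# planar 2-link arm to the end-effector position via Monte Carlo, the baseline
# the paper compares its point estimation method (PEM) against.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

rng = np.random.default_rng(2)
n = 20_000
# Hypothetical nominal parameters with small manufacturing/assembly deviations.
a1 = rng.normal(0.40, 0.0005, n)                         # link lengths (m)
a2 = rng.normal(0.30, 0.0005, n)
q1 = rng.normal(np.deg2rad(30), np.deg2rad(0.05), n)     # joint angles (rad)
q2 = rng.normal(np.deg2rad(45), np.deg2rad(0.05), n)

positions = np.empty((n, 2))
for i in range(n):
    T = dh_transform(q1[i], 0.0, a1[i], 0.0) @ dh_transform(q2[i], 0.0, a2[i], 0.0)
    positions[i] = T[:2, 3]

nominal = positions.mean(axis=0)
err = np.linalg.norm(positions - nominal, axis=1)
print("P(position error < 1 mm):", np.mean(err < 1e-3))
```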

85 citations


Journal ArticleDOI
TL;DR: The reviewed papers are classified into three major areas based on whether the physics of failure knowledge is incorporated for prognostics, i.e., the data-driven, physics-based, and hybrid prognostic methods.
Abstract: Due to the advancements in sensing technologies and computational capabilities, system health assessment and prognostics have been extensively investigated in the literature. Industry has adopted and implemented many advanced system prognostic applications. This article reviews recent research advances and applications in prognostics modeling methods for engineering systems. The reviewed papers are classified into three major areas based on whether the physics of failure knowledge is incorporated for prognostics, i.e., the data-driven, physics-based, and hybrid prognostic methods. The technical merits and limitations of each prognostic method are discussed. This review also summarizes research and technological challenges in engineering system prognostics, and points out future research directions.

81 citations


Journal ArticleDOI
TL;DR: A novel deep learning based fusion prognostic method for remaining useful life (RUL) prediction of engineering systems that strategically combines the advantages of bidirectional long short-term memory (BLSTM) networks and particle filter method and meanwhile mitigates their limitations.
Abstract: This article proposes a novel deep learning based fusion prognostic method for remaining useful life (RUL) prediction of engineering systems. The proposed framework strategically combines the advantages of bidirectional long short-term memory (BLSTM) networks and particle filter (PF) method and meanwhile mitigates their limitations. In the proposed framework, BLSTM networks are applied for further extracting, selecting, and fusing discriminative features to form predicted measurements of the identified degradation indicator. Simultaneously, PF is utilized to estimate system state and identify unknown parameters of the degradation model for RUL prediction. Hence, the proposed fusion prognostic framework has two innovative features: first, the preprocessed features from raw multisensor data can be intelligently extracted, selected, and fused by the BLSTM networks without specific domain knowledge of feature engineering; second, the predicted measurements with uncertainties obtained from the BLSTM networks will be properly represented by the PF in a transparent manner. Moreover, the developed approach is experimentally validated with machining tool wear tests on a computer numerical control (CNC) milling machine. In addition, the popular techniques employed in this field are also investigated to compare with the proposed method.
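
The particle-filter side of such a fusion framework can be sketched compactly. The code below is a generic bootstrap particle filter that jointly tracks a degradation state and an unknown growth rate of a simple exponential model; the synthetic "measurements" merely stand in for the BLSTM-predicted indicator values, and all model choices are assumptions for illustration.

```python
# Hedged sketch of the particle-filter half of such a fusion framework: a
# bootstrap particle filter jointly tracks the degradation state x_k and an
# unknown growth rate b of a simple model x_k = x_{k-1} * exp(b * dt).
# The "measurements" stand in for the BLSTM-predicted indicator values.
import numpy as np

rng = np.random.default_rng(3)
T, dt = 60, 1.0
true_b = 0.05
truth = 0.1 * np.exp(true_b * dt * np.arange(T))
meas = truth + 0.02 * rng.standard_normal(T)               # synthetic measurements

n_p = 2000
x = 0.1 + 0.01 * rng.standard_normal(n_p)                  # state particles
b = rng.uniform(0.0, 0.1, n_p)                             # parameter particles
sigma_meas = 0.02

for z in meas:
    # Propagate: state evolves by the exponential model plus process noise;
    # the unknown parameter gets a small random walk (artificial dynamics).
    b = b + 0.001 * rng.standard_normal(n_p)
    x = x * np.exp(b * dt) + 0.005 * rng.standard_normal(n_p)
    # Weight by the measurement likelihood and resample (multinomial).
    w = np.exp(-0.5 * ((z - x) / sigma_meas) ** 2) + 1e-12
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)
    x, b = x[idx], b[idx]

print(f"Estimated growth rate b ~ {b.mean():.3f} (true {true_b})")
```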

61 citations


Journal ArticleDOI
TL;DR: This article introduces a highly reliable and low-complexity image compression scheme using neighborhood correlation sequence (NCS) algorithm that increases the compression performance and decreases the energy utilization of the sensor nodes with high fidelity.
Abstract: Recently, the advancements in the field of wireless technologies and micro-electro-mechanical systems have led to the development of potential applications in wireless sensor networks (WSNs). The visual sensors in WSNs have a significant impact on computer vision based applications such as pattern recognition and image restoration, and they generate a massive quantity of multimedia data. Since transmission of images consumes more computational resources, various image compression techniques have been proposed. However, most existing image compression techniques are not applicable to sensor nodes due to their limitations on energy, bandwidth, memory, and processing capabilities. In this article, we introduce a highly reliable and low-complexity image compression scheme using the neighborhood correlation sequence (NCS) algorithm. The NCS algorithm performs a bit reduction operation, and the result is then encoded by a codec (such as PPM, Deflate, or the Lempel–Ziv–Markov chain algorithm) to further compress the image. The proposed NCS algorithm increases the compression performance and decreases the energy utilization of the sensor nodes with high fidelity. Moreover, it achieves a minimum end-to-end delay of 1074.46 ms at an average bit rate of 4.40 bpp and a peak signal-to-noise ratio of 48.06 on the applied test images. Compared with state-of-the-art methods, the proposed method maintains a better tradeoff between compression efficiency and reconstructed image quality.
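
To make the "exploit neighborhood correlation, then entropy-code" idea concrete (this is not the NCS algorithm itself), the sketch below delta-encodes neighboring pixels of a synthetic image and compresses the residuals with Deflate via zlib, comparing against compressing the raw pixels.

```python
# Hedged sketch of the general idea (not the NCS algorithm): exploit the
# correlation between neighboring pixels by storing differences, then hand the
# residuals to a general-purpose codec (Deflate via zlib) for entropy coding.
import zlib
import numpy as np

rng = np.random.default_rng(6)
# Synthetic smooth 8-bit "image" with mild noise, a stand-in for sensor data.
x = np.linspace(0, 4 * np.pi, 256)
img = (127 + 100 * np.sin(x)[None, :] * np.cos(x)[:, None]
       + rng.integers(-3, 4, (256, 256))).astype(np.uint8)

# Row-wise differences of neighboring pixels, stored modulo 256 so the
# residual image is lossless and the same size as the raw image.
residuals = np.diff(img.astype(np.int16), axis=1,
                    prepend=img[:, :1].astype(np.int16))
residual_u8 = (residuals % 256).astype(np.uint8)

print("raw      -> deflate:", len(zlib.compress(img.tobytes(), 9)), "bytes")
print("residual -> deflate:", len(zlib.compress(residual_u8.tobytes(), 9)), "bytes")
```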

60 citations


Journal ArticleDOI
TL;DR: A prediction framework based on nonlinear-drifted fractional Brownian motion with multiple hidden state variables is put forward to estimate RUL, and demonstrates greater precision in the RUL prediction.
Abstract: Lithium-ion rechargeable batteries are widely used in various electronic products and equipment due to their immense benefits in power supply. Exact remaining useful life (RUL) prediction of lithium-ion batteries has shown excellent results in preventing the severe economic and security consequences incurred when necessary power levels cannot be provided. Recently, nonlinear-drifted fractional Brownian motion has attracted considerable attention in RUL prediction, since its first hitting time distribution can be approximated by a weak convergence theorem and a time-space transformation. However, previous RUL prediction methods based on fractional Brownian motion only considered the current state measurement. In this paper, a prediction framework based on nonlinear-drifted fractional Brownian motion with multiple hidden state variables is put forward to estimate the RUL. Specifically, all the parameters of the nonlinear function are defined as specific hidden state variables of the lithium-ion battery degradation model, and all the state measurements are used to estimate the posterior distribution of the multiple hidden state variables by the unscented particle filter algorithm. Four sets of lithium-ion battery degradation data provided by the NASA Ames Research Center are used to validate the proposed prediction framework. In a comparison study with other methods, the proposed prediction framework demonstrates greater precision in RUL prediction.
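
As a small illustration of the stochastic ingredient involved (not the authors' nonlinear-drifted framework or the unscented particle filter), the sketch below simulates standard fractional Brownian motion paths directly from the exact covariance function by Cholesky factorization.

```python
# Hedged sketch: simulate standard fractional Brownian motion (fBm) paths from
# the exact covariance C(s, t) = 0.5 * (s^2H + t^2H - |t - s|^2H) via Cholesky
# factorization. This is only the noise ingredient; the paper's framework adds
# a nonlinear drift and an unscented particle filter on top.
import numpy as np

def fbm_path(n_steps, hurst, T=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    t = np.linspace(T / n_steps, T, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # jitter for stability
    return t, L @ rng.standard_normal(n_steps)

t, path = fbm_path(500, hurst=0.7, rng=np.random.default_rng(4))
print("fBm value at T:", path[-1])
```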

53 citations


Journal ArticleDOI
Yihai He1, Zhaoxiang Chen1, Yixiao Zhao1, Xiao Han1, Di Zhou1 
TL;DR: A mission reliability evaluation method for fuzzy multistate manufacturing systems based on an extended stochastic flow network (ESFN) and the connotation of mission reliability is further defined.
Abstract: Due to the artificial division of a machine performance state and unpredictable external working conditions, the performance and state transition strength of multistate machines (that is, their fuzzy multistate characteristic) cannot be accurately identified. Moreover, existing research on the reliability evaluation of multistate systems does not consider the operating mechanism of a manufacturing system. Therefore, this article proposes a mission reliability evaluation method for fuzzy multistate manufacturing systems based on an extended stochastic flow network (ESFN). First, from the relationship between the production task execution state, production machine degradation state, and produced product quality state, the operation mechanism of a fuzzy multistate manufacturing system is proposed, and the connotation of mission reliability is further defined. Second, an ESFN model of the multistate manufacturing system based on the operating mechanism of a fuzzy multistate manufacturing system is established. Third, on the basis of a fuzzy Markov model, the mission reliability evaluation method of the fuzzy multistate manufacturing system is proposed. Finally, a case study of the manufacturing system for producing a ferrite phase shifting unit is presented to verify the proposed approach, and a sensitivity analysis is conducted to analyze the impacts of the parameters in the proposed model.

53 citations


Journal ArticleDOI
TL;DR: The developed IDCKMS is superior to the other three methods in the precision and efficiency of modeling and simulation, from the model-fitting features and simulation performance perspectives.
Abstract: The probabilistic design of complex structures usually involves numerous components, multiple disciplines, nonlinearity, and transients and, thus, requires a large number of simulations as well. To enhance the modeling efficiency and simulation performance for the dynamic probabilistic analysis of a multicomponent structure, we propose an improved decomposed-coordinated Kriging modeling strategy (IDCKMS), by integrating the decomposed-coordinated (DC) strategy, the extremum response surface method (ERSM), a genetic algorithm (GA), and the Kriging surrogate model. The GA is used to resolve the maximum-likelihood equation and achieve the optimal values of the Kriging hyperparameter θ. The ERSM is utilized to resolve the response process of outputs in surrogate modeling by extracting the extremum values. The DC strategy is used to coordinate the output responses of analytical objectives. The probabilistic analysis of an aeroengine high-pressure turbine blisk with blade and disk is conducted to validate the effectiveness and feasibility of this developed method, by considering the fluid–thermal–structural interaction. From this investigation, we see that the reliability of the turbine blisk is 0.9976 when the allowable value of radial deformation is 2.319 × 10⁻³ m. In terms of the sensitivity analysis, gas temperature has the highest impact on turbine blisk radial deformation, followed by angular speed, inlet velocity, material density, outlet pressure, and inlet pressure. By comparing methods, including the DC surrogate modeling method (DCSMM) with quadratic polynomial, the DCSMM with Kriging, and the direct simulation with the finite-element model, from the model-fitting and simulation performance perspectives, we discover that the developed IDCKMS is superior to the other three methods in the precision and efficiency of modeling and simulation. The efforts of this article provide a highly efficient and highly accurate technique for the dynamic probabilistic analysis of complex structures and enrich reliability theory.
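
A minimal sketch of the Kriging-plus-Monte-Carlo idea is given below, assuming a cheap analytic stand-in for the deformation response and sklearn's Gaussian process as the Kriging surrogate; the decomposition-coordination, extremum response surface, and GA tuning of the actual IDCKMS are omitted.

```python
# Hedged sketch: train a Kriging (Gaussian-process) surrogate on a cheap
# analytic "deformation" function, then run Monte Carlo on the surrogate to
# estimate a reliability against an allowable value. The true IDCKMS adds
# decomposition-coordination, an extremum response surface, and GA tuning.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)

def deformation(x):
    # Placeholder limit-state function of (temperature, speed) - not real physics.
    temp, speed = x[:, 0], x[:, 1]
    return 1e-3 * (1.5 + 0.4 * temp + 0.3 * speed + 0.1 * temp * speed)

# Small design of experiments for the surrogate (standardized inputs).
X_train = rng.uniform(-3, 3, size=(60, 2))
y_train = deformation(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Monte Carlo on the surrogate: inputs ~ standard normal, failure occurs when
# the predicted deformation exceeds a hypothetical allowable value.
X_mc = rng.standard_normal((200_000, 2))
y_mc = gp.predict(X_mc)
allowable = 2.3e-3
print("Estimated reliability:", np.mean(y_mc <= allowable))
```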

51 citations


Journal ArticleDOI
TL;DR: Results of the cross-project evaluation suggest that the proposed CNN-based approach to bug report prioritization significantly outperforms the state-of-the-art approaches and improves the average F1-score by more than 24%.
Abstract: Software systems often receive a large number of bug reports. Triagers read through such reports and assign different priorities to different reports so that important and urgent bugs can be fixed on time. However, manual prioritization is tedious and time-consuming. To this end, in this article, we propose a convolutional neural network (CNN) based automatic approach to predict the multiclass priority for bug reports. First, we apply natural language processing (NLP) techniques to preprocess the textual information of bug reports and convert the textual information into vectors based on the syntactic and semantic relationships of words within each bug report. Second, we perform software engineering domain-specific emotion analysis on bug reports and compute the emotion value for each of them using a software engineering domain repository. Finally, we train a CNN-based classifier that generates a suggested priority based on its input, i.e., the vectorized textual information and emotion values. To the best of our knowledge, it is the first CNN-based approach to bug report prioritization. We evaluate the proposed approach on open-source projects. Results of our cross-project evaluation suggest that the proposed approach significantly outperforms the state-of-the-art approaches and improves the average F1-score by more than 24%.
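
A minimal CNN text classifier of the general kind described (word embeddings followed by a 1-D convolution) might look like the sketch below; the vocabulary size, sequence length, priority classes, and training data are placeholders, and the paper's emotion-analysis input and NLP preprocessing are omitted.

```python
# Hedged sketch of a CNN text classifier for multiclass bug-report priority
# (word embeddings + 1-D convolution). The paper additionally feeds in an
# emotion score and domain-specific preprocessing; all sizes are placeholders.
import numpy as np
import tensorflow as tf

vocab_size, seq_len, n_classes = 20_000, 200, 5

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-ins for tokenized bug reports and priority labels (P1..P5).
X = np.random.randint(0, vocab_size, size=(1000, seq_len))
y = np.random.randint(0, n_classes, size=(1000,))
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```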

Journal ArticleDOI
TL;DR: Experimental results indicate that this approach achieves higher area under curve (AUC) and Recall, and comparable probability of false alarm (pf) and F-measure, compared with some existing methods for the class-imbalance problem.
Abstract: Software defect prediction (SDP) is an effective way to enhance test efficiency and guarantee software reliability. However, there are more clean instances than defective instances in real software projects, which results in severe class distribution skew and poor classifier performance. Thus, solving the class-imbalance problem in SDP has attracted growing attention from industry and academia in software engineering. In this paper, we propose a novel class-imbalance learning approach for both the within-project and cross-project class-imbalance problems. We utilize the idea of stratification embedded in nearest neighbor (STr-NN) to produce evolving training datasets with balanced data. For within-project prediction, we directly employ the STr-NN approach for defect prediction. For cross-project prediction, we first introduce transfer component analysis to mitigate the distribution differences between the source and target datasets, and then employ the STr-NN approach on the transferred data. We conduct experiments on the PROMISE and NASA datasets using ensemble learning based on weighted voting. Experimental results indicate that our approach achieves higher area under curve (AUC) and Recall, and comparable probability of false alarm (pf) and F-measure, compared with some existing methods for the class-imbalance problem.

Journal ArticleDOI
TL;DR: Experimental results show that DIAVA not only outperforms state-of-the-art WAFs in detecting SQLIAs from the perspectives of precision and recall, but also enables real-time vulnerability evaluation of leaked data caused by SQL injection.
Abstract: SQL injection attack (SQLIA) is among the most common security threats to web-based services that are deployed on the cloud. By exploiting web software vulnerabilities, SQL injection attackers can run arbitrary malicious code on target databases to acquire or compromise sensitive data. Although web application firewalls (WAFs) are offered by most cloud service providers, tenants are reluctant to pay for them, since there are few approaches that can report accurate SQLIA statistics for their deployed services. Traditional WAFs focus on blocking suspicious SQL requests. Few of them can accurately decide whether an attack is really harmful and quickly answer how severe the attack is. To raise the tenants’ awareness of the seriousness of SQLIAs, in this paper, we introduce a novel traffic-based SQLIA detection and vulnerability analysis framework named DIAVA, which can proactively send warnings to tenants promptly. By analyzing the bidirectional network traffic of SQL operations and applying our proposed multilevel regular expression model, DIAVA can accurately identify successful SQLIAs among all the suspects. Meanwhile, the severity of such SQLIAs and the vulnerabilities of the corresponding leaked data can be quickly evaluated by DIAVA based on its GPU-based dictionary attack analysis engine. Experimental results show that DIAVA not only outperforms state-of-the-art WAFs in detecting SQLIAs from the perspectives of precision and recall, but also enables real-time vulnerability evaluation of leaked data caused by SQL injection.
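
The flavor of regular-expression-based screening can be illustrated with a single-level sketch like the one below; DIAVA's multilevel model additionally analyzes bidirectional traffic to confirm which injections actually succeed, so this is only a toy approximation.

```python
# Hedged sketch: a single-level regular-expression screen for suspicious SQL
# fragments in captured requests. DIAVA's multilevel model additionally
# correlates request and response traffic to confirm which injections succeed.
import re

SQLI_PATTERNS = [
    r"(?i)\bunion\b.+\bselect\b",                      # UNION-based extraction
    r"(?i)\bor\b\s+['\"]?\d+['\"]?\s*=\s*['\"]?\d+",   # tautologies like OR 1=1
    r"(?i);\s*(drop|delete|update)\b",                 # stacked destructive statements
    r"--|#|/\*",                                       # comment-based truncation
]
COMPILED = [re.compile(p) for p in SQLI_PATTERNS]

def looks_like_sqli(query: str) -> bool:
    return any(p.search(query) for p in COMPILED)

requests = [
    "SELECT name FROM users WHERE id = 42",
    "SELECT name FROM users WHERE id = 0 OR 1=1 --",
    "SELECT id FROM items WHERE title = '' UNION SELECT password FROM users",
]
for q in requests:
    print(looks_like_sqli(q), "|", q)
```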

Journal ArticleDOI
TL;DR: Experimental results show that DAMBA outperforms the well-known detector, McAfee, which is based on signature recognition, and DAMBA is demonstrated to resist the known malware attacks and their variants efficiently, as well as malware that uses obfuscation techniques.
Abstract: With the rapid development of smart devices, mobile phones have permeated many aspects of our life. Unfortunately, their widespread popularization has attracted endless attacks that pose serious threats to users. As the mobile system with the largest market share, Android has been the hardest hit for years. To Detect Android Malware by ORGB Analysis, in this paper, we present DAMBA, a novel prototype system based on a C/S architecture. DAMBA extracts the static and dynamic features of apps. For further analyses, we propose TANMAD, a two-step Android malware detection algorithm, which reduces the range of possible malware families, and then utilizes subgraph isomorphism matching for malware detection. The key novelty of this paper is the modeling of object reference information by constructing directed graphs, which are called object reference graph birthmarks (ORGBs). To achieve better efficiency and accuracy, in this paper, we present several optimization strategies for hybrid analysis. DAMBA is evaluated on a large real-world dataset of 2239 malicious and 1000 popular benign apps. The detection accuracy reaches 100% in most cases, and the average detection time is less than 5 s. Experimental results show that DAMBA outperforms the well-known detector, McAfee, which is based on signature recognition. In addition, DAMBA is demonstrated to resist the known malware attacks and their variants efficiently, as well as malware that uses obfuscation techniques.
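
The subgraph-isomorphism matching step can be illustrated with toy graphs using networkx's VF2 matcher, as sketched below; the actual construction of object reference graph birthmarks from static and dynamic app analysis is outside the scope of this illustration.

```python
# Hedged sketch of the subgraph-isomorphism step: check whether a known
# malware-family birthmark graph is embedded in an app's object reference
# graph. The graphs below are toy stand-ins; DAMBA builds ORGBs from static
# and dynamic analysis of real apps.
import networkx as nx
from networkx.algorithms import isomorphism

# Toy object reference graph of a candidate app (directed edges = references).
app_graph = nx.DiGraph([
    ("Activity", "SmsManager"),
    ("Activity", "HttpClient"),
    ("SmsManager", "ContactStore"),
    ("HttpClient", "RemoteServer"),
])

# Toy birthmark of a hypothetical SMS-stealing family.
birthmark = nx.DiGraph([
    ("Activity", "SmsManager"),
    ("SmsManager", "ContactStore"),
])

matcher = isomorphism.DiGraphMatcher(
    app_graph, birthmark,
    node_match=lambda a, b: True,   # could match on node labels/types instead
)
print("Birthmark found in app graph:", matcher.subgraph_is_isomorphic())
```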

Journal ArticleDOI
TL;DR: A new deadline-constrained reliability-aware HEFT algorithm, namely DRHEFT, is proposed, which seeks the best SER–LTR tradeoff solutions by using fuzzy dominance to evaluate the relative fitness values of candidate solutions.
Abstract: Heterogeneous multiprocessor system-on-chips (MPSoCs) are suitable platforms for real-time embedded applications that require powerful parallel processing capability as well as low power consumption. For such applications, soft-error reliability (SER) due to transient faults and lifetime reliability (LTR) due to permanent faults are both key concerns. There have been several efforts in the literature oriented toward related reliability problems. However, most existing techniques only concentrate on improving one of the two reliability metrics, which makes them unsuitable for embedded systems deployed in critical applications in need of a long lifetime as well as reliable execution. This article develops a novel heterogeneous earliest-finish-time (HEFT)-based algorithm to maximize SER and LTR simultaneously under the real-time constraint for dependent tasks executing on heterogeneous MPSoC systems. More specifically, a new deadline-constrained reliability-aware HEFT algorithm, namely DRHEFT, is proposed, which seeks the best SER–LTR tradeoff solutions by using fuzzy dominance to evaluate the relative fitness values of candidate solutions. The extensive experiments on real-life benchmarks as well as synthetic applications demonstrate that DRHEFT is capable of achieving better SER–LTR tradeoff solutions with higher hypervolume and less computation cost when compared with the state-of-the-art approaches.

Journal ArticleDOI
TL;DR: A hybrid FMEA framework for addressing machine tool risk analysis problem by integrating cloud model, Choquet integral, and gained and lost dominance score (GLDS) method is proposed.
Abstract: Enhancing the reliability of machine tools and preventing potential failures in the manufacturing process is one of the most important tasks for the development of industry. The failure mode and effects analysis (FMEA) is a well-known and widely utilized approach to identify and evaluate potential failures for preventing risk in various enterprises. It is a group-oriented technique ordinarily conducted by a group of experts from related fields. Obviously, an effective approach should be developed to integrate risk evaluation information from multiple experts by considering group and individual risk attitudes. This article proposes a hybrid FMEA framework for addressing the machine tool risk analysis problem by integrating the cloud model, the Choquet integral, and the gained and lost dominance score (GLDS) method. In this framework, an improved Shapley cloud-Choquet weighting averaging operator is defined to fuse random and uncertain risk information by considering the correlations among experts. An extended GLDS method with a developed Choquet integral based on a distance measure of normal clouds is presented to rank the risk priority of each failure, in which the group and individual risk attitudes and risk interactions are considered simultaneously. Finally, a real risk analysis of a machine tool in the machine tool industry is introduced to illustrate the application and feasibility of the proposed approach, and comparison and sensitivity studies are also conducted to validate the effectiveness of the hybrid framework.

Journal ArticleDOI
TL;DR: The modeling and optimization of the two imperfect PM policies with unpunctual executions for infinite and finite planning horizons are discussed, and the impact of unpunctual executions on the optimal PM decisions and corresponding maintenance expenses is explored in an analytical or numerical way.
Abstract: Traditional maintenance planning problems usually presume that preventive maintenance (PM) policies will be executed exactly as planned. In reality, however, maintainers often deviate from the intended PM policy, resulting in unpunctual PM executions that may reduce maintenance effectiveness. This article studies two imperfect PM policies with unpunctual executions for infinite and finite planning horizons, respectively. Under the former policy, imperfect PM actions are periodically performed and the system is preventively replaced at the last PM instant. The objective is to determine the optimal number of PM actions and associated PM interval so as to minimize the long-run average cost rate. However, the latter policy specifies that a system is subject to periodic PM activities within a finite planning horizon and there is no PM activity at the end of the horizon. The aim is then to identify the optimal number of PM activities to minimize the expected total maintenance cost. In this article, we discuss the modeling and optimization of the two unpunctual PM policies and then explore the impact of unpunctual executions on the optimal PM decisions and corresponding maintenance expenses in an analytical or numerical way. The resulting insights are helpful for practitioners to adjust their PM plans when unpunctual executions are anticipated.
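
For intuition, the sketch below optimizes the classical punctual baseline that such policies generalize: periodic PM with minimal repairs in between and a Weibull failure law, minimizing the long-run cost rate over the PM interval. The cost and Weibull parameters are hypothetical, and unpunctuality would perturb the actual PM instants around the planned ones.

```python
# Hedged sketch of the punctual baseline that such policies generalize:
# periodic PM at interval T with minimal repairs in between, Weibull failures.
# Long-run cost rate C(T) = (c_pm + c_mr * H(T)) / T, with cumulative hazard
# H(T) = (T / eta)**beta. Parameters below are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

beta, eta = 2.5, 1000.0      # Weibull shape/scale (hours)
c_pm, c_mr = 200.0, 1500.0   # cost of a PM action vs. a minimal repair

def cost_rate(T):
    H = (T / eta) ** beta    # expected number of minimal repairs in (0, T]
    return (c_pm + c_mr * H) / T

res = minimize_scalar(cost_rate, bounds=(10.0, 5000.0), method="bounded")
print(f"Optimal PM interval ~ {res.x:.0f} h, cost rate {res.fun:.3f} per hour")
```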

Journal ArticleDOI
TL;DR: This article studies the reliability of WSNs with multistate nodes and proposes a modified sum-of-disjoint products approach to evaluate WSN reliability in the presence of multistates nodes from the enumerated shortest minimal paths.
Abstract: Wireless sensor networks (WSNs) find application in various fields like environmental monitoring, health-care, land security, and many more. To ease our day-to-day activity, WSNs have become an integral tool for complex data gathering tasks. Monitoring a phenomenon by a WSN depends on the collective data provided by the sensor nodes. To ensure reliable operation of WSNs, it is important to quantify the performance of such networks in terms of network reliability measures. This article studies the reliability of WSNs with multistate nodes and proposes an approach to evaluate the flow-oriented network reliability of WSNs consisting of multistate sensor nodes. The proposed method takes into account the dynamic state of the network due to multistate sensor nodes. The proposed approach includes enumeration of shortest minimal paths from application-specific flow satisfying sensor nodes (source nodes) to the sink node. It then proposes a modified sum-of-disjoint products approach to evaluate WSN reliability in the presence of multistate nodes from the enumerated shortest minimal paths. Simulations are performed on WSNs of various sizes to show the applicability of the proposed approach on arbitrary WSNs.
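
As a simplified illustration of reliability evaluation from minimal paths (binary-state nodes rather than the paper's multistate setting, and inclusion-exclusion rather than the modified sum-of-disjoint-products), the sketch below computes source-to-sink reliability for a toy topology.

```python
# Hedged sketch: two-terminal reliability from minimal path sets by
# inclusion-exclusion, for binary (working/failed) nodes on a toy topology.
# The paper's method instead applies a modified sum-of-disjoint-products
# technique to multistate sensor nodes and flow requirements.
from itertools import combinations
import numpy as np

# Hypothetical minimal paths (as node sets) from a source sensor "s" to the sink "t".
minimal_paths = [frozenset({"s", "a", "c", "t"}),
                 frozenset({"s", "b", "c", "t"}),
                 frozenset({"s", "b", "d", "t"})]
p = {"s": 0.95, "a": 0.90, "b": 0.90, "c": 0.85, "d": 0.85, "t": 0.99}  # node reliabilities

def path_union_prob(paths):
    """P(at least one path has all its nodes working), by inclusion-exclusion."""
    total = 0.0
    for r in range(1, len(paths) + 1):
        for subset in combinations(paths, r):
            nodes = frozenset().union(*subset)
            term = np.prod([p[n] for n in nodes])
            total += (-1) ** (r + 1) * term
    return total

print("Source-to-sink reliability:", round(path_union_prob(minimal_paths), 4))
```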

Journal ArticleDOI
TL;DR: A cost-sensitive ranking support vector machine (SVM) (CSRankSVM), which modifies the loss function of the ranking SVM algorithm by adding two penalty parameters to address both the cost issue and the data imbalance problem in RODP methods and achieves better performance.
Abstract: Context: Ranking-oriented defect prediction (RODP) ranks software modules to allocate limited testing resources to each module according to the predicted number of defects. Most RODP methods overlook two issues: incorrectly ranking a module with more defects makes it difficult to find all of the defects in that module, because fewer testing resources are allocated to it, which results in much higher costs than incorrectly ranking modules with fewer defects; moreover, the numbers of defects in software modules are highly imbalanced in defective software datasets. Cost-sensitive learning is an effective technique for handling the cost issue and the data imbalance problem in software defect prediction. However, the effectiveness of cost-sensitive learning has not been investigated in RODP models. Aims: In this article, we propose a cost-sensitive ranking support vector machine (CSRankSVM) algorithm to improve the performance of RODP models. Method: CSRankSVM modifies the loss function of the ranking SVM algorithm by adding two penalty parameters to address both the cost issue and the data imbalance problem. Additionally, the loss function of the CSRankSVM is optimized using a genetic algorithm. Results: The experimental results for 11 project datasets with 41 releases show that CSRankSVM achieves 1.12%–15.68% higher average fault percentile average (FPA) values than the five existing RODP methods (i.e., decision tree regression, linear regression, Bayesian ridge regression, ranking SVM, and learning-to-rank (LTR)) and 1.08%–15.74% higher average FPA values than the four data imbalance learning methods (i.e., random undersampling and the synthetic minority oversampling technique, which are two data resampling methods; RankBoost, an ensemble learning method; and IRSVM, a cost-sensitive ranking SVM method for information retrieval). Conclusion: CSRankSVM is capable of handling the cost issue and the data imbalance problem in RODP methods and achieves better performance. Therefore, CSRankSVM is recommended as an effective method for RODP.

Journal ArticleDOI
TL;DR: An in-depth analysis of various operation modes and design constraints of a new soft switching isolated push–pull dc–dc converter using a three-winding transformer is presented, offering a high efficiency over a wide range of input and output voltage signals with an unsophisticated fixed-frequency control mechanism.
Abstract: In this article, a new soft switching isolated push–pull dc–dc converter using a three-winding transformer is proposed. The proposed hybrid resonant and pulse width modulated converter employs a conventional push–pull structure on the primary side, a voltage doubler on the secondary side, and a bidirectional switch alongside the transformer, which together help offer high efficiency over a wide range of input and output voltages with an unsophisticated fixed-frequency control mechanism. The primary-side switches are commutated under zero voltage switching with low switching current, and the secondary-side diodes are commutated under zero current switching. In this article, we first present an in-depth analysis of various operation modes and design constraints. Our analysis is further complemented with a comprehensive reliability evaluation of the proposed converter under various short circuit and open circuit fault scenarios. Different from the previous research, the derated operating states of the proposed converter are detailed and characterized in the reliability evaluations. A comparison study is then provided to evaluate the performance of the proposed converter against other similar converters from the operation, component count, efficiency, and reliability perspectives. Finally, the theoretical analyses are verified via tests and experiments performed on a 280 W/34.7 kHz prototype.

Journal ArticleDOI
TL;DR: The developed method uses a local-weighted minority oversampling strategy to identify hard-to-learn informative minority fault samples and an EM-based imputation algorithm to generate fault samples based on the distribution of minority samples.
Abstract: Data-driven fault diagnostics of industrial systems suffers from class-imbalance problems, which are a common challenge for machine learning algorithms, as it is difficult to learn the features of the minority class samples. Synthetic oversampling methods are commonly used to tackle these problems by generating minority class samples to balance the majority and minority classes. Two major issues influence the performance of oversampling methods: how to choose the most appropriate existing minority seed samples, and how to synthesize new samples from the seed samples effectively. However, many existing oversampling methods are not accurate and effective enough in generating new samples when dealing with high-dimensional faulty samples with different imbalance ratios, since they do not take these two factors into consideration at the same time. This article develops a novel adaptive oversampling technique: an expectation maximization (EM)-based local-weighted minority oversampling technique for industrial fault diagnostics. This method uses a local-weighted minority oversampling strategy to identify hard-to-learn informative minority fault samples and an EM-based imputation algorithm to generate fault samples based on the distribution of the minority samples. To validate the performance of the developed method, experiments were conducted on two real-world datasets. The results show that the developed method achieves better performance, in terms of F-measure, Matthews correlation coefficient (MCC), and Mean (the average of F-measure and MCC) values, on multiclass imbalanced fault diagnostics with different imbalance ratios than state-of-the-art baseline sampling techniques.
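
The generative idea behind EM-based oversampling can be sketched as follows: fit a Gaussian mixture (trained by EM) to the minority fault class and sample synthetic instances until the classes are balanced. This is a simplified stand-in, not the authors' local-weighted seed selection or imputation scheme.

```python
# Hedged sketch of the generative idea behind EM-based oversampling: fit a
# Gaussian mixture (trained by EM) to the minority fault class and sample
# synthetic instances until the classes are balanced. The paper additionally
# weights hard-to-learn minority seeds and uses an EM-based imputation scheme.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

minority = X_tr[y_tr == 1]
gmm = GaussianMixture(n_components=2, random_state=0).fit(minority)
n_needed = (y_tr == 0).sum() - (y_tr == 1).sum()
X_syn, _ = gmm.sample(n_needed)

X_bal = np.vstack([X_tr, X_syn])
y_bal = np.concatenate([y_tr, np.ones(n_needed, dtype=int)])

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print("F1 on the minority class:", f1_score(y_te, clf.predict(X_te)))
```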

Journal ArticleDOI
TL;DR: The results show that, for a flight cruise with different stages, the new method gives reasonable results for most of the load histories, especially when impact loads are present.
Abstract: This paper develops an improved dynamic reliability model for a compressor rotor system, defined here as an “unconventional active system.” First, regarding the variety of system-component connections, a system-specific reliability modeling technique is presented, in which the stress and strength are both nonnegative stochastic processes. The new model fully considers the load characteristics of rotor blades and inherently embodies the effect of the statistical dependence between the failures of the rotor blades. Then, using the new model, the relationship between reliability and time and that between the hazard rate and time for a compressor rotor system are discussed. Compared to the traditional series system reliability model, the new system reliability model developed in this paper has higher accuracy for the compressor rotor system, since the traditional models neglect the difference between a compressor rotor system and a traditional active system. Moreover, in order to capture the cruise characteristics of the airplane during reliability analysis of its compressor rotor blade set, a flight cruise is separated into several stages based on the Mach number and its change rate, and a piecewise solution method is proposed accordingly. At the end of the paper, a certain cruise of a fighter plane is analyzed to verify the accuracy of the new method. The results show that, for a flight cruise with different stages, the new method gives reasonable results for most of the load histories, especially when impact loads are present.
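
To make the dependence argument concrete, the hedged sketch below compares a naive independent-series approximation with a shared-load model in which all blades experience the same random stress; the distributions are hypothetical and much simpler than the paper's stochastic-process formulation.

```python
# Hedged sketch of why blade failure dependence matters: compare a naive
# independent-series model with a shared-load model in which all blades see
# the same random stress. Distributions below are hypothetical, not the
# paper's stochastic-process model.
import numpy as np

rng = np.random.default_rng(7)
n_blades, n_mc = 30, 200_000

strength = rng.normal(600.0, 40.0, size=(n_mc, n_blades))   # MPa, per blade
stress_shared = rng.normal(450.0, 50.0, size=(n_mc, 1))     # same load on all blades

# Shared-load system: fails if any blade's strength drops below the common stress.
r_shared = np.mean(np.all(strength > stress_shared, axis=1))

# Naive independent-series model: R_sys = r_blade ** n_blades.
r_blade = np.mean(strength[:, 0] > rng.normal(450.0, 50.0, n_mc))
r_indep = r_blade ** n_blades

print(f"Shared-load system reliability:   {r_shared:.4f}")
print(f"Independent-series approximation: {r_indep:.4f}")
```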

Journal ArticleDOI
TL;DR: Empirical results show that two unsupervised methods [i.e., lines of code (LOC) and Halstead's volume (HV)] and four recently proposed state-of-the-art supervised methods can achieve better performance than the other methods in terms of effort-aware performance measures.
Abstract: Security vulnerability prediction (SVP) can identify potentially vulnerable modules in advance and then help developers to allocate most of the test resources to these modules. To evaluate the performance of different SVP methods, we should take the security audit and code inspection into account and then consider effort-aware performance measures (such as ACC and Popt). However, to the best of our knowledge, the effectiveness of different SVP methods has not been thoroughly investigated in terms of effort-aware performance measures. In this article, we consider 48 different SVP methods, of which 36 are supervised methods and 12 are unsupervised methods. For the supervised methods, we consider 34 software-metric-based methods and two text-mining-based methods. For the software-metric-based methods, in addition to a large number of classification methods, we also consider four state-of-the-art methods (i.e., EALR, OneWay, CBS, and MULTI) proposed in recent effort-aware just-in-time defect prediction studies. For the text-mining-based methods, we consider the Bag-of-Words model and the term-frequency-inverse-document-frequency model. For the unsupervised methods, all the modules are ranked in ascending order based on a specific metric. Since 12 software metrics are considered when measuring the extracted modules, there are 12 different unsupervised methods. To the best of our knowledge, over 40 SVP methods have not been considered in previous SVP studies. In our large-scale empirical studies, we use three real open-source web applications written in PHP as benchmarks. These three web applications include 3466 modules and 223 vulnerabilities in total. We evaluate these SVP methods both in the within-project SVP scenario and the cross-project SVP scenario. Empirical results show that two unsupervised methods [i.e., lines of code (LOC) and Halstead's volume (HV)] and four recently proposed state-of-the-art supervised methods (i.e., MULTI, OneWay, CBS, and EALR) can achieve better performance than the other methods in terms of effort-aware performance measures. Then, we analyze the reasons why these six methods can achieve better performance. For example, when using 20% of the entire effort, we find that these six methods always require more modules to be inspected, especially for the unsupervised methods LOC and HV. Finally, from the view of practical vulnerability localization, we find that all the unsupervised methods and the OneWay method have high false alarms before finding the first vulnerable module. This may have an impact on developers’ confidence and tolerance, and supervised methods (especially MULTI and text-mining-based methods) are preferred.

Journal ArticleDOI
TL;DR: In this paper, the authors develop a metamorphic testing approach to assess and validate unsupervised machine learning systems, abbreviated as mettle, by explicitly considering the specific expectations and requirements of these systems from individual users' perspectives.
Abstract: Unsupervised machine learning is the training of an artificial intelligence system using information that is neither classified nor labeled, with a view to modeling the underlying structure or distribution in a dataset. Since unsupervised machine learning systems are widely used in many real-world applications, assessing the appropriateness of these systems and validating their implementations with respect to individual users’ requirements and specific application scenarios/contexts are indisputably two important tasks. Such assessments and validation tasks, however, are fairly challenging due to the absence of a priori knowledge of the data. In view of this challenge, in this article, we develop a METamorphic Testing approach to assessing and validating unsupervised machine LEarning systems, abbreviated as mettle. Our approach provides a new way to unveil the (possibly latent) characteristics of various machine learning systems, by explicitly considering the specific expectations and requirements of these systems from individual users’ perspectives. To support mettle, we have further formulated 11 generic metamorphic relations (MRs), covering users’ generally expected characteristics that should be possessed by machine learning systems. We have performed an experiment and a user evaluation study to evaluate the viability and effectiveness of mettle. Our experiment and user evaluation study have shown that, guided by user-defined MR-based adequacy criteria, end users are able to assess, validate, and select appropriate clustering systems in accordance with their own specific needs. Our investigation has also yielded insightful understanding and interpretation of the behavior of the machine learning systems from an end-user software engineering perspective, rather than from the perspective of a designer or implementor, who normally adopts a theoretical approach.
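
One generic metamorphic relation for clustering systems can be checked in a few lines, as sketched below: shuffling the order of the input instances should not change the resulting partition up to relabeling. This is only an illustration with k-means; it is not one of the paper's 11 MRs verbatim.

```python
# Hedged sketch of one generic metamorphic relation (MR) for a clustering
# system: shuffling the order of the input instances should not change the
# resulting partition (up to cluster relabeling). mettle defines 11 such MRs;
# this is only an illustration with k-means on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

labels_src = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

perm = np.random.default_rng(1).permutation(len(X))
labels_follow = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[perm])

# Undo the permutation so both label vectors refer to the same instances,
# then compare partitions with a relabeling-invariant score.
labels_follow_unshuffled = np.empty_like(labels_follow)
labels_follow_unshuffled[perm] = labels_follow
ari = adjusted_rand_score(labels_src, labels_follow_unshuffled)
print("MR satisfied (ARI == 1.0)?", np.isclose(ari, 1.0), "ARI =", ari)
```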

Journal ArticleDOI
TL;DR: The analysis reveals that STT-MRAM cache vulnerability is highly workload-dependent and varies by orders of magnitude in different cache access patterns; moreover, error rates are affected differently by PVs.
Abstract: Spin-transfer torque magnetic RAM (STT-MRAM) is known as the most promising replacement for static random access memory (SRAM) technology in large last-level cache memories (LLC). Despite its high density, nonvolatility, near-zero leakage power, and immunity to radiation as the major advantages, STT-MRAM-based cache memory suffers from high error rates mainly due to retention failure (RF), read disturbance, and write failure. Existing studies are limited to estimating the rate of only one or two of these error types for STT-MRAM caches. However, the overall vulnerability of STT-MRAM caches, whose estimation is a must to design cost-efficient reliable caches, has not been studied previously. In this paper, we propose a system-level framework for reliability exploration and characterization of errors’ behavior in STT-MRAM caches. To this end, we formulate the cache vulnerability considering the intercorrelation of the error types including RF, read disturbance, and write failure as well as the dependency of error rates on workloads’ behavior and process variations (PVs). Our analysis reveals that STT-MRAM cache vulnerability is highly workload-dependent and varies by orders of magnitude in different cache access patterns. Our analytical study also shows that this vulnerability divergence is significantly increased by PVs in STT-MRAM cells. To take the effects of system workloads and PVs into account, we implement the error types in the gem5 full-system simulator. The experimental results using a comprehensive set of multiprogrammed workloads from the SPEC CPU2006 benchmark suite on a quad-core processor show that the total error rate in a shared STT-MRAM LLC varies by 32.0× for different workloads. A further 6.5× vulnerability variation is observed when considering PVs in the STT-MRAM cells. In addition, the contribution of each error type to total LLC vulnerability varies widely across different cache access patterns; moreover, error rates are affected differently by PVs. The proposed analytical and empirical studies can significantly help system architects efficiently utilize error mitigation techniques and design highly reliable and low-cost STT-MRAM LLCs.

Journal ArticleDOI
TL;DR: This work explores an efficient machine learning technique, namely the extreme learning machine (ELM), for predicting the number of software faults, and a new variant of ELM, namely weighted regularization ELM, is proposed to generalize the imbalanced data to balanced data.
Abstract: Imbalanced data is a significant issue in software fault prediction. It is very challenging for software engineers to handle imbalanced software fault data for the early prediction of software faults. In the last two decades, many researchers have used the synthetic minority oversampling technique (SMOTE), SMOTE for regression, and other such techniques to preprocess imbalanced software fault data. However, these preprocessing techniques do not produce consistently good accuracy, especially in inter-release and cross-project fault prediction. The learning of imbalanced fault data for predicting the number of software faults has not been explored in depth so far. To deal with this scenario, we have explored an efficient machine learning technique, namely the extreme learning machine (ELM), for predicting the number of software faults. Furthermore, a new variant of ELM, namely weighted regularization ELM, is proposed to generalize the imbalanced data to balanced data. To validate the proposed imbalanced learning model, we have used 26 open-source PROMISE software fault datasets and three prediction scenarios: intra-release, inter-release, and cross-project. We have conducted the experiments for prediction of the number of faults. The experimental results showed that the proposed approach led to improved performance.
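
A plain weighted, regularized ELM can be written in closed form, as in the hedged sketch below: a random hidden layer followed by a weighted ridge solution that up-weights the rare high-fault modules. The data, weighting rule, and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a weighted, regularized extreme learning machine (ELM) for
# predicting fault counts: random hidden layer, then the closed-form weighted
# ridge solution beta = (H^T W H + I/C)^(-1) H^T W y, with larger weights on
# the rare high-fault modules. Data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(8)
n, d, hidden, C = 500, 10, 100, 10.0

X = rng.standard_normal((n, d))
y = np.maximum(0, np.round(X[:, 0] * 2 + rng.standard_normal(n)))  # fault counts

# Random input weights/biases and sigmoid hidden layer (the ELM part).
W_in = rng.standard_normal((d, hidden))
b_in = rng.standard_normal(hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W_in + b_in)))

# Per-sample weights: up-weight the minority (high-fault) modules.
w = np.where(y >= 2, 5.0, 1.0)
W = np.diag(w)

beta = np.linalg.solve(H.T @ W @ H + np.eye(hidden) / C, H.T @ W @ y)
y_hat = H @ beta
print("Training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```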

Journal ArticleDOI
TL;DR: In this paper, the authors use the semi-Markov process (SMP) theorem for DFT solution with the motivation of avoiding model state explosion and considering nonexponential failure distributions through a hierarchical solution, which can generalize dynamic behaviors like functional dependencies, sequences, priorities, and spares in a single model.
Abstract: Dynamic fault tree (DFT) is a top-down deductive technique extended to model systems with complex failure behaviors and interactions. In the last two decades, different methods have been applied to improve its capabilities, such as computational complexity reduction, modularization, intricate failure distributions, and reconfiguration. This paper uses the semi-Markov process (SMP) theorem for DFT solution with the motivation of avoiding model state explosion and considering nonexponential failure distributions through a hierarchical solution. In addition, in the proposed method, a universal SMP for static and dynamic gates is introduced, which can generalize dynamic behaviors like functional dependencies, sequences, priorities, and spares in a single model. The efficiency of the method regarding precision and competitiveness with commercial tools, repeated events consideration, computational complexity reduction, nonexponential failure distribution consideration, and repairable events in DFT is studied by a number of examples, and the results are then compared to those of the selected existing methods.

Journal ArticleDOI
TL;DR: A generic framework for modeling and assessment of preventive and corrective maintenance, generalizing the virtual age models of Kijima, is presented, together with two applications to real datasets from off-road engines of Brazilian mining trucks and a bladder cancer study.
Abstract: This paper presents a generic framework for modeling and assessment of preventive and corrective maintenance. The virtual age (VA) models of Kijima are generalized. The proposed definition encompasses many existing models as VA models. New models are also proposed. Covariate effects can be considered. The framework is generic in the sense that we propose an iterative way to compute the characteristics of the model that does not depend on the number of maintenance actions, their types, their effect models, or the way the preventive maintenances are planned. Such a framework is particularly interesting for developing adaptive software tools, such as the virtual age models (VAM) package for the R language. Methods are proposed for maintenance time simulation, maximum likelihood parametric estimation, and reliability indicator computation. Finally, we present two applications to real datasets from off-road engines of Brazilian mining trucks and a bladder cancer study.
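
As a small illustration of the virtual age idea (Kijima type I only, without covariates or preventive maintenance), the sketch below simulates failure times of a repairable system with a Weibull baseline and imperfect repairs.

```python
# Hedged sketch: simulate a repairable system under a Kijima type-I virtual
# age model with a Weibull baseline. After the i-th repair with effectiveness
# rho in [0, 1], the virtual age becomes v_i = v_{i-1} + rho * x_i, and the
# next inter-failure time x is drawn from the Weibull conditional on age v.
import numpy as np

rng = np.random.default_rng(9)
beta, eta, rho = 2.0, 100.0, 0.4   # Weibull shape/scale, repair effectiveness

def next_failure_time(v):
    """Sample x with P(X > x) = S(v + x) / S(v) for a Weibull(beta, eta)."""
    u = rng.uniform()
    return eta * ((v / eta) ** beta - np.log(u)) ** (1 / beta) - v

def simulate(horizon=1000.0):
    t, v, failures = 0.0, 0.0, []
    while True:
        x = next_failure_time(v)
        t += x
        if t > horizon:
            return failures
        failures.append(t)
        v += rho * x            # Kijima I: only a fraction of the new age remains

runs = [len(simulate()) for _ in range(2000)]
print("Mean number of failures over the horizon:", np.mean(runs))
```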

Journal ArticleDOI
TL;DR: Results show that the generalized inflection S-shaped model provides outputs that may significantly differ from those provided by the models nested within it, and the nonexistence issue of maximum-likelihood estimates cannot be considered an oddity and that its occurrence is not necessarily related to model complexity.
Abstract: In this paper, the new generalized inflection S-shaped software reliability growth model is proposed. It is a very flexible finite failure Poisson process that possesses two distinguishing features: 1) includes as special cases the popular inflection S-shaped model, Goel generalized nonhomogenous Poisson process, and Goel–Okumoto model and 2) differently than these latter models, allows for modeling nonmonotonic failure rate per fault functions. The properties of the generalized inflection S-shaped model are discussed and intuitive arguments are provided to justify its structure. Maximum-likelihood estimators of model parameters are formulated and their properties are summarized. A special attention is devoted to the nonexistence issue of maximum-likelihood estimates. The problem of estimating the optimal release time of a software product is also addressed. Affordability and flexibility of the proposed model are demonstrated via four applicative examples, based on real sets of software reliability data. Attained results show that the generalized inflection S-shaped model provides outputs that may significantly differ from those provided by the models nested within it. As a side result, the developed examples also show that the nonexistence issue of maximum-likelihood estimates, in the case of finite failure Poisson processes, cannot be considered an oddity and that its occurrence is not necessarily related to model complexity.
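
For reference, the classical inflection S-shaped mean value function nested within the proposed model can be fitted to cumulative failure counts in a few lines; the sketch below uses least squares on synthetic data rather than the maximum-likelihood machinery developed in the paper.

```python
# Hedged sketch: fit the classical inflection S-shaped mean value function
# m(t) = a * (1 - exp(-b*t)) / (1 + psi * exp(-b*t)), one of the special cases
# nested in the proposed generalized model, to cumulative failure-count data
# by least squares. The failure data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def m_inflection(t, a, b, psi):
    return a * (1 - np.exp(-b * t)) / (1 + psi * np.exp(-b * t))

t = np.arange(1, 41, dtype=float)                   # weeks of testing
true = m_inflection(t, a=120, b=0.15, psi=3.0)
rng = np.random.default_rng(10)
cum_failures = np.round(true + rng.normal(0, 2, t.size))

params, _ = curve_fit(m_inflection, t, cum_failures,
                      p0=(100.0, 0.1, 1.0), maxfev=10_000)
a_hat, b_hat, psi_hat = params
print(f"a = {a_hat:.1f} (expected total faults), b = {b_hat:.3f}, psi = {psi_hat:.2f}")
```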

Journal ArticleDOI
TL;DR: Robust estimators for one-shot device testing are developed by assuming a Weibull distribution as the lifetime model, providing a robust alternative to MLEs.
Abstract: Classical inferential methods for one-shot device testing data from an accelerated life-test are based on maximum likelihood estimators (MLEs) of model parameters. However, the lack of robustness of MLE is well-known. In this article, we develop robust estimators for one-shot device testing by assuming a Weibull distribution as a lifetime model. Wald-type tests based on these estimators are also developed. Their robustness properties are evaluated both theoretically and empirically, through an extensive simulation study. Finally, the methods of inference proposed are applied to three numerical examples. Results obtained from both Monte Carlo simulations and numerical studies show the proposed estimators to be a robust alternative to MLEs.
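
The (non-robust) MLE baseline for one-shot device data is easy to sketch: each device yields only a binary failed/not-failed outcome at its inspection time, and the Weibull parameters maximize the resulting Bernoulli likelihood. The sketch below uses synthetic data and omits the accelerating stress covariates and the robust weighting proposed in the paper.

```python
# Hedged sketch of the (non-robust) MLE baseline for one-shot device data:
# each device is tested once at inspection time tau_i and only a binary
# outcome (failed by tau_i or not) is observed; the Weibull parameters are
# estimated by maximizing the resulting Bernoulli likelihood. The robust
# weighted estimators proposed in the paper modify this objective.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
true_shape, true_scale = 1.8, 50.0
tau = rng.uniform(10, 90, size=400)                       # inspection times
lifetimes = true_scale * rng.weibull(true_shape, size=400)
failed = (lifetimes <= tau).astype(float)                 # one-shot outcomes

def neg_log_lik(params):
    log_shape, log_scale = params
    k, lam = np.exp(log_shape), np.exp(log_scale)
    F = 1 - np.exp(-(tau / lam) ** k)                     # Weibull cdf at tau
    F = np.clip(F, 1e-12, 1 - 1e-12)
    return -np.sum(failed * np.log(F) + (1 - failed) * np.log(1 - F))

res = minimize(neg_log_lik, x0=np.log([1.0, 40.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print(f"MLE shape ~ {shape_hat:.2f} (true {true_shape}), scale ~ {scale_hat:.1f} (true {true_scale})")
```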