
Showing papers on "Failure rate published in 2021"


Journal ArticleDOI
TL;DR: The results showed that the PV/WT/FC combination is the most cost-effective and reliable combination for supplying the RCC in terms of LSCS and LIPmax, and that the use of hydrogen storage as reserve power compensates well for fluctuations in renewable source production.

76 citations


Journal ArticleDOI
TL;DR: The results indicate that, when the units have an increasing failure rate, the new switching policy improves the performance of the units which, in turn, increases the reliability of the system.

27 citations


Journal ArticleDOI
TL;DR: In this paper, the optimal periodic preventive maintenance (PM) policy for the system is investigated, assuming that each shock may cause a random increment of the system failure rate, and the expression of the average cost rate is derived explicitly.

22 citations
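The average cost rate in periodic-PM models of this kind is typically the expected cost per cycle divided by the cycle length. A rough illustration, using a classical minimal-repair formulation with a Weibull failure rate rather than the paper's exact shock model (all parameter values are invented):

```python
def average_cost_rate(T, lam, c_pm, c_mr):
    """Average cost rate for periodic PM with minimal repair:
    C(T) = (c_pm + c_mr * integral_0^T lambda(t) dt) / T."""
    # numerical integration of the failure rate (trapezoidal rule)
    n = 1000
    h = T / n
    integral = sum(0.5 * (lam(i * h) + lam((i + 1) * h)) * h for i in range(n))
    return (c_pm + c_mr * integral) / T

# Weibull failure rate with shape beta > 1 (increasing failure rate)
beta, eta = 2.0, 100.0
lam = lambda t: (beta / eta) * (t / eta) ** (beta - 1)

# coarse grid search for the cost-minimizing PM interval
T_opt = min((T for T in range(10, 500, 5)),
            key=lambda T: average_cost_rate(T, lam, 50.0, 200.0))
```

For these illustrative costs (preventive maintenance 50, minimal repair 200), the optimum lands where the failure-driven cost growth balances the fixed PM cost.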


Journal ArticleDOI
TL;DR: In this paper, the authors developed an optimal statistical model to analyze the COVID-19 mortality rates in Somalia, which combines the log-logistic distribution and the tangent function, yielding the flexible extension log-logistic tangent (LLT) distribution, a new two-parameter distribution.
Abstract: The goal of this paper is to develop an optimal statistical model to analyze COVID-19 data in order to model and analyze the COVID-19 mortality rates in Somalia. Combining the log-logistic distribution and the tangent function yields the flexible extension log-logistic tangent (LLT) distribution, a new two-parameter distribution. This new distribution has a number of excellent statistical and mathematical properties, including a simple failure rate function, reliability function, and cumulative distribution function. Maximum likelihood estimation (MLE) is used to estimate the unknown parameters of the proposed distribution. A numerical and visual result of the Monte Carlo simulation is obtained to evaluate the use of the MLE method. In addition, the LLT model is compared to the well-known two-parameter, three-parameter, and four-parameter competitors. Gompertz, log-logistic, kappa, exponentiated log-logistic, Marshall-Olkin log-logistic, Kumaraswamy log-logistic, and beta log-logistic are among the competing models. Different goodness-of-fit measures are used to determine whether the LLT distribution is more useful than the competing models in COVID-19 data of mortality rate analysis.

22 citations
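The paper does not spell out the LLT form here, but a common way to build a tangent-generated family is F(x) = tan(πG(x)/4) for a baseline CDF G, since tan maps [0, π/4] onto [0, 1] monotonically. The sketch below uses that assumed construction with a log-logistic baseline and may differ from the paper's exact definition:

```python
import math

def loglogistic_cdf(x, alpha, beta):
    # standard log-logistic CDF; alpha = scale, beta = shape
    return x**beta / (alpha**beta + x**beta) if x > 0 else 0.0

def llt_cdf(x, alpha, beta):
    # tangent transform: tan(0) = 0 and tan(pi/4) = 1, so this is a valid CDF
    return math.tan(math.pi / 4.0 * loglogistic_cdf(x, alpha, beta))

def llt_failure_rate(x, alpha, beta, h=1e-6):
    # hazard h(x) = f(x) / (1 - F(x)), density via central difference
    f = (llt_cdf(x + h, alpha, beta) - llt_cdf(x - h, alpha, beta)) / (2 * h)
    return f / (1.0 - llt_cdf(x, alpha, beta))
```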


Journal ArticleDOI
TL;DR: The vulnerability of a road network for dangerous goods transportation (RNDGT) under cascading failure, considering intentional attack, is analyzed in this paper: the authors introduce the time characteristics of load distribution and node recovery ability into a previous cascading failure model and subdivide the state of a failed node into a normal state, a partial failure state, and a complete failure state.

21 citations


Journal ArticleDOI
TL;DR: A new generalization of the Pareto type II model is introduced and studied, which can be “right skewed” with heavy tail shape, and its corresponding failure rate can be “J-shape”, “decreasing” or “upside down (or increasing-constant-decreasing)”.
Abstract: In this paper, a new generalization of the Pareto type II model is introduced and studied. The new density can be “right skewed” with a heavy tail shape, and its corresponding failure rate can be “J-shape”, “decreasing” and “upside down (or increasing-constant-decreasing)”. The new model may be used as an “under-dispersed” and “over-dispersed” model. Bayesian and non-Bayesian estimation methods are considered. We assessed the performance of all methods via a simulation study. Bayesian and non-Bayesian estimation methods are compared in modeling real data via two applications. In modeling real data, the maximum likelihood method is the best estimation method, so we used it in comparing competitive models. Before using the maximum likelihood method, we performed simulation experiments to assess its finite-sample behavior using biases and mean squared errors.

18 citations
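For the Pareto type II (Lomax) baseline with unit scale, the shape MLE has a closed form, and the kind of bias/MSE simulation the abstract describes can be sketched as follows (sample size, shape, and replication count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lomax(c, n, rng):
    # inverse-CDF sampling from F(x) = 1 - (1 + x)^(-c):  x = (1 - u)^(-1/c) - 1
    u = rng.random(n)
    return (1.0 - u) ** (-1.0 / c) - 1.0

def mle_shape(x):
    # closed-form MLE of the shape for unit scale: c_hat = n / sum(log(1 + x_i))
    return len(x) / np.sum(np.log1p(x))

c_true, n, reps = 2.0, 200, 500
estimates = np.array([mle_shape(sample_lomax(c_true, n, rng)) for _ in range(reps)])
bias = estimates.mean() - c_true
mse = np.mean((estimates - c_true) ** 2)
```

Both the bias and the MSE shrink as n grows, which is the finite-sample behavior the authors check before trusting the MLE on real data.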


Journal ArticleDOI
TL;DR: A planning tool for congestion forecasting and reliability assessment of overhead distribution lines is proposed; it takes advantage of optimal active power dispatching among “congestion-nearby resources” to avoid congestion criticalities.
Abstract: Integration of DC grids into AC networks will realize hybrid AC/DC grids, a new energetic paradigm which will become widespread in the future due to the increasing availability of DC-based generators, loads and storage systems. Furthermore, the large-scale connection of intermittent renewable sources to distribution grids could cause security and congestion issues affecting line behaviour and reliability performance. This paper aims to propose a planning tool for congestion forecasting and reliability assessment of overhead distribution lines. The tool inputs consist of a single line diagram of a real or synthetic grid and a set of 24-h forecasting time series concerning climatic conditions and grid resource operative profiles. The developed approach aims to avoid congestion criticalities, taking advantage of optimal active power dispatching among “congestion-nearby resources”. A case study is analysed to validate the implemented control strategy considering a modified IEEE 14-Bus System with introduction of renewables. The tool also implements reliability prediction formulas to calculate an overhead line reliability function in congested and congestion-avoided conditions. A quantitative evaluation underlines the reliability performance achievable after the congestion strategy action.

18 citations


Journal ArticleDOI
TL;DR: Results show that the annulus loop has the lowest reliability and is the most likely to fail, and corresponding control measures are proposed that can significantly reduce the failure risk of the tree.

15 citations


Journal ArticleDOI
23 Mar 2021
TL;DR: A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the corresponding increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed.
Abstract: Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications), by electrical, optical and mechanical measurements and testing; checked (screened) during fabrication; and, if necessary and appropriate, maintained in the field during the product’s operation. Prognostics and health monitoring (PHM) effort can be of significant help, especially at the last, operational stage, of the product use. Accordingly, a simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the corresponding increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve has been assessed. The general concepts are illustrated by a numerical example. The model can be employed, along with other PHM forecasting and interfering tools and means, to evaluate and to maintain the high level of the reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.

14 citations
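With a constant elevated failure rate λ (a simplification of the bathtub-curve model described above), the RUL for a required probability of non-failure R* follows directly from R(t) = exp(-λt). The failure rates and threshold below are invented for illustration:

```python
import math

def remaining_useful_life(lambda_elevated, r_threshold):
    """Time until reliability R(t) = exp(-lambda * t) drops below a
    required probability-of-non-failure threshold, for a constant
    (elevated) failure rate detected by health monitoring."""
    return -math.log(r_threshold) / lambda_elevated

# example: failure rate rises from 1e-6/h to 5e-6/h after an anomaly is detected
rul_before = remaining_useful_life(1e-6, 0.99)
rul_after = remaining_useful_life(5e-6, 0.99)
```

A fivefold increase in the failure rate cuts the RUL by exactly a factor of five in this constant-rate sketch.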


Journal ArticleDOI
TL;DR: A fast reliability approximation method based on the Lyapunov central limit theorem is proposed for heterogeneous 1-out-of-n cold standby systems with non-identical components; it can estimate the reliability of large-scale CSSs efficiently and accurately.

14 citations
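The CLT idea is that a 1-out-of-n cold standby system's lifetime is the sum of the n component lifetimes (components are used one after another), so its reliability can be approximated by a normal tail probability. A minimal sketch with illustrative component parameters:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cold_standby_reliability(t, means, variances):
    """Normal (CLT) approximation for a 1-out-of-n cold standby system:
    system lifetime = sum of component lifetimes, so
    R(t) ~ 1 - Phi((t - sum(mu_i)) / sqrt(sum(var_i)))."""
    mu = sum(means)
    sigma = math.sqrt(sum(variances))
    return 1.0 - norm_cdf((t - mu) / sigma)

# 50 non-identical exponential components (mean mu_i, variance mu_i^2)
means = [100.0 + i for i in range(50)]
variances = [m * m for m in means]
r = cold_standby_reliability(5000.0, means, variances)
```

The appeal is that this costs O(n) regardless of how many components there are, whereas the exact reliability involves an n-fold convolution.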


Journal ArticleDOI
TL;DR: A heuristic algorithm based on improved NSGA-II is developed to solve CPP-MLF efficiently; it performs well on the number of controllers, worst-case delay, and robustness, while producing acceptable runtime overheads.

Journal ArticleDOI
TL;DR: In this paper, a two-parameter flexible extension of the Burr-Hatke distribution using the inverse-power transformation is proposed, and the failure rate of the new distribution can be an increasing shape, a decreasing shape, or an upside-down bathtub shape.
Abstract: This article introduces a two-parameter flexible extension of the Burr-Hatke distribution using the inverse-power transformation. The failure rate of the new distribution can be an increasing shape, a decreasing shape, or an upside-down bathtub shape. Some of its mathematical properties are calculated. Ten estimation methods, including classical and Bayesian techniques, are discussed to estimate the model parameters. The Bayes estimators for the unknown parameters, based on the squared error, general entropy, and linear exponential loss functions, are provided. The ranking and behavior of these methods are assessed by simulation results with their partial and overall ranks. Finally, the flexibility of the proposed distribution is illustrated empirically using two real-life datasets. The analyzed data show that the introduced distribution provides a better fit than some important competing distributions such as the Weibull, Frechet, gamma, exponential, inverse log-logistic, inverse weighted Lindley, inverse Pareto, inverse Nakagami-M, and Burr-Hatke distributions.

Journal ArticleDOI
TL;DR: In this paper, the optimal placement of distributed energy resources (DERs) to enhance the reliability and improve the voltage profile of the distribution feeder is proposed, and the effect of varying DER penetration on reliability and voltage profile is analyzed.

Journal ArticleDOI
TL;DR: A novel predictive maintenance (PdM) assessment matrix is proposed to overcome problems of failure mode and symptom analysis, which is tested using a case study of a centrifugal compressor and validated using empirical data provided by the case study company.
Abstract: Dependability analyses in the design phase are common in IEC 60300 standards to assess the reliability, risk, maintainability, and maintenance supportability of specific physical assets. Reliability and risk assessment uses well-known methods such as failure modes, effects, and criticality analysis (FMECA), fault tree analysis (FTA), and event tree analysis (ETA) to identify critical components and failure modes based on failure rate, severity, and detectability. Monitoring technology has evolved over time, and a new method of failure mode and symptom analysis (FMSA) was introduced in ISO 13379-1 to identify the critical symptoms and descriptors of failure mechanisms. FMSA is used to estimate monitoring priority, and this helps to determine the critical monitoring specifications. However, FMSA cannot determine the effectiveness of technical specifications that are essential for predictive maintenance, such as detection techniques (capability and coverage), diagnosis (fault type, location, and severity), or prognosis (precision and predictive horizon). The paper proposes a novel predictive maintenance (PdM) assessment matrix to overcome these problems, which is tested using a case study of a centrifugal compressor and validated using empirical data provided by the case study company. The paper also demonstrates the possible enhancements introduced by Industry 4.0 technologies.

Journal ArticleDOI
18 Nov 2021-Sensors
TL;DR: In this paper, the authors analyzed the reliability of a wireless sensor network for low power and low-cost applications and calculated its reliability considering the real environmental conditions and the real arrangement of the nodes deployed in the field.
Abstract: Wireless Sensor Networks are subjected to some design constraints (e.g., processing capability, storage memory, energy consumption, fixed deployment, etc.) and to outdoor harsh conditions that deeply affect the network reliability. The aim of this work is to provide a deeper understanding of the way redundancy and node deployment affect the network reliability. In more detail, the paper analyzes the design and implementation of a wireless sensor network for low-power and low-cost applications and calculates its reliability considering the real environmental conditions and the real arrangement of the nodes deployed in the field. The reliability of the system has been evaluated by looking for both hardware failures and communication errors. A reliability prediction based on different handbooks has been carried out to estimate the failure rate of the self-designed and self-developed nodes intended for use in harsh environments. Then, using fault tree analysis, the real deployment of the nodes is taken into account, considering the Wi-Fi coverage area and the possible communication links between nearby nodes. The findings show how different node arrangements provide significantly different reliability. The positioning is therefore essential in order to obtain maximum performance from a wireless sensor network.
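The effect of node arrangement on reliability can be illustrated with a toy fault-tree-style calculation (the probabilities are hypothetical, not the paper's handbook-derived failure rates): a node delivers its data only if its hardware works and at least one communication path is up.

```python
def node_availability(r_hw, p_direct, p_hop=0.0, r_neighbor=0.0):
    """Probability a sensor node delivers its data: the hardware must work
    AND at least one communication path must be available, either the
    direct link or a two-hop relay through a working neighbor."""
    p_relay = r_neighbor * p_hop * p_hop          # neighbor up, both hops up
    p_comm = 1.0 - (1.0 - p_direct) * (1.0 - p_relay)
    return r_hw * p_comm

# same hardware, two deployments: isolated node vs. node with a relay neighbor
isolated = node_availability(0.95, 0.80)
with_relay = node_availability(0.95, 0.80, p_hop=0.90, r_neighbor=0.95)
```

Moving a node into another node's coverage area (so a relay path exists) raises its availability from 0.76 to roughly 0.91 in this example, mirroring the paper's finding that positioning alone changes reliability significantly.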

Journal ArticleDOI
TL;DR: The proposed method ensures reliability and resource utilization for electrical products; compared with the traditional reliability assessment method, it achieves a reasonable life with better resource utilization than the traditional design method.
Abstract: The mathematical concept of green design method is proposed based on the relation between energy and life cycle, including domain redefinition, geometric probability, failure rate and reliability e...

Journal ArticleDOI
TL;DR: In this article, fast solutions to data communication problems are proposed to send the required d units of a user's data from the desired source node to the destination node within constraints such as a permissible error rate e, a time constraint T, and a maintenance budget constraint B.

Journal ArticleDOI
TL;DR: This study aims to perform multi-state system reliability analysis in energy storage facilities of SAIPA Corporation to extract a predictive model for failure behavior as well as to analyze the effect of shocks on deterioration.
Abstract: In some environments, the failure rate of a system depends not only on time but also on the system condition, such as vibrational level, efficiency and the number of random shocks, each of which causes failure. In this situation, systems can keep working, though they fail gradually. So, the purpose of this paper is modeling multi-state system reliability analysis in a capacitor bank under fatal and nonfatal shocks by a simulation approach. In some situations, there may be several levels of failure where the system performance diminishes gradually. However, if the level of failure is beyond a certain threshold, the system may stop working. Transition from one faulty stage to the next can lead the system to more rapid degradation. Thus, in failure analysis, the authors need to consider the transition rate from these stages in order to model the failure process. This study aims to perform multi-state system reliability analysis in energy storage facilities of SAIPA Corporation. This is performed to extract a predictive model for failure behavior as well as to analyze the effect of shocks on deterioration. The results indicate that the reliability of the system improved by 6%. The results of this study can provide more confidence for critical system designers who are engaged in ensuring proper system performance beyond economic design.
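A minimal Monte Carlo sketch of the shock model described above: shocks arrive as a Poisson process, fatal shocks kill the system outright, and nonfatal shocks step it through degraded states until the last working state is exhausted. The rates, probabilities, and state count are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_reliability(t_mission, shock_rate, p_fatal, n_states, n_runs, rng):
    """Monte Carlo estimate of multi-state mission reliability under shocks:
    each shock is fatal with probability p_fatal, otherwise it degrades the
    system one state; failure also occurs after n_states nonfatal shocks."""
    survived = 0
    for _ in range(n_runs):
        n_shocks = rng.poisson(shock_rate * t_mission)
        fatal = rng.random(n_shocks) < p_fatal
        nonfatal = n_shocks - fatal.sum()
        if not fatal.any() and nonfatal < n_states:
            survived += 1
    return survived / n_runs

r = simulate_reliability(t_mission=10.0, shock_rate=0.2, p_fatal=0.1,
                         n_states=4, n_runs=20000, rng=rng)
```

For these parameters the estimate can be cross-checked analytically by Poisson thinning (no fatal shocks AND fewer than four nonfatal ones), which gives roughly 0.73.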

Journal ArticleDOI
07 Nov 2021-Symmetry
TL;DR: In this article, the problem of estimating the unknown xgamma parameter and some survival characteristics, such as reliability and failure rate functions in the presence of adaptive type-II progressive hybrid censored data is considered.
Abstract: Censoring mechanisms are widely used in various life tests, such as medicine, engineering, biology, etc., as they save (overall) test time and cost. In this context, we consider the problem of estimating the unknown xgamma parameter and some survival characteristics, such as reliability and failure rate functions, in the presence of adaptive type-II progressive hybrid censored data. For this purpose, the maximum likelihood and Bayesian inferential approaches are used. Using the observed Fisher information under s-normal approximation, different asymptotic confidence intervals for any function of the unknown parameter were constructed. Using the flexible gamma prior, Bayes estimators against the squared-error loss were developed. Two procedures of Bayesian approximation, Lindley’s approximation and the Metropolis–Hastings algorithm, were used to carry out the Bayes estimates and to construct the associated credible intervals. An extensive simulation study was implemented to compare the performance of the different methods. To validate the proposed methodologies of inference, two practical studies using datasets from the engineering and chemical fields are discussed.
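A bare-bones random-walk Metropolis–Hastings sampler for the xgamma parameter θ, using the standard xgamma density f(x; θ) = θ²/(1+θ)·(1 + θx²/2)·e^(−θx) and a gamma prior, might look like this. It handles the complete-sample case for simplicity, not the paper's adaptive type-II progressive hybrid censoring scheme, and all numeric settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_xgamma(theta, n, rng):
    # xgamma is a mixture: Exp(theta) w.p. theta/(1+theta), Gamma(3, theta) otherwise
    use_exp = rng.random(n) < theta / (1.0 + theta)
    return np.where(use_exp, rng.exponential(1.0 / theta, n),
                    rng.gamma(3.0, 1.0 / theta, n))

def log_posterior(theta, x, a=1.0, b=1.0):
    # gamma(a, b) prior plus the xgamma log-likelihood
    if theta <= 0:
        return -np.inf
    loglik = (2 * np.log(theta) - np.log(1 + theta)) * len(x) \
             + np.sum(np.log1p(theta * x**2 / 2.0)) - theta * x.sum()
    return loglik + (a - 1) * np.log(theta) - b * theta

x = sample_xgamma(1.5, 300, rng)
theta, chain = 1.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.2)         # random-walk proposal
    if np.log(rng.random()) < log_posterior(prop, x) - log_posterior(theta, x):
        theta = prop
    chain.append(theta)
post_mean = np.mean(chain[1000:])               # discard burn-in
```

With 300 complete observations, the posterior mean lands close to the true θ = 1.5; under censoring only the likelihood term changes.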

Journal ArticleDOI
24 Aug 2021
TL;DR: Theoretical and applied researchers have been frequently interested in proposing alternative skewed and symmetric lifetime parametric models that provide greater flexibility in modeling real-life data in several applied sciences, so a three-parameter bounded lifetime model called the exponentiated new power function (E-NPF) distribution is introduced.
Abstract: Theoretical and applied researchers have been frequently interested in proposing alternative skewed and symmetric lifetime parametric models that provide greater flexibility in modeling real-life data in several applied sciences. To fill this gap, we introduce a three-parameter bounded lifetime model called the exponentiated new power function (E-NPF) distribution. Some of its mathematical and reliability features are discussed. Furthermore, many possible shapes over certain choices of the model parameters are presented to understand the behavior of the density and hazard rate functions. For the estimation of the model parameters, we utilize eight classical approaches of estimation and provide a simulation study to assess and explore the asymptotic behaviors of these estimators. The maximum likelihood approach is used to estimate the E-NPF parameters under the type II censored samples. The efficiency of the E-NPF distribution is evaluated by modeling three lifetime datasets, showing that the E-NPF distribution gives a better fit over its competing models such as the Kumaraswamy-PF, Weibull-PF, generalized-PF, Kumaraswamy, and beta distributions.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a hybrid resilience metric by analyzing the applicability of two existing methods, deterministic metric and probabilistic metric, to represent the inherent properties of equipment.

Journal ArticleDOI
TL;DR: The proposed UFP-LRC enables the data blocks that are stored on more failure-prone disks/nodes to tolerate a greater number of failures while suffering from less repair cost than others, leading to a substantial improvement of the overall reliability and repair performance for cloud storage systems.
Abstract: In recent years, erasure codes have become the de facto standard for data protection in large scale distributed cloud storage systems at the cost of an affordable storage overhead. However, traditional erasure coding schemes, such as Reed-Solomon codes, suffer from high reconstruction cost and I/Os. The recent past has seen a plethora of efforts to optimize the tradeoff between the reconstruction cost, I/Os and storage overhead. Quite different from all prior studies, in this paper, our erasure coding technique makes the first attempt to take advantage of the unequal failure rates across the disks/nodes to optimize the system reliability and reconstruction performance. Specifically, our proposed technique, the Unequal Failure Protection based Local Reconstruction Code (UFP-LRC), divides the data blocks into several unequal-sized groups with local parities, assigning the data blocks stored on more failure-prone disks/nodes into the smaller-sized group, so as to provide unequal failure protection for each group. In this way, by exploiting the nonuniform local parity degrees, the proposed UFP-LRC enables the data blocks that are stored on more failure-prone disks/nodes to tolerate a greater number of failures while suffering from less repair cost than others, leading to a substantial improvement of the overall reliability and repair performance for cloud storage systems. We perform numerical analysis and build a prototype storage system to verify our approach. The analytical results show that the UFP-LRC technique increasingly outperforms LRC as the failure rate ratio grows. Also, extensive experiments show that, when compared to LRC, UFP-LRC is able to achieve a 10 to 15 percent improvement in throughput, and an 8 to 12 percent reduction in decoding latency, while retaining a comparable overall reliability.
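The placement idea can be sketched conceptually (hypothetical failure rates and group sizes; the actual UFP-LRC construction with local parities is more involved): sort the data blocks by the failure rate of their host disk/node and put the most failure-prone blocks into the smallest local group, so that group's local parity protects fewer blocks and its repairs read less data.

```python
def ufp_grouping(block_failure_rates, group_sizes):
    """Assign data blocks to unequal-sized local groups: the most
    failure-prone blocks go into the smallest group."""
    order = sorted(range(len(block_failure_rates)),
                   key=lambda i: block_failure_rates[i], reverse=True)
    groups, start = [], 0
    for size in sorted(group_sizes):            # smallest group first
        groups.append(order[start:start + size])
        start += size
    return groups

# 6 data blocks with annual failure rates of their hosts; groups of size 2 and 4
rates = [0.02, 0.10, 0.04, 0.15, 0.01, 0.05]
groups = ufp_grouping(rates, [2, 4])
```

Here the two riskiest blocks (rates 0.15 and 0.10) end up alone in the two-block group, so their local reconstruction touches only two blocks instead of four.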

Journal ArticleDOI
Wei Qiu, Qiu Tang, Wenxuan Yao, Yuhong Qin, Jun Ma
TL;DR: An improved k-nearest neighbor (IkNN) method is proposed to identify potential outliers, and a weighted fusion Bayesian method is proposed to fuse multiple extreme environmental stresses and the failure rate using the proposed nonlinear fusion function.
Abstract: The failure evaluation of electric energy metering equipment is essential for the equipment design and accurate measurement of electric energy, especially under extreme environmental stress. However, actual failure assessment is often affected by environmental noise and insufficient interpretability. To address this problem, this article first proposes an improved k-nearest neighbor (IkNN) to identify potential outliers. In addition, an optimized distance function is used to obtain the score for each outlier. Next, a probability analysis method, namely, the weighted fusion Bayesian (WFB), is proposed to fuse multiple extreme environmental stresses and failure rate using the proposed nonlinear fusion function. Combining the WFB and the IkNN, examples from three extreme environmental regions show that the proposed evaluation framework has a higher assessment performance and less uncertainty. Compared with the classical prediction methods, our framework has strong outlier detection and failure prediction performance even under the condition of small samples. More importantly, the parameters of this model are interpretable compared to some conventional approaches.
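A plain kNN-distance outlier score, the starting point that IkNN refines, can be sketched as follows (the paper's optimized distance function is not reproduced here):

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    """Score each sample by its mean Euclidean distance to its k nearest
    neighbors; larger scores flag potential outliers."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)              # ignore self-distance
    knn = np.sort(dist, axis=1)[:, :k]
    return knn.mean(axis=1)

# clustered measurements plus one planted outlier (last row)
X = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 0.9],
              [1.1, 1.1], [5.0, 5.0]])
scores = knn_outlier_scores(X, k=3)
```

The planted point at (5, 5) receives by far the highest score, so a simple threshold on the score separates outliers from the cluster.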

Journal ArticleDOI
08 Mar 2021
TL;DR: A new approach was developed based on the hybridization of trained models to provide a more efficient model for a more accurate prediction of the pipe failure rates of water distribution network.
Abstract: The pipes as one of the main and important components of a water distribution network break during operation due to various factors. Developing models for pipes failure rate prediction can be one of the most important tools for managers and stakeholders during optimal operation of the water distribution network. In this study, the statistical and soft models such as Linear Regression, Generalized Linear Regression, Support Vector Machine, Feed Forward Neural Network (FFNN), Radial-Based Function Neural Network (RBFNN), and Adaptive Neuro-Fuzzy Inference System (ANFIS) were studied in order to predict the pipes failure rate based on the characteristics of Gorgan city water distribution network including diameter, length, age, installation depth, and number of failures of each pipe. In order to determine the optimal values of the parameters of each model, appropriate error indices including correlation coefficient (R), Mean Square Error (MSE), and Correlation Mean Square Error Ratio (CMSER) for training and test data were calculated, and the values of the parameters related to the model with the highest value of the CMSER index were considered as the model optimal values. Furthermore, in the validation stage, the values of R and MSE error indices for each of the above models were considered as a criterion for selecting the most appropriate model for predicting pipe failure rate. The findings show that among the soft and statistical models investigated, ANFIS with MSE of 0.071 and R of 0.92 can predict the failure rate of the studied network pipes more efficiently and more accurately than other models. Yet, despite the superiority of this model over other models, this model cannot accurately predict the failure rate of the studied network pipes due to its relatively high MSE value. 
Therefore, a new approach was developed based on the hybridization of trained models to provide a more efficient model for a more accurate prediction of the pipe failure rates of the water distribution network. In this approach, the values of the network pipe failure rate predicted by each of the soft and statistical models are considered as independent input variables, and the observational failure rate values are considered as the dependent output variable of the ANFIS model. A comparison between the validation indices of the non-hybrid models and the results of the proposed hybrid prediction model reveals that the developed hybrid model increased the R value by 8.1% (compared to the ANFIS model) up to 260% (compared to the RBFNN model). It also decreased the MSE value by 37% (compared to the FFNN model) up to 58% (compared to the RBFNN model). Moreover, the hybrid model, compared to the superior non-hybrid ANFIS model, decreased the MSE error rate by 45%. The findings show that the proposed model can significantly improve the accuracy of predicting the failure rate of pipes, compared to other existing models.
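The hybridization scheme is essentially stacking: the base models' predictions become inputs to a meta-model trained against the observed failure rates. A sketch on synthetic data, with a linear least-squares meta-model standing in for ANFIS:

```python
import numpy as np

def fit_stacked_weights(base_preds, y):
    """Fit a linear meta-model (with intercept) on base-model predictions."""
    A = np.column_stack([base_preds, np.ones(len(y))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def stacked_predict(base_preds, w):
    return np.column_stack([base_preds, np.ones(base_preds.shape[0])]) @ w

rng = np.random.default_rng(3)
y = rng.uniform(0.1, 2.0, 80)                   # "observed" failure rates
# two synthetic base models: a noisy one and a biased, noisy one
base = np.column_stack([y + rng.normal(0.0, 0.3, 80),
                        0.8 * y + 0.1 + rng.normal(0.0, 0.2, 80)])
w = fit_stacked_weights(base, y)
mse_stack = np.mean((stacked_predict(base, w) - y) ** 2)
mse_best_base = min(np.mean((base[:, j] - y) ** 2) for j in range(2))
```

On the training data, the stacked combination can never do worse than the best single base model, since using that model alone is one of the linear combinations available to the least-squares fit.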

Journal ArticleDOI
TL;DR: This study uses data of past offshore fire incidents in the Gulf of Mexico to predict future incidents and shows how a nonhomogeneous Poisson process (NHPP) assumption, where the failure rate is a function of time, enables a better understanding of performance and can be used to predict future incidents more accurately.
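Under a power-law (Crow-AMSAA) NHPP, the expected number of incidents in any interval follows directly from the mean function m(t) = αt^β; the parameter values below are illustrative, not fitted to the Gulf of Mexico data:

```python
def nhpp_expected_incidents(t1, t2, alpha, beta):
    """Expected number of incidents in (t1, t2] for an NHPP with
    power-law intensity lambda(t) = alpha*beta*t^(beta-1),
    i.e. mean function m(t) = alpha * t^beta."""
    return alpha * (t2 ** beta - t1 ** beta)

# beta < 1: decreasing rate of occurrence (reliability growth);
# beta > 1 would mean incidents arriving faster over time
early = nhpp_expected_incidents(0.0, 5.0, alpha=2.0, beta=0.8)
late = nhpp_expected_incidents(5.0, 10.0, alpha=2.0, beta=0.8)
```

With β < 1 the second five-year window expects fewer incidents than the first, which is exactly the trend a constant-rate (homogeneous) model would miss.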

Journal ArticleDOI
TL;DR: The reliability assessment of a wind and solar energy based power system at different levels is presented, and the usefulness of the proposed technique is demonstrated through different evaluation data.

Journal ArticleDOI
TL;DR: This study presents a method for determining reliability models of lead batteries by investigating individual failure modes and shows that batteries are subject to ageing, which results in time-dependent failure rates of different magnitudes.
Abstract: The safety requirements in vehicles continuously increase due to more automated functions using electronic components. Besides the reliability of the components themselves, a reliable power supply is crucial for a safe overall system. Different architectures for a safe power supply consider the lead battery as a backup solution for safety-critical applications. Various ageing mechanisms influence the performance of the battery and have an impact on its reliability. In order to qualify the battery with its specific failure modes for use in safety-critical applications, it is necessary to prove this reliability by failure rates. Previous investigations determine the fixed failure rates of lead batteries using data from teardown analyses to identify the battery failure modes but did not include the lifetime of these batteries examined. Alternatively, lifetime values of battery replacements in workshops without knowing the reason for failure were used to determine the overall time-dependent failure rate. This study presents a method for determining reliability models of lead batteries by investigating individual failure modes. Since batteries are subject to ageing, the analysis of lifetime values of different failure modes results in time-dependent failure rates of different magnitudes. The failure rates of the individual failure modes develop with different shapes over time, which allows their ageing behaviour to be evaluated.
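Time-dependent failure rates per failure mode are commonly modeled with Weibull hazards, and for independent competing failure modes the mode hazards simply add. A sketch with two hypothetical battery failure modes (mode names and parameters invented for illustration, not the paper's fitted values):

```python
def weibull_hazard(t, beta, eta):
    # h(t) = (beta/eta) * (t/eta)^(beta-1); beta > 1 means the mode is ageing
    return (beta / eta) * (t / eta) ** (beta - 1)

# two hypothetical failure modes with different ageing behaviour: (beta, eta in years)
modes = {"water_loss": (3.0, 8.0), "acid_stratification": (1.5, 5.0)}

def combined_hazard(t):
    # independent competing failure modes: total hazard is the sum of mode hazards
    return sum(weibull_hazard(t, b, e) for b, e in modes.values())

h2, h6 = combined_hazard(2.0), combined_hazard(6.0)
```

Because both shape parameters exceed 1, each mode's failure rate, and hence the combined rate, grows with age, matching the time-dependent failure rates of different magnitudes described above.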

Journal ArticleDOI
11 Jun 2021
TL;DR: The authors explore how the universal generating function can be used to solve network reliability problems with exponentially distributed failure rates and propose an efficient algorithm to compute the reliability indices of the network.
Abstract: Network reliability is one of the most important concepts in this modern era. Reliability characteristics, component significance measures, such as the Birnbaum importance measure, critical importance measure, the risk growth factor and average risk growth factor, and network reliability stability of the communication network system have been discussed in this paper to identify the critical components in the network, and also to quantify the impact of component failures. The study also proposes an efficient algorithm to compute the reliability indices of the network. The authors explore how the universal generating function can work to solve the problems related to the network using the exponentially distributed failure rate. To illustrate the proposed algorithm, a numerical example has been taken.
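The Birnbaum importance of component i is the difference between system reliability with component i forced up and forced down. A small sketch on an assumed series-parallel network (not the paper's example network), with reliability computed by exact state enumeration:

```python
from itertools import product

def system_works(states):
    # example structure: component 0 in series with a parallel pair (1, 2)
    return states[0] and (states[1] or states[2])

def reliability(p, fixed=None):
    """Exact system reliability by enumerating component states;
    `fixed` forces selected components up (1) or down (0)."""
    fixed = fixed or {}
    total = 0.0
    for states in product([0, 1], repeat=len(p)):
        if any(states[i] != v for i, v in fixed.items()):
            continue
        prob = 1.0
        for i, s in enumerate(states):
            if i not in fixed:
                prob *= p[i] if s else 1.0 - p[i]
        if system_works(states):
            total += prob
    return total

def birnbaum(i, p):
    # I_B(i) = R(system | component i works) - R(system | component i fails)
    return reliability(p, {i: 1}) - reliability(p, {i: 0})

p = [0.9, 0.8, 0.7]
ib = [birnbaum(i, p) for i in range(3)]
```

The series component dominates the ranking, as expected: failing it fails the whole network, while each parallel component has a working backup.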

Journal ArticleDOI
TL;DR: By quantifying the impact of parameter variation, this work enables utilities to rank their priorities in the decision-making process, invest optimally in the distribution network with respect to reliability, and obtain more comprehensive solutions.

Proceedings ArticleDOI
11 Apr 2021
TL;DR: In this paper, a fault tree analysis (FTA) is used to study the reliability performance of a microgrid in grid-tied mode with photovoltaics (PV), wind turbines (WT), and battery energy storage systems (BESS) as distributed generators.
Abstract: Fault tree analysis (FTA) is a top-down reliability evaluation method based on Boolean logic and is used to identify the potential causes of system failures (top event) that would possibly occur due to various combinations of component failures (basic events). In this paper, FTA is used to study the reliability performance of a microgrid in grid-tied mode with photovoltaics (PV), wind turbines (WT), and battery energy storage systems (BESS) as distributed generators. Three FTA analyses are performed for three cases, namely microgrid (MG) in islanded mode, utility grid (UG), and finally MG in grid-tied mode (UG + MG). Failure rate and repair rate data for the analyses are drawn from several public databases, reports, and literature. Quantitative and qualitative assessments of the FTA results and performance measures such as unavailability, conditional failure intensity, failure frequency, and the number of failures are presented. Furthermore, to provide a more comprehensive and thorough understanding of the system reliability, several other essential performance measures such as marginality, criticality, diagnostic, risk achievement, and risk reduction for each basic event and gate are also calculated and presented. Cut sets of the top failure events are also documented. Results prove the feasibility and effectiveness of the proposed method.
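Given minimal cut sets and independent basic-event probabilities, the top-event probability can be computed exactly by state enumeration for small trees; the cut sets and probabilities below are hypothetical, not drawn from the paper's microgrid data:

```python
from itertools import product

# hypothetical minimal cut sets of the top event (indices of basic events)
cut_sets = [{0}, {1, 2}, {1, 3}]
p_fail = [0.01, 0.05, 0.10, 0.08]       # basic-event failure probabilities

def top_event_probability(cut_sets, p_fail):
    """Exact top-event probability by enumerating basic-event states:
    the top event occurs when every element of some cut set has failed."""
    total = 0.0
    for states in product([0, 1], repeat=len(p_fail)):
        prob = 1.0
        for s, p in zip(states, p_fail):
            prob *= p if s else 1.0 - p
        if any(all(states[i] for i in cs) for cs in cut_sets):
            total += prob
    return total

p_top = top_event_probability(cut_sets, p_fail)
```

For realistic fault trees, enumeration is replaced by inclusion-exclusion or the rare-event approximation over the cut sets, but the exact value is a useful cross-check on small examples.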