
Showing papers on "Reliability theory published in 2022"


Journal ArticleDOI
01 Jan 2022
TL;DR: In this article, an adaptive importance sampling approach is proposed to estimate the probability of line overflow in a high voltage direct current power transmission grid with linear reliability constraints on power injections and line currents.
Abstract: Electricity production currently generates approximately 25% of greenhouse gas emissions in the USA. Thus, increasing the amount of renewable energy is a key step toward carbon neutrality. However, integrating a large amount of fluctuating renewable generation is a significant challenge for power grid operation and planning. Grid reliability, i.e., the ability to meet operational constraints under power fluctuations, is probably the most important of these challenges. In this letter, we propose computationally efficient and accurate methods to estimate the probability of line overflow, i.e., of reliability constraint violation, under a known distribution of renewable energy generation. To this end, we investigate an importance sampling approach, a flexible extension of Monte-Carlo methods, which adaptively changes the sampling distribution to generate more samples near the reliability boundary. The approach allows overload probability to be estimated in real time from only a few dozen random samples, compared to the thousands required by plain Monte-Carlo. Our study focuses on high voltage direct current power transmission grids with linear reliability constraints on power injections and line currents. We propose a novel, theoretically justified, physics-informed adaptive importance sampling algorithm and compare its performance to state-of-the-art methods on multiple IEEE power grid test cases.
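
As an illustration of the general idea (not the authors' algorithm), the sketch below estimates a small overload probability by importance sampling, shifting the sampling distribution toward the reliability boundary. The Gaussian fluctuation model, the linear constraint, and the chosen shift are assumptions made only for this example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed toy model: a single injection fluctuation x ~ N(0, 1); the line overloads when w * x > b.
    w, b = 1.0, 3.5
    overload = lambda x: w * x > b

    # Plain Monte-Carlo with a few thousand samples.
    x_mc = rng.normal(0.0, 1.0, 2000)
    p_mc = overload(x_mc).mean()

    # Importance sampling with a few dozen samples: draw from a proposal shifted
    # toward the reliability boundary, then reweight by the target/proposal density ratio.
    mu_q = b / w                                          # crude shift onto the boundary (assumed choice)
    x_is = rng.normal(mu_q, 1.0, 50)
    log_ratio = -0.5 * x_is**2 + 0.5 * (x_is - mu_q)**2   # log N(x;0,1) - log N(x;mu_q,1)
    p_is = np.mean(overload(x_is) * np.exp(log_ratio))

    print(f"plain MC: {p_mc:.2e}   importance sampling: {p_is:.2e}   exact: 2.33e-04")

With the proposal centered on the boundary, roughly half of the 50 samples land in the failure region, which is why a few dozen weighted draws can resolve a probability that plain Monte-Carlo would usually estimate as zero at this sample size.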

11 citations


Journal ArticleDOI
TL;DR: This article proposes a software belief reliability growth model (SBRGM) based on uncertain differential equations for the first time and shows that it performs better than several well-known probability-based SRGMs in terms of fitting and prediction ability.
Abstract: Software reliability plays an important role in modern society. To evaluate software reliability, software reliability growth models (SRGMs) investigate the number of software faults in the testing phase. Testing progress is inevitably influenced by dynamic, indeterministic fluctuations such as testing effort expenditure, testing efficiency and skill, and testing method and strategy. To model these dynamic fluctuations, several probability theory-based SRGMs have been proposed. However, probability theory is suitable for dealing with aleatory uncertainty but fails to capture the epistemic uncertainty that is widespread in software faults. Therefore, this article considers software reliability from a new perspective under the framework of uncertainty theory, a mathematical system distinct from probability theory, and proposes a software belief reliability growth model (SBRGM) based on uncertain differential equations for the first time. Based on this SBRGM, properties of essential software reliability metrics are investigated under belief reliability theory, which is a brand-new reliability theory. Parameter estimations for the unknown parameters in the SBRGM are presented. Furthermore, numerical examples and real data analyses illustrate the methodology in detail and show that it performs better than several well-known probability-based SRGMs in terms of fitting and prediction ability. Finally, an optimal software release policy is discussed.
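
For context (a standard form from uncertainty theory, not the paper's specific model), an uncertain differential equation driving such a growth model can be written as

    dX_t = f(t, X_t)\,dt + g(t, X_t)\,dC_t,

where C_t is a Liu (canonical) process, playing the role that Brownian motion plays in stochastic differential equations.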

10 citations


Journal ArticleDOI
TL;DR: In this paper, a general approach to assessing system reliability is presented, using the theory of signatures or survival signatures depending on whether the components of the system are of one type or of several types, together with preventive maintenance strategies for a multi-component system whose components are subject to both internal failures and fatal shocks.
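
For background (the standard signature representation referred to here, not a result specific to this paper), for a coherent system of n components with i.i.d. lifetimes the system reliability can be written as

    P(T > t) = \sum_{i=1}^{n} s_i \, P(X_{i:n} > t),

where s_i is the probability that the i-th ordered component failure causes system failure and X_{i:n} is the i-th order statistic of the component lifetimes; the survival signature generalizes this representation to systems with several component types.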

9 citations


Journal ArticleDOI
TL;DR: In this paper, the reliability factor model is proposed for examining latent regions with poor conditional reliability, and correlates thereof, in a classical test theory framework, providing an analogue to test information functions in item response theory.
Abstract: Reliability is a crucial concept in psychometrics. Although it is typically estimated as a single fixed quantity, previous work suggests that reliability can vary across persons, groups, and covariates. We propose a novel method for estimating and modeling case-specific reliability without repeated measurements or parallel tests. The proposed method employs a "Reliability Factor" that models the error variance of each case across multiple indicators, thereby producing case-specific reliability estimates. Additionally, we use Gaussian process modeling to estimate a nonlinear, non-monotonic function between the latent factor itself and the reliability of the measure, providing an analogue to test information functions in item response theory. The reliability factor model is a new tool for examining latent regions with poor conditional reliability, and correlates thereof, in a classical test theory framework.
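
In classical test theory terms (background only, not the paper's estimator), reliability is the ratio of true-score variance to observed-score variance,

    \rho = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2},

and the case-specific idea sketched above amounts to letting the error variance vary by case i, giving \rho_i = \sigma_T^2 / (\sigma_T^2 + \sigma_{E,i}^2); the per-case subscripting here is an illustrative assumption rather than the paper's notation.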

3 citations


Journal ArticleDOI
TL;DR: In this article, novel mission reliability and mission effective capacity metrics are introduced to take the time-varying characteristics of the wireless channel into account, specifically for multiconnectivity (MC)-enabled industrial radio systems.
Abstract: Various industrial Internet of Things applications demand execution periods throughout which no communication failure is tolerated. However, the classical understanding of reliability in the context of ultra-reliable low-latency communication (URLLC) does not reflect the time-varying characteristics of the wireless channel. In this article, we introduce novel mission reliability and mission effective capacity metrics that take these phenomena into account, while specifically studying multiconnectivity (MC)-enabled industrial radio systems. We assume uplink short-packet transmission with no channel state information at the URLLC user (the transmitter) and sporadic traffic arrival. We leverage the existing framework of dependability theory and provide closed-form expressions (CFEs) for the mission reliability of the MC system using the maximal-ratio combining scheme. We do so by utilizing the mean time to first failure, i.e., the expected time until a failure first occurs. We also derive exact CFEs for second-order statistics, such as the level crossing rate and average fade duration, which show how fades are distributed over time in fading channels. Furthermore, the design throughput maximization problem under the mission reliability constraint is solved numerically through the cross-entropy method.
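
For reference (the textbook Rayleigh-fading case, not necessarily the exact expressions derived in the paper), the second-order statistics mentioned here are the level crossing rate and average fade duration

    N_R(\rho) = \sqrt{2\pi}\, f_d\, \rho\, e^{-\rho^2}, \qquad
    \bar{t}(\rho) = \frac{e^{\rho^2} - 1}{\rho\, f_d \sqrt{2\pi}},

where f_d is the maximum Doppler frequency and \rho is the fade threshold normalized by the RMS signal level.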

3 citations


Journal ArticleDOI
TL;DR: In this paper, an improved evidence-theory method that meets the requirement of dynamic fusion of time-domain information is proposed for the state monitoring of a gas regulator station.
Abstract: Regulator stations are widely used in gas transmission and distribution systems, and their state monitoring is of great significance to the safe operation of gas pipe networks. Due to the complexity of the working environment and the limitations of sensors, the acquired information is uncertain, which makes the state monitoring result prone to errors. Evidence theory can handle this uncertainty effectively, but most existing improvements to evidence theory operate in the spatial domain and are not applicable to fusing time-domain information. In this article, an improved evidence-theory method is proposed for the state monitoring of a gas regulator station that meets the requirement of dynamic fusion of time-domain information. First, a back-propagation neural network is used to judge whether pieces of evidence conflict with each other; simulation results demonstrate that it judges conflicts well. On this basis, a relative conflict factor is proposed to modify the evidence, and a calculation method for an adaptive time attenuation factor is proposed to reduce the accumulated error. Dynamic fusion of the time-domain information is realized by combining the time attenuation factor and the relative conflict factor. Finally, the proposed method is applied to the state monitoring of a gas regulator station, and its feasibility and effectiveness are verified by experiments. When the evidence is strongly conflicting, the support degree of the proposed method for the correct proposition increases by 0.1478 more than that of the temporal evidence combination based on the relative reliability factor.
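
For readers unfamiliar with the fusion step, here is a minimal sketch of Dempster's classical combination rule for two basic probability assignments over a small frame of discernment; the frame and mass values are made up for illustration, and the paper's method additionally reweights the evidence with conflict and time attenuation factors before combining.

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions (dicts mapping frozenset -> mass) with Dempster's rule."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to the empty set
        return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

    # Hypothetical frame {normal, fault} with two conflicting sensor readings.
    N, F = frozenset({"normal"}), frozenset({"fault"})
    m1 = {N: 0.8, F: 0.1, N | F: 0.1}
    m2 = {N: 0.2, F: 0.7, N | F: 0.1}
    fused, K = dempster_combine(m1, m2)
    print(fused, "conflict =", round(K, 3))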

3 citations


Journal ArticleDOI
TL;DR: In this article, a truncated normal uncertainty distribution of lifetime is given, and the belief reliability evaluation is presented using the uncertain measure, which can achieve more accurate and stable mean time-to-failure (MTTF) results with uncertain right censored TTF data under the small sample situation compared to classical probability and Bayesian reliability methods.
Abstract: Uncertain right censoring (URC), in which the censoring settings of test units are unrelated to their failure times, often occurs in practical life testing. Small-sample situations are also common because test resources are usually limited, which introduces epistemic uncertainty into reliability evaluations due to the lack of information. In such situations, large-sample probability theory is no longer appropriate for reliability evaluation. In this paper, belief reliability evaluation is conducted with uncertain right censored time-to-failure (TTF) data under the small sample situation, based on belief reliability theory. First, to deal with epistemic uncertainty, a truncated normal uncertainty distribution of lifetime is given, and the belief reliability evaluation is presented using the uncertain measure. Then, the corresponding uncertain statistics method for unknown parameter estimation is provided with objective measures. Finally, a simulation study and a practical case are used to illustrate the proposed method. The results show that the proposed method is suitable for dealing with epistemic uncertainty and achieves more accurate and stable mean time-to-failure (MTTF) results with uncertain right censored TTF data under the small sample situation than classical probability and Bayesian reliability methods.

2 citations


Journal ArticleDOI
TL;DR: In this paper , a non-probabilistic credible Bayesian reliability model is proposed for three failure modes and developed to model the credibility of nonprobablistic reliability, which can be updated by introducing new sample points.

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose using the Copula function to calculate the reliability index of the bridge structural system during construction and carry out a sensitivity analysis of the bridge system reliability.
Abstract: Various random factors in the bridge construction process directly affect the safety of the bridge over its life cycle. Existing theories on the reliability of bridge structures mainly focus on the reliability of components and of the structural system in the completion and operation stages, while research on the reliability of the structural system during the construction stage is relatively lacking. Therefore, this paper proposes using the Copula function to calculate the reliability index of the bridge structural system during the construction process. The basic theory of the Copula function is introduced in detail, and the formulation is adapted to the actual conditions of bridge construction. Finally, a sensitivity analysis of the bridge system reliability is carried out. The results show that the proposed Copula-based method for calculating the reliability index of the bridge structural system during construction is widely applicable, simple to compute, and can be used in conjunction with the "interval estimation method", making it suitable for large and complex bridge structures. The results also confirm that the influence of correlation between failure modes on structural reliability should not be ignored during actual construction.
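
As a minimal illustration of coupling correlated failure modes with a copula (a generic Gaussian-copula sketch, not the paper's specific formulation), the snippet below computes the series-system failure probability for two failure modes whose marginal failure probabilities and correlation are assumed values.

    from scipy.stats import norm, multivariate_normal

    # Assumed marginal failure probabilities of two failure modes and their correlation.
    p1, p2, rho = 0.02, 0.05, 0.6

    # Gaussian copula: joint failure probability = Phi2(Phi^-1(p1), Phi^-1(p2); rho).
    z1, z2 = norm.ppf(p1), norm.ppf(p2)
    p_both = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf([z1, z2])

    # A series system fails if either mode occurs.
    p_series = p1 + p2 - p_both
    p_series_indep = p1 + p2 - p1 * p2   # independence assumption, for comparison
    print(f"correlated: {p_series:.4f}, independent: {p_series_indep:.4f}")

The gap between the two printed values is the kind of effect the sensitivity analysis above refers to: ignoring the correlation between failure modes changes the system-level reliability estimate.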

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the influence of the network topology on the reliability of satellite communication networks (SCNs) and showed that, to achieve higher reliability, it is necessary to use an SCN with a radial structure.
Abstract: Objectives. The most important distinguishing feature of satellite communication networks (SCNs) is topology, which consolidates the scheme for combining nodes and communication channels into a single structure and largely determines the main characteristics of communication systems. The following topologies are used in SCNs: fully connected, tree-like, ring-shaped, and radial (“star” type). The topology can be changed depending on the tasks being solved, for example, to ensure high reliability. The most frequently used indicator characterizing the reliability of communication networks is the readiness factor. Considering the SCN as a complex recoverable system, it is advisable to analyze the operational readiness factor along with the readiness factor. This paper investigates the influence of the network topology on the reliability of the SCN. Methods. Queuing theory was used to analyze the flow of events, that is, the flow of failures and recoveries. Results. Assuming that the exponential Mean Time Between Failures (MTBF) model can be used for a central node with a radial network topology, the time dependences of the operational readiness factor were obtained. The reliability of networks with ring and radial topologies was compared in terms of the operational readiness factor. Conclusions. To achieve higher reliability, it is necessary to use an SCN with a radial structure. For example, over a time interval of 12,000 h, the operational readiness factor of a two-node SCN with a radial structure is 0.9, whereas for an SCN with a ring topology with 2, 3, and 4 nodes it is 0.7, 0.59, and 0.5, respectively. The study also showed that the radial topology is more efficient even with less reliable nodes, that is, with higher failure rates, and its advantage increases as the number of nodes increases. However, in an SCN with a radial topology, failure of the central unit leads to complete degradation of the entire system.
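
For background (the standard single-unit result underlying readiness-factor comparisons, not the paper's network-level derivation), with constant failure rate \lambda and repair rate \mu the steady-state readiness (availability) factor is

    A = \frac{\mu}{\lambda + \mu} = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}},

and the operational readiness factor additionally requires failure-free operation over the task duration that follows the readiness instant.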

Proceedings ArticleDOI
20 Jul 2022
TL;DR: In this paper, the reliability of an electrical system used inside a spacecraft was analyzed using geodesics for an equivalent simplified reliability polynomial; the mean time to failure (MTTF), the failure rates, and the steady-state failure rates of the system were studied based on their geometric properties; and all exponential decay curves contained in the reliability hypersurface were investigated.
Abstract: This paper uses geometric interpretations to analyze the reliability of an electrical system used inside a spacecraft. Several new ideas are addressed: (i) geodesics for an equivalent simplified reliability polynomial; (ii) the mean time to failure (MTTF), the failure rates, and the steady-state failure rates of the system, studied on the basis of their geometric properties; and (iii) an investigation of all exponential decay curves contained in the reliability hypersurface. An extension of the standard definition of reliability is also considered by assuming the reliability components to be defined over the whole domain of the real axis.
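
For reference (the standard definition, not a result of the paper), the mean time to failure mentioned here is

    \mathrm{MTTF} = \int_0^{\infty} R(t)\, dt,

which reduces to 1/\lambda for the exponential decay curves R(t) = e^{-\lambda t} discussed above.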

Journal ArticleDOI
TL;DR: In this paper, a correlation-aware network topology design problem is formulated as a quadratic integer program to find the optimal solution for wireless backhaul networks under correlated failures, with a focus on rain disturbances.
Abstract: The design of reliable wireless backhaul networks is challenging due to the inherent vulnerability of wireless backhauling to random fluctuations of the wireless channel. Considerable studies deal with modifying and designing the network topology to meet reliability requirements in a cost-efficient manner. However, these studies ignore the correlation among link failures, particularly those caused by weather disturbances. Consequently, the resulting topology designs may fail to meet the network reliability requirements under correlated failure scenarios. To fill this gap, we study the design of cost-efficient and reliable wireless backhaul networks under correlated failures, with a focus on rain disturbances. We first propose a new model to capture the pairwise correlation among links along a path. The model is verified on real data, indicating an approximation closer to reality than the existing independent-failure model. Second, we model the correlation among different paths by defining a penalty cost. Considering the newly formalized link and path correlation, we formulate the correlation-aware network topology design problem as a quadratic integer program to find the optimal solutions. Two lightweight heuristic algorithms are developed to find near-optimal solutions within a reasonable time. Performance evaluation shows that correlation-aware design substantially improves resiliency under rain disturbances at a slightly increased cost compared to independent-failure approaches.
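
As a simple illustration of why pairwise correlation matters (a generic probability identity, not the paper's rain model), if two links fail with probabilities p_1 and p_2 and their failure indicators have correlation \rho, then

    P(F_1 \cap F_2) = p_1 p_2 + \rho \sqrt{p_1 (1 - p_1)\, p_2 (1 - p_2)},

so positive correlation caused by a shared rain cell raises the joint-failure probability well above the independent estimate p_1 p_2, which is why independence-based topology designs can under-provision redundancy.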

Journal ArticleDOI
TL;DR: This article deduces an imperfect debugging software belief reliability growth model using an uncertain differential equation under the framework of uncertainty theory, and investigates properties of essential software belief reliability metrics, namely belief reliability, belief reliable time, and mean time between failures, based on belief reliability theory.
Abstract: Due to the increasing dependency of modern systems on software, software reliability has become a primary concern during software development. To track and measure software reliability, various software reliability growth models have been proposed under the framework of probability theory. However, software failures involve considerable epistemic uncertainty, which cannot be captured well by probability theory, and debugging processes are usually imperfect due to the complexity of, and incomplete understanding of, software systems. This article deduces an imperfect debugging software belief reliability growth model using an uncertain differential equation under the framework of uncertainty theory, and investigates properties of essential software belief reliability metrics, namely belief reliability, belief reliable time, and mean time between failures, based on belief reliability theory. Estimates for the unknown parameters in this model are derived. Real data analyses validate the model and show that it performs better than previous models in terms of the sum of squared errors. A theoretical analysis of these results is presented.
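
For reference (the usual goodness-of-fit criterion implied here, stated generically rather than as the paper's exact definition), models are compared by the sum of squared errors between predicted and observed cumulative fault counts,

    \mathrm{SSE} = \sum_{i=1}^{k} \left( \hat{m}(t_i) - m_i \right)^2,

where m_i is the number of faults observed by time t_i and \hat{m}(t_i) is the model prediction.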

Journal ArticleDOI
TL;DR: In this article, the authors explore certain reliability probability models using a Hamiltonian Monte Carlo (HMC) sampler and propose an algorithm that implements conservation of energy via virtual collisions among particles, adopts constant simulation steps while adjusting the acceptance probability and step size, and alternates two kinds of trajectories to reconcile all principal components.
Abstract: This article explores certain reliability probability models using a Hamiltonian Monte Carlo (HMC) sampler. There are some concerns with classic HMC samplers. A notable aspect of the method is its acceptance probability, whose value is constant, "1," in theory. One practical problem is the possibility of divergence. A further issue emerges from the simulation trajectory, which essentially traverses the last principal component. Also, the method is extremely sensitive to run-time parameters. As an improvement, Riemann Manifold HMC can travel along the first principal component, but its overall accuracy on other components remains unclear. The no-U-turn sampler can adjust the distance traveled, but it may also suffer from excessive simulation steps. To address these concerns, this article proposes to implement conservation of energy by virtual collisions among particles; to adopt constant simulation steps while adjusting the acceptance probability and step size; and to alternate two kinds of trajectories to reconcile all principal components. Experiments show that the proposed algorithm is able to estimate the reliability and remaining useful life of the probability models. To bridge theory and practice, an algorithm fragment is demonstrated in the Mathematica language.
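
For orientation (the textbook HMC update that the paper modifies, not the proposed sampler itself), the sketch below runs a plain leapfrog-based HMC step on a standard normal target; the target, step size, and path length are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed target: a standard normal, so the potential is U(q) = q^2 / 2 and dU/dq = q.
    def U(q):
        return 0.5 * q**2

    def grad_U(q):
        return q

    def hmc_step(q, eps=0.2, n_leapfrog=20):
        """One classic HMC transition: sample a momentum, run leapfrog, accept or reject."""
        p = rng.normal()
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_U(q_new)              # initial half step for momentum
        for i in range(n_leapfrog):
            q_new += eps * p_new                        # full step for position
            if i != n_leapfrog - 1:
                p_new -= eps * grad_U(q_new)            # full step for momentum
        p_new -= 0.5 * eps * grad_U(q_new)              # final half step for momentum
        dH = (U(q_new) + 0.5 * p_new**2) - (U(q) + 0.5 * p**2)
        return q_new if rng.random() < np.exp(-dH) else q

    q, samples = 0.0, []
    for _ in range(2000):
        q = hmc_step(q)
        samples.append(q)
    print("mean =", round(np.mean(samples), 3), " std =", round(np.std(samples), 3))

In exact arithmetic the Hamiltonian is conserved and every proposal is accepted (the "constant 1" acceptance probability noted above); the step size and path length are the run-time parameters to which the method is sensitive.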



Proceedings ArticleDOI
24 Jan 2022
TL;DR: In this article, the authors present a method for network reliability analysis of complex systems that employs complex network theory, including several parameters related to network properties, and specify a performance index.
Abstract: Complex systems are composed of many heterogeneous subsystems and components, which contribute to system failure or degradation. Reliability analysis of complex systems has slowly gained attention, yet most research continues to focus on the degradation of components in complex systems, including multiple failure modes, propagation of failure modes, and uncertainty analysis. Achieving and preserving reliable operation of complex systems requires more complete architectural models, which can utilize the information on the components and interactions between subsystems and their components. To address this limitation, this paper presents a method to conduct network reliability analysis of complex systems. The approach employs complex network theory, including several parameters related to network properties. A network reliability analysis method is subsequently defined and a performance index specified. A case study illustrates the proposed method, interpreting results and identifying opportunities for future elaboration of the model. Our results indicate that the proposed method can evaluate the overall reliability and identify the weak modules and links of a complex system that should be targeted for improvement.
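
As a minimal sketch of network-level reliability evaluation (a generic Monte-Carlo connectivity estimate, not the specific index defined in the paper), the snippet below estimates the probability that two terminal nodes of a small assumed topology remain connected when each link survives independently with a given probability.

    import random
    import networkx as nx

    random.seed(0)

    # Assumed module-interaction topology and per-link survival probability.
    edges = [(1, 2), (2, 3), (3, 4), (1, 3), (2, 4)]
    p_link, source, target = 0.9, 1, 4

    def two_terminal_reliability(edges, p, s, t, n_trials=20000):
        """Monte-Carlo estimate of P(s and t stay connected)."""
        connected = 0
        for _ in range(n_trials):
            g = nx.Graph()
            g.add_nodes_from({u for e in edges for u in e})
            g.add_edges_from(e for e in edges if random.random() < p)
            if nx.has_path(g, s, t):
                connected += 1
        return connected / n_trials

    print("estimated reliability:", two_terminal_reliability(edges, p_link, source, target))

Weak modules and links can then be ranked, for instance, by how much the estimate drops when a given node or edge is removed.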

Journal ArticleDOI
TL;DR: The reliability of a plant depends on the decisions made during its design and construction as well as on the maintenance actions performed during its operation, as discussed by the authors; reliability is important in chemical plants for both economic and safety reasons.
Abstract: Reliability is important in chemical plants for economic and safety reasons. The reliability of a plant depends on the decisions made during the design and construction of the plant as well as the maintenance actions performed during the operation of the plant. Reliability theory deals with various analytical aspects of reliability (science, engineering, modeling, optimization, etc.). A proper understanding of reliability theory is important for the design and construction of reliable plants and for ensuring sufficient reliability during the operation of the plant. The article gives a brief introduction to some basic concepts from reliability theory and discusses the areas of interest to chemical engineers. It concludes with an illustrative example from the chemical industry.
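
To make the basic concepts concrete (standard textbook relations, not material specific to this article), the sketch below computes component reliability under a constant failure rate and combines components in series and in parallel; the failure rates and mission time are assumed values.

    import math

    # Assumed constant failure rates (per hour) for three components and a mission time.
    lambdas = [1e-4, 2e-4, 5e-5]
    t = 1000.0

    # Component reliability with a constant failure rate: R_i(t) = exp(-lambda_i * t).
    R = [math.exp(-lam * t) for lam in lambdas]

    # Series system: all components must survive.
    R_series = math.prod(R)

    # Parallel (redundant) system: fails only if every component fails.
    R_parallel = 1.0 - math.prod(1.0 - r for r in R)

    print(f"components: {[round(r, 4) for r in R]}")
    print(f"series: {R_series:.4f}, parallel: {R_parallel:.6f}")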

Proceedings ArticleDOI
01 May 2022
TL;DR: In this article, a new uncertainty accelerated degradation model that simultaneously takes into account the cognitive uncertainty of the time, sample, and double-stress dimensions is presented, and a parameter estimation method for the model is given based on the least-squares principle.
Abstract: Accelerated degradation testing (ADT) plays an important role in product reliability evaluation and lifetime prediction. When only limited data are obtained, the traditional method based on probability theory has difficulty quantifying the cognitive uncertainty in the data. Therefore, based on uncertainty theory and belief reliability theory, this paper establishes a new uncertain accelerated degradation model that simultaneously accounts for the cognitive uncertainty of the time, sample, and double-stress dimensions. A parameter estimation method for the model is then given based on the least-squares principle. Finally, the temperature and humidity stress ADT of a sealing rubber ring is taken as an example to establish a double-stress accelerated degradation model and evaluate its reliability.
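
For context (a commonly used temperature-humidity acceleration relation, offered as an assumption about the kind of double-stress model involved rather than the model actually fitted in the paper), Peck's relation scales the lifetime as

    L = A \cdot (\mathrm{RH})^{-n} \exp\!\left(\frac{E_a}{k T}\right),

where RH is the relative humidity, T is the absolute temperature, E_a is an activation energy, k is Boltzmann's constant, and A and n are constants fitted, for example, by least squares.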



Journal ArticleDOI
TL;DR: The reliability of an element is defined as the probability that the element will survive on the ground without changing location (without changing spatial coordinates) for a given period of time under certain conditions, as discussed by the authors.
Abstract: As mentioned above, the theory of reliability was mainly developed for technical devices. Nowadays, however, it is widely used in construction and is also beginning to be used in geodesy. By abstraction, its propositions can be transferred to systems that do not appear to be in a dynamic state. Take, for example, a polygonometric network in a city. Such a network would seem to be static, but over time it undergoes changes, i.e., it is in subtle dynamics and its reliability gradually declines. Reliability in the broadest sense of the word means the ability of a technical device (system, network) to operate without interruption (failure) for a specified period of time under certain conditions. This period of time is usually determined by the duration of the task carried out by the device or system as part of the overall operational task. The problem of reliability is currently becoming one of the key problems of technology and management, and ensuring the reliable operation of all elements of a system is of paramount importance. Improving reliability requires special study and quantitative analysis of the phenomena associated with accidental failures of devices or systems; the theory of reliability has thus become a distinct science that makes extensive use of probabilistic methods. Reliability theory distinguishes two types of failures: sudden and gradual. Here we consider sudden failures, understood as instantaneous failures after which the device cannot be used and which occur at some random point in time. The reliability of a system depends on the composition and number of its elements, on the way they are integrated into the system, and on the characteristics of each individual element. An element is understood as any device that is not subject to further decomposition and whose reliability is specified or determined experimentally. By assembling such elements into systems in different ways, we can determine the reliability of a system from the reliability of its elements. The reliability of elements and systems is described by numerical characteristics, and we give some definitions of these characteristics for an element and for the system as a whole. The reliability of an element is the probability that, under certain conditions, the element will work flawlessly over a given time; with increasing time this probability usually decreases. In the geodetic literature, the term "reliability" is often used to mean the accuracy of geodetic measurement results. From the point of view of reliability theory, however, the reliability of a geodetic sign or network as a whole should be considered as, for example, the ability of the sign or network to survive on the ground without changing location (without changing its spatial coordinates) for a given period of time under certain conditions. In other words, the reliability of a geodetic sign or network as a whole is the probability that, under these conditions, the sign or network will continue to "work" until the end of the given time.



Proceedings ArticleDOI
01 Nov 2022
TL;DR: In this paper, an interval failure assessment diagram model is established by converting the uncertain parameters into interval variables, and the reliability state, the uncertain state, and the failure state are then determined according to the state function.
Abstract: The failure assessment diagram model is an important probabilistic analysis method in the field of structural reliability analysis. However, parameters are usually uncertain in practical engineering, and their distributions are difficult to determine, so the traditional failure assessment diagram model is no longer applicable. Therefore, based on interval theory and the stress-strength interference model, an interval failure assessment diagram model is established by converting the uncertain parameters into interval variables. The reliability state, the uncertain state, and the failure state are then determined according to the state function. The proposed model provides a new reference for structural reliability analysis with uncertain parameters.
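
For background (the classical stress-strength interference result for normally distributed variables with crisp parameters, not the interval formulation developed in the paper), the reliability is

    R = P(S > L) = \Phi\!\left(\frac{\mu_S - \mu_L}{\sqrt{\sigma_S^2 + \sigma_L^2}}\right),

where S is the strength, L is the load (stress), and \Phi is the standard normal CDF; replacing the point parameters with interval bounds is what produces the reliable, uncertain, and failure states described above.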



Proceedings ArticleDOI
31 Dec 2022
TL;DR: In this article, the authors propose to define reliability based on contrasts with one or several metafounders, leading to a sounder genetic interpretation for models with several base populations.
Abstract: For models with several base populations (Unknown Parent Groups or Metafounders), the usual definition of reliability is ill-posed. Here we propose to define reliability based on contrasts with one or several metafounders, leading to a sounder genetic interpretation. In the case of a single metafounder, our definition equals the definition of reliability for the classical animal model. This definition also allows the reliability of contrasts of metafounders to be expressed, which may be of interest when deciding whether their setup is estimable with sufficient reliability. All desired quantities can be obtained from elements of the inverse of the mixed model equations (MME).
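
For orientation (the conventional animal-model definition that this proposal generalizes, stated here without the metafounder contrasts and ignoring inbreeding), the reliability of an estimated breeding value is commonly written as

    r^2 = 1 - \frac{\mathrm{PEV}}{\sigma_a^2},

where PEV is the prediction error variance obtained from the inverse of the mixed model equations and \sigma_a^2 is the additive genetic variance; the paper extends this idea to contrasts involving one or several metafounders.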

Proceedings ArticleDOI
25 Apr 2022
TL;DR: An improved D-S evidence theory method based on the weighted Lance distance is proposed that can fuse conflicting evidence more effectively than other methods and improves the accuracy of the flight reliability assessment of guided munitions.
Abstract: When evaluating the flight reliability of guided munitions, the results of using D-S evidence theory to fuse conflicting evidence can be biased. To address this problem, an improved D-S evidence theory method based on the weighted Lance distance is proposed. The evidence is first preprocessed, and the weighted Lance distance is used to calculate the degree to which each piece of evidence is supported by the others. The product of the credibility determined by the Lance distance and the uncertainty determined by the entropy weight is used as the fusion weight to modify the mass functions, and D-S evidence theory is then used to obtain the fusion result. Finally, the method is compared with traditional D-S evidence fusion and Bayesian fusion. The results show that this method can fuse conflicting evidence more effectively than other methods and improves the accuracy of the flight reliability assessment of guided munitions.
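
As a small sketch of the distance-based weighting idea (the Lance distance formula itself is standard; deriving credibility weights as normalized inverse distances is an assumption made for illustration, not necessarily the exact weighting used in the paper):

    import numpy as np

    def lance_distance(x, y):
        """Lance (Canberra-type) distance between two mass vectors over the same focal elements."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        denom = x + y
        mask = denom > 0
        return np.sum(np.abs(x - y)[mask] / denom[mask]) / len(x)

    # Hypothetical BPAs over the focal elements ({normal}, {fault}, {normal, fault});
    # the third source conflicts with the first two.
    bpas = np.array([[0.8, 0.1, 0.1],
                     [0.7, 0.2, 0.1],
                     [0.1, 0.8, 0.1]])

    # Assumed weighting scheme for illustration: credibility = normalized inverse of the
    # average Lance distance to the other pieces of evidence.
    n = len(bpas)
    avg_dist = np.array([
        np.mean([lance_distance(bpas[i], bpas[j]) for j in range(n) if j != i])
        for i in range(n)
    ])
    weights = (1.0 / avg_dist) / np.sum(1.0 / avg_dist)
    print("average Lance distance per source:", np.round(avg_dist, 3))
    print("credibility weights:", np.round(weights, 3))

The conflicting third source ends up farthest from the others and therefore receives the smallest credibility weight before combination, which is the behavior the weighted-distance modification is designed to produce.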