Safar M. Alghamdi
Bio: Safar M. Alghamdi is an academic researcher from the University of Salford. The author has contributed to research in the topics of statistics and computer science, has an h-index of 1, and has co-authored 1 publication receiving 7 citations.
TL;DR: In this article, reliability equivalence factors of a system of independent and identical components with exponentiated Weibull lifetimes were studied, and the authors proposed several duplication methods to improve the reliability of the system.
Abstract: We study reliability equivalence factors of a system of independent and identical components with exponentiated Weibull lifetimes. The system has n subsystems connected in parallel, and subsystem i has m_i components connected in series, i = 1, ..., n. We consider improving the reliability of the system by (a) a reduction method and (b) several duplication methods: (i) hot duplication; (ii) cold duplication with perfect switching; (iii) cold duplication with imperfect switching. We compute two types of reliability equivalence factors: survival equivalence factors and mean equivalence factors. Although our methods adapt to allow for general lifetime models, we use the exponentiated Weibull distribution because it is flexible and enables comparisons with other reliability equivalence studies. The example we present demonstrates the potential for applying these methods to address specific questions that arise when attempting to improve the reliability of simple systems, or simple configurations of possibly complex subsystems, in many diverse applications.
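As a minimal sketch of the system structure described above, assuming an exponentiated Weibull survival function R(t) = 1 − [1 − exp(−(t/λ)^k)]^α (the symbols α, k, λ are illustrative; the paper's exact parameterization may differ):

```python
import math

def ew_reliability(t, alpha, k, lam):
    """Survival function of an exponentiated Weibull component:
    R(t) = 1 - [1 - exp(-(t/lam)^k)]^alpha."""
    return 1.0 - (1.0 - math.exp(-((t / lam) ** k))) ** alpha

def system_reliability(t, subsystems, alpha, k, lam):
    """n subsystems in parallel; subsystem i is a series of m_i identical
    components, so the system survives when at least one subsystem has
    all of its components working."""
    r = ew_reliability(t, alpha, k, lam)
    prob_all_subsystems_fail = 1.0
    for m_i in subsystems:
        prob_all_subsystems_fail *= 1.0 - r ** m_i  # subsystem i fails
    return 1.0 - prob_all_subsystems_fail
```

With α = 1 the component survival reduces to the ordinary Weibull, which is one reason the exponentiated family is convenient for comparisons.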
TL;DR: In this article, a method of analyzing gasoline and diesel price drifts based on self-organizing maps and Bayesian regularized neural networks is proposed; the results show that, of the three training algorithms considered, Bayesian regularization gives the best results.
Abstract: Any nation’s growth depends on the trend of the price of fuel. Fuel price drifts have both direct and indirect impacts on a nation’s economy, and growth can be hampered by the higher inflation that elevated oil prices bring. This paper proposes a method of analyzing gasoline and diesel price drifts based on self-organizing maps and Bayesian regularized neural networks. The US gasoline and diesel price timeline dataset is used to validate the proposed approach. The dataset covers weekly prices per gallon from 1995 to January 2021 for all grades (regular, medium, and premium) across conventional, reformulated, and all-formulation gasoline combinations, as well as diesel. Self-organizing maps are used for data visualization, and the data are then analyzed with a neural network; a nonlinear autoregressive neural network is adopted because the data form a time series. Three training algorithms are used to train the networks: Levenberg-Marquardt, scaled conjugate gradient, and Bayesian regularization. The results are promising and reveal the robustness of the proposed model: the Levenberg-Marquardt error falls in the range −0.1074 to 0.1424, the scaled conjugate gradient error in −0.1476 to 0.1618, and the Bayesian regularization error in −0.09854 to 0.09871, which shows that, of the three approaches considered, Bayesian regularization gives the best results.
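The network itself cannot be reproduced from the abstract, but the role of regularization in an autoregressive fit can be sketched with a linear stand-in: the ridge-penalized AR(1) below is an illustrative analogue of how Bayesian regularization shrinks weights, not the paper's model.

```python
def fit_ar1_ridge(series, penalty):
    """Fit x_t ~ w * x_{t-1} with an L2 (ridge) penalty on w.
    Closed form: w = sum(x_{t-1} x_t) / (sum(x_{t-1}^2) + penalty).
    The penalty shrinks w toward zero, loosely mimicking the effect of
    Bayesian regularization on network weights."""
    num = sum(series[t - 1] * series[t] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series))) + penalty
    return num / den

def forecast(last_value, w, steps):
    """Iterate the fitted one-step model to produce multi-step forecasts."""
    preds, x = [], last_value
    for _ in range(steps):
        x = w * x
        preds.append(x)
    return preds
```

On a geometrically decaying series the unpenalized fit recovers the true coefficient exactly, and any positive penalty pulls the estimate below it.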
TL;DR: In this paper, the authors introduce the unit exponentiated half logistic power series (UEHLPS) family, a class of compound distributions with bounded support produced by compounding the unit exponentiated half logistic and power series distributions.
Abstract: The unit exponentiated half logistic power series (UEHLPS) family, a class of compound distributions with bounded support, is introduced in this study. This family is produced by compounding the unit exponentiated half logistic and power series distributions, and some interesting compound distributions can be found within the UEHLPS class. We find formulas for the moments, density and distribution functions, limiting behavior, and other UEHLPS properties. Five well-known estimation approaches are used to estimate the parameters of one sub-model, and a simulation study is conducted. The simulated results show that the maximum product of spacings estimates had lower accuracy-measure values than the other estimates. Finally, three real data sets from various scientific areas are used to analyze the performance of the new class.
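The compounding construction can be sketched under standard assumptions: if N follows a power series distribution with generating function C(θ) and the baseline CDF is G, the compound CDF of the maximum of N iid baseline variables is F(x) = C(θ·G(x))/C(θ). The geometric series and the power-function baseline below are illustrative stand-ins; the actual unit exponentiated half logistic CDF is not reproduced here.

```python
def geometric_C(theta):
    # Generating function of the zero-truncated geometric power series:
    # C(theta) = sum_{n>=1} theta^n = theta / (1 - theta), 0 < theta < 1.
    return theta / (1.0 - theta)

def baseline_cdf(x, a):
    # Stand-in baseline CDF on (0, 1); the family would use the unit
    # exponentiated half logistic CDF here instead.
    return x ** a

def compound_cdf(x, theta, a):
    """CDF of max(X_1, ..., X_N) with N geometric:
    F(x) = C(theta * G(x)) / C(theta)."""
    return geometric_C(theta * baseline_cdf(x, a)) / geometric_C(theta)
```

Because G maps (0, 1) onto (0, 1), the compound F inherits the bounded support, which is the defining feature of the family.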
TL;DR: In this article, the authors propose different modified goodness-of-fit tests based on the empirical distribution function (EDF) for the Weibull distribution and compare them with their SRS counterparts.
Abstract: It is well known that ranked set sampling (RSS) is superior to conventional simple random sampling (SRS) in that it frequently results in more effective inference techniques. One of the most popular and broadly applicable models for lifetime data is the Weibull distribution. This article proposes different modified goodness-of-fit tests based on the empirical distribution function (EDF) for the Weibull distribution. The recommended RSS tests are compared to their SRS counterparts. For each scheme, the critical values of the relevant test statistics are computed. A comparison of the power of the suggested goodness-of-fit tests based on a number of alternatives is investigated. RSS tests are more effective than their SRS equivalents, according to simulated data.
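EDF-based statistics of the kind modified in the paper can be illustrated with the classical Kolmogorov-Smirnov statistic under plain SRS and fully specified parameters; the RSS modifications themselves are not reproduced here.

```python
import math

def weibull_cdf(x, shape, scale):
    """CDF of the two-parameter Weibull distribution."""
    return 1.0 - math.exp(-((x / scale) ** shape))

def ks_statistic(sample, shape, scale):
    """Kolmogorov-Smirnov EDF statistic for a fully specified Weibull:
    D_n = max_i max( i/n - F(x_(i)), F(x_(i)) - (i-1)/n )."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = weibull_cdf(x, shape, scale)
        d = max(d, i / n - f, f - (i - 1) / n)
    return d
```

When the sample points sit exactly at the plotting-position quantiles (i − 0.5)/n, the statistic collapses to 0.5/n, its smallest possible value for that n.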
TL;DR: The half-logistic modified Kies exponential (HLMKEx) distribution, as mentioned in this paper, is a three-parameter model introduced to expand the modified Kies exponential distribution and improve its flexibility in modeling real-world data.
Abstract: The half-logistic modified Kies exponential (HLMKEx) distribution is a novel three-parameter model that is introduced in the current work to expand the modified Kies exponential distribution and improve its flexibility in modeling real-world data. Due to its versatility, the density function of the HLMKEx distribution can be symmetrical, asymmetrical, unimodal, or reversed-J shaped, while its hazard rate can take increasing, reversed-J shaped, and upside-down forms. The HLMKEx density admits an infinite linear representation. The model’s fundamental mathematical features are obtained, such as the quantile function, moments, incomplete moments, and moments of residuals; some measures of uncertainty as well as stochastic orderings are also derived. To estimate its parameters, eight estimation methods are used. With the use of detailed simulation data, we compare the performance of each estimation technique and obtain partial and total ranks for the accuracy measures of absolute bias, mean squared error, and mean absolute relative error. Two actual data sets from the field of engineering are investigated to demonstrate the adaptability and applicability of the suggested distribution. The findings demonstrate that, in contrast to other competing distributions, the provided distribution fits the data more accurately.
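The estimator-ranking exercise described above can be sketched in miniature. The snippet below uses an exponential model as a stand-in (the HLMKEx density is not reproduced here) and compares two illustrative estimators of the rate, the MLE and a median-based estimator, by the same accuracy measures the paper ranks on, absolute bias and mean squared error; both estimator choices are hypothetical, not two of the paper's eight methods.

```python
import math
import random

def simulate_rank(true_rate=2.0, n=50, reps=2000, seed=1):
    """Monte Carlo comparison of two estimators of an exponential rate:
    the MLE 1/xbar and the median-based estimator ln(2)/median.
    Returns (|bias|, MSE) for each estimator."""
    rng = random.Random(seed)
    errs_mle, errs_med = [], []
    for _ in range(reps):
        xs = sorted(rng.expovariate(true_rate) for _ in range(n))
        mle = n / sum(xs)
        median = (xs[n // 2 - 1] + xs[n // 2]) / 2 if n % 2 == 0 else xs[n // 2]
        errs_mle.append(mle - true_rate)
        errs_med.append(math.log(2) / median - true_rate)
    def summary(errs):
        bias = abs(sum(errs) / len(errs))
        mse = sum(e * e for e in errs) / len(errs)
        return bias, mse

    return summary(errs_mle), summary(errs_med)
```

Ranking the methods by these simulated measures is the same pattern as the paper's partial and total ranks, just with one model and two methods.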
TL;DR: A reliability model is developed to study the failure dependence of a system with a main component and several protective auxiliary components; the results show that a high replacement threshold for the auxiliary components is required when the replacement cost of the main component is high.
Abstract: The complexity of dependence between different types of components results in many challenges to estimate system reliability and to optimize maintenance plans. In this paper, we develop a reliability model to study the failure dependence of a system with a main component and several protective auxiliary components. Damage to the main component caused by random environmental shocks depends on the number of auxiliary components in operation. When the execution of inspection and maintenance actions during system operation is difficult, failures of the main component provide opportunities to inspect and replace the auxiliary components. We derive the system reliability using Laplace transforms and the matrix method. The optimization problem is solved by an enumeration method. A numerical example and sensitivity studies of cost parameters show how the evolution of the parameters influences the optimal maintenance strategy. The results show that a high replacement threshold of the auxiliary components is required when the replacement cost of the main component is high. Conversely, the threshold could be adjusted to a lower level when the replacement cost of the auxiliary components and the downtime cost increase.
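The optimization step can be illustrated generically. The cost function below is hypothetical (the paper derives its cost rate from the reliability model via Laplace transforms and the matrix method); only the enumerate-and-compare pattern is the point.

```python
def optimal_threshold(cost, thresholds):
    """Enumerate candidate replacement thresholds and keep the cheapest,
    the same enumeration pattern used to solve the paper's optimization."""
    best = min(thresholds, key=cost)
    return best, cost(best)

# Hypothetical cost rate: replacing auxiliaries at a high threshold wastes
# parts, while a low threshold risks more costly main-component failures.
def example_cost(n, c_aux=1.0, c_main=20.0, fail_prob=0.02):
    return c_aux / n + c_main * fail_prob * n
```

Under this toy trade-off the enumeration lands on an interior threshold, mirroring how the paper's optimum shifts as the cost parameters evolve.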
TL;DR: This paper presents a new method to deduce minimal cut sets from the minimal path sets of complex systems (networks) by generating the incidence matrix and comparing it with the truth table of the system.
Abstract: Article history: Received 23 August 2020; Accepted 29 September 2020; Online 20 October 2020. In this paper, we present a new method to deduce the minimal cut sets of complex systems (networks) from their minimal path sets: the path sets are used to generate the incidence matrix, which is then compared with the truth table of the system. This comparison, based on some algebraic properties, yields the minimal cut sets of the complex network via an algorithm implemented in Mathematica. In addition, the minimal cut sets completely characterize the operating state of the system and carry the same information as the structure function of the complex system, so distinguishing the operational states of the system gives information about the binary operational states of individual components. The system failure time also follows immediately once the failure times of the components are known.
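The relationship the method exploits, namely that minimal cut sets are exactly the minimal hitting sets of the minimal path sets, can be sketched with a brute-force enumeration; this is a small illustrative stand-in, not the paper's incidence-matrix/truth-table algorithm.

```python
from itertools import combinations

def minimal_cut_sets(path_sets):
    """Minimal cut sets as minimal hitting sets of the minimal path sets:
    a cut must share a component with every path, and no proper subset
    of it may do so. Enumerating by increasing size guarantees that any
    non-minimal candidate contains a previously recorded cut."""
    components = sorted(set().union(*path_sets))
    cuts = []
    for size in range(1, len(components) + 1):
        for cand in combinations(components, size):
            s = set(cand)
            if all(s & p for p in path_sets):      # hits every path
                if not any(c < s for c in cuts):   # no smaller cut inside
                    cuts.append(s)
    return cuts
```

For the classic five-component bridge network, with paths {1,2}, {3,4}, {1,5,4}, {3,5,2}, this recovers the well-known cuts {1,3}, {2,4}, {1,4,5}, {2,3,5}.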
TL;DR: In this paper, the reliability estimates for the second model are shown to be more efficient than those for the first model in the case of dissimilar set sizes, whereas with identical set sizes the estimates for the first model are more efficient.
Abstract: In this study, we look at how to estimate the stress–strength reliability models R1 = P(Y < X) and R2 = P(Z < X), where the strength X and stress Y have the same distribution in the first model, R1, while the strength X and stress Z have different distributions in the second model, R2. In the first model, the stress Y and strength X are assumed to have Lomax distributions, whereas, in the second model, X and Z are assumed to have the Lomax and inverse Lomax distributions, respectively. With the assumption that the variables in both models are independent, the median-ranked set sampling (MRSS) strategy is used to examine different scenarios. Using the maximum likelihood technique and an MRSS design, we derive the reliability estimators for both models when the strength and stress variables have a similar or dissimilar set size. A simulation study is used to verify the accuracy of the various estimates. In most cases, the simulation results show that the reliability estimates for the second model are more efficient than those for the first model in the case of dissimilar set sizes. However, with identical set sizes, the reliability estimates for the first model are more efficient than the equivalent estimates for the second model. Medical data are used for further illustration, allowing the theoretical conclusions to be verified.
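The quantity being estimated can be sketched with a plain Monte Carlo estimate of R = P(Y < X) under independent Lomax stress and strength; this is a simple-random-sampling illustration with an assumed scale of 1, not the paper's MRSS design or its maximum likelihood estimators.

```python
import random

def lomax_sample(rng, shape, scale):
    # Inverse-CDF sampling from F(x) = 1 - (1 + x/scale)^(-shape).
    u = rng.random()
    return scale * ((1.0 - u) ** (-1.0 / shape) - 1.0)

def stress_strength(rng, shape_x, shape_y, scale, n=100000):
    """Monte Carlo estimate of R = P(Y < X) for independent Lomax
    stress Y and strength X."""
    hits = sum(
        lomax_sample(rng, shape_y, scale) < lomax_sample(rng, shape_x, scale)
        for _ in range(n)
    )
    return hits / n
```

When X and Y share the same shape, R = 1/2 by symmetry; a heavier-tailed strength (smaller shape) pushes R above 1/2, e.g. shapes 1 and 3 give R = 3/4 analytically.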
TL;DR: Two types of availability equivalence factors of the system are obtained to compare different system designs, and a numerical example is provided to show how to utilize the obtained results.
Abstract: The performance of a repairable bridge network system is improved by using availability equivalence factors. All components of the bridge system have constant failure and repair rates. The system is improved through the use of five methods: the reduction, increase, hot duplication, warm duplication, and cold duplication methods. The availability of the original and improved systems is derived. Two types of availability equivalence factors of the system are obtained to compare different system designs. A numerical example is provided to illustrate how to utilize the obtained results.
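As a sketch of the quantities involved: with constant failure rate λ and repair rate μ, a component's steady-state availability is A = μ/(λ + μ), and the bridge system's availability follows by enumerating component states against the bridge structure function. This is a generic illustration; the paper's equivalence-factor derivations are not reproduced.

```python
from itertools import product

def bridge_works(state):
    # state = (x1, x2, x3, x4, x5); the four minimal path sets of the
    # bridge network, with component 5 as the bridging element.
    x1, x2, x3, x4, x5 = state
    return (x1 and x2) or (x3 and x4) or (x1 and x5 and x4) or (x3 and x5 and x2)

def component_availability(fail_rate, repair_rate):
    return repair_rate / (fail_rate + repair_rate)

def bridge_availability(avail):
    """Steady-state system availability by enumerating all 2^5 component
    states, weighting each state by its probability."""
    total = 0.0
    for state in product((0, 1), repeat=5):
        p = 1.0
        for x, a in zip(state, avail):
            p *= a if x else 1.0 - a
        if bridge_works(state):
            total += p
    return total
```

With equal component availability p this enumeration reproduces the classical bridge formula 2p² + 2p³ − 5p⁴ + 2p⁵.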
TL;DR: A data imputation process and an artificial neural network (ANN) are established to predict the impact of dementia based on the considered dataset.
Abstract: Dementia is a condition in which cognitive ability deteriorates beyond what can be anticipated with natural ageing. Characteristically it is recurring and deteriorates gradually with time, affecting a person’s ability to remember, think logically, move about, learn, and speak, to name a few. A decline in a person’s ability to control emotions or to be social can result in demotivation, which can severely affect the brain’s ability to perform optimally. Dementia is one of the main causes of dependence and disability among older people worldwide. It is often misunderstood, which results in people not accepting it, causing a delay in treatment. In this research, a data imputation process and an artificial neural network (ANN) are established to predict the impact of dementia based on the considered dataset. The scaled conjugate gradient algorithm (SCG) is employed as the training algorithm. Cross-entropy error rates are minimal, with accuracies of 95%, 85.7%, and 89.3% for training, validation, and testing, respectively. The area under the receiver operating characteristic (ROC) curve (AUC) is generated for all phases. A web-based interface is built to accept input values and make predictions.