
Showing papers on "Reliability (statistics)" published in 2013


Book
09 Jun 2013
TL;DR: This book discusses the application of the binomial distribution, network modelling and evaluation of simple systems, and system reliability evaluation using probability distributions.
Abstract: Introduction. Basic Probability Theory. Application of the Binomial Distribution. Network Modelling and Evaluation of Simple Systems. Network Modelling and Evaluation of Complex Systems. Probability Distributions in Reliability Evaluation. System Reliability Evaluation Using Probability Distributions. Monte Carlo Simulation. Epilogue.

1,062 citations
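
The binomial application the book refers to is easiest to see on a k-out-of-n system: the system works when at least k of its n independent, identical units work. The Python sketch below is a minimal illustration of that calculation (the generator example and its numbers are invented, not taken from the book):

from math import comb

def k_out_of_n_reliability(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independent, identical
    units are working, each with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Example: a plant that needs 2 of its 3 generators available,
# each with availability 0.95.
print(k_out_of_n_reliability(3, 2, 0.95))  # ~0.99275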


Journal ArticleDOI
TL;DR: A comprehensive review of reliability assessment and improvement of power electronic systems at three levels: 1) metrics and methodologies for reliability assessment of existing systems; 2) reliability improvement of existing systems by means of algorithmic solutions without changes to the hardware; and 3) reliability-oriented design solutions based on fault-tolerant operation of the overall system.
Abstract: With the widespread application of power electronic systems across many different industries, their reliability is being studied extensively. This paper presents a comprehensive review of reliability assessment and improvement of power electronic systems at three levels: 1) metrics and methodologies for reliability assessment of existing systems; 2) reliability improvement of existing systems by means of algorithmic solutions without changes to the hardware; and 3) reliability-oriented design solutions based on fault-tolerant operation of the overall system. The intent of this review is to provide a clear picture of the landscape of reliability research in power electronics. The limitations of current research are identified, and directions for future research are suggested.

681 citations


Journal ArticleDOI
TL;DR: Reliability and similarity of resting-state functional connectivity can be greatly improved by increasing scan length; both the increase in the number of volumes and the length of time over which those volumes were acquired drove this increase in reliability.

668 citations


Journal ArticleDOI
TL;DR: In this paper, the main research instruments (questionnaire, interview and classroom observation) usually used in the mixed method designs are presented and elaborated on, and various ways of boosting the validity and reliability of the data and instruments are delineated at length.
Abstract: Mixed method approaches have recently risen to prominence. The reason more researchers are opting for this type of research is that both qualitative and quantitative data are collected, analyzed and interpreted simultaneously. In this article, the main research instruments usually used in mixed method designs (questionnaire, interview and classroom observation) are presented and elaborated on. Using different types of data collection procedures and obtaining information through different sources (learners, teachers, program staff, etc.) can augment the validity and reliability of the data and their interpretation. Therefore, the various ways of boosting the validity and reliability of the data and instruments are delineated at length. Finally, an outline for reporting findings in mixed method approaches is sketched out. This article should be useful to researchers in general, and in particular to postgraduate students who are starting or are engaged in conducting research.

550 citations
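
A statistic commonly used to quantify the internal-consistency reliability of questionnaires of the kind discussed above is Cronbach's alpha (the article itself does not prescribe a formula; this is a generic sketch with a hypothetical score matrix):

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-item Likert questionnaire answered by 4 learners.
scores = np.array([[4, 5, 4, 4, 5],
                   [3, 3, 2, 3, 3],
                   [5, 5, 5, 4, 5],
                   [2, 2, 3, 2, 1]])
print(round(cronbach_alpha(scores), 3))  # ~0.96 for this made-up sample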


Journal ArticleDOI
TL;DR: An original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling, based on the AK-MCS algorithm, which enables the correction or validation of the FORM approximation with only a few mechanical model computations.

458 citations
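
For context, the generic importance sampling estimator that such methods build on can be written in standard notation (this is textbook material, not reproduced from the paper). With f the joint density of the basic random variables, g the performance function, and h an importance sampling density, typically a standard normal density recentred at the FORM design point u*:

\hat{P}_f \;=\; \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\!\left[g(\mathbf{x}_i)\le 0\right]\, \frac{f(\mathbf{x}_i)}{h(\mathbf{x}_i)}, \qquad \mathbf{x}_i \sim h.

The contribution of AK-IS is to evaluate the indicator function with an adaptively trained Kriging surrogate instead of the mechanical model itself, so only a few true model computations are needed.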


Journal ArticleDOI
TL;DR: In this paper, the authors propose to use a Kriging surrogate of the performance function as a means to build a quasi-optimal importance sampling density; the method applies to analytical and finite element reliability problems and proves efficient for problems with up to 100 basic random variables.

389 citations


Journal ArticleDOI
09 Sep 2013-PLOS ONE
TL;DR: A case is made for clinicians to consider measurement error (ME) indices such as the Coefficient of Repeatability (CR) or the Smallest Real Difference (SRD) over relative reliability coefficients such as Pearson's r and the Intraclass Correlation Coefficient (ICC) when selecting tools to measure change and inferring that change is true.
Abstract: The use of standardised tools is an essential component of evidence-based practice. Reliance on standardised tools places demands on clinicians to understand their properties, strengths, and weaknesses, in order to interpret results and make clinical decisions. This paper makes a case for clinicians to consider measurement error (ME) indices such as the Coefficient of Repeatability (CR) or the Smallest Real Difference (SRD) over relative reliability coefficients like Pearson's r and the Intraclass Correlation Coefficient (ICC) when selecting tools to measure change and inferring that change is true. The authors present statistical methods that are part of the current approach to evaluating test–retest reliability of assessment tools and outcome measurements. Selected examples from a previous test–retest study are used to elucidate the added advantage that knowledge of the ME of an assessment tool brings to clinical decision making. The CR is computed in the same units as the assessment tool and sets the boundary of the minimal detectable true change that can be measured by the tool.

376 citations
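
As a rough illustration of the two ME indices the paper advocates, the sketch below uses the commonly quoted formulas, CR = 1.96 x SD of the test-retest differences, and SRD = 1.96 x sqrt(2) x SEM with SEM = SD x sqrt(1 - ICC); the data and the assumed ICC are hypothetical, and these formulas may differ in detail from the authors' exact computations:

import numpy as np

# Hypothetical test-retest scores for 6 subjects on the same tool.
test   = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0])
retest = np.array([13.0, 14.0, 11.5, 15.0, 12.5, 16.5])

diff = retest - test
cr = 1.96 * diff.std(ddof=1)              # Coefficient of Repeatability

icc = 0.90                                # assumed ICC from a prior analysis
sem = np.concatenate([test, retest]).std(ddof=1) * np.sqrt(1 - icc)
srd = 1.96 * np.sqrt(2) * sem             # Smallest Real Difference

print(f"CR = {cr:.2f}, SRD = {srd:.2f}")  # both in the units of the tool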


Journal ArticleDOI
TL;DR: The Food Intake LEVEL Scale (FILS) seems to have fair reliability and validity as a practical tool for assessing the severity of dysphagia, and further study on the reliability, validity, and sensitivity of the FILS compared with the FOIS is needed.

241 citations


Journal ArticleDOI
TL;DR: A stochastic process (Wiener process) combined with a data analysis method (Principal Component Analysis) is proposed to model the deterioration of components and to estimate the remaining useful life (RUL) in a case study.

235 citations
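
The Wiener-process half of such an approach can be sketched in a few lines of Python. The snippet below (illustrative parameters only, and without the PCA step) simulates a linear-drift degradation path and estimates the mean RUL as the mean first-passage time of the fitted drifted Brownian motion to a failure threshold:

import numpy as np

rng = np.random.default_rng(0)

# Simulate a hypothetical degradation path X(t) = mu*t + sigma*B(t).
mu, sigma, dt, n = 0.05, 0.02, 1.0, 200
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
x = np.cumsum(increments)

# Estimate the drift from the observed increments; for mu > 0 the mean
# first-passage time from the current level to threshold D is (D - x)/mu.
mu_hat = increments.mean() / dt
D = 15.0
rul_mean = (D - x[-1]) / mu_hat
print(f"estimated mean RUL: {rul_mean:.1f} time units")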


Journal ArticleDOI
TL;DR: A systematic and optimized approach for designing microgrids taking into account system reliability- and supply-security-related aspects is presented, and the effect of optimization coefficients on the design and the robustness of the algorithm are investigated using sensitivity studies.
Abstract: Microgrids are known as clusters of distributed energy resources serving a group of distributed loads in grid-connected and isolated grid modes. Nowadays, the concept of microgrids has become a key subject in the smart grid area, demanding a systematic procedure for their optimal construction. According to the IEEE Std 1547.4, large distribution systems can be clustered into a number of microgrids to facilitate powerful control and operation infrastructure in future distribution systems. However, clustering large systems into a set of microgrids with high reliability and security has not been reported in the current literature. To fill this gap, this paper presents a systematic and optimized approach for designing microgrids taking into account system reliability- and supply-security-related aspects. The optimum design considers sustained and temporary faults (for system reliability, via a combined probabilistic reliability index) and real and reactive power balance (for supply security). The loads are assumed to be variable, and different distributed generation (DG) technologies are considered. Conceptual design, problem formulation and solution algorithms are presented in this paper. The well-known PG&E 69-bus distribution system is selected as the test system. The effect of optimization coefficients on the design and the robustness of the algorithm are investigated using sensitivity studies.

226 citations


Journal ArticleDOI
TL;DR: Good-quality subjective and objective data suggest adequate construct validity for each of the CT instruments, but a major limitation of this literature is the lack of studies that assess the predictive validity of these instruments.
Abstract: The accurate measurement of circadian typology (CT) is critical because the construct has implications for a number of health disorders. In this review, we focus on the evidence to support the reliability and validity of the more commonly used CT scales: the Morningness-Eveningness Questionnaire (MEQ), reduced Morningness-Eveningness Questionnaire (rMEQ), the Composite Scale of Morningness (CSM), and the Preferences Scale (PS). In addition, we also consider the Munich ChronoType Questionnaire (MCTQ). In terms of reliability, the MEQ, CSM, and PS consistently report high levels of reliability (>0.80), whereas the reliability of the rMEQ is satisfactory. The stability of these scales is sound at follow-up periods up to 13 mos. The MCTQ is not a scale; therefore, its reliability cannot be assessed. Although it is possible to determine the stability of the MCTQ, these data are yet to be reported. Validity must be given equal weight in assessing the measurement properties of CT instruments. Most commonly repor...

Journal ArticleDOI
TL;DR: This study identified testing protocols that improve the reliability of measuring gait variability, and recommends using a continuous walking protocol and collecting no fewer than 30 steps.

Proceedings ArticleDOI
29 May 2013
TL;DR: In this article, the authors introduce the most prominent reliability concerns from today's point of view, roughly recapitulate the progress made by the community so far, and suggest a way of coping with reliability challenges in upcoming technology nodes.
Abstract: Reliability concerns due to technology scaling have been a major focus of researchers and designers for several technology nodes. Therefore, many new techniques for enhancing and optimizing reliability have emerged, particularly within the last five to ten years. This perspective paper introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. The focus of this paper is on perspective trends from the industrial as well as academic points of view that suggest a way of coping with reliability challenges in upcoming technology nodes.

Proceedings ArticleDOI
07 Jul 2013
TL;DR: The Fog computing paradigm is considered as a non-trivial extension of the Cloud, and the reliability of networks of smart devices is discussed, showing that designing a reliable Fog computing platform is feasible.

Abstract: This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm is considered as a non-trivial extension of the Cloud, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of the grid and cloud paradigms with those of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new methodology for calculating the mean time between failure (MTBF) of a photovoltaic module-integrated inverter (PV-MII).
Abstract: This paper proposes a new methodology for calculating the mean time between failures (MTBF) of a photovoltaic module-integrated inverter (PV-MII). Based on a stress-factor reliability methodology, the proposed technique applies a usage model for the inverter to determine the statistical distribution of thermal and electrical stresses on the electrical components. The salient feature of the proposed methodology is that it takes into account the volatility of the operating environment of the module-integrated electronics when calculating the MTBF of the MII. This leads to a more realistic assessment of reliability than if a single worst-case or typical operating point were used. Measured data (module temperature and insolation level) are used to experimentally verify the efficacy of the methodology. The proposed methodology is used to examine the reliability of six different candidate inverter topologies for a PV-MII. This study shows the impact of each component on the inverter reliability, in particular the power decoupling capacitors. The results confirm that the electrolytic capacitor is the most vulnerable component with the lowest MTBF, but, more importantly, they provide a quantified assessment of realistic MTBF under expected operating conditions rather than at a single worst-case operating point, which may have a low probability of occurrence.
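
A stress-factor calculation of this general kind can be sketched as follows; the component names, base failure rates, activation energy, and usage model below are invented for illustration, and this is not the paper's exact procedure. Each component's failure rate is weighted by a thermal acceleration factor over the distribution of operating points, and the series-system total is inverted to get the MTBF:

import numpy as np

# Hypothetical base failure rates (failures per 1e6 h) at a 25 C reference.
base_lambda = {"electrolytic_cap": 2.0, "mosfet": 0.5,
               "film_cap": 0.1, "controller_ic": 0.3}

def arrhenius_factor(t_celsius, t_ref=25.0, ea_ev=0.7, k_b=8.617e-5):
    """Thermal acceleration of a failure rate relative to t_ref."""
    t, tr = t_celsius + 273.15, t_ref + 273.15
    return np.exp(ea_ev / k_b * (1.0 / tr - 1.0 / t))

# Usage model: (component temperature in C, fraction of operating time).
usage = [(45.0, 0.6), (65.0, 0.3), (85.0, 0.1)]

acc = sum(w * arrhenius_factor(t) for t, w in usage)
lam_total = sum(lam * acc for lam in base_lambda.values())  # per 1e6 h
print(f"MTBF ~ {1e6 / lam_total:.0f} hours")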

Book
24 Aug 2013
TL;DR: The hazard, mean residual, variance residual, and percentile residual quantile functions, their mutual relationships, expressions for the quantile functions in terms of these functions, and some theoretical results relating to the Hankin and Lee (2006) lambda distribution are discussed.
Abstract: This book provides a fresh approach to reliability theory, an area that has gained increasing relevance in fields from statistics and engineering to demography and insurance. Its innovative use of quantile functions gives an analysis of lifetime data that is generally simpler, more robust, and more accurate than the traditional methods, and opens the door for further research in a wide variety of fields involving statistical analysis. In addition, the book can be used to good effect in the classroom as a text for advanced undergraduate and graduate courses in Reliability and Statistics.
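
In the standard quantile-based notation, with Q(u) the quantile function and q(u) = Q'(u) the quantile density, two of the functions the TL;DR mentions take the following forms:

H(u) = \frac{1}{(1-u)\,q(u)} \qquad \text{(hazard quantile function)},

M(u) = \frac{1}{1-u}\int_u^1 \bigl(Q(p) - Q(u)\bigr)\,dp \qquad \text{(mean residual quantile function)}, \qquad 0 < u < 1.

These are the quantile-domain analogues of the hazard rate and the mean residual life, and they avoid inverting the distribution function when only Q is available in closed form.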

01 Apr 2013
TL;DR: The PROV Family of Documents defines a model, corresponding serializations and other supporting definitions to enable the inter-operable interchange of provenance information in heterogeneous environments such as the Web.
Abstract: Provenance is information about entities, activities, and people involved in producing a piece of data or thing, which can be used to form assessments about its quality, reliability or trustworthiness. The PROV Family of Documents defines a model, corresponding serializations and other supporting definitions to enable the inter-operable interchange of provenance information in heterogeneous environments such as the Web. This document provides an overview of this family of documents.
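
As a rough sketch of what interchangeable provenance looks like, the snippet below builds a Python dictionary in the spirit of the PROV-JSON serialization; the identifiers and the exact key layout here are illustrative assumptions rather than text copied from the PROV documents:

import json

prov_doc = {
    "prefix": {"ex": "http://example.org/"},
    "entity":   {"ex:report": {}},
    "activity": {"ex:compile": {}},
    "agent":    {"ex:alice": {}},
    # The report was generated by the compile activity...
    "wasGeneratedBy": {"_:g1": {"prov:entity": "ex:report",
                                "prov:activity": "ex:compile"}},
    # ...which was carried out by the agent ex:alice.
    "wasAssociatedWith": {"_:a1": {"prov:activity": "ex:compile",
                                   "prov:agent": "ex:alice"}},
}
print(json.dumps(prov_doc, indent=2))

Such a record lets a consumer judge the report's quality or trustworthiness by tracing what produced it and who was responsible.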

Journal ArticleDOI
TL;DR: In this article, a non-probabilistic reliability model is given for structures with convex model uncertainty; the reliability measure is defined as the ratio of the multidimensional volume falling into the reliability domain to that of the whole model.

Journal ArticleDOI
TL;DR: In this article, the authors point out several flaws in Frahm's paper and provide some examples of PXRF measurements that are valid and reliable and conform to published international standards.

Journal ArticleDOI
TL;DR: In this article, the authors propose a series of new metrics for the reliability and economic assessment of microgrids in distribution systems, including reliability parameters for a microgrid in the islanded mode, indices indicating distributed generation (DG) and load characteristics in the microgrid, microgrid economic indices, and customer-based microgrid reliability indices.
Abstract: This paper proposes a series of new metrics for the reliability and economic assessment of microgrids in distribution systems. These metrics include reliability parameters for a microgrid in the islanded mode, indices indicating distributed generation (DG) and load characteristics in the microgrid, microgrid economic indices, and customer-based microgrid reliability indices. A two-step Monte Carlo simulation (MCS) method is proposed to assess the reliability and economics of a microgrid with intermittent DGs, as well as the reliability of a distribution system with microgrids. An application to the IEEE-RBTS shows the effectiveness of the reliability and economic assessment technique with the proposed metrics, which can provide scientific and comparative information for the design and operation of microgrids.
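
The Monte Carlo core of such an assessment can be illustrated with a crude sequential simulation of one component's up/down cycles; the MTTF/MTTR values are invented, and this generic sketch is not the paper's two-step method:

import numpy as np

rng = np.random.default_rng(1)

def simulate_availability(mttf, mttr, horizon=8760.0, runs=2000):
    """Sample exponential up/down cycles for one component and return
    its mean availability over the horizon (hours)."""
    avail = []
    for _ in range(runs):
        t = up = 0.0
        while t < horizon:
            ttf = rng.exponential(mttf)
            up += min(ttf, horizon - t)
            t += ttf + rng.exponential(mttr)
        avail.append(up / horizon)
    return float(np.mean(avail))

# Hypothetical DG unit: MTTF 1000 h, MTTR 50 h.
# The analytic availability is 1000/1050 ~ 0.952; the simulation agrees.
print(simulate_availability(1000.0, 50.0))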

Journal ArticleDOI
TL;DR: A review of the literature on thermal interface material (TIM) reliability, in which a test procedure for TIM selection, based on beginning-of-life and end-of-life performance, is proposed from the information available in the open literature.

Journal ArticleDOI
TL;DR: A study of two systematic mapping studies is presented to evaluate the reliability of mapping studies and point out some challenges related to this type of study in software engineering.

Journal ArticleDOI
TL;DR: Results show that the fault handling of three- and five-level three-phase topologies permits a great increase in reliability over a “relatively” short time duration, in addition to other benefits.
Abstract: Multilevel converters have many power devices and drivers. Thus, a direct reliability calculation based only on the first failure of one of the components clearly leads them to be devalued compared to two-level converters. However, symmetrical multilevel converters such as the X-level active neutral point clamped (ANPC) family are based on imbricated and/or stacked switching cells, with an additional center tap at the dc bus in three-phase operation; several redundancies therefore appear which can be managed to increase the global reliability. For the first time, a general and theoretical methodology for calculating reliability laws and failure rates, applied to compare two-, three-, and five-level topologies, is proposed. Results show that the fault handling of three- and five-level three-phase topologies permits a great increase in reliability over a “relatively” short time duration, in addition to other benefits.
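
The effect the abstract describes can be seen in miniature with standard reliability algebra (an illustration, not the paper's ANPC-specific laws). If a converter fails at the first device failure, n devices with constant failure rate \lambda in series give R(t) = e^{-n\lambda t}; if fault handling lets the converter ride through any single device failure, the survival probability becomes

R(t) \;=\; e^{-n\lambda t} \;+\; n\,e^{-(n-1)\lambda t}\left(1 - e^{-\lambda t}\right),

which dominates the series law most strongly at short mission times, consistent with the reported reliability gain over a "relatively" short time duration.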

Journal ArticleDOI
TL;DR: The general conclusion that can be drawn from the findings of the example is that vulnerability analysis should be used to complement reliability studies, as well as other forms of probabilistic risk analysis.

Journal ArticleDOI
TL;DR: The Berg Balance Scale has acceptable reliability, although it might not detect modest, clinically important changes in balance in individual subjects; the review was only able to comment on the absolute reliability of the Berg Balance Scale among people with moderately poor to normal balance.

Proceedings Article
27 May 2013
TL;DR: This paper addresses the problem of placing controllers in SDNs, so as to maximize the reliability of control networks, and develops several placement algorithms that can significantly improve the reliability of SDN control networks.
Abstract: The Software-Defined Network (SDN) approach decouples control and forwarding planes. Such separation introduces reliability design issues of the SDN control network, since disconnection between the control and forwarding planes may lead to severe packet loss and performance degradation. This paper addresses the problem of placing controllers in SDNs, so as to maximize the reliability of control networks. After presenting a metric to characterize the reliability of SDN control networks, several placement algorithms are developed. We evaluate these algorithms and further quantify the impact of controller number on the reliability of control networks using real topologies. Our approach can significantly improve the reliability of SDN control networks without introducing unacceptable latencies.
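
A toy version of such a placement procedure is sketched below with networkx; the reliability metric (independent per-link failure probability along shortest control paths) and the greedy strategy are illustrative stand-ins for, not reproductions of, the metric and algorithms in the paper:

import networkx as nx

P_FAIL = 0.05  # assumed independent failure probability per link

def path_reliability(G, node, ctrl):
    """Reliability of the shortest control path from a switch to a
    controller, assuming independent link failures (toy metric)."""
    hops = nx.shortest_path_length(G, node, ctrl)
    return (1.0 - P_FAIL) ** hops

def greedy_placement(G, k):
    """Greedily place k controllers, each round adding the node that
    most improves the summed best-path reliability over all switches."""
    placed = []
    for _ in range(k):
        def gain(c):
            return sum(max(path_reliability(G, v, x) for x in placed + [c])
                       for v in G.nodes)
        placed.append(max((c for c in G.nodes if c not in placed), key=gain))
    return placed

G = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=42)  # stand-in topology
print(greedy_placement(G, 3))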

Book
27 Nov 2013
TL;DR: Reliability and Safety of Complex Technical Systems and Processes offers a comprehensive approach to the analysis, identification, evaluation, prediction and optimization of complex technical systems' operation, reliability and safety.
Abstract: Reliability and Safety of Complex Technical Systems and Processes offers a comprehensive approach to the analysis, identification, evaluation, prediction and optimization of complex technical systems' operation, reliability and safety. Its main emphasis is on multistate systems with ageing components, changes to their structure, and their components' reliability and safety parameters during the operation processes. Reliability and Safety of Complex Technical Systems and Processes presents integrated models for the reliability, availability and safety of complex non-repairable and repairable multistate technical systems, with reference to their operation processes and their practical applications to real industrial systems. The authors consider variables in different operation states, reliability and safety structures, and the reliability and safety parameters of components, as well as suggesting a cost analysis for complex technical systems. Researchers and industry practitioners will find information on a wide range of complex technical systems in Reliability and Safety of Complex Technical Systems and Processes. It may prove an easy-to-use guide to reliability and safety evaluations of real complex technical systems, both during their operation and at the design stages.

Journal ArticleDOI
TL;DR: In this paper, a response surface is built from an initial Latin Hypercube Sampling (LHS), where the most significant terms are chosen using statistical criteria and a cross-validation method.

Journal ArticleDOI
TL;DR: This paper reports a laboratory-style experiment in which several cases of flood forecasts and a choice of actions to take were presented, as part of a game, to participants acting as decision-makers; the results are presented and discussed.
Abstract: The last decade has seen growing research in producing probabilistic hydro-meteorological forecasts and increasing their reliability. This followed the promise that, supplied with information about uncertainty, people would make better risk-based decisions. In recent years, therefore, research and operational developments have also started focusing attention on ways of communicating the probabilistic forecasts to decision-makers. Communicating probabilistic forecasts includes preparing tools and products for visualisation, but also requires understanding how decision-makers perceive and use uncertainty information in real time. At the EGU General Assembly 2012, we conducted a laboratory-style experiment in which several cases of flood forecasts and a choice of actions to take were presented as part of a game to participants, who acted as decision-makers. Answers were collected and analysed. In this paper, we present the results of this exercise and discuss whether we indeed make better decisions on the basis of probabilistic forecasts.
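
A standard way to formalise the decision the participants faced is the classic cost-loss model (a textbook framing, not necessarily the scoring used in the experiment). If taking protective action costs C and inaction incurs a loss L when the flood occurs, then under a forecast probability p the expected cost of inaction is pL, so a risk-neutral decision-maker should act whenever

p \, L \;>\; C \quad\Longleftrightarrow\quad p \;>\; C/L.

A reliable probabilistic forecast lets each decision-maker tune this threshold to their own cost-loss ratio, which is the promise referred to at the start of the abstract.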

Journal ArticleDOI
TL;DR: The meta-analytic test–retest reliabilities of the test scores ranged from adequate to high and the reliability values were largely robust across factors such as age, clinical diagnosis, and the use of alternate forms.
Abstract: Test–retest reliability is an important psychometric property relevant to assessment instruments typically used in neuropsychological assessment. This review presents a quantitative summary of test–retest reliability coefficients for a variety of widely used neuropsychological measures. In general, the meta-analytic test–retest reliabilities of the test scores ranged from adequate to high (i.e., r=.7 and higher). Furthermore, the reliability values were largely robust across factors such as age, clinical diagnosis, and the use of alternate forms. The values for some of the memory and executive functioning scores were lower (i.e., less than r=.7). Some of the possible reasons for these lower values include ceiling effects, practice effects, and across time variability in cognitive abilities measured by those tests. In general, neuropsychologists who use these measures in their assessments can be encouraged by the magnitude of the majority of the meta-analytic test–retest correlations obtained.