
Showing papers on "Reliability (statistics) published in 2013"


Book
09 Jun 2013
TL;DR: This chapter discusses the application of the Binomial Distribution to Network Modelling and Evaluation of Simple Systems, and System Reliability Evaluation Using Probability Distributions.
Abstract: Introduction. Basic Probability Theory. Application of the Binomial Distribution. Network Modelling and Evaluation of Simple Systems. Network Modelling and Evaluation of Complex Systems. Probability Distributions in Reliability Evaluation. System Reliability Evaluation Using Probability Distributions. Monte Carlo Simulation. Epilogue.

1,062 citations
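As an illustration of the kind of calculation covered by the binomial-distribution and simple-system chapters, the sketch below evaluates series, parallel, and k-out-of-n configurations. The function names and numbers are invented for illustration, not taken from the book:

```python
from math import comb

def series_reliability(rs):
    """Series system: all components must work."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel_reliability(rs):
    """Parallel system: at least one component must work."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

def k_out_of_n_reliability(n, k, r):
    """k-out-of-n system of identical, independent components with
    reliability r: binomial probability of k or more successes."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Example: 2-out-of-3 redundancy with 0.9 components
print(k_out_of_n_reliability(3, 2, 0.9))  # 0.972
```

The 2-out-of-3 result follows directly from the binomial terms: 3(0.9²)(0.1) + 0.9³ = 0.243 + 0.729 = 0.972.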


Journal ArticleDOI
TL;DR: A comprehensive review of reliability assessment and improvement of power electronic systems at three levels: 1) metrics and methodologies for reliability assessment of existing systems; 2) reliability improvement of existing systems by means of algorithmic solutions without changing the hardware; and 3) reliability-oriented design solutions based on fault-tolerant operation of the overall system.
Abstract: With the widespread application of power electronic systems across many different industries, their reliability is being studied extensively. This paper presents a comprehensive review of reliability assessment and improvement of power electronic systems at three levels: 1) metrics and methodologies for reliability assessment of existing systems; 2) reliability improvement of existing systems by means of algorithmic solutions without changing the hardware; and 3) reliability-oriented design solutions based on fault-tolerant operation of the overall system. The intent of this review is to provide a clear picture of the landscape of reliability research in power electronics. The limitations of the current research are identified and directions for future research are suggested.

681 citations


Journal ArticleDOI
TL;DR: Reliability and similarity of resting-state functional connectivity can be greatly improved by increasing scan length; both the increase in the number of volumes and the length of time over which those volumes were acquired drove this increase in reliability.

668 citations


Journal ArticleDOI
TL;DR: In this paper, the main research instruments (questionnaire, interview and classroom observation) usually used in the mixed method designs are presented and elaborated on, and various ways of boosting the validity and reliability of the data and instruments are delineated at length.
Abstract: The mixed method approaches have recently risen to prominence. The reason that more researchers are opting for these types of research is that both qualitative and quantitative data are simultaneously collected, analyzed and interpreted. In this article the main research instruments (questionnaire, interview and classroom observation) usually used in the mixed method designs are presented and elaborated on. It is believed that using different types of procedures for collecting data and obtaining that information through different sources (learners, teachers, program staff, etc.) can augment the validity and reliability of the data and their interpretation. Therefore, the various ways of boosting the validity and reliability of the data and instruments are delineated at length. Finally, an outline of reporting the findings in the mixed method approaches is sketched out. It is believed that this article can be useful and beneficial to the researchers in general and postgraduate students in particular who want to start or are involved in the process of conducting research.

550 citations


Journal ArticleDOI
TL;DR: An original and easily implementable method called AK-IS, for Active learning and Kriging-based Importance Sampling, built on the AK-MCS algorithm, which enables the correction or validation of the FORM approximation with only a few mechanical model computations.

458 citations


Journal ArticleDOI
TL;DR: Overall findings indicated that BRFSS prevalence rates were comparable to other national surveys which rely on self-reports, although specific differences are noted for some categories of response.
Abstract: Background In recent years, response rates on telephone surveys have been declining. Rates for the Behavioral Risk Factor Surveillance System (BRFSS) have also declined, prompting the use of new methods of weighting and the inclusion of cell phone sampling frames. A number of scholars and researchers have conducted studies of the reliability and validity of the BRFSS estimates in the context of these changes. As the BRFSS makes changes in its methods of sampling and weighting, a review of reliability and validity studies of the BRFSS is needed.

409 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose to use a Kriging surrogate for the performance function as a means to build a quasi-optimal importance sampling density, which can be applied to analytical and finite element reliability problems and proves efficient up to 100 basic random variables.

389 citations


Journal ArticleDOI
TL;DR: Evidence theory, probability bounds analysis with p-boxes, and fuzzy probabilities are discussed with emphasis on their key features and on their relationships to one another.

382 citations


Journal ArticleDOI
09 Sep 2013-PLOS ONE
TL;DR: A case is made for clinicians to consider the measurement error (ME) indices Coefficient of Repeatability (CR) and Smallest Real Difference (SRD) over relative reliability coefficients such as Pearson's r and the Intraclass Correlation Coefficient (ICC) when selecting tools to measure change and inferring that change is true.
Abstract: The use of standardised tools is an essential component of evidence-based practice. Reliance on standardised tools places demands on clinicians to understand their properties, strengths, and weaknesses in order to interpret results and make clinical decisions. This paper makes a case for clinicians to consider the measurement error (ME) indices Coefficient of Repeatability (CR) and Smallest Real Difference (SRD) over relative reliability coefficients such as Pearson's r and the Intraclass Correlation Coefficient (ICC) when selecting tools to measure change and inferring that change is true. The authors present statistical methods that are part of the current approach to evaluating test–retest reliability of assessment tools and outcome measurements. Selected examples from a previous test–retest study are used to elucidate the added advantages of knowing the ME of an assessment tool in clinical decision making. The CR is computed in the same units as the assessment tool and sets the boundary of the minimal detectable true change that can be measured by the tool.

376 citations
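The Coefficient of Repeatability discussed above can be computed directly from test–retest differences in the Bland–Altman style, CR = 1.96 × SD of the differences. The scores below are fabricated for illustration; they are not from the cited study:

```python
from statistics import stdev

def coefficient_of_repeatability(test, retest):
    """CR = 1.96 * SD of the test–retest differences (Bland–Altman).
    A change smaller than CR cannot be distinguished from measurement
    error, so CR marks the minimal detectable true change."""
    diffs = [a - b for a, b in zip(test, retest)]
    return 1.96 * stdev(diffs)

# Hypothetical test–retest scores on some clinical scale
t1 = [10.0, 12.5, 9.0, 14.0, 11.0]
t2 = [10.5, 12.0, 9.5, 13.0, 11.5]
print(coefficient_of_repeatability(t1, t2))
```

Because CR is in the same units as the tool, a patient's observed change can be compared against it directly, which is the clinical advantage the paper argues for.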


Journal ArticleDOI
TL;DR: This proposed anatomically based classification system provides a consistent description of the various osteotomies performed in spinal deformity correction surgery and will provide a common frame for osteotomy assessment and permit comparative analysis of different treatments.
Abstract: Background Global sagittal malalignment is significantly correlated with health-related quality-of-life scores in the setting of spinal deformity. In order to address rigid deformity patterns, the use of spinal osteotomies has seen a substantial increase. Unfortunately, variations of established techniques and hybrid combinations of osteotomies have made comparisons of outcomes difficult. Objective To propose a classification system of anatomically-based spinal osteotomies and provide a common language among spine specialists. Methods The proposed classification system is based on 6 anatomic grades of resection (1 through 6) corresponding to the extent of bone resection and increasing degree of destabilizing potential. In addition, a surgical approach modifier is added (posterior approach or combined anterior and posterior approaches). Reliability of the classification system was evaluated by an analysis of 16 clinical cases, rated 2 times by 8 different readers, and calculation of Fleiss kappa coefficients. Results Intraobserver reliability was classified as "almost perfect"; Fleiss kappa coefficient averaged 0.96 (range, 0.92-1.0) for resection type and 0.90 (0.71-1.0) for the approach modifier. Results from the interobserver reliability for the classification were 0.96 for resection type and 0.88 for the approach modifier. Conclusion This proposed anatomically based classification system provides a consistent description of the various osteotomies performed in spinal deformity correction surgery. The reliability study confirmed that the classification is simple and consistent. Further development of its use will provide a common frame for osteotomy assessment and permit comparative analysis of different treatments.

339 citations
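The Fleiss kappa coefficients reported in the reliability study can be reproduced for any rating matrix. A self-contained sketch follows; the rating counts are invented for illustration, not the study's 16-case data:

```python
def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters assigning subject i to category j.
    Every subject must be rated by the same number of raters."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])
    # Mean observed per-subject agreement
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_subjects
    # Chance agreement from marginal category proportions
    p_j = [sum(row[j] for row in ratings) / (n_subjects * n_raters)
           for j in range(n_categories)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# 4 subjects, 3 raters, 2 categories (hypothetical)
m = [[3, 0], [0, 3], [2, 1], [3, 0]]
print(fleiss_kappa(m))  # 0.625
```

Perfect agreement on every subject yields kappa = 1.0, which is why values near 0.96 are labeled "almost perfect" on the usual interpretation scale.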


Journal ArticleDOI
TL;DR: Variable agreement and a lack of evidence that the NOS can identify studies with biased results underscore the need for revisions and more detailed guidance for systematic reviewers using the NOS.

Journal ArticleDOI
TL;DR: An algorithm which uses a minimum of two and a maximum of three questions to facilitate an adequate and efficient evaluation of the Karnofsky Performance Status is proposed.
Abstract: For over 60 years, the Karnofsky Performance Status (KPS) has proven itself a valuable tool with which to perform measurement of and comparison between the functional statuses of individual patients. In recent decades conditions for patients have changed, and so too has the KPS undergone several adjustments since its initial development. The most important works regarding the KPS tend to focus upon a variety of issues, including but not limited to reliability, validity and health-related quality of life. Also discussed is the question of what quantity the KPS may in fact be said to measure. The KPS is increasingly used as a prognostic factor in patient assessment. Thus, questions regarding if and how it affects survival are relevant. In this paper, we propose an algorithm which uses a minimum of two and a maximum of three questions to facilitate an adequate and efficient evaluation of the KPS. This review honors the original intention of the discoverer and gives an overview of adaptations made in recent years. The proposed algorithm suggests specific updates with the goal of ensuring continued adequacy and expediency in the determination of the KPS.

Journal ArticleDOI
TL;DR: The Food Intake LEVEL Scale (FILS) seems to have fair reliability and validity as a practical tool for assessing the severity of dysphagia, and further study on the reliability, validity, and sensitivity of the FILS compared with the FOIS is needed.

Journal ArticleDOI
TL;DR: A stochastic process (Wiener process) combined with a data analysis method (Principal Component Analysis) is proposed to model the deterioration of the components and to estimate the remaining useful life (RUL) in a case study.
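For a drift-dominated Wiener degradation process, the expected remaining useful life has a simple closed form: if degradation X(t) has drift μ and the failure threshold is D, the mean first-passage time from level x is (D − x)/μ. The sketch below (all parameters invented, not from the paper) pairs that formula with a Monte Carlo path simulation:

```python
import random

def mean_rul(current_level, threshold, drift):
    """Expected remaining useful life for a Wiener degradation process
    with positive drift (mean of the inverse-Gaussian first-passage time)."""
    return (threshold - current_level) / drift

def simulate_first_passage(x0, threshold, drift, sigma, dt=0.01, seed=0):
    """Euler–Maruyama simulation of one degradation path until it first
    crosses the failure threshold; returns the crossing time."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while x < threshold:
        x += drift * dt + sigma * dt**0.5 * rng.gauss(0, 1)
        t += dt
    return t

print(mean_rul(2.0, 10.0, 0.5))  # 16.0
```

The simulated crossing times scatter around the analytic mean, with spread controlled by the diffusion parameter sigma.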

Journal ArticleDOI
TL;DR: A systematic and optimized approach for designing microgrids taking into account system reliability- and supply-security-related aspects is presented, and the effect of optimization coefficients on the design and the robustness of the algorithm are investigated using sensitivity studies.
Abstract: Microgrids are known as clusters of distributed energy resources serving a group of distributed loads in grid-connected and isolated grid modes. Nowadays, the concept of microgrids has become a key subject in the smart grid area, demanding a systematic procedure for their optimal construction. According to the IEEE Std 1547.4, large distribution systems can be clustered into a number of microgrids to facilitate powerful control and operation infrastructure in future distribution systems. However, clustering large systems into a set of microgrids with high reliability and security is not reported in the current literature. To fill this gap, this paper presents a systematic and optimized approach for designing microgrids taking into account system reliability- and supply-security-related aspects. The optimum design considers sustained and temporary faults, for system reliability via a combined probabilistic reliability index, and real and reactive power balance, for supply security. The loads are assumed to be variable and different distributed generation (DG) technologies are considered. Conceptual design, problem formulation and solution algorithms are presented in this paper. The well-known PG&E 69-bus distribution system is selected as the test system. The effect of optimization coefficients on the design and the robustness of the algorithm are investigated using sensitivity studies.

Journal ArticleDOI
TL;DR: Psychometric indices of the Insomnia Severity Index (ISI) are examined to identify individuals with clinically significant insomnia in primary care settings and suggest that the ISI is a valid screening instrument for detecting insomnia among patients consulting inPrimary care settings.
Abstract: Background: Although insomnia is a prevalent complaint with significant consequences on quality of life, health, and health care utilization, it often remains undiagnosed and untreated in primary care settings. Brief, reliable, and valid instruments are needed to facilitate screening for insomnia in general practice. This study examined psychometric indices of the Insomnia Severity Index (ISI) to identify individuals with clinically significant insomnia in primary care settings. Methods: A sample of 410 patients recruited from 6 general medical clinics completed the ISI before their appointment with a primary care physician. A subsample of 101 individuals also completed a semistructured clinical interview by telephone to determine the presence or absence of an insomnia disorder. Reliability and validity indices were computed, as was the discriminative capacity of each individual item. Convergence between ISI total score and the diagnosis derived from the interview was investigated. Receiver operator characteristic analyses were used to determine the optimal ISI cutoff score that correctly identified individuals with an insomnia disorder. Results: ISI internal consistency was excellent (Cronbach α = 0.92), and each individual item showed adequate discriminative capacity (r = 0.65–0.84). The area under the receiver operator characteristic curve was 0.87 and suggested that a cutoff score of 14 was optimal (82.4% sensitivity, 82.1% specificity, and 82.2% agreement) for detecting clinical insomnia. Agreement between the ISI cut score and the diagnostic interview was moderate (κ = 0.62). Conclusions: These findings suggest that the ISI is a valid screening instrument for detecting insomnia among patients consulting in primary care settings.
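The cutoff statistics reported for the ISI (sensitivity, specificity, and agreement at a score of 14) follow directly from a 2×2 classification table against the interview diagnosis. A minimal sketch, with fabricated scores and diagnoses rather than the study's data:

```python
def cutoff_performance(scores, has_disorder, cutoff):
    """Classify score >= cutoff as positive and compare with the
    interview-based diagnosis; return (sensitivity, specificity, agreement)."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, has_disorder))
    tn = sum(s < cutoff and not d for s, d in zip(scores, has_disorder))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, has_disorder))
    fn = sum(s < cutoff and d for s, d in zip(scores, has_disorder))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    agreement = (tp + tn) / len(scores)
    return sensitivity, specificity, agreement

# Hypothetical ISI totals and interview diagnoses (True = insomnia disorder)
scores = [20, 16, 14, 13, 9, 22, 8, 15, 12, 6]
dx = [True, True, True, False, False, True, False, True, True, False]
print(cutoff_performance(scores, dx, cutoff=14))
```

Sweeping the cutoff over all observed scores and plotting sensitivity against 1 − specificity yields the receiver operator characteristic curve used to pick the optimal threshold.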

Journal ArticleDOI
TL;DR: Good-quality subjective and objective data suggest adequate construct validity for each of the CT instruments, but a major limitation of this literature is the lack of studies assessing the predictive validity of these instruments.
Abstract: The accurate measurement of circadian typology (CT) is critical because the construct has implications for a number of health disorders. In this review, we focus on the evidence to support the reliability and validity of the more commonly used CT scales: the Morningness-Eveningness Questionnaire (MEQ), reduced Morningness-Eveningness Questionnaire (rMEQ), the Composite Scale of Morningness (CSM), and the Preferences Scale (PS). In addition, we also consider the Munich ChronoType Questionnaire (MCTQ). In terms of reliability, the MEQ, CSM, and PS consistently report high levels of reliability (>0.80), whereas the reliability of the rMEQ is satisfactory. The stability of these scales is sound at follow-up periods up to 13 mos. The MCTQ is not a scale; therefore, its reliability cannot be assessed. Although it is possible to determine the stability of the MCTQ, these data are yet to be reported. Validity must be given equal weight in assessing the measurement properties of CT instruments. Most commonly repor...

Journal ArticleDOI
TL;DR: This study identified testing protocols that improve the reliability of measuring gait variability and recommends using a continuous walking protocol and to collect no fewer than 30 steps.

Journal ArticleDOI
01 Feb 2013-Stroke
TL;DR: Despite clinical heterogeneity and variable methodological quality in the evidence base, the BI has excellent inter-rater reliability for standard administration after stroke and seems an appropriate outcome measure for stroke trials and practice.
Abstract: Background and Purpose—The Barthel Index (BI) is a 10-item measure of activities of daily living which is frequently used in clinical practice and as a trial outcome measure in stroke. We sought to describe the reliability (interobserver variability) of standard BI in stroke cohorts using systematic review and meta-analysis of published studies. Methods—Two assessors independently searched various multidisciplinary electronic databases from inception to April 2012 inclusive. Inclusion criteria comprised: original research, human stroke participants, and inter-rater reliability data on equivalent methods of BI administration. Manuscripts were reviewed against prespecified inclusion criteria. Primary outcome for meta-analysis was reliability, measured by weighted κ (κw). Results—From 20 210 titles, 306 abstracts were reviewed, 12 studies met inclusion criteria, and 10 were included in meta-analysis (n=543 participants; range of participants in studies, 7–21). There was substantial clinical heterogeneity wit...

Journal ArticleDOI
TL;DR: In this article, the authors conducted a systematic review on measurement properties of outcome measurements for clinical signs of atopic dermatitis (AD) and provided evidence-based recommendations for the measurement of clinical signs in AD trials and to inform the Harmonising Outcome Measures for Atopic Dermatitis Initiative.
Abstract: Background Clinical signs are a core outcome domain for atopic dermatitis (AD) trials. The current lack of standardization of outcome measures in AD trials hampers evidence-based communication. Objective We sought to provide evidence-based recommendations for the measurement of clinical signs in AD trials and to inform the Harmonising Outcome Measures for Atopic Dermatitis Initiative. Methods We conducted a systematic review on measurement properties of outcome measurements for clinical signs of AD. We systematically searched MEDLINE and Embase (until October 1, 2012) for validation studies on instruments measuring the clinical signs of AD. Grading of the truth, discrimination, and feasibility of scales; methodological study quality; and recommendations were based on predefined criteria. Results Sixteen eligible instruments were identified, of which 2 were best validated. The Eczema Area and Severity Index has adequate validity, responsiveness, internal consistency, intraobserver reliability, and intermediate interobserver reliability but unclear interpretability and feasibility. The Severity Scoring of Atopic Dermatitis Index (SCORAD) has adequate validity, responsiveness, interobserver reliability, and interpretability and unclear intraobserver reliability. Only the objective SCORAD (ie, the clinical signs domain of the SCORAD) is internally consistent. The Six Area, Six Sign Atopic Dermatitis Index severity score and Three Item Severity Score fulfill some quality criteria, but the performance in other required measurement properties is unclear. The Patient-oriented Eczema Measure is reliable and responsive but has inadequate content validity to assess clinical signs of AD. The remaining 11 scales have either (almost) not been validated or performed inadequately. Conclusions The Eczema Area and Severity Index and SCORAD are the best instruments to assess the clinical signs of AD. 
The other 14 instruments identified are (currently) not recommended because of unclear or inadequate measurement properties.

Proceedings ArticleDOI
29 May 2013
TL;DR: In this article, the authors introduce the most prominent reliability concerns from today's point of view, roughly recapitulate the progress in the community so far, and suggest a way of coping with reliability challenges in upcoming technology nodes.
Abstract: Reliability concerns due to technology scaling have been a major focus of researchers and designers for several technology nodes. Therefore, many new techniques for enhancing and optimizing reliability have emerged, particularly within the last five to ten years. This perspective paper introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. The focus of this paper is on perspective trends, from the industrial as well as the academic point of view, that suggest a way of coping with reliability challenges in upcoming technology nodes.

Proceedings ArticleDOI
07 Jul 2013
TL;DR: The Fog computing paradigm is considered as a non-trivial extension of the Cloud, and the reliability of networks of smart devices is discussed, showing that designing a reliable Fog computing platform is feasible.
Abstract: This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm is considered as a non-trivial extension of the Cloud, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of the grid and cloud paradigms with those of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new methodology for calculating the mean time between failure (MTBF) of a photovoltaic module-integrated inverter (PV-MII).
Abstract: This paper proposes a new methodology for calculating the mean time between failure (MTBF) of a photovoltaic module-integrated inverter (PV-MII). Based on a stress-factor reliability methodology, the proposed technique applies a usage model for the inverter to determine the statistical distribution of thermal and electrical stresses for the electrical components. The salient feature of the proposed methodology is taking into account the operating environment volatility of the module-integrated electronics to calculate the MTBF of the MII. This leads to more realistic assessment of reliability than if a single worst case or typical operating point was used. Measured data (module temperature and insolation level) are used to experimentally verify the efficacy of the methodology. The proposed methodology is used to examine the reliability of six different candidate inverter topologies for a PV-MII. This study shows the impact of each component on the inverter reliability, in particular, the power decoupling capacitors. The results confirm that the electrolytic capacitor is the most vulnerable component with the lowest MTBF, but more importantly provide a quantified assessment of realistic MTBF under expected operating conditions rather than a single worst case operating point, which may have a low probability of occurrence.
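The stress-factor idea in the abstract amounts to averaging component failure rates over the usage distribution of operating points rather than assuming a single worst case. A simplified sketch with invented failure-rate data (not the paper's measured values):

```python
def mtbf_from_usage(operating_points):
    """operating_points: list of (probability, failure_rate_per_hour)
    pairs describing how often the inverter sits at each thermal and
    electrical stress level. MTBF = 1 / expected failure rate."""
    avg_rate = sum(p * lam for p, lam in operating_points)
    return 1.0 / avg_rate

# Hypothetical usage model: mild, typical, and worst-case stress levels
usage = [(0.6, 1e-6), (0.35, 4e-6), (0.05, 2e-5)]
print(mtbf_from_usage(usage))  # usage-weighted MTBF in hours
print(1.0 / 2e-5)              # worst-case-only MTBF in hours
```

Because the worst-case operating point occurs only a small fraction of the time, the usage-weighted MTBF is several times larger than the worst-case-only figure, which is the paper's central argument for a more realistic assessment.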

Book
24 Aug 2013
TL;DR: The hazard, mean residual, variance residual, and percentile residual quantile functions, their mutual relationships, and expressions for the quantile functions in terms of these functions are discussed, along with some theoretical results relating to the Hankin and Lee (2006) lambda distribution.
Abstract: This book provides a fresh approach to reliability theory, an area that has gained increasing relevance in fields from statistics and engineering to demography and insurance. Its innovative use of quantile functions gives an analysis of lifetime data that is generally simpler, more robust, and more accurate than the traditional methods, and opens the door for further research in a wide variety of fields involving statistical analysis. In addition, the book can be used to good effect in the classroom as a text for advanced undergraduate and graduate courses in Reliability and Statistics.

01 Apr 2013
TL;DR: The PROV Family of Documents defines a model, corresponding serializations and other supporting definitions to enable the inter-operable interchange of provenance information in heterogeneous environments such as the Web.
Abstract: Provenance is information about entities, activities, and people involved in producing a piece of data or thing, which can be used to form assessments about its quality, reliability or trustworthiness. The PROV Family of Documents defines a model, corresponding serializations and other supporting definitions to enable the inter-operable interchange of provenance information in heterogeneous environments such as the Web. This document provides an overview of this family of documents.

Journal ArticleDOI
TL;DR: In this article, a non-probabilistic reliability model is given for structures with convex model uncertainty, which is defined as a ratio of the multidimensional volume falling into the reliability domain to the one of the whole model.

Journal ArticleDOI
TL;DR: In this article, the authors point out several flaws in Frahm's paper, and provide some examples of PXRF measurements that are valid and reliable and conform to international standards as published.

Journal ArticleDOI
TL;DR: In this article, the authors propose a series of new metrics for the reliability and economic assessment of microgrids in distribution systems, including reliability parameters for a microgrid in the islanded mode, indices indicating distributed generation (DG) and load characteristics in the microgrid, microgrid economic indices, and customer-based microgrid reliability indices.
Abstract: This paper proposes a series of new metrics for the reliability and economic assessment of microgrids in distribution systems. These metrics include reliability parameters for a microgrid in the islanded mode, indices indicating distributed generation (DG) and load characteristics in the microgrid, microgrid economic indices, and customer-based microgrid reliability indices. A two-step Monte Carlo simulation (MCS) method is proposed to assess the reliability and economics of a microgrid with intermittent DGs, as well as the reliability of a distribution system with microgrids. An application to the IEEE-RBTS shows the effectiveness of the reliability and economic assessment technique with the proposed metrics, which can provide scientific and comparative information for the design and operation of microgrids.
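The Monte Carlo simulation step can be illustrated with a toy sequential sampler estimating the unavailability of an islanded load supplied by two DG units. All rates below are invented for illustration and much simpler than the paper's two-step method:

```python
import random

def estimate_unavailability(fail_rate, repair_rate, n_units, hours, seed=1):
    """Crude two-state sequential MCS: sample each DG unit up/down hour
    by hour and count the fraction of hours with no unit available."""
    rng = random.Random(seed)
    up = [True] * n_units
    outage_hours = 0
    for _ in range(hours):
        for i in range(n_units):
            if up[i]:
                up[i] = rng.random() >= fail_rate   # unit may fail
            else:
                up[i] = rng.random() < repair_rate  # unit may be repaired
        if not any(up):
            outage_hours += 1
    return outage_hours / hours

# Hypothetical per-hour failure and repair probabilities
u = estimate_unavailability(fail_rate=0.001, repair_rate=0.1,
                            n_units=2, hours=200_000)
print(u)  # near the analytic value (0.001 / 0.101)**2 ≈ 9.8e-5
```

Customer- and system-level indices of the kind the paper defines are then accumulated from the same sampled up/down histories, rather than from a single analytic formula.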

Journal ArticleDOI
TL;DR: A review of the literature on TIM reliability is presented, and a test procedure for TIM selection, covering beginning- and end-of-life performance, is proposed based on the information available in the open literature.

Journal ArticleDOI
TL;DR: A study of two systematic mapping studies is presented to evaluate the reliability of mapping studies and point out some challenges related to this type of study in software engineering.