scispace - formally typeset

Showing papers on "Reliability (statistics)" published in 2020


Book
23 Dec 2020
TL;DR: In this paper, the authors reviewed the evidence about the relationship between water quantity, water accessibility and health, including the effects of water reliability, continuity and price on water use, and provided guidance on domestic water supply to ensure beneficial health outcomes.
Abstract: Sufficient quantities of water for household use, including for drinking, food preparation and hygiene, are needed to protect public health and for well-being and prosperity. This second edition reviews the evidence about the relationships between water quantity, water accessibility and health. The effects of water reliability, continuity and price on water use, are also covered. Updated guidance, including recommended targets, is provided on domestic water supply to ensure beneficial health outcomes.

917 citations


Journal ArticleDOI
TL;DR: Cronbach's alpha (α) is a widely used measure of reliability that quantifies the amount of random measurement error in a sum score or average generated by a multi-item measurement scale.
Abstract: Cronbach’s alpha (α) is a widely-used measure of reliability used to quantify the amount of random measurement error that exists in a sum score or average generated by a multi-item measurement scale.
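Cronbach's alpha has a simple closed form: for k items, alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score). A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)        # variance of the sum score
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

Perfectly parallel items yield alpha = 1; adding independent noise to the items drives alpha toward 0.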

758 citations


Journal ArticleDOI
TL;DR: In this article, a meta-analysis of 90 experiments (N = 1,008) revealed poor overall reliability (mean intraclass correlation coefficient (ICC) = .397).
Abstract: Identifying brain biomarkers of disease risk is a growing priority in neuroscience. The ability to identify meaningful biomarkers is limited by measurement reliability; unreliable measures are unsuitable for predicting clinical outcomes. Measuring brain activity using task functional MRI (fMRI) is a major focus of biomarker development; however, the reliability of task fMRI has not been systematically evaluated. We present converging evidence demonstrating poor reliability of task-fMRI measures. First, a meta-analysis of 90 experiments (N = 1,008) revealed poor overall reliability (mean intraclass correlation coefficient (ICC) = .397). Second, the test-retest reliabilities of activity in a priori regions of interest across 11 common fMRI tasks collected by the Human Connectome Project (N = 45) and the Dunedin Study (N = 20) were poor (ICCs = .067-.485). Collectively, these findings demonstrate that common task-fMRI measures are not currently suitable for brain biomarker discovery or for individual-differences research. We review how this state of affairs came to be and highlight avenues for improving task-fMRI reliability.
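The ICC reported here can be computed from a one-way random-effects ANOVA decomposition: between-subject variance relative to total variance. A minimal sketch of ICC(1,1) for a subjects × sessions matrix (the function name and toy data are illustrative, not the authors' meta-analytic pipeline):

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_sessions) matrix."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)           # between-subject mean square
    msw = ((Y - row_means[:, None]) ** 2).sum() / (n * (k - 1))    # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)
```

Identical measurements across sessions give ICC = 1; values near the .397 above indicate that within-subject (session-to-session) variance is of the same order as between-subject variance.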

365 citations


Journal ArticleDOI
TL;DR: A comprehensive review of fault detection and diagnosis techniques for high-speed trains is presented, with a focus on data-driven methods, which have received increasing attention in transportation fields over the past ten years.
Abstract: High-speed trains have become one of the most important and advanced branches of intelligent transportation, yet their reliability and safety are not yet mature enough to keep pace with other aspects. The first objective of this paper is to present a comprehensive review of the fault detection and diagnosis (FDD) techniques for high-speed trains. The second purpose of this work is, motivated by the pros and cons of the FDD methods for high-speed trains, to provide researchers and practitioners with informative guidance. Then, the application of FDD for high-speed trains is presented using data-driven methods, which have received increasing attention in transportation fields over the past ten years. Finally, challenges and promising issues for future investigation are discussed.

239 citations


Journal ArticleDOI
TL;DR: It is suggested that weak correlations between self-report and behavioral measures of the same construct result from the poor reliability of many behavioral measures and the distinct response processes involved in the two measurement types.

234 citations


Journal ArticleDOI
TL;DR: Commercial wearable devices are accurate for measuring step count and heart rate in laboratory-based settings, but this varies by the manufacturer and device type; no brand measured energy expenditure accurately.
Abstract: Background: Consumer-wearable activity trackers are small electronic devices that record fitness and health-related measures. Objective: The purpose of this systematic review was to examine the validity and reliability of commercial wearables in measuring step count, heart rate, and energy expenditure. Methods: We identified devices to be included in the review. Database searches were conducted in PubMed, Embase, and SPORTDiscus, and only articles published in the English language up to May 2019 were considered. Studies were excluded if they did not identify the device used or if they did not examine the validity or reliability of the device. Studies involving the general population and all special populations were included. We operationalized validity as criterion validity (as compared with other measures) and construct validity (degree to which the device is measuring what it claims). Reliability measures focused on intradevice and interdevice reliability. Results: We included 158 publications examining nine different commercial wearable device brands. Fitbit was by far the most studied brand. In laboratory-based settings, Fitbit, Apple Watch, and Samsung appeared to measure steps accurately. Heart rate measurement was more variable, with Apple Watch and Garmin being the most accurate and Fitbit tending toward underestimation. For energy expenditure, no brand was accurate. We also examined validity between devices within a specific brand. Conclusions: Commercial wearable devices are accurate for measuring steps and heart rate in laboratory-based settings, but this varies by the manufacturer and device type. Devices are constantly being upgraded and redesigned to new models, suggesting the need for more current reviews and research.

225 citations


Journal ArticleDOI
TL;DR: The qualitative data analysis method of content analysis is described, which can be useful to pharmacy educators because of its application in the investigation of a wide variety of data sources, including textual, visual, and audio files.
Abstract: Objective. In the course of daily teaching responsibilities, pharmacy educators collect rich data that can provide valuable insight into student learning. This article describes the qualitative data analysis method of content analysis, which can be useful to pharmacy educators because of its application in the investigation of a wide variety of data sources, including textual, visual, and audio files. Findings. Both manifest and latent content analysis approaches are described, with several examples used to illustrate the processes. This article also offers insights into the variety of relevant terms and visualizations found in the content analysis literature. Finally, common threats to the reliability and validity of content analysis are discussed, along with suitable strategies to mitigate these risks during analysis. Summary. This review of content analysis as a qualitative data analysis method will provide clarity and actionable instruction for both novice and experienced pharmacy education researchers.

164 citations


Journal ArticleDOI
20 Apr 2020-PLOS ONE
TL;DR: Despite the brief, non-standard nature of the UK Biobank cognitive tests, some tests showed substantial concurrent validity and test-retest reliability; these psychometric results provide currently lacking information on the validity of the UK Biobank cognitive tests.
Abstract: UK Biobank is a health resource with data from over 500,000 adults. The cognitive assessment in UK Biobank is brief and bespoke, and is administered without supervision on a touchscreen computer. Psychometric information on the UK Biobank cognitive tests is limited. Despite the non-standard nature of these tests and the limited psychometric information, the UK Biobank cognitive data have been used in numerous scientific publications. The present study examined the validity and short-term test-retest reliability of the UK Biobank cognitive tests. A sample of 160 participants (mean age = 62.59, SD = 10.24) was recruited who completed the UK Biobank cognitive assessment and a range of well-validated cognitive tests (‘reference tests’). Fifty-two participants returned 4 weeks later to repeat the UK Biobank tests. Correlations were calculated between UK Biobank tests and reference tests. Two measures of general cognitive ability were created by entering scores on the UK Biobank cognitive tests, and scores on the reference tests, respectively, into separate principal component analyses and saving scores on the first principal component. Four-week test-retest correlations were calculated for UK Biobank tests. UK Biobank cognitive tests showed a range of correlations with their respective reference tests, i.e. those tests that are thought to assess the same underlying cognitive ability (mean Pearson r = 0.53, range = 0.22 to 0.83, p≤.005). The measure of general cognitive ability based on the UK Biobank cognitive tests correlated at r = 0.83 (p < .001) with a measure of general cognitive ability created using the reference tests. Four-week test-retest reliability of the UK Biobank tests was moderate-to-high (mean Pearson r = 0.55, range = 0.40 to 0.89, p≤.003). Despite the brief, non-standard nature of the UK Biobank cognitive tests, some tests showed substantial concurrent validity and test-retest reliability.
These psychometric results provide currently-lacking information on the validity of the UK Biobank cognitive tests.
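The "general cognitive ability" measures described above are first-principal-component scores. A minimal sketch of that step (the function name and toy data are illustrative, not the study's code):

```python
import numpy as np

def first_pc_scores(scores):
    """Scores on the first principal component of standardized test scores."""
    X = np.asarray(scores, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # z-score each test
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    return X @ Vt[0]                                   # projection onto the first component
```

With two such composites, one from the UK Biobank tests and one from the reference tests, the r = 0.83 above is simply their Pearson correlation.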

151 citations


Journal ArticleDOI
TL;DR: A cardinal consistency measurement that provides immediate feedback, called the input-based consistency measurement, is proposed, after which an ordinal consistency measurement is proposed to check the coherence of the order of the results (weights) against the order of the pairwise comparisons provided by the decision-maker.
Abstract: The Best-Worst Method (BWM) uses ratios of the relative importance of criteria in pairs based on the assessment done by decision-makers. When a decision-maker provides the pairwise comparisons in BWM, checking the acceptable inconsistency, to ensure the rationality of the assessments, is an important step. Although both the original and the extended versions of BWM have proposed several consistency measurements, there are some deficiencies, including: (i) the lack of a mechanism to provide immediate feedback to the decision-maker regarding the consistency of the pairwise comparisons being provided, (ii) the inability to take ordinal consistency into account, and (iii) the lack of consistency thresholds to determine the reliability of the results. To deal with these problems, this study starts by proposing a cardinal consistency measurement to provide immediate feedback, called the input-based consistency measurement, after which an ordinal consistency measurement is proposed to check the coherence of the order of the results (weights) against the order of the pairwise comparisons provided by the decision-maker. Finally, a method is proposed to balance the cardinal consistency ratio under ordinal-consistent and ordinal-inconsistent conditions, to determine the thresholds for the proposed and the original consistency ratios.
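The ordinal check described here asks whether the ranking of the derived weights agrees with the ranking implied by the decision-maker's best-to-others comparisons: a criterion judged closer to the best (smaller comparison value) should receive a larger weight. A hedged sketch of that idea only; the function and its inputs are illustrative, not the paper's exact formulation:

```python
from itertools import combinations

def ordinal_consistent(best_to_others, weights):
    """True if the weight order matches the best-to-others comparison order:
    a smaller comparison value (closer to the best) implies a larger weight."""
    for i, j in combinations(range(len(weights)), 2):
        if best_to_others[i] < best_to_others[j] and weights[i] < weights[j]:
            return False
        if best_to_others[i] > best_to_others[j] and weights[i] > weights[j]:
            return False
    return True
```

A violation of this check flags ordinal inconsistency even when the cardinal consistency ratio looks acceptable.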

149 citations


Journal ArticleDOI
TL;DR: A tool that enables researchers with and without thorough knowledge on measurement properties to assess the quality of a study on reliability and measurement error of outcome measurement instruments is developed.
Abstract: Scores on an outcome measurement instrument depend on the type and settings of the instrument used, how instructions are given to patients, how professionals administer and score the instrument, etc. The impact of all these sources of variation on scores can be assessed in studies on reliability and measurement error, if properly designed and analyzed. The aim of this study was to develop standards to assess the quality of studies on reliability and measurement error of clinician-reported outcome measurement instruments, performance-based outcome measurement instruments, and laboratory values. We conducted a 3-round Delphi study involving 52 panelists. Consensus was reached on how a comprehensive research question can be deduced from the design of a reliability study to determine how the results of a study inform us about the quality of the outcome measurement instrument at issue. Consensus was reached on components of outcome measurement instruments, i.e. the potential sources of variation. Next, we reached consensus on standards on design requirements (n = 5), standards on preferred statistical methods for reliability (n = 3) and measurement error (n = 2), and their ratings on a four-point scale. There was one term for a component and one rating of one standard on which no consensus was reached, which therefore required a decision by the steering committee. We developed a tool that enables researchers with and without thorough knowledge on measurement properties to assess the quality of a study on reliability and measurement error of outcome measurement instruments.

148 citations


Book
05 May 2020
TL;DR: Netflix engineers call this approach chaos engineering; they have determined several principles underlying it and have used it to run experiments that verify such systems' reliability.
Abstract: Modern software-based services are implemented as distributed systems with complex behavior and failure modes. Many large tech organizations are using experimentation to verify such systems' reliability. Netflix engineers call this approach chaos engineering. They've determined several principles underlying it and have used it to run experiments. This article is part of a theme issue on DevOps.

Journal ArticleDOI
TL;DR: The computed results in this paper conform more closely to statistical data, in that the error of the predicted failure rate in this study is 4.5%, compared with the 13% obtained by fault tree analysis.

Journal ArticleDOI
TL;DR: In this article, the impact of two phase shifting designs, namely coherent phase shifting and random discrete phase shifting, on the performance of intelligent reflecting surface assisted non-orthogonal multiple access (NOMA) is studied.
Abstract: In this letter, the impact of two phase shifting designs, namely coherent phase shifting and random discrete phase shifting, on the performance of intelligent reflecting surface (IRS) assisted non-orthogonal multiple access (NOMA) is studied. Analytical and simulation results are provided to show that the two designs achieve different tradeoffs between reliability and complexity. To further improve the reception reliability of the random phase shifting design, a low-complexity phase selection scheme is also proposed in this letter.
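The reliability gap between the two designs comes from how the reflected signal components combine: with coherent phase shifting the element contributions add in amplitude, while random phases add like a random walk. A toy simulation of that contrast (element count, channel model, and seed are illustrative assumptions, not the letter's system model):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                           # number of IRS reflecting elements (assumed)
h = rng.rayleigh(scale=1 / np.sqrt(2), size=N)   # element channel amplitudes
phases = rng.uniform(0.0, 2.0 * np.pi, size=N)   # random phase shifts

coherent = np.abs(np.sum(h))                         # phases perfectly compensated
random_ps = np.abs(np.sum(h * np.exp(1j * phases)))  # random phase shifting
```

The coherent sum grows roughly linearly in N, the random sum only as sqrt(N), which is the reliability/complexity trade-off noted above; the proposed phase selection scheme narrows this gap at low complexity.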

Journal ArticleDOI
TL;DR: Results showed that the tracking performance of the proposed method is improved, especially on fast-moving targets, background clutter, and motion blur, and the method is validated to play an important role in real industrial applications with edge computing, making it well suited to IIoT environments and the automotive industry.

Journal ArticleDOI
12 May 2020
TL;DR: This article reviews recent works applying machine learning techniques in the context of energy systems’ reliability assessment and control and argues that the methods, tools, etc. can be extended to other similar systems, such as distribution systems, microgrids, and multienergy systems.
Abstract: This article reviews recent works applying machine learning (ML) techniques in the context of energy systems’ reliability assessment and control. We showcase both the progress achieved to date as well as the important future directions for further research, while providing an adequate background in the fields of reliability management and of ML. The objective is to foster the synergy between these two fields and speed up the practical adoption of ML techniques for energy systems reliability management. We focus on bulk electric power systems and use them as an example, but we argue that the methods, tools, etc. can be extended to other similar systems, such as distribution systems, microgrids, and multienergy systems.

Journal ArticleDOI
TL;DR: This work tentatively supports the use of IMUs for joint angle measurement and other biomechanical outcomes such as stability, regularity, and segmental accelerations, and cautions against using spatiotemporal variability and symmetry metrics without strict protocol.
Abstract: Inertial measurement units (IMUs) offer the ability to measure walking gait through a variety of biomechanical outcomes (e.g., spatiotemporal, kinematics, other). Although many studies have assessed their validity and reliability, there remains no quantitative summary of this vast body of literature. Therefore, we aimed to conduct a systematic review and meta-analysis to determine the i) concurrent validity and ii) test-retest reliability of IMUs for measuring biomechanical gait outcomes during level walking in healthy adults. Five electronic databases were searched for journal articles assessing the validity or reliability of IMUs during healthy adult walking. Two reviewers screened titles, abstracts, and full texts for studies to be included, before two reviewers examined the methodological quality of all included studies. When sufficient data were present for a given biomechanical outcome, data were meta-analyzed on Pearson correlation coefficients (r) or intraclass correlation coefficients (ICC) for validity and reliability, respectively. Alternatively, qualitative summaries of outcomes were conducted on those that could not be meta-analyzed. A total of 82 articles, assessing the validity or reliability of over 100 outcomes, were included in this review. Seventeen biomechanical outcomes, primarily spatiotemporal parameters, were meta-analyzed. The validity and reliability of step and stride times were found to be excellent. Similarly, the validity and reliability of step and stride length, as well as swing and stance time, were found to be good to excellent. Alternatively, spatiotemporal parameter variability and symmetry displayed poor to moderate validity and reliability. IMUs were also found to display moderate reliability for the assessment of local dynamic stability during walking. The remaining biomechanical outcomes were qualitatively summarized to provide a variety of recommendations for future IMU research.
The findings of this review demonstrate the excellent validity and reliability of IMUs for mean spatiotemporal parameters during walking, but caution against using spatiotemporal variability and symmetry metrics without strict protocol. Further, this work tentatively supports the use of IMUs for joint angle measurement and other biomechanical outcomes such as stability, regularity, and segmental accelerations. Unfortunately, the strength of these recommendations is limited by the lack of high-quality studies for each outcome, with underpowered and/or unjustified sample sizes (sample size median 12; range: 2-95) being the primary limitation.
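When correlation coefficients are meta-analyzed as here, a common approach is to pool them on Fisher's z scale with weights n - 3 and transform back. A minimal sketch under that assumption (not necessarily the exact model this review used):

```python
import math

def pooled_r(rs, ns):
    """Fixed-effect pooled Pearson r via Fisher z (weight n_i - 3 per study)."""
    zs = [math.atanh(r) for r in rs]   # Fisher z-transform of each correlation
    ws = [n - 3 for n in ns]           # approximate inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)            # back-transform to the r scale
```

The pooled estimate always lies between the smallest and largest study-level correlations, pulled toward the larger studies.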

Posted Content
TL;DR: It is argued that ML is capable of providing novel insights and opportunities to solve important challenges in reliability and safety applications and is also capable of teasing out more accurate insights from accident datasets than with traditional analysis tools, and this can lead to better informed decision-making and more effective accident prevention.
Abstract: Machine learning (ML) pervades an increasing number of academic disciplines and industries. Its impact is profound, and several fields have been fundamentally altered by it, autonomy and computer vision for example; reliability engineering and safety will undoubtedly follow suit. There is already a large but fragmented literature on ML for reliability and safety applications, and it can be overwhelming to navigate and integrate into a coherent whole. In this work, we facilitate this task by providing a synthesis of, and a roadmap to this ever-expanding analytical landscape and highlighting its major landmarks and pathways. We first provide an overview of the different ML categories and sub-categories or tasks, and we note several of the corresponding models and algorithms. We then look back and review the use of ML in reliability and safety applications. We examine several publications in each category/sub-category, and we include a short discussion on the use of Deep Learning to highlight its growing popularity and distinctive advantages. Finally, we look ahead and outline several promising future opportunities for leveraging ML in service of advancing reliability and safety considerations. Overall, we argue that ML is capable of providing novel insights and opportunities to solve important challenges in reliability and safety applications. It is also capable of teasing out more accurate insights from accident datasets than with traditional analysis tools, and this in turn can lead to better informed decision-making and more effective accident prevention.

Journal ArticleDOI
TL;DR: The issues of trustworthiness in qualitative leisure research, often demonstrated through particular techniques of reliability and/or validity, are often either nonexistent, unsubstantial, or unexplained.
Abstract: Issues of trustworthiness in qualitative leisure research, often demonstrated through particular techniques of reliability and/or validity, are often either nonexistent, unsubstantial, or unexplained.

Journal ArticleDOI
04 Jun 2020-BMJ
TL;DR: Researchers, clinicians, and healthcare policy decision makers can consider using this instrument to evaluate the design, conduct, and analysis of studies estimating anchor based minimal important differences.
Abstract: Objective To develop an instrument to evaluate the credibility of anchor based minimal important differences (MIDs) for outcome measures reported by patients, and to assess the reliability of the instrument. Design Instrument development and reliability study. Data sources Initial criteria were developed for evaluating the credibility of anchor based MIDs based on a literature review (Medline, Embase, CINAHL, and PsycInfo databases) and the experience of the authors in the methodology for estimation of MIDs. Iterative discussions by the team and pilot testing with experts and potential users facilitated the development of the final instrument. Participants With the newly developed instrument, pairs of masters, doctoral, or postdoctoral students with a background in health research methodology independently evaluated the credibility of a sample of MID estimates. Main outcome measures Core credibility criteria applicable to all anchor types, additional criteria for transition rating anchors, and inter-rater reliability coefficients were determined. Results The credibility instrument has five core criteria: the anchor is rated by the patient; the anchor is interpretable and relevant to the patient; the MID estimate is precise; the correlation between the anchor and the outcome measure reported by the patient is satisfactory; and the authors select a threshold on the anchor that reflects a small but important difference. The additional criteria for transition rating anchors are: the time elapsed between baseline and follow-up measurement for estimation of the MID is optimal; and the correlations of the transition rating with the baseline, follow-up, and change score in the patient reported outcome measures are satisfactory. Inter-rater reliability coefficients (ĸ) for the core criteria and for one item from the additional criteria ranged from 0.70 to 0.94. 
Reporting issues prevented the evaluation of the reliability of the three other additional criteria for the transition rating anchors. Conclusions Researchers, clinicians, and healthcare policy decision makers can consider using this instrument to evaluate the design, conduct, and analysis of studies estimating anchor based minimal important differences.
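The inter-rater reliability coefficients (κ) reported above are chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e the agreement expected from the raters' marginal frequencies. A minimal sketch of Cohen's kappa for two raters (the function name and toy labels are illustrative):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n            # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))  # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

Values of 0.70 to 0.94, as reported for the core criteria, indicate substantial to almost perfect agreement on common benchmarks.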

Journal ArticleDOI
TL;DR: This article first identifies reliability challenges posed by specific enabling technologies of each layer of the layered IoT architecture, and presents a systematic synthesis and review of IoT reliability-related literature.
Abstract: The Internet of Things (IoT) aims to transform the human society toward becoming intelligent, convenient, and efficient with potentially enormous economic and environmental benefits. Reliability is one of the main challenges that must be addressed to enable this revolutionized transformation. Based on the layered IoT architecture, this article first identifies reliability challenges posed by specific enabling technologies of each layer. This article then presents a systematic synthesis and review of IoT reliability-related literature. Reliability models and solutions at four layers (perception, communication, support, and application) are reflected and classified. Despite the rich body of works performed, the IoT reliability research is still in its early stage. Challenging research problems and opportunities are then discussed in relation to current underexplored behaviors and future new aspects of evolving IoT system complexity and dynamics.

Journal ArticleDOI
TL;DR: This paper presents an adaptive Kriging-oriented importance sampling (AKOIS) approach, which is able to adaptively cover all branches of the investigated limit-state surface for structural system reliability analysis.

Journal ArticleDOI
TL;DR: This work identifies a pervasive, yet previously undocumented threat to the reliability of MTurk data—and discusses how this issue is symptomatic of opportunities and incentives that facilitate fraud.
Abstract: We identify a pervasive, yet previously undocumented threat to the reliability of MTurk data—and discuss how this issue is symptomatic of opportunities and incentives that facilitate fraud.

Journal ArticleDOI
14 Feb 2020
TL;DR: It can be seen that reliability assessment of modern power systems also requires introducing local reliability concepts as well as incorporating different electro-magnetic/mechanical stability issues.
Abstract: Renewable energy resources are becoming the dominating element in power systems. Along with de-carbonization, they transform power systems into more distributed, autonomous, bottom-up systems. We speak of Smart Grid and Microgrid when distributed energy resources take over. While they are a means to improve technical and financial efficiency, planning, operations, and carbon footprint, these new technologies also introduce new challenges. Reliability is one of them, deserving a new way of describing and assessing system and component reliability. This paper introduces a new reliability framework that covers these new elements in modern power systems. It can be seen that reliability assessment of modern power systems also requires introducing local reliability concepts as well as incorporating different electro-magnetic/mechanical stability issues.

Journal ArticleDOI
20 Mar 2020
TL;DR: The Novel Coronavirus Disease (COVID-19), which has caused a global pandemic, has led to millions of people becoming infected and hundreds of thousands dying.
Abstract: The Novel Coronavirus Disease (COVID-19), which has caused a global pandemic, has led to millions of people becoming infected and hundreds of thousands dying. COVID-19 has created serious anxiety in a large part of the population and in the close circles of confirmed cases. Although a large number of studies on the COVID-19 pandemic have been conducted, few have considered the anxiety created by the pandemic. This study aimed to carry out Turkish validity and reliability analyses of the Coronavirus Anxiety Scale (CAS), a brief mental health screener developed by Sherman A. Lee (2020) to identify probable cases of dysfunctional anxiety associated with the COVID-19 crisis. The study reached 467 adults. The five-item short scale showed optimal reliability and validity. Exploratory and confirmatory factor analyses were performed for the validity studies, and internal consistency analyses for the reliability studies. According to the results, the validated CAS shows the same properties as the original unidimensional five-item scale. This study is thought to contribute a valid and reliable CAS to the literature and to serve as a reference for future research on measuring coronavirus-related anxiety and improving public mental health.

Journal ArticleDOI
TL;DR: The results demonstrate that the proposed PEM has a higher accuracy and efficiency to assess the positioning accuracy reliability of industrial robots.
Abstract: The uncertain variables of the link dimensions and joint clearances, whose deviation is caused by manufacturing and assembling errors, have a considerable influence on the positioning accuracy of industrial robots. Understanding how these uncertain variables affect the positioning accuracy of industrial robots is very important to select appropriate parameters during the design process. In this paper, the positioning accuracy reliability of industrial robots is analyzed considering the influence of uncertain variables. First, the kinematic models of industrial robots are established based on the Denavit–Hartenberg method, in which the link lengths and joint rotation angles are treated as uncertain variables. Second, the Sobol’ method is used to analyze the sensitivity of uncertain variables for the positioning accuracy of industrial robots, by which the sensitive variables are determined to perform the reliability analysis. Finally, in view of the sensitive variables, the first four moments and probability density function of the manipulator's positioning point are assessed by the point estimation method (PEM) in three examples. The Monte Carlo simulation method, the maximum entropy method with fractional order moments (ME-FM), and the experimental method are also performed as comparative methods. All the results demonstrate that the proposed PEM has a higher accuracy and efficiency to assess the positioning accuracy reliability of industrial robots.
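The Monte Carlo baseline used for comparison propagates the uncertain link lengths and joint angles through the kinematics and reads the moments off the samples. A toy version for a planar two-link arm (geometry, tolerances, and seed are illustrative assumptions, not the paper's robot):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# nominal parameters with small manufacturing/assembly deviations (assumed)
l1 = rng.normal(0.5, 0.001, n)        # link 1 length (m)
l2 = rng.normal(0.4, 0.001, n)        # link 2 length (m)
t1 = rng.normal(np.pi / 6, 0.002, n)  # joint 1 angle (rad)
t2 = rng.normal(np.pi / 4, 0.002, n)  # joint 2 angle (rad)

# forward kinematics: x-coordinate of the tool tip
x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2)

mean_x, std_x = x.mean(), x.std(ddof=1)  # first two moments of the tip position
```

The PEM in the paper targets the same first four moments with far fewer kinematic-model evaluations than this sampling approach.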

Journal ArticleDOI
TL;DR: A survey of techniques for studying and optimizing the reliability of DNN accelerators and architectures is presented, underscoring the importance of designing for reliability as a first principle.

Journal ArticleDOI
11 Oct 2020
TL;DR: This paper proposes a novel subjective weighting method called the Fuzzy Full Consistency Method (FUCOM-F) for determining weights as accurately as possible under fuzziness and obtains the most accurate weight values with very few pairwise comparisons.
Abstract: Values, opinions, perceptions, and experiences are the forces that drive almost each and every kind of decision-making. Evaluation criteria are considered as sources of information used to compare alternatives and, as a result, make selection easier. Given their direct effect on the solution, weighting methods that most accurately determine criteria weights are needed. Unfortunately, crisp values are insufficient to model real-life problems due to the lack of complete information and the vagueness arising from linguistic assessments of decision-makers. Therefore, this paper proposes a novel subjective weighting method called the Fuzzy Full Consistency Method (FUCOM-F) for determining weights as accurately as possible under fuzziness. The most prominent feature of the proposed method is obtaining the most accurate weight values with very few pairwise comparisons. Consequently, thanks to this model, consistency and reliability of the results increase while the processing time and effort decrease. Moreover, an illustrative example related to the green supplier evaluation problem is performed. Finally, the robustness and effectiveness of the proposed fuzzy model is demonstrated by comparing it with fuzzy best-worst method (F-BWM) and fuzzy AHP (F-AHP) models.

Journal ArticleDOI
TL;DR: A novel linear programming model is proposed that precisely assesses reliability, considers post-fault network reconfiguration strategies under operational constraints, and is suitable for inclusion in reliability-constrained operational and planning optimization models for power distribution systems.
Abstract: Analytical methods for evaluating the reliability of simple and radial distribution networks have been well established. Since these analytical methods cannot consider post-fault load transfer between feeders, the reliability indices are significantly underestimated for mesh-constructed distribution networks. To accommodate various application scenarios, Monte Carlo simulations are widely used for complex distribution networks, but they impose a heavy computational burden. In this paper, we propose a novel linear programming model that precisely assesses reliability and considers post-fault network reconfiguration strategies under operational constraints. Moreover, the model can also capture the influence of demand variations, uncertainty of distributed generation, and protection failures on the reliability indices. Numerical simulations show that the proposed model yields the same results as the simulation-based algorithm. Specifically, the system average interruption duration indices are reduced when considering post-fault network reconfiguration strategies in all tested systems. Moreover, the proposed model is suitable for inclusion in reliability-constrained operational and planning optimization models for power distribution systems.
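The reliability indices the abstract refers to are standard system-level averages; as a minimal sketch (assumed data layout, not the paper's LP formulation), the two most common ones can be computed from a log of sustained interruptions as follows:

```python
def saifi_saidi(interruptions, total_customers):
    """Standard distribution-reliability indices from outage records.

    Each interruption is a (customers_affected, duration_hours) pair.
    SAIFI = total customer interruptions / total customers served
    SAIDI = total customer-hours interrupted / total customers served
    """
    saifi = sum(n for n, _ in interruptions) / total_customers
    saidi = sum(n * d for n, d in interruptions) / total_customers
    return saifi, saidi
```

For a 1000-customer feeder with two events, `saifi_saidi([(100, 2.0), (50, 0.5)], 1000)` gives SAIFI = 0.15 interruptions/customer and SAIDI = 0.225 hours/customer. Post-fault reconfiguration shortens the effective duration of each event for transferable loads, which is why the paper finds SAIDI reduced when reconfiguration is modeled.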

Journal ArticleDOI
TL;DR: The proposed comprehensive failure function over the useful lifetime and wear-out phase can be used for optimal design and manufacturing by identifying failure-prone components and predicting end of life, and can be used for optimal decision-making in the design, planning, operation, and maintenance of modern power-electronic-based power systems.
Abstract: Reliability prediction in power electronic converters is of paramount importance for converter manufacturers and operators. Conventional approaches employ generic data provided in handbooks to predict the probability of random chance failures within the useful lifetime. However, wear-out failures affect the long-term performance of the converters. Therefore, this article proposes a comprehensive approach for estimating converter reliability within both the useful lifetime and the wear-out period. Moreover, this article proposes a wear-out failure prediction approach based on a structural reliability concept. The proposed approach can quickly predict the converter wear-out behavior, unlike conventional Monte Carlo-based techniques. Hence, it facilitates reliability modeling and evaluation in large-scale power electronic-based power systems with a huge number of components. The proposed comprehensive failure function over the useful lifetime and wear-out phase can be used for optimal design and manufacturing by identifying failure-prone components and predicting end of life. Moreover, the proposed reliability model can be used for optimal decision-making in the design, planning, operation, and maintenance of modern power electronic-based power systems. The proposed methodology is exemplified for a photovoltaic inverter by predicting its failure characteristics.
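A common way to express a failure function that spans both phases, sketched here with assumed, illustrative parameter values rather than the article's derived ones, is to combine a constant random failure rate (exponential, useful life) with a Weibull wear-out mechanism in series, so the converter survives only if neither mechanism has failed:

```python
import math

def unreliability(t_hours, lam=2e-6, eta=150000.0, beta=4.0):
    """Comprehensive failure function F(t) over useful life and wear-out.

    lam  : constant random failure rate [1/h]   (assumed handbook-style value)
    eta  : Weibull scale, characteristic life [h] (assumed)
    beta : Weibull shape, >1 indicates wear-out   (assumed)
    """
    r_random = math.exp(-lam * t_hours)             # useful-life survival
    r_wearout = math.exp(-(t_hours / eta) ** beta)  # wear-out survival
    return 1.0 - r_random * r_wearout               # series combination
```

Early on, F(t) grows almost linearly at rate `lam` (the handbook regime); as t approaches `eta`, the Weibull term dominates and F(t) rises steeply, which is the wear-out knee used for end-of-life prediction.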

Proceedings ArticleDOI
25 Oct 2020
TL;DR: The ITU-T Recommendation P.808 provides a crowdsourcing approach for conducting a subjective assessment of speech quality using the Absolute Category Rating (ACR) method; this work provides an open-source implementation that runs on the Amazon Mechanical Turk platform and extends it to include the Degradation Category Rating (DCR) and Comparison Category Rating (CCR) test methods.
Abstract: The ITU-T Recommendation P.808 provides a crowdsourcing approach for conducting a subjective assessment of speech quality using the Absolute Category Rating (ACR) method. We provide an open-source implementation of the ITU-T Rec. P.808 that runs on the Amazon Mechanical Turk platform. We extended our implementation to include the Degradation Category Rating (DCR) and Comparison Category Rating (CCR) test methods. We also significantly speed up the test process by integrating the participant qualification step into the main rating task, rather than using a two-stage qualification-and-rating solution. We provide program scripts for creating and executing the subjective test, and for cleansing and analyzing the answers to avoid operational errors. To validate the implementation, we compare the Mean Opinion Scores (MOS) collected through our implementation with MOS values from a standard laboratory experiment conducted based on the ITU-T Rec. P.800. We also evaluate the reproducibility of the results of subjective speech quality assessment through crowdsourcing using our implementation. Finally, we quantify the impact of the parts of the system designed to improve reliability: environmental tests, gold and trapping questions, rating patterns, and a headset usage test.
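The MOS values being compared are simple per-condition averages of the ACR votes; a minimal sketch (a normal-approximation confidence interval, one common reporting convention rather than necessarily the paper's exact analysis script) looks like this:

```python
import math

def mos_with_ci(ratings):
    """Mean Opinion Score and an approximate 95% confidence interval
    from ACR ratings (integers on the 1..5 opinion scale)."""
    n = len(ratings)
    mos = sum(ratings) / n
    var = sum((r - mos) ** 2 for r in ratings) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)   # normal-approximation half-width
    return mos, (mos - half, mos + half)
```

Overlapping intervals between a crowdsourced run and the P.800 laboratory baseline are the usual evidence for the validity claim made above; the gold and trapping questions mentioned in the abstract serve to exclude inattentive raters before this aggregation step.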