
Showing papers on "Reliability (statistics) published in 2016"


Journal ArticleDOI
TL;DR: A practical guideline for clinical researchers to choose the correct form of ICC is provided and the best practice of reporting ICC parameters in scientific publications is suggested.

12,717 citations
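For readers weighing the different ICC forms the guideline distinguishes, the arithmetic is small enough to sketch. The following is an illustrative Python sketch, not code from the paper: it computes ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form, from its standard ANOVA mean squares. The function name and example data are mine.

```python
import numpy as np

def icc2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    X is an (n subjects) x (k raters) score matrix."""
    n, k = X.shape
    grand = X.mean()
    row_m = X.mean(axis=1)   # per-subject means
    col_m = X.mean(axis=0)   # per-rater means
    ssr = k * np.sum((row_m - grand) ** 2)   # between-subject sum of squares
    ssc = n * np.sum((col_m - grand) ** 2)   # between-rater sum of squares
    sse = np.sum((X - row_m[:, None] - col_m[None, :] + grand) ** 2)
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Classic six-subjects-by-four-raters example from Shrout and Fleiss (1979):
ratings = np.array([[9, 2, 5, 8],
                    [6, 1, 3, 2],
                    [8, 4, 6, 8],
                    [7, 1, 2, 6],
                    [10, 5, 6, 9],
                    [6, 2, 4, 7]], dtype=float)
icc = icc2_1(ratings)   # about 0.29 for this data
```

On this classic dataset the value is about 0.29, while perfectly agreeing raters (identical columns) give 1.0; other ICC forms swap in different mean-square combinations.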


Journal ArticleDOI
TL;DR: Using Monte Carlo simulation, the performance of Cronbach's alpha and alternative reliability coefficients under a one-dimensional model is evaluated with respect to skewness and violations of tau-equivalence; the results show that the omega coefficient is always a better choice than alpha, and that with skewed items omega and glb are preferable even in small samples.
Abstract: Cronbach's alpha is the most widely used method for estimating internal consistency reliability. The procedure has proved very resistant to the passage of time, even though its limitations are well documented and better options exist, such as the omega coefficient or the different versions of the glb, with clear advantages especially for applied research in which the items differ in quality or have skewed distributions. In this paper, Monte Carlo simulation is used to evaluate the performance of these reliability coefficients under a one-dimensional model with respect to skewness and violations of tau-equivalence. The results show that the omega coefficient is always a better choice than alpha and that, in the presence of skewed items, the omega and glb coefficients are preferable even in small samples.

473 citations
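The quantities being compared are easy to state concretely. A hedged sketch (helper names and data are mine, not the paper's): alpha computed from observed item scores, and omega-total computed from a one-factor solution, whose loadings and uniquenesses would in practice come from a fitted factor model.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n respondents) x (k items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

def mcdonald_omega(loadings, uniquenesses):
    """Omega-total from a one-factor model:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    s = float(np.sum(loadings))
    return s * s / (s * s + float(np.sum(uniquenesses)))

# Illustrative 5 respondents x 3 items matrix (made-up data):
scores = np.array([[3, 4, 3],
                   [2, 2, 3],
                   [4, 5, 4],
                   [1, 2, 1],
                   [5, 4, 5]], dtype=float)
alpha = cronbach_alpha(scores)
omega = mcdonald_omega([0.7, 0.7, 0.7], [0.51, 0.51, 0.51])
```

Under tau-equivalence the two coincide; when loadings differ or items are skewed, alpha tends to understate reliability, which is the gap the simulation quantifies.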


BookDOI
25 Nov 2016
TL;DR: A textbook treatment of reliability engineering, moving from a review of basic probability and statistics through component and system reliability modeling, repairable components and systems, selected topics in reliability data analysis, and risk analysis.
Abstract: Contents:
Chapter 1 Reliability Engineering in Perspective: 1.1 Why Study Reliability? 1.2 Failure Models 1.3 Failure Mechanisms 1.4 Performance Measures 1.5 Formal Definition of Reliability 1.6 Definition of Availability 1.7 Definition of Risk
Chapter 2 Basic Reliability Mathematics: Review of Probability and Statistics: 2.1 Introduction 2.2 Elements of Probability 2.3 Probability Distributions 2.4 Basic Characteristics of Random Variables 2.5 Estimation and Hypothesis Testing 2.6 Frequency Tables and Histograms 2.7 Goodness-of-Fit Tests 2.8 Regression Analysis
Chapter 3 Elements of Component Reliability: 3.1 Concept of Reliability 3.2 Common Distributions in Component Reliability 3.3 Component Reliability Model Selection 3.4 Maximum Likelihood Estimation of Reliability Distribution Parameters 3.5 Classical Nonparametric Distribution Estimation 3.6 Bayesian Estimation Procedures 3.7 Methods of Generic Failure Rate Determination
Chapter 4 System Reliability Analysis: 4.1 Reliability Block Diagram Method 4.2 Fault Tree and Success Tree Methods 4.3 Event Tree Method 4.4 Master Logic Diagram 4.5 Failure Mode and Effect Analysis
Chapter 5 Reliability and Availability of Repairable Components and Systems: 5.1 Repairable System Reliability 5.2 Availability of Repairable Systems 5.3 Use of Markov Processes for Determining System Availability 5.4 Use of System Analysis Techniques in the Availability Calculations of Complex Systems
Chapter 6 Selected Topics in Reliability Modeling: 6.1 Probabilistic Physics-of-Failure Reliability Modeling 6.2 Software Reliability Analysis 6.3 Human Reliability 6.4 Measures of Importance 6.5 Reliability-Centered Maintenance 6.6 Reliability Growth
Chapter 7 Selected Topics in Reliability Data Analysis: 7.1 Accelerated Life Testing 7.2 Analysis of Dependent Failures 7.3 Uncertainty Analysis 7.4 Use of Expert Opinion for Estimating Reliability Parameters 7.5 Probabilistic Failure Analysis
Chapter 8 Risk Analysis: 8.1 Determination of Risk Values 8.2 Formalization of Quantitative Risk Assessment 8.3 Probabilistic Risk Assessment 8.4 Compressed Natural Gas Powered Buses: A PRA Case Study 8.5 A Simple Fire Protection Risk Analysis 8.6 Precursor Analysis
Appendices. Index.

416 citations
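Chapter 4's reliability block diagram method reduces, for independent components, to simple products. A minimal sketch (the component reliabilities below are made-up numbers for illustration):

```python
import numpy as np

def series_reliability(rs):
    """Series RBD: the system works only if every component works."""
    return float(np.prod(np.asarray(rs, dtype=float)))

def parallel_reliability(rs):
    """Parallel RBD: the system works if at least one component works."""
    rs = np.asarray(rs, dtype=float)
    return float(1.0 - np.prod(1.0 - rs))

# Two redundant pumps (0.9 each) in series with a single valve (0.95):
system = series_reliability([parallel_reliability([0.9, 0.9]), 0.95])
# 0.99 * 0.95 = 0.9405
```

Larger diagrams are evaluated by repeatedly collapsing series and parallel groups this way; fault trees and event trees (sections 4.2 and 4.3) handle the structures that do not decompose so cleanly.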


01 Jan 2016
TL;DR: Reliability measurement of composite variables has attracted considerable interest among sociologists in recent years, although little attention has been paid to techniques of reliability assessment and improvement within the sociological methodology literature itself.
Abstract: Reliability measurement of composite variables has attracted a considerable amount of interest among sociologists in the last several years. Although sociologists have always lamented the problem of reliability in sociological measurement, until recently little attention has been paid to techniques of reliability assessment and improvement within the sociological methodology literature itself. While much of the recent ...

392 citations


Journal ArticleDOI
15 Nov 2016-Energy
TL;DR: In this paper, a hybrid wind-solar generation microgrid system with hydrogen energy storage is designed for a 20-year period of operation using novel multi-objective optimization algorithm to minimize the three objective functions namely annualized cost of the system, loss of load expected and loss of energy expected.

265 citations


Journal ArticleDOI
TL;DR: Results of three numerical examples show that the proposed single-loop Kriging (SILK) surrogate modeling method significantly increases the efficiency of time-dependent reliability analysis without sacrificing the accuracy.
Abstract: Current surrogate modeling methods for time-dependent reliability analysis implement a double-loop procedure, with the computation of the extreme value response in the outer loop and optimization in the inner loop. The computational effort of the double-loop procedure is quite high even though improvements have been made to the efficiency of the inner loop. This paper proposes a single-loop Kriging (SILK) surrogate modeling method for time-dependent reliability analysis. The optimization loop used in current methods is completely removed in the proposed method. A single surrogate model is built for the purpose of time-dependent reliability assessment. Training points of random variables and of time are generated at the same level instead of at two separate levels. The surrogate model is refined adaptively based on a learning function modified from time-independent reliability analysis and a newly developed convergence criterion. Strategies for building the surrogate model are investigated for problems with and without stochastic processes. Results of three numerical examples show that the proposed single-loop procedure significantly increases the efficiency of time-dependent reliability analysis without sacrificing accuracy. [DOI: 10.1115/1.4033428]

214 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce the channel and spatial reliability concepts to DCF tracking and provide a novel learning algorithm for its efficient and seamless integration in the filter update and the tracking process.
Abstract: Short-term tracking is an open and challenging problem for which discriminative correlation filters (DCF) have shown excellent performance. We introduce the channel and spatial reliability concepts to DCF tracking and provide a novel learning algorithm for their efficient and seamless integration in the filter update and the tracking process. The spatial reliability map adjusts the filter support to the part of the object suitable for tracking. This both allows the search region to be enlarged and improves tracking of non-rectangular objects. Reliability scores reflect channel-wise quality of the learned filters and are used as feature weighting coefficients in localization. Experimentally, with only two simple standard features, HoGs and Colornames, the novel CSR-DCF method -- DCF with Channel and Spatial Reliability -- achieves state-of-the-art results on VOT 2016, VOT 2015 and OTB100. The CSR-DCF runs in real-time on a CPU.

203 citations


Journal ArticleDOI
TL;DR: In this article, the authors reviewed previous studies on developments and applications of response surface methods (RSMs) in different slope reliability problems and provided some suggestions for selecting relatively appropriate RSMs in slope reliability analysis.

181 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic naming convention, formula-generating methods, and methods of representing each of the reliability coefficients, and an easy-to-use solution to the issue of choosing between coefficient alpha and composite reliability.
Abstract: The current conventions for test score reliability coefficients are unsystematic and chaotic. Reliability coefficients have long been denoted using names that are unrelated to each other, with each formula being generated through different methods, and they have been represented inconsistently. Such inconsistency prevents organizational researchers from understanding the whole picture and misleads them into using coefficient alpha unconditionally. This study provides a systematic naming convention, formula-generating methods, and methods of representing each of the reliability coefficients. This study offers an easy-to-use solution to the issue of choosing between coefficient alpha and composite reliability. This study introduces a calculator that enables its users to obtain the values of various multidimensional reliability coefficients with a few mouse clicks. This study also presents illustrative numerical examples to provide a better understanding of the characteristics and computations of reliability...

174 citations


01 Jan 2016

173 citations


Journal ArticleDOI
TL;DR: A static quantitative reliability analysis is presented that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering.
Abstract: Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and masking of soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. We present Rely, a programming language that enables developers to reason about the quantitative reliability of an application, namely the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification that characterizes the reliability of the underlying hardware components, and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented in Rely.

Journal ArticleDOI
TL;DR: In this article, the authors explored the advantage of moving least squares method (MLSM) over LSM to reduce the number of iterations required to obtain the updated centre point of design of experiment (DOE) to construct the final response surface for efficient reliability analysis of structures.

Journal ArticleDOI
TL;DR: A Global Sensitivity Analysis enhanced Surrogate (GSAS) modeling method for reliability analysis is proposed and the results show that the efficiency and accuracy of the proposed method are better than those of EGRA and AK-MCS.
Abstract: An essential issue in surrogate model-based reliability analysis is the selection of training points. Approaches such as efficient global reliability analysis (EGRA) and adaptive Kriging Monte Carlo simulation (AK-MCS) methods have been developed to adaptively select training points that are close to the limit state. Both the learning functions and convergence criteria of selecting training points in EGRA and AK-MCS are defined from the perspective of individual responses at Monte Carlo samples. This causes two problems: (1) some extra training points are selected after the reliability estimate already satisfies the accuracy target; and (2) the selected training points may not be the optimal ones for reliability analysis. This paper proposes a Global Sensitivity Analysis enhanced Surrogate (GSAS) modeling method for reliability analysis. Both the convergence criterion and strategy of selecting new training points are defined from the perspective of reliability estimate instead of individual responses of MCS samples. The new training points are identified according to their contribution to the uncertainty in the reliability estimate based on global sensitivity analysis. The selection of new training points stops when the accuracy of the reliability estimate reaches a specific target. Five examples are used to assess the accuracy and efficiency of the proposed method. The results show that the efficiency and accuracy of the proposed method are better than those of EGRA and AK-MCS.
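The AK-MCS baseline that GSAS is compared against is compact enough to sketch. The following is an illustrative reconstruction under stated assumptions (a one-dimensional toy limit state, a numpy-only Kriging surrogate, the U learning function, and the usual U > 2 stopping rule), not the paper's GSAS method:

```python
import numpy as np

def rbf(x1, x2, ls=1.0):
    """Squared-exponential kernel for 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_tr, y_tr, x_te, ls=1.0, noise=1e-6):
    """Kriging (GP regression) posterior mean and standard deviation."""
    K = rbf(x_tr, x_tr, ls) + noise * np.eye(len(x_tr))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    Ks = rbf(x_te, x_tr, ls)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.maximum(1.0 - np.sum(v ** 2, axis=0), 1e-18)
    return mu, np.sqrt(var)

g = lambda x: 1.5 - x                      # toy limit state: failure when g(x) < 0
rng = np.random.default_rng(0)
cand = rng.normal(size=2000)               # Monte Carlo population of the random input
x_tr = np.array([-2.0, 0.0, 2.0])          # initial design of experiments
y_tr = g(x_tr)

for _ in range(10):                        # adaptive refinement loop
    mu, sd = gp_predict(x_tr, y_tr, cand)
    U = np.abs(mu) / np.maximum(sd, 1e-9)  # AK-MCS learning function
    i = int(np.argmin(U))
    if U[i] > 2.0:                         # per-sample stopping criterion
        break
    x_tr = np.append(x_tr, cand[i])        # evaluate the most ambiguous sample
    y_tr = np.append(y_tr, g(cand[i]))

pf = float(np.mean(mu < 0))                # failure probability from the surrogate
```

The stopping test here is exactly the per-sample perspective the paper criticizes: it can keep adding points after the failure-probability estimate itself has converged, which is what GSAS's estimate-level criterion avoids.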

Journal ArticleDOI
TL;DR: A practical method is introduced for selecting a suitable maintenance strategy for system elements, and two methods, cost optimization and the fuzzy Analytic Hierarchy Process (AHP), are offered for planning the preventive maintenance budget.
Abstract: In power distribution systems, with their great vastness and various outage causes, one of the most important problems of power distribution companies is to select a suitable maintenance strategy for system elements and a method of financial planning for their maintenance, with the two objectives of decreasing outage costs and improving system reliability. In this article, a practical method is introduced for the selection of a suitable maintenance strategy for system elements; moreover, to plan the preventive maintenance budget for the system elements, two methods are offered: the cost optimization method and the fuzzy Analytic Hierarchy Process (AHP) method. In the former method, a new model of system maintenance cost is offered; in this model, based on system outage information, the elements' maintenance costs are determined as functions of system reliability indices and the preventive maintenance budget. In the latter method, a new guideline is introduced for considering the cost and reliability criteria in preventive maintenance budget planning; the preventive maintenance budget for the elements is determined based on the relative priority of elements under reliability criteria. © 2015 Wiley Periodicals, Inc. Complexity 21: 70–88, 2016

Journal ArticleDOI
15 Sep 2016-Sensors
TL;DR: The main advantages of the proposed method are that it provides a more robust measure of reliability to the sensor data, and the complementary information of multi-sensors reduces the uncertainty of the fault recognition, thus enhancing the reliability of fault detection.
Abstract: Sensor data fusion technology is widely employed in fault diagnosis. The information in a sensor data fusion system is characterized by not only fuzziness, but also partial reliability. Uncertain information of sensors, including randomness, fuzziness, etc., has been extensively studied recently. However, the reliability of a sensor is often overlooked or cannot be analyzed adequately. A Z-number, Z = (A, B), can represent the fuzziness and the reliability of information simultaneously, where the first component A represents a fuzzy restriction on the values of uncertain variables and the second component B is a measure of the reliability of A. In order to model and process the uncertainties in a sensor data fusion system reasonably, in this paper, a novel method combining the Z-number and Dempster–Shafer (D-S) evidence theory is proposed, where the Z-number is used to model the fuzziness and reliability of the sensor data and the D-S evidence theory is used to fuse the uncertain information of Z-numbers. The main advantages of the proposed method are that it provides a more robust measure of reliability to the sensor data, and the complementary information of multi-sensors reduces the uncertainty of the fault recognition, thus enhancing the reliability of fault detection.
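Dempster's rule of combination, the D-S fusion step the paper builds on, is compact enough to sketch. The sensor mass assignments below are invented for illustration (F = fault, N = normal); the paper's actual method additionally derives these masses from Z-numbers.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)
    with Dempster's rule, renormalizing away the conflicting mass."""
    fused, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                fused[C] = fused.get(C, 0.0) + a * b
            else:
                conflict += a * b          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {A: v / (1.0 - conflict) for A, v in fused.items()}

# Two sensors, each splitting mass between "fault" {F} and ignorance {F, N}:
s1 = {frozenset({"F"}): 0.8, frozenset({"F", "N"}): 0.2}
s2 = {frozenset({"F"}): 0.6, frozenset({"F", "N"}): 0.4}
fused = dempster_combine(s1, s2)           # mass on {F} rises to 0.92
```

Combining agreeing sources concentrates mass on the shared hypothesis; discounting a source's masses before combination (e.g., by the reliability component B of its Z-number) is how an unreliable sensor's influence is reduced.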

Journal ArticleDOI
TL;DR: This work was carried out in collaboration between all authors and author SA designed the study, wrote the protocol and supervised the work, and all authors read and approved the final manuscript.
Abstract: This work was carried out in collaboration between all authors. Author SA designed the study, wrote the protocol and supervised the work. Authors NNAZ and FIK carried out all laboratories work and performed the statistical analysis. Author NNAZ managed the analyses of the study. Author SA wrote the first draft of the manuscript. Author FIK managed the literature searches and edited the manuscript. All authors read and approved the final manuscript.

Proceedings ArticleDOI
13 Aug 2016
TL;DR: A highly accurate SMART-based analysis pipeline that can correctly predict the necessity of a disk replacement even 10-15 days in advance and uses statistical techniques to automatically detect which SMART parameters correlate with disk replacement.
Abstract: Disks are among the most frequently failing components in today's IT environments. Despite a set of defense mechanisms such as RAID, the availability and reliability of the system are still often impacted severely. In this paper, we present a highly accurate SMART-based analysis pipeline that can correctly predict the necessity of a disk replacement even 10-15 days in advance. Our method has been built and evaluated on more than 30000 disks from two major manufacturers, monitored over 17 months. Our approach employs statistical techniques to automatically detect which SMART parameters correlate with disk replacement and uses them to predict the replacement of a disk with even 98% accuracy.
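The paper's automatic detection of informative SMART parameters can be illustrated with a point-biserial correlation (Pearson correlation between a numeric attribute and a binary label). This is a hedged sketch of the general idea, not the paper's pipeline; the attribute values below are made up, not real vendor data.

```python
import numpy as np

def point_biserial(attr, replaced):
    """Point-biserial correlation: Pearson correlation between a numeric
    SMART attribute and a 0/1 'disk was replaced' label."""
    return float(np.corrcoef(np.asarray(attr, dtype=float),
                             np.asarray(replaced, dtype=float))[0, 1])

# Hypothetical reallocated-sector counts for six disks, three later replaced:
realloc = [1, 2, 3, 10, 12, 14]
labels = [0, 0, 0, 1, 1, 1]
score = point_biserial(realloc, labels)   # strongly positive: informative attribute
```

Ranking attributes by the magnitude of such a score (and keeping only the top few) is one simple way to decide which parameters feed the replacement predictor.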

Journal ArticleDOI
TL;DR: This paper presents a thorough review of existing techniques for reliability and energy efficiency and their trade-off in cloud computing and discusses the classifications on resource failures, fault tolerance mechanisms and energy management mechanisms in cloud systems.

Journal ArticleDOI
TL;DR: The walking test at a normal pace appears suitable for estimating physical function and deterioration due to chronic disease, while the walking test at maximum pace might be useful for estimating subjective general health and skeletal muscle mass.
Abstract: [Purpose] Gait speed is an important objective value associated with several health-related outcomes, including functional mobility, in aging people. However, walking test methodologies and descriptions are not standardized with respect to the specific aims of research. This study examines the reliability and validity of gait speed measured at various distances and paces in elderly Koreans.

Journal ArticleDOI
TL;DR: The Monte Carlo simulation method is utilized to compute the DFT model with consideration of system replacement policy and the results show that this integrated approach is more flexible and effective for assessing the reliability of complex dynamic systems.

Journal ArticleDOI
TL;DR: In the context of longitudinal studies, this article showed that it is difficult to determine whether a difference between two successive observations is attributable to a real change of the respondents or only to the characteristics of the measurement tool, which then leads to a possible misinterpretation of the results.
Abstract: Test–retest is a concept that is routinely evaluated during the validation phase of many measurement tools. However, this term covers at least two related but very different concepts: reliability and agreement. Reliability is the ability of a measure applied twice upon the same respondents to produce the same ranking on both occasions. Agreement requires the measurement tool to produce twice the same exact values. An analysis of research papers showed that the distinction between both concepts remains anything but clear, and that the current practice is to evaluate reliability only, generally on the basis of the sole Pearson’s correlation. This practice is very problematic in the context of longitudinal studies because it becomes difficult to determine whether a difference between two successive observations is attributable to a real change of the respondents or only to the characteristics of the measurement tool, which then leads to a possible misinterpretation of the results. More focus should be given ...
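The article's distinction is easy to demonstrate numerically: a constant shift between test and retest leaves Pearson's correlation (reliability, i.e., a reproduced ranking) perfect while agreement is clearly violated. A minimal illustration with invented scores:

```python
import numpy as np

test_scores = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
retest_scores = test_scores + 3.0   # every respondent scores 3 points higher

# Reliability: the ranking is reproduced exactly, so Pearson's r is 1.0.
r = float(np.corrcoef(test_scores, retest_scores)[0, 1])

# Agreement: the tool never reproduces the same value twice.
bias = float(np.mean(retest_scores - test_scores))          # systematic shift of 3.0
exact_matches = int(np.sum(test_scores == retest_scores))   # 0
```

A paired analysis such as Bland-Altman limits of agreement would expose the bias that the correlation alone hides, which is the article's point about longitudinal change scores.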

Journal ArticleDOI
TL;DR: The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that the accuracy of these regions has negligible effects on the results.

Journal ArticleDOI
TL;DR: In this article, an analysis of worldwide blackouts classifies the events according to defined indices and establishes common causes among them, concluding that a voluntary system of compliance with reliability regulations is inadequate to the needs of current power systems.

Journal ArticleDOI
TL;DR: This article meta-analyzes reliability coefficients (internal consistency, interrater, and intrarater) reported in published L2 research, to guide interpretations of reliability estimates beyond generic benchmarks for acceptability.
Abstract: Ensuring internal validity in quantitative research requires, among other conditions, reliable instrumentation. Unfortunately, however, second language (L2) researchers often fail to report and even more often fail to interpret reliability estimates beyond generic benchmarks for acceptability. As a means to guide interpretations of such estimates, this article meta-analyzes reliability coefficients (internal consistency, interrater, and intrarater) as reported in published L2 research. We recorded 2,244 reliability estimates in 537 individual articles along with study (e.g., sample size) and instrument features (e.g., item formats) proposed to influence reliability. We also coded for the indices employed (e.g., alpha, KR20). The coefficients were then aggregated (i.e., meta-analyzed). The three types of reliability varied, with internal consistency as the lowest: median = .82. Interrater and intrarater estimates were substantially higher (.92 and .95, respectively). Overall estimates were also found to vary according to study and instrument features such as proficiency (low = .79, intermediate = .84, advanced = .89) and target skill (e.g., writing = .88 vs. listening = .77). We use our results to inform and encourage interpretations of reliability estimates relative to the larger field as well as to the substantive and methodological features particular to individual studies and subdomains.

Journal ArticleDOI
TL;DR: In this article, a new reliability evaluation approach is presented, in which Smart Agent Communication (SAC) based system reconfiguration is innovatively integrated into the reliability evaluation process.

Journal ArticleDOI
TL;DR: In this paper, the authors presented an optimization method for remanufacturing process planning in which reliability and cost are taken into consideration, and the results showed that the proposed method is effective for improving reliability and reducing cost.

Journal ArticleDOI
01 Mar 2016
Abstract: This study examined the psychometric properties of the Brazilian-Portuguese version of the Generalized Anxiety Disorder (GAD-7) questionnaire in a community sample (n = 206) of Brazilian adults. The sample was 41% female, with a mean age of 21.10 (SD = 4.49), 75.6% from colleges/universities. Results of a confirmatory factor analysis provided support for the original unidimensional model of the GAD-7 in the Brazilian context. Analyses of Variance (ANOVA) showed that GAD-7 scores were significantly different between males and females, with females scoring higher than males. The scale demonstrated good reliability evidence; both Cronbach's alpha coefficient (α = .916) and the rho composite reliability coefficient (ρ = .909) were adequate. Item parameter analysis showed that items 5 and 7 presented the highest severity thresholds for the generalized anxiety latent trait, whereas item 1 presented the lowest ones. Our findings suggest that the Brazilian-Portuguese version of the GAD-7 is suitable for assessing Generalized Anxiety Disorder symptoms in Brazilian adults in community settings.

Journal ArticleDOI
TL;DR: In this article, a new local approximation method using the most probable point (LMPP) is proposed to improve the accuracy and efficiency of RBDO methods using Kriging model.


Journal ArticleDOI
15 Sep 2016-Nature
TL;DR: Numerous variables, from the batch of serum to the shape of growth plates, can torpedo attempts to replicate cell experiments, but there are ways to ensure reliability.
Abstract: Numerous variables can torpedo attempts to replicate cell experiments, from the batch of serum to the shape of growth plates. But there are ways to ensure reliability.