
Showing papers on "Reliability (statistics)" published in 2014


Book ChapterDOI
01 Jan 2014
TL;DR: No summary is available for this entry; the record lists only the authors.
Abstract: Robert Palisano, Peter Rosenbaum, Stephen Walter, Dianne Russell, Ellen Wood, Barbara Galuppi

647 citations


Journal ArticleDOI
TL;DR: The Consortium for Reliability and Reproducibility (CoRR) has aggregated 1,629 typical individuals’ resting state fMRI data from 18 international sites, and is openly sharing them via the International Data-sharing Neuroimaging Initiative (INDI).
Abstract: Efforts to identify meaningful functional imaging-based biomarkers are limited by the ability to reliably characterize inter-individual differences in human brain function. Although a growing number of connectomics-based measures are reported to have moderate to high test-retest reliability, the variability in data acquisition, experimental designs, and analytic methods precludes the ability to generalize results. The Consortium for Reliability and Reproducibility (CoRR) is working to address this challenge and establish test-retest reliability as a minimum standard for methods development in functional connectomics. Specifically, CoRR has aggregated 1,629 typical individuals’ resting state fMRI (rfMRI) data (5,093 rfMRI scans) from 18 international sites, and is openly sharing them via the International Data-sharing Neuroimaging Initiative (INDI). To allow researchers to generate various estimates of reliability and reproducibility, a variety of data acquisition procedures and experimental designs are included. Similarly, to enable users to assess the impact of commonly encountered artifacts (for example, motion) on characterizations of inter-individual variation, datasets of varying quality are included.

406 citations
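For readers unfamiliar with how test-retest reliability is typically quantified in this literature, below is a minimal sketch of a one-way intraclass correlation, ICC(1), computed on synthetic subject-by-session data; the numbers are placeholders, not CoRR data.

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1) for an (n_subjects, k_sessions) array."""
    n, k = scores.shape
    subject_means = scores.mean(axis=1)
    grand_mean = scores.mean()
    # Between-subject and within-subject mean squares from a one-way ANOVA.
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
trait = rng.normal(size=(50, 1))                   # stable subject-level signal
sessions = trait + 0.5 * rng.normal(size=(50, 2))  # two noisy scan sessions
print(f"ICC(1) = {icc_oneway(sessions):.2f}")      # ~0.8 for this noise level
```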


ComponentDOI
12 Aug 2014, PLOS ONE

394 citations


Journal ArticleDOI
TL;DR: The findings of this study show that the ISI-K is a reliable and valid instrument for assessing the severity of insomnia in a Korean population.
Abstract: The purposes of this study were to standardize and validate a Korean version of the Insomnia Severity Index (ISI-K) and to evaluate its clinical usefulness.

306 citations


Journal ArticleDOI
TL;DR: A step-by-step guide for conducting a visual analysis of graphed data and considerations for persons interested in using visual analysis to evaluate an intervention are highlighted, especially the importance of collecting reliability data for dependent measures and fidelity of implementation of study procedures.
Abstract: Visual analysis of graphic displays of data is a cornerstone of studies using a single case experimental design (SCED). Data are graphed for each participant during a study with trend, level, and stability of data assessed within and between conditions. Reliable interpretations of effects of an intervention are dependent on researchers' understanding and use of systematic procedures. The purpose of this paper is to provide readers with a rationale for visual analysis of data when using a SCED, a step-by-step guide for conducting a visual analysis of graphed data, as well as to highlight considerations for persons interested in using visual analysis to evaluate an intervention, especially the importance of collecting reliability data for dependent measures and fidelity of implementation of study procedures.

297 citations
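The "reliability data for dependent measures" mentioned above are usually reported as interobserver agreement (IOA). A minimal sketch of interval-by-interval IOA, with hypothetical observer records:

```python
def interval_ioa(observer_a, observer_b):
    """Interval-by-interval interobserver agreement, as a percentage."""
    if len(observer_a) != len(observer_b):
        raise ValueError("records must cover the same intervals")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100.0 * agreements / len(observer_a)

# 1 = target behavior observed in the interval, 0 = not observed
a = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1]
b = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print(f"IOA = {interval_ioa(a, b):.0f}%")  # 8 of 10 intervals agree -> 80%
```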


Journal ArticleDOI
TL;DR: In this article, the author provides recommendations for scale inspection that take these dynamics and this distinction into account, and points to readily available alternatives to Cronbach's alpha, such as the Greatest Lower Bound (GLB) or Omega.
Abstract: Health psychologists using questionnaires rely heavily on Cronbach’s alpha as an indicator of scale reliability and internal consistency. Cronbach’s alpha is often viewed as a kind of quality label: high values certify scale quality, low values prompt removal of one or several items. Unfortunately, this approach suffers from two fundamental problems. First, Cronbach’s alpha is both unrelated to a scale's internal consistency and a fatally flawed estimate of its reliability. Second, the approach itself assumes that scale items are repeated measurements, an assumption that is often violated and rarely desirable. The problems with Cronbach’s alpha are easily solved by computing readily available alternatives, such as the Greatest Lower Bound (GLB) or Omega. Solving the second problem, however, is less straightforward. This requires forgoing the appealing comfort of a quantitative, seemingly objective indicator of scale quality altogether, instead acknowledging the dynamics of reliability and validity and the distinction between scales and indices. In this contribution, I explore these issues and provide recommendations for scale inspection that take these dynamics and this distinction into account.

296 citations
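To make the quantity under critique concrete, here is a minimal sketch of Cronbach's alpha computed from a synthetic respondents-by-items score matrix. The alternatives the paper recommends (the GLB and Omega) require factor-model estimation and are available in, for example, the psych package for R.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))             # common factor
items = latent + rng.normal(size=(200, 6))     # six noisy indicators
print(f"alpha = {cronbach_alpha(items):.2f}")  # ~0.86 for this model
```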


Journal ArticleDOI
TL;DR: In this paper, a new cost function is defined that includes the cost of active power losses of the network and the customer interruption costs simultaneously; in order to calculate the reliability indices of the load points, the reconfiguration technique is treated as a failure-rate reduction strategy.
Abstract: This paper proposes a new method to improve the reliability of the distribution system using the reconfiguration strategy. In this regard, a new cost function is defined to include the cost of active power losses of the network and the customer interruption costs simultaneously. Also, in order to calculate the reliability indices of the load points, the reconfiguration technique is considered as a failure-rate reduction strategy. Regarding the reliability cost, the composite customer damage function is employed to find the customer interruption cost data. Meanwhile, a powerful stochastic framework based on a two-point estimate method is proposed to capture the uncertainty of random parameters. Also, a novel self-adaptive modification method based on the clonal selection algorithm is proposed as the optimization tool. The feasibility and satisfying performance of the proposed method are examined on the 69-bus IEEE test system.

222 citations


Journal ArticleDOI
TL;DR: This paper focuses on sampling techniques and, building on the recent adaptation of the EGRA method to systems, presents a strategy for adapting the AK-MCS method to system reliability.

201 citations
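Since the entry above has no abstract, a brief orientation: AK-MCS couples a Kriging (Gaussian process) surrogate with Monte Carlo sampling, enriching the design of experiments at the sample with the lowest learning function U = |mu|/sigma until min U >= 2. A compact sketch of the component-level loop, with a hypothetical linear limit state (failure when g(x) <= 0):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):                                      # hypothetical limit state
    return 3.0 - x[:, 0] - x[:, 1]             # failure when g(x) <= 0

rng = np.random.default_rng(2)
candidates = rng.normal(size=(10_000, 2))      # Monte Carlo population
idx = rng.choice(len(candidates), size=12, replace=False)
X, y = candidates[idx], g(candidates[idx])     # initial design of experiments

for _ in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)  # learning function
    if U.min() >= 2.0:                         # usual stopping criterion
        break
    best = np.argmin(U)                        # most ambiguous sample
    X = np.vstack([X, candidates[best]])
    y = np.append(y, g(candidates[best:best + 1]))

print(f"Pf ≈ {np.mean(mu <= 0):.4f}")          # failure probability estimate
```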


Journal ArticleDOI
TL;DR: Reliability was good to excellent for internal rotation (IR) and external rotation (ER) range of motion (ROM) and isometric strength measurements, regardless of patient or shoulder position or equipment used, and all procedures examined showed acceptable reliability for clinical use.

193 citations


Journal ArticleDOI
TL;DR: In this article, the authors review some applications where field reliability data are used and explore some of the opportunities to use modern reliability data to provide stronger statistical methods to operate and predict the performance of systems in the field.
Abstract: This article reviews some applications where field reliability data are used and explores some of the opportunities to use modern reliability data to provide stronger statistical methods to operate and predict the performance of systems in the field.

181 citations
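As a minimal illustration of working with field lifetime data, the sketch below fits a Weibull model to synthetic, complete (uncensored) failure times; real field data are usually heavily censored, which calls for likelihood-based methods beyond this sketch.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
lifetimes = weibull_min.rvs(c=1.8, scale=1000.0, size=200, random_state=rng)

shape, _, scale = weibull_min.fit(lifetimes, floc=0)   # fix location at zero
print(f"shape ≈ {shape:.2f} (>1 suggests wear-out), scale ≈ {scale:.0f} h")

# Predicted fraction of units failing by 500 h under the fitted model.
print(f"F(500 h) ≈ {weibull_min.cdf(500.0, shape, scale=scale):.2%}")
```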


Journal ArticleDOI
TL;DR: In this paper, a robust transit network optimization method that accounts for road travel time reliability is presented; a robust optimization model, taking stochastic travel times into account, is formulated to satisfy passenger demand and provide reliable transit service.
Abstract: This paper presents a transit network optimization method in which road travel time reliability is considered. A robust optimization model, taking stochastic travel times into account, is formulated to satisfy the demand of passengers and provide reliable transit service. The optimization model aims to maximize the efficiency of passenger trips in the optimized transit network. A tabu search algorithm is defined and implemented to solve the problem. The transit network optimization method proposed in this paper is then tested on two numerical examples: a simple route and a medium-size network. The results show that the proposed method can effectively improve the reliability of a transit network and reduce passenger travel times in general.

Journal ArticleDOI
TL;DR: A reliability generalization study was conducted on three widely studied information systems constructs from the technology acceptance model: perceived ease of use, perceived usefulness, and behavioral intentions, which summarizes the reliability coefficients of the scores on a specified scale across studies and identifies the study characteristics that influence the reliability of these scores.
Abstract: A reliability generalization study (a meta-analysis of reliability coefficients) was conducted on three widely studied information systems constructs from the technology acceptance model (TAM): perceived ease of use (PEOU), perceived usefulness (PU), and behavioral intentions (BI). This form of meta-analysis summarizes the reliability coefficients of the scores on a specified scale across studies and identifies the study characteristics that influence the reliability of these scores. Reliability is a critical issue in conducting empirical research because the reliability of the scores on well-established scales can vary with study characteristics, attenuating effect sizes. In conducting this study, an extensive literature search was conducted, with 380 articles reviewed and coded to perform the reliability generalization. Study characteristics, including technology, sample, and measurement characteristics, were recorded for these articles along with effect size data for the relationships among these variables. After controlling for number of items, sample size, and sampling error, differences in reliability coefficients were found for several study characteristics across the three technology acceptance constructs. The reliability coefficients of PEOU and PU were lower in hedonic contexts than in utilitarian contexts, and were higher when the originally validated scales were used than when other items were substituted. Only 27 percent of the studies that provided the measurement items used the original PEOU items, while 39 percent used the original PU items. Scales administered in English had higher reliability coefficients for PU and BI, with a marginal effect for PEOU. Reliability differences were also found for other study characteristics, including reliability type, subject experience, and gender composition. While average reliability coefficients were high, the results show that, on average, relationships among these constructs are attenuated by 12 percent, with maximum attenuation in the range of 35 to 43 percent. Implications for technology acceptance research are discussed, and suggestions for addressing variation in reliability coefficients across studies are provided.
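The attenuation being quantified follows Spearman's classic correction: the observed correlation between two scales is the true correlation multiplied by the square root of the product of their reliabilities. A minimal sketch with hypothetical numbers:

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman's correction for attenuation due to measurement error."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical: observed PEOU-BI correlation 0.40, scale reliabilities 0.85 and 0.88.
r_true = disattenuate(0.40, 0.85, 0.88)
print(f"estimated true correlation ≈ {r_true:.2f}")  # ≈ 0.46
print(f"attenuation ≈ {1 - 0.40 / r_true:.0%}")      # ≈ 14%
```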

BookDOI
14 Mar 2014
TL;DR: Risk assessment of power systems.
Abstract: Risk assessment of power systems (record via the digital library of Shahid Beheshti University of Medical Sciences and Health Services).

Journal ArticleDOI
TL;DR: It is suggested that researchers should adjust their expectations concerning replications and shift to a meta-analytic mind-set, given the large impact that even modest amounts of measurement error can have on observed associations.
Abstract: Failures to replicate published psychological research findings have contributed to a “crisis of confidence.” Several reasons for these failures have been proposed, the most notable being questionable research practices.

Proceedings ArticleDOI
11 Aug 2014
TL;DR: In this article, a security measure called effective security is defined that includes strong secrecy and stealth communication: it ensures that a message cannot be deciphered and that the presence of meaningful communication is hidden.
Abstract: A security measure called effective security is defined that includes strong secrecy and stealth communication. Effective secrecy ensures that a message cannot be deciphered and that the presence of meaningful communication is hidden. To measure stealth we use resolvability and relate this to binary hypothesis testing. Results are developed for wire-tap channels and broadcast channels with confidential messages.

Journal ArticleDOI
TL;DR: The use of MSF employing medical colleagues, coworkers, and patients as a method to assess physicians in practice has been shown to have high reliability, validity, and feasibility.
Abstract: The use of multisource feedback (MSF), or 360-degree evaluation, has become a recognized method of assessing physician performance in practice. The purpose of the present systematic review was to investigate the reliability, generalizability, validity, and feasibility of MSF for the assessment of physicians.

Journal ArticleDOI
TL;DR: It is demonstrated how structural reliability methods can be used to effectively model the value of information (VoI); an efficient algorithm for its computation is proposed and demonstrated through an illustrative application to monitoring of a structural system subject to fatigue deterioration.
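For orientation, the value of information (VoI) is the expected gain from deciding after observing monitoring data rather than before. A minimal sketch for a stylized inspect-or-repair decision with hypothetical costs (perfect information, not the paper's structural reliability formulation):

```python
p_damaged = 0.10                    # prior probability of a fatigue crack
costs = {                           # action cost given the true state
    ("repair", True): 1.0,  ("repair", False): 1.0,
    ("ignore", True): 10.0, ("ignore", False): 0.0,
}

def expected_cost(action, p):
    return p * costs[(action, True)] + (1 - p) * costs[(action, False)]

# Without monitoring: commit to the single best action under the prior.
prior_cost = min(expected_cost(a, p_damaged) for a in ("repair", "ignore"))

# With perfect monitoring: the state is revealed before acting.
posterior_cost = (p_damaged * min(costs[(a, True)] for a in ("repair", "ignore"))
                  + (1 - p_damaged) * min(costs[(a, False)] for a in ("repair", "ignore")))

print(f"VoI = {prior_cost - posterior_cost:.2f} cost units")  # 0.90 here
```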

Journal ArticleDOI
29 Jan 2014
TL;DR: The authors argue that Internet gaming disorder (IGD) should not be included as a separate mental disorder until its defining features have been identified, the reliability and validity of specific IGD criteria have been established cross-culturally, and prevalence rates have been determined in representative epidemiological samples across the world.
Abstract: [Internet gaming disorder (IGD) should not] be included as a separate mental disorder until the defining features of IGD have been identified, reliability and validity of specific IGD criteria have been obtained cross-culturally, prevalence rates have been determined in representative epidemiological samples across the world, and etiology and associated biological features have been evaluated.

Journal ArticleDOI
TL;DR: In this article, an efficient GA-based method to improve the reliability and power quality of distribution systems using network reconfiguration is presented, in which two new objective functions are formulated to address power quality and reliability issues in the reconfiguration problem.

Journal ArticleDOI
TL;DR: In general, event-related potential (ERP) amplitudes showed adequate to excellent test-retest reliability across a 4-week interval, depending on the component studied, and averaging across multiple trials substantially improved reliability.
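The benefit of trial averaging follows the Spearman-Brown prophecy formula: averaging n parallel trials raises reliability from r to nr / (1 + (n - 1)r). A minimal sketch with a hypothetical single-trial reliability:

```python
def spearman_brown(r_single, n_trials):
    """Predicted reliability of an average of n parallel trials."""
    return n_trials * r_single / (1 + (n_trials - 1) * r_single)

# Hypothetical single-trial reliability of an ERP amplitude measure.
for n in (1, 10, 30, 60):
    print(f"{n:3d} trials -> reliability ≈ {spearman_brown(0.15, n):.2f}")
```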

Journal ArticleDOI
TL;DR: The strongest levels of evidence for reliability exist in support of the Debrunner kyphometer, Spinal Mouse, and flexicurve index, and for validity in support of the arcometer and flexicurve index.

Journal ArticleDOI
TL;DR: The main objective of the paper is to raise awareness of existing trade-offs between different qualities of possible food security measurement tools that must be taken into account when such tools are proposed for practical application, especially for use within an international monitoring framework.
Abstract: This paper reviews some of the existing food security indicators, discussing the validity of the underlying concept and the expected reliability of measures under reasonably feasible conditions. The main objective of the paper is to raise awareness of existing trade-offs between different qualities of possible food security measurement tools that must be taken into account when such tools are proposed for practical application, especially for use within an international monitoring framework. The hope is to provide a timely, useful contribution to the process leading to the definition of a food security goal and the associated monitoring framework within the post-2015 Development Agenda.

Journal ArticleDOI
TL;DR: The AMP is one of the most widely used implicit attitude measures, and evidence regarding its reliability and validity has grown rapidly; the authors offer recommendations for using the AMP effectively for a wide variety of research purposes.
Abstract: The affect misattribution procedure (AMP) measures automatically activated responses based on the misattributions people make about the sources of their affect or cognitions. The AMP is one of the most widely used implicit attitude measures, and evidence regarding its reliability and validity has grown rapidly. In this brief review, we survey the evidence of reliability and validity while discussing the mechanisms that drive priming effects in the AMP. We consider the unique capabilities of this procedure to measure implicit and explicit cognition with simplicity and greater experimental control than other measures. Finally, we offer recommendations for using the AMP effectively for a wide variety of research purposes.

Journal ArticleDOI
TL;DR: In this article, an efficient method for solving the multi-objective reconfiguration of radial distribution systems in the presence of distributed generators is presented, which considers reliability, operation cost, and loss simultaneously.
Abstract: Power loss reduction can be considered one of the main objectives for distribution system operators. Reconfiguration is an operational process used for this optimisation by changing the status of switches in a distribution network. System operators seek well-balanced distribution systems that decrease operation cost, improve reliability, and reduce power loss. This study presents an efficient method for solving the multi-objective reconfiguration of radial distribution systems in the presence of distributed generators. The conventional distribution feeder reconfiguration (DFR) problem cannot meet reliability requirements, because it considers only loss and voltage deviation as objective functions. The proposed approach considers reliability, operation cost, and loss simultaneously. Adding the reliability objective makes the DFR problem more complicated than before, so it needs to be solved with an accurate algorithm. This study therefore utilises an enhanced gravitational search algorithm (EGSA), which benefits from a special mutation strategy to reduce processing time and improve solution quality, particularly to avoid being trapped in local optima. The proposed approach has been applied to two distribution test systems: the IEEE 33-node and 70-node test systems.

OtherDOI
29 Sep 2014
TL;DR: The properties of the modeling framework that are of highest importance for reliability practitioners are discussed.
Abstract: Over the last decade, Bayesian networks (BNs) have become a popular tool for modeling many kinds of statistical problems. We have also seen a growing interest for using BNs in the reliability analysis community. This article discusses the properties of the modeling framework that are of highest importance for reliability practitioners. Keywords: Bayesian networks; influence diagrams; modelling; decision making
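As a flavor of what such models buy a reliability practitioner, the sketch below computes the failure probability of a toy 2-out-of-3 pump system with a shared power supply by brute-force enumeration of component states; a BN library would perform the same inference far more efficiently on larger models. All numbers are hypothetical.

```python
from itertools import product

p_fail = {"power": 0.01, "pump1": 0.05, "pump2": 0.05, "pump3": 0.05}

def system_fails(state):                 # state[name] is True if it works
    if not state["power"]:               # pumps need the shared supply
        return True
    working = sum(state[p] for p in ("pump1", "pump2", "pump3"))
    return working < 2                   # needs at least 2 of 3 pumps

names = list(p_fail)
p_system_fail = 0.0
for bits in product([True, False], repeat=len(names)):
    state = dict(zip(names, bits))
    prob = 1.0
    for n in names:                      # independent components
        prob *= (1 - p_fail[n]) if state[n] else p_fail[n]
    if system_fails(state):
        p_system_fail += prob

print(f"P(system failure) ≈ {p_system_fail:.5f}")  # ≈ 0.01718
```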

Journal ArticleDOI
TL;DR: The proposed stochastic approach is scalable for analyzing large circuits and can further account for various fault models as well as calculate the soft error rate (SER), as supported by extensive simulations and detailed comparison with existing approaches.
Abstract: Reliability is fast becoming a major concern due to the nanometric scaling of CMOS technology. Accurate analytical approaches for the reliability evaluation of logic circuits, however, have a computational complexity that generally increases exponentially with circuit size, which makes the reliability analysis of large circuits intractable. This paper initially presents novel computational models based on stochastic computation; using these stochastic computational models (SCMs), a simulation-based analytical approach is then proposed for the reliability evaluation of logic circuits. In this approach, signal probabilities are encoded in the statistics of random binary bit streams, and non-Bernoulli sequences of random permutations of binary bits are used for initial input and gate error probabilities. By leveraging the bit-wise dependencies of random binary streams, the proposed approach takes into account signal correlations and evaluates the joint reliability of multiple outputs. Therefore, it accurately determines the reliability of a circuit; its precision is only limited by the random fluctuations inherent in the stochastic sequences. Based on both simulation and analysis, the SCM approach offers ease of implementation and accuracy of evaluation. The use of non-Bernoulli sequences as initial inputs further increases the evaluation efficiency and accuracy compared to the conventional use of Bernoulli sequences, so the proposed stochastic approach is scalable for analyzing large circuits. It can further account for various fault models as well as calculate the soft error rate (SER). These results are supported by extensive simulations and detailed comparison with existing approaches.
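The core encoding idea is easy to demonstrate: a signal probability becomes the mean of a random bit stream, and a gate error is an XOR with an error stream. A minimal sketch for a single noisy AND gate (not the paper's non-Bernoulli sequence machinery):

```python
import numpy as np

L = 100_000                                  # stream length sets the precision
rng = np.random.default_rng(4)

def stream(p):
    """Random bit stream whose mean encodes probability p."""
    return rng.random(L) < p

def noisy_and(a, b, eps):
    """AND gate whose output is flipped with probability eps."""
    return (a & b) ^ stream(eps)

a, b = stream(0.5), stream(0.5)
out, ideal = noisy_and(a, b, eps=0.05), a & b
print(f"P(out=1) ≈ {out.mean():.3f}")            # 0.25(1-eps)+0.75*eps = 0.275
print(f"P(error) ≈ {(out ^ ideal).mean():.3f}")  # ≈ eps = 0.05
```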

Proceedings ArticleDOI
15 Oct 2014
TL;DR: Chisel is a system for reliability- and accuracy-aware optimization of approximate computational kernels that run on approximate hardware platforms; given a combined reliability and/or accuracy specification, it automatically selects approximate kernel operations to synthesize an approximate computation that minimizes energy consumption.
Abstract: The accuracy of an approximate computation is the distance between the result that the computation produces and the corresponding fully accurate result. The reliability of the computation is the probability that it will produce an acceptably accurate result. Emerging approximate hardware platforms provide approximate operations that, in return for reduced energy consumption and/or increased performance, exhibit reduced reliability and/or accuracy. We present Chisel, a system for reliability- and accuracy-aware optimization of approximate computational kernels that run on approximate hardware platforms. Given a combined reliability and/or accuracy specification, Chisel automatically selects approximate kernel operations to synthesize an approximate computation that minimizes energy consumption while satisfying its reliability and accuracy specification. We evaluate Chisel on five applications from the image processing, scientific computing, and financial analysis domains. The experimental results show that our implemented optimization algorithm enables Chisel to optimize our set of benchmark kernels to obtain energy savings from 8.7% to 19.8% compared to the fully reliable kernel implementations while preserving important reliability guarantees.
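The shape of the selection problem can be conveyed with a toy model: each operation runs exactly or approximately, kernel reliability is the product of per-operation reliabilities, and the goal is minimum energy subject to a reliability specification. The greedy heuristic and the numbers below are illustrative stand-ins, not Chisel's actual optimization:

```python
ops = {  # name: (exact_energy, approx_energy, approx_reliability), hypothetical
    "mul1": (10.0, 6.0, 0.999),
    "mul2": (10.0, 6.0, 0.995),
    "add1": (2.0, 1.5, 0.9999),
    "add2": (2.0, 1.5, 0.999),
}
target = 0.995                        # kernel-level reliability specification

choice = {name: "exact" for name in ops}
reliability = 1.0
energy = sum(e for e, _, _ in ops.values())

# Approximate ops with the best energy saving per unit of reliability lost.
ranked = sorted(ops, key=lambda n: (ops[n][0] - ops[n][1]) / (1 - ops[n][2]),
                reverse=True)
for name in ranked:
    exact_e, approx_e, r = ops[name]
    if reliability * r >= target:     # spec still holds after approximating
        choice[name] = "approx"
        reliability *= r
        energy += approx_e - exact_e

print(choice)
print(f"reliability = {reliability:.4f}, energy = {energy:.1f}")
```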


Journal ArticleDOI
TL;DR: In this paper, the reliability and validity of quantitative risk analysis (QRA) are investigated through a case study of ship-ship collision risk. The reliability of the proposed encounter detection mechanisms is found to be questionable, and significant uncertainty is found regarding the encounter definition in the selected methods.

Journal ArticleDOI
TL;DR: An empirical investigation ranking the factors that influence maintenance strategies at the Iranian oil terminals company indicates that reliability ranks first, followed by production quality, cost, and safety.
Abstract: This paper presents an empirical investigation to rank the factors influencing maintenance strategies at the Iranian oil terminals company. The study identifies four main factors: production quality, reliability, cost, and safety. Using a fuzzy analytical process, the study determines the sub-factors associated with each main factor and ranks them through pairwise comparisons. The results indicate that reliability ranks first (0.255), followed by production quality (0.252), cost (0.250), and safety (0.244). In terms of reliability, the best utilization of resources is the top priority, followed by increased access to maintenance tools and reduced production interruptions. In terms of production quality, reductions in system failures and rework are the most important factors, followed by customer satisfaction and defects. In terms of cost, ease of access to accessories and consulting are important factors, followed by the necessary software, hardware, and training programs. Finally, in terms of safety, external, internal, and employee services are the most important issues to be considered.
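The ranking rests on turning pairwise comparisons into priority weights. A simplified sketch using the geometric-mean method on a crisp (non-fuzzy) comparison matrix, with made-up judgments for the four main factors:

```python
import numpy as np

# a_ij = judged importance of factor i relative to factor j (hypothetical).
A = np.array([
    [1.0, 2.0, 3.0, 3.0],
    [1/2, 1.0, 2.0, 2.0],
    [1/3, 1/2, 1.0, 2.0],
    [1/3, 1/2, 1/2, 1.0],
])

gm = A.prod(axis=1) ** (1 / A.shape[0])   # row geometric means
weights = gm / gm.sum()                   # normalized priority weights
for name, w in zip(["reliability", "quality", "cost", "safety"], weights):
    print(f"{name:12s} {w:.3f}")
```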