scispace - formally typeset

Showing papers on "Reliability (statistics) published in 1987"


Book
01 Jan 1987
TL;DR: A book covering measures of structural reliability, structural reliability assessment, second-moment and transformation methods, and the probabilistic evaluation of existing structures.
Abstract: Measures of Structural Reliability. Structural Reliability Assessment. Integration and Simulation Methods. Second-Moment and Transformation Methods. Reliability of Structural Systems. Time Dependent Reliability. Load and Load Effect Modelling. Resistance Modelling. Codes and Structural Reliability. Probabilistic Evaluation of Existing Structures. Appendices. References. Index.

3,151 citations


Journal ArticleDOI
TL;DR: Responsiveness should join reliability and validity as necessary requirements for psychometric instruments designed primarily to measure change over time.

1,867 citations


Journal ArticleDOI
TL;DR: In this article, the authors demonstrate that closer attention to the ways people construct meaning can suggest new ways to improve reliability in air traffic control, nuclear power generation, and naval carrier operations.
Abstract: Organizations in which reliable performance is a more pressing issue than efficient performance often must learn to cope with incomprehensible technologies by means other than trial and error, since the cost of failure is too high. Discovery and consistent application of substitutes for trial and error—such as imagination, simulation, vicarious experience, and stories—contribute to heightened reliability. Organizational culture is integral to the creation of effective substitutes. Using examples taken from air traffic control, nuclear power generation, and naval carrier operations, this article demonstrates that closer attention to the ways people construct meaning can suggest new ways to improve reliability.

1,290 citations



Journal ArticleDOI
TL;DR: It is concluded that the utility approach is beyond the experimental stage, and is now a viable alternative for investigators to use in measuring health-related quality of life.

1,038 citations



Journal ArticleDOI
TL;DR: This paper provides exact power contours to guide the planning of reliability studies, where the parameter of interest is the coefficient of intraclass correlation rho derived from a one-way analysis of variance model.
Abstract: This paper provides exact power contours to guide the planning of reliability studies, where the parameter of interest is the coefficient of intraclass correlation rho derived from a one-way analysis of variance model. The contours display the required number of subjects k and number of repeated measurements n that provide 80 per cent power for testing H0: rho less than or equal to rho0 versus H1: rho greater than rho0 at the 5 per cent level of significance for selected values of rho0. Design considerations arising from these results are discussed.

642 citations
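The power analysis this entry describes can be approximated numerically. The sketch below (function names are hypothetical, not from the paper) estimates the power of the one-way ANOVA F-test of H0: rho <= rho0 versus H1: rho > rho0 by Monte Carlo, using an empirical critical value in place of the paper's exact contours and assuming unit total variance.

```python
import numpy as np

def sim_f_ratios(k, n, rho, n_sim, rng):
    """Simulate MSB/MSW ratios from a balanced one-way random-effects
    model with intraclass correlation rho (unit total variance)."""
    subjects = rng.normal(0.0, np.sqrt(rho), size=(n_sim, k, 1))
    errors = rng.normal(0.0, np.sqrt(1.0 - rho), size=(n_sim, k, n))
    y = subjects + errors
    msb = n * y.mean(axis=2).var(axis=1, ddof=1)   # between-subject mean square
    msw = y.var(axis=2, ddof=1).mean(axis=1)       # within-subject mean square
    return msb / msw

def icc_power(k, n, rho, rho0, alpha=0.05, n_sim=4000, seed=0):
    """Monte Carlo power of the F-test of H0: rho <= rho0 vs H1: rho > rho0."""
    rng = np.random.default_rng(seed)
    crit = np.quantile(sim_f_ratios(k, n, rho0, n_sim, rng), 1 - alpha)
    return float((sim_f_ratios(k, n, rho, n_sim, rng) > crit).mean())
```

For example, k = 30 subjects measured n = 3 times each gives power well above 80 per cent for detecting rho = 0.6 against rho0 = 0.2.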



Journal ArticleDOI
TL;DR: It is argued that Cohen's kappa statistic is the appropriate index of diagnostic agreement in psychiatry, since it takes into account that raters agree by chance alone some of the time and gives a perfect value only when there is total agreement among the raters.
Abstract: Eighteen years ago in this journal, Spitzer and colleagues [1] published "Quantification of Agreement in Psychiatric Diagnosis," in which they argued that a new measure, Cohen's kappa statistic [2], was the appropriate index of diagnostic agreement in psychiatry. They pointed out that other measures of diagnostic reliability then in use, such as the total percent agreement and the contingency coefficient, were flawed as indexes of agreement since they either overestimated the discriminating power of the diagnosticians or were affected by associations among the diagnoses other than strict agreement. The new statistic seemed to overcome the weaknesses of the other measures. It took into account the fact that raters agree by chance alone some of the time, and it only gave a perfect value if there was total agreement among the raters. Furthermore, generalizations of the simple kappa statistic were already available. This family of statistics could be used to assess

399 citations
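Cohen's kappa itself is straightforward to compute from two raters' labels: observed agreement corrected for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch (the helper name is invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the raters' marginals."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (p_o - p_e) / (1 - p_e)
```

As the abstract notes, kappa reaches 1 only under total agreement and is 0 when agreement is exactly at chance level.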


Journal ArticleDOI
TL;DR: In this article, the classical first- and second-order reliability methods as applied to complex structural and other technical systems are reviewed, and are justified and improved by new concepts of asymptotic analysis.

367 citations


Journal ArticleDOI
TL;DR: The marked inconsistency of findings across studies comparing anorexics or bulimics with some "control" group on body-image variables is discussed in terms of variations in measurement techniques, subject characteristics, and experimental setting.
Abstract: Disturbances in body image are often regarded as a cardinal feature of anorexia nervosa and bulimia nervosa. The various approaches to assessing body-image disturbances in anorexics and bulimics are detailed, including body-part size estimation techniques, distorting image methods, silhouettes, and attitudinal measures. The marked inconsistency of findings across studies comparing anorexics or bulimics with some "control" group on body-image variables is discussed in terms of variations in measurement techniques, subject characteristics, and experimental setting. The reliability and validity of existing measures are discussed. Finally, conclusions and recommendations for future research are provided, in addition to a brief presentation of therapeutic approaches to treating body-image disturbances.

Journal ArticleDOI
TL;DR: In this paper, the authors present a framework for a model that can be used to determine the optimal (least cost) design of a water distribution system subject to continuity, conservation of energy, nodal head bounds, and reliability constraints.
Abstract: This paper presents the basic framework for a model that can be used to determine the optimal (least-cost) design of a water distribution system subject to continuity, conservation of energy, nodal head bounds, and reliability constraints. Reliability is defined as the probability of satisfying nodal demands and pressure heads for various possible pipe failures (breaks) in the water distribution system. The overall model includes three linked models: a steady-state simulation model, a reliability model, and an optimization model. The simulation model is used to implicitly solve the continuity and energy constraints and is used in the reliability model to define minimum cut sets. The reliability model, which is based on a minimum cut-set method, determines the values of system and nodal reliability. The optimization model is based on a generalized reduced-gradient method. Examples are used to illustrate the model.
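The minimum cut-set idea has a standard first-order form: the probability of system failure is bounded above by the sum, over minimal cut sets, of the probability that every component in the cut set fails. A minimal sketch assuming independent pipe failures (names hypothetical, not the paper's model):

```python
def failure_probability_bound(cut_sets, p_fail):
    """Upper bound on system failure probability from minimal cut sets.

    cut_sets: iterable of sets of component ids (a cut set fails the
    system when all of its components fail); p_fail: id -> failure prob.
    """
    bound = 0.0
    for cut in cut_sets:
        prob = 1.0
        for component in cut:
            prob *= p_fail[component]
        bound += prob
    return min(bound, 1.0)
```

For a system that fails if pipe 1 breaks, or if pipes 2 and 3 both break, the bound is 0.01 + 0.1 * 0.1 = 0.02.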

Journal ArticleDOI
TL;DR: This paper presents an approach for avoiding the large state space problem and uses a hierarchical modeling technique for analyzing complex reliability models that allows the flexibility of Markov models where necessary and retains the efficiency of combinatorial solution where possible.
Abstract: Combinatorial models such as fault trees and reliability block diagrams are efficient for model specification and often efficient in their evaluation. But it is difficult, if not impossible, to allow for dependencies (such as repair dependency and near-coincident-fault type dependency), transient and intermittent faults, standby systems with warm spares, and so on. Markov models can capture such important system behavior, but the size of a Markov model can grow exponentially with the number of components in the system. This paper presents an approach for avoiding the large state space problem. The approach uses a hierarchical modeling technique for analyzing complex reliability models. It allows the flexibility of Markov models where necessary and retains the efficiency of combinatorial solution where possible. Based on this approach a computer program called SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) has been written. The hierarchical modeling technique provides a very flexible mechanism for using decomposition and aggregation to model large systems; it allows for both combinatorial and Markov or semi-Markov submodels, and can analyze each model to produce a distribution function. The choice of the number of levels of models and the model types at each level is left up to the modeler. Component distribution functions can be any exponential polynomial whose range is between zero and one. Examples show how combinations of models can be used to evaluate the reliability and availability of large systems using SHARPE.
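The hierarchical idea, solving a small Markov submodel and feeding its result into a combinatorial model, can be sketched in a few lines. This is purely illustrative and not SHARPE's interface; the component names and rates are invented.

```python
import math

def exp_reliability(lam, t):
    """Single component with constant failure rate lam: R(t) = exp(-lam t)."""
    return math.exp(-lam * t)

def cold_standby_pair(lam, t):
    """Markov submodel: primary plus cold spare with perfect switchover.
    Time to failure is Erlang(2, lam), so R(t) = exp(-lam t) * (1 + lam t)."""
    return math.exp(-lam * t) * (1 + lam * t)

def series(reliabilities):
    """Combinatorial top level: a series system works iff every block works."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def system_reliability(lam_cpu, lam_disk, t):
    """Hierarchy: the standby pair's Markov result feeds the series model."""
    return series([cold_standby_pair(lam_cpu, t), exp_reliability(lam_disk, t)])
```

The standby pair is strictly more reliable than a single component with the same rate, and the composition stays a simple product at the top level.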

Journal ArticleDOI
TL;DR: In this article, a theory for an enhanced mathematical model of R/C frame members is presented and its accuracy is verified by simulating various laboratory experiments for which data were available in the literature.
Abstract: A theory for an enhanced mathematical model of R/C frame members is presented and its accuracy is verified by simulating various laboratory experiments for which data were available in the literature. New member and global damage parameters are defined. These damage parameters are useful for subsequent reliability analysis of damaged concrete frames.


Journal ArticleDOI
TL;DR: The findings indicate a high rate of stability in self-reporting of substance use, both cross-sectionally and longitudinally, in agreement with other studies of self-reported drug use and suggest that questionnaire may provide highly reliable data for research.
Abstract: The paper presents an evaluation of the stability and consistency of self-reported adolescent drug use. The data were collected from 1900 high-school students. Analyses included estimates of alternate-forms reliability, non-response rates, logical consistency in the responses, and test-retest reliability, as well as estimates of exaggerated reports. The findings indicate a high rate of stability in self-reporting of substance use, both cross-sectionally and longitudinally. These results are in agreement with other studies of self-reported drug use and suggest that questionnaires may provide highly reliable data for research.

01 Jan 1987
TL;DR: The ASEP HRA Procedure, as presented in this paper, consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA.
Abstract: This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

Book
28 Dec 1987
TL;DR: Introduction Reliability Theory Failure Mechanisms Failure Mechanisms and Device Technologies Packaging Screening Accelerated Testing Physical Failure Analysis Techniques Reliability Prediction and Failure Modelling Quality Assurance Conclusions.
Abstract: Introduction Reliability Theory Failure Mechanisms Failure Mechanisms and Device Technologies Packaging Screening Accelerated Testing Physical Failure Analysis Techniques Reliability Prediction and Failure Modelling Quality Assurance Conclusions.

Journal ArticleDOI
TL;DR: Presents reliability models for three methods of creating fault-tolerant software systems (Recovery Block, N-Version Programming, and Consensus Recovery Block) and uses the models to show that the Consensus Recovery Block is more reliable than the other two.
Abstract: In situations in which computers are used to manage life-critical situations, software errors that could arise due to inadequate or incomplete testing cannot be tolerated. This paper examines three methods of creating fault-tolerant software systems, Recovery Block, N-Version Programming, and Consensus Recovery Block, and it presents reliability models for each. The models are used to show that one method, the Consensus Recovery Block, is more reliable than the other two.
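For intuition about why voting schemes help, consider plain 2-of-3 majority voting with independent versions, each correct with probability p. This is a generic N-Version-style calculation, not the paper's Consensus Recovery Block model:

```python
def majority_of_three(p):
    """Probability a 2-of-3 majority vote is correct, assuming independent
    versions each correct with probability p (illustrative only)."""
    return p**3 + 3 * p**2 * (1 - p)
```

With p = 0.9 the voted result is correct with probability 0.972; independence of version failures is, of course, an optimistic assumption in practice.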


Journal ArticleDOI
TL;DR: In this paper, the test-retest reliability of the Children's Depression Inventory (CDI) in 108 normal 7- to 12-year-old children was assessed and the results showed that CDI scores decreased significantly over trials, with the greatest change occurring from the first to the second trial.
Abstract: We assessed the test-retest reliability of the Children's Depression Inventory (CDI) in 108 normal 7- to 12-year-old children. All children completed the CDI initially and at 2-week, 4-week, and 6-week intervals. Reliability coefficients ranged from .82 over 2 weeks to .66 and .67 for the longer intervals. CDI scores decreased significantly over trials, with the greatest change occurring from the first to the second trial. Results were compared to previous findings, and implications for those employing the CDI as an outcome measure over repeated trials were discussed.
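Test-retest coefficients such as the .82, .66, and .67 reported here are conventionally Pearson correlations between two administrations of the instrument. A minimal sketch, assuming Pearson's r is the intended coefficient:

```python
def pearson_r(x, y):
    """Pearson correlation between scores from two administrations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```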

Journal ArticleDOI
TL;DR: Good reliability was obtained in the atrophy inspection of the small muscles of the hand, in the sensitivity tests for touch and pain, and in the neck compression and axial manual traction tests, but poor reliability was obtaining for many palpations.
Abstract: The purpose of this study was to collect data on interexaminer reliability of a set of tests representative of the clinical examination of a patient with neck and radicular pain. A conventional neurological examination, palpations, and tests for the provocation or relief of radicular symptoms were performed on 52 patients by two independent raters. Good reliability was obtained in the atrophy inspection of the small muscles of the hand, in the sensitivity tests for touch and pain, and in the neck compression and axial manual traction tests. Fair reliability was obtained in muscle strength testing and in the estimation of the range of motion, and poor reliability was obtained for many palpations. Poor standardization of examination procedures and changes in the patients' attention were considered the main factors affecting reliability. Better operational definitions and procedures, such as the standardization of palpation pressure and traction force, are suggested for future studies.


Journal ArticleDOI
TL;DR: The results show that this method can be used for workplace exposure zoning, but that the usefulness of the estimates for epidemiological purposes is not clear-cut and depends strongly on the actual exposure characteristics within a workplace.
Abstract: A method for qualitative estimation of the exposure at task level was used and validated with actual measurements in five small factories. The results showed that occupational hygienists were in general the most successful estimators. Plant supervisors and workers handled the estimation method less successfully because of more misclassification of the tasks. The method resulted, in general, in a classification of tasks in four exposure categories ranging from no exposure to high exposure. The exposure categories correlated positively with mean concentrations, but showed overlapping exposure distributions. This resulted in misclassification of the exposure for individual workers when a relatively large interindividual variability in exposure levels within an exposure category was present. The results show that this method can be used for workplace exposure zoning, but that the usefulness of the estimates for epidemiological purposes is not clear-cut and depends strongly on the actual exposure characteristics within a workplace. A combination of the qualitative exposure estimation method together with assessment of the exposure levels by measurements makes a rearrangement of tasks or individual workers possible and could improve the validity of this method for epidemiological purposes.

Book ChapterDOI
TL;DR: In this article, a new method is proposed to evaluate structural reliability under stochastic loadings, in which the system parameters such as stiffness, damping, strength, excitation frequency content and duration are assumed given.


Journal ArticleDOI
TL;DR: The main focus is a state of the art summary of analytical and numerical methods used to solve computer system availability models and will consider both transient and steady-state availability measures and for transient measures, both expected values and distributions.
Abstract: System availability is becoming an increasingly important factor in evaluating the behavior of commercial computer systems. This is due to the increased dependence of enterprises on continuously operating computer systems and to the emphasis on fault-tolerant designs. Thus, we expect availability modeling to be of increasing interest to computer system analysts and for performance models and availability models to be used to evaluate combined performance/availability (performability) measures. Since commercial computer systems are repairable, availability measures are of greater interest than reliability measures. Reliability measures are typically used to evaluate nonrepairable systems such as occur in military and aerospace applications. We will discuss system aspects which should be represented in an availability model; however, our main focus is a state of the art summary of analytical and numerical methods used to solve computer system availability models. We will consider both transient and steady-state availability measures and for transient measures, both expected values and distributions. We are developing a program package for system availability modeling and intend to incorporate the best solution methods.
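The transient and steady-state availability measures discussed here are often introduced with the simplest repairable system, a two-state Markov model with failure rate lam and repair rate mu. A minimal sketch (not the authors' program package):

```python
import math

def availability(lam, mu, t):
    """Point availability of a two-state Markov model starting in the up
    state: A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu) t).
    As t grows, A(t) approaches the steady-state value mu/(lam+mu)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)
```

With lam = 0.01 and mu = 1.0, the steady-state availability is 1/1.01, roughly 0.9901.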

Journal ArticleDOI
TL;DR: A simple and efficient algorithm, SYREL, to obtain compact terminal reliability expressions between a terminal pair of computers of complex networks that incorporates conditional probability, set theory, and Boolean algebra in a distinct approach.
Abstract: Symbolic terminal reliability algorithms are important for the analysis and synthesis of computer networks. In this paper, we present a simple and efficient algorithm, SYREL, to obtain compact terminal reliability expressions between a terminal pair of computers in complex networks. The algorithm incorporates conditional probability, set theory, and Boolean algebra in a distinct approach in which most of the computations performed are directly executable Boolean operations. Conditional probability is used to avoid applying, at each iteration, the most time-consuming step in reliability algorithms: making a set of events mutually exclusive. The algorithm has been implemented on a VAX 11/750 and can analyze fairly large networks with modest memory and time requirements.
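For small networks, two-terminal reliability can be checked exactly by enumerating every combination of working and failed links. The brute force below is exponential in the number of links and purely illustrative, in contrast to SYREL's compact symbolic expressions; all names are invented.

```python
from itertools import product

def terminal_reliability(edges, p_up, s, t):
    """Exact two-terminal reliability by state enumeration.

    edges: list of (u, v) links; p_up: edge index -> probability the
    link works; nodes are assumed perfectly reliable.
    """
    def connected(state):
        adjacency = {}
        for i, (u, v) in enumerate(edges):
            if state[i]:
                adjacency.setdefault(u, []).append(v)
                adjacency.setdefault(v, []).append(u)
        seen, stack = {s}, [s]
        while stack:
            for w in adjacency.get(stack.pop(), []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return t in seen

    reliability = 0.0
    for state in product([True, False], repeat=len(edges)):
        prob = 1.0
        for i, up in enumerate(state):
            prob *= p_up[i] if up else 1.0 - p_up[i]
        if connected(state):
            reliability += prob
    return reliability
```

Two parallel s-t links of reliability 0.9 give 1 - 0.1**2 = 0.99; the same links in series give 0.81.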

Journal ArticleDOI
TL;DR: A cost-reliability optimal software release problem is investigated for three existing software reliability growth models by evaluating both software cost and software reliability criteria simultaneously.