
Showing papers on "Reliability (statistics) published in 1981"


Book
01 Jun 1981
TL;DR: A number of new classes of life distributions arising naturally in reliability models are treated systematically; each provides a realistic probabilistic description of a physical property occurring in the reliability context, permitting more realistic modeling of commonly occurring reliability situations.
Abstract: : This is the first of two books on the statistical theory of reliability and life testing. The present book concentrates on probabilistic aspects of reliability theory, while the forthcoming book will focus on inferential aspects of reliability and life testing, applying the probabilistic tools developed in this volume. This book emphasizes the newer, research aspects of reliability theory. The concept of a coherent system serves as a unifying theme for much of the book. A number of new classes of life distributions arising naturally in reliability models are treated systematically: the increasing failure rate average, new better than used, decreasing mean residual life, and other classes of distributions. As the names would seem to indicate, each such class of life distributions provides a realistic probabilistic description of a physical property occurring in the reliability context. Also various types of positive dependence among random variables are considered, thus permitting more realistic modeling of commonly occurring reliability situations.
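For readers unfamiliar with the classes named above, the following is a brief sketch of their standard defining conditions, written for a lifetime T with survival function S(t) = P(T > t); the notation is ours, not the book's.

```latex
% Standard defining conditions for three of the life-distribution classes.
\begin{align*}
\text{IFRA (increasing failure rate average):}\quad
  & -\tfrac{1}{t}\log S(t)\ \text{is nondecreasing in } t > 0,\\
\text{NBU (new better than used):}\quad
  & S(x+y) \le S(x)\,S(y)\quad \text{for all } x, y \ge 0,\\
\text{DMRL (decreasing mean residual life):}\quad
  & m(t) = \mathbb{E}[T - t \mid T > t]\ \text{is nonincreasing in } t.
\end{align*}
```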

3,876 citations


Book
01 Jan 1981
TL;DR: Practical Reliability Engineering, as mentioned in this paper, is among the most widely used reliability textbooks for engineering courses, with a focus on practical aspects of engineering, including the mathematics of reliability, physics of failure, graphical and software methods of failure data analysis, and reliability prediction and modelling.
Abstract: With emphasis on practical aspects of engineering, this bestseller has gained worldwide recognition through progressive editions as the essential reliability textbook. This fifth edition retains the unique balanced mixture of reliability theory and applications, thoroughly updated with the latest industry best practices. Practical Reliability Engineering fulfils the requirements of the Certified Reliability Engineer curriculum of the American Society for Quality (ASQ). Each chapter is supported by practice questions, and a solutions manual is available to course tutors via the companion website. Enhanced coverage of the mathematics of reliability, physics of failure, graphical and software methods of failure data analysis, reliability prediction and modelling, design for reliability and safety, as well as management and economics of reliability programmes, ensures continued relevance to all quality assurance and reliability courses. Notable additions include: new chapters on applications of Monte Carlo simulation methods and reliability demonstration methods; software applications of statistical methods, including probability plotting and a wider use of common software tools; more detailed descriptions of reliability prediction methods; comprehensive treatment of accelerated test data analysis and warranty data analysis; and revised and expanded end-of-chapter tutorial sections to advance students' practical knowledge. The fifth edition will appeal to a wide range of readers, from college students to seasoned engineering professionals involved in the design, development, manufacture and maintenance of reliable engineering products and systems. www.wiley.com/go/oconnor_reliability5

1,106 citations



Book
11 Aug 1981

227 citations


Journal ArticleDOI
TL;DR: The suggested model results in earlier fault-fixes having a greater effect than later ones (the faults that make the greatest contribution to the overall failure rate tend to show themselves, and so are fixed, earlier) and in the DFR property between fault fixes.
Abstract: An assumption commonly made in early models of software reliability is that the failure rate of a program is a constant multiple of the (unknown) number of faults remaining. This implies that all faults contribute the same amount to the failure rate of the program. The assumption is challenged and an alternative proposed. The suggested model results in earlier fault-fixes having a greater effect than later ones (the faults which make the greatest contribution to the overall failure rate tend to show themselves earlier, and so are fixed earlier), and the DFR property between fault fixes (assurance about programs increases during periods of failure-free operation, as well as at fault fixes). The model is tractable and allows a variety of reliability measures to be calculated. Predictions of total execution time to achieve a target reliability, and total number of fault fixes to target reliability, are obtained. The model might also apply to hardware reliability growth resulting from the elimination of design errors.
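As an illustrative sketch (not the paper's exact formulation), the contrast between the constant-multiple assumption and an unequal-contribution alternative can be simulated. The fault count and the rate distribution below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # assumed initial number of faults (illustrative only)

# Model A: every fault contributes the same amount phi to the failure rate,
# so the program failure rate is phi * (number of faults remaining).
phi = 0.01
rates_equal = np.full(N, phi)

# Model B: faults contribute unequal amounts, drawn from a skewed distribution.
rates_unequal = rng.gamma(shape=0.5, scale=0.02, size=N)

def residual_rate_after_fixes(rates, rng):
    """Simulate fault exposure: the fault with the smallest exponential
    'time to show itself' is detected and fixed first.  Return the total
    residual failure rate after each fix."""
    times = rng.exponential(1.0 / rates)   # time for each fault to reveal itself
    order = np.argsort(times)              # large-rate faults tend to come first
    return rates.sum() - np.cumsum(rates[order])

print("equal-rate model, residual rate after first 5 fixes:",
      np.round(residual_rate_after_fixes(rates_equal, rng)[:5], 4))
print("unequal-rate model, residual rate after first 5 fixes:",
      np.round(residual_rate_after_fixes(rates_unequal, rng)[:5], 4))
# Under the unequal-rate model the early fixes remove the largest
# contributions, so the residual failure rate drops much faster at the
# start -- the behaviour the abstract describes.
```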

222 citations


Proceedings ArticleDOI
Arnold Berman
07 Apr 1981
TL;DR: In this paper, the authors used a long-established feature of time dependent dielectric breakdown (TDDB) to predict the rate of breakdown failures in the field using a ramped voltage breakdown histogram of a sample population.
Abstract: Using a long-established feature of time dependent dielectric breakdown (TDDB) it is demonstrated that a ramped voltage breakdown histogram of a sample population can be used to accurately forecast the rate of breakdown failures in the field. It is shown that such a histogram can be interpreted as the field dependence of failure at constant time. The ramp-TDDB relationship involves no fitting parameters and only a single material-related parameter. The temperature dependence of this parameter is established for SiO2. Extensive ramp-life test measurements have verified the relationship experimentally. It is argued that the usual models used to relate laboratory life tests to reliability failures are inherently faulty. The faults stem from the temperature dependence and the distributions of failure times, both of which must be assumed in order to extrapolate accelerated life tests to use conditions. On the other hand, the actual distribution is measured in a ramp test and the temperature acceleration is not needed. This finding has far-reaching implications for reliability assessment. Dielectric life tests can be replaced by the relatively simple and rapid ramp test with increased confidence in projection. From the analysis it is shown that the effect on reliability of a high field screen can be quantitatively determined in an absolute manner.
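The paper's exact relationship is not reproduced here. As a rough sketch only, under the commonly assumed exponential field-acceleration model for time to breakdown (our assumption, not necessarily the relation used in the paper), a linear voltage ramp maps to an equivalent constant-field stress time as follows.

```latex
% Illustrative sketch, assuming t_BD(E) = t_0 exp(-gamma E) at constant field E
% and a linear ramp E(t) = R t.  Accumulating "damage" at the instantaneous
% field and equating it to unity at breakdown gives, for gamma R t_bd >> 1,
\[
  \int_0^{t_{bd}} \frac{dt}{t_{BD}(Rt)}
  = \frac{e^{\gamma R t_{bd}} - 1}{\gamma R\, t_0}
  \approx \frac{e^{\gamma E_{bd}}}{\gamma R\, t_0} = 1
  \quad\Longrightarrow\quad
  t_{BD}(E_{bd}) \approx \frac{1}{\gamma R}.
\]
% A device breaking down at field E_bd during a ramp of rate R thus behaves as
% if stressed at constant field E_bd for an effective time 1/(gamma R), with
% gamma the single material-related (field-acceleration) parameter.
```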

181 citations


Journal ArticleDOI
TL;DR: In this article, 17 measures of association for observer reliability (interobserver agreement) are reviewed and computational formulas are given in a common notational system, and an empirical comparison of 10 of these measures is made over a range of potential reliability check results.
Abstract: Seventeen measures of association for observer reliability (interobserver agreement) are reviewed and computational formulas are given in a common notational system. An empirical comparison of 10 of these measures is made over a range of potential reliability check results. The effects on percentage and correlational measures of occurrence frequency, error frequency, and error distribution are examined. The question of which is the “best” measure of interobserver agreement is discussed in terms of critical issues to be considered.
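As a small illustration of two of the kinds of measures such a comparison covers (the paper's specific 17 measures are not reproduced here), percentage agreement and Cohen's kappa can be computed from two observers' interval-by-interval records; the data below are invented.

```python
import numpy as np

# Hypothetical interval-by-interval records from two observers
# (1 = behaviour occurred in the interval, 0 = it did not).
obs_a = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1])
obs_b = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1])

# Simple percentage agreement: proportion of intervals scored identically.
percent_agreement = np.mean(obs_a == obs_b)

# Cohen's kappa: agreement corrected for the agreement expected by chance.
p_observed = percent_agreement
p_a1, p_b1 = obs_a.mean(), obs_b.mean()
p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"percentage agreement = {percent_agreement:.2f}")
print(f"Cohen's kappa        = {kappa:.2f}")
# Kappa is typically lower than raw agreement, illustrating how occurrence
# frequency and chance agreement affect percentage-based measures.
```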

148 citations


Journal Article
Anne Cutler
TL;DR: As discussed by the authors, the last decade has seen a large upsurge in interest in speech production processes, which have otherwise been accorded much less research attention than the processes of comprehension.
Abstract: Collecting speech errors is enjoyable. For instance, it can give the collector the feeling of doing some useful work while on holiday, at a dinner party, or watching a television interview. And speech error collections are valuable: in the last decade research based on slips of the tongue has provided one of the major components of a long-overdue upsurge in interest in speech production processes, which have otherwise been accorded much less research attention than the processes of comprehension.

136 citations


Journal ArticleDOI
TL;DR: In this article, differences in measure reliability are identified as a potential threat to conclusions drawn from cross-national marketing surveys, and between-sample reliability differences are examined in a five-country study.
Abstract: A potential threat to conclusions drawn from cross-national marketing surveys is that arising from differences in measure reliability. In a five-country study, between-sample reliability differenti...

135 citations


Journal ArticleDOI
TL;DR: Boundary value problems in ODE’s arising in various applications are frequently not in the “standard” form required by the currently existing software; however, many problems can be converted to such a form, thus enabling the practitioner to take advantage of the availability and reliability of this general purpose software.
Abstract: Boundary value problems in ODE’s arising in various applications are frequently not in the “standard” form required by the currently existing software. However, many problems can be converted to such a form, thus enabling the practitioner to take advantage of the availability and reliability of this general purpose software. Here, various conversion devices are surveyed.
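For instance, a second-order problem usually has to be rewritten as a first-order system before a general-purpose BVP code will accept it. A minimal sketch using SciPy's solve_bvp and a made-up test problem (y'' = -y, y(0) = 0, y(pi/2) = 1); the conversion device shown here is the simplest one, not specifically one of those surveyed in the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Original problem (not in "standard" first-order form):
#   y'' = -y,   y(0) = 0,   y(pi/2) = 1.
# Conversion: let u0 = y and u1 = y', giving the first-order system
#   u0' = u1,   u1' = -u0.

def rhs(x, u):
    return np.vstack([u[1], -u[0]])

def bc(ua, ub):
    # Boundary residuals: y(0) - 0 and y(pi/2) - 1.
    return np.array([ua[0], ub[0] - 1.0])

x = np.linspace(0.0, np.pi / 2, 11)
u_init = np.zeros((2, x.size))          # crude initial guess
sol = solve_bvp(rhs, bc, x, u_init)

print("converged:", sol.status == 0)
print("y(pi/4) =", sol.sol(np.pi / 4)[0])   # exact solution is sin(x), so about 0.707
```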

128 citations


Journal ArticleDOI
TL;DR: Functional assessment instruments used in rheumatology are critically evaluated against five basic criteria: whether they allow for quantification, whether they have been tested for reliability, validity, and precision, and whether the data collection procedures are specified.
Abstract: Functional assessment instruments used in rheumatology are critically evaluated against five basic criteria: whether they allow for quantification, whether they have been tested for reliability, validity, and precision, and whether the data collection procedures are specified. No one instrument completely fulfills all criteria. Few adequately address the issues of measurement reliability and validity; none have sufficient measurement precision to detect subtle but clinically significant changes in function. Issues to be addressed in future research are discussed.

Journal ArticleDOI
TL;DR: Under the framework of a stochastic point process of failures, this paper discusses basic ways to characterize reliability, and results pertaining to the appealing alternative of superimposed processes are reviewed.
Abstract: Under the framework of a stochastic point process of failures, this paper discusses basic ways to characterize reliability. The distinction between the failure rate of a process, useful for repairable systems, and the failure rate of a distribution, useful for nonrepairable systems, is drawn. The paper then concentrates on modeling the wearout characteristics of repairable system reliability. Neither the homogeneous Poisson nor the renewal processes will suffice for this purpose. The nonhomogeneous Poisson process is appealing as a general wearout model, but it too has nonintuitive features; for example, the distribution of first failure determines the entire process. This leads us to search for other alternatives and to consider the reliability characteristics of general point processes of failures. Results pertaining to the appealing alternative of superimposed processes are reviewed.
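A minimal sketch of the kind of wearout model discussed: a nonhomogeneous Poisson process with an increasing, power-law intensity, simulated by thinning. The parameter values below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Power-law intensity: lambda(t) = a * b * t**(b-1).
# b > 1 gives an increasing failure intensity -> wearout of a repairable system.
a, b = 0.1, 1.8
T_end = 20.0

def nhpp_thinning(intensity, lam_max, T_end, rng):
    """Simulate event times of a nonhomogeneous Poisson process on [0, T_end]
    by thinning a homogeneous process of rate lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T_end:
            return np.array(events)
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

intensity = lambda t: a * b * t ** (b - 1)
lam_max = intensity(T_end)           # intensity is increasing, so its maximum is at T_end
failures = nhpp_thinning(intensity, lam_max, T_end, rng)

print("number of failures:", failures.size)
print("interfailure times:", np.round(np.diff(failures), 2))
# With b > 1 the interfailure times tend to shrink over time -- wearout
# behaviour that neither a homogeneous Poisson process nor a renewal
# process can represent.
```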

Journal ArticleDOI
TL;DR: This paper is a review of literature related to system reliability evaluation techniques for small to large complex systems and the technique(s) the authors recommend for each system model are indicated.
Abstract: This paper is a review of literature related to system reliability evaluation techniques for small to large complex systems. The literature is classified according to system models and evaluation techniques. The technique(s) we recommend for each system model are indicated.
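As the simplest instance of the evaluation techniques such a review covers (this example is ours, not the paper's; the component reliabilities are arbitrary), the reliability of series and parallel structures of independent components:

```python
from math import prod

def series_reliability(rs):
    """All components must work: R = product of the R_i (independent components)."""
    return prod(rs)

def parallel_reliability(rs):
    """At least one component must work: R = 1 - product of (1 - R_i)."""
    return 1.0 - prod(1.0 - r for r in rs)

# Hypothetical series-parallel system: two redundant pumps in series with one valve.
pumps = [0.90, 0.90]     # parallel pair
valve = 0.99
system = series_reliability([parallel_reliability(pumps), valve])
print(f"system reliability = {system:.4f}")   # 0.99 * 0.99 = 0.9801
```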

Journal ArticleDOI
TL;DR: A large and significant difference was found between mean recalled and mean observed intakes for both kilocalories and protein, which coupled with a low but significant coefficient for reliability limits the usefulness of this dietary assessment tool in the age group studied.
Abstract: A large and significant difference was found between mean recalled and mean observed intakes of both kilocalories and protein in the two groups of children studied.


Journal ArticleDOI
TL;DR: In this paper, a general method for convolving discrete distributions using Fast Fourier Transforms is described, which can be used in evaluating reliability of any system involving discrete or discretised convolution and has been used in power system studies to deduce capacity-outage probability tables and to solve probabilistic load flows.
Abstract: This paper describes a general method for convolving discrete distributions using Fast Fourier Transforms. It can be used in evaluating reliability of any system involving discrete or discretised convolution. It has been used in power system studies to deduce capacity-outage probability tables and to solve probabilistic load flows. These studies have shown it to be much less time-consuming and more efficient than the conventional direct methods. The method is used in the paper to evaluate the loss of load probability of a generating system in order to demonstrate the method's application and inherent merits.
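A minimal sketch of the idea (not the paper's implementation): each generating unit's availability is a small discrete distribution over megawatts of capacity on outage, and the system capacity-outage probability table is the convolution of these distributions, done here with NumPy FFTs. The unit data and load are made up.

```python
import numpy as np

# Hypothetical generating units: (capacity in MW, forced outage rate).
# Outage distributions are expressed in 10 MW steps so they share one grid.
STEP = 10
units = [(50, 0.05), (50, 0.05), (100, 0.08)]
total_cap = sum(cap for cap, _ in units)
n_bins = total_cap // STEP + 1

def unit_outage_pmf(capacity, outage_rate):
    """Two-state unit: either fully available or entirely on outage."""
    pmf = np.zeros(n_bins)
    pmf[0] = 1.0 - outage_rate
    pmf[capacity // STEP] = outage_rate
    return pmf

def fft_convolve(p, q):
    """Linear convolution via FFT (zero-padded so no wrap-around occurs)."""
    n = len(p) + len(q) - 1
    out = np.fft.irfft(np.fft.rfft(p, n) * np.fft.rfft(q, n), n)
    return np.clip(out, 0.0, None)        # remove tiny negative round-off

# Capacity-outage probability table = convolution of all unit outage pmfs.
outage_pmf = np.array([1.0])
for cap, rate in units:
    outage_pmf = fft_convolve(outage_pmf, unit_outage_pmf(cap, rate))

# Loss-of-load probability for a constant 150 MW load.
load = 150
available = total_cap - STEP * np.arange(len(outage_pmf))
lolp = outage_pmf[available < load].sum()
print(f"LOLP at {load} MW load = {lolp:.4f}")
```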

Journal ArticleDOI
TL;DR: The validity and reliability studies on which it is possible to judge the value of this new test of the clinical competence of medical students when compared to the traditional approach are reported.
Abstract: In a previous study we described a problem-based criterion-referenced test of the clinical competence of medical students which was felt to offer advantages over the traditional final-year examination. This paper reports the validity and reliability studies on which it is possible to judge the value of this new test when compared to the traditional approach. The results demonstrate a high level of content validity and provide evidence of the construct validity of the test. Efforts to obtain measures of concurrent and predictive validity were thwarted by a failure to attain reliable assessments of ward performance from resident and consultant staff. Satisfactory levels of internal consistency were established for the whole test. Marker reliability was satisfactory in all sections of the test except for those requiring examiners to rate practical clinical skills. This was so despite the use of simulated patients, behavioural check-lists and rater training. Possible solutions to this problem are discussed. It is concluded that this new approach overcomes many of the measurement problems inherent in the traditional final examination. It has been shown to be feasible to construct and administer in the medical school setting without the need for the allocation of additional resources.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a test theory based on the classical test theory and methods for reliability and generalizability theory, which they call test theory-based test theory.
Abstract: Outline (as given): Introduction; Classical Test Theory and Methods (Reliability; Generalizability Theory).
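A one-line reminder of the classical test theory notion of reliability reviewed there (standard textbook form, not quoted from the article): the observed score is X = T + E with uncorrelated true score and error, and

```latex
% Classical test theory: X = T + E, with Cov(T, E) = 0.
\[
  \rho_{XX'} \;=\; \frac{\sigma_T^2}{\sigma_X^2}
            \;=\; \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2},
\]
% the proportion of observed-score variance attributable to true-score variance.
% Generalizability theory extends this by decomposing the error variance into
% multiple sources (raters, occasions, items) via variance components.
```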

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the differences in reservoir performance reliability obtained on the basis of long- and short-memory models fitted to the same historic streamflow record of a length typically encountered in reservoir design.
Abstract: The theoretical value of long-memory stochastic models lies in their ability to generate time series that resemble long historic records of some geophysical processes better than do short-memory models. Since the use of short-memory streamflow models has become standard practice in reservoir design, it is worth asking whether the suspected theoretical superiority of long-memory models justifies a change in this practice; in other words, whether their theoretical appeal can be translated into better design of storage reservoirs. This paper analyzes the problem from the point of view of the differences in reservoir performance reliability obtained on the basis of long- and short-memory models fitted to the same historic streamflow record of a length typically encountered in reservoir design. It appears that the differences in reliability resulting from the replacement of one model by the other (shorter memory yields higher reliability) are small compared both to (1) the accuracy of measurement of the socioeconomic impact of reliability changes and (2) the accuracy of estimating the reliability itself on the basis of available streamflow records and for economically relevant lengths of reservoir operation periods. Thus it is argued that given the socioeconomic and hydrologic data typically available for reservoir planning and design, the replacement of short-memory models with long-memory models in reservoir analyses cannot be objectively justified. It is suggested that for a long time to come the use of long-memory models will, in principle, remain equivalent to the use of a small safety factor in the intrinsically inaccurate estimate of reservoir performance reliability.
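A rough sketch of how one side of such a comparison could be set up: a short-memory (lag-one autoregressive) annual-flow generator feeding a simple storage balance. All parameter values are invented, and the long-memory counterpart is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_flows(n_years, mean, std, rho, rng):
    """Short-memory (AR(1)) synthetic annual streamflow."""
    flows = np.empty(n_years)
    flows[0] = mean
    for t in range(1, n_years):
        innovation = rng.normal(0.0, std * np.sqrt(1.0 - rho ** 2))
        flows[t] = mean + rho * (flows[t - 1] - mean) + innovation
    return np.clip(flows, 0.0, None)

def reservoir_reliability(flows, capacity, demand):
    """Fraction of years in which the full demand is met, using a simple mass balance."""
    storage, met = capacity / 2.0, 0
    for q in flows:
        storage = min(storage + q, capacity)      # add inflow, spill above capacity
        release = min(demand, storage)
        storage -= release
        met += release >= demand
    return met / len(flows)

flows = ar1_flows(n_years=1000, mean=100.0, std=30.0, rho=0.3, rng=rng)
print("reliability:", reservoir_reliability(flows, capacity=200.0, demand=90.0))
```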


Journal ArticleDOI
TL;DR: A simple, necessary and sufficient condition for the maximum likelihood estimates to be finite is presented and it is suggested that this condition be tested prior to using the model.
Abstract: A simple model for software reliability growth, originally suggested by Jelinski & Moranda, has been widely used but suffers from difficulties associated with parameter estimation. We show that a major reason for obtaining nonsensical results from the model is its application to data sets which exhibit decreasing reliability. We present a simple, necessary and sufficient condition for the maximum likelihood estimates to be finite and suggest that this condition be tested prior to using the model.
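The paper's closed-form condition is not reproduced here. As a numerical sketch of the same issue, the Jelinski-Moranda profile log-likelihood can be examined directly: if it is still climbing at a very large N, the maximum likelihood estimate of N is effectively infinite, which is what happens on data exhibiting decreasing reliability. The data sets and the cutoff N_max below are arbitrary.

```python
import numpy as np

def jm_profile_loglik(t, N):
    """Profile log-likelihood of the Jelinski-Moranda model for a given N
    (initial number of faults), with the rate parameter phi maximised out."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    remaining = N - np.arange(n)              # N, N-1, ..., N-n+1 faults left
    phi_hat = n / np.sum(remaining * t)
    return np.sum(np.log(phi_hat * remaining) - phi_hat * remaining * t)

def jm_mle_looks_finite(t, N_max=10_000):
    """Numerical heuristic (not the paper's closed-form condition): profile the
    log-likelihood over N and report whether its maximum is interior."""
    n = len(t)
    ll = [jm_profile_loglik(t, N) for N in range(n, N_max)]
    return int(np.argmax(ll)) + n, ll[-1] < max(ll)

# Interfailure times showing reliability growth (roughly increasing):
growth = [3, 5, 8, 7, 12, 20, 25, 40]
# Interfailure times showing decreasing reliability (roughly decreasing):
decay = [40, 25, 20, 12, 7, 8, 5, 3]

for name, data in [("growth", growth), ("decay", decay)]:
    N_hat, finite = jm_mle_looks_finite(data)
    print(f"{name}: estimated N = {N_hat}, MLE appears finite: {finite}")
```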

Journal ArticleDOI
TL;DR: A brief Technometrics entry on life testing and reliability estimation; see the citation below.
Abstract: (1981). Life Testing and Reliability Estimation. Technometrics: Vol. 23, No. 3, pp. 310-311.

Journal ArticleDOI
TL;DR: The technical and the interpersonal skills of resident physicians in four separate samples were examined with subjective performance evaluations from four different sources: attending physicians, peers, patients and the residents themselves.
Abstract: The technical and the interpersonal skills of resident physicians in four separate samples were examined with subjective performance evaluations from four different sources: attending physicians, peers, patients and the residents themselves. Residents were from programs in internal medicine, family practice and general surgery. The reliabilities of measures from all four sources were found to be substantial, suggesting the potential usefulness of these sources of physician evaluation. Ratings of technical and interpersonal skill were found to be highly intercorrelated within each source. Reasons for this high degree of overlap are discussed. Finally, the ratings from the four sources were found to be fairly independent, indicating that they provide relatively separate measures of physician performance. The implications of these findings for medical care, education and research are considered.

Journal ArticleDOI
TL;DR: No totally satisfactory method is yet available to assess zinc and copper status in the laboratory, although recognition of factors that can influence the results of laboratory tests can improve reliability.




Journal ArticleDOI
TL;DR: In this paper, a new analytical approach to the calculation of generating system reliability indices is presented, which makes it possible to relax idealizing assumptions and explicitly model the effects of operating considerations such as: (1) unit duty cycles reflecting load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; (2) start-up failures; (3) startup times; and (4) outage postponability.
Abstract: The paper presents a new analytical approach to the calculation of generating system reliability indices. The new approach makes it possible to relax idealizing assumptions and to explicitly model the effects of operating considerations such as: (1) unit duty cycles reflecting load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; (2) start-up failures; (3) start-up times; and (4) outage postponability. The models presented can also be used to consider the effects of basic energy limitations and to give production cost estimates.

Journal ArticleDOI
TL;DR: Simple statistical tools are given for comparing two PSTHs on a bin-by-bin basis and for judging the reliability of the columns of a PSTH.
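The paper's specific tools are not reproduced here. As a simple illustration of a bin-by-bin comparison, spike counts in corresponding bins of two PSTHs can be compared with an approximate two-sample Poisson z-test; the counts below are invented.

```python
import numpy as np
from scipy.stats import norm

# Invented spike counts per time bin, accumulated over n1 and n2 stimulus trials.
psth1 = np.array([12, 15, 30, 42, 38, 22, 14, 10])   # condition 1
psth2 = np.array([10, 14, 20, 25, 30, 24, 15, 11])   # condition 2
n1, n2 = 50, 50                                       # trials per condition

# Bin-by-bin two-sample Poisson comparison (normal approximation).
rate1, rate2 = psth1 / n1, psth2 / n2
se = np.sqrt(psth1 / n1**2 + psth2 / n2**2)
z = (rate1 - rate2) / se
p = 2 * norm.sf(np.abs(z))                            # two-sided p-value per bin

for i, (zi, pi) in enumerate(zip(z, p)):
    flag = "*" if pi < 0.05 / len(z) else " "         # crude Bonferroni correction
    print(f"bin {i}: z = {zi:+.2f}, p = {pi:.3f} {flag}")
```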