
Showing papers on "Reliability (statistics)" published in 1975


Journal ArticleDOI
TL;DR: In this article, interrater reliability and agreement measures for ordinal and interval scales are reviewed, and suggestions are made regarding their use in counseling psychology research.
Abstract: Indexes of interrater reliability and agreement are reviewed and suggestions are made regarding their use in counseling psychology research. The distinction between agreement and reliability is clarified and the relationships between these indexes and the level of measurement and type of replication are discussed. Indexes of interrater reliability appropriate for use with ordinal and interval scales are considered. The intraclass correlation as a measure of interrater reliability is discussed in terms of the treatment of between-raters variance and the appropriateness of reliability estimates based on composite or individual ratings. The advisability of optimal weighting schemes for calculating composite ratings is also considered. Measures of interrater agreement for ordinal and interval scales are described, as are measures of interrater agreement for data at the nominal level of measurement.

868 citations
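
As a concrete sketch of one index reviewed above, the Python below computes the two-way random-effects intraclass correlation for single ratings, ICC(2,1), from a targets-by-raters score matrix; the ratings and function name are illustrative, not taken from the paper.

import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects intraclass correlation, single rater: ICC(2,1).

    ratings: (n_targets, k_raters) array of interval-scale scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    target_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    # Mean squares from the two-way ANOVA decomposition.
    ms_targets = k * np.sum((target_means - grand) ** 2) / (n - 1)
    ms_raters = n * np.sum((rater_means - grand) ** 2) / (k - 1)
    resid = ratings - target_means[:, None] - rater_means[None, :] + grand
    ms_error = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    return (ms_targets - ms_error) / (
        ms_targets + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )

# Example: 5 clients each rated by the same 3 counselors (made-up scores).
scores = np.array([[7, 8, 7], [5, 5, 6], [9, 9, 8], [4, 5, 4], [6, 7, 6]])
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")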


Journal ArticleDOI
TL;DR: (First of Two Parts)
Abstract: SKILLED physicians examining a patient may disagree regarding the findings. Such disagreements reflect the imperfect reliability of clinical methods and data. A decade ago, Fletcher1 urged physicia...

621 citations




01 Dec 1975
TL;DR: In this paper, the authors consider the theoretical and practical implications of the nonhomogeneous Poisson process model for reliability, and give estimation, hypothesis testing, comparison, and goodness-of-fit procedures when the process has a Weibull intensity function.
Abstract: The reliability of a complex system that is repaired (but not replaced) upon failure will often depend on the system's chronological age. If only minimal repair is made, so that the intensity (instantaneous rate) of system failure is not disturbed, then a nonhomogeneous Poisson process may be used to model this age-dependent reliability. This paper considers the theoretical and practical implications of the nonhomogeneous Poisson process model for reliability, and gives estimation, hypothesis testing, comparison, and goodness-of-fit procedures when the process has a Weibull intensity function. Applications of the Weibull model in the field of reliability and in other areas are discussed.

478 citations
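
The Weibull-intensity (power law) process admits closed-form maximum-likelihood estimates. A minimal sketch for the time-truncated case, with invented failure data:

import numpy as np

def power_law_mle(failure_times, t_end):
    """MLE for the Weibull-intensity (power law) NHPP, time-truncated at t_end.

    Intensity u(t) = (beta/theta) * (t/theta)**(beta - 1); beta < 1 suggests
    reliability growth, beta > 1 deterioration with system age.
    """
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(t_end / t))
    theta = t_end / n ** (1.0 / beta)
    return beta, theta

# Example: cumulative failure times (hours) of one repaired system.
times = [33, 76, 145, 347, 555, 811, 1212, 1499]
beta, theta = power_law_mle(times, t_end=1600)
print(f"beta = {beta:.2f}, theta = {theta:.1f} h")
# Expected number of failures by age t: m(t) = (t/theta)**beta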


Journal ArticleDOI
TL;DR: It is suggested that including specified criteria in the next edition of APA's Diagnostic and Statistical Manual of Mental Disorders (DSM-III) would improve the reliability and validity of routine psychiatric diagnosis.
Abstract: The authors identify the differences in formal inclusion and exclusion criteria used to classify patient data into diagnoses as the largest source of diagnostic unreliability in psychiatry. They describe the efforts that have been made to reduce these differences, particularly the specified criteria approach to defining diagnostic categories, which was developed for research purposes. On the basis of studies showing that the use of specified criteria increases the reliability of diagnostic judgments, they suggest that including such criteria in the next edition of APA's Diagnostic and Statistical Manual of Mental Disorders (DSM-III) would improve the reliability and validity of routine psychiatric diagnosis.

388 citations
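
Reliability of nominal diagnostic judgments of this kind is conventionally indexed by chance-corrected agreement. A minimal sketch computing Cohen's kappa for two raters (the diagnoses are invented for illustration, not the paper's data):

import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two diagnosticians (nominal data)."""
    categories = sorted(set(labels_a) | set(labels_b))
    index = {c: i for i, c in enumerate(categories)}
    table = np.zeros((len(categories), len(categories)))
    for a, b in zip(labels_a, labels_b):
        table[index[a], index[b]] += 1
    table /= table.sum()
    p_observed = np.trace(table)                            # raw agreement
    p_chance = np.sum(table.sum(axis=0) * table.sum(axis=1))  # marginal chance
    return (p_observed - p_chance) / (1 - p_chance)

# Example: two clinicians diagnosing the same 10 patients.
dx1 = ["depression", "anxiety", "depression", "schizophrenia", "anxiety",
       "depression", "anxiety", "depression", "schizophrenia", "anxiety"]
dx2 = ["depression", "anxiety", "anxiety", "schizophrenia", "anxiety",
       "depression", "depression", "depression", "schizophrenia", "anxiety"]
print(f"kappa = {cohens_kappa(dx1, dx2):.2f}")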


Journal ArticleDOI
TL;DR: In this article, a simple method is given for calculating reliability characteristics of repairable and nonrepairable systems, and the importance of the individual system components, assuming independent component failures and constant failure and repair rates for the components.
Abstract: A simple method is given for calculating a) reliability characteristics of repairable and nonrepairable systems, and b) the importance of the individual system components. Assumptions made include independent component failures and constant failure and repair rates for the components. The methods can easily be implemented in a computer program that would be inexpensive to execute and would always overpredict (usually very slightly) the system failure characteristics.

321 citations
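
For intuition, the sketch below computes the kind of quantity such methods deliver: the steady-state availability of a series system of independent repairable components with constant failure and repair rates (all rates invented):

def component_availability(lam, mu):
    """Long-run fraction of time a component is up: mu / (lam + mu)."""
    return mu / (lam + mu)

components = [  # (failure rate per hour, repair rate per hour)
    (1e-4, 0.1),
    (5e-5, 0.05),
    (2e-4, 0.2),
]

avail = 1.0
for lam, mu in components:
    avail *= component_availability(lam, mu)

# For a series system the failure rate is, to first order, the sum of the
# component rates, which slightly overpredicts system failures -- consistent
# with the conservative behavior the paper describes.
lam_sys = sum(lam for lam, _ in components)
print(f"series availability  = {avail:.6f}")
print(f"approx. failure rate = {lam_sys:.2e} per hour")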



Journal Article
TL;DR: The most frequently used methods, such as spectrophotometry and fluorometry, for the analysis of drugs in biological fluids are compiled, and some results are compared to those found by other methods such as labelling and microbiology.
Abstract: The most frequently used methods, such as spectrophotometry, fluorometry, etc., for the analysis of drugs in biological fluids are compiled. The usefulness of quantitative chromatographic procedures is also mentioned. Reliability criteria of these assays are extensively discussed and some results are compared to those found by other methods like labelling and microbiology.

278 citations


Journal ArticleDOI
TL;DR: This correspondence describes a technique by which the reliability expression for such a system can be conveniently derived, and shows that all existing reliability-evaluation algorithms can be extended to communication systems with little effort.
Abstract: Very few techniques exist for reliability evaluation of communication systems where links as well as nodes have a certain probability of failure. This correspondence describes a technique by which the reliability expression for such a system can be conveniently derived. It is also shown that, using this concept, it is possible to extend all the existing reliability-evaluation algorithms to communication systems with little effort.

207 citations
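
A brute-force reference computation for the problem the correspondence addresses, with both links and intermediate nodes failing independently: enumerate all element states of a small example network and sum the probabilities of states in which the two terminals can still communicate. All probabilities are invented; this is a check, not the paper's derivation technique.

from itertools import product

nodes = {"a": 0.95, "b": 0.90}              # intermediate nodes: P(up)
links = {("s", "a"): 0.9, ("s", "b"): 0.8,  # links: P(up)
         ("a", "b"): 0.7, ("a", "t"): 0.8, ("b", "t"): 0.9}

def connected(up_nodes, up_links):
    """Is t reachable from s through working nodes and links?"""
    reach, frontier = {"s"}, ["s"]
    while frontier:
        u = frontier.pop()
        for (x, y), up in up_links.items():
            if not up:
                continue
            for a, b in ((x, y), (y, x)):
                if a == u and b not in reach and (b in ("s", "t") or up_nodes[b]):
                    reach.add(b)
                    frontier.append(b)
    return "t" in reach

reliability = 0.0
node_names, link_names = list(nodes), list(links)
for nstate in product([True, False], repeat=len(node_names)):
    for lstate in product([True, False], repeat=len(link_names)):
        p = 1.0
        for name, up in zip(node_names, nstate):
            p *= nodes[name] if up else 1 - nodes[name]
        for name, up in zip(link_names, lstate):
            p *= links[name] if up else 1 - links[name]
        if connected(dict(zip(node_names, nstate)), dict(zip(link_names, lstate))):
            reliability += p

print(f"P(s-t communication) = {reliability:.4f}")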


Journal ArticleDOI
Albert Endres1
TL;DR: Using a classification of the errors according to various attributes, conclusions can be drawn concerning the possible causes of the errors detected during internal testing of the operating system DOS/VS.
Abstract: Program errors detected during internal testing of the operating system DOS/VS form the basis for an investigation of error distributions in system programs. Using a classification of the errors according to various attributes, conclusions can be drawn concerning the possible causes of these errors. The information thus obtained is applied in a discussion of the most effective methods for the detection and prevention of errors.

Journal ArticleDOI
TL;DR: Male rats and mice showed good initial exposure reliability, whereas the female mouse groups differed significantly, and the animals exposed to the hole-board for two 10-min periods showed both significant habituation and test-retest reliability.
Abstract: Two aspects of the reliability of the hole-board apparatus were investigated: the similarity between scores of different samples of the same population on their first exposure to the apparatus, and the test-retest reliability. Rats and mice were given a 5-min exposure to the hole-board and then retested for 5 min after 1, 2 or 8 days. Male rats and mice showed good initial exposure reliability, whereas the female mouse groups differed significantly. All animals showed a positive test-retest correlation (range 0.31-0.78), but a homogeneous group (e.g. all animals habituating) produced higher correlations (range 0.60-0.99). Comparison of scores on the two 5-min exposures showed that not all groups showed significant habituation, but the animals exposed to the hole-board for two 10-min periods showed both significant habituation and test-retest reliability.
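
For the record, the test-retest index reported above is an ordinary product-moment correlation; a toy computation with made-up head-dipping counts:

import numpy as np

# Head-dipping counts for the same animals on first exposure and on retest
# (invented data, not the paper's).
first = np.array([42, 35, 51, 28, 39, 46, 31, 44])
retest = np.array([30, 26, 40, 24, 28, 38, 22, 35])

r = np.corrcoef(first, retest)[0, 1]
habituation = first.mean() - retest.mean()  # drop in scores on re-exposure
print(f"test-retest r = {r:.2f}, mean habituation = {habituation:.1f} dips")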


Journal ArticleDOI
TL;DR: The paper aims to justify the applicability of the gamma distribution to inventory control and to collect under one heading the properties of the distribution and associated formulae arising from inventory control theory.
Abstract: The paper aims to justify the applicability of the gamma distribution to inventory control and to collect under one heading the properties of the distribution and associated formulae arising from i...
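
A short sketch of the standard inventory use the paper defends, assuming SciPy is available: fit the gamma shape and scale from the mean and standard deviation of lead-time demand, then read reorder points off the quantile function (all parameter values invented):

from scipy.stats import gamma

mean_demand, sd_demand = 120.0, 45.0      # demand over the lead time

shape = (mean_demand / sd_demand) ** 2    # k = (mean/sd)^2
scale = sd_demand ** 2 / mean_demand      # theta = variance/mean

for service_level in (0.90, 0.95, 0.99):
    reorder_point = gamma.ppf(service_level, a=shape, scale=scale)
    print(f"P(no stockout) = {service_level:.0%}: reorder at {reorder_point:.0f} units")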

Journal ArticleDOI
TL;DR: This article reviews the literature on teacher-bias effects in the classroom that has grown up since Pygmalion, including popular-press articles interpreting its data to imply that children's school performance could be improved simply by making the teacher think better of the child's ability.
Abstract: The publication of Pygmalion in the Classroom by Rosenthal and Jacobson (1968) stirred heated professional controversy and considerable public interest in the notion that teachers' expectations regarding a child's ability influence the child's classroom learning and test performance. Indeed, a number of articles have appeared in the popular press in which the data in Pygmalion were interpreted to imply that children's school performance could be improved simply by making the teacher think better of the child's ability (e.g., see Yunker, 1970). At the same time serious doubts about the reliability and validity of Rosenthal and Jacobson's findings were being raised in the professional literature (e.g., Snow, 1969; Thorndike, 1968, 1969). Since the publication of Pygmalion, the professional literature dealing directly with the issue of teacher-bias effects has grown rapidly (see Baker & Crist, 1971). This type of research is concerned with the possibility that teachers might intentionally or unintentionally suppress the learning of some students simply because they subjectively feel these students are not capable of grasping certain material as quickly or as well as most students. The term "bias" is used to describe this phenomenon when objective measures do not indicate differences in learning potential between the students expected to do poorly and the remainder of the class. The major purpose of this paper is to review the...

Journal ArticleDOI
TL;DR: The reliability expression involves fewer terms and arithmetic operations than any of the existing methods, considering the size of the system a reliability engineer normally handles and the frequency with which the expression is used for reliability studies.
Abstract: An algorithm is developed to obtain a simplified reliability expression for a general network. All the success paths of the network are determined; then they are modified to be mutually disjoint. The reliability expression follows directly from the disjoint paths. The algorithm is easy and computationally economical. The reliability expression involves fewer terms and arithmetic operations than any of the existing methods. This is an advantage, considering the size of the system a reliability engineer normally handles and the frequency with which the expression is used for reliability studies.
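
As a reference point, the brute-force computation below evaluates the classic five-component bridge network from its minimal success paths; the paper's algorithm reaches the same number from far fewer, mutually disjoint terms (component reliabilities invented):

from itertools import product

p = {"1": 0.9, "2": 0.8, "3": 0.7, "4": 0.9, "5": 0.6}  # component reliabilities
paths = [{"1", "4"}, {"2", "5"}, {"1", "3", "5"}, {"2", "3", "4"}]  # success paths

reliability = 0.0
for state in product([True, False], repeat=len(p)):
    up = {c for c, ok in zip(p, state) if ok}
    if any(path <= up for path in paths):   # some path fully working?
        prob = 1.0
        for c, ok in zip(p, state):
            prob *= p[c] if ok else 1 - p[c]
        reliability += prob

print(f"bridge network reliability = {reliability:.4f}")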

Journal ArticleDOI
TL;DR: In this article, a cost/benefit argument is presented for reducing the 1-day-in-10-year loss of load probability target reliability planning criterion to at least a 5-day in 10-year level.
Abstract: Providing excess electrical generation capacity for reliability purposes has an economic cost; it is also true that higher reliability adds to the value of electric service. After some point, however, the additional benefits do not warrant the additional cost. In this paper we examine the considerations that should determine sensible reliability levels for electric generation systems. We construct a cost/benefit argument which suggests -- subject to various provisos that we make -- that the present "1-day-in-10-year" loss of load probability target reliability planning criterion may be uneconomically high and that these targets might reasonably be reduced to at least a "5-day-in-10-year" level.
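
The loss-of-load arithmetic behind such planning criteria can be sketched directly: enumerate forced-outage states of the generating units and accumulate, for each day's peak load, the probability that available capacity falls short (unit sizes, forced-outage rates, and peaks below are invented):

from itertools import product

units = [(200, 0.05), (200, 0.05), (300, 0.08), (100, 0.03)]  # (MW, forced-outage rate)
daily_peaks = [550, 600, 620, 580, 500]  # MW, a tiny sample of daily peaks

lole_days = 0.0
for peak in daily_peaks:
    # Probability that available capacity falls short of this day's peak.
    p_short = 0.0
    for state in product([True, False], repeat=len(units)):
        cap = sum(mw for (mw, _), up in zip(units, state) if up)
        if cap < peak:
            prob = 1.0
            for (_, q), up in zip(units, state):
                prob *= (1 - q) if up else q
            p_short += prob
    lole_days += p_short

print(f"expected loss-of-load: {lole_days:.3f} of {len(daily_peaks)} days")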

Journal ArticleDOI
TL;DR: The smear is a poor screening technique in a population where the prevalence of tuberculosis is low; patients in the true-positive group had clinical and radiological evidence to support the diagnosis of tuberculosis, while those in the false-positive group had no findings of the disease.
Abstract: The ability of any screening test to correctly identify diseased patients is directly related to the prevalence of the disease in question. The continuing use of smears for the detection o...
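
The abstract's point is Bayes' rule: at fixed sensitivity and specificity, the predictive value of a positive smear collapses as prevalence falls. A sketch with assumed (illustrative) operating characteristics:

# Sensitivity and specificity here are invented for illustration, not the
# paper's reported values.
sensitivity, specificity = 0.6, 0.98

for prevalence in (0.05, 0.01, 0.001):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)   # positive predictive value
    print(f"prevalence {prevalence:.1%}: P(TB | positive smear) = {ppv:.1%}")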

Journal ArticleDOI
TL;DR: This paper presents a formulation of a novel methodology for evaluation of testing in support of operational reliability assessment and prediction that features an incremental evaluation of the representativeness of a set of development and validation test cases together with definition of additional test cases to enhance those qualities.
Abstract: This paper presents a formulation of a novel methodology for evaluation of testing in support of operational reliability assessment and prediction. The methodology features an incremental evaluation of the representativeness of a set of development and validation test cases, together with definition of additional test cases to enhance those qualities. If test cases are derived in typical fashion (i.e., to find and remove bugs, to investigate software performance under off-nominal conditions, to exercise structural elements and functional capabilities of the software, and to demonstrate satisfaction of software requirements), then the complete set of test cases is not necessarily representative of anticipated operational usage. The paper reports on initial research into formulation of valid measures of testing representativeness. Several techniques which permit specification of expected operational usage are described, and a technique for evaluating the correlation between actual testing accomplished and expected operational usage is defined. An unbiased estimator for operational usage reliability is proposed and justified as a function of a specified operational profile; confidence in the estimate is derived from a measure of the degree to which testing is representative of expected operational application. An experimental application of the techniques to a small program is provided as an illustration of the proposed use of the methodology for operational software reliability estimation. The relationship between structural exercise testing thoroughness and operational usage representativeness is discussed; the specification of a quantified reliability requirement and an explicit, required representativeness measure (or confidence) is identified as integral to effective application of the proposed reliability testing methodology; efforts to extend, formalize and generalize the methodology are described; and expected benefits, as well as potential problems and limitations, are identified.
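
A condensed sketch of the central idea under invented names: weight per-input-class test outcomes by an operational profile so the reliability estimate reflects expected usage, and measure how far testing effort departs from that profile (classes, probabilities, and counts are all assumptions, not the paper's data):

operational_profile = {"nominal": 0.80, "boundary": 0.15, "error_path": 0.05}
test_results = {  # (runs executed, runs that succeeded) per input class
    "nominal": (50, 50),
    "boundary": (120, 114),   # heavily tested relative to its usage share
    "error_path": (30, 27),
}

# Usage-weighted reliability estimate.
reliability = sum(
    p * (ok / runs)
    for cls, p in operational_profile.items()
    for runs, ok in [test_results[cls]]
)

# Representativeness: total-variation distance between the testing effort
# actually expended and the expected operational profile.
total_runs = sum(runs for runs, _ in test_results.values())
test_profile = {c: runs / total_runs for c, (runs, _) in test_results.items()}
divergence = sum(abs(operational_profile[c] - test_profile[c])
                 for c in operational_profile) / 2

print(f"usage-weighted reliability = {reliability:.3f}")
print(f"profile divergence         = {divergence:.2f} (0 = fully representative)")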

Journal Article
TL;DR: "A strong tradition of research, of rigorous testing of methods, treatments, and ideas, does not exist in the criminological disciplines."
Abstract: "A strong tradition of research, of rigorous testing of methods, treatments, and ideas, does not exist in the criminological disciplines." This may be unfair to some workers in the field but it has to be admitted that there is all too little guidance to legislators regarding sentencing. Certainly some of the principles involved in punishing behaviour set out by Singer require investigation "in the field" and not in terms of "ivory tower argument". It is more likely than not that a simple increasing of the severity of penalties will make no significant difference to the incidence of the offences they are supposed to control. Whitaker puts the matter very well when he suggests that it is a delusion...

Journal ArticleDOI
TL;DR: In this paper, many different techniques for reliability evaluation of general systems are presented, and the merits and demerits of each method are discussed.

Journal ArticleDOI
TL;DR: A Monte Carlo simulation procedure is presented to estimate the reliability of a complex system with relative ease, and all minimal tie-sets from the system configuration are provided as a coded reliability flow graph.
Abstract: The reliability of a system can be found analytically, given the time-to-failure distribution for each element and the system configuration. Such analysis becomes increasingly difficult as the complexity of a system increases. This paper presents a Monte Carlo simulation procedure to estimate the reliability of a complex system with relative ease. A computer program, written in FORTRAN IV G Level code for an IBM 360/65 computer, finds all minimal tie-sets from the system configuration, which is provided as a coded reliability flow graph. Each replication in the simulation involves a search through the minimal tie-sets to identify the success or failure of the system for each value of the required time of satisfactory performance. The reliability of the system is then estimated as a tabulated function of time.
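
A minimal Python rendering of the simulation loop described above (the original is FORTRAN), using the usual bridge network's minimal tie-sets and assumed exponential element lifetimes:

import random

random.seed(1)
rates = {"1": 1e-3, "2": 1e-3, "3": 5e-4, "4": 1e-3, "5": 2e-3}  # failures/hour
tie_sets = [{"1", "4"}, {"2", "5"}, {"1", "3", "5"}, {"2", "3", "4"}]
mission_times = [100, 250, 500, 1000]  # hours of required performance
n_reps = 20000

survivals = {t: 0 for t in mission_times}
for _ in range(n_reps):
    # Draw one lifetime per element, then search the minimal tie-sets: the
    # system works at time t if every element of some tie-set outlives t.
    life = {c: random.expovariate(lam) for c, lam in rates.items()}
    for t in mission_times:
        if any(all(life[c] > t for c in ts) for ts in tie_sets):
            survivals[t] += 1

for t in mission_times:
    print(f"R({t:>4} h) ~ {survivals[t] / n_reps:.3f}")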


Journal ArticleDOI
TL;DR: This paper discusses the design of programming languages to enhance reliability by presenting several general design principles, and then applying them to particular language constructs.
Abstract: The language in which programs are written can have a substantial effect on their reliability. This paper discusses the design of programming languages to enhance reliability. It presents several general design principles, and then applies them to particular language constructs. Since the validity of such design principles cannot be logically proved, empirical evidence is needed to support or discredit them. A major experiment to measure the effect of nine specific language-design decisions in one context has been performed. Analysis of the frequency and persistence of errors shows that several decisions had a significant impact on reliability.

Journal ArticleDOI
TL;DR: It is argued that reliability and correctness are not synonyms and the differences suggest techniques by which the reliability of software can be improved even while the production of correct software remains beyond the authors' reach.
Abstract: This paper assumes software structure to be characterized by the interfaces between subsystems or modules. Reliability is considered to be a measure of the extent to which the system can be expected to deliver usable services when those services are demanded. It is argued that reliability and correctness (in the sense used in current computer literature) are not synonyms. The differences suggest techniques by which the reliability of software can be improved even while the production of correct software remains beyond our reach. In general, the techniques involve considering certain unpleasant facts of life at an early stage in the design process, the stage where the structure is determined, rather than later. An appendix gives some specific examples of questions which, if they are thoughtfully considered early in the design, can lead to more reliable systems.


Journal ArticleDOI
TL;DR: In this article, a linear flow network is used to model the transmission interconnections and an efficient graph theory algorithm is applied to find critical minimal cuts in the network to determine the probability of failure.
Abstract: This paper presents the results of an investigation of a technique for the evaluation of the reliability of supplying power in a system with a number of interconnected load-generation areas. There is no restriction as to how the areas may be interconnected. Most previous techniques using analytical methods (as opposed to Monte Carlo simulations) have been limited to systems with a maximum of three interconnected areas. Systems with more areas have been analyzed assuming that the interconnecting electrical network did not contain any loops. The application of straightforward enumerative methods to systems with more complex interconnections than these can result in an improbably large number of computations. The method of analysis described in this paper is based upon the use of a linear flow network to model the transmission interconnections and makes use of an efficient graph theory algorithm to segregate the failure states by finding critical minimal cuts in the network. The probabilities of failure to supply the various loads are computed by evaluating the various combined event probabilities associated with these critical minimal cuts. For the cases tested, the technique reduces the number of probability evaluations required by about one to two orders of magnitude in comparison with complete state enumeration methods. The tested method provides reliability measures (i.e., the probability of failure to meet the load) for each individual area and the total system, and also allows the computation of the probability that each "link" (transmission line or source) is a member of a critical minimal cut. The latter will facilitate the application of the method to the design of systems and specifically to the problem of evaluating the reliability benefits of increased transmission capacity versus added generation.
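
A toy two-area version conveys the structure of the computation: enumerate unit outage states, let a surplus area assist a deficient one up to the tie-line limit, and accumulate each area's probability of failing to meet its load. All capacities, rates, and loads are invented, and larger systems need the minimal-cut machinery described above.

from itertools import product

areas = {
    "A": {"units": [(150, 0.05), (150, 0.05)], "load": 200},
    "B": {"units": [(250, 0.08), (100, 0.04)], "load": 220},
}
tie_limit = 80  # MW transfer capability in either direction

def p_fail():
    fail = {"A": 0.0, "B": 0.0}
    all_units = [(a, u) for a in areas for u in areas[a]["units"]]
    for state in product([True, False], repeat=len(all_units)):
        prob, cap = 1.0, {"A": 0.0, "B": 0.0}
        for (area, (mw, q)), up in zip(all_units, state):
            prob *= (1 - q) if up else q
            cap[area] += mw if up else 0
        surplus = {a: cap[a] - areas[a]["load"] for a in areas}
        for a, other in (("A", "B"), ("B", "A")):
            # Assistance is limited by the helper's surplus and the tie line.
            aid = min(tie_limit, max(surplus[other], 0))
            if surplus[a] + aid < 0:
                fail[a] += prob
    return fail

for area, p in p_fail().items():
    print(f"P(area {area} fails to meet load) = {p:.4f}")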

Journal ArticleDOI
TL;DR: In this article, the authors describe state of the art on optimum design of systems based on reliability effectiveness, and describe a general design philosophy and methodology that may be found useful for designing reliable systems.

Journal ArticleDOI
TL;DR: Some of the conceptual and methodological tools which are available for the solution of the problems of achieving data reliability, including the concept of type, direct product, union, sequence, recursion and mapping are outlined.
Abstract: This paper surveys the problems of achieving data reliability, and finds them more severe than those of program reliability. It then outlines some of the conceptual and methodological tools which are available for the solution of these problems, including the concepts of type, direct product, union, sequence, recursion and mapping. It touches on the top-down design of data and programs, and argues that references or pointers are to be avoided. It concludes with an annotated bibliography for further reading.
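
The structuring concepts listed above map directly onto modern type notation; the sketch below renders them in Python type hints, an anachronistic illustration rather than the paper's own notation: direct product as a record, union as alternatives, sequence as a list, recursion via a self-referential type, and mapping as a dictionary, with no raw pointers in sight.

from dataclasses import dataclass
from typing import Union

@dataclass
class Point:            # direct product: a Point IS an x AND a y
    x: float
    y: float

@dataclass
class Circle:
    center: Point
    radius: float

@dataclass
class Group:            # recursion: a figure can contain figures
    members: list["Figure"]   # sequence of figures

Figure = Union[Circle, Group]     # union: a figure is a circle OR a group

drawing: dict[str, Figure] = {    # mapping: names to figures
    "dot": Circle(Point(0.0, 0.0), 1.0),
    "pair": Group([Circle(Point(1.0, 1.0), 2.0), Circle(Point(4.0, 0.0), 2.0)]),
}
print(drawing["pair"])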

Journal ArticleDOI
TL;DR: In this article, the relative sensitivity of almost all decision variables and performance measures to mean lead time is shown to be solely determined by the coefficient of variation of demand; implications are discussed and numerical illustrations provided.
Abstract: In the (Q, R) inventory model with variable lead time, the relative sensitivity of almost all decision variables and performance measures to mean lead time, unlike that to the variance of lead time, is solely determined by the coefficient of variation of demand. The implication of the results is discussed and numerical illustrations are provided.
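
A numerical illustration of the sensitivity claim under a common normal approximation for lead-time demand (all parameters invented, and lead-time variability omitted for simplicity): scaling demand at a fixed coefficient of variation leaves the relative response of the reorder point to mean lead time unchanged, while changing the coefficient of variation alters it.

import math

z = 1.645  # 95% cycle service level

def reorder_point(d, sd_d, lead):
    """R = expected lead-time demand + safety stock (normal approximation)."""
    return d * lead + z * sd_d * math.sqrt(lead)

for d, cv in [(100, 0.2), (500, 0.2), (100, 0.5)]:
    r1 = reorder_point(d, cv * d, lead=4.0)
    r2 = reorder_point(d, cv * d, lead=4.4)  # +10% mean lead time
    print(f"d = {d:>3}, cv = {cv}: +10% lead time -> R rises {100*(r2/r1-1):.2f}%")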