
Showing papers on "Reliability (statistics)" published in 1980


Book
01 Jan 1980
TL;DR: A textbook on educational research methods in which each chapter concludes with a "Summary," "Key Concepts," "Exercises," "Notes," and "References."
Abstract: Each chapter concludes with a "Summary," "Key Concepts," "Exercises," "Notes," and "References."
Preface. Acknowledgments.
1. Educational Research: Its Nature and Characteristics. Introduction. The Nature of Educational Research. Classification of Educational Research. The Role of Theory. The Activities of the Research Process.
2. Identification of a Research Problem. Selection of a Research Problem. Statement of the Research Problem.
3. The Review of the Literature. The Activities of the Review of the Literature. Sources of Information. Computer Searches. Selecting Studies for the Review of the Literature. Assembling and Summarizing Information. Interpreting and Using Information.
4. Research Design in Quantitative Research. The Purposes of Research Design. The Concept of Controlling Variance. Characteristics of Good Research Design.
5. Experimental Research. The Meaning of Experimental Design. Criteria for a Well-Designed Experiment. Posttest-Only Control Group Design. Pretest-Posttest Control Group Design. Solomon Four-Group Design. Factorial Designs. Repeated Measures Designs. Designs Extended in Time. Interpreting Results of Experiments. Randomness and Representativeness.
6. Quasi-Experimental Research. The Problems of Validity. Posttest-Only, Nonequivalent Control Group Design. Pretest-Posttest, Nonequivalent Control Group Design. Time Series Designs. Single-Subject Designs. Action Research and Quasi-Experimental Research.
7. Survey Research. Survey Research: Its Scope and Description. Survey Designs. The Methodology of Survey Research. Questionnaire Surveys. Interview Surveys. Other Surveys. Analyzing and Reporting Survey Results.
8. Research Design in Qualitative Research. The Epistemology of Qualitative Research. Components of Research Design. Types of Designs in Qualitative Research. Perspectives for Qualitative Research. Reliability and Validity of Qualitative Research. Use of Computers in Qualitative Research.
9. Historical Research. The Value of Historical Research. Sources of Information in Historical Research. The Methodology of Historical Research. Quantitative Methods in Historical Research. Comments on the Reporting of Historical Research.
10. Ethnographic Research. The Nature of Ethnography in Education. A Conceptual Schema for Ethnographic Research. The Process of Ethnographic Research. Examples of Ethnographic Research in Education. The Reliability and Validity of Ethnographic Research. The Role of Ethnographic Research.
11. Sampling Designs. The Concept of a Random Sample. Criteria for a Sampling Design. Stratified Random Sampling. Cluster Sampling. Systematic Sampling. Considerations in Determining Sample Size - Random Sampling. Purposeful Sampling.
12. Measurement and Data Collection. Concepts of Measurement. The Variables Measured in Educational Research. Tests and Inventories Used for Measurement. Measures Involving Holistic Scoring. Where to Find Test Information. Scoring and Data Preparation.
13. Data Analysis: Descriptive Statistics. The Multiple Meanings of Statistics. Distributions. Correlation - A Measure of Relationship. Data Analysis by Computer.
14. Data Analysis: Inferential Statistics. Context for Using Inferential Statistics. Testing Hypotheses and Estimating Parameters. Inferences from Statistics to Parameters: A Review. Parametric Analyses. Nonparametric Analysis. Correlational Analyses. Selecting an Appropriate Statistical Analysis. Comments about Statistical Analysis. Meta-Analysis.
15. Communicating about Research. Major Sections of the Research Proposal. Major Sections of the Research Report. Other Sections of the Research Report. Putting a Report Together. Guidelines for Presenting Papers at Meetings. Presentations to Dissertation and Thesis Committees.
16. Evaluating Research Reports. Types of Errors and Shortcomings in Reports. Critiquing Major Sections of a Research Report. Overall Impressions When Evaluating a Report. The Evaluation of Proposals.
Appendix 1: Conducting a CD-ROM Search Using SilverPlatter. Appendix 2: Ethical and Legal Considerations in Conducting Research. Appendix 3: Solutions to Exercises.
Appendix 4: Tables. Table A. Ordinates and Areas of the Normal Curve. Table B. Critical Values of t. Table C. Upper Percentage Points of the χ2 Distribution. Table D. Upper Percentage Points of the F-Distribution. Table E. Critical Values of the Correlation Coefficient.
Glossary of Research Methods Terms. Name Index. Subject Index. Disk Instructions.

2,036 citations


Journal ArticleDOI
Meyer
TL;DR: A hierarchical modeling scheme is used to formulate the capability function, and capability is used, in turn, to evaluate performability; the techniques are illustrated for a specific application: the performability evaluation of an aircraft computer in the environment of an air transport mission.
Abstract: If the performance of a computing system is "degradable," performance and reliability issues must be dealt with simultaneously in the process of evaluating system effectiveness. For this purpose, a unified measure, called "performability," is introduced and the foundations of performability modeling and evaluation are established. A critical step in the modeling process is the introduction of a "capability function" which relates low-level system behavior to user-oriented performance levels. A hierarchical modeling scheme is used to formulate the capability function and capability is used, in turn, to evaluate performability. These techniques are then illustrated for a specific application: the performability evaluation of an aircraft computer in the environment of an air transport mission.

760 citations
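
The capability-function idea can be made concrete with a toy calculation. The sketch below is a minimal illustration, not the paper's aircraft model: it assumes a hypothetical three-state degradable system, a two-phase mission, and a capability function that maps each state trajectory to an accomplishment level; performability is then the probability of attaining each level.

```python
# Toy performability calculation (illustrative; not the paper's model).
# The system starts UP and evolves over two mission phases; a capability
# function maps each state trajectory to an accomplishment level.
from itertools import product

states = ["UP", "DEGRADED", "FAILED"]
# Assumed per-phase transition probabilities; FAILED is absorbing.
P = {
    "UP":       {"UP": 0.90, "DEGRADED": 0.08, "FAILED": 0.02},
    "DEGRADED": {"UP": 0.00, "DEGRADED": 0.95, "FAILED": 0.05},
    "FAILED":   {"UP": 0.00, "DEGRADED": 0.00, "FAILED": 1.00},
}

def capability(trajectory):
    # Hypothetical capability function: the worst state visited determines
    # the accomplishment level of the whole mission.
    if "FAILED" in trajectory:
        return "mission abort"
    if "DEGRADED" in trajectory:
        return "degraded service"
    return "full service"

performability = {}
for traj in product(states, repeat=2):              # two mission phases
    prob = P["UP"][traj[0]] * P[traj[0]][traj[1]]   # start in UP
    level = capability(("UP",) + traj)
    performability[level] = performability.get(level, 0.0) + prob

print(performability)   # distribution over accomplishment levels; sums to 1
```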


Journal ArticleDOI
TL;DR: In this article, a stochastic finite-element method is proposed as a means to analyze and design structures in a probabilistic framework. The method is applied to structures discretized with the finite-element methodology, and the probability of failure is estimated, using known and established procedures of second-moment reliability analysis, with the aid of a transformation to Gaussian space of the random variables that define structural reliability.

648 citations
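
As a minimal illustration of the second-moment reliability analysis the TL;DR refers to (not the article's stochastic finite-element formulation), the sketch below computes the reliability index and failure probability for a linear limit state g = R - S with assumed Gaussian resistance and load statistics.

```python
# First-order second-moment reliability for a linear limit state g = R - S
# (illustrative assumptions). With independent Gaussian R and S:
#   beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2),  Pf = Phi(-beta).
from math import sqrt, erf

def std_normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu_R, sigma_R = 300.0, 30.0   # assumed resistance mean and std dev
mu_S, sigma_S = 200.0, 40.0   # assumed load mean and std dev

beta = (mu_R - mu_S) / sqrt(sigma_R ** 2 + sigma_S ** 2)
print(f"beta = {beta:.2f}, Pf = {std_normal_cdf(-beta):.4f}")   # 2.00, 0.0228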


Journal ArticleDOI
R.C. Cheung
TL;DR: A user-oriented software reliability figure of merit is defined to measure the reliability of a software system with respect to a user environment and the effects of the user profile, which summarizes the characteristics of the users of a system, on system reliability are discussed.
Abstract: A user-oriented reliability model has been developed to measure the reliability of service that a system provides to a user community. It has been observed that in many systems, especially software systems, reliable service can be provided to a user when it is known that errors exist, provided that the service requested does not utilize the defective parts. The reliability of service, therefore, depends both on the reliability of the components and the probabilistic distribution of the utilization of the components to provide the service. In this paper, a user-oriented software reliability figure of merit is defined to measure the reliability of a software system with respect to a user environment. The effects of the user profile, which summarizes the characteristics of the users of a system, on system reliability are discussed. A simple Markov model is formulated to determine the reliability of a software system based on the reliability of each individual module and the measured intermodular transition probabilities as the user profile. Sensitivity analysis techniques are developed to determine modules most critical to system reliability. The applications of this model to develop cost-effective testing strategies and to determine the expected penalty cost of failures are also discussed. Some future refinements and extensions of the model are presented.

505 citations
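
A minimal sketch of the Markov computation described in the abstract follows the structure of Cheung's formulation; the module count, module reliabilities, and user-profile transition probabilities below are illustrative assumptions.

```python
# Sketch of the user-oriented Markov model. With Q[i][j] = R[i] * p[i][j]
# and module 2 the exit module, the expected visit counts are
# S = (I - Q)^-1 and system reliability is R_sys = S[0][2] * R[2]
# for entry at module 0.
import numpy as np

R = np.array([0.999, 0.995, 0.990])      # assumed module reliabilities
p = np.array([[0.0, 0.7, 0.3],           # assumed intermodular transition
              [0.2, 0.0, 0.8],           # probabilities (user profile)
              [0.0, 0.0, 0.0]])          # module 2 exits the system

Q = R[:, None] * p
S = np.linalg.inv(np.eye(3) - Q)
R_sys = S[0, 2] * R[2]
print(f"system reliability = {R_sys:.4f}")

# Simple sensitivity analysis: which module's reliability matters most?
for i in range(3):
    Rp = R.copy()
    Rp[i] -= 0.001                        # perturb module i downward
    Sp = np.linalg.inv(np.eye(3) - Rp[:, None] * p)
    print(f"module {i}: dR_sys = {Sp[0, 2] * Rp[2] - R_sys:+.5f}")
```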


Journal ArticleDOI
TL;DR: In this paper, sufficient conditions are obtained ensuring that a lifetime density has a bathtub-shaped failure rate; analogous conditions handle increasing, decreasing, and upside-down bathtub-shaped failure rates.
Abstract: Sufficient conditions are obtained that ensure that a lifetime density has a bathtub-shaped failure rate. Analogous conditions handle increasing, decreasing, and upside-down bathtub-shaped failure rates. Application of these results to exponential families of densities is particularly straightforward and effective. Examples are furnished that introduce new bathtub models and illustrate the use of the general results for existing models. Examples involving mixtures are also considered.

450 citations
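
The paper's conditions are analytic; as a numerical companion, the sketch below estimates the failure rate h(t) = f(t)/(1 - F(t)) of an assumed two-component Weibull mixture on a grid and locates the interior minimum that marks a bathtub shape on that window.

```python
# Numerical check for a bathtub-shaped failure rate (assumed two-component
# Weibull mixture; not the paper's analytic conditions).
import numpy as np

def weibull_pdf(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-((t / scale) ** shape))

def weibull_sf(t, shape, scale):
    return np.exp(-((t / scale) ** shape))

t = np.linspace(0.01, 3.5, 500)
w = 0.3                                   # assumed mixing weight
f = w * weibull_pdf(t, 0.5, 1.0) + (1 - w) * weibull_pdf(t, 3.0, 3.0)
S = w * weibull_sf(t, 0.5, 1.0) + (1 - w) * weibull_sf(t, 3.0, 3.0)
h = f / S                                 # failure (hazard) rate

i_min = int(np.argmin(h))                 # interior minimum => bathtub shape
print(f"hazard falls until t ~ {t[i_min]:.2f}, then rises on this window")
```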


Book
30 Apr 1980
TL;DR: This book introduces measurement and factor analysis, treats reliability and validity and the evaluation of systematic error, and shows how reliability and validity can be integrated, with an appendix on multiple indicators.
Abstract: 1. Introduction to measurement. 2. Factor analysis. 3. Reliability. 4. Validity. 5. Evaluating systematic error. 6. Integrating reliability and validity. Appendix: Multiple indicators. Bibliography. Index.

363 citations



Journal ArticleDOI
TL;DR: In this paper, a new instrument for measuring two dimensions of perceived usefulness is developed and the results of an empirical study designed to test the reliability and construct validity of this instrument in a capital-budgeting setting are presented.
Abstract: The perceived usefulness of information is an important construct for the design of management information systems. Yet an examination of existing measures of perceived usefulness shows that the instruments developed have not been validated nor has their reliability been verified. In this paper a new instrument for measuring two dimensions of perceived usefulness is developed. The results of an empirical study designed to test the reliability and construct validity of this instrument in a capital-budgeting setting are presented.

292 citations


Journal Article
TL;DR: The findings suggest that it is feasible to quantify level of function using self-report methods, and the Functional Status Index is recommended for use in investigations where changes in functional ability are of interest.

242 citations


Journal ArticleDOI
TL;DR: An extension of the kappa coefficient is proposed that is appropriate for use with multiple observations per subject and multiple response choices per observation; the method is applied to previously published data and illustrates new approaches to difficult problems in the evaluation of reliability.
Abstract: An extension of the kappa coefficient is proposed which is appropriate for use with multiple observations per subject (not necessarily an equal number) and for multiple response choices per observation. Computational methods and nonasymptotic, nonnull distribution theory are discussed. The proposed method is applied to previously published data, not only to compare results with those in earlier methods, but to illustrate new approaches to difficult problems in evaluation of reliability. The effect of using an 'Other' response category is examined. Strategies to enhance reliability are evaluated, including empirical investigation of the Spearman-Brown formula as applied to nominal response measures.

211 citations
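
A sketch of the equal-raters special case of such a multi-rater kappa is below (the paper's extension also handles unequal numbers of observations per subject); the ratings matrix is an illustrative assumption, and the Spearman-Brown step mirrors the strategy evaluation mentioned in the abstract.

```python
# Multi-rater kappa, equal-raters special case. n[i][j] = number of the
# m raters assigning subject i to category j; ratings are assumed.
import numpy as np

n = np.array([[4, 1, 0],
              [0, 5, 0],
              [2, 2, 1],
              [3, 0, 2]])
N, k = n.shape
m = int(n[0].sum())                                 # raters per subject

p_j = n.sum(axis=0) / (N * m)                       # category proportions
P_i = ((n ** 2).sum(axis=1) - m) / (m * (m - 1))    # per-subject agreement
kappa = (P_i.mean() - (p_j ** 2).sum()) / (1 - (p_j ** 2).sum())
print(f"kappa = {kappa:.3f}")

# Spearman-Brown prophecy formula: reliability of the mean of r ratings
# (investigated empirically for nominal measures in the paper).
def spearman_brown(rho, r):
    return r * rho / (1 + (r - 1) * rho)

print(f"projected kappa for 3 pooled ratings = {spearman_brown(kappa, 3):.3f}")
```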


Journal ArticleDOI
P. A. Brunt
TL;DR: The modern historian of Greece and Rome often depends for his information on writings whose reliability is no greater, though often much less, than that of the histories, now lost in whole or part, which their authors followed.
Abstract: The modern historian of Greece and Rome often depends for his information on writings whose reliability is no greater, though often much less, than that of the histories, now lost in whole or part, which their authors followed. The quality of these histories can sometimes be detected from the internal evidence of the extant derivative accounts, even when we cannot name the historians with any certainty.




Journal ArticleDOI
TL;DR: In this article, three methods of gathering and evaluating value profiles for use in market segmentation are compared, and different reliability estimates are found to produce different conclusions as to the relative t...
Abstract: Three methods of gathering and evaluating value profiles for use in market segmentation are compared. Different reliability estimates are found to produce different conclusions as to the relative t...

Journal ArticleDOI
TL;DR: In this paper, the reliability behavior of systems is investigated when two types of failure can occur, and the results are used for calculating the s-expected long-run cost rate for a generalized age replacement policy and repair limits.
Abstract: The reliability behavior of systems is investigated when two types of failure can occur: Type 1 failures are removed by minimal repair, Type 2 failures by replacement. Reliability expressions are derived, and the results are used for calculating the s-expected long-run cost rate for a generalized age replacement policy and repair limits.
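
A numeric sketch of a simpler member of this policy family is below: planned replacement at age T with minimal repair at every intermediate failure, under an assumed Weibull failure intensity (the paper's generalized policy additionally distinguishes the two failure types and repair limits).

```python
# Long-run cost rate for periodic replacement with minimal repair
# (a simpler special case; all parameters assumed). A Weibull intensity
# gives (T/theta)**beta expected minimal repairs by age T, so
#   C(T) = (c_m * (T / theta)**beta + c_r) / T.
import numpy as np

c_m, c_r = 1.0, 10.0         # assumed minimal-repair and replacement costs
beta, theta = 2.5, 100.0     # assumed Weibull shape and scale

T = np.linspace(1.0, 500.0, 5000)
C = (c_m * (T / theta) ** beta + c_r) / T
i = int(np.argmin(C))
print(f"optimal replacement age T* ~ {T[i]:.0f}, cost rate {C[i]:.4f}")
```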

01 Dec 1980
TL;DR: A new modeling methodology to characterize failure processes in time-sharing systems due to hardware transients and software errors is summarized, which gives quantitative relationships between performance, workload, and (lack of) reliability for digital computing systems.
Abstract: In this paper a new modeling methodology to characterize failure processes in time-sharing systems due to hardware transients and software errors is summarized. The basic assumption made is that the instantaneous failure rate of a system resource can be approximated by a deterministic function of time plus a zero-mean stationary Gaussian process, both depending on the usage of the resource considered. The probability density function of the time to failure obtained under this assumption has a decreasing hazard function, partially explaining why other decreasing-hazard densities such as the Weibull fit experimental data so well. Furthermore, by considering the kernel of the operating system as a system resource, this methodology sets the basis for independent methods of evaluating the contribution of software to system unreliability, and gives some non-obvious hints about how system reliability could be improved. A real system has been characterized according to this methodology, and an extremely good fit between predicted and observed behavior has been found. The predicted system behavior is also compared with the predictions of other models such as the exponential, Weibull, and periodic failure rate models. Current methodologies for reliability assessment may provide good models for explaining and predicting the behavior of systems in the presence of hard (recurrent) faults, but the effect and characterization of transient (non-recurrent) faults and software (design or implementation) errors remain elusive. Current reliability measures do not give individual users a feeling for the impact of unreliability on performance in general-purpose systems operating under a variety of workloads; that is, there are no general methods for a quantitative assessment of the benefits of fault tolerance.
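
The decreasing-hazard observation can be illustrated independently of the paper's Gaussian-process model: if the failure rate varies with workload from run to run, pooled times to failure behave like a mixture of exponentials, which has a decreasing hazard. A small simulation sketch, with assumed lognormal rate variation:

```python
# Workload-varying failure rates make pooled failure data look DFR
# (decreasing failure rate): a mixture of exponentials. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
rates = rng.lognormal(mean=-1.0, sigma=1.0, size=100_000)  # per-run rates
ttf = rng.exponential(1.0 / rates)                         # times to failure

# Crude empirical hazard: failures in [lo, hi) over units still at risk.
edges = np.linspace(0.0, 10.0, 11)
for lo, hi in zip(edges[:-1], edges[1:]):
    at_risk = (ttf >= lo).sum()
    failed = ((ttf >= lo) & (ttf < hi)).sum()
    print(f"[{lo:4.1f}, {hi:4.1f}): hazard ~ {failed / (at_risk * (hi - lo)):.3f}")
```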

Journal ArticleDOI
Ying Wang, Avizienis
TL;DR: The results of an extended effort to develop a unified approach to reliability modeling of fault-tolerant computers which strikes a good compromise between generality and practicality are summarized.
Abstract: The diversified nature of fault-tolerant computers led to the development of a multiplicity of reliability models which are seemingly unrelated to each other. As a result, it becomes difficult to develop automated tools for reliability analysis which are both general and efficient. Thus, the potential of reliability modeling as a practical and useful tool in the design process of fault-tolerant computers has not been fully realized. This paper summarizes the results of an extended effort to develop a unified approach to reliability modeling of fault-tolerant computers which strikes a good compromise between generality and practicality. The unified model encompasses repairable and nonrepairable systems, and models transient as well as permanent faults and their recovery. Based on the unified model, a powerful and efficient reliability estimation program, ARIES, has been developed.

Journal ArticleDOI
TL;DR: In this paper, conditional inference procedures are discussed for the shape parameter and for the current system reliability of a time-truncated Weibull process, and approximate confidence limits for the scale parameter are also developed.
Abstract: Conditional inference procedures are discussed for the shape parameter and for the current system reliability of a time-truncated Weibull process. These tests are shown to be uniformly most powerful unbiased. Approximate confidence limits for the scale parameter are also developed.
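
For orientation, the sketch below computes the standard point estimate and chi-square interval for the shape of a time-truncated Weibull (power-law) process (the paper's conditional tests are more refined); the failure times and truncation time are assumed, and SciPy supplies the chi-square quantiles.

```python
# Point and interval estimates for the shape of a time-truncated Weibull
# (power-law) process: beta_hat = n / sum(ln(T / t_i)), and
# 2 * n * beta / beta_hat ~ chi-square(2n). Failure times are assumed.
import math
from scipy.stats import chi2

t = [5.2, 11.0, 19.8, 31.5, 47.3, 61.0, 82.4]   # assumed failure times
T = 100.0                                        # truncation time
n = len(t)

beta_hat = n / sum(math.log(T / ti) for ti in t)
lo = beta_hat * chi2.ppf(0.025, 2 * n) / (2 * n)
hi = beta_hat * chi2.ppf(0.975, 2 * n) / (2 * n)
print(f"shape MLE = {beta_hat:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```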


10 Jan 1980
TL;DR: Although still limited because of the restrictive assumptions used, the model gives quantitative results about how much a user can expect from a time sharing system, as a function of the system workload and reliability.
Abstract: In this paper some measures are presented that characterize both the performance and reliability of digital computing systems in time-sharing environments from a user viewpoint. The measures (Apparent Capacity and Expected Elapsed Time required to correctly execute a given program) are based on a mathematical model built upon traditional assumptions. The model is a hybrid in that it uses statistics gathered from a real system while giving analytical expressions for other statistics such as the Expected Elapsed Time. The main parameters of the model are the system workload and the distribution of the time between errors. Although still limited because of the restrictive assumptions used, the model gives quantitative results about how much a user can expect from a time-sharing system, as a function of the system workload and reliability. For example, this study measured a four-to-one range in mean time to system failure as a function of system load. For the maximum load period measured, the model predicts a 40% contribution from system unreliability to expected computation time for a program that could require 30 minutes of CPU time in an unloaded situation.
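
A back-of-the-envelope version of the quoted 40% figure can be produced with a textbook restart-from-scratch model (not the paper's hybrid model): with exponential failures at rate lam and a program needing x minutes of uninterrupted CPU time, the expected elapsed compute time is (exp(lam*x) - 1)/lam. The MTBF values below are assumptions chosen to bracket light and heavy load.

```python
# Restart-from-scratch model: exponential failures at rate lam, a program
# needs x minutes of uninterrupted CPU time, E[T] = (exp(lam*x) - 1) / lam.
from math import exp

x = 30.0                      # CPU minutes needed, as in the example above
for mtbf in (1440.0, 46.0):   # assumed mean time between failures, minutes
    lam = 1.0 / mtbf
    overhead = ((exp(lam * x) - 1.0) / lam - x) / x
    print(f"MTBF {mtbf:6.0f} min: unreliability adds {100 * overhead:4.1f}% to run time")
```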


Journal ArticleDOI
TL;DR: A survey of the work performed over the last two decades on system reliability optimization, providing a global view of the state of the art of optimal reliability design.
Abstract: The paper provides a survey of the work performed over the last two decades on system reliability optimization. The relevant system models are first given, and a set of problems covering most cases are formulated. Then the optimization techniques used for solving these problems are briefly described, and a number of representative illustrative examples are collected. It is hoped that the paper helps in obtaining a global view of the state of the art of the field of optimal reliability design.
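
As a toy instance of the problem class surveyed, the sketch below brute-forces a redundancy allocation: choose the number of parallel units per stage of a series system to maximize reliability within a cost budget. Unit reliabilities, costs, and the budget are assumed; the survey's techniques are far more scalable.

```python
# Toy redundancy allocation for a series system of parallel stages,
# by brute-force enumeration (all parameters assumed).
from itertools import product

r = [0.80, 0.90, 0.75]          # unit reliabilities per stage
c = [2.0, 3.0, 1.5]             # unit costs per stage
budget = 20.0

best = (0.0, None)
for alloc in product(range(1, 6), repeat=3):        # 1..5 units per stage
    cost = sum(ni * ci for ni, ci in zip(alloc, c))
    if cost <= budget:
        rel = 1.0
        for ni, ri in zip(alloc, r):
            rel *= 1.0 - (1.0 - ri) ** ni           # parallel stage reliability
        if rel > best[0]:
            best = (rel, alloc)

print(f"best allocation {best[1]} gives system reliability {best[0]:.4f}")
```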

Journal ArticleDOI
TL;DR: The items in a standard interview, The Edinburgh Alcohol Dependence Schedule, are shown to have acceptable inter-rater reliability, although difficulties in operationalizing ‘impaired control’ remain.
Abstract: Instruments measuring alcohol dependence have seldom been presented with evidence of their reliability. There are a number of difficulties in designing such instruments which contribute to unreliability. The items in a standard interview, The Edinburgh Alcohol Dependence Schedule, are shown in this paper to have acceptable inter-rater reliability, although difficulties in operationalizing ‘impaired control’ remain.

Journal ArticleDOI
TL;DR: In this article, a scientific approach to systematically account for the uncertainties and their interactions in the selection of safety factors and return periods for various risk levels in hydraulic design is presented, which can be used to develop risk-safety relationships for various return periods and expected service life.
Abstract: Hydraulic structures are designed with reference to some natural events which could be imposed on the structure during its expected service life. Conventional return-period design methods fail to systematically account for the many uncertainties in design. By systematically analyzing the component uncertainties and their interactions using the concepts of reliability theory and first-order analysis of uncertainties, a composite risk and reliability can be defined. This paper presents static and dynamic risk and reliability models that can be used to develop risk-safety relationships, for various return periods and expected service lives, for use in design. The static models consider a single loading application, and the dynamic models consider repeated application of random loadings to define a composite risk. As an example of the methodology, the models are applied to develop risk-safety curves for culvert design. This work presents a scientific approach to systematically account for the uncertainties and their interactions in the selection of safety factors and return periods for various risk levels in hydraulic design.
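
The static single-loading case reduces to familiar hydrologic risk arithmetic: the probability that the T-year event is equaled or exceeded at least once in an n-year service life is 1 - (1 - 1/T)^n. A short tabulation:

```python
# Hydrologic risk over a service life: the T-year design event is equaled
# or exceeded at least once in n years with probability 1 - (1 - 1/T)**n.
for T in (25, 50, 100):          # return period, years
    for n in (25, 50):           # expected service life, years
        risk = 1 - (1 - 1 / T) ** n
        print(f"T = {T:3d} yr, n = {n:2d} yr: risk = {risk:.2f}")
```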


A. Vary1
01 Jan 1980
TL;DR: The state of the art of ultrasonic methods is reviewed with reference to the basic measurements, signal acquisition and processing, strength property and morphological condition measurements, and industrial applications.
Abstract: The state of the art of ultrasonic methods is reviewed with reference to the basic measurements, signal acquisition and processing, strength property and morphological condition measurements, and industrial applications. The emphasis is placed on techniques that indicate quantitative ultrasonic correlations with material strength and morphology relevant to the reliability of load-bearing structures.

Journal ArticleDOI
TL;DR: In this article, conditions are derived under which the parameters of the log-normal and Weibull distribution functions of fatigue life, together with the cumulative damage rule, satisfy the statistical Miner's rule.
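
For context, the deterministic Miner's rule underlying the statistical version studied here predicts failure when the summed damage fractions sum(n_i / N_i) reach 1; the cycle counts and fatigue lives below are assumed values.

```python
# Worked example of (deterministic) Miner's rule: damage D = sum(n_i / N_i),
# where n_i cycles are applied at a stress level with fatigue life N_i.
blocks = [(2.0e4, 1.0e5),       # assumed (applied cycles, life) per level
          (5.0e4, 2.0e5),
          (1.0e5, 1.0e6)]
D = sum(n / N for n, N in blocks)
print(f"accumulated damage D = {D:.2f}  (failure predicted at D = 1)")
```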

Journal ArticleDOI
TL;DR: The past several years have marked a considerable upsurge of interest in the conceptualization and measurement of reliability in the social sciences, particularly in regard to measures that are formed as linear composites (weighted or unweighted sums) of individual items.
Abstract: The past several years have marked a considerable upsurge of interest in the conceptualization and measurement of reliability in the social sciences, particularly in regard to measures that are formed as linear composites (weighted or unweighted sums) of individual items (Novick and Lewis, 1967; Bohrnstedt, 1969; Werts and Linn, 1970; Heise and Bohrnstedt, 1970; Armor, 1974; Smith, 1974a). One result of this interest has been a proliferation in the literature of a variety of "reliability coefficients" that measure...
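
The most widely used of these coefficients for linear composites is Cronbach's alpha; a minimal sketch with simulated item data (an assumption, for illustration) follows.

```python
# Cronbach's alpha for a linear composite of k items:
#   alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total).
import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(size=(200, 1))
items = true_score + 0.8 * rng.normal(size=(200, 4))   # 4 noisy indicators

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```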

Journal ArticleDOI
TL;DR: The reliability analysis of the engineered safety systems of nuclear power plants requires the calculation of the pointwise and average unavailabilities of redundant systems under periodic test and failure.
Abstract: The reliability analysis of the engineered safety systems of nuclear power plants requires the calculation of the pointwise and average unavailabilities of redundant systems under periodic test and...
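
For a single periodically tested standby component with assumed exponential failures and instantaneous, perfect tests (a building block for the redundant-system calculations the abstract describes), the pointwise unavailability between tests at interval tau is q(t) = 1 - exp(-lam*(t mod tau)), and its average is approximately lam*tau/2. A quick check:

```python
# Average unavailability of a periodically tested standby component
# (assumed exponential failures, instantaneous perfect tests):
#   q_avg = 1 - (1 - exp(-lam * tau)) / (lam * tau)  ~  lam * tau / 2.
from math import exp

lam = 1.0e-5          # assumed failure rate, per hour
tau = 720.0           # assumed test interval, hours (monthly)

q_avg = 1.0 - (1.0 - exp(-lam * tau)) / (lam * tau)
print(f"average unavailability = {q_avg:.2e} (approx lam*tau/2 = {lam * tau / 2:.2e})")
```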