
Showing papers in "IEEE Transactions on Reliability in 1973"


Journal ArticleDOI
TL;DR: In this article, true optimal system design is treated as a joint choice of component reliability improvement and redundancy; a particular cost-reliability curve is used to illustrate the feasibility of this approach, which is more general and can be extended to any number of constraints.
Abstract: The reliability literature offers an abundance of methods for the optimal design of systems under some constraints. In most of the papers, the problem considered is: given reliabilities of each constituent component and their constraint-type data, optimize the system reliability. This amounts to the assignment of optimal redundancies to each stage of the system, with each component reliability specified. This is a partial optimization of the system reliability. At the design stage, a designer has many options, e.g., component reliability improvement and use of redundancy. A true optimal system design explores these alternatives explicitly. Our paper demonstrates the feasibility of arriving at an optimal system design using the latter concept. For simplicity, only a cost constraint is used; however, the approach is more general and can be extended to any number of constraints. A particular cost-reliability curve is used to illustrate the approach.

121 citations
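
As a rough illustration of the joint design space the paper advocates, the sketch below brute-forces both component-reliability choices and redundancy levels for a small series-parallel system under a single cost constraint. The cost-reliability curve, the candidate reliabilities, and all numbers are assumptions for illustration, not the paper's specific curve or data:

import itertools
import math

# Hypothetical cost-reliability curve: the cost of one component with
# reliability r. An assumption for illustration, not the paper's curve.
def unit_cost(r, k=1.0):
    return -k * math.log(1.0 - r)

def system_reliability(design):
    # design: one (r, n) pair per stage; stages in series, n parallel units each
    R = 1.0
    for r, n in design:
        R *= 1.0 - (1.0 - r) ** n
    return R

def best_design(stages, r_choices, n_max, budget):
    # Exhaustive search over component quality AND redundancy for each stage
    options = [(r, n) for r in r_choices for n in range(1, n_max + 1)]
    best = (0.0, None)
    for design in itertools.product(options, repeat=stages):
        cost = sum(n * unit_cost(r) for r, n in design)
        if cost <= budget:
            R = system_reliability(design)
            if R > best[0]:
                best = (R, design)
    return best

print(best_design(stages=3, r_choices=[0.80, 0.90, 0.95], n_max=3, budget=10.0))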


Journal ArticleDOI
TL;DR: The best linear unbiased estimator of the parameter of the Rayleigh distribution using order statistics in a Type II censored sample from a potential sample of size N is considered in this article.
Abstract: The best linear unbiased estimator of the parameter of the Rayleigh distribution using order statistics in a Type II censored sample from a potential sample of size N is considered. The coefficients for this estimator are tabled to five decimal places for N = 2(1)15 and censoring values of r1 (the number of observations censored from the left) and r2 (the number of observations censored from the right) such that r1 + r2 ≤ N − 2 for N = 2(1)10, and r1 + r2 ≤ N − 3 for N = 11(1)15.

80 citations
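
The standard route to such tabled coefficients is Lloyd's generalized-least-squares form of the BLUE: if the observed order statistics x satisfy E[x] = σα and Cov[x] = σ²B, where α and B are the means and covariances of the standardized order statistics, then σ̂ = α′B⁻¹x / α′B⁻¹α. A minimal numerical sketch with placeholder values of α and B, not the paper's tabled values:

import numpy as np

# Lloyd-type BLUE of a scale parameter sigma from k order statistics:
# E[x] = sigma * alpha, Cov[x] = sigma**2 * B.
def blue_scale(x, alpha, B):
    Binv = np.linalg.inv(B)
    return float(alpha @ Binv @ x) / float(alpha @ Binv @ alpha)

alpha = np.array([0.70, 1.10, 1.50])   # hypothetical standardized means
B = np.array([[0.20, 0.08, 0.05],
              [0.08, 0.25, 0.10],
              [0.05, 0.10, 0.35]])     # hypothetical covariance matrix
x = np.array([0.65, 1.02, 1.48])       # observed order statistics
print(blue_scale(x, alpha, B))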


Journal ArticleDOI
TL;DR: In this paper, the location, shape, and scale parameters of the Weibull distribution are estimated from Type I progressively censored samples by the method of maximum likelihood, and the approximate asymptotic variance-covariance matrix for the maximum likelihood parameter estimates is given.
Abstract: The location, shape, and scale parameters of the Weibull distribution are estimated from Type I progressively censored samples by the method of maximum likelihood. Nonlinear logarithmic likelihood estimating equations are derived, and the approximate asymptotic variance-covariance matrix for the maximum likelihood parameter estimates is given. The iterative procedure to solve the likelihood equations is a stable and rapidly convergent constrained modified quasilinearization algorithm which is applicable to the general case in which all three parameters are unknown. The numerical results indicate that, in terms of the number of iterations required for convergence and in the accuracy of the solution, the proposed algorithm is a very effective technique for solving systems of logarithmic likelihood equations for which all iterative approximations to the solution vector must satisfy certain intrinsic constraints on the parameters. A FORTRAN IV program implementing the maximum likelihood estimation procedure is included.

54 citations
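
The likelihood structure is easy to sketch for a simplified setting: a single Type I right-censoring time rather than the paper's progressive scheme, and a generic simplex search in place of its constrained quasilinearization algorithm. All data and starting values below are simulated assumptions:

import numpy as np
from scipy.optimize import minimize

# Negative log-likelihood for a three-parameter Weibull (location a,
# scale b, shape c) with right-censored observations.
def negloglik(params, t, censored):
    a, b, c = params
    if b <= 0 or c <= 0 or np.any(t <= a):
        return np.inf                    # enforce intrinsic parameter constraints
    z = (t - a) / b
    ll = np.where(censored,
                  -z**c,                                        # log-survival
                  np.log(c / b) + (c - 1) * np.log(z) - z**c)   # log-density
    return -ll.sum()

rng = np.random.default_rng(1)
t = 2.0 + 3.0 * rng.weibull(1.5, size=50)   # simulated: a = 2, b = 3, c = 1.5
censored = t > 6.0                          # Type I censoring at t = 6
t = np.minimum(t, 6.0)
res = minimize(negloglik, x0=[1.0, 2.0, 1.0], args=(t, censored),
               method="Nelder-Mead")
print(res.x)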


Journal ArticleDOI
TL;DR: In this article, the ranks, coefficients, variance, and efficiency for the k-optimum linear unbiased estimator of the Rayleigh distribution for k = 2(1)4 and a sample size of N = 2 (1)22.
Abstract: The member of the class of best linear unbiased estimators (BLUEs) of a parameter based on k order statistics which has minimum variance is called the k-optimum BLUE. The ranks, coefficients, variance, and efficiency are given for the k-optimum BLUE of the parameter of the Rayleigh distribution for k = 2(1)4 and a sample size of N = 2(1)22. In addition, an approximate k-optimum BLUE is given for k = 2(1)4 and N ≥ 23.

42 citations


Journal ArticleDOI
G. Boyd Swartz
TL;DR: In this paper, necessary and sufficient conditions are given for a function to be the mean residual lifetime function of a random variable with finite mean.
Abstract: Necessary and sufficient conditions for a function to be a mean residual lifetime function of a random variable with finite mean are given.

40 citations


Journal Article
TL;DR: In this article, the authors provide management with an overview of product reliability and related areas, with charts showing the expected relation of part count to laboratory test results and the expected relation of laboratory test results to operational performance.
Abstract: This paper provides management with an overview of product reliability and related areas. Charts are provided which show the expected relation of part count to laboratory test results and the expected relation of laboratory test results to operational performance. These charts are followed by laboratory and operational material used in the development of the overview charts.

38 citations


Journal ArticleDOI
TL;DR: In this article, the reliability and availability characteristics of a 2-unit cold standby system with a single repair facility are analyzed under the assumption that the failure and the repair times are both generally distributed.
Abstract: The reliability and the availability characteristics of a 2-unit cold standby system with a single repair facility are analyzed under the assumption that the failure and the repair times are both generally distributed. System breakdown occurs when the operating unit fails while the other unit is undergoing repair. The system is characterized by the probability of being up or down. Integral equations corresponding to different initial conditions are set up by identifying suitable regenerative stochastic processes. The probability of the first passage to the down-state starting from specified initial conditions is obtained by the same method. An explicit expression for a Laplace Transform of the probability density function (pdf) of the downtime during an arbitrary time interval is obtained when the repair time is exponentially distributed. A general method is suggested for the calculation of the moments of the downtime when the repair time is arbitrarily distributed.

37 citations
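
For orientation, specializing both distributions to exponential (failure rate λ, repair rate μ) reduces the integral equations to a two-state Markov first-passage problem with a familiar closed form (a textbook special case, not a formula quoted from the paper):

\mathrm{MTTF} \;=\; \frac{1}{\lambda} \;+\; \frac{\lambda+\mu}{\lambda^{2}} \;=\; \frac{2\lambda+\mu}{\lambda^{2}},

the first term being the wait for the first failure, and the second the mean time, once a unit is in repair, until a second failure wins the race against the repair.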


Journal ArticleDOI
TL;DR: In this article, a reliability model is proposed and evaluated for a fault tolerant computer system which consists of multiple classes of modules and allows for degraded modes of performance, each module of a given class has both an active and a passive hazard rate; constant hazard rates are assumed for active and dormant failures, and the given class may operate either in N Modular Redundancy (NMR) or as a standby sparing system.
Abstract: A reliability model is proposed and evaluated for a fault tolerant computer system which consists of multiple classes of modules and allows for degraded modes of performance. Each module of a given class has both an active and a passive hazard rate; constant hazard rates are assumed for active and dormant failures, and the given class may operate either in N Modular Redundancy (NMR: n + 1 out of 2n + 1 = N) or as a standby sparing system. The model allows for mission-phase changes at deterministic time points when the numbers of modules per class can be changed. The analysis proceeds by generalizing the notions of standby and NMR redundancy, which for N = 3 is TMR (Triple Modular Redundancy), into a concept called hybrid-degraded redundancy. The probabilistic evaluation of the unified redundancy concept is then developed to yield, for a given modular class, the joint distribution of success and the number of nonfailed modules from that class, at special times. With this information, a Markov chain analysis gives the reliability of an entire sequence of phases (mission profile).

36 citations


Journal ArticleDOI
TL;DR: In this paper, a normative 2-stage model for incorporating reliability measurements of data-reporting sources in a Bayesian inference system is presented, where human subjects are asked to make intuitive inferences about two hypotheses on the basis of sample data which were reported with a given reliability.
Abstract: A normative 2-stage model for incorporating reliability measurements of data-reporting sources in a Bayesian inference system is presented. An experiment required human subjects to make intuitive inferences about two hypotheses on the basis of sample data which were reported with a given reliability. When compared with the optimal model, subjects exhibited systematic errors in estimating the diagnostic impact of less than perfectly reliable data. Their responses reflected the use of specific nonoptimal heuristic strategies to process the information. A utility function was added to the normative model to illustrate how a best choice might be made from among potential data-gathering experiments whose costs increase with their reliabilities. Recommendations for using computer aids to enhance efficiency in inference systems are made.

29 citations
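
A minimal sketch of one common normative two-stage formulation, assuming a binary datum and symmetric report errors (the paper's model may differ in detail): the first stage converts the likelihood of the datum into the likelihood of the report, and the second applies Bayes' rule as usual.

# Two-stage Bayesian update for a binary datum reported by a source that
# reports the true datum with probability r. A sketch of the idea only.
def posterior_odds(prior_odds, p_d_h1, p_d_h0, r):
    # Stage 1: likelihood of the *report* under each hypothesis
    l1 = r * p_d_h1 + (1 - r) * (1 - p_d_h1)
    l0 = r * p_d_h0 + (1 - r) * (1 - p_d_h0)
    # Stage 2: ordinary Bayesian update with the attenuated likelihood ratio
    return prior_odds * l1 / l0

print(posterior_odds(1.0, 0.8, 0.3, 1.0))  # fully reliable report: odds ~2.67
print(posterior_odds(1.0, 0.8, 0.3, 0.7))  # unreliable report: impact shrinks to ~1.48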


Journal ArticleDOI
TL;DR: Cognitive reliability, the reliability of human information processing and utilization, is discussed in terms of the types of human errors which may occur and the factors which affect their occurrence; it is useful for examining man's role in complex systems where cognitive as well as perceptual-motor skills are required.
Abstract: Human components in manned systems often compensate for hardware error by utilizing other information and past experience in addition to the normal hardware output. However, errors in human information processing and utilization often lower the overall system reliability; this aspect is termed cognitive reliability. Cognitive reliability in manned systems is discussed in terms of the types of human errors which may occur and in terms of factors which affect the occurrence of these errors. It is a complex function of attitudinal and structural factors and their interaction. Cognitive reliability is useful for examining man's role in complex systems where cognitive as well as perceptual-motor skills are required.

28 citations


Journal ArticleDOI
TL;DR: Similarities and differences among 22 methods of quantitatively predicting operator and technician performance are described, with emphasis given to the eight methods that are most fully developed and most likely to be used by system engineers.
Abstract: Similarities and differences among 22 methods of quantitatively predicting operator and technician performance are described. Emphasis has been given to the eight methods that are most fully developed and most likely to be used by system engineers. Two general techniques are employed: analysis of historical data and computer simulation of behavioral processes. No general-purpose methodology is available; each method deals with some types of tasks and systems more efficiently than others. In general, simulation-based methods are more powerful than nonsimulation methods. Most methods output probability estimates of successful task/system performance and completion time, but are relatively insensitive to equipment design parameters, manpower selection, and training needs. With only one exception, no operability method utilizes a formal data base as input, and in most cases the parameters these input data describe are not specifically indicated. For most methods, validation and/or system-application data are either lacking or incomplete.

Journal ArticleDOI
TL;DR: In this article, a parametric approach for the optimization of system reliability with linear constraints is presented, which is analytically complete, sufficiently accurate and computationally simple, gives optimum or near optimum design.
Abstract: This paper presents a new method for the optimization of system reliability with linear constraints, using the parametric approach [1]. The classical nonlinear programming technique is used for the solution. This method, which is analytically complete, sufficiently accurate and computationally simple, gives optimum or near optimum design. The procedure is illustrated with examples, and flow charts for the problems are given.

Journal ArticleDOI
TL;DR: In this article, a mathematical model for the reliability of modularly redundant systems with repair is presented. But the model allows different hazard rates for active units and for standby units, and the hazard rates are assumed to be constant.
Abstract: A mathematical model is established for the reliability of modularly redundant systems with repair. The model allows different hazard rates for active units and for standby units. The hazard rates are assumed to be constant. The cases of constant repair rate and constant repair time for a two unit system are evaluated using the reliability and mean time between failure. The approach is then extended to systems with more than two units. A system parameter, relating to certain types of sensing, switching, and/or recovery has a very significant impact on system reliability for modularly redundant systems with repair.
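A minimal numerical sketch of the two-unit case, with illustrative rates and perfect sensing/switching assumed (so the paper's significant sensing/switching/recovery parameter is omitted): with constant hazard rates the up states form a small Markov chain, and the mean time to failure solves a linear system.

import numpy as np

# Two-unit standby system with repair: state 0 = one unit active, one in
# standby; state 1 = one active, one in repair; system failure is absorbing.
# lam_a = active hazard rate, lam_s = standby hazard rate, mu = repair rate.
lam_a, lam_s, mu = 0.01, 0.002, 0.5     # illustrative constants

# Generator matrix restricted to the transient (up) states
Q = np.array([[-(lam_a + lam_s),  lam_a + lam_s],
              [mu,               -(lam_a + mu)]])

# Mean time to absorption from each up state: solve -Q t = 1
t = np.linalg.solve(-Q, np.ones(2))
print("MTTF from the full-up state:", t[0])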

Journal ArticleDOI
TL;DR: In this article, the Kalman Filter technique is applied to the problem of predicting human operator performance in the execution of a wide variety of tasks described by an exponential improvement model, which can be used as a guide by management on the efficiency of task design, operator selection, and operator training functions.
Abstract: The Kalman Filter technique is applied to the problem of predicting human operator performance in the execution of a wide variety of tasks described by an exponential improvement model. Reliable predictions can be used as a guide by management on the efficiency of task design, operator selection, and operator training functions. Results of industrial case studies involving mechanical and electrical assemblies show that realistic predictions can be made even when the model parameters are nonstationary. Steady-state detection is also included in the paper to permit the isolation of the "improvement plateau" phenomenon, which indicates a false performance ceiling. In such instances both the initial improvement phase and the recovery phase are described by exponential models.
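A minimal scalar sketch under strong assumptions: the gap between current performance and its plateau is taken to decay geometrically with a known factor phi, and a Kalman filter tracks that gap from noisy observations. The paper's treatment also handles nonstationary parameters and steady-state detection, which this omits; all numbers are illustrative.

import numpy as np

# Exponential improvement model: the gap x[k] between current task time and
# the plateau decays as x[k+1] = phi * x[k] + w; we observe y[k] = x[k] + v.
phi, q, r = 0.9, 0.01, 0.25     # decay factor, process and measurement noise
x_hat, p = 5.0, 1.0             # initial gap estimate and its variance

rng = np.random.default_rng(0)
x = 5.0
for k in range(30):
    x = phi * x + rng.normal(0.0, q**0.5)    # simulated true gap
    y = x + rng.normal(0.0, r**0.5)          # noisy performance measurement
    x_hat, p = phi * x_hat, phi * phi * p + q                  # predict
    kgain = p / (p + r)
    x_hat, p = x_hat + kgain * (y - x_hat), (1 - kgain) * p    # update
print("estimated remaining improvement:", x_hat)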

Journal ArticleDOI
TL;DR: A taxonomy suitable for process control is suggested and a scheme for collection of data on reliability in process plants is outlined; the importance of the process computer is emphasized, both in its impact on the control system and as a research tool.
Abstract: Application of reliability engineering to process plant design, operation and control has led to a demand for estimates of "human reliability," or "human error." However, it is desirable to treat the concept of "human error" with caution and to avoid an approach in which the operator appears to be held solely responsible or even blamed; in reality the error arises out of a quite specific combination of conditions in the man-machine system and it is on the total system that attention should be centered. Problems such as performance of discrete tasks and vigilance tasks or behavior in emergencies have been widely studied. While some of this work appears applicable to process control, there remain significant gaps. A more comprehensive taxonomy of process operator error seems to be needed. Systems exist for reporting incidents in process plants, each with a particular objective such as improvement of personal safety, estimation of insurance rates, or reduction of maintenance costs. Data produced are specialized and often useless for other purposes. In particular, it is difficult to abstract data on malfunction involving the operator. A taxonomy suitable for process control is suggested and a scheme for collection of data on reliability in process plants is outlined. Attention is drawn to the relevance of the process and control system characteristics and to their variability. The importance of the process computer is emphasized, both in its impact on the control system and as a research tool.

Journal ArticleDOI
TL;DR: The problem of determining which of a large set of possible but improbable malfunctions gave rise to a given set of measurements is discussed, and a sequence of successive quasilinearizations and estimations is proved to converge to a minimum of the original objective function.
Abstract: This paper discusses the problem of determining which of a large set of possible but improbable malfunctions gave rise to a given set of measurements. The classes of systems under consideration generally lead to underdetermined sets of equations. Three methods of formulating and solving this class of problems are presented: 1) the pseudoinverse method: this leads to an easily-solved computational problem, but it is not physically realistic and it tends to give poor results; 2) a pattern recognition approach based on a more realistic problem formulation: unfortunately, the computational problems associated with this formulation may be formidable; and 3) a quadratic programming approach: this is based on minimization of a physically realistic objective function. A modification to eliminate discontinuities in the objective function and a quasilinearization transform the original problem to an inequality-constrained quadratic minimization problem, which is readily solved by Lemke's complementary pivoting method. A sequence of successive quasilinearizations and estimations is defined and is proved to converge to a minimum of the original objective function. In tests, this convergence occurred very quickly. Examples are given; very general classes of problems are discussed which can be handled in this way.
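A short illustration of why the pseudoinverse method (method 1) tends to give poor results: for an underdetermined measurement model m = Ax, the pseudoinverse returns the minimum-norm solution, which smears the malfunction across every component instead of concentrating it in one. The matrix and measurements below are made up.

import numpy as np

# 2 measurements, 4 candidate malfunctions: underdetermined
A = np.array([[1.0, 1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])
m = np.array([2.0, 1.0])          # observed deviations

x = np.linalg.pinv(A) @ m         # minimum-norm least-squares solution
print(x)   # small nonzero "blame" everywhere: physically unrealistic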

Journal ArticleDOI
TL;DR: In this article, the authors present a method for computer calculations of Pr{X ⩾ Y} where X and Y are each from a three-parameter Weibull distribution, and provide the moments and the probability density function of the difference.
Abstract: It is important in many reliability applications to determine the probability that the failure time of an element from one population will exceed that of an element from a second population. In this paper, we present a method for computer calculations of Pr{X ⩾ Y} where X and Y are each from a three-parameter Weibull distribution. In addition, we provide the moments and the probability density function of the difference. Numerical examples are included.
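A Monte Carlo cross-check of Pr{X ⩾ Y} is easy to write and useful for validating such numerical computations; the parameters below are made up for illustration.

import numpy as np

# Three-parameter Weibull sampler: location a, scale b, shape c
def rweibull3(rng, n, a, b, c):
    return a + b * rng.weibull(c, size=n)

rng = np.random.default_rng(42)
n = 1_000_000
x = rweibull3(rng, n, a=1.0, b=2.0, c=1.8)   # e.g., strength
y = rweibull3(rng, n, a=0.5, b=1.5, c=1.2)   # e.g., stress
print("Pr{X >= Y} =", (x >= y).mean())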

Journal ArticleDOI
TL;DR: In this paper, the mean time to failure of an engine under a burn-in program is calculated and conditions under which such a program is justified are derived; neither operating state need have any statistical property that improves with burn-in.
Abstract: Burn-in programs are often used for automotive or airplane engines in order to eliminate early failures due to ineffective adjustments and similar repairable sources of failure. We assume that there are two operating states: Good and Poor. Each has its own reliability characteristic, and neither necessarily has any statistical property that improves with burn-in. The purpose of the burn-in program is to uncover the Poor engines and then to repair them to the Good state. We calculate the mean time to failure of such engines when a burn-in program is used and derive conditions under which a burn-in program is justified.
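A limiting case makes the trade-off concrete (a simplification assuming the burn-in detects and repairs every Poor engine; the paper's conditions are more general). If a fraction p of engines start Good with mean life m_G and the rest start Poor with mean life m_P < m_G, then

\mathrm{MTTF}_{\text{no burn-in}} = p\,m_G + (1-p)\,m_P, \qquad \mathrm{MTTF}_{\text{ideal burn-in}} = m_G,

so the program is worthwhile roughly when the gain (1 − p)(m_G − m_P) outweighs the cost of burning in and repairing.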

Journal ArticleDOI
TL;DR: A definition of availability having a probabilistic guarantee for a finite interval is made, and examples and an asymptotic solution are presented.
Abstract: A definition of availability having a probabilistic guarantee for a finite interval is made. Examples and an asymptotic solution are presented.


Journal ArticleDOI
TL;DR: In this paper, a smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing, and the reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function.
Abstract: A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
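The flavor of the comparison can be reproduced with a parametric stand-in: a gamma prior fitted by moments and the resulting shrinkage estimator. The paper's estimator is a smooth, nonparametric empirical Bayes rule; this gamma-Poisson sketch with unit exposures just shows why shrinkage cuts mean-squared error.

import numpy as np

rng = np.random.default_rng(7)
lam = rng.gamma(shape=3.0, scale=0.5, size=200)   # true hazard rates
x = rng.poisson(lam)                              # failure counts, unit exposure

# Moment-matched gamma prior: marginal mean m = a/b, variance v = m + m/b
m, v = x.mean(), x.var()
b = m / max(v - m, 1e-9)          # prior rate
a = m * b                         # prior shape

eb = (x + a) / (1.0 + b)          # posterior-mean (shrinkage) estimates
mle = x.astype(float)             # conventional estimate
print("MSE, MLE:", np.mean((mle - lam) ** 2),
      "  MSE, EB:", np.mean((eb - lam) ** 2))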

Journal ArticleDOI
TL;DR: In this paper, a fuel charge/discharge system for a nuclear reactor with on-load fuel changing is considered, consisting of two fuel charging (fc) machines which operate in parallel.
Abstract: A fuel charge/discharge system for a nuclear reactor with on-load fuel changing is considered. The system consists of two fuel charging (fc) machines which operate in parallel. When both fc machines have been down for more than 28 days, the reactor becomes subcritical and shuts down spontaneously. Using conditional probability arguments and renewal theory, it is shown how reliability measures of a two-unit redundant system can be extended to obtain the expected frequency and duration of reactor shutdowns due to fc system failures.

Journal ArticleDOI
TL;DR: In this paper, a competing risk model is developed for an individual who is subject to two risks of death or failure: failure of a single organ which has a constant hazard rate and failure of two organ systems in which one of the two organs must survive in order that the system not fail.
Abstract: A competing risk model is developed for an individual who is subject to two risks of death or failure. One risk is the failure of a single organ which has a constant hazard rate. The other risk is the failure of a two organ system in which one of the two organs must survive in order that the system not fail. Each organ in this two organ system has a constant hazard rate, and the hazard rate increases when only one organ is working. The two risks are assumed to operate independently. Maximum likelihood estimating equations are developed along with the formulas for the large sample variance-covariance matrix of the requisite parameters.
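Under one consistent parameterization (per-organ hazard θ in the paired system, rising to θ′ once an organ fails, and λ for the single organ; the notation is assumed here, not taken from the paper), independence of the two risks gives the overall survival function

S(t) = e^{-\lambda t}\left[ e^{-2\theta t} + \frac{2\theta}{2\theta-\theta'}\bigl(e^{-\theta' t} - e^{-2\theta t}\bigr) \right], \qquad 2\theta \neq \theta',

where the bracketed term conditions on whether the first organ failure has occurred by time t.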

Journal ArticleDOI
Sheldon Baron
TL;DR: In this article, a model for the human controller in continuous manual control tasks is described, based on modern control and human response theories, and techniques for predicting closed-loop system performance and relative workload for the task are presented.
Abstract: A model for the human controller in continuous manual control tasks is described. The model is based on modern control and human response theories. Techniques for predicting closed-loop system performance and relative workload for the task are presented. The implications for reliability assessment in corresponding man-machine problems are discussed. Finally, an example that involves prediction of pilot performance and workload, and their relation to mission success, for a STOL landing-approach is given.

Journal ArticleDOI
TL;DR: This paper approaches the study of human reliability by identifying numerous factors that tend to increase the probability that errors will occur in computer-based business information systems.
Abstract: The study of human reliability has traditionally taken an "error-rate" approach, with the emphasis on identifying and reporting supposedly consistent and basic error-rate levels for various manual activities. This paper approaches the study of human reliability by identifying numerous factors that tend to increase the probability that errors will occur in computer-based business information systems. The major causal factor categories are personal, design, documentation, training, source data, man-machine interface, and environment. This causal factor approach suggests that most human-generated errors can be prevented by eliminating the factors that cause errors to occur.

Journal ArticleDOI
TL;DR: In this article, the Laplace transforms of the reliability and mean time to system failure are derived for a 1-unit repairable system; the special case of constant failure and repair rates is treated and the results are compared with those of Calabro.
Abstract: This paper considers a 1-unit system; the unit is repaired upon failure. The failure and repair rates need not be constant. The system fails if the unit is not repaired within a fixed time, or if the number of failures during the mission exceeds a fixed number. As a special case, that number is allowed to be "infinite." The Laplace transforms of the reliability and mean time to system failure are derived; they are not easily solved. The special case of constant failure and repair rates is treated. The results are compared with those of Calabro.

Journal ArticleDOI
TL;DR: This paper aims to obtain the optimum cost allocation to a number of components connected in series (no redundancy) with a view to maximizing the system reliability subject to a given total cost of the system.
Abstract: This paper aims to obtain the optimum cost allocation to a number of components connected in series (no redundancy) with a view to maximizing the system reliability subject to a given total cost of the system. The reliability of each component is a function of its cost. The technique of Dynamic Programming has been employed to achieve the results.
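A minimal dynamic-programming sketch of this allocation, with cost discretized to integer units and an assumed cost-reliability curve for each component (both are assumptions for illustration, not the paper's formulation):

import math

# Hypothetical cost-reliability curve for component i: diminishing returns,
# with later components harder to improve. Not the paper's curve.
def R(i, c):
    return 1.0 - math.exp(-c / (i + 1.0))

def allocate(n_components, budget):
    # best[c] = (reliability, allocation) over the components handled so far,
    # spending exactly c cost units
    best = {c: (1.0, []) for c in range(budget + 1)}
    for i in range(n_components):
        best = {c: max(((best[c - s][0] * R(i, s), best[c - s][1] + [s])
                        for s in range(c + 1)), key=lambda t: t[0])
                for c in range(budget + 1)}
    return max(best.values(), key=lambda t: t[0])

rel, alloc = allocate(n_components=3, budget=12)
print(rel, alloc)    # series reliability and per-component spend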

Journal ArticleDOI
TL;DR: This paper presents a formal analysis of the problem of determining the inferential impact of the information in a composite report from a collection of unreliable observers or sensors and uses the notion of conditional independence to express these assumptions in a tractable form.
Abstract: This paper presents a formal analysis of the problem of determining the inferential impact of the information in a composite report from a collection of unreliable observers or sensors. Each sensor reports one of a finite number of possible states of a data system linked probabilistically with an ``objective system'' whose condition is to be inferred from the data state. The principal assumptions are that the sensors do not ``collaborate'' in making their reports and that their reports are conditioned only by the existing data state and not by the actual, unobservable state of the objective system. Use of the notion of conditional independence to express these assumptions gives the analytic expressions a tractable form which sheds light on various inference issues. The paper also briefly discusses current empirical research on the question of how well people actually adjust the impact of inferential evidence to correspond to the unreliability of the sources of information.

Journal ArticleDOI
TL;DR: Equations are derived which enable one to calculate the system reliability for parallel or triple modular redundant systems with standby spares, and a comparison of the parallel and TMR/Spares system configurations is given.
Abstract: Equations are derived which enable one to calculate the system reliability for parallel or triple modular redundant systems with standby spares. Software error detection is introduced into the TMR/Spares system configuration in order to utilize fully all of the units. An indication of the sensitivity of the system reliability to an increase in the number of spares, partitioning, switching, variations in the powered and unpowered failure rates, and time is presented. A comparison of the parallel and the TMR/Spares system configurations, under similar conditions, is given.
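For reference, the textbook baselines underlying such equations, for unit reliability R (the paper's versions add spares, partitioning, switching, and distinct powered/unpowered failure rates):

R_{\text{TMR}} = R^{3} + 3R^{2}(1-R) = 3R^{2} - 2R^{3}, \qquad R_{\text{parallel}} = 1 - (1-R)^{2}.

TMR beats a single unit only for R > 1/2, which is one reason spares and repair matter for long missions.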

Journal ArticleDOI
TL;DR: Traditional approaches to human reliability are examined and a technique which permits the system designer to derive a mutually exclusive and exhaustive set of operator error categories in a man-computer system is presented.
Abstract: This paper briefly examines traditional approaches to human reliability and presents a technique which permits the system designer to derive a mutually exclusive and exhaustive set of operator error categories in a man-computer system. These error categories are defined in terms of process failures and provide the system designer with a qualitative index suitable for determining error causes and consequences. The technique is demonstrated, and the utility of the resulting error categories is evaluated in the context of two studies on a military information processing system. The paper concludes with a brief discussion of detectable and non-detectable errors and a suggestion for determining the impact of errors on ultimate system goals.