
Showing papers on "Failure rate" published in 1970


Journal ArticleDOI
TL;DR: In this article, the minimal cut lower bound on the reliability of a coherent system, derived in Esary-Proschan [6] for the case of independent components not subject to maintenance, is shown to hold under a variety of component maintenance policies and in several typical cases of component dependence.
Abstract: In this article the minimal cut lower bound on the reliability of a coherent system, derived in Esary-Proschan [6] for the case of independent components not subject to maintenance, is shown to hold under a variety of component maintenance policies and in several typical cases of component dependence. As an example, the lower bound is obtained for the reliability of a “two out of three” system in which each component has an exponential life length and an exponential repair time. The lower bound is compared numerically with the exact system reliability; for realistic combinations of failure rate, repair rate, and mission time, the discrepancy is quite small.
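
As a rough illustration (not taken from the paper), the sketch below compares the Esary-Proschan minimal cut lower bound with the exact reliability of a 2-out-of-3 system of independent, identical, non-maintained components with exponential life lengths; the failure rate and mission time are assumed values chosen for illustration.

```python
import math

# Minimal cut lower bound vs. exact reliability for a 2-out-of-3 system
# of independent, identical components with exponential life lengths
# (the unmaintained Esary-Proschan setting). The minimal cut sets are
# the three component pairs; a cut "fails" only if both members fail.

def component_reliability(lam, t):
    """Exponential life with failure rate lam, mission time t."""
    return math.exp(-lam * t)

def exact_2_of_3(p):
    """P(at least 2 of 3 independent components survive)."""
    return 3 * p**2 - 2 * p**3

def minimal_cut_bound(p):
    """Product over the three minimal cuts of P(cut not failed)."""
    return (1 - (1 - p) ** 2) ** 3

lam, t = 0.01, 5.0        # assumed failure rate and mission time
p = component_reliability(lam, t)
print(f"component reliability    p = {p:.6f}")
print(f"exact system reliability   = {exact_2_of_3(p):.6f}")
print(f"minimal cut lower bound    = {minimal_cut_bound(p):.6f}")
```

For this combination of parameters the bound falls within a few parts in ten thousand of the exact value, consistent with the small discrepancies the paper reports.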

97 citations


Journal ArticleDOI
TL;DR: In this paper, the authors obtained bounds for the mean life of series and parallel systems in the case of component life distributions that have properties such as a monotone failure rate, monotone failure rate average, or decreasing density.
Abstract: Some inequalities are obtained which yield bounds for the mean life of series and of parallel systems in the case where component life distributions have properties such as a monotone failure rate, monotone failure rate average, or decreasing density. These bounds are based on comparisons with systems of exponential or uniform components. Similar comparisons are obtained when components have Weibull or Gamma distributions with different shape parameters. Some inequalities are also obtained for convolutions of life distributions helpful in the study of replacement policies.
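
A Monte Carlo sketch, with assumed parameters, of the kind of comparison the paper makes analytically: the mean life of series and parallel systems of IFR (Weibull, shape > 1) components is set against systems of exponential components having the same component mean.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)
n_components, n_trials = 5, 200_000
shape = 2.0                  # Weibull shape > 1 gives an increasing failure rate
mean_life = 100.0            # common component mean life (assumed)

# Choose the Weibull scale so the component mean matches mean_life:
# E[T] = scale * Gamma(1 + 1/shape).
scale = mean_life / gamma(1 + 1 / shape)

weib = scale * rng.weibull(shape, size=(n_trials, n_components))
expo = rng.exponential(mean_life, size=(n_trials, n_components))

# A series system fails at the minimum component life,
# a parallel system at the maximum.
print("series   mean life, IFR Weibull :", weib.min(axis=1).mean())
print("series   mean life, exponential :", expo.min(axis=1).mean())
print("parallel mean life, IFR Weibull :", weib.max(axis=1).mean())
print("parallel mean life, exponential :", expo.max(axis=1).mean())
```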

44 citations


Journal ArticleDOI
TL;DR: This paper utilizes the “infant mortality” or decreasing failure rate effect to improve the reliability of repairable devices, and shows that the profitability of burn-in testing increases with the complexity of the repairable device.
Abstract: The subject of this paper is the utilization of the “infant mortality” or decreasing failure rate effect to improve the reliability of repairable devices. Decreasing failure rate implies the possibility that devices which exhibit it can be improved by “burn-in testing” of each unit. Such a test serves to accumulate operating time while shielded from the full costs and consequences of failure. A general formulation of the burn-in test decision for repairable devices is presented and some special cases are solved. A class of models, indexed by the degree of partial replacement present in the repair process, is considered and numerical results for the optimal policy are given for several members of that class. A comparison of those results reveals that the profitability of testing increases with the complexity of the repairable device.
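
A minimal sketch of the burn-in decision under assumed numbers (the Weibull parameters, cost figures, and simplifications below are illustrative, not the paper's models): with a decreasing failure rate, accumulated burn-in time lowers the chance of a costly field failure, and trading this against the cost of testing yields an interior optimum.

```python
import numpy as np

# Burn-in decision sketch. A Weibull with shape < 1 has a decreasing
# failure rate ("infant mortality"), so accumulated operating time
# makes a surviving unit less likely to fail in the field.
shape, scale = 0.5, 1000.0      # assumed life distribution (hours)
T = 500.0                       # field mission length, hours (assumed)
c_hour = 0.05                   # cost per burn-in hour (assumed)
c_field = 100.0                 # cost of a failure in the field (assumed)

def surv(t):
    return np.exp(-(t / scale) ** shape)

def expected_cost(b):
    """Burn-in cost plus expected field-failure cost for a unit that
    survives burn-in of length b (units failing during burn-in are
    ignored in this simplified sketch)."""
    p_field_failure = (surv(b) - surv(b + T)) / surv(b)
    return c_hour * b + c_field * p_field_failure

burn_ins = np.linspace(0.0, 300.0, 601)
costs = [expected_cost(b) for b in burn_ins]
best = burn_ins[int(np.argmin(costs))]
print(f"no burn-in:      expected cost {expected_cost(0.0):.2f}")
print(f"optimal burn-in: ~{best:.0f} h, expected cost {min(costs):.2f}")
```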

18 citations


Journal ArticleDOI
TL;DR: In this paper, the authors clarified that failure rate and conditional probability density are by no means the same, made the distinction between the two concepts more precise, and briefly considered mean time between failures and dimensionality.
Abstract: Failure rate is the most commonly used term in reliability and related engineering interests, yet it is still not well understood by perhaps the majority of those using it. Indeed, more than one reliability treatise wrongly defines failure rate as a conditional probability density, i.e., wrong by the accepted criterion of what constitutes any probability density. The fact that these two concepts are by no means the same is clarified and made more precise. Mean time between failures and dimensionality are also briefly considered.
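
The distinction can be made concrete numerically. In the sketch below (illustrative Weibull parameters assumed), the failure rate h(t) = f(t)/S(t) exceeds 1 by a wide margin and its integral grows without bound (it equals -log S(t)), so it cannot satisfy the accepted criterion for a probability density, whose integral is 1.

```python
import numpy as np

# The failure (hazard) rate is h(t) = f(t) / S(t), with f the density
# and S the survival function. Unlike a density, h can exceed 1 and
# its integral diverges: integral_0^t h(u) du = -log S(t) -> infinity
# as t grows, for any proper life distribution.

shape, scale = 3.0, 1.0          # illustrative Weibull parameters (assumed)

t = np.linspace(1e-6, 5.0, 500_000)
f = (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)
S = np.exp(-(t / scale) ** shape)
h = f / S                        # here h(t) = 3 t^2

dt = t[1] - t[0]
print("max of h on [0, 5]        :", h.max())         # ~75, far above 1
print("integral of h over [0, 5] :", (h * dt).sum())  # ~125 = -log S(5)
print("integral of f over [0, 5] :", (f * dt).sum())  # ~1, as a density must be
```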

8 citations


01 Mar 1970
TL;DR: Dynamic Programming methodology is introduced for inspection models, which deal with operating systems whose stochastic failure is detected by observations carried out intermittently; the approach can be utilized for any type of failure rate: increasing, decreasing, or mixed.
Abstract: Inspection models deal with operating systems whose stochastic failure is detected by observations carried out intermittently. Solutions of the problems under consideration using differentiation have previously been given by the authors. In the current study, Dynamic Programming methodology is introduced for this purpose. The approach has many potential advantages: it can be utilized for any type of failure rate, whether increasing, decreasing, or mixed. Furthermore, the method is applicable if additional types of costs are introduced, or if costs are time dependent.
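
A minimal dynamic-programming sketch in the spirit of the paper, though not the authors' exact formulation: inspections are scheduled on a discrete time grid to minimize expected inspection cost plus the cost of time during which a failure goes undetected. Only the hazard sequence h is an input, so increasing, decreasing, or mixed failure rates are all handled; the numerical values are assumed.

```python
import numpy as np

# DP over a discrete grid 0, 1, ..., N. The system fails in a random
# period T; failure is only discovered at an inspection. Costs: c_I per
# inspection, c_u per period of undetected failure. h[k] is the hazard
# P(fail in period k+1 | alive at its start) -- any shape is allowed.

N = 40
c_I, c_u = 1.0, 0.5
h = np.linspace(0.01, 0.15, N)            # e.g. an increasing failure rate

# q[k] = P(T = k), s[k] = P(T > k), built from the hazard sequence.
s = np.ones(N + 1)
q = np.zeros(N + 1)
for k in range(1, N + 1):
    q[k] = s[k - 1] * h[k - 1]
    s[k] = s[k - 1] * (1.0 - h[k - 1])

V = np.full(N + 1, np.inf)                # V[i]: min expected future cost,
V[N] = 0.0                                # given known alive at time i
nxt = np.zeros(N + 1, dtype=int)
for i in range(N - 1, -1, -1):
    for j in range(i + 1, N + 1):         # candidate next inspection time
        # expected undetected time if failure occurs in (i, j]
        undet = sum(q[k] * (j - k) for k in range(i + 1, j + 1)) / s[i]
        cost = c_I + c_u * undet + (s[j] / s[i]) * V[j]
        if cost < V[i]:
            V[i], nxt[i] = cost, j

# recover the optimal inspection epochs
sched, i = [], 0
while i < N:
    i = nxt[i]
    sched.append(i)
print("optimal inspection times:", sched)
print("expected total cost:", round(V[0], 3))
```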

3 citations


Proceedings ArticleDOI
01 Apr 1970
TL;DR: In this paper, a special test chip designed to evaluate the reliability of Cogar's high performance read-write memory system is described in detail, and a test pattern for each of the necessary tests is designed.
Abstract: The reliability of a large scale integrated circuit chip is an aggregate of the reliability of each of its constituents. Therefore, the total failure rate is determined by simply evaluating the contribution of each constituent individually and independently. This is achieved by subjecting each part (e.g. a device or a conductor) to stress testing and subsequently translating its failure rate to machine-use conditions. Such tests, however, cannot be performed on a complex product chip because it is impossible to isolate a desired area. The only answer to this problem is to design a test pattern for each of the necessary tests. To illustrate the effectiveness of this vehicle, a special test chip designed to evaluate the reliability of Cogar's high performance read-write memory system is described in detail.
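
The aggregation step described above can be sketched as a series (competing-risks) model: with constant failure rates, the chip rate is the count-weighted sum of per-constituent rates, each translated from stress to machine-use conditions. The counts, rates, and acceleration factors below are invented for illustration.

```python
# Series (competing-risks) aggregation of constituent failure rates:
# with constant rates, the chip failure rate is the count-weighted sum
# of per-constituent rates, each translated from stress to use
# conditions (here via an assumed acceleration factor).

# (constituent, count per chip, stress failure rate /hour, acceleration factor)
constituents = [
    ("transistor",  2500, 2.0e-9, 50.0),
    ("metal line",  8000, 5.0e-10, 80.0),
    ("via/contact", 6000, 1.0e-9, 60.0),
]

chip_rate = 0.0
for name, count, stress_rate, accel in constituents:
    use_rate = stress_rate / accel          # translate to machine-use conditions
    contribution = count * use_rate
    chip_rate += contribution
    print(f"{name:12s} contributes {contribution:.3e} failures/hour")

print(f"total chip failure rate: {chip_rate:.3e} failures/hour")
print(f"= {chip_rate * 1e9:.1f} FIT")       # failures per 1e9 device-hours
```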

3 citations


Proceedings ArticleDOI
01 Apr 1970
TL;DR: In this paper, a technique is introduced which optimizes the selection of parts for system application by reliability/quality levels through systematizing the compilation and processing of necessary data; the comparative influence of system cost and performance parameters such as repair cost, storage time, cost of failure, and mission duty cycle is discussed.
Abstract: A technique is introduced which optimizes the selection of parts for system application by reliability/quality levels through systematizing the compilation and processing of necessary data. The comparative influence of system cost and performance parameters such as repair cost, storage time, cost of failure, and mission duty cycle is discussed. Promising approaches to parts optimization at the subsystem level are reviewed. A means for evaluating the system impact of items of uncertain failure rate is illustrated.
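
A toy version of the underlying trade-off (the quality grades and all numbers below are hypothetical, and the paper's technique systematizes far more cost and performance parameters): for a single part, choose the quality level minimizing purchase price plus expected failure cost over the mission.

```python
# Picking a reliability/quality level for one part by minimizing
# purchase price plus expected cost of failures over the mission.
# All figures are assumed for illustration.

mission_hours = 10_000.0
duty_cycle = 0.3            # fraction of the mission the part operates
cost_of_failure = 5_000.0   # repair + consequence cost per failure

# quality level -> (unit price, failure rate per operating hour)
grades = {
    "commercial": (2.0, 5.0e-6),
    "industrial": (8.0, 1.0e-6),
    "hi-rel":     (40.0, 1.0e-7),
}

def expected_total_cost(price, lam):
    expected_failures = lam * duty_cycle * mission_hours
    return price + cost_of_failure * expected_failures

for grade, (price, lam) in grades.items():
    print(f"{grade:10s}: total expected cost {expected_total_cost(price, lam):8.2f}")

best = min(grades, key=lambda g: expected_total_cost(*grades[g]))
print("selected quality level:", best)
```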

3 citations


Journal ArticleDOI
J.M. Grange, J. Dorleans

2 citations



Journal ArticleDOI
TL;DR: In this article, the reliability of a system is bounded by utilizing additional information on the behavior of the failure rate as a function of age, beyond commonly used data such as moments of the life distribution (mainly the first moment) and life percentiles obtained experimentally.
Abstract: When bounds are sought for the reliability of a system, all available information is applied, e.g., moments of the life distribution (mainly the first moment), life percentiles obtained experimentally, and general data. Barlow et al. [1] and Epstein [2] presented bounds for a monotone-increasing or decreasing failure rate, and bounded the reliability by referring to a system with a constant failure rate. In this paper, the reliability is bounded by utilizing additional information on the behavior of the failure rate as a function of age.
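
One classical bound of the kind this line of work builds on (presented in, e.g., Barlow et al.): if the life distribution is IFR with known mean mu, then R(t) >= exp(-t/mu) for t < mu, i.e., over that range the unit is at least as reliable as one with constant failure rate 1/mu. The sketch below checks this on a Weibull with shape > 1; the parameters are assumed for illustration.

```python
import numpy as np
from math import gamma

# IFR lower bound: for an IFR life distribution with mean mu,
#     R(t) >= exp(-t / mu)   for t < mu.
# Verified here on a Weibull with shape > 1 (an IFR distribution).

shape, scale = 2.5, 1.0               # assumed Weibull parameters
mu = scale * gamma(1 + 1 / shape)     # Weibull mean life

t = np.linspace(0.0, mu * 0.999, 9)   # ages below the mean
weibull_rel = np.exp(-(t / scale) ** shape)
exp_bound = np.exp(-t / mu)

for ti, r, b in zip(t, weibull_rel, exp_bound):
    print(f"t={ti:.3f}  R(t)={r:.4f}  bound={b:.4f}  ok={r >= b}")
```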