
Showing papers on "Reliability theory published in 1977"


Journal ArticleDOI
TL;DR: The usual constrained reliability optimization problem is extended to include determining the optimal level of component reliability and the number of redundancies in each stage; the Hooke and Jeeves pattern search technique, combined with the heuristic approach of Aggarwal et al., is used to solve the problem.
Abstract: The usual constrained reliability optimization problem is extended to include determining the optimal level of component reliability and the number of redundancies in each stage. With cost, weight, and volume constraints, the problem is one in which the component reliability is a variable, and the optimal trade-off between adding components and improving individual component reliability is determined. This is a mixed integer nonlinear programming problem in which the system reliability is to be maximized as a function of component reliability level and the number of components used at each stage. The model is illustrated with three general nonlinear constraints imposed on the system. The Hooke and Jeeves pattern search technique in combination with the heuristic approach of Aggarwal et al. is used to solve the problem. The Hooke and Jeeves pattern search technique is a sequential search routine for maximizing the system reliability, RS(R, X). The argument in the Hooke and Jeeves pattern search is the component reliability, R, which is varied according to exploratory moves and pattern moves until the maximum of RS(R, X) is obtained. The heuristic approach is applied to each value of the component reliability, R, to obtain the optimal number of redundancies, X, which maximizes RS(R, X) for the stated R.
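The search loop described above can be sketched in a few lines. This is my own simplified illustration, not the authors' implementation: pattern moves, the cost/weight/volume constraints, and the Aggarwal-style redundancy heuristic are omitted, and the stage model (parallel redundancy in a series system) and the reliability bounds are assumptions.

```python
def system_reliability(r, x):
    """R_S(R, X) for a series system of stages: stage j holds x[j]
    parallel components, each with reliability r[j]."""
    rs = 1.0
    for rj, xj in zip(r, x):
        rs *= 1.0 - (1.0 - rj) ** xj
    return rs

def hooke_jeeves(r0, x, step=0.05, tol=1e-4, lo=0.5, hi=0.999):
    """Exploratory moves on each component reliability; the step is
    halved when no move improves R_S(R, X)."""
    r = list(r0)
    best = system_reliability(r, x)
    while step > tol:
        improved = False
        for j in range(len(r)):
            for delta in (step, -step):
                trial = list(r)
                trial[j] = min(hi, max(lo, r[j] + delta))
                val = system_reliability(trial, x)
                if val > best:
                    r, best, improved = trial, val, True
        if not improved:
            step /= 2.0
    return r, best
```

With the redundancy vector X held fixed, the exploratory moves climb toward the best component reliabilities; in the full method each candidate R would also trigger the heuristic search for the optimal X.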

168 citations


Journal ArticleDOI
TL;DR: The paper presents a method for obtaining an optimal reliability allocation of an n-stage series system; the method is compared with others on test problems of optimal parallel redundancy under linear constraints.
Abstract: The paper presents a method for obtaining an optimal reliability allocation of an n-stage series system. In each stage, redundant components can be added (in parallel, stand-by, or k-out-of-n:G, etc.), or a more reliable component can be used in order to improve the system reliability. The solution is obtained by repeatedly using a more reliable candidate at each stage that has the greatest value of a `weighted sensitivity function'. The balance between the objective function and the constraints is controlled by a `balancing coefficient'. The overall computational procedure is given and an example is presented. The computations are given for a set of randomly generated test problems in which the optimal parallel redundancy under linear constraints is determined. The proposed method is then compared with other methods.
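A greedy allocation in this spirit can be sketched as follows; the paper's actual weighted sensitivity function and balancing coefficient are not reproduced, and the relative gain-per-cost ratio used here is an illustrative stand-in.

```python
def greedy_redundancy(r, cost, budget):
    """Start each stage with one unit; repeatedly add a parallel unit to
    the stage whose relative reliability gain per unit cost is greatest,
    until the budget is exhausted."""
    x = [1] * len(r)
    spent = 0.0
    while True:
        best_j, best_ratio = None, 0.0
        for j, (rj, cj) in enumerate(zip(r, cost)):
            if spent + cj > budget:
                continue
            now = 1.0 - (1.0 - rj) ** x[j]
            after = 1.0 - (1.0 - rj) ** (x[j] + 1)
            ratio = (after / now - 1.0) / cj   # relative gain per cost
            if ratio > best_ratio:
                best_j, best_ratio = j, ratio
        if best_j is None:
            break
        x[best_j] += 1
        spent += cost[best_j]
    return x
```

As in the paper's procedure, the weakest stage tends to attract redundancy first, since its marginal improvement per unit of constraint consumption is largest.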

112 citations


Journal ArticleDOI
TL;DR: Some appropriate reliability measures of 1-unit systems, 2-unit standby systems, and a system with unrepairable spare units are given, and optimum preventive maintenance policies which maximize or minimize these measures are derived under suitable conditions.
Abstract: This paper summarizes my recent work in analyzing preventive maintenance of the following kinds of repairable systems: 1-unit systems, 2-unit standby systems, and a system with unrepairable spare units. Some appropriate reliability measures of such systems are given, and optimum preventive maintenance policies which maximize or minimize these measures are derived under suitable conditions.

54 citations


Journal ArticleDOI
TL;DR: The problem of estimating the reliability of a system which is undergoing development testing is considered from a Bayesian standpoint in this paper, where m sets of binomial trials are performed under conditions which lead to an ordering, θ1 < θ2 < ... < θm, of the binomial parameters.
Abstract: The problem of estimating the reliability of a system which is undergoing development testing is considered from a Bayesian standpoint. Formally, m sets of binomial trials are performed under conditions which lead to an ordering, θ1 < θ2 < ... < θm, of the binomial parameters. The parameter of interest is θm, the final underlying reliability of the system. The marginal posterior pdf for θm is easily obtained when uniform prior pdf's are assumed. The method is illustrated.
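Under uniform priors, the joint posterior is the product of independent Beta(s_i + 1, f_i + 1) densities restricted to the ordered region, so a simple rejection sampler gives a Monte Carlo approximation of the posterior mean of θm. The code below is a hedged sketch of that idea, not the paper's closed-form marginal pdf.

```python
import random

def posterior_mean_theta_m(data, n_keep=500, seed=1):
    """Monte Carlo sketch: each theta_i has an unconstrained
    Beta(s_i + 1, f_i + 1) posterior; rejection keeps only draws
    satisfying theta_1 < ... < theta_m, and the kept values of theta_m
    estimate its marginal posterior mean.
    data = [(successes, failures), ...] for the m test sets."""
    rng = random.Random(seed)
    kept = []
    while len(kept) < n_keep:
        draw = [rng.betavariate(s + 1, f + 1) for s, f in data]
        if all(a < b for a, b in zip(draw, draw[1:])):
            kept.append(draw[-1])
    return sum(kept) / len(kept)
```

For sharply ordered data the acceptance rate is high; for nearly equal stage reliabilities a smarter sampler would be needed.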

49 citations


Journal ArticleDOI
TL;DR: A Markov-chain model is used and the numerical difficulties associated with large transition-probability matrices are reduced by a systematic ordering of the system states.
Abstract: This paper presents a methodology for calculating the time-dependent reliability of a large system consisting of s-dependent components. A Markov-chain model is used and the numerical difficulties associated with large transition-probability matrices are reduced by a systematic ordering of the system states. A technique is also presented for the systematic merging of processes corresponding to systems exhibiting symmetries.
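In discrete time, the basic computation behind such a model looks like the sketch below; the paper's contributions (systematic state ordering to tame large transition matrices, and merging of symmetric processes) are not shown.

```python
def markov_reliability(P, p0, failed, steps):
    """Propagate the state-probability row vector through a discrete-time
    transition matrix P and return the probability of not being in any
    failed (absorbing) state after the given number of steps."""
    p = list(p0)
    n = len(p)
    for _ in range(steps):
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return 1.0 - sum(p[j] for j in failed)
```

For a 2-state chain that stays up with probability 0.9 per step, the 2-step reliability is 0.9², matching the direct calculation.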

47 citations


Journal ArticleDOI
TL;DR: The paper considers a series system consisting of n components with constant failure rates and obtains a component testing procedure of minimum cost that accepts the system if and only if the number of component failures does not exceed k, where k is a given integer.
Abstract: The paper considers a series system consisting of n components with constant failure rates and obtains a component testing procedure of minimum cost such that a) the probability of accepting a system whose reliability is less than R0 is less than α, and b) the probability of rejecting a system whose reliability is greater than R1 is less than β. A decision rule is used that accepts the system if and only if the number of component failures does not exceed k, where k is a given integer. The optimum value of k and the associated component testing times which will satisfy the above probability requirements are obtained. The optimum testing time is the same for each component, irrespective of the individual component testing costs.
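For intuition, the operating characteristic of the "accept iff failures ≤ k" rule can be computed directly when total failures over the test follow a Poisson process (constant failure rates); the paper's optimization over k and the testing times is not reproduced here.

```python
import math

def accept_prob(lam_total, T, k):
    """Operating characteristic of the acceptance rule: with constant
    failure rates summing to lam_total and total test exposure T, the
    number of failures is Poisson(lam_total * T), and the system is
    accepted iff that count does not exceed k."""
    mu = lam_total * T
    return sum(math.exp(-mu) * mu ** i / math.factorial(i)
               for i in range(k + 1))
```

Plotting this probability against the true failure rate shows how k and T jointly control the two risk requirements a) and b).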

43 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe some fundamental stage combinations that can be used to simulate a wide range of distributions, and refer the reader to more-extensive explanations.
Abstract: Several techniques are available for solving reliability models containing non-exponential state residence times. All of these techniques, other than the method of stages, involve derivation of closed-form expressions in time domain or Laplace form. Though the closed-form solutions are useful, they are often impossible to derive due to complexity of the interstate transition rates. Stage combinations can be used either to fit the available data or approximate a known probability distribution. The main advantage with this method is that though an explicit mathematical expression does not emerge, the numerical solution can be generally obtained. This paper briefly describes some fundamental stage combinations that can be used to simulate a wide range of distributions, and refers to more-extensive explanations.

36 citations


Journal ArticleDOI
TL;DR: In this article, a new distribution having a finite range is proposed for life testing and reliability, and a uniformly most powerful test for the shape parameter is derived assuming the location and scale parameters known.
Abstract: A new class of distributions having a finite range that will be useful in life testing and reliability is proposed. Methods of estimating the unknown parameters are studied. Explicit expressions for lower moments of order statistics in complete random samples of any size are given. A uniformly most powerful test is derived for the `shape' parameter assuming the location and scale parameters known. An asymptotically optimal test procedure is also suggested when the location and scale parameters are unknown.

33 citations


Journal ArticleDOI
TL;DR: A substitutionary decomposition method for computing the reliability of a redundant system S given by a Boolean expression is proposed; system S is decomposed into two subsystems according to the up- and down-states of its keystone variable x.
Abstract: A substitutionary decomposition method for computing the reliability of a redundant system S given by a Boolean expression is proposed. System S is decomposed into two subsystems S(x) and S(x') according to the up- and down-states of its keystone variable x. This is repeated until all terms become s-independent in each decomposed subsystem. A criterion for choosing the keystone variable and a property which saves computation time are obtained.
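The keystone (pivotal) decomposition R(S) = p_x R(S | x up) + (1 − p_x) R(S | x down) is easy to illustrate on the classic five-link bridge network, where conditioning on the center link leaves two series-parallel subsystems with s-independent terms. The bridge example is my illustration, not the paper's.

```python
def bridge_reliability(p):
    """Keystone decomposition on the 5-link bridge network, pivoting on
    the center link (index 4).  p = [p1, p2, p3, p4, p5]."""
    p1, p2, p3, p4, p5 = p

    def par(a, b):  # two-unit parallel combination
        return 1.0 - (1.0 - a) * (1.0 - b)

    up = par(p1, p2) * par(p3, p4)    # center link up: midpoints merge
    down = par(p1 * p3, p2 * p4)      # center link down: two series paths
    return p5 * up + (1.0 - p5) * down
```

With all link reliabilities equal to 0.9 this gives 0.97848, the well-known bridge value.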

29 citations


Journal ArticleDOI
TL;DR: Shpilberg et al. as mentioned in this paper presented a summary review of work in the area of modeling the probability distribution of fire loss amount, and tried to illustrate how probabilistic arguments relating to the physical nature of the fire growth process can aid analysts in their choice of an appropriate model for the probability distribution of fire loss amount.
Abstract: Theoretical distributions frequently used to model fire loss amount are discussed. The problem of selecting models solely on the basis of statistics is addressed. Use of probabilistic arguments applied in Reliability theory to infer the type of probability distribution, is explored. The concept of failure rate of a fire is discussed and used to explore implications of the Pareto and Lognormal models as to the fire growth phenomenon. It is concluded that probabilistic arguments, regarding the nature of the fire growth process can aid analysts in their choice of an appropriate model for the probability distribution of fire loss amount. It is a basic assumption in all actuarial research and risk theory studies that there is a probability distribution of loss amount underlying the risk process. In other words, if a loss occurs, there is the probability S(x) that the loss will be for an amount less than or equal to x. In theoretical studies this distribution often is presented as continuous, having a derivative S'(x) = s(x), which is called the probability density function of fire loss amount. At a certain point in time, the results of the theoretical work have to be applied to practical situations. For example, the distribution of actual losses experienced by an insurer is then considered as a sample from an underlying model without defining the corresponding distribution, the characteristics of which are taken to agree with the corresponding statistics of the observed distribution. Many results can be obtained simply by using these sample statistics. However, it is often more desirable to work with analytically defined loss distributions, and the statistics are then used to establish suitable values of the parameters involved in the analytical distributions. When working with these analytical distributions, the researcher must make use of properties of the distributions other than those covered by the statistics observed.
David Shpilberg, Ph.D., is Associate Professor of Operations Research and Insurance at the Instituto de Estudios Superiores de Administración (IESA), a Graduate School of Management in Caracas, Venezuela. The research for this paper was partly financed by a grant of the Factory Mutual Research Corporation. The paper was presented at the 1976 annual meeting of The American Risk and Insurance Association. Dr. Shpilberg received the 1975 Journal of Risk and Insurance Award for the best paper published in the 1975 issues.
There are two main aspects of general insurance in which a knowledge of the structure of the elements of risk variation is needed: first, in the rate making process; and second, in dealing with the question of financial stability (monetary risk). Traditional methods of rate making are based only on an estimate of the mean expected loss. Financial stability studies (e.g., studies addressing the probability of ruin of an insurer or evaluating the risk of unbearable monetary loss for a corporation which chooses not to insure its property) usually are based on an estimate of the variance of the possible loss. However, in the area of industrial fire losses, the probability distributions involved are markedly skewed in character (very small probabilities of a very large loss). Knowledge of the higher moments (in essence, the shape of the tail of the distribution) becomes essential if meaningful quantitative estimates of risk are to be made. Most often, this step involves assumptions regarding the behavior of losses larger than those observed in the sample of available loss experience.
Thus, unless there is some theoretical support (not merely observed statistics) for an inference that a particular type of probability distribution is a more reasonable model for the distribution of fire loss amounts as a function of size, inferences derived for any region of the distribution outside the available data will be no better than a straight extrapolation on the data. This paper presents a summary review of work in the area of modeling the probability distribution of fire loss amount, and attempts to illustrate how probabilistic arguments relating to the physical nature of the phenomenon (an approach extensively used in life testing of material failure and in reliability analysis of systems' components) can effectively aid in the choice of an appropriate model.
Fire Loss As a Stochastic Process
The total amount of fire losses in a given period can be modeled as a risk process characterized by two stochastic variables: the number of fires and the amount of the losses. If Pr(t) = probability of r losses in the observed period t, S(x) = probability that, given a fire loss, its amount is ≤ x, and S*r(x) = the rth convolution of the distribution function of fire loss amount, S(x), then the probability (see Figure 1) that the total loss in a period of length t is ≤ x can be expressed as the compound distribution Σr Pr(t) S*r(x).
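The "failure rate of a fire" argument can be made concrete: the hazard h(x) = s(x)/(1 − S(x)) of a Pareto loss model decreases in x, while a lognormal hazard rises and then falls. A small sketch, with illustrative parameterizations of my own choosing:

```python
import math

def pareto_hazard(x, alpha, x0):
    """Loss 'failure rate' for a Pareto model with survival (x0/x)**alpha,
    x >= x0: h(x) = alpha / x, decreasing in x (the larger the fire, the
    harder it is to stop per unit of further growth)."""
    return alpha / x

def lognormal_hazard(x, mu, sigma):
    """The same quantity for a lognormal model, evaluated numerically;
    it first rises and then falls, implying a different growth story."""
    z = (math.log(x) - mu) / sigma
    pdf = math.exp(-z * z / 2.0) / (x * sigma * math.sqrt(2.0 * math.pi))
    sf = 0.5 * math.erfc(z / math.sqrt(2.0))
    return pdf / sf
```

Comparing the two hazard shapes against what is physically known about fire growth is exactly the kind of reliability-style argument the paper advocates.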

28 citations


Journal ArticleDOI
TL;DR: The computer program presented in this paper applies to this type of graph and evaluates upper and lower bounds to three system-reliability characteristics: availability, mean time-to-fail, and mean time -to-repair.
Abstract: Reliability graphs having only one source node and one sink node, containing no feedback loops, and whose every event is directed are commonplace in the chemical process industry. The use of highly sophisticated techniques for evaluating the reliability characteristics of these graphs is not necessary in this situation; such graphs facilitate the use of simple techniques which are computationally very efficient. The computer program presented in this paper applies to this type of graph and evaluates upper and lower bounds to three system-reliability characteristics: availability, mean time-to-fail, and mean time-to-repair. Data required are simply a mean time-to-fail and a mean time-to-repair for each event together with structural information relating each event to the graph. The program is written in FORTRAN IV; a listing and instructions for its use are available from the author and in a Supplement.

Journal ArticleDOI
TL;DR: In this article, an interval reliability of a 1-unit system with repair was derived explicitly for the case in which T and x are distributed exponentially, and the optimum preventive maintenance policies were discussed for maximizing the limiting interval reliability.
Abstract: This paper discusses an interval reliability R(x, T) of a 1-unit system with repair. The R(x, T) is derived explicitly for the case in which T and x are distributed exponentially. This paper also discusses the optimum preventive maintenance policies maximizing the limiting interval reliability when T is constant and maximizing the interval reliability when T has an exponential distribution.

Journal ArticleDOI
TL;DR: The paper formulates an optimal reliability design problem for a series system made of parallel redundant subsystems with a cost-constraint and a solution method for the formulated problems is presented.
Abstract: The paper formulates an optimal reliability design problem for a series system made of parallel redundant subsystems. The variables for optimization are the number of redundant units in each subsystem and the reliability of each unit. There is a cost-constraint. The time for which the system reliability exceeds a specified value is to be maximized. Similarly the cost could be minimized for a constraint on the mission time and reliability. A solution method for the formulated problems is presented along with an example.

Journal ArticleDOI
TL;DR: In this article, the authors consider a 2-unit warm-standby redundant system whose switch has two failure modes (switching an operating unit out of operation when it should not, or failing to switch the standby in when it should), and derive the stochastic behavior of system failure.
Abstract: The switch has two different failure modes: It switches an operating unit out of operation when it should not, and does not switch the standby in when it should. We consider a 2-unit warm-standby redundant system with two switching failure modes, and derive the stochastic behavior of system failure. Several reliability models are shown as special cases.

Journal ArticleDOI
TL;DR: In this paper, the authors examined small-sample techniques for estimating the change in reliability without the benefit of test data from the improved system, which can also be adapted to estimating reliability using data from a test program conducted in stages.
Abstract: Test programs are conducted to identify system failure modes and to estimate reliability. If the system can be changed so that some of the identified failure modes are eliminated and new failure modes are not introduced, the reliability of the system is improved. This paper examines small-sample techniques for estimating the change in reliability without the benefit of test data from the improved system. A decision to make the improvements can be based on the estimated increase in reliability. The procedures used for estimating changes can also be adapted to estimating reliability using data from a test program conducted in stages. For this case, a sample is taken at each stage and changes are made so that all newly identified failure modes which are correctable are eliminated from the system. Simulation is used to study reliability estimators both when just one sample is taken and when sampling is conducted in stages.

Journal ArticleDOI
TL;DR: Determining the reliability of a social measurement usually poses quite a different problem from determining the reliability of a physical measurement, whereas determining the validity of measurement is very similar in the two cases, as mentioned in this paper.
Abstract: One thing that differentiates measurement in the social sciences from measurement in the physical sciences is that most of the instruments used in the social sciences consist of "items" which are gathered together to form a "test." A measurement of a person's height is a single number which can be read off a scale, but a measurement of a person's intelligence is arrived at by combining scores obtained on various test items. Therefore, the determination of the reliability of a social measurement usually poses quite a different problem than the determination of the reliability of a physical measurement, whereas the determination of the validity of measurement is very similar in the two cases. (Whether a scale is "really" measuring height and whether a test is "really" measuring intelligence are quite comparable considerations.) Classical reliability theory from Spearman (1904) to Gulliksen (1950) assumed, explicitly or implicitly, that psychological tests were composed of relatively large numbers of items drawn from extremely large pools of items. Modifications of this theory (e.g., Lord and Novick, 1968; Cronbach et al., 1972), while substituting the notion of randomly parallel for rigorously parallel, made the same assumption. Even the currently-controversial theories on the reliability of criterion-referenced measurements (e.g., Livingston, 1972; Harris 1972; Hambleton and Novick, 1973) postulate the existence of several items which, taken together, constitute a test. But what if the test consists of just one dichotomously-scored item, such as "How much is two plus two?", "Who is the President of the United States?", etc. which may or may not be drawn from, much less be representative of, a larger item domain? 
With the exception of a few articles written many years ago (e.g., Holzinger, 1932; Guttman, 1946) and occasional references to the use of the generalized Spearman-Brown formula in reverse (e.g., Emrick, 1971), surprisingly little attention has been given in the educational and psychological literature to the reliability of an item.
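The "generalized Spearman-Brown formula in reverse" mentioned above has a one-line form: if r is the reliability of a test and its length is changed by a factor k, the predicted reliability is kr/(1 + (k − 1)r); k < 1 applies it in reverse, down to a single item. A sketch:

```python
def spearman_brown(r, k):
    """Generalized Spearman-Brown prophecy formula: reliability of a test
    whose length is changed by factor k (k = 1/n estimates a single
    item's reliability from an n-item test's reliability)."""
    return k * r / (1.0 + (k - 1.0) * r)
```

So a 10-item test with reliability 0.9 implies a single-item reliability of roughly 0.47 under the formula's parallel-items assumption.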

Journal ArticleDOI
TL;DR: A thermal design approach emphasizing the coupling of heat transfer theory and experiment to basic reliability considerations is illustrated by discussing typical thermal problems on several system integration levels for a high reliability, multi-cabinet hybrid computer system.
Abstract: Thermal design and analysis are important in Reliability/Availability/Maintainability R/A/M programs for electronic systems. A thermal design approach emphasizing the coupling of heat transfer theory and experiment to basic reliability considerations is illustrated here by discussing typical thermal problems on several system integration levels for a high reliability, multi-cabinet hybrid computer system. This paper emphasizes a particular attitude toward the thermal aspects of equipment design, one that is flexible in the methods of analysis, comprehensive in its treatment of each integration level in a complementary fashion, and oriented toward the goals of the R/A/M program as a whole. The examples show that conventional analytic solutions, when coupled with reliability theory and a modicum of experimental results, lead to effective thermal design.

Journal ArticleDOI
TL;DR: In this article, a model for the s-expected cost of a development testing program is presented, where the total cost function consists of two terms: the first term is proportional to the duration of the testing program; the second term is a loss function that assesses additional costs for failure to meet reliability goals during the test program.
Abstract: A model for the s-expected cost of a development testing program is presented in this paper. The total cost function consists of two terms. The first term is proportional to the duration of the testing program; the second term is a loss function that assesses additional costs for failure to meet reliability goals during the testing program. The reliability growth model assumes that failures during the program occur according to a nonhomogeneous Poisson process having a power-law rate. An example shows how the duration of the test program can be chosen to minimize s-expected total cost.
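A hedged sketch of the two ingredients: the power-law mean function of the NHPP, and a toy total-cost minimization over a grid of candidate test durations. The cost and penalty forms below are illustrative assumptions, not the paper's loss function.

```python
def expected_failures(t, lam, beta):
    """Mean function of the power-law NHPP: E[N(t)] = lam * t**beta."""
    return lam * t ** beta

def best_duration(c_time, c_short, goal_rate, lam, beta, grid):
    """Toy total cost: a term proportional to test duration plus a loss
    charged when the achieved failure intensity lam*beta*t**(beta-1)
    still exceeds the goal; returns the cheapest duration on the grid."""
    def cost(t):
        intensity = lam * beta * t ** (beta - 1.0)
        return c_time * t + c_short * max(0.0, intensity - goal_rate)
    return min(grid, key=cost)
```

With beta < 1 the intensity falls over time (reliability growth), so a longer test trades time cost against the shortfall penalty, as in the paper's s-expected-cost trade-off.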

Journal ArticleDOI
TL;DR: A method is presented for allocating reliability to each unit of a system with a view to minimizing the system cost and there is a need for manufacturers and users to make these data available to reliability theoreticians so that the derived results are usefully applied to practical systems.
Abstract: A method is presented for allocating reliability to each unit of a system with a view to minimizing the system cost. The practical utility of this method, as well as other methods, depends heavily on the availability of cost-reliability data for the constituent units. Unfortunately, for most components such data are not readily available. There is a need for manufacturers and users to make these data available to reliability theoreticians so that the derived results are usefully applied to practical systems. So far, very little seems to have been done in this direction.

Journal ArticleDOI
TL;DR: In this paper, the authors developed three models for standby redundant systems consisting of dissimilar units, where priorities are assigned to operating the units as well as for repair in one repair facility.
Abstract: This paper develops 3 models for standby redundant systems consisting of dissimilar units. In an effort to increase system reliability, priorities are assigned to operating the units as well as for repair in one repair facility. A 1-out-of-3:G system is considered in models 1 and 2. The operating schedule is fixed in the order of the top, middle, and low priorities which are assigned to the 3 units. The effect on system reliability of choosing two repair disciplines viz., 1) Head-of-line (Model 1), and 2) Preemptive-resume (Model 2), is studied. Markov renewal processes are used for the head-of-line, and the supplementary variable technique for the preemptive-resume repair discipline. The distribution of time to system failure and its mean are derived. In model 3, a 1-out-of-2:G system is considered in which the priority unit (unit 1) is subject to both repair and preventive maintenance while only repair is considered for unit 2. Expressions for the Laplace transforms of the various state probabilities, the availability, and the steady-state availability have been obtained by using the supplementary variable technique.

Journal ArticleDOI
Thomas E. Case1
TL;DR: In this paper, a reduction technique is described for obtaining a simplified reliability expression (probability of success) when applied to the canonical form of minterms (having independent variables) such as those generated from a truth table format.
Abstract: A reduction technique is described for obtaining a simplified reliability expression (probability of success) when applied to the canonical form of minterms (having independent variables) such as those generated from a truth table format. The resulting terms are always mutually exclusive, which allows simple, direct transformation to a probability expression.
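Because canonical minterms are mutually exclusive, a reliability expression can be evaluated by summing state probabilities straight off the truth table. The sketch below shows that direct transformation, without the paper's reduction step, so it is exponential in the number of components and only suitable for small systems.

```python
from itertools import product

def reliability_from_truth_table(success, p):
    """Sum the probabilities of all success minterms.  Minterms over the
    full component-state space are mutually exclusive (and the component
    variables s-independent), so the direct sum is a valid probability
    expression."""
    total = 0.0
    for state in product((0, 1), repeat=len(p)):
        if success(state):
            prob = 1.0
            for s, pi in zip(state, p):
                prob *= pi if s else 1.0 - pi
            total += prob
    return total
```

A series structure corresponds to `success = all` and a parallel structure to `success = any`.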

Proceedings ArticleDOI
01 Jan 1977
TL;DR: In this paper, a linear quadratic control problem is formulated which accounts for system effectiveness and gives an offline procedure for comparing two linear quadratic control systems on the basis of both reliability and performance.
Abstract: The linear quadratic optimal control method is used today to solve many complex systems problems. As system complexity increases, and as linear quadratic optimal control is used in more demanding situations, the extension of the design methodology to cover system failures, robustness and reliability is of crucial importance. This paper documents progress toward a theory which incorporates reliability in the performance index; a linear quadratic control problem is formulated which accounts for system effectiveness and gives an offline procedure for comparing two linear quadratic control systems on the basis of both reliability and performance.

Journal ArticleDOI
TL;DR: A new recursive algorithm for computing bounds on the reliability of a directed, source-sink network whose arcs either function or fail with known probabilities that is always tighter than the well-known Esary-Proschan bounds.
Abstract: This paper presents a new recursive algorithm for computing bounds on the reliability of a directed, source-sink network whose arcs either function or fail with known probabilities. The reliability is the probability that a path (consisting only of functioning arcs) exists from the network's source to its sink. The algorithm is based on a partitioning of the nodes of the network into subsets S1, S2,..., SP such that all predecessors of a node belonging to Sp(2)

Journal ArticleDOI
TL;DR: In this paper, the authors considered multiple s-independent grouped censored samples with failure times unknown and derived the maximum likelihood estimates for the two-parameter Weibull distribution with and without failure times.
Abstract: Aircraft or missiles are flown for missions of varying durations. Data are collected at the end of each mission which indicate the mission duration and whether the equipment failed. The data are considered as multiple s-independent grouped censored samples with failure times unknown. The underlying failure model considered is the 2-parameter Weibull distribution. Maximum likelihood estimates are derived. The exponential distribution is used for comparison. Monte Carlo simulations are used to compare s-efficiency of estimates for grouped data with estimates if failure times were known. The asymptotic variance-covariance matrix was computed for the sampling conditions studied and was used to obtain lower s-confidence bounds on the system reliability.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate what happens to the Bayes estimates when the shape parameter in the failure model is incorrectly specified (e.g. when the model is assumed to be Poisson when it is not), and show that there is little change in the s-efficiencies of the estimates as measured by s-expected squared error loss.
Abstract: A unit is placed on test for a fixed time, and the number of failures is observed. The stochastic process generating the failures is assumed to have s-independent, Erlang distributed times between failures. Bayes estimates of reciprocal MTBF (RMTBF) and reliability are given where the loss function is squared error and the prior distribution for RMTBF is gamma. We investigate what happens to the Bayes estimates when the shape parameter in the failure model is incorrectly specified (e.g., the failure model is assumed to be Poisson when it is not). This question is answered for parameters which are typical of a wide range of actual military equipment failure data. As the shape parameter in the failure model changes 1) there is only a small to moderate change in the estimates of RMTBF; 2) there is a small to moderate change in the estimate of reliability for small numbers of failures but a larger change for an unusually large number of failures; 3) there is little change in the s-efficiencies of the estimates as measured by s-expected squared error loss. For the range of parameters in this study, not much is lost in s-efficiency by restricting attention to the mathematically tractable Erlang failure model instead of using a more general gamma failure model.

Journal ArticleDOI
W. G. Schneeweiss1
TL;DR: In this article, the pdf of the duration of non-self-reporting faults is derived; it depends on the pdf of the item's life and the pdf of the distance between checks.
Abstract: The pdf of the duration of hidden (non-self-reporting) faults is derived. It depends on the pdf of the item's life and the pdf of the distance between checks.

Journal ArticleDOI
TL;DR: In this article, a method for apportioning reliability growth to the subsystems that make up a system in order to achieve the required reliability at least cost is presented, which is handled as an s-expected cost minimization problem subject to the constraint of meeting a system reliability requirement.
Abstract: A method is presented for apportioning reliability growth to the subsystems that make up a system in order to achieve the required reliability at least cost. Reliability growth apportionment is handled as an s-expected cost minimization problem subject to the constraint of meeting a system reliability requirement. The problem is formulated in terms of Duane's reliability growth model, and is solved using geometric programming. The method can be useful in the early stages of system design to determine subsystem reliability growth that will allow a system reliability requirement to be met, and in the latter stages of system design when reliability has fallen short of the required goal and improvements are necessary.

Journal ArticleDOI
TL;DR: In this article, an approach to reliability design seeks to minimize the variance on system lifetime within s-expected life and economic constraints, and is illustrated by simple numerical examples which show that for a small increase in cost an appreciable decrease in the variance of the lifetime is obtained.
Abstract: This approach to reliability design seeks to minimize the variance on system lifetime within s-expected life and economic constraints. We consider the case where the parameters of the lifetime distribution are constant and discuss the more general case where the parameters of the lifetime distribution are random variables. The design procedure is illustrated by simple numerical examples which show that for a small increase in cost an appreciable decrease in the variance of the lifetime is obtained. We have restricted our attention to active redundancy but the extension to standby redundancy is straightforward.
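For a flavor of the mean-variance trade-off, the lifetime of n active-parallel units with i.i.d. exponential lifetimes has closed-form moments, since the maximum decomposes into independent exponential spacings. This standard result is my illustration, not the paper's design model.

```python
def parallel_exp_moments(n, lam):
    """Mean and variance of the lifetime of n active-parallel units with
    i.i.d. exponential(lam) lifetimes: the maximum is a sum of
    independent spacings with rates n*lam, (n-1)*lam, ..., lam."""
    mean = sum(1.0 / (i * lam) for i in range(1, n + 1))
    var = sum(1.0 / (i * lam) ** 2 for i in range(1, n + 1))
    return mean, var
```

Adding a second unit raises the mean lifetime by 50 percent but raises the variance only 25 percent, illustrating how redundancy can buy variance reduction relative to the mean for a given cost.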

Journal ArticleDOI
TL;DR: Policy iteration technique of dynamic programming is used to solve the problem to determine the maximum-reliability route and optimally distribute a given number, M, of parallel redundant links on this path.
Abstract: The network consists of imperfect nondirected links and perfect nodes. For each link, some i.i.d. parallel redundant links will be attached, thus improving the reliability of communication between that pair of nodes. The problem is to determine the maximum-reliability route and optimally distribute a given number, M, of parallel redundant links on this path. Each redundant link has the reliability of the original link between the two nodes. Policy iteration technique of dynamic programming is used to solve the problem.
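The maximum-reliability route itself can also be found with a Dijkstra-style search that maximizes the product of link reliabilities; the paper uses policy iteration, but this greedy search returns the same route when all reliabilities lie in [0, 1] (it is a shortest path under -log weights). The redundancy-distribution step is omitted.

```python
import heapq

def max_reliability_path(n, links, src, dst):
    """Most-reliable route over n nodes.
    links: {node: [(neighbor, link_reliability), ...]}."""
    best = [0.0] * n
    best[src] = 1.0
    heap = [(-1.0, src)]
    while heap:
        neg_r, u = heapq.heappop(heap)
        r = -neg_r
        if r < best[u]:
            continue            # stale heap entry
        if u == dst:
            return r
        for v, p in links.get(u, []):
            nr = r * p
            if nr > best[v]:
                best[v] = nr
                heapq.heappush(heap, (-nr, v))
    return best[dst]
```

In the full problem, each link on the chosen route would then receive some of the M redundant links, with the parallel combination 1 − (1 − p)^(m+1) replacing p.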

Journal ArticleDOI
TL;DR: In this article, the problem of determining the optimal allocation of test effort among individual components of a system is addressed, using knowledge of the relationship between component uncertainty and system uncertainty, and component and system test costs.
Abstract: This paper is concerned with the problem of determining the optimal allocation of test effort among individual components of a system. Using knowledge of the relationship between component uncertainty and system uncertainty, and component and system test costs, the test allocation is determined so as to minimize the variance of an estimator of overall system reliability. The optimal allocations for a series system and a parallel system are examined as special cases. The sensitivity of the optimal allocation is examined with respect to differences in system configuration.