
Showing papers in "Naval Research Logistics in 1997"


Journal ArticleDOI
Albert Y. Ha
TL;DR: In this paper, the problem of production control and stock rationing in a make-to-stock production system with two priority customer classes and backordering is formulated as a queueing control model.
Abstract: This article considers the problem of production control and stock rationing in a make-to-stock production system with two priority customer classes and backordering. The problem is formulated as a queueing control model. With Poisson arrivals and exponential production times, we show that the optimal production control and stock-rationing policies can be characterized by a single, monotone switching curve. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 457–472, 1997

222 citations
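
The optimal policy above is a monotone switching curve; a fixed base-stock level S with a critical rationing level c is the simplest stationary policy of that flavor. Below is a minimal simulation sketch of such a policy, not the article's optimal control: all rates and levels are illustrative assumptions, and rationed class-2 demand is treated as lost rather than backordered as in the article.

```python
import random

def simulate_critical_level(S=5, c=2, lam1=0.4, lam2=0.4, mu=1.0,
                            horizon=200_000, seed=1):
    """Simulate a make-to-stock queue under a critical-level policy:
    produce whenever stock < S; class 1 is always served from stock
    (backordered if empty); class 2 is served only while stock > c
    (treated as lost here for simplicity). Returns the class-2 fill rate."""
    random.seed(seed)
    t, stock, backlog1, served2, lost2 = 0.0, S, 0, 0, 0
    while t < horizon:
        prod = mu if stock < S else 0.0
        total = lam1 + lam2 + prod
        t += random.expovariate(total)
        u = random.uniform(0.0, total)
        if u < lam1:                      # high-priority demand
            if stock > 0: stock -= 1
            else: backlog1 += 1
        elif u < lam1 + lam2:             # low-priority demand
            if stock > c: stock -= 1; served2 += 1
            else: lost2 += 1              # rationed below the critical level
        else:                             # production completion
            if backlog1 > 0: backlog1 -= 1
            else: stock += 1
    return served2 / (served2 + lost2)

print(simulate_critical_level())
```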


Journal ArticleDOI
TL;DR: An analytical model of preventive maintenance and safety stock strategies in a production environment subject to random machine breakdowns is formulated and optimality conditions under which either one or both strategies should be implemented to minimize the associated cost function are provided.
Abstract: In this article we formulate an analytical model of preventive maintenance and safety stock strategies in a production environment subject to random machine breakdowns. Traditionally, preventive maintenance and safety stocks have been independently studied as two separate strategies for coping with machine breakdowns. Our intent is to develop a unified framework so that the two are jointly considered. We illustrate the trade-off between investing in the two options. In addition, we provide optimality conditions under which either one or both strategies should be implemented to minimize the associated cost function. Specifically, cases with deterministic and exponential repair time distributions are analyzed in detail. We include numerical examples to illustrate the determination of optimal strategies for preventive maintenance and safety stocks. © 1997 John Wiley & Sons, Inc.

129 citations
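
As a rough illustration of the trade-off, the toy cost model below combines a periodic PM cost, breakdown costs driven by a Weibull-style cumulative hazard, holding cost on safety stock, and shortages during repairs. It is not the paper's formulation; every parameter value is an assumption.

```python
import numpy as np

def cost_rate(T, s, cp=50.0, cf=500.0, h=2.0, cs=30.0,
              eta=20.0, beta=2.0, d=4.0, mr=1.5):
    """Toy joint cost rate: PM every T time units at cost cp; expected
    breakdowns per cycle follow a Weibull cumulative hazard (T/eta)**beta
    at cost cf each; safety stock s carries holding cost h and absorbs
    the demand d*mr lost during each repair, at unit shortage cost cs."""
    failures_per_time = (T / eta) ** beta / T
    shortage = max(0.0, d * mr - s)
    return (cp / T + cf * failures_per_time
            + h * s + cs * failures_per_time * shortage)

Ts = np.linspace(1, 40, 200)
ss = np.linspace(0, 10, 101)
best = min((cost_rate(T, s), T, s) for T in Ts for s in ss)
print(f"min cost rate {best[0]:.2f} at T={best[1]:.1f}, s={best[2]:.1f}")
```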


Journal ArticleDOI
TL;DR: The results provide evidence that the normal approximation of lead-time demand is robust with respect to both cost and service for seven major industry groups.
Abstract: Previous studies criticize the general use of the normal approximation of lead-time demand on the grounds that it can lead to serious errors in safety stock. We reexamine this issue for the distribution of fast-moving finished goods. We first determine the optimal reorder points and quantities by using the classical normal-approximation method and a theoretically correct procedure. We then evaluate the misspecification error of the normal-approximation solution with respect to safety stock, logistics-system costs, total costs (logistics costs, including acquisition costs), and fill rates. The results provide evidence that the normal approximation is robust with respect to both cost and service for seven major industry groups. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 165–186, 1997

INTRODUCTION: The (s, Q) continuous-review inventory model has been widely used and extensively studied in the literature. In the stochastic version of this model, where demand (D) and lead time (L) are independent random variables, the analyst must know the distribution of lead-time demand (X) to assess stockout risks or expected shortages. The simplest way to model lead-time demand is to determine the first two moments of X and then assume that X has a normal distribution. This classic approach, found in virtually every textbook on production-inventory, operations, and logistics management, rests on two premises: (1) the underlying distribution of X is not important; (2) the first two moments are the only relevant parameters. Many scholars, however, have criticized the use of this normal-theory approach on two grounds. First, the distribution of X is likely to have a nonnormal shape, which means that the normal approximation will produce errors in the estimates of replenishment levels and thus the amount of safety stock needed to support customer-service targets or to counterbalance stockout costs. Second, if the coefficient of variation of X is large, say greater than

102 citations
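
The two-moment normal approximation discussed above is easy to state concretely. Assuming independent per-period demand D and lead time L, a common textbook sketch is:

```python
from statistics import NormalDist

def reorder_point(mu_d, sd_d, mu_l, sd_l, csl=0.95):
    """Two-moment normal approximation of lead-time demand X:
    E[X] = mu_d*mu_l and Var[X] = mu_l*sd_d**2 + mu_d**2*sd_l**2.
    The reorder point is the csl quantile (cycle service level) of
    Normal(E[X], sqrt(Var[X])); safety stock is the excess over E[X]."""
    mu_x = mu_d * mu_l
    var_x = mu_l * sd_d ** 2 + mu_d ** 2 * sd_l ** 2
    s = NormalDist(mu_x, var_x ** 0.5).inv_cdf(csl)
    return s, s - mu_x   # reorder point, safety stock

print(reorder_point(mu_d=100, sd_d=30, mu_l=2.0, sd_l=0.5))
```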


Journal ArticleDOI
TL;DR: A general global shooting procedure is proposed and justified for finding truly global representations of the efficient sets of multiple objective mathematical programs.
Abstract: We propose and justify the proposition that finding truly global representations of the efficient sets of multiple objective mathematical programs is a worthy goal. We summarize the essential elements of a general global shooting procedure that seeks such representations. This procedure illustrates the potential benefits to be gained from procedures for globally representing efficient sets in multiple objective mathematical programming. © 1997 John Wiley & Sons, Inc.

84 citations


Journal ArticleDOI
TL;DR: In this article, an optimization model for submarine berth planning is presented and demonstrated with Naval Submarine Base, San Diego, California; the effect on the solution of revisions in the input is kept small by incorporating a persistence incentive in the optimization model.
Abstract: Submarine berthing plans reserve mooring locations for inbound U.S. Navy nuclear submarines prior to their port entrance. Once in port, submarines may be shifted to different berthing locations to allow them to better receive services they require or to make way for other shifted vessels. However, submarine berth shifting is expensive, labor intensive, and potentially hazardous. This article presents an optimization model for submarine berth planning and demonstrates it with Naval Submarine Base, San Diego. After a berthing plan has been approved and published, changed requests for services, delays, and early arrival of inbound submarines are routine events, requiring frequent revisions. To encourage trust in the planning process, the effect on the solution of revisions in the input is kept small by incorporating a persistence incentive in the optimization model. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 301–318, 1997.

Although the Cold War has ended, United States Navy submarines remain very capable and effective ships of war: a smaller number of submarines operated from fewer submarine bases will continue to play a significant role in national defense. The wise use of time and resources while submarines are in port will improve the state of readiness of a smaller fleet. While in port, a submarine completes preventive and corrective maintenance, replenishes stores, and conducts training and certification tests to maintain high material and personnel readiness. Ideally, a submarine in port should devote its time exclusively to these activities. However, submarines frequently spend time shifting berths. Some shifts are necessary and some are not. Services such as ordnance loading and the use of special maintenance equipment require that a submarine be moored at a specific location. During periodic maintenance upkeep, personnel from a submarine tender assist the submarine crew, and berthing near the tender is preferable. During training, inspection, and other periods, it is desirable to berth closer to shore, near squadron offices and training facilities. When conditions permit,

77 citations
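
A toy illustration of the persistence incentive: penalize deviations from the previously published plan inside the objective, so small input revisions produce small plan revisions. The brute-force assignment below is only a sketch; the costs, penalty weight rho, and problem size are assumptions, and the article solves a full optimization model at scale.

```python
from itertools import permutations

def plan_berths(cost, prev=None, rho=2.0):
    """Assign each submarine (row) a berth (column) minimizing service
    cost plus a persistence penalty rho for each deviation from the
    previously published plan `prev`. Brute force over permutations."""
    n = len(cost)
    def total(p):
        c = sum(cost[i][p[i]] for i in range(n))
        if prev is not None:
            c += rho * sum(p[i] != prev[i] for i in range(n))
        return c
    return min(permutations(range(n)), key=total)

cost = [[3, 1, 4], [2, 5, 1], [6, 2, 3]]
first = plan_berths(cost)                    # initial published plan
cost[0][first[0]] += 3                       # an input revision arrives
print(plan_berths(cost, prev=first, rho=2))  # revised, persistence-biased plan
```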


Journal ArticleDOI
TL;DR: Two new models of the labor staffing and scheduling problems are introduced that avoid the limitations of existing models and are expected to have broad applicability in a wide range of organizations operating in both competitive and noncompetitive environments.
Abstract: The problems of labor staffing and scheduling have received substantial attention in the literature. We introduce two new models of the labor staffing and scheduling problems that avoid the limitations of existing models. Collectively, the models have five important attributes. First, both models ensure the delivery of a minimally acceptable level of service in all periods. Second, one model can identify the least expensive way of delivering a specified aggregate level of customer service (the labor staffing problem and a form of labor scheduling problem). Third, the other model can identify the highest level of service attainable with a fixed amount of labor (the other form of the labor scheduling problem). Fourth, the models enable managers to identify the Pareto relationship between labor costs and customer service. Fifth, the models allow a degree of control over service levels that is unattainable with existing models. Because of these attributes, which existing models largely do not possess, we expect these models to have broad applicability in a wide range of organizations operating in both competitive and noncompetitive environments. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 719–740, 1997

77 citations


Journal ArticleDOI
TL;DR: In this article, a polynomial decomposition heuristic is developed for the parallel-machine tardiness problem (PsT) by extending the decomposition principle embedded in the 1&sol/T to a parallel machine setting.
Abstract: A polynomial decomposition heuristic is developed for the parallel-machine tardiness problem (PsT) by extending the decomposition principle embedded in the single-machine tardiness problem (1&sol/T) to a parallel-machine setting. The subproblems generated by the decomposition are solved by an effective heuristic that yields solutions such that the schedule on any individual machine satisfies the single-machine decomposition principle. A hybrid simulated annealing heuristic tailored to the P&sol/T problem is also presented. Computational results demonstrate the efficiency and effectiveness of the decomposition heuristic. © 1997 John Wiley & Sons, Inc.

76 citations
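
For orientation, a common baseline for P//T (not the paper's decomposition heuristic) is plain list scheduling: dispatch jobs in earliest-due-date order onto the least-loaded machine and total the tardiness. Job data below are illustrative.

```python
def total_tardiness(jobs, m):
    """EDD list-scheduling baseline for the parallel-machine tardiness
    problem: sort jobs by due date, always place the next job on the
    currently least-loaded machine, and sum the resulting tardiness.
    jobs = [(processing_time, due_date), ...]."""
    loads = [0.0] * m
    tardiness = 0.0
    for p, d in sorted(jobs, key=lambda j: j[1]):   # EDD dispatch order
        k = loads.index(min(loads))                 # least-loaded machine
        loads[k] += p
        tardiness += max(0.0, loads[k] - d)
    return tardiness

print(total_tardiness([(4, 5), (2, 6), (7, 9), (3, 7), (5, 12)], m=2))
```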


Journal ArticleDOI
TL;DR: A simple approximation approach to the continuous demand formulation is proposed, combining the simplicity of the discrete approach with the approximate accuracy of the continuous-demand location solution.
Abstract: Location models commonly represent demand as discrete points rather than as continuously spread over an area. This modeling technique introduces inaccuracies to the objective function and consequently to the optimal location solution. In this article this inaccuracy is investigated by the study of a particular competitive facility location problem. First, the location problem is formulated over a continuous demand area. The optimal location for a new facility that optimizes the objective function is obtained. This optimal location solution is then compared with the optimal location obtained for a discrete set of demand points. Second, a simple approximation approach to the continuous demand formulation is proposed. The location problem can be solved by using the discrete demand algorithm while significantly reducing the inaccuracies. In this way, the simplicity of the discrete approach is combined with the approximate accuracy of the continuous-demand location solution. Extensive analysis and computations of the test problem are reported. It is recommended that this approximation approach be considered for implementation in other location models. © 1997 John Wiley & Sons, Inc.

70 citations


Journal ArticleDOI
TL;DR: It is shown that the optimal burn-in times that minimize the considered cost functions never exceed the first change point of the failure-rate function.
Abstract: Warranty is an important factor for consumer durable products in the marketplace. However, the warranty cost may drastically reduce profitability. Burn-in is a common procedure to improve the quality of products after they have been produced, but it is also costly. By taking both the burn-in procedure and warranty policy into consideration, several cost functions can be formulated and optimized. Assuming that the failure-rate function of the product has a bathtub shape, it is shown that the optimal burn-in times that minimize the considered cost functions never exceed the first change point of the failure-rate function. The continuous dependence of the optimal burn-in times on the model parameters and the underlying distribution is also established. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 199–209, 1997

61 citations
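
A numeric sketch of the main result, under an assumed piecewise-linear bathtub rate with first change point t1 = 2 and a toy cost that combines burn-in duration, repairs during burn-in, and warranty repairs over a window w. All constants are illustrative, not the paper's.

```python
import numpy as np

def rate(t):
    """Assumed bathtub failure rate: decreasing on [0, 2), constant on
    [2, 8), increasing after t = 8 (change points t1 = 2, t2 = 8)."""
    return np.where(t < 2, 1.0 - 0.4 * t,
                    np.where(t < 8, 0.2, 0.2 + 0.3 * (t - 8)))

def cum_hazard(t, n=2001):
    """Cumulative hazard by the trapezoidal rule."""
    x = np.linspace(0.0, t, n)
    y = rate(x)
    return float(((y[1:] + y[:-1]) / 2 * np.diff(x)).sum())

def cost(b, w=5.0, c_burn=0.5, c_fail=1.0, c_warr=4.0):
    """Toy objective: burn-in time cost, minimal repairs during burn-in,
    and warranty repairs over (b, b + w]."""
    return (c_burn * b + c_fail * cum_hazard(b)
            + c_warr * (cum_hazard(b + w) - cum_hazard(b)))

bs = np.linspace(0.0, 6.0, 121)
b_star = bs[int(np.argmin([cost(b) for b in bs]))]
print(f"optimal burn-in ~ {b_star:.2f}, below the first change point t1 = 2")
```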


Journal ArticleDOI
TL;DR: This research focuses on the use of insights gained from the solution of a relaxed optimization model in developing heuristic procedures to schedule projects with multiple constrained resources, and it is shown that a heuristic procedure with embedded priority rules that uses information from the revised solution of this model increases project net present value.
Abstract: Resource-constrained project scheduling with cash flows occurs in many settings, ranging from research and development to commercial and residential construction. Although efforts have been made to develop efficient optimal procedures to maximize the net present value of cash flows for resource-constrained projects, the inherent intractability of the problem has led to the development of a variety of heuristic methods to aid in the development of near-optimal schedules for large projects. This research focuses on the use of insights gained from the solution of a relaxed optimization model in developing heuristic procedures to schedule projects with multiple constrained resources. It is shown that a heuristic procedure with embedded priority rules that uses information from the revised solution of a relaxed optimization model increases project net present value. The heuristic procedure and nine different embedded priority rules are tested in a variety of project environments that account for different network structures, levels of resource constrainedness, and cash-flow parameters. Extensive testing with problems ranging in size from 21 to 1000 activities shows that the new heuristic procedures dominate heuristics using information from the critical path method (CPM), and in most cases outperform heuristics from previous research. The best performing heuristic rules classify activities into priority and secondary queues according to whether they lead to immediate progress payments, thus front loading the project schedule. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 365–381, 1997

54 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of determining the sample sizes in various strata when several characteristics are under study is formulated as a nonlinear multistage decision problem, and dynamic programming is used to obtain an integer solution to the problem.
Abstract: The problem of determining the sample sizes in various strata when several characteristics are under study is formulated as a nonlinear multistage decision problem. Dynamic programming is used to obtain an integer solution to the problem. © 1997 John Wiley & Sons, Inc.
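
For a single characteristic, the recursion is the classic one-dimensional integer allocation DP: treat strata as stages and the remaining sample size as the state. A sketch under the usual variance objective, with V_h = (N_h * S_h)**2 per stratum; the paper's multi-characteristic version is more involved.

```python
def allocate(variances, n):
    """Integer DP for stratified sampling with one characteristic:
    minimize sum_h V_h / n_h subject to sum_h n_h = n and n_h >= 1,
    where V_h = (N_h * S_h)**2 for stratum h."""
    H, INF = len(variances), float("inf")
    # best[h][m] = min cost using the first h strata with m units allocated
    best = [[INF] * (n + 1) for _ in range(H + 1)]
    choice = [[0] * (n + 1) for _ in range(H + 1)]
    best[0][0] = 0.0
    for h, V in enumerate(variances, 1):
        for m in range(h, n + 1):
            for k in range(1, m - h + 2):    # leave >= 1 unit per later stratum
                c = best[h - 1][m - k] + V / k
                if c < best[h][m]:
                    best[h][m], choice[h][m] = c, k
    n_h, m = [], n                            # backtrack the allocation
    for h in range(H, 0, -1):
        n_h.append(choice[h][m]); m -= choice[h][m]
    return best[H][n], n_h[::-1]

print(allocate([400.0, 100.0, 25.0], n=20))
```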

Journal ArticleDOI
TL;DR: In this paper, the authors establish various inventory replenishment policies to solve the problem of determining the timing and number of replenishments, and analytically compare various models, and identify the best alternative among them based on minimizing total relevant costs.
Abstract: We establish various inventory replenishment policies to solve the problem of determining the timing and number of replenishments. We then analytically compare various models, and identify the best alternative among them based on minimizing total relevant costs. Furthermore, we propose a simple and computationally efficient optimal method in a recursive fashion, and provide two examples for illustration. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 791–806, 1997

Journal ArticleDOI
TL;DR: In this paper, a design procedure for variable sampling plans that assure the loss criterion in Taguchi's method is proposed, and some numerical results based on the proposed design procedure are illustrated.
Abstract: Taguchi has presented an approach to quality improvement in which reduction of deviation from the target value is the guiding principle. In this approach, any measured value x of a product characteristic X generally brings a loss to the consumer, where the loss is expressed as a quadratic function of the difference between the measured value x and the target value T of the product characteristic. It is therefore natural to reject a lot that may bring a large loss to the consumer. This concept leads us to construct new variable sampling plans based on Taguchi's loss criterion. In this article, a design procedure for sampling plans that assure the loss under Taguchi's method is proposed. Some numerical results based on the proposed design procedure are illustrated. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 591–603, 1997
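
The quadratic loss underlying these plans, L(x) = k(x - T)^2, has expected value k((mu - T)^2 + sigma^2), so a lot is penalized for both bias and spread. The numbers below are illustrative:

```python
def expected_loss(mu, sigma, target, k):
    """Taguchi's quadratic loss L(x) = k*(x - T)**2 gives the standard
    identity E[L(X)] = k*((mu - T)**2 + sigma**2): expected loss splits
    into an off-target (bias) term and a variance term."""
    return k * ((mu - target) ** 2 + sigma ** 2)

# An on-target but high-spread lot can carry more expected loss than a
# slightly biased, tighter lot:
print(expected_loss(mu=10.0, sigma=0.8, target=10.0, k=5.0))  # 3.2
print(expected_loss(mu=10.2, sigma=0.3, target=10.0, k=5.0))  # 0.65
```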

Journal ArticleDOI
TL;DR: In this paper, the authors consider the effect of a buyer-imposed acceptance sampling policy on the optimal batch size and optimal quality level delivered by an expected cost minimizing supplier, and derive the conditions under which zero defects (100% conformance) is the policy that minimizes the supplier's expected annual cost.
Abstract: Acceptance sampling is often used to monitor the quality of raw materials and components when product testing is destructive, time-consuming, or expensive. In this paper we consider the effect of a buyer-imposed acceptance sampling policy on the optimal batch size and optimal quality level delivered by an expected cost minimizing supplier. We define quality as the supplier's process capability, i.e., the probability that a unit conforms to all product specifications, and we assume that unit cost is an increasing function of the quality level. We also assume that the supplier faces a known and constant “pass-through” cost, i.e., a fixed cost per defective unit passed on to the buyer. We show that the acceptance sampling plan has a significant impact on the supplier's optimal quality level, and we derive the conditions under which zero defects (100% conformance) is the policy that minimizes the supplier's expected annual cost. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 515–530, 1997

Journal ArticleDOI
TL;DR: This work considers the problem of allocation of K active spares to a series system of independent and identical components in order to optimize the failure-rate function of the system.
Abstract: Allocation of spare components in a system in order to optimize the lifetime of the system with respect to a suitable criterion is of considerable interest in reliability, engineering, industry, and defense. We consider the problem of allocation of K active spares to a series system of independent and identical components in order to optimize the failure-rate function of the system.

Journal ArticleDOI
TL;DR: In this article, a stochastic counterpart of the well-known earliness-tardiness scheduling problem with a common due date is considered, in which n tasks are to be processed on a single machine and due dates of the jobs are random variables following a common probability distribution.
Abstract: We consider a stochastic counterpart of the well-known earliness-tardiness scheduling problem with a common due date, in which n stochastic jobs are to be processed on a single machine. The processing times of the jobs are independent and normally distributed random variables with known means and known variances that are proportional to the means. The due dates of the jobs are random variables following a common probability distribution. The objective is to minimize the expectation of a weighted combination of the earliness penalty, the tardiness penalty, and the flow-time penalty. One of our main results is that an optimal sequence for the problem must be V-shaped with respect to the mean processing times. Other characterizations of the optimal solution are also established. Two algorithms are proposed, which can generate optimal or near-optimal solutions in pseudo-polynomial time. The proposed algorithms are also extended to problems where processing times do not satisfy the assumption in the model above, and are evaluated when processing times follow different probability distributions, including general normal (without the proportional relation between variances and means), uniform, Laplace, and exponential.
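
A V-shaped sequence places long jobs at the two ends and short jobs in the middle. One simple way to construct such an order from mean processing times, as a sketch of the structural property only (not the paper's algorithms):

```python
def v_shaped(mean_times):
    """Build one V-shaped order with respect to mean processing times:
    deal the jobs, sorted longest-first, alternately to the two sides,
    so the sequence decreases to a minimum and then increases."""
    left, right = [], []
    for i, p in enumerate(sorted(mean_times, reverse=True)):
        (left if i % 2 == 0 else right).append(p)
    return left + right[::-1]   # decreasing, then increasing

print(v_shaped([3, 9, 1, 7, 5]))   # [9, 5, 1, 3, 7]
```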

Journal ArticleDOI
TL;DR: This article analyzes continuous-review versions of the classical obsolescence problem in inventory theory using continuous dynamic programming to investigate structural properties of the problem and propose explicit and workable solution techniques.
Abstract: Inventory models of modern production and service operations should take into consideration possible exogenous failures or the abrupt decline of demand resulting from obsolescence. This article analyzes continuous-review versions of the classical obsolescence problem in inventory theory. We assume a deterministic demand model and general continuous random times to obsolescence (“failure”). Using continuous dynamic programming, we investigate structural properties of the problem and propose explicit and workable solution techniques. These techniques apply to two fairly wide (and sometimes overlapping) classes of failure distributions: those which are increasing in failure rate and those which have finite support. Consequently, several specific failure processes in continuous time are given exact solutions. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 757–774, 1997

Journal ArticleDOI
TL;DR: In this paper, a cost-based assignment scheme is proposed whereby the cost of early completion may differ in form and/or degree from that of tardy behavior, and two classical approaches (OLS regression and mathematical programming) as well as a neural network methodology for solving this problem are developed and compared on three hypothetical shops using simulation techniques.
Abstract: Traditional methods of due-date assignment presented in the literature and used in practice generally assume cost-of-earliness and cost-of-tardiness functions that may bear little resemblance to true costs. For example, practitioners using ordinary least-squares (OLS) regression implicitly minimize a quadratic cost function symmetric about the due date, thereby assigning equal second-order costs to early completion and tardy behavior. In this article the consequences of such assumptions are pointed out, and a cost-based assignment scheme is suggested whereby the cost of early completion may differ in form and/or degree from the cost of tardiness. Two classical approaches (OLS regression and mathematical programming) as well as a neural-network methodology for solving this problem are developed and compared on three hypothetical shops using simulation techniques. It is found for the cases considered that: (a) implicitly ignoring cost-based assignments can be very costly; (b) simpler regression-based rules cited in the literature are very poor cost performers; (c) if the earliness and tardiness cost functions are both linear, linear programming and neural networks are the methodologies of choice; and (d) if the form of the earliness cost function differs from that of the tardiness cost function, neural networks are statistically superior performers. Finally, it is noted that neural networks can be used for a wide range of cost functions, whereas the other methodologies are significantly more restricted.
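
For the linear-cost case, one concrete cost-based rule is a quantile: with per-unit earliness cost c_e and tardiness cost c_t, the due date minimizing expected cost is the c_t/(c_e + c_t) quantile of the completion-time distribution, a newsvendor-type fact. The sketch below demonstrates it on simulated completion times; the gamma distribution and cost values are assumptions.

```python
import numpy as np

def cost_based_due_date(completions, c_early, c_tardy):
    """With linear costs c_early*(d - C)+ + c_tardy*(C - d)+, expected
    cost is minimized at the c_tardy/(c_early + c_tardy) quantile of
    the completion-time distribution C."""
    return np.quantile(completions, c_tardy / (c_early + c_tardy))

# Simulated completion times (assumed gamma-distributed flow times):
completions = np.random.default_rng(0).gamma(shape=4.0, scale=2.0, size=10_000)
print(cost_based_due_date(completions, c_early=1.0, c_tardy=4.0))  # 80th percentile
```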

Journal ArticleDOI
TL;DR: In this article, a method for computing the exact posterior probability density function, cumulative distribution function, and credible intervals for system reliability in a Bayesian setting, with the use of components' prior probability distributions and current test results, is proposed.
Abstract: System reliability is often estimated by the use of components' reliability test results when system test data are not available, or are very scarce. A method is proposed for computing the exact posterior probability density function, cumulative distribution function, and credible intervals for system reliability in a Bayesian setting, with the use of components' prior probability distributions and current test results. The method can be applied to series, parallel, and many mixed systems. Although in theory the method involves evaluating infinite series, numerical results show that a small number of terms from the infinite series are sufficient in practice to provide accurate estimates of system reliability. Furthermore, because the coefficients in the series follow some recurrence relations, our results allow us to calculate the reliability distribution of a large system from that of its subsystems. Error bounds associated with the proposed method are also given. Numerical comparisons with other existing approaches show that the proposed method is efficient and accurate.
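
A Monte Carlo stand-in (not the paper's exact series method) conveys the idea: sample component reliabilities from their posteriors and push the draws through the system structure function. The structure (two parallel pairs in series), the uniform Beta priors, and the test counts below are all assumptions.

```python
import numpy as np

def system_reliability_posterior(tests, draws=100_000, seed=0):
    """Give each component a Beta(1 + successes, 1 + failures) posterior
    (uniform prior), sample component reliabilities, and combine them
    through the structure function of two parallel pairs in series.
    Returns the posterior mean and a 90% credible interval."""
    rng = np.random.default_rng(seed)
    r = {name: rng.beta(1 + s, 1 + f, draws) for name, (s, f) in tests.items()}
    pair_a = 1 - (1 - r["a1"]) * (1 - r["a2"])   # parallel pair A
    pair_b = 1 - (1 - r["b1"]) * (1 - r["b2"])   # parallel pair B
    system = pair_a * pair_b                     # pairs in series
    return system.mean(), np.quantile(system, [0.05, 0.95])

tests = {"a1": (48, 2), "a2": (45, 5), "b1": (30, 1), "b2": (28, 2)}
print(system_reliability_posterior(tests))
```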

Journal ArticleDOI
Ahmet Bolat
TL;DR: In this article, the problem of sequencing jobs decomposed into identical and repeating sets is addressed, with the objective of minimizing the total amount of remaining work or, equivalently, maximizing the total amount of work completed.
Abstract: For sequencing different models on a paced assembly line, the commonly accepted objective is to keep the operators within the boundaries of their stations. When the operators reach the right boundary, they terminate the operation prematurely. In this article we address the problem of sequencing jobs decomposed into identical and repeating sets to minimize the total amount of remaining work, or, equivalently, to maximize the total amount of work completed. We propose an optimum algorithm and a heuristic procedure that utilizes different priority functions based on processing times. Experimental results indicate that the proposed heuristic requires less computational effort and performs better than the existing procedures: on average, improvements of 11–14% are obtained on real data reported in the literature (20 groups of 1000 jobs from a U.S. automobile manufacturer). © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 419–437, 1997

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of scheduling N jobs on M parallel machines so as to minimize the maximum earliness or tardiness cost incurred for each of the jobs, given by general (but job-independent) functions of the amount of time a job is completed prior to or after a common due date.
Abstract: We consider the problem of scheduling N jobs on M parallel machines so as to minimize the maximum earliness or tardiness cost incurred for each of the jobs. Earliness and tardiness costs are given by general (but job-independent) functions of the amount of time a job is completed prior to or after a common due date. We show that in problems with a nonrestrictive due date, the problem decomposes into two parts. Each of the M longest jobs is assigned to a different machine, and all other jobs are assigned to the machines so as to minimize their makespan. With these assignments, the individual scheduling problems for each of the machines are simple to solve. We demonstrate that several simple heuristics of low complexity, based on this characterization, are asymptotically optimal under mild probabilistic conditions. We develop attractive worst-case bounds for them. We also develop a simple closed-form lower bound for the minimum cost value. The bound is asymptotically accurate under the same probabilistic conditions. In the case where the due date is restrictive, the problem is more complex only in the sense that the set of initial jobs on the machines is not easily characterized. However, we extend our heuristics and lower bounds to this general case as well. Numerical studies exhibit that these heuristics perform excellently even for small- or moderate-size problems both in the restrictive and nonrestrictive due-date case. © 1997 John Wiley & Sons, Inc.
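
The nonrestrictive-due-date decomposition is easy to sketch: seed each machine with one of the M longest jobs, then balance the remaining work across machines, here with the longest-processing-time (LPT) rule as a simple makespan heuristic. Job times are illustrative, and this shows only the assignment step, not the per-machine scheduling around the due date.

```python
def assign_jobs(times, m):
    """Decomposition sketch for a common nonrestrictive due date:
    each of the m longest jobs starts its own machine; the remaining
    jobs are added longest-first to the currently least-loaded machine
    (LPT) to keep the remaining workloads balanced."""
    order = sorted(times, reverse=True)
    firsts = order[:m]                       # m longest jobs seed the machines
    rest_loads = [0.0] * m
    rest = [[] for _ in range(m)]
    for p in order[m:]:                      # LPT on the remaining jobs
        k = rest_loads.index(min(rest_loads))
        rest_loads[k] += p
        rest[k].append(p)
    return [[f] + r for f, r in zip(firsts, rest)]

print(assign_jobs([9, 8, 7, 6, 5, 4, 3, 2], m=2))
```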

Journal ArticleDOI
TL;DR: Simple Lanchesterian attrition models are described that reflect a Blue force's capacity to discriminate non-combatants from armed and active Red opponents; the models also apply to other situations involving decoys.
Abstract: Under various operational conditions, in particular in operations other than war (OOTW) or peacekeeping, an intervening force, here Blue, must occasionally engage in attrition warfare with an opposing force, here Red, that is intermingled with non-combatants. Desirably, Red armed actives are targeted and not the unarmed non-combatants. This paper describes some simple Lanchesterian attrition models that reflect a certain capacity of Blue to discriminate non-combatants from armed and active Red opponents. An explicit extension of the Lanchester square law results: Blue's abstinence from indiscriminately shooting civilians mixed with Reds is essentially reflected in a lower Blue rate of fire and a less advantageous exchange rate. The model applies to other situations involving decoys, and reflects the value of a discrimination capability.
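
A minimal numeric rendering of the reduced-rate conclusion: integrate the Lanchester square law with Blue's firing rate scaled by an identification probability p_id. All coefficients and force sizes are assumptions, not the paper's models.

```python
def square_law_with_screening(B0, R0, beta, rho, p_id, dt=0.01, T=50.0):
    """Euler integration of a Lanchester square law in which Blue holds
    fire unless a target is identified as an armed Red: Blue's effective
    attrition coefficient is beta * p_id, while Red fires unrestrained."""
    B, R, t = float(B0), float(R0), 0.0
    while B > 0 and R > 0 and t < T:
        dR = -beta * p_id * B * dt    # Red attrited only on identified shots
        dB = -rho * R * dt            # Red fires without restraint
        B, R, t = B + dB, R + dR, t + dt
    return max(B, 0.0), max(R, 0.0)

# With p_id = 0.6, Blue's numerical edge (100 vs 80) can be erased:
print(square_law_with_screening(B0=100, R0=80, beta=0.02, rho=0.02, p_id=0.6))
```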

Journal ArticleDOI
TL;DR: The model uses a variation of mean value analysis (MVA) to capture the effect of mean service times, resource levels, and network topology on performance measures including resource utilizations and the overall sortie generation rate.
Abstract: This article presents an approximate analytical method for evaluating an aircraft sortie generation process. The process is modeled as a closed network of multiserver queues and fork-join nodes that allow concurrent service activities. The model uses a variation of mean value analysis (MVA) to capture the effect of mean service times, resource levels, and network topology on performance measures including resource utilizations and the overall sortie generation rate. The quality of the analytical approximation is demonstrated through comparison with simulation results. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 153–164, 1997
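
The article's model extends MVA with multiserver and fork-join nodes; the basic exact MVA recursion it builds on, for a closed network of single-server FCFS stations, looks like this (service times, visit ratios, and the population size below are illustrative):

```python
def mva(service, visits, N):
    """Exact mean value analysis for a closed, single-class network of
    single-server FCFS stations. service[k]: mean service time at
    station k; visits[k]: visit ratio; N: customer population."""
    K = len(service)
    q = [0.0] * K                          # mean queue lengths at population n-1
    X = 0.0
    for n in range(1, N + 1):
        # residence time per visit: service inflated by customers already there
        resid = [service[k] * (1 + q[k]) for k in range(K)]
        X = n / sum(visits[k] * resid[k] for k in range(K))   # throughput
        q = [X * visits[k] * resid[k] for k in range(K)]      # Little's law
    return X, q

X, q = mva(service=[2.0, 1.0, 0.5], visits=[1.0, 2.0, 1.0], N=8)
print(f"throughput {X:.3f}, queue lengths {[round(v, 2) for v in q]}")
```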

Journal ArticleDOI
TL;DR: In this paper, the authors formulate the multiple-facility loading problem under capacity-based economies and diseconomies of scope (MFLS) as a nonlinear 0-1 mixed-integer programming problem, and discuss some useful properties.
Abstract: Multiple-facility loading (MFL) involves the allocation of products among a set of finite-capacity facilities. Applications of MFL arise naturally in a variety of production scheduling environments. MFL models typically assume that capacity is consumed as a linear function of products assigned to a facility. Product similarities and differences, however, result in capacity-based economies or diseconomies of scope, and thus the effective capacity of the facility is often a (nonlinear) function of the set of tasks assigned to the facility. This article addresses the multiple-facility loading problem under capacity-based economies (and diseconomies) of scope (MFLS). We formulate MFLS as a nonlinear 0–1 mixed-integer programming problem, and we discuss some useful properties. MFLS generalizes many well-known combinatorial optimization problems, such as the capacitated facility location problem and the generalized assignment problem. We also define a tabu-search heuristic and a branch-and-bound algorithm for MFLS. The tabu-search heuristic alternates between two search phases, a regional search and a diversification search, and offers a novel approach to solution diversification. We also report computational experience with the procedures. In addition to demonstrating MFLS problem tractability, the computational results indicate that the heuristic is an effective tool for obtaining high-quality solutions to MFLS. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 229–256, 1997

Journal ArticleDOI
TL;DR: Under this discrete batch Markovian arrival process and service time distribution, the waiting time distribution is derived for three queue disciplines: first in first out (FIFO), last in first out (LIFO), and service in random order (SIRO).
Abstract: A queueing system characterized by the discrete batch Markovian arrival process (D-BMAP) and a phase-type distribution for the service time is one that arises frequently in the area of telecommunications. Under this arrival process and service time distribution we derive the waiting time distribution for three queue disciplines: first in first out (FIFO), last in first out (LIFO), and service in random order (SIRO). We also outline efficient algorithmic procedures for computing the waiting time distributions under each discipline. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 559–576, 1997

Journal ArticleDOI
TL;DR: An air-defense engagement model to counter an attack by multiple antiship missiles, assuming perfect kill assessment, is presented, in which the probability of shooting down all incoming missiles is maximized.
Abstract: We present an air-defense engagement model to counter an attack by multiple antiship missiles, assuming perfect kill assessment. In this model, the probability of shooting down all incoming missiles is maximized. A generating function is employed to produce an algorithm which is used to evaluate the outcomes.
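
With perfect kill assessment and independent shots of kill probability p, an allocation giving n_i shots to missile i downs them all with probability prod_i (1 - (1-p)**n_i). Because the log of each factor has diminishing returns in n_i, greedy marginal allocation is optimal for this separable form; the sketch below uses it as a stand-in for the paper's generating-function algorithm, with the shot budget and p assumed.

```python
from math import log

def allocate_shots(n_missiles, shots, p_kill):
    """Greedy marginal allocation maximizing the probability that every
    incoming missile is destroyed under a total shot budget: each shot
    goes to the missile with the largest log-probability gain."""
    q = 1.0 - p_kill
    n = [0] * n_missiles
    def gain(k):    # log-improvement from a (k+1)-th shot at one missile
        return log(1 - q ** (k + 1)) - (log(1 - q ** k) if k else float("-inf"))
    for _ in range(shots):
        i = max(range(n_missiles), key=lambda j: gain(n[j]))
        n[i] += 1
    prob_all = 1.0
    for k in n:
        prob_all *= 1 - q ** k
    return n, prob_all

print(allocate_shots(n_missiles=3, shots=8, p_kill=0.7))
```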

Journal ArticleDOI
TL;DR: In this article, the authors consider the component testing problem of a system where the main feature is that the component failure rates are not constant parameters, but they change in a dynamic fashion with respect to time.
Abstract: We consider the component testing problem of a system where the main feature is that the component failure rates are not constant parameters, but they change in a dynamic fashion with respect to time. More precisely, each component has a piecewise-constant failure-rate function such that the lifetime distribution is exponential with a constant rate over local intervals of time within the overall mission time. There are several such intervals, and the rates change dynamically from one interval to another. We note that these lifetime distributions can also be used in a more general setting to approximate arbitrary lifetime distributions. The optimal component testing problem is formulated as a semi-infinite linear program. We present an algorithmic procedure to compute optimal test times based on the column-generation technique and illustrate it with a numerical example.

Journal ArticleDOI
TL;DR: The following zero-sum game is considered in this paper: Red chooses, in the integer interval [1, n], two integer intervals consisting of k and m points, where k + m < n, and Blue chooses an integer point in [1, n]; the payoff to Red equals 1 if the point chosen by Blue lies in at least one of the intervals chosen by Red, and 0 otherwise.
Abstract: The following zero-sum game is considered. Red chooses, in the integer interval [1, n], two integer intervals consisting of k and m points, where k + m < n, and Blue chooses an integer point in [1, n]. The payoff to Red equals 1 if the point chosen by Blue lies in at least one of the intervals chosen by Red, and 0 otherwise. This work complements the results obtained by Ruckle, Baston and Bostock, and Lee.

Journal ArticleDOI
TL;DR: This paper considers one of the two basic building blocks of many complex systems, namely a system of n parallel components, and develops minimum cost component test plans for evaluating the reliability of such a system when the component reliabilities are known to be high.
Abstract: One approach to evaluating system reliability is the use of system based component test plans. Such plans have numerous advantages over complete system level tests, primarily in terms of time and cost savings. This paper considers one of the two basic building blocks of many complex systems, namely a system of n parallel components, and develops minimum cost component test plans for evaluating the reliability of such a system when the component reliabilities are known to be high. Two different decision rules are considered and the corresponding optimization problems are formulated and solved using techniques from mathematical programming.

Journal ArticleDOI
TL;DR: In this article, the Wagner-Whitin algorithm is adapted to a finite-rate production process for single-product dynamic lot-sizing, and it is shown how these procedures can readily be adapted when the input is a finite rate production process.
Abstract: The basic single-product dynamic lot-sizing problem involves determining the optimal batch production schedule to meet a deterministic, discrete-in-time, varying demand pattern subject to linear setup and stockholding costs. The most widely known procedure for deriving the optimal solution is the Wagner-Whitin algorithm, although many other approaches have subsequently been developed for tackling the same problem. The objective of this note is to show how these procedures can readily be adapted when the input is a finite rate production process.
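
For reference, the instantaneous-replenishment base case that the note adapts is the classic O(T^2) Wagner-Whitin recursion: f[t] is the minimal cost of covering periods 1..t, minimized over the period j in which the batch covering j..t is produced. Demands and costs below are illustrative.

```python
def wagner_whitin(demand, setup, hold):
    """Classic dynamic lot-sizing recursion:
    f[t] = min over j <= t of f[j-1] + setup + holding cost of serving
    periods j..t from a single batch produced in period j."""
    T = len(demand)
    f = [0.0] + [float("inf")] * T          # f[t]: min cost for periods 1..t
    for j in range(1, T + 1):               # batch produced at start of period j
        holding = 0.0
        for t in range(j, T + 1):           # ...covering demand through period t
            holding += hold * (t - j) * demand[t - 1]
            f[t] = min(f[t], f[j - 1] + setup + holding)
    return f[T]

print(wagner_whitin([20, 50, 10, 50, 50, 10], setup=100, hold=1.0))  # 340.0
```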