
Showing papers in "Naval Research Logistics in 1998"


Journal ArticleDOI
TL;DR: This paper considers the resource-constrained project scheduling problem (RCPSP) with makespan minimization as objective and proposes a new genetic algorithm approach to solve this problem that makes use of a permutation based genetic encoding that contains problem-specific knowledge.
Abstract: In this paper we consider the resource-constrained project scheduling problem (RCPSP) with makespan minimization as objective. We propose a new genetic algorithm approach to solve this problem. Subsequently, we compare it to two genetic algorithm concepts from the literature. While our approach makes use of a permutation based genetic encoding that contains problem-specific knowledge, the other two procedures employ a priority value based and a priority rule based representation, respectively. Then we present the results of our thorough computational study for which standard sets of project instances have been used. The outcome reveals that our procedure is the most promising genetic algorithm to solve the RCPSP. Finally, we show that our genetic algorithm yields better results than several heuristic procedures presented in the literature. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 733–750, 1998

551 citations
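The permutation-based encoding in the paper above is typically decoded with a serial schedule-generation scheme: activities are taken in list order and each is started as early as precedence and resource availability allow. The sketch below illustrates only that decoding step, not the genetic operators; the instance data, resource capacity, and horizon are hypothetical and not taken from the paper.

```python
def serial_sgs(order, dur, req, preds, cap, horizon=1000):
    """Decode a precedence-feasible activity list into a schedule using
    the serial schedule-generation scheme (single renewable resource).

    order  -- activity list (every activity after all its predecessors)
    dur    -- activity durations
    req    -- per-period resource requirement of each activity
    preds  -- predecessor lists
    cap    -- resource capacity per period
    Returns the start times and the makespan.
    """
    usage = [0] * horizon          # resource consumed in each period
    start, finish = {}, {}
    for j in order:
        # earliest precedence-feasible start
        t = max((finish[i] for i in preds[j]), default=0)
        # shift right until the resource profile admits the activity
        while any(usage[t + k] + req[j] > cap for k in range(dur[j])):
            t += 1
        for k in range(dur[j]):
            usage[t + k] += req[j]
        start[j], finish[j] = t, t + dur[j]
    return start, max(finish.values(), default=0)
```

A genetic algorithm over this representation would apply crossover and mutation to precedence-feasible activity lists and use the returned makespan as the fitness value.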


Journal ArticleDOI
TL;DR: In this paper, the authors consider a general covering problem in which k subsets are to be selected such that their union covers as large a weight of objects from a universal set of elements as possible.
Abstract: In this paper, we consider a general covering problem in which k subsets are to be selected such that their union covers as large a weight of objects from a universal set of elements as possible. Each subset selected must satisfy some structural constraints. We analyze the quality of a k-stage covering algorithm that relies, at each stage, on greedily selecting a subset that gives maximum improvement in terms of overall coverage. We show that such greedily constructed solutions are guaranteed to be within a factor of 1 − 1/e of the optimal solution. In some cases, selecting a best solution at each stage may itself be difficult; we show that if a β-approximate best solution is chosen at each stage, then the overall solution constructed is guaranteed to be within a factor of 1 − 1/e^β of the optimal. Our results also yield a simple proof that the number of subsets used by the greedy approach to achieve entire coverage of the universal set is within a logarithmic factor of the optimal number of subsets. Examples of problems that fall into the family of general covering problems considered, and for which the algorithmic results apply, are discussed. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 615–627, 1998

202 citations
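The greedy stage described above is easy to state in code when the candidate subsets are given explicitly; the toy instance in the test is hypothetical, and ties are broken arbitrarily.

```python
def greedy_cover(weights, subsets, k):
    """k-stage greedy covering: at each stage pick the subset with the
    largest incremental covered weight.

    weights -- dict mapping element -> weight
    subsets -- list of frozensets of elements
    k       -- number of subsets to select
    Returns the chosen subsets and the total covered weight.
    """
    covered, chosen = set(), []
    for _ in range(k):
        best = max(subsets,
                   key=lambda s: sum(weights[e] for e in s - covered))
        chosen.append(best)
        covered |= best
    return chosen, sum(weights[e] for e in covered)
```

By the paper's result, the covered weight returned here is within a factor 1 − 1/e of the best achievable with k subsets.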


Journal ArticleDOI
TL;DR: In this article, the authors explore the management of inventory for stochastic-demand systems, where the product's supply is randomly disrupted for periods of random duration, and demands that arrive when the inventory system is temporarily out of stock become a mix of backorders and lost sales.
Abstract: We explore the management of inventory for stochastic-demand systems, where the product's supply is randomly disrupted for periods of random duration, and demands that arrive when the inventory system is temporarily out of stock become a mix of backorders and lost sales. The stock is managed according to the following modified (s, S) policy: If the inventory level is at or below s and the supply is available, place an order to bring the inventory level up to S. Our analysis yields the optimal values of the policy parameters, and provides insight into the optimal inventory strategy when there are changes in the severity of supply disruptions or in the behavior of unfilled demands. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 687–703, 1998

167 citations
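A discrete-time sketch of the modified (s, S) policy is below. The two-state supply process, the demand distribution, the instant order delivery, and all cost rates are simplifying assumptions of this sketch, not the paper's model (which derives the optimal policy parameters analytically); the code only illustrates the policy's mechanics.

```python
import random

def simulate(s, S, periods, p_down, p_up, backorder_frac,
             h=1.0, pi_b=4.0, pi_l=8.0, seed=0):
    """Average cost per period of the modified (s, S) policy under
    supply disruptions.  Supply alternates ON/OFF as a two-state Markov
    chain; shortages are split between backorders and lost sales.
    Backorder/lost-sale penalties are charged once per shorted unit."""
    rng = random.Random(seed)
    inv, supply_on, cost = float(S), True, 0.0
    for _ in range(periods):
        # order up to S whenever at/below s and the supplier is available
        if inv <= s and supply_on:
            inv = float(S)
        demand = rng.randint(0, 10)          # simplified demand process
        onhand = max(inv, 0.0)
        if demand <= onhand:
            inv -= demand
        else:
            short = demand - onhand
            bo = backorder_frac * short      # backordered portion
            cost += pi_b * bo + pi_l * (short - bo)
            inv = min(inv, 0.0) - bo         # negative level = backorders
        cost += h * max(inv, 0.0)            # holding cost on stock on hand
        # supply-state transition
        supply_on = (rng.random() >= p_down) if supply_on \
            else (rng.random() < p_up)
    return cost / periods
```

As expected, making disruptions more severe (higher failure rate, slower recovery) raises the simulated average cost.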


Journal ArticleDOI
TL;DR: This study considers general time to shift distributions and provides distribution-based and distribution-free bounds on the optimal cost for the exponential case and compares the optimal solutions to approximate solutions proposed in the literature.
Abstract: In this paper, we consider the economic production quantity problem in the presence of imperfect processes. In the literature, the time to shift from the in-control state to the out-of-control state is assumed to be exponentially distributed. In this study, we consider general time to shift distributions and provide distribution-based and distribution-free bounds on the optimal cost. For the exponential case, we compare the optimal solutions to approximate solutions proposed in the literature. A numerical example is used to illustrate the analysis presented and to conduct a sensitivity analysis in order to see the effect of the input parameters on the various solutions to the problem. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 423–433, 1998

91 citations


Journal ArticleDOI
TL;DR: A periodic review inventory system where emergency orders, which have a shorter supply lead time but are subject to higher ordering cost compared to regular orders, can be placed on a continuous basis is described and a dynamic programming model is developed and derived to derive optimal operation parameters.
Abstract: We describe a periodic review inventory system where emergency orders, which have a shorter supply lead time but are subject to higher ordering cost compared to regular orders, can be placed on a continuous basis. We consider the periodic review system in which the order cycles are relatively long so that they are possibly larger than the supply lead times. Study of such systems is important since they are often found in practice. We assume that the difference between the regular and emergency supply lead times is less than the order-cycle length. We develop a dynamic programming model and derive a stopping rule to end the computation and obtain optimal operation parameters. Computational results are included that support the contention that easily implemented policies can be computed with reasonable effort. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 187–204, 1998 Several studies in the literature address this problem. These studies divide into two groups: policy-evaluation studies and policy-optimization studies. The former assume a particular policy form and devise methods for evaluating it, while the latter compute the true optimal policy and solve specific instances of the problem under consideration. While the optimization results are stronger, the policy-evaluation studies typically use simpler policies and broader assumptions. This paper contributes to the optimization literature, and the model here is more general than those of early studies.

86 citations


Journal ArticleDOI
TL;DR: The case of bounded deterioration, where the processing time of a job grows no further if the job starts after a common maximum deterioration date D > d, is introduced and a pseudopolynomial time algorithm is given that solves instances with up to 100 jobs in a reasonable amount of time.
Abstract: We consider a single-machine problem of scheduling n independent jobs to minimize makespan, in which the processing time of job J_j grows by w_j with each time unit its start is delayed beyond a given common critical date d. This processing time is p_j if J_j starts by d. We show that this problem is NP-hard, give a pseudopolynomial algorithm that runs in O(nd Σ_{j=1}^n p_j) time and O(nd) space, and develop a branch-and-bound algorithm that solves instances with up to 100 jobs in a reasonable amount of time. We also introduce the case of bounded deterioration, where the processing time of a job grows no further if the job starts after a common maximum deterioration date D > d. For this case, we give two pseudopolynomial time algorithms: one runs in O(n^2 d(D − d) Σ_{j=1}^n p_j) time and O(nd(D − d)) space; the other runs in O(nd Σ_{j=1}^n w_j (Σ_{j=1}^n p_j)^2) time and O(nd Σ_{j=1}^n w_j Σ_{j=1}^n p_j) space.

83 citations
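The deterioration model above can be pinned down with a small makespan evaluator for a fixed job sequence (the hard part the paper addresses is choosing the sequence); the job data in the test are hypothetical.

```python
def makespan(sequence, p, w, d, D=None):
    """Completion time of the last job when jobs run in the given order
    on one machine.  A job starting at time t > d takes
    p[j] + w[j] * (t - d); with bounded deterioration (D given), the
    growth is capped at w[j] * (D - d)."""
    t = 0.0
    for j in sequence:
        delay = max(0.0, t - d)          # time the start is past d
        if D is not None:
            delay = min(delay, D - d)    # bounded-deterioration cap
        t += p[j] + w[j] * delay
    return t
```

An exact method such as the paper's branch-and-bound would search over sequences using an evaluator of this kind plus lower bounds.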


Journal ArticleDOI
TL;DR: In this paper, the matrix-geometric method was used to study the discrete time MAP/PH/1 priority queue with two types of jobs, and both preemptive and non-preemptive cases were considered.
Abstract: We use the matrix-geometric method to study the discrete time MAP/PH/1 priority queue with two types of jobs. Both preemptive and nonpreemptive cases are considered. We show that the structure of the R matrix obtained by Miller for the Birth-Death system can be extended to our Quasi-Birth-Death case. For both preemptive and nonpreemptive cases the distributions of the number of jobs of each type in the system are obtained; waiting-time distributions are obtained for the nonpreemptive case. For the preemptive case we obtain the waiting time distribution for the high-priority job and the distribution of the lower-priority job's wait before it becomes the leading job of its priority class. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 23–50, 1998

71 citations


Journal ArticleDOI
TL;DR: An EMQ model with a production process subject to random deterioration is considered, and it is shown that the optimal inspection times can be found by solving a nonlinear equation.
Abstract: An EMQ model with a production process subject to random deterioration is considered. The process can be monitored through inspections, and both the lot size and the inspection schedule are subject to control. The “in-control” periods are assumed to be generally distributed and the inspections are imperfect, i.e., the true state of the process is not necessarily revealed through an inspection. The objective is the joint determination of the lot size and the inspection schedule, minimizing the long-run expected average cost per unit time. Both discrete and continuous cases are examined. A dynamic programming formulation is considered in the case where the inspections can be performed only at discrete times, which is typical for the parts industry. In the continuous case, an optimum inspection schedule is obtained for a given production time and given number of inspections by solving a nonlinear programming problem. A two-dimensional search procedure can be used to find the optimal policy. In the exponential case, the structure of the optimal inspection policy is established using Lagrange's method, and it is shown that the optimal inspection times can be found by solving a nonlinear equation. Numerical studies indicate that the optimal policy performs much better than the optimal policy with periodic inspections considered previously in the literature. The case of perfect inspections is discussed, and an extension of the results obtained previously in the literature is presented. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 165–186, 1998

68 citations


Journal ArticleDOI
TL;DR: A tabu search procedure which is based on a decomposition of the problem into a mode assignment phase and a resource‐constrained project scheduling phase with fixed mode assignments is presented.
Abstract: In this paper we consider the discrete time/resource trade-off problem in project networks. Given a project network consisting of nodes (activities) and arcs (technological precedence relations), in which the duration of the activities is a discrete, nonincreasing function of the amount of a single renewable resource committed to it, the discrete time/resource trade-off problem minimizes the project makespan subject to precedence constraints and a single renewable resource constraint. For each activity, a work content is specified such that all execution modes (duration/resource requirement pairs) for performing the activity are allowed as long as the product of the duration and the resource requirement is at least as large as the specified work content. We present a tabu search procedure which is based on a decomposition of the problem into a mode assignment phase and a resource-constrained project scheduling phase with fixed mode assignments. Extensive computational experience, including a comparison with other local search methods, is reported. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 553–578, 1998

62 citations


Journal ArticleDOI
TL;DR: A new appointment rule is presented as a mathematical function of four environmental parameters, namely, the coefficient of variation of the service time, the percentage of customers' no-shows, the number of appointments per service session, and the cost ratio between the server's idle and customers' waiting cost per unit time.
Abstract: This paper proposes a new appointment rule for the single-server, multiple-customer service system. Unlike previous appointment rules, which perform well only in specific service environments, the new rule can be parameterized to perform well in different service environments. The new appointment rule is presented as a mathematical function of four environmental parameters, namely, the coefficient of variation of the service time, the percentage of customers' no-shows, the number of appointments per service session, and the cost ratio between the server's idle and customers' waiting cost per unit time. Once the values of these environmental parameters are estimated, the new appointment rule can be parameterized to perform well. The results show that the new rule performs either as well as or better than existing appointment rules in a wide range of service environments. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 313–326, 1998

58 citations


Journal ArticleDOI
TL;DR: This paper proposes an approach, which mimics the classical label-correcting approach, to compute the expected path cost, and develops stochastic versions of some well-known label-correcting methods, including the first-in-first-out method, the two-queue method, the threshold algorithms, and the small-label-first principle.
Abstract: We consider a routing policy that forms a dynamic shortest path in a network with independent, positive and discrete random arc costs. When visiting a node in the network, the costs for the arcs going out of this node are realized, and then the policy will determine which node to visit next with the objective of minimizing the expected cost from the current node to the destination node. This paper proposes an approach, which mimics the classical label-correcting approach, to compute the expected path cost. First, we develop a sequential implementation of this approach and establish some properties about the implementation. Next, we develop stochastic versions of some well-known label-correcting methods, including the first-in-first-out method, the two-queue method, the threshold algorithms, and the small-label-first principle. We perform numerical experiments to evaluate these methods and observe that fast methods for deterministic networks can become very slow for stochastic networks. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 769–789, 1998

Journal ArticleDOI
TL;DR: This paper revisits the modeling by Bracken [3] of the Ardennes campaign of World War II using the Lanchester equations and shows that neither the Lanchester linear law nor the Lanchester square law fits the data.
Abstract: This paper revisits the modeling by Bracken [3] of the Ardennes campaign of World War II using the Lanchester equations. It revises and extends that analysis in a number of ways: (1) It more accurately fits the model parameters using linear regression; (2) it considers the data from the entire campaign; and (3) it adds in air sortie data. In contrast to previous results, it concludes by showing that neither the Lanchester linear law nor the Lanchester square law fits the data. A new form of the Lanchester equations emerges with a physical interpretation. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 1–22, 1998
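As a sketch of the regression idea, the Lanchester square law can be simulated and its attrition coefficient recovered by a least-squares fit through the origin. The force sizes and coefficients below are hypothetical, and the Euler discretization is an assumption of this sketch, not of the paper.

```python
def simulate_square_law(B0, R0, a, b, dt, steps):
    """Euler-discretized Lanchester square law:
    dB/dt = -a*R,  dR/dt = -b*B.
    Returns per-step records (B, R, Blue losses, Red losses)."""
    B, R, hist = B0, R0, []
    for _ in range(steps):
        dB, dR = a * R * dt, b * B * dt
        hist.append((B, R, dB, dR))
        B, R = B - dB, R - dR
    return hist

def fit_a(hist, dt):
    """Least-squares estimate of a, regressing per-step Blue casualties
    on Red force size through the origin (the square-law hypothesis)."""
    num = sum(dB * R for (_, R, dB, _) in hist)
    den = dt * sum(R * R for (_, R, _, _) in hist)
    return num / den
```

With real campaign data the same regression would be run for several candidate laws, and the residuals compared, which is essentially how the paper rejects the linear and square laws.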

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the efficiency of branch and bound methods, with emphasis on the tradeoff between the accuracy of the bound employed and the time required to compute it.
Abstract: The problem of searching for randomly moving targets such as children and submarines is known to be fundamentally difficult, but finding efficient methods for generating optimal or near optimal solutions is nonetheless an important practical problem. This paper investigates the efficiency of Branch and Bound methods, with emphasis on the tradeoff between the accuracy of the bound employed and the time required to compute it. A variety of bounds are investigated, some of which are new. In most cases the best bounds turn out to be imprecise, but very easy to compute. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 243–257, 1998

Journal ArticleDOI
TL;DR: In this paper, the stationary distribution of the inventory level (stock on hand) in a continuous-review inventory system with compound Poisson demand, Erlang as well as hyperexponentially distributed lead times, and lost sales is derived.
Abstract: Using a system-point (SP) method of level crossings, we derive the stationary distribution of the inventory level (stock on hand) in a continuous-review inventory system with compound Poisson demand, Erlang as well as hyperexponentially distributed lead times, and lost sales. This distribution is then used to formulate long-run average cost functions with/without a service level constraint. Some numerical results are also presented, and compared with the Hadley and Whitin heuristic. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 259–278, 1998

Journal ArticleDOI
TL;DR: In this article, an extension allowing the smuggler to act more than once, treated by Sakaguchi in a special case, is solved, and a more natural version of Sakagauchi's problem is solved in the special case where the smugglers may act at each stage.
Abstract: The Inspection Game is a multistage game between a customs inspector and a smuggler, first studied by Melvin Dresher and Michael Maschler in the 1960s. An extension allowing the smuggler to act more than once, treated by Sakaguchi in a special case, is solved. Also, a more natural version of Sakaguchi's problem is solved in the special case where the smuggler may act at each stage.

Journal ArticleDOI
TL;DR: This paper incorporates stress information collected via sensors into the scheduling decision process by means of a partially observable Markov decision process model, and demonstrates the optimality of structured maintenance policies, which support practical maintenance schedules.
Abstract: This paper considers the maintenance of aircraft engine components that are subject to stress. We model the deterioration process by means of the cumulative jump process representation of crack growth. However, because in many cases cracks are not easily observable, maintenance decisions must be made on the basis of other information. We incorporate stress information collected via sensors into the scheduling decision process by means of a partially observable Markov decision process model. Using this model, we demonstrate the optimality of structured maintenance policies, which support practical maintenance schedules. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 335–352, 1998

Journal ArticleDOI
TL;DR: In this article, the authors consider the parallel replacement problem in which machine investment costs exhibit economy of scale, which is modeled through associating both fixed and variable costs with machine investment cost.
Abstract: We consider the parallel replacement problem in which machine investment costs exhibit economies of scale, modeled by associating both fixed and variable costs with machine investment. Both finite- and infinite-horizon cases are investigated. Under the three assumptions made in the literature on the problem parameters, we show that the finite-horizon problem with time-varying parameters is equivalent to a shortest path problem and hence can be solved very efficiently, and give a very simple and fast algorithm for the infinite-horizon problem with time-invariant parameters. For the general finite-horizon problem without any assumption on the problem parameters, we formulate it as a zero-one integer program and propose an algorithm for solving it exactly based on Benders' decomposition. Computational results show that this solution algorithm is efficient, i.e., it is capable of solving large-scale problems within a reasonable CPU time, and robust, i.e., the number of iterations needed to solve a problem does not increase quickly with the problem size. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 279–295, 1998

Journal ArticleDOI
TL;DR: It is proved that the problem of scheduling multiprocessor tasks with prespecified processor allocations to minimize the total completion time is NP-hard in the strong sense and an efficient heuristic is developed for this case.
Abstract: We consider the problem of scheduling multiprocessor tasks with prespecified processor allocations to minimize the total completion time. The complexity of both preemptive and nonpreemptive cases of the two-processor problem is studied. We show that the preemptive case is solvable in O(n log n) time. In the nonpreemptive case, we prove that the problem is NP-hard in the strong sense, which answers an open question mentioned in Hoogeveen, van de Velde, and Veltman (1994). An efficient heuristic is also developed for this case. The relative error of this heuristic is at most 100%.

Journal ArticleDOI
TL;DR: In this article, the authors considered the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence, and they gave a heuristic algorithm that runs in linear time and produces a schedule with makespan at most 5/4 times the optimal value.
Abstract: The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with makespan at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998

Journal ArticleDOI
TL;DR: The development of a heuristic algorithm for determining efficient 2-dimensional packings in cargo aircraft where cargo placement constraints are critically important in determining the feasibility of packing locations is described.
Abstract: We describe the development of a heuristic algorithm for determining efficient 2-dimensional packings in cargo aircraft where cargo placement constraints are critically important in determining the feasibility of packing locations. We review the performance of a new algorithm versus some traditional ones for aircraft loading. The algorithm is also tested in a more generalized setting where there exist no additional constraints on items, to suggest applicability in other environments. The new algorithm has been used worldwide in the Automated Air Load Planning System (AALPS) for cargo aircraft loading, with much success. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 751–768, 1998

Journal ArticleDOI
TL;DR: In this article, the authors consider a simulation experiment consisting of v independent vector observations or replications across k systems, where in any given replication one and only one system is selected as the best performer based on some performance measure.
Abstract: This report considers a simulation experiment consisting of v independent vector observations or replications across k systems, where in any given replication one and only one system is selected as the best performer (i.e., it wins) based on some performance measure. Each system has an unknown constant probability of winning in any replication, and the numbers of wins for the individual systems follow a multinomial distribution. The classical multinomial selection procedure of Bechhofer, Elmaghraby, and Morse (Procedure BEM) prescribes a minimum number of replications, denoted as V*, so that the probability of correctly selecting the true best system meets or exceeds a prespecified probability. Assuming that larger is better, Procedure BEM selects as best the system having the largest value of the performance measure in more replications than any other system.
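The selection step of Procedure BEM is a one-liner once the wins are counted; the sketch below also estimates the probability of correct selection by Monte Carlo. The win probabilities and replication count are hypothetical, and the tie-breaking rule is an assumption of this sketch.

```python
import random
from collections import Counter

def bem_select(p, v, rng):
    """One application of Procedure BEM: observe v replications whose
    winners are multinomial with win probabilities p, and select the
    system that wins most often (ties broken by lowest index)."""
    wins = Counter(rng.choices(range(len(p)), weights=p, k=v))
    return max(range(len(p)), key=lambda i: wins[i])

def prob_correct_selection(p, v, trials=5000, seed=0):
    """Monte Carlo estimate of P(select the true best), assuming
    system 0 has the largest win probability."""
    rng = random.Random(seed)
    return sum(bem_select(p, v, rng) == 0 for _ in range(trials)) / trials
```

In the procedure proper, v would be set to the tabulated minimum V* that pushes this probability above the prespecified level for the least favorable configuration.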

Journal ArticleDOI
TL;DR: In this article, the problem of scheduling a set of jobs on a single machine is considered and it is shown that the problem is strongly NP-hard, and approximate algorithms with both experimental and worst-case analysis are presented.
Abstract: The paper deals with a problem of scheduling a set of jobs on a single machine. Before a job is released for processing, it must undergo some preprocessing treatment that consumes resources. It is assumed that the release date of a job is a linear decreasing continuous function of the amount of a locally and globally constrained, continuously divisible resource (e.g., energy, catalyzer, financial outlay, gas). The problem is to find a sequence of jobs and a resource allocation that will minimize the maximum job completion time. Such a problem appears, for example, in the ingot preheating and hot-rolling process in steel mills. It is shown that the problem is strongly NP-hard. Some polynomially solvable cases of the problem and approximate algorithms with both experimental and worst-case analysis are presented.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a general repair process where the virtual age V_i after the ith repair is given by V_i = O(V_{i−1} + X_i), where O(·) is a specified repair functional and X_i is the time between the (i − 1)th and ith repairs.
Abstract: We consider a general repair process where the virtual age V_i after the ith repair is given by V_i = O(V_{i−1} + X_i), where O(·) is a specified repair functional and X_i is the time between the (i − 1)th and ith repairs. Some monotonicity and dominance properties are derived, and an equilibrium process is considered. A computational method for evaluating the expected number/density of repairs is described, together with an approximation method for obtaining some parameters of the equilibrium process.
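A simulation sketch of the virtual-age recursion is below, using a Kijima-type repair functional O(u) = θu and a Weibull lifetime as illustrative assumptions — neither choice is prescribed by the paper.

```python
import math
import random

def weibull_residual(v, rng, alpha=10.0, beta=2.0):
    """Remaining life at virtual age v for a Weibull(alpha, beta)
    lifetime, sampled by inverse transform on the conditional survival
    function S(v + x) / S(v)."""
    u = 1.0 - rng.random()                     # u in (0, 1]
    return (v**beta - alpha**beta * math.log(u)) ** (1.0 / beta) - v

def count_repairs(theta, horizon, rng):
    """Simulate the general repair process with the Kijima-type
    functional O(u) = theta * u (theta = 1: minimal repair,
    theta = 0: perfect repair); return the number of repairs in
    [0, horizon]."""
    t = v = 0.0
    n = 0
    while True:
        x = weibull_residual(v, rng)           # X_i given virtual age v
        if t + x > horizon:
            return n
        t += x
        v = theta * (v + x)                    # V_i = O(V_{i-1} + X_i)
        n += 1
```

Under minimal repair (θ = 1) the failure epochs form a nonhomogeneous Poisson process with mean count (T/α)^β over [0, T], which gives a handy sanity check on the simulation.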

Journal ArticleDOI
TL;DR: In this paper, a 2-dimensional consecutive-k-out-of-n : F system with equal component-failure probabilities is considered, and an analytic comparison is made with the upper bound of Barbour et al.
Abstract: Consider a 2-dimensional consecutive-k-out-of-n : F system, as described by Salvia and Lasher [9], whose components have independent, perhaps identical, failure probabilities. In this paper, we use Janson's exponential inequalities [5] to derive improved upper bounds on such a system's reliability, and compare our results numerically to previously determined upper bounds. In the case of equal component-failure probabilities, we determine analytically, given k and n, those component-failure probabilities for which our bound betters the upper bounds found by Fu and Koutras [4] and Koutras et al. [6]. A different kind of analytic comparison is made with the upper bound of Barbour et al. [3]. We further generalize our upper bound, given identical component-failure probabilities, to suit d-dimensional systems for d ≤ 3. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 219–230, 1998

Journal ArticleDOI
TL;DR: In this paper, the authors consider a single-machine scheduling model in which the job processing times are controllable variables with linear costs, and they present a dynamic programming solution algorithm and a fully polynomial approximation scheme for the problem.
Abstract: We consider a single-machine scheduling model in which the job processing times are controllable variables with linear costs. The objective is to minimize the sum of the cost incurred in compressing job processing times and the cost associated with the number of late jobs. The problem is shown to be NP-hard even when the due dates of all jobs are identical. We present a dynamic programming solution algorithm and a fully polynomial approximation scheme for the problem. Several efficient heuristics are proposed for solving the problem. Computational experiments demonstrate that the heuristics are capable of producing near-optimal solutions quickly. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 67–82, 1998

Journal ArticleDOI
TL;DR: A class of decomposition algorithms for MCG are introduced that decompose MCG into a number of small CMCGs by adding vertices one at a time and building a partial graph and results show that these heuristics are very effective in reducing computation.
Abstract: Given a positive integer R and a weight for each vertex in a graph, the maximum-weight connected graph problem (MCG) is to find a connected subgraph with R vertices that maximizes the sum of their weights. MCG has applications to communication network design and facility expansion. The constrained MCG (CMCG) is MCG with a constraint that one predetermined vertex must be included in the solution. In this paper, we introduce a class of decomposition algorithms for MCG. These algorithms decompose MCG into a number of small CMCGs by adding vertices one at a time and building a partial graph. They differ in the ordering of adding vertices. Proving that finding an ordering that gives the minimum number of CMCGs is NP-complete, we present three heuristic algorithms. Experimental results show that these heuristics are very effective in reducing computation and that different orderings can significantly affect the number of CMCGs to be solved. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 817–837, 1998

Journal ArticleDOI
TL;DR: In this article, the authors investigated how a model based on n-player game theory concepts of Shapley value and nucleolus could be used as an alternative way of setting interchange fees.
Abstract: Banks have found it advantageous to connect their Automated Teller Machines (ATMs) in networks so that customers of one bank may use the ATMs of any bank in the network. When this occurs, an interchange fee is paid by the customer's bank to the one that owns the ATM. These have been set by historic interbank negotiation. The paper investigates how a model based on n-player game theory concepts of Shapley value and nucleolus could be used as an alternative way of setting such fees. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 407–417, 1998

Journal ArticleDOI
Jiefeng Xu, Leonard L. Lu
TL;DR: In this paper, the authors proposed an optimal algorithm for the single-item dynamic lot size model with all-unit discount and showed that their algorithm fails to find the optimal solution for some special cases.
Abstract: Federgruen and Lee [3] proposed an optimal algorithm for the single-item dynamic lot size model with all-unit discount. In this note we show that their algorithm fails to find the optimal solution for some special cases. We also provide a modification to the algorithm to handle them.

Journal ArticleDOI
TL;DR: One of the heuristics, the base interval approach, in which replacement cycles for all components are restricted to be multiples of a specified interval, is shown to be robustly accurate and consistent with maintenance policies used by commercial airlines in which periodic maintenance checks are made at regular intervals.
Abstract: This paper considers the maintenance of aircraft engine components where economies exist for joint replacement because (a) the aircraft must be pulled from service for maintenance and (b) repair of some components requires removal and disassembly of the engine. It is well known that the joint replacement problem is difficult to solve exactly, because the optimal solution does not have a simple structured form. Therefore, we formulate three easy-to-implement heuristics and test their performance against a lower bound for various numerical examples. One of our heuristics, the base interval approach, in which replacement cycles for all components are restricted to be multiples of a specified interval, is shown to be robustly accurate. Moreover, this heuristic is consistent with maintenance policies used by commercial airlines in which periodic maintenance checks are made at regular intervals. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 435–458, 1998

Journal ArticleDOI
TL;DR: This article presents a stochastic model for a single-period production system composed of several assembly/processing and storage facilities in series that allows the optimal inventory levels at the start of the period to be determined from the solution to the well-known newsboy problem.
Abstract: This article presents a stochastic model for a single-period production system composed of several assembly/processing and storage facilities in series. The production system operates under a composite strategy of the assemble to order and assemble in advance policies. The developed mathematical model is simpler and more compact than the ones provided in earlier articles. Moreover, the formulation allows the optimal inventory levels at the start of the period to be determined from the solution to the well-known newsboy problem. We also analyze the problem under the free distribution approach which only assumes the knowledge of the first two moments of the demand distribution. The robustness of this approach is tested by carrying an extensive experimental comparison using different demand distributions. Finally, the composite model is extended by considering the effects of some budgetary constraints. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 599–614, 1998