
Showing papers in "Naval Research Logistics in 2000"


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that the use of an exponential smoothing forecast by the retailer can cause the bullwhip effect and contrast these results with the increase in variability due to the use of a moving average forecast.
Abstract: An important phenomenon often observed in supply chain management, known as the bullwhip effect, implies that demand variability increases as one moves up the supply chain, i.e., as one moves away from customer demand. In this paper we quantify this effect for simple, two-stage, supply chains consisting of a single retailer and a single manufacturer. We demonstrate that the use of an exponential smoothing forecast by the retailer can cause the bullwhip effect and contrast these results with the increase in variability due to the use of a moving average forecast. We consider two types of demand processes, a correlated demand process and a demand process with a linear trend. We then discuss several important managerial insights that can be drawn from this research. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 269–286, 2000

451 citations
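As an illustration of the effect the paper quantifies, the sketch below simulates a retailer who faces correlated (AR(1)) demand, forecasts it with exponential smoothing, and follows a simple order-up-to rule; the variance of the orders placed upstream exceeds the variance of demand, which is the bullwhip effect. The demand parameters and the no-safety-stock order-up-to rule are illustrative assumptions, not the paper's exact model.

```python
import random

def bullwhip_ratio(alpha=0.3, rho=0.7, mu=50.0, sigma=5.0,
                   periods=20000, seed=1):
    """Simulate a retailer forecasting AR(1) demand with exponential
    smoothing and ordering up to the forecast; return
    Var(orders) / Var(demand).  A ratio above 1 is the bullwhip effect."""
    rng = random.Random(seed)
    demand_hist, order_hist = [], []
    d = mu            # current demand
    forecast = mu     # exponentially smoothed forecast
    prev_target = mu  # previous order-up-to level
    for _ in range(periods):
        # AR(1) demand: d_t = mu + rho * (d_{t-1} - mu) + noise
        d = mu + rho * (d - mu) + rng.gauss(0.0, sigma)
        forecast = alpha * d + (1 - alpha) * forecast
        target = forecast                    # order-up-to level (no safety stock)
        order = target - prev_target + d     # order-up-to ordering rule
        prev_target = target
        demand_hist.append(d)
        order_hist.append(order)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(order_hist) / var(demand_hist)

ratio = bullwhip_ratio()   # > 1: order variability amplified
```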


Journal ArticleDOI
TL;DR: The problem of processing a set of n jobs on m parallel machines where each machine must be maintained once during the planning horizon is studied, and branch and bound algorithms based on the column generation approach are proposed for solving both cases.
Abstract: Most machine scheduling models assume that the machines are available all of the time. However, in most realistic situations, machines need to be maintained and hence may become unavailable during certain periods. In this paper, we study the problem of processing a set of n jobs on m parallel machines where each machine must be maintained once during the planning horizon. Our objective is to schedule jobs and maintenance activities so that the total weighted completion time of jobs is minimized. Two cases are studied in this paper. In the first case, there are sufficient resources so that different machines can be maintained simultaneously if necessary. In the second case, only one machine can be maintained at any given time. In this paper, we first show that, even when all jobs have the same weight, both cases of the problem are NP-hard. We then propose branch and bound algorithms based on the column generation approach for solving both cases of the problem. Our algorithms are capable of optimally solving medium sized problems within a reasonable computational time. We note that the general problem where at most j machines, 1 ≤ j ≤ m, can be maintained simultaneously, can be solved similarly by the column generation approach proposed in this paper. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 145–165, 2000

196 citations


Journal ArticleDOI
TL;DR: In this article, the integer multiple criteria knapsack problem is studied and dynamic-programming-based approaches are proposed to find all the nondominated solutions; extensions, including time-dependent models, are also discussed.
Abstract: We study the integer multiple criteria knapsack problem and propose dynamic-programming-based approaches to finding all the nondominated solutions. Different and more complex models are discussed, including the binary multiple criteria knapsack problem, problems with more than one constraint, and multiperiod as well as time-dependent models. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 57–76, 2000

126 citations
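The core idea — a dynamic program that carries sets of nondominated value vectors instead of single values — can be sketched for the binary bi-criteria case as follows. This is a generic label-setting sketch, not the authors' algorithm, and the items and capacity are made up for illustration.

```python
def nondominated_knapsack(items, capacity):
    """items: list of (weight, value1, value2) with integer weights.
    Returns the set of nondominated (value1, value2) pairs achievable
    within the capacity, via a label-setting dynamic program."""
    # labels[c] = set of (v1, v2) value vectors achievable at weight c
    labels = [set() for _ in range(capacity + 1)]
    labels[0].add((0, 0))
    for w, v1, v2 in items:
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            for (a, b) in list(labels[c - w]):
                labels[c].add((a + v1, b + v2))
    # keep only the labels not dominated in both criteria
    all_labels = set().union(*labels)
    return {p for p in all_labels
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in all_labels)}

# hypothetical instance: 3 items, capacity 5
pareto = nondominated_knapsack([(2, 3, 1), (3, 1, 4), (4, 5, 2)], 5)
```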


Journal ArticleDOI
TL;DR: The opportunistic maintenance of a k‐out‐of‐n:G system with imperfect preventive maintenance (PM) with application to aircraft engine maintenance is studied in this paper, where partial failure is allowed.
Abstract: The opportunistic maintenance of a k-out-of-n:G system with imperfect preventive maintenance (PM) is studied in this paper, where partial failure is allowed. In many applications, the optimal maintenance actions for one component often depend on the states of the other components and system reliability requirements. Two new (τ, T) opportunistic maintenance models with the consideration of reliability requirements are proposed. In these two models, only minimal repairs are performed on failed components before time τ and the corrective maintenance (CM) of all failed components are combined with PM of all functioning but deteriorated components after τ; if the system survives to time T without perfect maintenance, it will be subject to PM at time T. Considering maintenance time, asymptotic system cost rate and availability are derived. The results obtained generalize and unify some previous research in this area. Application to aircraft engine maintenance is presented. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 223–239, 2000

117 citations


Journal ArticleDOI
TL;DR: In this article, a case-based reasoning (CBR) algorithm is proposed to improve the performance of a wide class of scheduling heuristics, including parametrized biased random sampling and priority rule-based methods.
Abstract: Most scheduling problems are notoriously intractable, so the majority of algorithms for them are heuristic in nature. Priority rule-based methods still constitute the most important class of these heuristics. Of these, in turn, parametrized biased random sampling methods have attracted particular interest, due to the fact that they outperform all other priority rule-based methods known. Yet, even the “best” such algorithms are unable to relate to the full range of instances of a problem: Usually there will exist instances on which other algorithms do better. We maintain that asking for the one best algorithm for a problem may be asking too much. The recently proposed concept of control schemes, which refers to algorithmic schemes that allow parametrized algorithms to be steered, opens up ways to refine existing algorithms in this regard and improve their effectiveness considerably. We extend this approach by integrating heuristics and case-based reasoning (CBR), an approach that has been successfully used in artificial intelligence applications. Using the resource-constrained project scheduling problem as a vehicle, we describe how to devise such a CBR system, systematically analyzing the effect of several criteria on algorithmic performance. Extensive computational results validate the efficacy of our approach and reveal performance similar or close to that of state-of-the-art heuristics. In addition, the analysis undertaken provides new insight into the behaviour of a wide class of scheduling heuristics. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 201–222, 2000

87 citations


Journal ArticleDOI
TL;DR: In this article, a generalized parallel replacement problem with both fixed and variable replacement costs, capital budgeting, and demand constraints is considered, and a deterministic, integer programming formulation is presented as replacement decisions must be integer.
Abstract: A generalized parallel replacement problem is considered with both fixed and variable replacement costs, capital budgeting, and demand constraints. The demand constraints specify that a number of assets, which may vary over time, are required each period over a finite horizon. A deterministic, integer programming formulation is presented as replacement decisions must be integer. However, the linear programming relaxation is shown to have integer extreme points if the economies of scale binary variables are fixed. This allows for the efficient computation of large parallel replacement problems as only a limited number of 0–1 variables are required. Examples are presented to provide insight into replacement rules, such as the “no-splitting-rule” from previous research, under various demand scenarios. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 40–56, 2000

61 citations


Journal ArticleDOI
TL;DR: This paper considers a practical scheduling problem commonly arising from batch production in a flexible manufacturing environment and finds the optimal batch composition and the optimal schedule of the batches so that the makespan is minimized.
Abstract: In this paper we consider a practical scheduling problem commonly arising from batch production in a flexible manufacturing environment. Different part-types are to be produced in a flexible manufacturing cell organized into a two-stage production line. The jobs are processed in batches on the first machine, and the completion time of a job is defined as the completion time of the batch containing it. When processing of all jobs in a batch is completed on the first machine, the whole batch of jobs is transferred intact to the second machine. A constant setup time is incurred whenever a batch is formed on any machine. The tradeoff between the setup times and batch processing times gives rise to the batch composition decision. The problem is to find the optimal batch composition and the optimal schedule of the batches so that the makespan is minimized. The problem is shown to be strongly NP-hard. We identify some special cases by introducing their corresponding solution methods. Heuristic algorithms are also proposed to derive approximate solutions. We conduct computational experiments to study the effectiveness of the proposed heuristics. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 128–144, 2000

55 citations
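The schedule-evaluation step described in the abstract — batches formed with a setup on the first machine and transferred intact to the second — can be sketched as follows. This is an illustrative model of computing the makespan for a given batch composition, not the paper's optimization algorithm; the batch data and setup time are hypothetical.

```python
def makespan(batches, p1, p2, setup=1.0):
    """Makespan of a given batch composition on a two-machine line.
    batches: list of lists of job indices (the batch composition);
    p1, p2: per-job processing times on machines 1 and 2;
    setup: constant setup time incurred per batch on each machine."""
    t1 = t2 = 0.0
    for batch in batches:
        # the whole batch completes on machine 1 ...
        t1 += setup + sum(p1[j] for j in batch)
        # ... and is then transferred intact to machine 2
        t2 = max(t2, t1) + setup + sum(p2[j] for j in batch)
    return t2

# hypothetical instance: jobs 0,1 in one batch, job 2 alone
ms = makespan([[0, 1], [2]], p1=[2, 3, 4], p2=[1, 1, 2], setup=1.0)
```

Merging jobs into fewer batches saves setups on both machines but delays the transfer of early jobs — the tradeoff that drives the batch composition decision.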


Journal ArticleDOI
TL;DR: In a direct comparison with an alternative method, this approach yields significant improvements both in cpu time and in the number of problem instances solved to optimality, particularly marked for problems involving larger numbers of feasible shifts.
Abstract: We present a branch-and-price technique for optimal staff scheduling with multiple rest breaks, meal break, and break windows. We devise and implement specialized branching rules suitable for solving the set covering type formulation implicitly, using column generation. Our methodology is more widely applicable and computationally superior to the alternative methods in the literature. We tested our methodology on 365 test problems involving between 1728 and 86400 shift variations, and 20 demand patterns. In a direct comparison with an alternative method, our approach yields significant improvements both in cpu time and in the number of problem instances solved to optimality. The improvements were particularly marked for problems involving larger numbers of feasible shifts. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 185–200, 2000

48 citations


Journal ArticleDOI
TL;DR: Computational results indicate that the best versions of the latter heuristics consistently produce optimal or near optimal solutions on test problems.
Abstract: This article considers the preventive flow interception problem (FIP) on a network. Given a directed network with known origin-destination path flows, each generating a certain amount of risk, the preventive FIP consists of optimally locating m facilities on the network in order to maximize the total risk reduction. A greedy search heuristic as well as several variants of an ascent search heuristic and of a tabu search heuristic are presented for the FIP. Computational results indicate that the best versions of the latter heuristics consistently produce optimal or near optimal solutions on test problems. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 287–303, 2000

45 citations
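A minimal version of the greedy search heuristic mentioned in the abstract might look like the sketch below: each facility is placed at the node that intercepts the largest amount of not-yet-intercepted path risk. The path and risk data are hypothetical, and real implementations would work on the network representation rather than raw node tuples.

```python
def greedy_flow_interception(path_risks, m):
    """path_risks: dict mapping each origin-destination path (a tuple of
    nodes) to the risk it generates.  Greedily locate m facilities,
    each time at the node intercepting the most uncovered risk."""
    chosen = set()
    uncovered = dict(path_risks)
    for _ in range(m):
        best_node, best_gain = None, 0.0
        candidates = {v for path in uncovered for v in path}
        for v in candidates:
            # total risk of still-uncovered paths passing through v
            gain = sum(r for path, r in uncovered.items() if v in path)
            if gain > best_gain:
                best_node, best_gain = v, gain
        if best_node is None:
            break
        chosen.add(best_node)
        uncovered = {p: r for p, r in uncovered.items()
                     if best_node not in p}
    return chosen

# hypothetical network flows: node 'b' lies on 8 units of risk
chosen = greedy_flow_interception(
    {('a', 'b', 'c'): 5.0, ('b', 'd'): 3.0, ('e', 'f'): 2.0}, 2)
```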


Journal ArticleDOI
TL;DR: The technique uses a master linear program to determine allocations among a set of control policies, and uses partially observable Markov decision processes (POMDPs) to determine improving policies using dual prices from the master LP.
Abstract: A new technique for solving large-scale allocation problems with partially observable states and constrained action and observation resources is introduced. The technique uses a master linear program (LP) to determine allocations among a set of control policies, and uses partially observable Markov decision processes (POMDPs) to determine improving policies using dual prices from the master LP. An application is made to a military problem where aircraft attack targets in a sequence of stages, with information acquired in one stage being used to plan attacks in the next. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 607–619, 2000

39 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of finding an object hidden in an integer interval is modeled as a two-person non-zero-sum game, where the searcher and the protector can also allocate resources to the points.
Abstract: A classic problem in Search Theory is one in which a searcher allocates resources to the points of the integer interval [1, n] in an attempt to find an object which has been hidden in them using a known probability function. In this paper we consider a modification of this problem in which there is a protector who can also allocate resources to the points; allocating these resources makes it more difficult for the searcher to find an object. We model the situation as a two-person non-zero-sum game so that we can take into account the fact that using resources can be costly. It is shown that this game has a unique Nash equilibrium when the searcher's probability of finding an object located at point i is of the form (1 − exp(−λᵢxᵢ)) exp(−λᵢyᵢ) when the searcher and protector allocate resources xᵢ and yᵢ respectively to point i. An algorithm to find this Nash equilibrium is given.

Journal ArticleDOI
TL;DR: In this article, a nonparametric bootstrap methodology for setting inventory reorder points and a simple inequality for identifying existing reorder points that are unreasonably high are presented.
Abstract: This paper develops and applies a nonparametric bootstrap methodology for setting inventory reorder points and a simple inequality for identifying existing reorder points that are unreasonably high. We demonstrate that an empirically based bootstrap method is both feasible and calculable for large inventories by applying it to the 1st Marine Expeditionary Force General Account, an inventory consisting of $20–30 million of stock for 10–20,000 different types of items. Further, we show that the bootstrap methodology works significantly better than the existing methodology based on mean days of supply. In fact, we demonstrate performance equivalent to the existing system with a reduced inventory at one-half to one-third the cost; conversely, we demonstrate significant improvement in fill rates and other inventory performance measures for an inventory of the same cost. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 459–478, 2000
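The general idea of an empirically based bootstrap reorder point can be sketched as follows: resample historical daily demands with replacement to build a bootstrap distribution of lead-time demand, and set the reorder point at the desired service-level quantile. This is a sketch of the broad approach, not the exact procedure applied to the Marine Expeditionary Force data; the demand sample, lead time, and service level are made up.

```python
import random

def bootstrap_reorder_point(daily_demands, lead_time_days,
                            service_level=0.95, n_boot=5000, seed=7):
    """Bootstrap estimate of a reorder point: each replicate sums
    `lead_time_days` daily demands drawn with replacement from the
    historical sample; the reorder point is the service-level
    quantile of the replicated lead-time totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(daily_demands) for _ in range(lead_time_days))
        for _ in range(n_boot)
    )
    idx = min(int(service_level * n_boot), n_boot - 1)
    return totals[idx]

# hypothetical demand history (units/day) and a 5-day lead time
r = bootstrap_reorder_point([0, 1, 2, 3, 4], lead_time_days=5)
```

No distributional form is assumed for demand, which is the point of the nonparametric approach.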


Journal ArticleDOI
TL;DR: In this article, the M/G/1 retrial queue with repeated attempts is considered, where a customer who finds the server busy, leaves the service area and joins a pool of unsatisfied customers.
Abstract: The M/G/1 queue with repeated attempts is considered. A customer who finds the server busy, leaves the service area and joins a pool of unsatisfied customers. Each customer in the pool repeats his demand after a random amount of time until he finds the server free. We focus on the busy period L of the M/G/1 retrial queue. The structure of the busy period and its analysis in terms of Laplace transforms have been discussed by several authors. However, this solution has serious limitations in practice. For instance, we cannot compute the first moments of L by direct differentiation. This paper complements the existing work and provides a direct method of calculation for the second moment of L.

Journal ArticleDOI
TL;DR: This paper considers the problem of locating one or more new facilities on a continuous plane, where the destinations or customers, and even the facilities, may be represented by areas and not points, and finds that the relevant distances are the distances from the closest points in the facility to the closest point in the demand areas.
Abstract: This paper considers the problem of locating one or more new facilities on a continuous plane, where the destinations or customers, and even the facilities, may be represented by areas and not points. The objective is to locate the facilities in order to minimize a sum of transportation costs. What is new in this study is that the relevant distances are the distances from the closest point in the facility to the closest point in the demand areas. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 77–84, 2000
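For rectangular areas, the closest-point-to-closest-point distance the paper takes as the relevant distance reduces to a short computation; the sketch below assumes axis-aligned rectangles as a simple illustrative case.

```python
def rect_distance(r1, r2):
    """Closest-point distance between two axis-aligned rectangles,
    each given as (xmin, ymin, xmax, ymax).
    Returns 0 when the rectangles touch or overlap."""
    # gap along each axis, clamped at zero when the intervals overlap
    dx = max(r1[0] - r2[2], r2[0] - r1[2], 0.0)
    dy = max(r1[1] - r2[3], r2[1] - r1[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5
```

With such a distance in hand, the location objective is to place the facility area so that the sum of transportation costs, driven by these closest-point distances to the demand areas, is minimized.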



Journal ArticleDOI
TL;DR: The results show that the Lagrangian approach to the maximum dispersion problems is reasonably fast, that it yields heuristic solutions which provide good lower bounds on the optimum solution values for both the sum and the minimum problems, and further that it produces decent upper bounds in the case of the sum problem.
Abstract: We address the so-called maximum dispersion problems where the objective is to maximize the sum or the minimum of interelement distances amongst a subset chosen from a given set. The problems arise in a variety of contexts including the location of obnoxious facilities, the selection of diverse groups, and the identification of dense subgraphs. They are known to be computationally difficult. In this paper, we propose a Lagrangian approach toward their solution and report the results of an extensive computational experimentation. Our results show that our Lagrangian approach is reasonably fast, that it yields heuristic solutions which provide good lower bounds on the optimum solution values for both the sum and the minimum problems, and further that it produces decent upper bounds in the case of the sum problem. For the sum problem, the results also show that the Lagrangian heuristic compares favorably against several existing heuristics. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 97–114, 2000


Journal ArticleDOI
TL;DR: In this article, the first known bounds for the fill-rate constrained (Q, r) inventory problem are derived. This problem is hard to solve because the fill rate constraint is not convex in (Q, r) unless additional assumptions are made about the distribution of demand during the lead time.
Abstract: A classical and important problem in stochastic inventory theory is to determine the order quantity (Q) and the reorder level (r) to minimize inventory holding and backorder costs subject to a service constraint that the fill rate, i.e., the fraction of demand satisfied by inventory in stock, is at least equal to a desired value. This problem is often hard to solve because the fill rate constraint is not convex in (Q, r) unless additional assumptions are made about the distribution of demand during the lead time. As a consequence, there are no known algorithms, other than exhaustive search, that are available for solving this problem in its full generality. Our paper derives the first known bounds to the fill-rate constrained (Q, r) inventory problem. We derive upper and lower bounds for the optimal values of the order quantity and the reorder level for this problem that are independent of the distribution of demand during the lead time and its variance. We show that the classical economic order quantity is a lower bound on the optimal ordering quantity. We present an efficient solution procedure that exploits these bounds and has a guaranteed bound on the error. When the Lagrangian of the fill rate constraint is convex or when the fill rate constraint does not exist, our bounds can be used to enhance the efficiency of existing algorithms. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 635–656, 2000
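The classical economic order quantity, which the paper shows is a lower bound on the optimal order quantity of the fill-rate constrained problem, is a one-line computation; the parameter values below are illustrative.

```python
def eoq(demand_rate, order_cost, holding_cost):
    """Classical economic order quantity:
    Q* = sqrt(2 * K * D / h), where D is the demand rate, K the fixed
    ordering cost, and h the holding cost per unit per unit time.
    Per the abstract, this is a distribution-free lower bound on the
    optimal Q of the fill-rate constrained (Q, r) problem."""
    return (2.0 * order_cost * demand_rate / holding_cost) ** 0.5

# e.g. demand 1000 units/yr, K = 50 per order, h = 2 per unit-yr
q = eoq(1000, 50, 2)
```

Such distribution-free bounds shrink the search region, so an exhaustive or bisection search over (Q, r) only needs to examine values above the EOQ.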

Journal ArticleDOI
TL;DR: An approximation model is developed from which a lower bound on the per period expected cost of a single item, two-echelon production-inventory system is obtained.
Abstract: The system under study is a single item, two-echelon production-inventory system consisting of a capacitated production facility, a central warehouse, and M regional distribution centers that satisfy stochastic demand. Our objective is to determine a system base-stock level which minimizes the long run average system cost per period. Central to the approach are (1) an inventory allocation model and associated convex cost function designed to allocate a given amount of system inventory across locations, and (2) a characterization of the amount of available system inventory using the inventory shortfall random variable. An exact model must consider the possibility that inventories may be imbalanced in a given period. By assuming inventory imbalances cannot occur, we develop an approximation model from which we obtain a lower bound on the per period expected cost. Through an extensive simulation study, we analyze the quality of our approximation, which on average performed within 0.50% of the lower bound. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 377–398, 2000


Journal ArticleDOI
TL;DR: The results partly extend the classic N or T policy to a more practical (N, T)-policy and the conclusions obtained for single server system to a system consisting of m (m ≥ 1) servers.
Abstract: In this paper, two different kinds of (N, T)-policies for an M/M/m queueing system are studied. The system operates only intermittently and is shut down when no customers are present. A fixed setup cost of K > 0 is incurred each time the system is reopened, and a holding cost of h > 0 per unit time is incurred for each customer present. The two (N, T)-policies studied for this queueing system with cost structures are as follows: (1) the system is reactivated as soon as N customers are present or the waiting time of the leading customer reaches a predefined time T, and (2) the system is reactivated as soon as N customers are present or a predefined time T has elapsed since the end of the last busy period. The equations satisfied by the optimal policy (N*, T*) for minimizing the long-run average cost per unit time in both cases are obtained. In particular, we obtain the explicit optimal joint policy (N*, T*) and optimal objective value for the case of a single server, the explicit optimal policy N* and optimal objective value for the case of multiple servers when only the customer threshold N is used, and the explicit optimal policy T* and optimal objective value for the case of multiple servers when only the time threshold T is used. These results partly extend (1) the classic N- or T-policy to the more practical (N, T)-policy and (2) the conclusions obtained for single-server systems to systems consisting of m (m ≥ 1) servers. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 240–258, 2000
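To give a feel for the N-threshold part of such policies, the sketch below minimizes a deliberately simplified long-run average cost for an N-policy single-server queue: the setup cost spread over a cycle plus average holding during the buildup phase, c(N) = Kλ/N + h(N − 1)/2. This stylized cost (which drops service-dependent terms that do not vary with N) is an assumption of this sketch, not the paper's cost function; it recovers the familiar square-root rule N* ≈ sqrt(2Kλ/h).

```python
def best_N(K, h, lam, N_max=200):
    """Grid search for the customer threshold N minimizing the
    simplified N-policy cost c(N) = K*lam/N + h*(N-1)/2.
    K: setup cost, h: holding cost rate, lam: arrival rate."""
    def cost(N):
        return K * lam / N + h * (N - 1) / 2.0
    return min(range(1, N_max + 1), key=cost)
```

For example, with K = 100, h = 2, and λ = 4 the square-root rule gives N* = sqrt(2·100·4/2) = 20, and the grid search agrees.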

Journal ArticleDOI
TL;DR: A new model of optimal software testing is presented in which testing is done sequentially using a set of test cases; the model parameters, consisting of testing costs and failure rates, depend on the cases used and the operations performed.
Abstract: One of the important features of any software system is its operational profile. This is simply the set of all operations that a software system is designed to perform and the occurrence probabilities of these operations. We present a new model on optimal software testing such that testing is done sequentially using a set of test cases. There may be failures due to the operations in each of these cases. The model parameters, consisting of testing costs and failure rates, all depend on the cases used and the operations performed. Our aim is to find the optimal testing durations in all of the cases in order to minimize the total expected cost. This problem leads to interesting decision models involving nonlinear programming formulations that possess explicit analytical solutions under reasonable assumptions. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 620–634, 2000

Journal ArticleDOI
TL;DR: Inference is undertaken for a stochastic form of the Lanchester combat model, assessing the type of battle that occurred and whether it makes any difference to the number of casualties if an army is attacking or defending.
Abstract: We undertake inference for a stochastic form of the Lanchester combat model. In particular, given battle data, we assess the type of battle that occurred and whether or not it makes any difference to the number of casualties if an army is attacking or defending. Our approach is Bayesian and we use modern computational techniques to fit the model. We illustrate our method using data from the Ardennes campaign. We compare our results with previous analyses of these data by Bracken and Fricker. Our conclusions are somewhat different to those of Bracken. Where he suggests that a linear law is appropriate, we show that the logarithmic or linear-logarithmic laws fit better. We note however that the basic Lanchester modeling assumptions do not hold for the Ardennes data. Using Fricker's modified data, we show that although his “super-logarithmic” law fits best, the linear, linear-logarithmic, and logarithmic laws cannot be ruled out. We suggest that Bayesian methods can be used to make inference for battles in progress. We point out a number of advantages: Prior information from experts or previous battles can be incorporated; predictions of future casualties are easily made; more complex models can be analysed using stochastic simulation techniques. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 541–558, 2000

Journal ArticleDOI
TL;DR: This short note studies a two-machine flowshop scheduling problem with the additional no-idle feasibility constraint and the total completion time criterion function and shows that one of the few papers which deal with this special problem contains incorrect claims.
Abstract: In this short note we study a two-machine flowshop scheduling problem with the additional no-idle feasibility constraint and the total completion time criterion function. We show that one of the few papers which deal with this special problem contains incorrect claims and suggest how these claims can be rectified. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 353–358, 2000

Journal ArticleDOI
TL;DR: The authors present two compound algorithms, each consisting of three linear greedy-like algorithms running independently. It is shown that the worst-case performance of the heuristic for the ordinary partitioning problem is 12/11, while the second procedure for partitioning with kernels has a bound of 8/7.
Abstract: For a given set S of nonnegative integers the partitioning problem asks for a partition of S into two disjoint subsets S1 and S2 such that the sum of elements in S1 is equal to the sum of elements in S2. If additionally two elements (the kernels) r1, r2 ∈ S are given which must not be assigned to the same set Si, we get the partitioning problem with kernels. For these NP-complete problems the authors present two compound algorithms, each consisting of three linear greedy-like algorithms running independently. It is shown that the worst-case performance of the heuristic for the ordinary partitioning problem is 12/11, while the second procedure for partitioning with kernels has a bound of 8/7. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 593–601, 2000
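A generic greedy baseline in the spirit of the linear heuristics analyzed — not the authors' compound algorithm — assigns elements largest-first to whichever set currently has the smaller sum, with kernels pre-assigned to different sets.

```python
def greedy_partition(numbers, kernels=None):
    """Largest-first greedy partition of nonnegative integers into two
    sets with sums as equal as possible.  If kernels=(r1, r2) is given,
    those two elements are pre-assigned to different sets, as in the
    partitioning problem with kernels."""
    s1, s2 = [], []
    sum1 = sum2 = 0
    rest = list(numbers)
    if kernels:
        r1, r2 = kernels
        rest.remove(r1)
        rest.remove(r2)
        s1.append(r1); sum1 = r1   # kernel r1 forced into the first set
        s2.append(r2); sum2 = r2   # kernel r2 forced into the second set
    for x in sorted(rest, reverse=True):
        if sum1 <= sum2:
            s1.append(x); sum1 += x
        else:
            s2.append(x); sum2 += x
    return s1, s2

s1, s2 = greedy_partition([1, 2, 3, 4])              # perfect split 5 / 5
a, b = greedy_partition([1, 2, 3, 4], kernels=(1, 2))
```

This simple rule runs in O(n log n); the paper's contribution is combining several such linear procedures and proving the 12/11 and 8/7 worst-case bounds.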


Journal ArticleDOI
TL;DR: In this paper, a necessary and sufficient condition for the existence of an undominated core was proposed, as well as a sufficient and necessary condition for coincidence of the intersection core and its undominated counterpart.
Abstract: This note proposes a necessary and sufficient condition for the existence of an undominated core and a necessary and sufficient condition for coincidence of the intersection core and the undominated core. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 455–458, 2000

Journal ArticleDOI
TL;DR: This work studies discrete-time parallel queues with two identical servers and no jockeying, and obtains the steady-state workload distribution in the form of a probability generating function.
Abstract: We study discrete-time, parallel queues with two identical servers. Customers arrive randomly at the system and join the queue with the shortest workload, defined as the total service time required for the server to complete all the customers in the queue. The arrivals are assumed to follow a geometric distribution and the service times are assumed to have a general distribution. It is a no-jockeying queue. The two-dimensional state space is truncated into a banded array. The resulting modified queue is studied using the method of probability generating functions (pgf). The workload distribution in steady state is obtained in the form of a pgf. A special case where the service time is a deterministic constant is further investigated. Numerical examples are presented. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 440–454, 2000
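The join-shortest-workload dynamic the paper analyzes can be illustrated with a small discrete-time simulation: a Bernoulli arrival each slot (i.e., geometric interarrival times), a customer joining the server with the smaller remaining workload, and no jockeying afterward. The arrival probability and the uniform service-time distribution below are illustrative assumptions; the paper's analysis allows a general service distribution and works with the pgf rather than simulation.

```python
import random

def simulate_jsw(p_arrival=0.4, service_times=(1, 2, 3),
                 periods=50000, seed=3):
    """Discrete-time simulation of two parallel servers under the
    join-shortest-workload rule.  Returns the time-average total
    workload (remaining service time summed over both servers)."""
    rng = random.Random(seed)
    w1 = w2 = 0   # remaining workloads of the two servers
    acc = 0
    for _ in range(periods):
        if rng.random() < p_arrival:        # Bernoulli arrival this slot
            s = rng.choice(service_times)   # sampled service requirement
            if w1 <= w2:                    # join the shorter workload
                w1 += s
            else:
                w2 += s
        # each busy server completes one unit of work per slot
        w1 = max(0, w1 - 1)
        w2 = max(0, w2 - 1)
        acc += w1 + w2
    return acc / periods

avg_workload = simulate_jsw()
```

With these parameters the offered load is 0.4 × 2 = 0.8 across two servers, so the system is stable and the average workload stays small.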