
Showing papers in "Operations Research in 1998"


Journal ArticleDOI
TL;DR: In this paper, column generation methods for integer programs with a huge number of variables are discussed, including implicit pricing of nonbasic variables to generate new columns or to prove LP optimality at a node of the branch-and-bound tree.
Abstract: We discuss formulations of integer programs with a huge number of variables and their solution by column generation methods, i.e., implicit pricing of nonbasic variables to generate new columns or to prove LP optimality at a node of the branch-and-bound tree. We present classes of models for which this approach decomposes the problem, provides tighter LP relaxations, and eliminates symmetry. We then discuss computational issues and implementation of column generation, branch-and-bound algorithms, including special branching rules and efficient ways to solve the LP relaxation. We also discuss the relationship with Lagrangian duality.

2,248 citations
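
A minimal sketch of the delayed column generation loop described above, applied to the textbook cutting-stock example: the restricted master LP is solved, its dual prices drive a knapsack pricing problem, and columns with negative reduced cost are added until none remain. This is an illustrative sketch, not code from the paper; it assumes SciPy (>= 1.7) is available so that linprog's HiGHS backend reports dual values as marginals, and all data are hypothetical.

```python
# Illustrative delayed column generation for the cutting-stock problem.
# A sketch of the general scheme discussed above, not the paper's implementation.
import numpy as np
from scipy.optimize import linprog

W = 100                                   # roll width
widths = np.array([45, 36, 31, 14])
demand = np.array([97, 610, 395, 211])

# Initial restricted master: one single-item pattern per width.
patterns = [np.eye(len(widths), dtype=int)[i] * (W // widths[i]) for i in range(len(widths))]

def solve_master(patterns):
    A = np.column_stack(patterns)         # coverage of each width by each pattern
    c = np.ones(A.shape[1])               # minimize the number of rolls used
    # A x >= demand rewritten as -A x <= -demand
    res = linprog(c, A_ub=-A, b_ub=-demand, bounds=(0, None), method="highs")
    duals = -res.ineqlin.marginals        # dual prices of the demand rows
    return res, duals

def price(duals):
    # Unbounded knapsack: most valuable feasible cutting pattern under current duals.
    best = np.zeros(W + 1)
    choice = [None] * (W + 1)
    for cap in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap] = best[cap - w] + duals[i]
                choice[cap] = i
    pattern, cap = np.zeros(len(widths), dtype=int), W
    while choice[cap] is not None:        # recover the pattern achieving best[W]
        pattern[choice[cap]] += 1
        cap -= widths[choice[cap]]
    return best[W], pattern

while True:
    res, duals = solve_master(patterns)
    value, pattern = price(duals)
    if value <= 1 + 1e-9:                 # no column with negative reduced cost remains
        break
    patterns.append(pattern)

print("LP bound on number of rolls:", res.fun)
```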


Journal ArticleDOI
TL;DR: In this article, the authors proposed a model that takes into account the capacities of the National Airspace System (NAS) as well as the capacities at the airports, and showed that the resulting formulation is rather strong as some of the proposed inequalities are facet defining for the convex hull of solutions.
Abstract: Throughout the United States and Europe, demand for airport use has been increasing rapidly, while airport capacity has been stagnating. Over the last ten years the number of passengers has increased by more than 50 percent and is expected to continue increasing at this rate. Acute congestion in many major airports has been the unfortunate result. For U.S. airlines, the expected yearly cost of the resulting delays is currently estimated at $3 billion. To put this number in perspective, the total reported losses of all U.S. airlines amounted to approximately $2 billion in 1991 and $2.5 billion in 1990. Furthermore, every day 700 to 1100 flights are delayed by 15 minutes or more. European airlines are in a similar plight. Optimally controlling the flow of aircraft, either by adjusting their release times into the network (ground-holding) or their speed once they are airborne, is a cost-effective method to reduce the impact of congestion on the air traffic system. This paper makes the following contributions: (a) we build a model that takes into account the capacities of the National Airspace System (NAS) as well as the capacities at the airports, and we show that the resulting formulation is rather strong, as some of the proposed inequalities are facet defining for the convex hull of solutions; (b) we address the complexity of the problem; (c) we extend the model to account for several variations of the basic problem, most notably how to reroute flights and how to handle banks in the hub-and-spoke system; (d) we show that by relaxing some of our constraints we obtain a previously addressed problem, and that the LP relaxation bound of our formulation is at least as strong as all others proposed in the literature for this problem; and (e) we solve large-scale, realistically sized problems with several thousand flights.

465 citations


Journal ArticleDOI
TL;DR: A stochastic version of the interdictor's problem: Minimize the expected maximum flow through the network when interdiction successes are binary random variables is formulated and solved.
Abstract: Using limited assets, an interdictor attempts to destroy parts of a capacitated network through which an adversary will subsequently maximize flow. We formulate and solve a stochastic version of the interdictor's problem: Minimize the expected maximum flow through the network when interdiction successes are binary random variables. Extensions are made to handle uncertain arc capacities and other realistic variations. These two-stage stochastic integer programs have applications to interdicting illegal drugs and to reducing the effectiveness of a military force moving materiel, troops, information, etc., through a network in wartime. Two equivalent model formulations allow Jensen's inequality to be used to compute both lower and upper bounds on the objective, and these bounds are improved within a sequential approximation algorithm. Successful computational results are reported on networks with over 100 nodes, 80 interdictable arcs, and 180 total arcs.

367 citations
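
A toy illustration of the objective being approximated above: for one fixed interdiction plan on a small network with Bernoulli interdiction successes, the expected maximum flow can be evaluated by enumerating outcomes, and, because the max-flow value is a concave function of the arc capacities, Jensen's inequality implies that plugging in expected capacities can only overestimate it. The network, plan, and success probability below are made up; this is not the paper's sequential approximation algorithm or its specific bounds.

```python
# Evaluating one interdiction plan on a small capacitated network: the expected
# maximum flow when each interdicted arc is destroyed independently.
from collections import deque
from itertools import product

def max_flow(n, cap, s, t):
    """Edmonds-Karp on a dense capacity matrix."""
    flow = [[0.0] * n for _ in range(n)]
    total = 0.0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # Find the bottleneck along the augmenting path, then push flow.
        v, push = t, float("inf")
        while v != s:
            u = parent[v]
            push = min(push, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += push
            flow[v][u] -= push
            v = u
        total += push

# Node 0 = source, node 3 = sink; cap[u][v] is the arc capacity (hypothetical data).
cap = [[0, 8, 6, 0],
       [0, 0, 3, 7],
       [0, 0, 0, 9],
       [0, 0, 0, 0]]
interdicted = [(0, 1), (2, 3)]   # arcs attacked under this plan
p_success = 0.7                  # each attack succeeds independently

exp_flow = 0.0
for outcome in product([0, 1], repeat=len(interdicted)):
    c = [row[:] for row in cap]
    prob = 1.0
    for (u, v), hit in zip(interdicted, outcome):
        prob *= p_success if hit else 1 - p_success
        if hit:
            c[u][v] = 0
    exp_flow += prob * max_flow(4, c, 0, 3)

# Jensen: the max-flow value is concave in the capacities, so using expected
# capacities gives an upper bound on the expected max flow.
c_bar = [row[:] for row in cap]
for (u, v) in interdicted:
    c_bar[u][v] *= (1 - p_success)
print(round(exp_flow, 3), "<=", max_flow(4, c_bar, 0, 3))
```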


Journal ArticleDOI
TL;DR: DRIVE (Dynamic Routing of Independent VEhicles), a planning module to be incorporated in a decision support system for the direct transportation at Van Gend and Loos BV, has produced very encouraging results.
Abstract: We present DRIVE (Dynamic Routing of Independent VEhicles), a planning module to be incorporated in a decision support system for the direct transportation at Van Gend and Loos BV. Van Gend and Loos BV is the largest company providing road transportation in the Benelux, with about 1400 vehicles transporting 160,000 packages from thousands of senders to tens of thousands of addressees per day. The heart of DRIVE is a branch-and-price algorithm. Approximation and incomplete optimization techniques as well as a sophisticated column management scheme have been employed to create the right balance between solution speed and solution quality. DRIVE has been tested by simulating a dynamic planning environment with real-life data and has produced very encouraging results.

358 citations


Journal ArticleDOI
TL;DR: An overview of the prevailing models for hazardous materials transport is provided, and the question "Does it matter how we quantify transport risk?" is addressed.
Abstract: The transport of hazardous materials is an important strategic and tactical decision problem. Risks associated with this activity make transport planning difficult. Although most existing analytical approaches for hazardous materials transport account for risk, there is no agreement among researchers on how to model the associated risks. This paper provides an overview of the prevailing models, and addresses the question "Does it matter how we quantify transport risk?" Our empirical analysis on the U.S. road network suggests that different risk models usually select different "optimal" paths for a hazmat shipment between a given origin-destination pair. Furthermore, the optimal path for one model could perform very poorly under another model. This suggests that researchers and practitioners must pay considerable attention to the modeling of risks in hazardous materials transport.

343 citations
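
The observation that different risk models can select different "optimal" paths is easy to reproduce on a toy network: give each arc an incident probability and a population-exposure consequence, and minimize three common criteria (expected consequence, incident probability, population exposure), each via a shortest-path computation. The sketch below uses made-up data and is only illustrative; it is not the paper's U.S. road-network analysis.

```python
# Toy illustration: different hazmat risk criteria can select different routes.
import heapq, math

# arc: (from, to, incident_prob, population_exposed) -- hypothetical data
arcs = [("A", "B", 0.00010, 2000), ("B", "D", 0.00012, 2500),
        ("A", "C", 0.00030,  200), ("C", "D", 0.00025,  300)]

def shortest_path(weight):
    graph = {}
    for u, v, p, pop in arcs:
        graph.setdefault(u, []).append((v, weight(p, pop)))
    dist, prev, heap = {"A": 0.0}, {}, [(0.0, "A")]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = ["D"], "D"
    while node != "A":
        node = prev[node]
        path.append(node)
    return list(reversed(path))

criteria = {
    "expected consequence": lambda p, pop: p * pop,
    # minimizing -log(1 - p) along a path minimizes the overall incident probability
    "incident probability": lambda p, pop: -math.log(1.0 - p),
    "population exposure":  lambda p, pop: pop,
}
for name, w in criteria.items():
    print(name, "->", shortest_path(w))
```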


Journal ArticleDOI
TL;DR: It is shown that the posterior probabilities derived from Bayes theorem are part of the Analytic Hierarchy Process (AHP) framework, and hence thatBayes theorem is a sufficient condition of a solution in the sense of the AHP.
Abstract: Judgments are needed in medical diagnosis to determine what tests to perform given certain symptoms. For many diseases, what information to gather on symptoms and what combination of symptoms lead to a given disease are not well known. Even when the number of symptoms is small, the required number of experiments to generate adequate statistical data can be unmanageably large. There is need in diagnosis for an integrative model that incorporates both statistical data and expert judgment. When statistical data are present but no expert judgment is available, one property of this model should be to reproduce results obtained through time honored procedures such as Bayes theorem. When expert judgment is also present, it should be possible to combine judgment with statistical data to identify the disease that best describes the observed symptoms. Here we are interested in the Analytic Hierarchy Process (AHP) framework that deals with dependence among the elements or clusters of a decision structure to combine statistical and judgmental information. It is shown that the posterior probabilities derived from Bayes theorem are part of this framework, and hence that Bayes theorem is a sufficient condition of a solution in the sense of the AHP. An illustration is given as to how a purely judgment-based model in the AHP can be used in medical diagnosis. The application of the model to a case study demonstrates that both statistics and judgment can be combined to provide diagnostic support to medical practitioner colleagues with whom we have interacted in doing this work.

288 citations
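
For the purely statistical side of this integration, the quantity the paper shows can be recovered within the AHP's dependence (supermatrix) framework is the ordinary Bayes posterior over diseases given an observed symptom pattern. A tiny sketch with hypothetical priors and likelihoods:

```python
# Posterior disease probabilities from Bayes' theorem: the statistical quantity
# that the paper shows can also be reproduced within the AHP framework.
# Priors and symptom likelihoods below are hypothetical.
priors = {"disease_1": 0.3, "disease_2": 0.5, "disease_3": 0.2}
likelihood = {                 # P(observed symptom pattern | disease)
    "disease_1": 0.40,
    "disease_2": 0.05,
    "disease_3": 0.25,
}

joint = {d: priors[d] * likelihood[d] for d in priors}
evidence = sum(joint.values())
posterior = {d: joint[d] / evidence for d in joint}
print(posterior)   # the disease with the largest posterior best explains the symptoms
```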


Journal ArticleDOI
TL;DR: A hierarchical model framework is presented for the analysis of the stabilizing effect of inventories in multiechelon manufacturing/distribution supply chains; sufficient conditions are defined for the existence of stabilization and related to the optimality of myopic control policies.
Abstract: This paper presents a hierarchical model framework for the analysis of the stabilizing effect of inventories in multiechelon manufacturing/distribution supply chains. This framework is used to compare the variance of demand to the variance of replenishment orders at different echelons of the system (e.g., retailer, wholesaler). Empirical data and experience with management games suggest that, in most industries, inventory management policies can have a destabilizing effect by increasing the volatility of demand as it passes up through the chain (the "bullwhip" effect). Our model helps to explain these observations and indicates mechanisms that can promote stabilization. The analysis results also define sufficient conditions for the existence of stabilization and relate these conditions to the optimality of myopic control policies.

283 citations
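
The amplification the model addresses can be seen in a few lines of simulation: a single echelon following an order-up-to policy with a moving-average forecast typically places orders that are more variable than the demand it faces. The sketch below is a generic illustration of this bullwhip behavior with hypothetical parameters, not the paper's hierarchical model.

```python
# A generic single-echelon illustration of the bullwhip effect: an order-up-to
# policy with a moving-average forecast amplifies demand variability.
import random, statistics

random.seed(0)
T, L, window = 5000, 2, 5          # periods, replenishment lead time, forecast window
demand = [max(0.0, random.gauss(100, 10)) for _ in range(T)]

orders, history = [], demand[:window]
prev_target = None
for t in range(window, T):
    forecast = sum(history[-window:]) / window
    target = forecast * (L + 1)     # order-up-to level (no explicit safety stock)
    if prev_target is None:
        order = demand[t]
    else:
        # Order covers current demand plus the change in the order-up-to level.
        order = max(0.0, demand[t] + target - prev_target)
    orders.append(order)
    prev_target = target
    history.append(demand[t])

print("demand variance :", statistics.variance(demand[window:]))
print("order variance  :", statistics.variance(orders))
```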


Journal ArticleDOI
TL;DR: This paper develops and analyzes a model of an oil property, where production rates and oil prices both vary stochastically over time and the decision maker may terminate production or accelerate production by drilling additional wells.
Abstract: There are two major competing procedures for evaluating risky projects where managerial flexibility plays an important role: one is decision analytic, based on stochastic dynamic programming, and the other is option pricing theory (or contingent claims analysis), based on the no-arbitrage theory of financial markets. In this paper, we show how these two approaches can be profitably integrated to evaluate oil properties. We develop and analyze a model of an oil property-either a developed property or a proven but undeveloped reserve-where production rates and oil prices both vary stochastically over time and, at any time, the decision maker may terminate production or accelerate production by drilling additional wells. The decision maker is assumed to be risk averse and can hedge price risks by trading oil futures contracts. We also describe extensions of this model that incorporate additional uncertainties and options, discuss its use in exploration decisions and in evaluating a portfolio of properties rather than a single property, and briefly describe other potential applications of this integrated methodology.

243 citations
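
The integration of option-pricing and dynamic-programming ideas can be illustrated with a deliberately simplified lattice: oil prices follow a risk-neutral binomial tree, production declines exponentially, and at each node the operator may abandon the property. The sketch below is only a caricature of the model in the paper (which also treats stochastic production, acceleration by drilling, risk aversion, and hedging with futures); all parameters are hypothetical.

```python
# A stripped-down valuation lattice in the spirit of the integrated approach:
# risk-neutral binomial oil prices, exponentially declining output, and an
# abandonment option evaluated by backward induction.
import math

T, dt = 10, 1.0            # years, step size
r, sigma = 0.05, 0.30      # risk-free rate, price volatility
u = math.exp(sigma * math.sqrt(dt))
d = 1.0 / u
q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
disc = math.exp(-r * dt)

P0, q0, decline = 20.0, 100_000.0, 0.15   # initial price, initial rate, decline/yr
op_cost = 1_200_000.0                      # fixed operating cost per year

# value[j] = property value at the current stage when the price has moved up j times
value = [0.0] * (T + 1)                    # terminal condition: worthless at year T
for t in range(T - 1, -1, -1):
    rate = q0 * math.exp(-decline * t)     # production rate in year t
    new_value = []
    for j in range(t + 1):
        price = P0 * (u ** j) * (d ** (t - j))
        cash = price * rate - op_cost
        cont = disc * (q * value[j + 1] + (1 - q) * value[j])
        new_value.append(max(0.0, cash + cont))   # abandon if continuing is worth < 0
    value = new_value

print("estimated property value:", round(value[0]))
```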


Journal ArticleDOI
TL;DR: The Focus Issue on Manufacturing Logistics was created by pulling together previously accepted papers from all topical areas of the journal and presents state-of-the-art thinking on manufacturing logistics.
Abstract: I am pleased to present this Focus Issue on Manufacturing Logistics. Unlike a special issue, where new papers are solicited that deal with a theme, this issue was created by pulling together previously accepted papers from all topical areas of the journal. Thus, this focused issue achieves two goals. First, it creates a volume that presents state-of-the-art thinking on manufacturing logistics. Second, it reduces the backlog of accepted papers, thereby reducing the publication delay (i.e., the time from acceptance to publication), which had been as long as 24 months. Our goal is, through the use of this supplement, to reduce the publication delay to 12 months over the next year. I hope that you enjoy this focused issue and I look forward to your comments and suggestions.

242 citations


Journal ArticleDOI
TL;DR: The DSKP has applications in freight transportation, in scheduling of batch processors, in selling of assets, and in selection of investment projects; recursive algorithms for computing optimal policies are developed.
Abstract: The Dynamic and Stochastic Knapsack Problem (DSKP) is defined as follows. Items arrive according to a Poisson process in time. Each item has a demand (size) for a limited resource (the knapsack) and an associated reward. The resource requirements and rewards are jointly distributed according to a known probability distribution and become known at the time of the item's arrival. Items can be either accepted or rejected. If an item is accepted, the item's reward is received; and if an item is rejected, a penalty is paid. The problem can be stopped at any time, at which time a terminal value is received, which may depend on the amount of resource remaining. Given the waiting cost and the time horizon of the problem, the objective is to determine the optimal policy that maximizes the expected value (rewards minus costs) accumulated. Assuming that all items have equal sizes but random rewards, optimal solutions are derived for a variety of cost structures and time horizons, and recursive algorithms for computing them are developed. Optimal closed-form solutions are obtained for special cases. The DSKP has applications in freight transportation, in scheduling of batch processors, in selling of assets, and in selection of investment projects.

226 citations
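
One way to make the recursion concrete is a discrete-time approximation of the equal-size case: in each short interval at most one item arrives, and the controller compares accepting, rejecting (paying the penalty), and stopping. This is a sketch under hypothetical parameters, not the paper's continuous-time derivation or its closed-form results.

```python
# A discrete-time dynamic-programming sketch of the DSKP with unit-sized items:
# in each short interval at most one item arrives (Poisson arrivals), its reward
# is drawn from a known distribution, and the controller accepts it, rejects it
# (paying a penalty), or stops the process (terminal value 0 here).
dt = 0.01                     # time step of the discretization
horizon, capacity = 10.0, 8   # time horizon; knapsack holds `capacity` unit items
lam = 1.0                     # Poisson arrival rate
rejection_penalty = 0.5
waiting_cost = 0.2            # cost per unit time until the process is stopped
rewards = [1.0, 2.0, 4.0]     # possible item rewards ...
probs = [0.5, 0.3, 0.2]       # ... and their probabilities

steps = int(horizon / dt)
p_arrival = lam * dt          # P(one arrival in a step); O(dt^2) terms ignored

# V[n] = expected value-to-go with n units of capacity left (0 at the horizon).
V = [0.0] * (capacity + 1)
for _ in range(steps):
    newV = []
    for n in range(capacity + 1):
        no_arrival = V[n] - waiting_cost * dt
        if n == 0:
            arrival = -rejection_penalty + no_arrival   # full knapsack: must reject
        else:
            arrival = 0.0
            for r, p in zip(rewards, probs):
                accept = r + V[n - 1] - waiting_cost * dt
                reject = -rejection_penalty + no_arrival
                arrival += p * max(accept, reject)
        cont = p_arrival * arrival + (1 - p_arrival) * no_arrival
        newV.append(max(0.0, cont))        # stopping now yields the terminal value 0
    V = newV

print("value of the empty knapsack at time 0:", round(V[capacity], 3))
```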


Journal ArticleDOI
TL;DR: A new model is developed for studying requirements planning in multistage production-inventory systems, using a model for a single production stage as a building block for modeling a network of stages.
Abstract: This paper develops a new model for studying requirements planning in multistage production-inventory systems. We first characterize how most industrial planning systems work, and we then develop a mathematical model to capture some of the key dynamics in the planning process. Our approach is to use a model for a single production stage as a building block for modeling a network of stages. We show how to analyze the single-stage model to determine the production smoothness and stability for a production stage and the inventory requirements. We also show how to optimize the tradeoff between production capacity and inventory for a single stage. We then can model the multistage supply chain using the single stage as a building block. We illustrate the multistage model with an industrial application, and we conclude with some thoughts on a research agenda.

Journal ArticleDOI
TL;DR: In this paper, the authors describe the formulation of the Russell-Yasuda Kasai financial planning model, including the motivation for the model, and discuss the technical details of the financial modeling process and the managerial impact of its use to help allocate the firm's assets over time.
Abstract: This paper describes the formulation of the Russell-Yasuda Kasai financial planning model, including the motivation for the model. The presentation complements the discussion of the technical details of the financial modeling process and the managerial impact of its use to help allocate the firm's assets over time discussed in Cariño et al. (1994, 1998, respectively). The multistage stochastic linear program incorporates Yasuda Kasai's asset and liability mix over a five-year horizon followed by an infinite horizon steady-state end-effects period. The objective is to maximize expected long-run profits less expected penalty costs from constraint violations over the infinite horizon. Scenarios are used to represent the uncertain parameter distributions. The constraints represent the institutional, cash flow, legal, tax, and other limitations on the asset and liability mix over time.

Journal ArticleDOI
TL;DR: In this paper, a generalized insertion heuristic for the Traveling Salesman Problem with Time Windows is proposed, which gradually builds a route by inserting at each step a vertex in its neighbourhood on the current route, and performing a local reoptimization.
Abstract: This article describes a generalized insertion heuristic for the Traveling Salesman Problem with Time Windows in which the objective is the minimization of travel times. The algorithm gradually builds a route by inserting at each step a vertex in its neighbourhood on the current route, and performing a local reoptimization. This is done while checking the feasibility of the remaining part of the route. Backtracking is sometimes necessary. Once a feasible route has been determined, an attempt is made to improve it by applying a post-optimization phase based on the successive removal and reinsertion of all vertices. Tests performed on 375 instances indicate that the proposed heuristic compares very well with alternative methods and very often produces optimal or near-optimal solutions.
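
A much cruder relative of the heuristic described above, shown only to make the mechanics concrete: cheapest feasible insertion with a time-window check that allows waiting. The paper's method additionally performs a local reoptimization around each insertion, backtracks when needed, and applies a removal/reinsertion post-optimization; none of that is attempted here, and the coordinates and windows are hypothetical.

```python
# A simplified insertion heuristic for the TSP with time windows.
import math

coords = {0: (0, 0), 1: (10, 5), 2: (6, 12), 3: (15, 2), 4: (3, 8)}
windows = {0: (0, 200), 1: (20, 60), 2: (10, 90), 3: (30, 120), 4: (5, 40)}

def travel(i, j):
    (x1, y1), (x2, y2) = coords[i], coords[j]
    return math.hypot(x1 - x2, y1 - y2)

def feasible(route):
    """Check time-window feasibility, allowing waiting before a window opens."""
    t = 0.0
    for prev, cur in zip(route, route[1:]):
        t = max(t + travel(prev, cur), windows[cur][0])
        if t > windows[cur][1]:
            return False
    return True

def route_cost(route):
    return sum(travel(i, j) for i, j in zip(route, route[1:]))

route = [0, 0]                        # start and end at the depot
unrouted = set(coords) - {0}
while unrouted:
    best = None
    for v in unrouted:
        for pos in range(1, len(route)):
            cand = route[:pos] + [v] + route[pos:]
            if feasible(cand):
                extra = route_cost(cand) - route_cost(route)
                if best is None or extra < best[0]:
                    best = (extra, cand, v)
    if best is None:
        raise RuntimeError("no feasible insertion found; backtracking would be needed")
    _, route, v = best
    unrouted.remove(v)

print(route, round(route_cost(route), 1))
```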

Journal ArticleDOI
TL;DR: Technical aspects of the Russell-Yasuda Kasai financial planning model are discussed, including the models for the discrete distribution scenario generation processes for the uncertain parameters of the model, and a comparison of algorithms used in the model's solution.
Abstract: This paper discusses technical aspects of the Russell-Yasuda Kasai financial planning model. These include the models for the discrete distribution scenario generation processes for the uncertain parameters of the model, the mathematical approach used to develop the infinite-horizon end-effects part of the model, a comparison of algorithms used in the model's solution, and a comparison of the multistage stochastic linear programming model with the previous technology, static mean-variance analysis. Experience with and benefits of the model in Yasuda Kasai's financial planning process are also discussed.

Journal ArticleDOI
TL;DR: This work investigates nonuniform direction choice and shows that, under regularity conditions on the region S and target distribution G, there exists a unique direction choice distribution, characterized by necessary and sufficient conditions depending on S and G, which optimizes the Doob bound on rate of convergence.
Abstract: Hit-and-Run algorithms are Monte Carlo procedures for generating points that are asymptotically distributed according to general absolutely continuous target distributions G over open bounded regions S. Applications include nonredundant constraint identification, global optimization, and Monte Carlo integration. These algorithms are reversible random walks that commonly incorporate uniformly distributed step directions. We investigate nonuniform direction choice and show that, under regularity conditions on the region S and target distribution G, there exists a unique direction choice distribution, characterized by necessary and sufficient conditions depending on S and G, which optimizes the Doob bound on rate of convergence. We include computational results demonstrating greatly accelerated convergence for this optimizing direction choice as well as for more easily implemented adaptive heuristic rules.
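
The basic Hit-and-Run step is short enough to state in code: from the current point, draw a direction, find the chord of the region along that direction, and sample the next point on that chord from the target density restricted to the chord. The sketch below uses uniformly distributed directions and a uniform target on a 2-D disk; the paper's contribution concerns replacing the uniform direction law with an optimized nonuniform one, which is not attempted here.

```python
# Basic Hit-and-Run sampler with uniformly distributed directions and a uniform
# target over an open bounded region S (here the open unit disk in 2-D).
import math, random

random.seed(1)

def chord_endpoints(px, py, dx, dy):
    """Parameters t of the two boundary points of the unit disk along p + t d."""
    a = dx * dx + dy * dy
    b = 2.0 * (px * dx + py * dy)
    c = px * px + py * py - 1.0
    root = math.sqrt(b * b - 4.0 * a * c)   # positive since p lies strictly inside S
    return (-b - root) / (2.0 * a), (-b + root) / (2.0 * a)

def hit_and_run(n_steps, start=(0.0, 0.0)):
    x, y = start
    samples = []
    for _ in range(n_steps):
        theta = random.uniform(0.0, 2.0 * math.pi)   # uniform direction choice
        dx, dy = math.cos(theta), math.sin(theta)
        t_lo, t_hi = chord_endpoints(x, y, dx, dy)
        t = random.uniform(t_lo, t_hi)   # uniform target: uniform law on the chord
        x, y = x + t * dx, y + t * dy
        samples.append((x, y))
    return samples

pts = hit_and_run(10_000)
# Rough sanity check of asymptotic uniformity: about half of the mass of the
# uniform distribution on the disk lies within radius 1/sqrt(2).
inside = sum(1 for x, y in pts if x * x + y * y < 0.5)
print(inside / len(pts))
```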

Journal ArticleDOI
TL;DR: It is shown that the order fill rate can be computed through a series of convolutions of one-dimensional compound Poisson distributions and the batch-size distributions, which makes the exact calculation faster and much more tractable.
Abstract: A customer order to a multi-item inventory system typically consists of several different items in different amounts. The probability of satisfying an arbitrary demand within a prespecified time window, termed the order fill rate, is an important measure of customer satisfaction in industry. This measure, however, has received little attention in the inventory literature, partly because its evaluation is considered a hard problem. In this paper, we study this performance measure for a base-stock system in which the demand process forms a multivariate compound Poisson process and the replenishment leadtimes are constant. We show that the order fill rate can be computed through a series of convolutions of one-dimensional compound Poisson distributions and the batch-size distributions. This procedure makes the exact calculation faster and much more tractable. We also develop simpler bounds to estimate the order fill rate. These bounds require only partial order-based information or merely the item-based information. Finally, we investigate the impact of the standard independent demand assumption when the demand is actually correlated across items.
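
The elementary building block of the convolutions mentioned above is the distribution of one item's compound Poisson demand over a constant leadtime, obtained by mixing k-fold convolutions of the batch-size distribution with Poisson weights. The sketch below computes that building block with hypothetical data; the paper's order fill rate then combines such item-level distributions across the items appearing in an order.

```python
# Distribution of compound Poisson leadtime demand for a single item, obtained
# by convolving the batch-size distribution a Poisson number of times.
import math

lam_L = 3.0                      # expected number of customer orders in a leadtime
batch_pmf = [0.0, 0.6, 0.3, 0.1] # P(batch size = 0, 1, 2, 3) for this item
max_demand = 60                  # truncation point for the demand distribution

def convolve(p, q, size):
    out = [0.0] * size
    for i, pi in enumerate(p):
        if pi == 0.0:
            continue
        for j, qj in enumerate(q):
            if i + j < size:
                out[i + j] += pi * qj
    return out

# P(demand = d) = sum_k P(k orders in the leadtime) * P(sum of k batches = d)
demand_pmf = [0.0] * max_demand
k_fold = [1.0] + [0.0] * (max_demand - 1)    # 0-fold convolution: point mass at 0
k = 0
while True:
    p_k = math.exp(-lam_L) * lam_L ** k / math.factorial(k)
    demand_pmf = [dp + p_k * kf for dp, kf in zip(demand_pmf, k_fold)]
    if p_k < 1e-12 and k > lam_L:
        break
    k_fold = convolve(k_fold, batch_pmf, max_demand)
    k += 1

S = 8                            # base-stock level for this item (hypothetical)
print("P(leadtime demand <= S):", sum(demand_pmf[: S + 1]))
```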

Journal ArticleDOI
TL;DR: This work gives fast and simple polynomial-time algorithms for finding a routing of aircraft in a graph whose routings during the day are fixed, that satisfies both the three-day maintenance as well as the balance-check visit requirements under two different models: a static infinite-horizon model and a dynamic finite-Horizon model.
Abstract: Federal aviation regulations require that all aircraft undergo maintenance after flying a certain number of hours. To ensure high aircraft utilization, maintenance is done at night, and these regulations translate into requiring aircraft to overnight at a maintenance station every three to four days (depending on the fleet type), and to visit a balance-check station periodically. After the schedule is fleeted, the aircraft are routed to satisfy these maintenance requirements. We give fast and simple polynomial-time algorithms for finding, in a graph in which the routings during the day are fixed, a routing of aircraft that satisfies both the three-day maintenance requirement and the balance-check visit requirement under two different models: a static infinite-horizon model and a dynamic finite-horizon model. We discuss an implementation where we embed the static infinite-horizon model into a three-stage procedure for finding a maintenance routing of aircraft.

Journal ArticleDOI
TL;DR: For a single product, single-stage capacitated production-inventory model with stochastic, periodic (cyclic) demand, the optimal policy is found and some of its properties are characterized and several insights into the model are provided.
Abstract: For a single-product, single-stage capacitated production-inventory model with stochastic, periodic (cyclic) demand, we find the optimal policy and characterize some of its properties. We study the finite-horizon, the discounted infinite-horizon, and the infinite-horizon average cases. A simulation-based optimization method is provided to compute the optimal parameters. Based on a numerical study, several insights into the model are also provided.

Journal ArticleDOI
TL;DR: A general stochastic search procedure is proposed, which develops the concept of the branch-and-bound method, and the main idea is to process large collections of possible solutions and to devote more attention to the most promising groups.
Abstract: The optimal allocation of indivisible resources is formalized as a stochastic optimization problem involving discrete decision variables. A general stochastic search procedure is proposed, which develops the concept of the branch-and-bound method. The main idea is to process large collections of possible solutions and to devote more attention to the most promising groups. By gathering more information to reduce the uncertainty and by narrowing the search area, the optimal solution can be found with probability one. Special techniques for calculating stochastic lower and upper bounds are discussed. The results are illustrated by a computational experiment.

Journal ArticleDOI
TL;DR: A basic model for CRP is proposed, and a Lagrangian lower bound based on the solution of an assignment problem on a suitably defined graph is described, which is used to drive an effective algorithm for finding a tight approximate solution to the problem.
Abstract: The Crew Rostering Problem (CRP) aims at determining an optimal sequencing of a given set of duties into rosters satisfying operational constraints deriving from union contract and company regulations. Previous work on CRP addresses mainly urban mass-transit systems, in which the minimum number of crews to perform the duties can easily be determined, and the objective is to evenly distribute the workload among the crews. In typical railway applications, however, the roster construction has to take into account more involved sequencing rules, and the main objective is the minimization of the number of crews needed to perform the duties. In this paper we propose a basic model for CRP, and describe a Lagrangian lower bound based on the solution of an assignment problem on a suitably defined graph. The information obtained through the lower bound computation is used to drive an effective algorithm for finding a tight approximate solution to the problem. Computational results for real-world instances from railway applications involving up to 1,000 duties are presented, showing that the proposed approach yields, within short computing time, lower and upper bound values that are typically very close. The code based on the approach we propose won the FARO competition organized by the Italian railway company, Ferrovie dello Stato SpA, in 1995.
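
The Lagrangian bound described above rests on solving an assignment problem on a graph whose nodes are duties and whose arc costs reflect which duty may follow which in a roster. The sketch below illustrates only that generic building block, with hypothetical cost data and SciPy's Hungarian-algorithm routine; it is not the authors' code or their specific relaxation.

```python
# Assignment-problem building block: each duty is assigned a successor duty,
# with prohibitive cost on infeasible successions. Data are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 10**6
# cost[i][j] = cost of letting duty j follow duty i in a roster,
# BIG if the succession violates the sequencing rules.
cost = np.array([
    [BIG,   1,   4, BIG],
    [  3, BIG, BIG,   2],
    [BIG,   5, BIG,   1],
    [  2, BIG,   6, BIG],
])

rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
```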

Journal ArticleDOI
TL;DR: In this article, a stochastic dynamic programming approach is proposed to set prices of perishable items in the context of a retail chain with coordinated prices among its stores and compare its performance with actual practice in a real case study.
Abstract: In this paper we propose a methodology to set prices of perishable items in the context of a retail chain with coordinated prices among its stores and compare its performance with actual practice in a real case study. We formulate a stochastic dynamic programming problem and develop heuristic solutions that approximate optimal solutions satisfactorily. To compare this methodology with current practices in the industry, we conducted two sets of experiments using the expertise of a product manager of a large retail company in Chile. In the first case, we contrast the performance of the proposed methodology with the revenues obtained during the 1995 autumn-winter season. In the second case, we compare it with the performance of the experienced product manager in a "simulation-game" setting. In both cases, our methodology provides significantly better results than those obtained by current practices.
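
The backbone of such a methodology, a stochastic dynamic program over remaining inventory and remaining weeks of the season, can be sketched as follows. The price grid, demand model, and salvage value below are hypothetical, and this plain backward induction stands in for the recursion that the paper's heuristics approximate.

```python
# A small backward-induction sketch of a perishable-pricing recursion: choose a
# price each period to maximize expected revenue from the remaining stock.
import math

T, N = 8, 30                 # weeks left in the season, units in stock
prices = [10.0, 8.0, 6.0, 4.0]
salvage = 1.0

def demand_pmf(price, n_max):
    """Poisson weekly demand whose mean falls as the price rises (illustrative model)."""
    mean = 12.0 * math.exp(-0.15 * price)
    pmf = [math.exp(-mean) * mean ** d / math.factorial(d) for d in range(n_max)]
    pmf.append(max(0.0, 1.0 - sum(pmf)))   # lump the tail into the last entry
    return pmf

pmfs = {p: demand_pmf(p, N) for p in prices}

V = [salvage * n for n in range(N + 1)]    # value of leftover stock at season end
policy = []
for t in range(T):                          # backward induction over the weeks
    newV, best_price = [0.0] * (N + 1), [None] * (N + 1)
    for n in range(N + 1):
        for p in prices:
            value = 0.0
            for d, prob in enumerate(pmfs[p]):
                sold = min(d, n)
                value += prob * (p * sold + V[n - sold])
            if value > newV[n]:
                newV[n], best_price[n] = value, p
    V, policy = newV, best_price

print("expected season revenue with full stock:", round(V[N], 2))
print("price to post in the first remaining week:", policy[N])
```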

Journal ArticleDOI
TL;DR: This work presents a Lagrangean heuristic within a branch-and-bound framework as a method for finding the exact optimal solution of the uncapacitated network design problem with single origins and destinations for each commodity, outperforming a state-of-the-art mixed-integer code both with respect to problem size and solution time.
Abstract: The network design problem is a multicommodity minimal cost network flow problem with fixed costs on the arcs, i.e., a structured linear mixed-integer programming problem. It has various applications, such as construction of new links in transportation networks, topological design of computer communication networks, and planning of empty freight car transportation on railways. We present a Lagrangean heuristic within a branch-and-bound framework as a method for finding the exact optimal solution of the uncapacitated network design problem with single origins and destinations for each commodity (the simplest problem in this class, but still NP-hard). The Lagrangean heuristic uses a Lagrangean relaxation as subproblem, solving the Lagrange dual with subgradient optimization, combined with a primal heuristic (the Benders subproblem) yielding primal feasible solutions. Computational tests on problems of various sizes (up to 1000 arcs, 70 nodes and 138 commodities or 40 nodes and 600 commodities) and of several different structures lead to the conclusion that the method is quite powerful, outperforming for example a state-of-the-art mixed-integer code, both with respect to problem size and solution time.

Journal ArticleDOI
TL;DR: A new hierarchy of relaxations is presented that provides a unifying framework for constructing a spectrum of continuous relaxations spanning from the linear programming relaxation to the convex hull representation for linear mixed integer 0-1 problems.
Abstract: A new hierarchy of relaxations is presented that provides a unifying framework for constructing a spectrum of continuous relaxations spanning from the linear programming relaxation to the convex hull representation for linear mixed integer 0-1 problems. This hierarchy is an extension of the Reformulation-Linearization Technique (RLT) of Sherali and Adams (1990, 1994a), and is particularly designed to exploit special structures. Specifically, inherent special structures are exploited by identifying particular classes of multiplicative factors that are applied to the original problem to reformulate it as an equivalent polynomial programming problem, and subsequently, this resulting problem is linearized to produce a tighter relaxation in a higher dimensional space. This general framework permits us to generate an explicit hierarchical sequence of tighter relaxations leading up to the convex hull representation. (A similar hierarchy can be constructed for polynomial mixed integer 0-1 problems.) Additional ideas for further strengthening RLT-based constraints by using conditional logical implications, as well as relationships with sequential lifting, are also explored. Several examples are presented to demonstrate how underlying special structures, including generalized and variable upper bounding, covering, partitioning, and packing constraints, as well as sparsity, can be exploited within this framework. For some types of structures, low level relaxations are exhibited to recover the convex hull of integer feasible solutions.
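
As a small, concrete instance of the reformulation-linearization step (constructed here for illustration, not taken from the paper), multiplying a 0-1 constraint by the bound factors of one variable and substituting a new variable for each product term yields a level-one relaxation:

```latex
% Level-one RLT applied to a single 0-1 knapsack-type constraint (illustration).
% Start from  x_1 + x_2 + x_3 <= 2  with  x_j in {0,1}.
% Multiply by the bound factors x_1 >= 0 and 1 - x_1 >= 0, use x_1^2 = x_1,
% and linearize each product via a new variable w_{1j} = x_1 x_j:
\begin{align*}
  (x_1 + x_2 + x_3 - 2)\, x_1 \le 0
    &\;\Longrightarrow\; w_{12} + w_{13} \le x_1, \\
  (x_1 + x_2 + x_3 - 2)\,(1 - x_1) \le 0
    &\;\Longrightarrow\; x_2 + x_3 - w_{12} - w_{13} \le 2 - 2 x_1 .
\end{align*}
% Together with 0 <= w_{1j} <= \min(x_1, x_j) and w_{1j} >= x_1 + x_j - 1
% (obtained the same way from the bound constraints), these inequalities are
% valid for all 0-1 points and tighten the ordinary LP relaxation.
```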

Journal ArticleDOI
TL;DR: This work considers a distribution system consisting of a single warehouse and many geographically dispersed retailers, with the objective of identifying a combined inventory policy and routing strategy minimizing system-wide infinite-horizon costs, and constructs an algorithm that proves very effective on a set of randomly generated problems.
Abstract: We consider a distribution system consisting of a single warehouse and many geographically dispersed retailers. Each retailer faces demands for a single item which arise at a deterministic, retailer-specific rate. The retailers' stock is replenished by a fleet of vehicles of limited capacity, departing from and returning to the warehouse and combining deliveries into efficient routes. The cost of any given route consists of a fixed component and a component which is proportional to the total distance driven. Inventory costs are proportional to the stock levels. The objective is to identify a combined inventory policy and a routing strategy minimizing system-wide infinite horizon costs. We characterize the asymptotic effectiveness of the class of so-called Fixed Partition policies and those employing Zero Inventory Ordering. We provide worst case as well as probabilistic bounds under a variety of probabilistic assumptions. This insight is used to construct a very effective algorithm resulting in a Fixed Partition policy which is asymptotically optimal within its class. Computational results show that the algorithm is very effective on a set of randomly generated problems.

Journal ArticleDOI
TL;DR: This paper analyzes network and schedule choice using an "idealized" model that permits derivation of analytic, closed form expressions for airline and passenger costs and demonstrates that schedule reliability is highest for direct routing.
Abstract: The goal of this paper is to understand choices of networks and schedules by a profit maximizing airline. By "network" we mean the routing pattern for planes and by "schedule" we mean the frequency of service between cities and the amount of time put into the schedule to assure on-time arrival. This paper analyzes network and schedule choice using an "idealized" model that permits derivation of analytic, closed form expressions for airline and passenger costs. Many important conclusions are obtained. It is optimal for a profit maximizing airline to design its network and schedule to minimize the sum of airline and passenger costs. Profit maximizing choice of schedule frequency depends on the network. Direct service has lower schedule frequency than other networks. Parametric studies are performed on the effect of distance between cities, demand rate, and the number of cities served on the choice of the network. Some conclusions are: (1) If the distance between cities is very small, then direct service is optimal; otherwise, other networks, such as hub and spoke are optimal. (2) Similarly, for very high demand rates, direct service is optimal; and for intermediate values, hub and spoke is optimal. (3) If the number of cities is small, direct service dominates; and if it is large, hub and spoke is optimal. We note that any airline's schedule includes safety time as a buffer against delays, and we demonstrate that schedule reliability is highest for direct routing. Surprisingly, the amount of time that is added to the schedule to buffer delays is relatively less in direct networks than in other networks. This can explain the superior on-time performance and high equipment utilization of direct carriers such as Southwest Airlines.

Journal ArticleDOI
TL;DR: A new bounding procedure for the Quadratic Assignment Problem (QAP) is described which extends the Hungarian method for the Linear assignment Problem (LAP) to QAPs, operating on the four dimensional cost array of the QAP objective function.
Abstract: A new bounding procedure for the Quadratic Assignment Problem (QAP) is described which extends the Hungarian method for the Linear Assignment Problem (LAP) to QAPs, operating on the four-dimensional cost array of the QAP objective function. The QAP is iteratively transformed into a series of equivalent QAPs, leading to an increasing sequence of lower bounds for the original problem. To this end, two classes of operations which transform the four-dimensional cost array are defined. These have the property that the values of the transformed objective function Z' are the corresponding values of the "old" objective function Z, shifted by some amount C. If all entries of the transformed cost array are nonnegative, then C is a lower bound for the initial QAP. If, moreover, there exists a feasible solution U to the QAP such that its value in the transformed problem is zero, then C is the optimal value of Z and U is an optimal solution for the original QAP. The transformations are iteratively applied until no significant increase in the constant C is found, resulting in the so-called Dual Procedure (DP). Several strategies are listed for appropriately determining C or, equivalently, transforming the cost array. The goal is the modification of the elements in the cost array to obtain new equivalent problems that bring the QAP closer to solution. In some cases the QAP is actually solved, though solution is not guaranteed. The close relationship between the DP and the Linear Programming formulation of Adams and Johnson is presented. The DP attempts to solve Adams and Johnson's CLP, a continuous relaxation of a linearization of the QAP. This explains why the DP produces bounds close to the optimum values for CLP calculated by Johnson in her dissertation and by Resende et al. in their Interior Point Algorithm for Linear Programming. The benefit of using DP within a branch-and-bound algorithm is described. Then, two versions of DP are tested on the Nugent test instances from size 5 to size 30, as well as several other test instances from QAPLIB. These compare favorably with earlier bounding methods.

Journal ArticleDOI
TL;DR: This paper attempts to unify the two streams of research by developing a general model that considers replacement of capacity as well as expansion and disposal, together with scale economy effects, and demonstrates the robustness of this approach by showing how other realistic features can be incorporated.
Abstract: Businesses frequently have to decide which of their existing equipment to replace, taking into account future changes in capacity requirements. The significance of this decision becomes clear when one notes that expenditure on new plant and equipment is a significant proportion of the GDP in the United States. The equipment replacement literature has focused on the replacement issue, usually ignoring aspects such as future demand changes and economies of scale. On the other hand, the capacity expansion literature has focused on the expansion of equipment capacity to meet demand growth, considering economies of scale but ignoring the replacement aspect. This paper attempts to unify the two streams of research by developing a general model that considers replacement of capacity as well as expansion and disposal, together with scale economy effects. Even special cases of the problems discussed here, such as the parallel machine replacement problem, have been considered difficult so far. However, we show that the problem can be solved efficiently by formulating it in a novel, disaggregate manner and using a dual-based solution procedure that exploits the structure of the problem. We also provide computational results to affirm that optimal or near-optimal solutions to large, realistic problems can be determined efficiently. We demonstrate the robustness of this approach by showing how other realistic features such as quantity discounts in purchases, alternative technology types or suppliers, and multiple equipment types can be incorporated.

Journal ArticleDOI
TL;DR: This work shows that any Nash equilibrium point consists of threshold decision rules and establishes the existence and uniqueness of a symmetric equilibrium point, and considers a reasonable dynamic learning scheme, which converges to the symmetric Nash equilibrium point.
Abstract: We consider a processor-sharing service system, where the service rate to individual customers decreases as the load increases. Each arriving customer may observe the current load and should then choose whether to join the shared system. The alternative is a constant-cost option, modeled here for concreteness as a private server (e.g., a personal computer that serves as an alternative to a central mainframe computer). The customers wish to minimize their individual service times (or an increasing function thereof). However, the optimal choice for each customer depends on the decisions of subsequent ones, through their effect on the future load in the shared server. This decision problem is analyzed as a noncooperative dynamic game among the customers. We first show that any Nash equilibrium point consists of threshold decision rules and establish the existence and uniqueness of a symmetric equilibrium point. Computation of the equilibrium threshold is demonstrated for the case of Poisson arrivals, and some of its properties are delineated. We next consider a reasonable dynamic learning scheme, which converges to the symmetric Nash equilibrium point. In this model customers simply choose the better option based on available performance history. Convergence of this scheme is illustrated here via a simulation example and is established analytically in subsequent work.

Journal ArticleDOI
TL;DR: It is shown that for a class of assemble-to-order models with stochastic demands and production intervals there is a simple linear trade-off between inventory and delivery leadtime, in a limiting sense, at high fill rates.
Abstract: This paper studies the trade-off between inventory levels and the delivery leadtime offered to customers in achieving a target level of service. It addresses the question of how much a delivery leadtime can be reduced, per unit increase in inventory, at a fixed fill rate. We show that for a class of assemble-to-order models with stochastic demands and production intervals there is a simple linear trade-off between inventory and delivery leadtime, in a limiting sense, at high fill rates. The limiting slope is easy to calculate and can be interpreted as the approximate marginal rate for trading off inventory against leadtime at a constant level of service. We also investigate how various model features affect the trade-off, in particular the impact of orders for multiple units of a single item and of orders for multiple units of different items.

Journal ArticleDOI
TL;DR: This paper details a new simulation and optimisation based system for personnel scheduling (rostering) of customs staff at the Auckland International Airport, New Zealand, and charts the development of this system, outlines failures where they have occurred, and summarises the ongoing impacts of this work on the organisation.
Abstract: This paper details a new simulation and optimisation based system for personnel scheduling (rostering) of customs staff at the Auckland International Airport, New Zealand. An integrated approach using simulation, heuristic descent, and integer programming techniques has been developed to determine near-optimal staffing levels. The system begins by using a new simulation system embedded within a heuristic search to determine minimum staffing levels for arrival and departure work areas. These staffing requirements are then used as the input to an integer programming model, which optimally allocates full- and part-time staff to each period of the working day. These shifts are then assigned to daily work schedules having a six-day-on, three-day-off structure. The application of these techniques has resulted in significantly lower staffing levels, while at the same time creating both high-quality rosters and ensuring that all passenger processing targets are met. This paper charts the development of this system, outlines failures where they have occurred, and summarises the ongoing impacts of this work on the organisation.