
Showing papers in "Annals of Operations Research in 2013"


Journal ArticleDOI
TL;DR: This work provides mathematical models and optimization algorithms that integrate the restoration and scheduling decisions and specifically account for the interdependencies between the infrastructure systems.
Abstract: We consider the problem faced by managers of critical civil interdependent infrastructure systems of restoring essential public services after a non-routine event causes disruptions to these services. In order to restore the services, we must determine the set of components (or tasks) that will be temporarily installed or repaired, assign these tasks to work groups, and then determine the schedule of each work group to complete the tasks assigned to it. These restoration planning and scheduling decisions are often undertaken in an independent, sequential manner. We provide mathematical models and optimization algorithms that integrate the restoration and scheduling decisions and specifically account for the interdependencies between the infrastructure systems. The objective function of this problem provides a measure of how well the services are being restored over the horizon of the restoration plan, rather than just focusing on the performance of the systems after all restoration efforts are complete. We test our methods on realistic data representing infrastructure systems in New York City. Our computational results demonstrate that we can provide integrated restoration and scheduling plans of high quality with limited computational resources. We also discuss the benefits of integrating the restoration and scheduling decisions.

157 citations


Journal ArticleDOI
TL;DR: This work investigates both mathematical programming and combinatorial approaches to this scheduling problem with a restriction on peak power consumption, and test these approaches with instances arising from the manufacturing of cast iron plates.
Abstract: We study scheduling as a means to address the increasing energy concerns in manufacturing enterprises. In particular, we consider a flow shop scheduling problem with a restriction on peak power consumption, in addition to the traditional time-based objectives. We investigate both mathematical programming and combinatorial approaches to this scheduling problem, and test our approaches with instances arising from the manufacturing of cast iron plates.

153 citations


Journal ArticleDOI
TL;DR: The techniques and tools of operational research and management science that are used for scheduling aircraft landings and take-offs are reviewed, including dynamic programming, branch and bound, heuristics and meta-heuristics.
Abstract: Airport runway optimization is an ongoing challenge for air traffic controllers. Since demand for air-transportation is predicted to increase, there is a need to realize additional take-off and landing slots through better runway scheduling. In this paper, we review the techniques and tools of operational research and management science that are used for scheduling aircraft landings and take-offs. The main solution techniques include dynamic programming, branch and bound, heuristics and meta-heuristics.

120 citations


Journal ArticleDOI
TL;DR: This survey presents compact extended formulations for several graph problems involving cuts, trees, cycles and matchings, and for the mixing set, and presents the proof of Fiorini, Massar, Pokutta, Tiwary and de Wolf of an exponential lower bound for the cut polytope.
Abstract: This survey is concerned with the size of perfect formulations for combinatorial optimization problems. By “perfect formulation”, we mean a system of linear inequalities that describes the convex hull of feasible solutions, viewed as vectors. Natural perfect formulations often have a number of inequalities that is exponential in the size of the data needed to describe the problem. Here we are particularly interested in situations where the addition of a polynomial number of extra variables allows a formulation with a polynomial number of inequalities. Such formulations are called “compact extended formulations”. We survey various tools for deriving and studying extended formulations, such as Fourier’s procedure for projection, Minkowski–Weyl’s theorem, Balas’ theorem for the union of polyhedra, Yannakakis’ theorem on the size of an extended formulation, dynamic programming, and variable discretization. For each tool that we introduce, we present one or several examples of how this tool is applied. In particular, we present compact extended formulations for several graph problems involving cuts, trees, cycles and matchings, and for the mixing set. We also present Bienstock’s approximate compact extended formulation for the knapsack problem, Goemans’ result on the size of an extended formulation for the permutahedron, and the Faenza-Kaibel extended formulation for orbitopes.

119 citations
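As a worked illustration of one of the tools listed in the survey above, Balas' theorem yields a compact extended formulation for the union of polyhedra. The notation below (polyhedra P_i, copies x^i, multipliers lambda_i) is generic textbook notation rather than the survey's own: for nonempty bounded polyhedra $P_i = \{x \in \mathbb{R}^n : A_i x \le b_i\}$, $i = 1,\dots,k$, the convex hull of their union is the projection onto the $x$-variables of

\[
x = \sum_{i=1}^{k} x^{i}, \qquad A_i x^{i} \le \lambda_i b_i \quad (i = 1,\dots,k), \qquad \sum_{i=1}^{k} \lambda_i = 1, \qquad \lambda \ge 0 .
\]

This extended formulation uses only $kn + k$ variables, so it is polynomial in size whenever $k$ and the descriptions of the $P_i$ are, even when the projection onto the original variables needs exponentially many inequalities.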


Journal ArticleDOI
TL;DR: The library serves not only as a suggestion of standard expressions of and available data for open pit mining problems, but also as encouragement for the development of increasingly sophisticated algorithms.
Abstract: Similar to the mixed-integer programming library (MIPLIB), we present a library of publicly available test problem instances for three classical types of open pit mining problems: the ultimate pit limit problem and two variants of open pit production scheduling problems. The ultimate pit limit problem determines a set of notional three-dimensional blocks containing ore and/or waste material to extract to maximize value subject to geospatial precedence constraints. Open pit production scheduling problems seek to determine when, if ever, a block is extracted from an open pit mine. A typical objective is to maximize the net present value of the extracted ore; constraints include precedence and upper bounds on operational resource usage. Extensions of this problem can include (i) lower bounds on operational resource usage, (ii) the determination of whether a block is sent to a waste dump, i.e., discarded, or to a processing plant, i.e., to a facility that derives salable mineral from the block, (iii) average grade constraints at the processing plant, and (iv) inventories of extracted but unprocessed material. Although open pit mining problems have appeared in academic literature dating back to the 1960s, no standard representations exist, and there are no commonly available corresponding data sets. We describe some representative open pit mining problems, briefly mention related literature, and provide a library consisting of mathematical models and sets of instances, available on the Internet. We conclude with directions for use of this newly established mining library. The library serves not only as a suggestion of standard expressions of and available data for open pit mining problems, but also as encouragement for the development of increasingly sophisticated algorithms.

118 citations
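For concreteness, the ultimate pit limit problem described above is commonly stated as the following integer program; the symbols ($B$ for the block set, $v_b$ for block value, $\mathcal{P}(b)$ for the predecessors of block $b$) are generic notation and may differ from the library's exact conventions:

\[
\max_{x} \ \sum_{b \in B} v_b x_b \quad \text{s.t.} \quad x_b \le x_a \ \ \forall b \in B,\ a \in \mathcal{P}(b), \qquad x_b \in \{0,1\} \ \ \forall b \in B,
\]

where $x_b = 1$ if block $b$ is extracted and the precedence constraints force every block that must be removed before $b$ to be extracted as well. This is a maximum closure problem and is polynomially solvable; the production scheduling variants add time periods and resource constraints, which is where the computational difficulty lies.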


Journal ArticleDOI
TL;DR: A stochastic unit commitment model is discussed that takes into account various uncertainties affecting thermal energy demand and considers two types of power generators, i.e., quick-start and non-quick-start generators.
Abstract: The unit commitment problem has been a very important problem in power system operations, because it is aimed at reducing the power production cost by optimally scheduling the commitments of generation units. Meanwhile, it is a challenging problem because it involves a large number of integer variables. With the increasing penetration of renewable energy sources in power systems, power system operations and control have been more affected by uncertainties than before. This paper discusses a stochastic unit commitment model which takes into account various uncertainties affecting thermal energy demand and two types of power generators, i.e., quick-start and non-quick-start generators. This problem is a stochastic mixed integer program with discrete decision variables in both first and second stages. In order to solve this difficult problem, a method based on Benders decomposition is applied. Numerical experiments show that the proposed algorithm can solve stochastic unit commitment problems efficiently, especially those with large numbers of scenarios.

117 citations
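In generic notation (not the paper's actual unit commitment data), the two-stage stochastic structure described above can be sketched as

\[
\min_{x \in X} \ c^{\top}x + \sum_{s \in S} p_s\, Q(x, \xi_s), \qquad
Q(x, \xi_s) = \min_{y} \ \bigl\{\, q_s^{\top} y \ :\ W_s y \ge h_s - T_s x,\ \ y \in \mathbb{Z}_{+}^{r} \times \mathbb{R}_{+}^{m-r} \,\bigr\},
\]

where the first-stage variables $x$ (e.g., commitments of non-quick-start units) are fixed before the uncertainty is revealed and the second-stage variables $y$ (e.g., quick-start commitments and dispatch) are chosen per scenario $s$ with probability $p_s$. The integer variables in the second stage are what make the problem harder than a standard two-stage stochastic LP and motivate a Benders-type method adapted to discrete recourse, as applied in the paper.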


Journal ArticleDOI
TL;DR: This paper surveys and analyzes ten inconsistency indices from the numerical point of view and investigates degrees of agreement between them to check how similar they are.
Abstract: Evaluating the level of inconsistency of pairwise comparisons is often a crucial step in multi-criteria decision analysis. Several inconsistency indices have been proposed in the literature to estimate the deviation of experts' judgments from a situation of full consistency. This paper surveys and analyzes ten indices from the numerical point of view. Specifically, we investigate degrees of agreement between them to check how similar they are. Results show a wide range of behaviors, ranging from very strong to very weak degrees of agreement.

116 citations
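The best-known index of this type is Saaty's consistency index; whether it is among the ten indices compared in the paper above is not stated in the abstract, so it is reproduced here only as a representative example, in generic notation:

\[
CI(A) = \frac{\lambda_{\max}(A) - n}{n - 1}, \qquad CR(A) = \frac{CI(A)}{RI_n},
\]

where $A$ is an $n \times n$ pairwise comparison matrix, $\lambda_{\max}(A)$ its principal eigenvalue, and $RI_n$ the average consistency index of randomly generated matrices of order $n$. Here $CI(A) = 0$ exactly when $A$ is fully consistent, i.e., $a_{ij} a_{jk} = a_{ik}$ for all $i, j, k$; the other indices in the literature quantify the same deviation in different ways, which is why their degree of agreement is a meaningful question.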


Journal ArticleDOI
TL;DR: This article reviews some selected optimization models and algorithms for Political Districting which gave rise to the main lines of research on this topic in the Operations Research literature of the last five decades.
Abstract: The Political Districting problem has been studied since the 60’s and many different models and techniques have been proposed with the aim of preventing districts’ manipulation which may favor some specific political party (gerrymandering). A variety of Political Districting models and procedures was provided in the Operations Research literature, based on single- or multiple-objective optimization. Starting from the forerunning papers published in the 60’s, this article reviews some selected optimization models and algorithms for Political Districting which gave rise to the main lines of research on this topic in the Operations Research literature of the last five decades.

112 citations


Journal ArticleDOI
TL;DR: A capacitated supply chain network design model under random disruptions both in facility and transportation is proposed, which seeks to determine the optimal location and types of distribution centers (DC) and also the best plan to assign customers to each opened DC.
Abstract: Disruptions rarely occur in supply chains, but their negative financial and technical impacts make the recovery process very slow. In this paper, we propose a capacitated supply chain network design (SCND) model under random disruptions both in facility and transportation, which seeks to determine the optimal location and types of distribution centers (DC) and also the best plan to assign customers to each opened DC. Unlike other studies in the extant literature, we use new concepts of reliability to model the strategic behavior of DCs and customers in the network: (1) failure of DCs might be partial, i.e., a disrupted DC might still be able to serve with a portion of its initial capacity; (2) the lost capacity of a disrupted DC shall be provided from a non-disrupted one; and (3) the lost capacity fraction of a disrupted DC depends on its initial investment amount in the design phase.

109 citations


Journal ArticleDOI
TL;DR: A completely new approach is proposed for solving the Limited Asset Markowitz (LAM) model based on a reformulation as a Standard Quadratic Program, on a new lower bound that is established, and on other recent theoretical and computational results for such problems.
Abstract: Several portfolio selection models take into account practical limitations on the number of assets to include and on their weights in the portfolio. We present here a study of the Limited Asset Markowitz (LAM) model, where the assets are limited with the introduction of quantity and cardinality constraints. We propose a completely new approach for solving the LAM model based on a reformulation as a Standard Quadratic Program, on a new lower bound that we establish, and on other recent theoretical and computational results for such problems. These results lead to an exact algorithm for solving the LAM model for small-size problems. For larger problems, this algorithm can be relaxed to an efficient and accurate heuristic procedure that is able to find the optimal or the best-known solutions for problems based on some standard financial data sets that are used by several other authors. We also test our method on five new data sets involving real-world capital market indices from major stock markets. We compare our results with those of CPLEX and with those obtained with very recent heuristic approaches in order to illustrate the effectiveness of our method in terms of solution quality and of computation time. All our data sets and results are publicly available for use by other researchers.

104 citations
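For reference, a standard way to write the LAM model with quantity and cardinality constraints is the following mixed-integer quadratic program; the notation ($Q$, $\mu$, $\rho$, $l$, $u$, $K$) is generic and may differ from the paper's exact formulation:

\[
\min_{x, y} \ x^{\top} Q x \quad \text{s.t.} \quad \mu^{\top} x \ge \rho, \quad \mathbf{1}^{\top} x = 1, \quad l_i y_i \le x_i \le u_i y_i \ \ (i = 1,\dots,n), \quad \sum_{i=1}^{n} y_i \le K, \quad y \in \{0,1\}^{n},
\]

where $Q$ is the covariance matrix, $\mu$ the vector of expected returns, $\rho$ a target portfolio return, $[l_i, u_i]$ the quantity bounds on the weight of asset $i$ when it is held ($y_i = 1$), and $K$ the maximum number of assets in the portfolio. The binary variables are what make the problem NP-hard and motivate the reformulation and bounding techniques discussed above.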


Journal ArticleDOI
Isi Mitrani
TL;DR: The question of how to choose the number of reserves, and the up and down thresholds, is answered by analyzing a suitable queueing model and minimizing an appropriate cost function.
Abstract: We examine the problem of managing a server farm in a way that attempts to satisfy the conflicting objectives of high performance and low power consumption. A subset of servers is designated as ‘reserve’. The reserves are powered up when the number of jobs in the system is sufficiently high, and are powered down when that number is sufficiently low. Powering up takes an interval of time during which the reserves consume power but do not serve jobs. The question of how to choose the number of reserves, and the up and down thresholds, is answered by analyzing a suitable queueing model and minimizing an appropriate cost function. Heuristics and numerical results are also presented.

Journal ArticleDOI
TL;DR: In this article, a multicut version of the Benders decomposition method is proposed for solving two-stage stochastic linear programming problems, where the main idea is to add one cut per realization of uncertainty to the master problem in each iteration.
Abstract: In this paper, we present a multicut version of the Benders decomposition method for solving two-stage stochastic linear programming problems, including stochastic mixed-integer programs with only continuous recourse (second-stage) variables. The main idea is to add one cut per realization of uncertainty to the master problem in each iteration, that is, as many Benders cuts as there are scenarios are added to the master problem in each iteration. Two examples are presented to illustrate the application of the proposed algorithm. One involves production-transportation planning under demand uncertainty, and the other one involves multiperiod planning of global, multiproduct chemical supply chains under demand and freight rate uncertainty. Computational studies show that while both the standard and the multicut versions of the Benders decomposition method can solve large-scale stochastic programming problems with reasonable computational effort, significant savings in CPU time can be achieved by using the proposed multicut algorithm.
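To make the "one cut per realization" idea concrete, a minimal sketch of the multicut master problem at iteration $k$, in generic two-stage stochastic LP notation rather than the paper's, is

\[
\min_{x \in X,\ \theta} \ c^{\top}x + \sum_{s \in S} p_s \theta_s \quad \text{s.t.} \quad \theta_s \ge (u_s^{j})^{\top}(h_s - T_s x) \qquad \forall s \in S,\ j = 1,\dots,k,
\]

where $u_s^{j}$ is the optimal dual solution of the scenario-$s$ subproblem at iteration $j$. The classical single-cut (L-shaped) variant instead adds one aggregated cut $\theta \ge \sum_{s} p_s (u_s^{j})^{\top}(h_s - T_s x)$ per iteration, so the multicut master carries more information per iteration at the price of a larger master problem.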

Journal ArticleDOI
TL;DR: A multistage DEA network model is proposed, using a set of performance indicators that combine customer satisfaction, employee evaluation, and business performance indices, to evaluate the relative efficiency of each customer service delivery step, in the environment of a bank branch.
Abstract: The linkage among customer satisfaction, employee evaluation, and business performance data is very important in modern business organizations. Several previous research efforts have studied this linkage, focusing mainly on the financial or business performance in order to analyze the efficiency of an organization. However, recent studies have tried to consider other important performance indicators, which are able to affect business operations and future growth (e.g., external and internal customer satisfaction). In the case of the banking industry, studying the relations among the aforementioned variables is able to give insight in the performance evaluation of bank branches and the viability analysis of the banking organization. This paper presents a real-world study for measuring the relative efficiency of a set of bank branches using a Data Envelopment Analysis (DEA) approach. In particular, a multistage DEA network model is proposed, using a set of performance indicators that combine customer satisfaction, employee evaluation, and business performance indices. The main aim of the presented study is to evaluate the relative efficiency of each customer service delivery step, in the environment of a bank branch. The results are also able to estimate the contribution of the assessed performance indicators to the branch’s overall efficiency, and to determine potential improvement actions.
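As background, the single-stage, input-oriented CCR envelopment model that network DEA approaches build on can be written as follows; the paper's multistage network model links several such stages through intermediate measures, which is not shown in this generic sketch:

\[
\min_{\theta, \lambda} \ \theta \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io} \ \ (i = 1,\dots,m), \qquad \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro} \ \ (r = 1,\dots,s), \qquad \lambda \ge 0,
\]

where branch $o$ is the unit under evaluation, $x_{ij}$ and $y_{rj}$ are the inputs and outputs of branch $j$, and the optimal $\theta^{*} \le 1$ is its efficiency score, with $\theta^{*} = 1$ for branches on the efficient frontier.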

Journal ArticleDOI
TL;DR: More realistic solutions are obtained for the scheduling problem in the yoghurt industry by using the iterative hybrid optimization-simulation procedure.
Abstract: In this paper we address the production scheduling and distribution planning problem in a yoghurt production line of multi-product dairy plants. A mixed integer linear programming model is developed for the considered problem. The objective function aims to maximize the benefit by considering the shelf-life-dependent pricing component and costs such as processing, setup, storage, overtime, backlogging, and transportation costs. Key features of the model include sequence-dependent setup times, minimum and maximum lot sizes, overtime, shelf life requirements, machine speeds, and dedicated production lines, which typically arise in the dairy industry. The model obtains the optimal production plan for each product type, on each production line, in each period together with the delivery plan. The hybrid modelling approach is adopted to explore the dynamic behavior of the real world system. In the hybrid approach, operation time is considered as a dynamic factor and it is adjusted by the results of the simulation and optimization model iteratively. Thus, more realistic solutions are obtained for the scheduling problem in the yoghurt industry by using the iterative hybrid optimization-simulation procedure. The efficiency and applicability of the proposed model and approach are demonstrated in a case study for a leading dairy manufacturing company in Turkey.

Journal ArticleDOI
TL;DR: A real-life ship routing and scheduling problem from the LNG business, with both inventory and berth capacity constraints at the liquefaction port, is described, and the results show that the proposed solution method is well suited to solving this LNG shipping problem.
Abstract: Liquefied natural gas (LNG) is natural gas that has been transformed to liquid form for the purpose of transportation, which is mainly done by specially built LNG vessels travelling from the production site to the consumers. We describe a real-life ship routing and scheduling problem from the LNG business, with both inventory and berth capacity constraints at the liquefaction port. We propose a solution method where the routing and scheduling decisions are decomposed. The routing decisions consist of deciding which vessels should service which cargoes and in what sequence. The scheduling decisions are then to decide when to start servicing the cargoes while satisfying inventory and berth capacity constraints. The proposed solution method has been tested on several problem instances based on the real-life problem. The results show that the proposed solution method is well suited to solve this LNG shipping problem.

Journal ArticleDOI
TL;DR: A multi-echelon joint inventory-location model that simultaneously determines the location of warehouses and inventory policies at the warehouses and retailers using a Lagrangian relaxation-based approach is studied.
Abstract: We study a multi-echelon joint inventory-location model that simultaneously determines the location of warehouses and inventory policies at the warehouses and retailers. The model is formulated as a nonlinear mixed-integer program, and is solved using a Lagrangian relaxation-based approach. The efficiency of the algorithm and benefits of integration are evaluated through a computational study.
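For orientation, a Lagrangian relaxation of the general kind used above dualizes a set of complicating constraints and updates the multipliers by subgradient steps; the sketch below is generic and does not show the paper's specific relaxation of the inventory-location model:

\[
z_{LR}(\lambda) = \min_{x \in X} \ \bigl\{\, c(x) + \lambda^{\top}(A x - b) \,\bigr\}, \qquad
\lambda^{k+1} = \max\!\bigl(0,\ \lambda^{k} + t_k (A x^{k} - b)\bigr),
\]

where $Ax \le b$ are the relaxed constraints, $x^{k}$ solves the Lagrangian subproblem at $\lambda^{k}$, and $t_k$ is a step size. Each $z_{LR}(\lambda^{k})$ is a lower bound on the optimal cost, while feasible solutions recovered from the $x^{k}$ give upper bounds, which is how the quality of the resulting solutions is typically certified.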

Journal ArticleDOI
TL;DR: This paper focuses on the Vehicle Routing Problem with Stochastic Demands (VRPSD) and discusses how Parallel and Distributed Computing Systems can be employed to efficiently solve the VRPSD.
Abstract: This paper focuses on the Vehicle Routing Problem with Stochastic Demands (VRPSD) and discusses how Parallel and Distributed Computing Systems can be employed to efficiently solve the VRPSD. Our approach deals with uncertainty in the customer demands by considering safety stocks, i.e., when designing the routes, part of the vehicle capacity is reserved to deal with potential emergency situations caused by unexpected demands. Thus, for a given VRPSD instance, our algorithm considers different levels of safety stocks. For each of these levels, a different scenario is defined. Then, the algorithm solves each scenario by integrating Monte Carlo simulation inside a heuristic-randomization process. This way, expected variable costs due to route failures can be naturally estimated even when customers' demands follow a non-normal probability distribution. Use of parallelization strategies is then considered to run multiple instances of the algorithm in a concurrent way. The resulting concurrent solutions are then compared and the one with the minimum total costs is selected. Two numerical experiments allow us to analyze the algorithm's performance under different parallelization schemes.
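A minimal Python sketch of the Monte Carlo evaluation step described above is given below. It estimates, for a fixed route plan, the expected variable cost caused by route failures under stochastic (here lognormal, hence non-normal) demands; the route plans, demand means, capacity, and penalty constant are made-up placeholders, and the recourse rule (unload at the depot and resume the route) is one simple convention rather than the paper's exact policy.

import random

def expected_failure_cost(routes, capacity, demand_means,
                          failure_cost=100.0, n_sims=5000, seed=42):
    """Monte Carlo estimate of the expected recourse cost of a route plan."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        cost = 0.0
        for route in routes:
            load = 0.0
            for c in route:
                # non-normal demand: lognormal fluctuation around the planned mean
                d = rng.lognormvariate(0.0, 0.5) * demand_means[c]
                load += d
                if load > capacity:      # route failure: unload at the depot, resume
                    cost += failure_cost
                    load = d
        total += cost
    return total / n_sims

# toy comparison: a "tight" plan (no capacity reserved, two long routes) vs. a
# "safe" plan (capacity reserved at design time, hence three shorter routes)
means = {1: 10, 2: 12, 3: 8, 4: 15, 5: 9}
plans = {"tight": [[1, 2, 3], [4, 5]], "safe": [[1, 2], [3, 4], [5]]}
for name, plan in plans.items():
    print(name, round(expected_failure_cost(plan, capacity=40, demand_means=means), 2))

In the full algorithm, an estimate of this kind would be added to each plan's deterministic routing cost, with the plans generated under different safety-stock levels evaluated concurrently (e.g., one process per scenario) and the cheapest total selected.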

Journal ArticleDOI
TL;DR: This work considers a two-echelon supply chain involving one manufacturer and one supplier who collaborate on improving both design and conformance quality and suggests a reward-based extension to the revenue-sharing contract to ensure system-wide optimal quality performance.
Abstract: We consider a two-echelon supply chain involving one manufacturer and one supplier who collaborate on improving both design and conformance quality. Design quality is supposed to increase product desirability, and therefore market demand, while conformance quality should reduce the proportion of defective items, and therefore increase the manufacturer’s sales revenue. We investigate how the supply chain parties allocate effort between design and conformance quality under both cooperative and non-cooperative settings in an intertemporal framework. Furthermore, we evaluate wholesale price contracts and revenue-sharing contracts in terms of their performance and coordination power. We show that although a revenue-sharing contract enables the manufacturer to effectively involve the supplier in quality improvement, neither contract type allows for perfect coordination resulting in the quality that can be achieved by a cooperative supply chain. We thus suggest a reward-based extension to the revenue-sharing contract, to ensure system-wide optimal quality performance. Importantly, we find that the supplier would be better off adopting a reward-based revenue sharing contract and refusing a standard revenue-sharing contract, while the opposite would be true for the manufacturer.

Journal ArticleDOI
TL;DR: This paper relies on the synthesis of “artificial trajectories” from the given sample of trajectories, and shows that this idea opens new avenues for designing and analyzing algorithms for batch mode reinforcement learning.
Abstract: In this paper, we consider the batch mode reinforcement learning setting, where the central problem is to learn from a sample of trajectories a policy that satisfies or optimizes a performance criterion. We focus on the continuous state space case for which usual resolution schemes rely on function approximators either to represent the underlying control problem or to represent its value function. As an alternative to the use of function approximators, we rely on the synthesis of “artificial trajectories” from the given sample of trajectories, and show that this idea opens new avenues for designing and analyzing algorithms for batch mode reinforcement learning.

Journal ArticleDOI
TL;DR: This paper develops a modeling and computational framework for supply chain networks with global outsourcing and quick-response production under demand and cost uncertainty and formulates the governing equilibrium conditions of the competing decision-makers who are faced with two-stage stochastic programming problems.
Abstract: This paper develops a modeling and computational framework for supply chain networks with global outsourcing and quick-response production under demand and cost uncertainty. Our model considers multiple off-shore suppliers, multiple manufacturers, and multiple demand markets. Using variational inequality theory, we formulate the governing equilibrium conditions of the competing decision-makers (the manufacturers) who are faced with two-stage stochastic programming problems but who also have to cooperate with the other decision-makers (the off-shore suppliers). Our theoretical and analytical results shed light on the value of outsourcing from novel real option perspectives. Moreover, our simulation studies reveal important managerial insights regarding how demand and cost uncertainty affects the profits, the risks, as well as the global outsourcing and quick-production decisions of supply chain firms under competition.

Journal ArticleDOI
TL;DR: An algorithmic strategy is proposed that utilizes a preemptively small perturbation of the right-hand side of the Benders subproblem to generate maximal nondominated Benders cuts, along with a complementary strategy that generates an additional cut in each iteration via an alternative emphasis on decision variable weights.
Abstract: In this paper, we explore certain algorithmic strategies for accelerating the convergence of the Benders decomposition method via the generation of maximal nondominated cuts. Based on interpreting the seminal work of Magnanti and Wong (Operations Research, 29(3), 464-484, 1981) for generating nondominated cuts within a multiobjective framework, we propose an algorithmic strategy that utilizes a preemptively small perturbation of the right-hand side of the Benders subproblem to generate maximal nondominated Benders cuts, as well as a complementary strategy that generates an additional cut in each iteration via an alternative emphasis on decision variable weights. We also examine the computational effectiveness of solving a secondary subproblem using an objective cut as proposed by Magnanti and Wong versus identifying the Pareto-optimality region for cut generation by utilizing complementary slackness conditions. In addition, we exhibit how a standard feasibility cut can be extracted from the solution of subproblems that generate only optimality cuts through the use of artificial variables. With Magnanti and Wong's baseline procedure approximated during implementation via the use of a core point estimation technique (Papadakos in Computers and Operations Research, 36(1), 176-195, 2009), these algorithmic strategies are tested on instances from the literature concerning the fixed charge network flow program.
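For context, the Magnanti-Wong scheme referenced above selects, among the optimal dual solutions of the Benders subproblem at the current trial point $\bar{x}$, one that is nondominated with respect to a core point $x^{0}$ of the first-stage feasible region. In standard notation (not the paper's), with dual feasible set $\{u : W^{\top}u \le q,\ u \ge 0\}$ and $z(\bar{x})$ the optimal value of the ordinary dual subproblem, the secondary subproblem is

\[
\max_{u} \ u^{\top}(h - T x^{0}) \quad \text{s.t.} \quad u^{\top}(h - T \bar{x}) = z(\bar{x}), \qquad W^{\top}u \le q, \quad u \ge 0 .
\]

The strategies investigated in the paper, such as perturbing the subproblem right-hand side or re-weighting the objective, are alternatives aimed at obtaining comparable nondominated cuts without having to solve this secondary problem exactly in every iteration.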

Journal ArticleDOI
TL;DR: A comparison with existing techniques shows that the proposed decomposition method is generally more accurate, especially in the estimation of the average buffer levels, and the generality of the approach allows for modeling and studying many different system configurations within a unique framework.
Abstract: In this paper, a decomposition method for evaluating the performance of continuous flow lines with machines characterized by general Markovian fluid models and finite capacity buffers is proposed. This study uses the exact solution of general two-stage Markovian fluid models as a building block. Decomposition equations are provided to propagate the effect of partial and complete blocking and starvation phenomena throughout the system. A decomposition algorithm that solves the new decomposition equations is proposed. Numerical results prove the good accuracy of the developed method. In particular, a comparison with existing techniques shows that our method is generally more accurate, especially in the estimation of the average buffer levels. Moreover, additional information can be collected by the application of our approach which enables a deeper analysis of the system behavior. Finally, the generality of the approach allows for modeling and studying many different system configurations within a unique framework, also including several previously uninvestigated layouts.

Journal ArticleDOI
TL;DR: The improved Benders decomposition algorithm is advantageous in decreasing the total number of iterations and CPU time when compared to the standard Benders algorithm and optimization solver CPLEX, especially for large-scale instances.
Abstract: We investigate a logistics facility location problem to determine whether the existing facilities remain open or not, what the expansion size of the open facilities should be and which potential facilities should be selected. The problem is formulated as a mixed integer linear programming model (MILP) with the objective to minimize the sum of the savings from closing the existing facilities, the expansion costs, the fixed setup costs, the facility operating costs and the transportation costs. The structure of the model motivates us to solve the problem using Benders decomposition algorithm. Three groups of valid inequalities are derived to improve the lower bounds obtained by the Benders master problem. By separating the primal Benders subproblem, different types of disaggregated cuts of the primal Benders cut are constructed in each iteration. A high density Pareto cut generation method is proposed to accelerate the convergence by lifting Pareto-optimal cuts. Computational experiments show that the combination of all the valid inequalities can improve the lower bounds significantly. By alternately applying the high density Pareto cut generation method based on the best disaggregated cuts, the improved Benders decomposition algorithm is advantageous in decreasing the total number of iterations and CPU time when compared to the standard Benders algorithm and optimization solver CPLEX, especially for large-scale instances.

Journal ArticleDOI
TL;DR: This model and the solution algorithm provide an analytical decision support tool for the hybrid power system design problem; computational results show significant improvement in the ability to solve this type of problem in comparison to a state-of-the-art professional solver.
Abstract: This paper presents a stochastic mixed integer programming model for a comprehensive hybrid power system design problem, including renewable energy generation, storage device, transmission network, and thermal generators, for remote areas. Given the complexity of the model, we developed a Benders’ decomposition algorithm with two additional types of cutting planes: Pareto-optimal cuts generated using a modified Magnanti-Wong method and cuts generated from a maximum feasible subsystem. Computational results show significant improvement in our ability to solve this type of problem in comparison to a state-of-the-art professional solver. This model and the solution algorithm provide an analytical decision support tool for the hybrid power system design problem.

Journal ArticleDOI
TL;DR: A Markovian clearing queueing system, where the customers are accumulated according to a Poisson arrival process and the server removes all present customers at the completion epochs of exponential service cycles, is considered.
Abstract: We consider a Markovian clearing queueing system, where the customers are accumulated according to a Poisson arrival process and the server removes all present customers at the completion epochs of exponential service cycles. This system may represent the visits of a transportation facility with unlimited capacity at a certain station. The system evolves in an alternating environment that influences the arrival and the service rates. We assume that the arriving customers decide whether to join the system or balk, based on a natural linear reward-cost structure. We study the balking behavior of the customers and derive the corresponding Nash equilibrium strategies under various levels of information.

Journal ArticleDOI
TL;DR: An improved statistical approach to test the consistency of the pair-wise comparison matrix is proposed, which combines hypothesis testing and maximum likelihood estimation.
Abstract: The pair-wise comparison matrix (PCM) is widely used in multi-criteria decision making methods. If the PCM is inconsistent, the resulting priority vector is not reliable. Hence, it is necessary to measure the level of inconsistency of the PCM. There are two approaches for testing the consistency of the PCM: deterministic approaches and statistical or stochastic approaches. In this paper, an improved statistical approach to test the consistency of the PCM is proposed, which combines hypothesis testing and maximum likelihood estimation. The proposed statistical approach is flexible and reliable because it sets a suitable significance level according to different situations. Two numerical examples are introduced to illustrate the proposed statistical approach.

Journal ArticleDOI
TL;DR: An interior-point branch-and-cut algorithm for structured integer programs based on Benders decomposition and the analytic center cutting plane method (ACCPM) is presented and it is shown that the ACCPM-based Benders cuts are both Pareto-optimal and valid for any node of the branch-and-bound tree.
Abstract: We present an interior-point branch-and-cut algorithm for structured integer programs based on Benders decomposition and the analytic center cutting plane method (ACCPM). We show that the ACCPM-based Benders cuts are both Pareto-optimal and valid for any node of the branch-and-bound tree. The valid cuts are added to a pool of cuts that is used to warm-start the solution of the nodes after branching. The algorithm is tested on two classes of problems: the capacitated facility location problem and the multicommodity capacitated fixed charge network design problem. For the capacitated facility location problem, the proposed approach was on average 2.5 times faster than Benders-branch-and-cut and 11 times faster than classical Benders decomposition. For the multicommodity capacitated fixed charge network design problem, the proposed approach was 4 times faster than Benders-branch-and-cut while classical Benders decomposition failed to solve the majority of the tested instances.

Journal ArticleDOI
TL;DR: An integer linear formulation is presented for the Multi-Depot Multiple Traveling Salesman Problem and it is shown that instances involving up to 255 customers and 25 possible depots can be solved optimally using the proposed methodology.
Abstract: We study the Multi-Depot Multiple Traveling Salesman Problem (MDMTSP), which is a variant of the very well-known Traveling Salesman Problem (TSP). In the MDMTSP an unlimited number of salesmen have to visit a set of customers using routes that can be based on a subset of available depots. The MDMTSP is an NP-hard problem because it includes the TSP as a particular case when the distances satisfy the triangular inequality. The problem has some real applications and is closely related to other important multi-depot routing problems, like the Multi-Depot Vehicle Routing Problem and the Location Routing Problem. We present an integer linear formulation for the MDMTSP and strengthen it with the introduction of several families of valid inequalities. Certain facet-inducing inequalities for the TSP polyhedron can be used to derive facet-inducing inequalities for the MDMTSP. Furthermore, several inequalities that are specific to the MDMTSP are also studied and proved to be facet-inducing. The partial knowledge of the polyhedron has been used to implement a Branch-and-Cut algorithm in which the new inequalities have been shown to be very effective. Computational results show that instances involving up to 255 customers and 25 possible depots can be solved optimally using the proposed methodology.

Journal ArticleDOI
TL;DR: Both exact and heuristic solution approaches are developed for the crane scheduling problem for a vessel after the vessel is moored on a terminal, including two heuristics for large-sized instances.
Abstract: In this paper, we study the crane scheduling problem for a vessel after the vessel is moored on a terminal and develop both exact and heuristic solution approaches for the problem. For small-sized instances, we develop a time-space network flow formulation with non-crossing constraints for the problem and apply an exact solution approach to obtain an optimal solution. For medium-sized instances, we develop a Lagrangian relaxation approach that allows us to obtain tight lower bounds and near-optimal solutions. For large-sized instances, we develop two heuristics and show that the error bounds of our heuristics are no more than 100%. Finally, we perform computational studies to show the effectiveness of our proposed solution approaches.

Journal ArticleDOI
TL;DR: A multi-criteria decision aiding model is developed through the use of the Choquet integral, an extension of the TODIM method, which is based on nonlinear Cumulative Prospect Theory.
Abstract: In this paper a multi-criteria decision aiding model is developed through the use of the Choquet integral. The proposed model is an extension of the TODIM method, which is based on nonlinear Cumulative Prospect Theory. The paper starts by reviewing the first steps of behavioral decision theory. A presentation of the TODIM method follows. The basic concepts of the Choquet integral as related to multi-criteria decision aiding are reviewed. It is also shown how the measures of dominance of the TODIM method can be rewritten through the application of the Choquet integral. From the ordering of decision criteria the fuzzy measures of criteria interactions are computed, which leads to the ranking of alternatives. A case study on the forecasting of property values for rent in a Brazilian city illustrates the proposed model. Results obtained from the use of the Choquet integral are then compared against a previously made usage of the TODIM method. It is concluded that significant advantages exist derived from the use of the Choquet integral. The paper closes with recommendations for future research.
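For completeness, the discrete Choquet integral used in the model above aggregates an alternative's (nonnegative) criterion scores $f(x_1), \dots, f(x_n)$ with respect to a fuzzy measure (capacity) $\mu$; in generic notation,

\[
C_{\mu}(f) = \sum_{i=1}^{n} \bigl[ f(x_{(i)}) - f(x_{(i-1)}) \bigr]\, \mu(A_{(i)}),
\]

where the criteria are reordered so that $0 = f(x_{(0)}) \le f(x_{(1)}) \le \dots \le f(x_{(n)})$ and $A_{(i)} = \{x_{(i)}, \dots, x_{(n)}\}$. When $\mu$ is additive this reduces to an ordinary weighted sum; a non-additive $\mu$ is what allows the extended TODIM measures of dominance to capture interactions among criteria.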