
Showing papers in "Annals of Operations Research in 1993"


Journal ArticleDOI
TL;DR: Approximate methods based on descent, hybrid simulated annealing/tabu search, and tabu search algorithms are developed and different search strategies are investigated and an estimate for the tabu list size is statistically derived.
Abstract: The vehicle routing problem (VRP) under capacity and distance restrictions involves the design of a set of minimum cost delivery routes, originating and terminating at a central depot, which services a set of customers. Each customer must be supplied exactly once by one vehicle route. The total demand of any vehicle must not exceed the vehicle capacity. The total length of any route must not exceed a pre-specified bound. Approximate methods based on descent, hybrid simulated annealing/tabu search, and tabu search algorithms are developed and different search strategies are investigated. A special data structure for the tabu search algorithm is implemented which reduced computational time by more than 50%. An estimate for the tabu list size is statistically derived. Computational results are reported on a sample of seventeen benchmark test problems from the literature and nine randomly generated problems. The new methods improve significantly on all results reported in the literature, both in the number of vehicles used and in the total distance travelled.

1,051 citations
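The descent and tabu search methods above are not reproduced in the abstract, but the basic tabu loop they share can be sketched. The following is a minimal, generic illustration on a toy single-machine sequencing instance; the instance, swap neighborhood, tenure, and aspiration rule are illustrative choices, not the paper's VRP implementation:

```python
import random
from collections import deque

def tabu_search(cost, initial, neighbors, tenure=7, iters=200):
    """Generic tabu search: take the best non-tabu move each iteration,
    keeping a fixed-tenure list of recently used move attributes."""
    current = initial
    best, best_cost = current, cost(current)
    tabu = deque(maxlen=tenure)          # recently used move attributes
    for _ in range(iters):
        candidates = []
        for nbr, attr in neighbors(current):
            c = cost(nbr)
            # aspiration: allow a tabu move if it beats the best so far
            if attr in tabu and c >= best_cost:
                continue
            candidates.append((c, nbr, attr))
        if not candidates:
            break
        c, current, attr = min(candidates, key=lambda t: t[0])
        tabu.append(attr)
        if c < best_cost:
            best, best_cost = current, c
    return best, best_cost

# toy instance: sequence jobs to minimise total weighted completion time
times   = [3, 1, 4, 1, 5]
weights = [2, 4, 1, 3, 2]

def cost(seq):
    t, total = 0, 0
    for j in seq:
        t += times[j]
        total += weights[j] * t
    return total

def neighbors(seq):
    # swap neighbourhood; the swapped pair of jobs is the tabu attribute
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            nbr = list(seq)
            nbr[i], nbr[j] = nbr[j], nbr[i]
            yield tuple(nbr), (min(seq[i], seq[j]), max(seq[i], seq[j]))

random.seed(0)
start = tuple(random.sample(range(5), 5))
best, best_cost = tabu_search(cost, start, neighbors)
```

On this tiny instance the swap neighborhood suffices to reach the weighted-shortest-processing-time optimum; the point of the sketch is only the interplay of tabu list, aspiration, and best-so-far tracking.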


Journal ArticleDOI
TL;DR: This presentation demonstrates that a well-tuned implementation of tabu search makes it possible to obtain solutions of high quality for difficult problems, yielding outcomes in some settings that have not been matched by other known techniques.
Abstract: We describe the main features of tabu search, emphasizing a perspective for guiding a user to understand basic implementation principles for solving combinatorial or nonlinear problems. We also identify recent developments and extensions that have contributed to increasing the efficiency of the method. One of the useful aspects of tabu search is the ability to adapt a rudimentary prototype implementation to encompass additional model elements, such as new types of constraints and objective functions. Similarly, the method itself can be evolved to varying levels of sophistication. We provide several examples of discrete optimization problems to illustrate the strategic concerns of tabu search, and to show how they may be exploited in various contexts. Our presentation is motivated by the emergence of an extensive literature of computational results, which demonstrates that a well-tuned implementation makes it possible to obtain solutions of high quality for difficult problems, yielding outcomes in some settings that have not been matched by other known techniques.

941 citations


Journal ArticleDOI
TL;DR: A hierarchical algorithm for the flexible job shop scheduling problem is described, based on the tabu search metaheuristic, which makes it possible to adapt the same basic algorithm to different objective functions.
Abstract: A hierarchical algorithm for the flexible job shop scheduling problem is described, based on the tabu search metaheuristic. Hierarchical strategies have been proposed in the literature for complex scheduling problems, and the tabu search metaheuristic, being able to cope with different memory levels, provides a natural background for the development of a hierarchical algorithm. For the case considered, a two-level approach has been devised, based on the decomposition into a routing subproblem and a job shop scheduling subproblem, which is obtained by assigning each operation of each job to one among the equivalent machines. Both problems are tackled by tabu search. Coordination issues between the two hierarchical levels are considered. Unlike other hierarchical schemes, which are based on a one-way information flow, the one proposed here is based on a two-way information flow. This characteristic, together with the flexibility of local search strategies like tabu search, makes it possible to adapt the same basic algorithm to different objective functions. Preliminary computational experience is reported.

874 citations


Journal ArticleDOI
TL;DR: This paper applies the tabu-search technique to the job-shop scheduling problem, a notoriously difficult problem in combinatorial optimization and shows that the implementation of this method dominates both a previous approach with tabu search and the other heuristics based on iterative improvements.
Abstract: In this paper, we apply the tabu-search technique to the job-shop scheduling problem, a notoriously difficult problem in combinatorial optimization. We show that our implementation of this method dominates both a previous tabu search approach and other heuristics based on iterative improvement.

605 citations


Journal ArticleDOI
TL;DR: A general approach to analyzing the convergence and the rate of convergence of feasible descent methods that does not require any nondegeneracy assumption on the problem is surveyed and extended.
Abstract: We survey and extend a general approach to analyzing the convergence and the rate of convergence of feasible descent methods that does not require any nondegeneracy assumption on the problem. This approach is based on a certain error bound for estimating the distance to the solution set and is applicable to a broad class of methods.

477 citations


Journal ArticleDOI
TL;DR: The goals of the paper are to demonstrate that although non-standard, many of the important quantitative and qualitative properties of ordinary differential equations that hold under the standard conditions apply here as well, and to prove convergence for a class of numerical schemes designed to approximate solutions to a given variational inequality.
Abstract: The variational inequality problem has been utilized to formulate and study a plethora of competitive equilibrium problems in different disciplines, ranging from oligopolistic market equilibrium problems to traffic network equilibrium problems. In this paper we consider for a given variational inequality a naturally related ordinary differential equation. The ordinary differential equations that arise are nonstandard because of discontinuities that appear in the dynamics. These discontinuities are due to the constraints associated with the feasible region of the variational inequality problem. The goals of the paper are two-fold. The first goal is to demonstrate that although non-standard, many of the important quantitative and qualitative properties of ordinary differential equations that hold under the standard conditions, such as Lipschitz continuity type conditions, apply here as well. This is important from the point of view of modeling, since it suggests (at least under some appropriate conditions) that these ordinary differential equations may serve as dynamical models. The second goal is to prove convergence for a class of numerical schemes designed to approximate solutions to a given variational inequality. This is done by exploiting the equivalence between the stationary points of the associated ordinary differential equation and the solutions of the variational inequality problem. It can be expected that the techniques described in this paper will be useful for more elaborate dynamical models, such as stochastic models, and that the connection between such dynamical models and the solutions to the variational inequalities will provide a deeper understanding of equilibrium problems.

420 citations
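The abstract does not specify the numerical schemes, but a standard discretization of such projected dynamics is the Euler-type iteration x_{k+1} = P_K(x_k − h·F(x_k)), whose fixed points are exactly the solutions of the variational inequality. A minimal sketch under assumed choices (K the nonnegative orthant, a simple strongly monotone map F(x) = x − d):

```python
def project(x):
    # projection onto the nonnegative orthant (the feasible set K)
    return [max(v, 0.0) for v in x]

def F(x):
    # an illustrative strongly monotone map F(x) = x - d
    d = [3.0, -1.0]
    return [xi - di for xi, di in zip(x, d)]

def euler_scheme(x0, step=0.1, iters=500):
    # x_{k+1} = P_K(x_k - h * F(x_k)): discretised projected dynamics;
    # the discontinuity appears exactly when the projection is active
    x = x0
    for _ in range(iters):
        x = project([xi - step * fi for xi, fi in zip(x, F(x))])
    return x

x_star = euler_scheme([0.0, 5.0])
```

Here the iterate settles at x* = (3, 0): the first component is an interior stationary point of the dynamics, while the second is pinned to the boundary by the projection, illustrating why the right-hand side is discontinuous.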


Journal ArticleDOI
Mark Broadie1
TL;DR: The tradeoff between estimation error and stationarity is investigated and a method for adjusting for the bias is suggested and a statistical test is proposed to check for nonstationarity in historical data.
Abstract: The mean-variance model for portfolio selection requires estimates of many parameters. This paper investigates the effect of errors in parameter estimates on the results of mean-variance analysis. Using a small amount of historical data to estimate parameters exposes the model to estimation errors. However, using a long time horizon to estimate parameters increases the possibility of nonstationarity in the parameters. This paper investigates the tradeoff between estimation error and stationarity. A simulation study shows that the effects of estimation error can be surprisingly large. The magnitude of the errors increases with the number of securities in the analysis. Due to the error maximization property of mean-variance analysis, estimates of portfolio performance are optimistically biased predictors of actual portfolio performance. It is important for users of mean-variance analysis to recognize and correct for this phenomenon in order to develop more realistic expectations of the future performance of a portfolio. This paper suggests a method for adjusting for the bias. A statistical test is proposed to check for nonstationarity in historical data.

278 citations
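The paper's simulation study is not reproduced here, but the optimistic bias it describes can be illustrated with a much cruder stand-in: when all assets share the same true mean, the asset with the best sample mean still looks better than it is, and the more assets examined, the larger the optimism. All parameter values below are illustrative assumptions:

```python
import random
import statistics

random.seed(1)

def selection_optimism(n_assets, n_obs, true_mean=0.05, vol=0.2, trials=500):
    """Pick the asset with the best sample mean (a crude stand-in for the
    'error maximisation' of mean-variance optimisation) and measure how
    far its estimated mean exceeds its true mean, on average."""
    gaps = []
    for _ in range(trials):
        sample_means = [
            statistics.fmean(random.gauss(true_mean, vol) for _ in range(n_obs))
            for _ in range(n_assets)
        ]
        gaps.append(max(sample_means) - true_mean)  # optimism of the pick
    return statistics.fmean(gaps)

bias_small = selection_optimism(n_assets=5, n_obs=60)
bias_large = selection_optimism(n_assets=50, n_obs=60)
```

Since every asset has the same true mean, any apparent edge of the chosen asset is pure estimation noise, and the expected optimism grows with the number of assets, matching the abstract's observation that errors increase with the number of securities.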


Journal ArticleDOI
TL;DR: A scheme based on a blending of classical Benders decomposition techniques and a special technique, called importance sampling, is used to solve this general class of multi-stage stochastic linear programs.
Abstract: The paper demonstrates how multi-period portfolio optimization problems can be efficiently solved as multi-stage stochastic linear programs. A scheme based on a blending of classical Benders decomposition techniques and a special technique, called importance sampling, is used to solve this general class of multi-stage stochastic linear programs. We discuss the case where stochastic parameters are dependent within a period as well as between periods. Initial computational results are presented.

249 citations


Journal ArticleDOI
TL;DR: This paper presents a tabu search based method for finding good solutions to a real-life vehicle routing problem that takes the heterogeneous character of the fleet into account and obtains solutions that are significantly better than those previously developed and implemented in practice.
Abstract: This paper presents a tabu search based method for finding good solutions to a real-life vehicle routing problem. The problem considered deals with some new features beyond those normally associated with the classical problems of the literature: in addition to capacity constraints for vehicles and time windows for deliveries, it takes the heterogeneous character of the fleet into account, in the sense that utilization costs are vehicle-dependent and that some accessibility restrictions have to be fulfilled. It also deals with the use of trailers. In spite of the intricacy of the problem, the proposed tabu search approach is easy to implement and can be easily adapted to many other applications. An emphasis is placed on means that have to be used to speed up the search. In a few minutes of computation on a personal workstation, our approach obtains solutions that are significantly better than those previously developed and implemented in practice.

246 citations


Journal ArticleDOI
TL;DR: This paper proposes a practical scheme to obtain a portfolio with a large third moment under constraints on the first and second moments; the resulting problem is a linear program, so that a large-scale model can be optimized without difficulty.
Abstract: It is assumed in the standard portfolio analysis that an investor is risk averse and that his utility is a function of the mean and variance of the rate of the return of the portfolio or can be approximated as such. It turns out, however, that the third moment (skewness) plays an important role if the distribution of the rate of return of assets is asymmetric around the mean. In particular, an investor would prefer a portfolio with larger third moment if the mean and variance are the same. In this paper, we propose a practical scheme to obtain a portfolio with a large third moment under constraints on the first and second moments. The problem we need to solve is a linear programming problem, so that a large-scale model can be optimized without difficulty. It is demonstrated that this model generates a portfolio with a large third moment very quickly.

232 citations


Journal ArticleDOI
John G. Klincewicz1
TL;DR: New heuristics for the p-hub location problem are described, based on tabu search and on a greedy randomized adaptive search procedure (GRASP), capable of examining several local optima, so that, overall, superior solutions are found.
Abstract: In the discrete p-hub location problem, various nodes interact with each other by sending and receiving given levels of traffic (such as telecommunications traffic, data transmissions, airline passengers, packages, etc.). It is necessary to choose p of the given nodes to act as hubs, which are fully interconnected; it is also necessary to connect each other node to one of these hubs so that traffic can be sent between any pair of nodes by using the hubs as switching points. The objective is to minimize the sum of the costs for sending traffic along the links connecting the various nodes. Like many combinatorial problems, the p-hub location problem has many local optima. Heuristics, such as exchange methods, can terminate once such a local optimum is encountered. In this paper, we describe new heuristics for the p-hub location problem, based on tabu search and on a greedy randomized adaptive search procedure (GRASP). These recently developed approaches to combinatorial optimization are capable of examining several local optima, so that, overall, superior solutions are found. Computational experience is reported in which both tabu search and GRASP found “optimal” hub locations (subject to the assumption that nodes must be assigned to the nearest hub) in over 90% of test problems. For problems for which such optima are not known, tabu search and GRASP generated new best-known solutions.

Journal ArticleDOI
TL;DR: In this article, Benders decomposition techniques and Monte Carlo sampling (importance sampling) are used for solving two-stage stochastic linear programs with recourse, a method first introduced by Dantzig and Glynn.
Abstract: This paper focuses on Benders decomposition techniques and Monte Carlo sampling (importance sampling) for solving two-stage stochastic linear programs with recourse, a method first introduced by Dantzig and Glynn [7]. The algorithm is discussed and further developed. The paper gives a complete presentation of the method as it is currently implemented. Numerical results from test problems of different areas are presented. Using small test problems, we compare the solutions obtained by the algorithm with universe solutions. We present the solutions of large-scale problems with numerous stochastic parameters, which in the deterministic formulation would have billions of constraints. The problems concern expansion planning of electric utilities with uncertainty in the availabilities of generators and transmission lines and portfolio management with uncertainty in the future returns.

Journal ArticleDOI
TL;DR: A meta-heuristic that embeds deterministic local search techniques into simulated annealing so that the chain explores only local optima; it makes large, global changes, even at low temperatures, thus overcoming large barriers in configuration space.
Abstract: We introduce a meta-heuristic to combine simulated annealing with local search methods for combinatorial optimization (CO) problems. This new class of Markov chains leads to significantly more powerful optimization methods than either simulated annealing or local search. The main idea is to embed deterministic local search techniques into simulated annealing so that the chain explores only local optima. It makes large, global changes, even at low temperatures, thus overcoming large barriers in configuration space. We have tested this meta-heuristic for the traveling salesman and graph partitioning problems. Tests on instances from public libraries and random ensembles quantify the power of the method. Our algorithm is able to solve large instances to optimality, improving upon state-of-the-art local search methods very significantly. For the traveling salesman problem with randomly distributed cities in a square, the procedure improves on 3-opt by 1.6% and on Lin-Kernighan local search by 1.3%. For the partitioning of sparse random graphs of average degree equal to 5, the improvement over Kernighan-Lin local search is 8.9%. For both CO problems, we obtain new champion heuristics.
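The chained idea of perturbing a local optimum, re-descending, and applying the Metropolis rule can be sketched on a toy number-partitioning instance; the instance, single-flip descent, three-bit kick, and geometric cooling below are illustrative assumptions, not the paper's tested implementation:

```python
import math
import random

random.seed(2)
nums = [random.randint(1, 100) for _ in range(20)]

def cost(mask):
    # number partitioning: absolute difference between the two subset sums
    return abs(sum(n if b else -n for n, b in zip(nums, mask)))

def descend(mask):
    # deterministic single-flip descent to a local optimum
    mask = list(mask)
    c = cost(mask)
    improved = True
    while improved:
        improved = False
        for i in range(len(mask)):
            mask[i] ^= 1
            nc = cost(mask)
            if nc < c:
                c, improved = nc, True
            else:
                mask[i] ^= 1          # undo a non-improving flip
    return tuple(mask), c

def chained_anneal(steps=200, t0=50.0, alpha=0.98):
    # the Markov chain lives on local optima: perturb, re-descend, then
    # accept or reject the new local optimum with the Metropolis rule
    cur, cur_c = descend([random.randint(0, 1) for _ in nums])
    best, best_c = cur, cur_c
    t = t0
    for _ in range(steps):
        kick = list(cur)
        for i in random.sample(range(len(kick)), 3):   # large, global change
            kick[i] ^= 1
        cand, cand_c = descend(kick)
        if cand_c <= cur_c or random.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
        if cur_c < best_c:
            best, best_c = cur, cur_c
        t *= alpha
    return best, best_c

best, best_c = chained_anneal()
```

Because every state visited is itself a local optimum, each accepted move is a large jump in configuration space, which is the mechanism the abstract credits for overcoming barriers even at low temperatures.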

Journal ArticleDOI
TL;DR: Several Linear Programming (LP) and Mixed Integer Programming (MIP) models for the production and capacity planning problems with uncertainty in demand are proposed and scenario-based models for formalizing implementable policies are presented.
Abstract: Several Linear Programming (LP) and Mixed Integer Programming (MIP) models for the production and capacity planning problems with uncertainty in demand are proposed. In contrast to traditional mathematical programming approaches, we use scenarios to characterize the uncertainty in demand. Solutions are obtained for each scenario and then these individual scenario solutions are aggregated to yield a nonanticipative or implementable policy. Such an approach makes it possible to model nonstationarity in demand as well as a variety of recourse decision types. Two scenario-based models for formalizing implementable policies are presented. The first model is a LP model for multi-product, multi-period, single-level production planning to determine the production volume and product inventory for each period, such that the expected cost of holding inventory and lost demand is minimized. The second model is a MIP model for multi-product, multi-period, single-level production planning to help in sourcing decisions for raw materials supply. Although these formulations lead to very large scale mathematical programming problems, our computational experience with LP models for real-life instances is very encouraging.

Journal ArticleDOI
TL;DR: It is shown how to transform this problem into a general mean-variance optimization problem, hence the Critical Line Algorithm is applicable.
Abstract: The general mean-semivariance portfolio optimization problem seeks to determine the efficient frontier by solving a parametric non-quadratic programming problem. In this paper it is shown how to transform this problem into a general mean-variance optimization problem, hence the Critical Line Algorithm is applicable. This paper also discusses how to implement the critical line algorithm to save storage and reduce execution time.

Journal ArticleDOI
TL;DR: Extensive computational results show that tabu search is a competitive approach for this class of problems and comparisons with an efficient dual ascent procedure are reported.
Abstract: We propose a tabu search heuristic for the location/allocation problem with balancing requirements. This problem typically arises in the context of the medium term management of a fleet of containers of multiple types, where container depots have to be selected, the assignment of customers to depots has to be established for each type of container, and the interdepot container traffic has to be planned to account for differences in supplies and demands in various zones of the geographical territory served by a container shipping company. It is modeled as a mixed integer program, which combines zero-one location variables and a multicommodity network flow structure. Extensive computational results on a set of benchmark problems and comparisons with an efficient dual ascent procedure are reported. These show that tabu search is a competitive approach for this class of problems.

Journal ArticleDOI
TL;DR: The various pivot rules of the simplex method and its variants that have been developed in the last two decades are discussed, starting from the appearance of the minimal index rule of Bland.
Abstract: The purpose of this paper is to discuss the various pivot rules of the simplex method and its variants that have been developed in the last two decades, starting from the appearance of the minimal index rule of Bland. We are mainly concerned with finiteness properties of simplex type pivot rules. Well known classical results concerning the simplex method are not considered in this survey, but the connection between the new pivot methods and the classical ones, if there is any, is discussed. In this paper we discuss three classes of recently developed pivot rules for linear programming. The first and largest class is the class of essentially combinatorial pivot rules including minimal index type rules and recursive rules. These rules only use labeling and signs of the variables. The second class contains those pivot rules which can actually be considered as variants or generalizations or specializations of Lemke's method, and so they are closely related to parametric programming. The last class has the common feature that the rules all have close connections to certain interior point methods. Finally, we mention some open problems for future research.
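As a concrete reference point for the survey above, Bland's minimal index rule can be sketched in a few lines; the leaving-variable helper below is a simplified illustration over precomputed ratios, not taken from the paper:

```python
def blands_entering(reduced_costs, tol=1e-9):
    """Bland's minimal index rule: among nonbasic variables with negative
    reduced cost, pick the one with the smallest index (prevents cycling)."""
    for j, c in enumerate(reduced_costs):
        if c < -tol:
            return j
    return None  # no candidate: the current basis is optimal

def blands_leaving(ratios, basic_vars, tol=1e-9):
    """Tie-break the minimum-ratio test by the smallest basic-variable
    index; ratios[i] is None when row i is ineligible."""
    eligible = [(r, basic_vars[i], i) for i, r in enumerate(ratios)
                if r is not None]
    if not eligible:
        return None  # unbounded in the entering direction
    best = min(r for r, _, _ in eligible)
    return min((v, i) for r, v, i in eligible if r <= best + tol)[1]

entering = blands_entering([0.0, -2.0, 3.0, -1.0])   # smallest index wins -> 1
leaving = blands_leaving([2.0, 2.0, 5.0], [7, 3, 5])  # tie broken by var index
```

Both choices use only indices and signs, which is exactly why the survey classifies such rules as "essentially combinatorial".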

Journal ArticleDOI
TL;DR: This paper compares the transportation cost of suboptimal location and allocation schemes to the optimal cost to determine whether such schemes can produce nearly optimal transportation costs; analytical results for distribution to a continuous demand show that nearly optimal costs can be achieved with suboptimal locations.
Abstract: Locating transshipment facilities and allocating origins and destinations to transshipment facilities are important decisions for many distribution and logistic systems. Models that treat demand as a continuous density over the service region often assume certain facility locations or a certain allocation of demand. It may be assumed that facility locations lie on a rectangular grid or that demand is allocated to the nearest facility or allocated such that each facility serves an equal amount of demand. These assumptions result in suboptimal distribution systems. This paper compares the transportation cost for suboptimal location and allocation schemes to the optimal cost to determine if suboptimal location and allocation schemes can produce nearly optimal transportation costs. Analytical results for distribution to a continuous demand show that nearly optimal costs can be achieved with suboptimal locations. An example of distribution to discrete demand points indicates the difficulties in applying these results to discrete demand problems.

Journal ArticleDOI
TL;DR: This paper proposes that hash functions be used to record the solutions encountered during recent iterations of the search in a long list to free the algorithm designer of the need to consider cycling when creating tabu restrictions based on move attributes.
Abstract: Tabu search as proposed by Glover [3,4] has proven to be a very effective metaheuristic for hard problems. In this paper we propose that hash functions be used to record the solutions encountered during recent iterations of the search in a long list. Hash values of potential solutions can be compared to the values on the list for the purpose of avoiding cycling. This frees the algorithm designer of the need to consider cycling when creating tabu restrictions based on move attributes. We suggest specific functions that result in very good performance.
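A minimal sketch of the idea, using Python's built-in hash in place of the specific hash functions the paper suggests, and a trivial one-dimensional landscape as the test case:

```python
from collections import deque

def hash_guided_search(cost, initial, neighbors, memory=50, iters=100):
    """Move to the best neighbour whose hash is not in a list of recently
    visited solutions; the hash list replaces attribute-based tabu rules
    as the cycle-avoidance mechanism."""
    current = initial
    best, best_cost = current, cost(current)
    recent = deque([hash(current)], maxlen=memory)   # hashes, not moves
    for _ in range(iters):
        moves = [(cost(n), n) for n in neighbors(current)
                 if hash(n) not in recent]
        if not moves:
            break
        c, current = min(moves)
        recent.append(hash(current))
        if c < best_cost:
            best, best_cost = current, c
    return best, best_cost

# toy landscape: integers with +/-1 moves, minimum at x = 7
best, best_cost = hash_guided_search(
    cost=lambda x: (x - 7) ** 2,
    initial=0,
    neighbors=lambda x: (x - 1, x + 1),
)
```

Because revisiting any recently seen solution is forbidden outright, the designer never has to reason about which move attributes could recreate an old solution, which is the freedom the abstract highlights.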

Journal ArticleDOI
TL;DR: In this article, two variants of a tabu search heuristic, a deterministic one and a probabilistic one, for the maximum clique problem are described and compared with those of other approximate methods.
Abstract: We describe two variants of a tabu search heuristic, a deterministic one and a probabilistic one, for the maximum clique problem. This heuristic may be viewed as a natural alternative implementation of tabu search for this problem when compared to existing ones. We also present a new random graph generator, the \(\hat p\)-generator, which produces graphs with larger clique sizes than comparable ones obtained by classical random graph generating techniques. Computational results on a large set of test problems randomly generated with this new generator are reported and compared with those of other approximate methods.

Journal ArticleDOI
TL;DR: A dynamic strategy, the reverse elimination method, for tabu list management, is described and directions on improving its computational effort are given.
Abstract: Tabu search is a metastrategy for guiding known heuristics to overcome local optimality. Successful applications of this kind of metaheuristic to a great variety of problems have been reported in the literature. However, up to now mainly static tabu list management ideas have been applied. In this paper we describe a dynamic strategy, the reverse elimination method, and give directions on improving its computational effort. The impact of the method will be shown with respect to a multiconstraint version of the zero-one knapsack problem. Numerical results are presented comparing it with a simulated annealing approach.

Journal ArticleDOI
TL;DR: A new approach to cutting a rectangular sheet of material into pieces of arbitrary shapes is presented and a tabu search is applied for finding a final cutting pattern.
Abstract: In this paper, a new approach to cutting a rectangular sheet of material into pieces of arbitrary shapes is presented. The proposed method consists of two stages. After the generation of an initial solution, a tabu search is applied for finding a final cutting pattern. The presentation of the main ideas of the method is followed by a description of an implementation and some experimental results.

Journal ArticleDOI
TL;DR: A light traffic heuristic for an M/G/1 queue with limited inventory that gives rise to a closed form expression for average delay in terms of basic system parameters is developed.
Abstract: Motivated by solving a stylized location problem, we develop a light traffic heuristic for an M/G/1 queue with limited inventory that gives rise to a closed form expression for average delay in terms of basic system parameters. Simulation experiments show that the heuristic works well. The inventory operates as follows: the inventory level drops by one unit after each service completion and whenever it drops to a pre-specified level u, an order is placed with replenishment time ∼ exp(γ). Upon replenishment the inventory is restocked to a pre-specified level s and any arrivals when there is no inventory are placed in queue. Suggestions are given to cover the more general case of a New Better than Used (NBU) replenishment time distribution. Applications to inventory management problems are also discussed.

Journal ArticleDOI
TL;DR: A new heuristic algorithm to perform tabu search on the Quadratic Assignment Problem (QAP) is developed and a new intensification strategy based on intermediate term memory is proposed and shown to be promising especially while solving large QAPs.
Abstract: A new heuristic algorithm to perform tabu search on the Quadratic Assignment Problem (QAP) is developed. A massively parallel implementation of the algorithm on the Connection Machine CM-2 is provided. The implementation uses n² processors, where n is the size of the problem. The elements of the algorithm, called Par_tabu, include dynamically changing tabu list sizes, aspiration criterion and long term memory. A new intensification strategy based on intermediate term memory is proposed and shown to be promising especially while solving large QAPs. The combination of all these elements gives a very efficient heuristic for the QAP: the best known or improved solutions are obtained in a significantly smaller number of iterations than in other comparative studies. Combined with the implementation on CM-2, this approach provides suboptimal solutions to QAPs of bigger dimensions in reasonable time.

Journal ArticleDOI
TL;DR: A new approach for quantifying a bank's managerial efficiency is presented, using a data-envelopment-analysis model that combines multiple inputs and outputs to compute a scalar measure of efficiency and quality.
Abstract: The dramatic rise in bank failures over the last decade has led to a search for leading indicators so that costly bailouts might be avoided. While the quality of a bank's management is generally acknowledged to be a key contributor to institutional collapse, it is usually excluded from early warning models for lack of a metric. This paper presents a new approach for quantifying a bank's managerial efficiency, using a data-envelopment-analysis model that combines multiple inputs and outputs to compute a scalar measure of efficiency and quality. An analysis of 930 banks over a five-year period shows significant differences in management-quality scores between surviving and failing institutions. These differences are detectable long before failure occurs and increase as the failure date approaches. Hence this new metric provides an important, yet previously missing, modelling element for the early identification of troubled banks.

Journal ArticleDOI
TL;DR: An integrated simulation/optimization model for managing portfolios of mortgage-backed securities using a mean-absolute deviation model which is consistent with the asymmetric distribution of returns of mortgage securities and derivative products is developed.
Abstract: We develop an integrated simulation/optimization model for managing portfolios of mortgage-backed securities. The mortgage portfolio problem is viewed in the same spirit of models used for the management of portfolios of equities. That is, it trades off rates of return with a suitable measure of risk. In this respect we employ a mean-absolute deviation model which is consistent with the asymmetric distribution of returns of mortgage securities and derivative products. We develop a simulation procedure to compute holding period returns of the mortgage securities under a range of interest rate scenarios. The simulation explicitly takes into account the stylized facts of mortgage securities: the propensity of homeowners to prepay their mortgages, and the option-adjusted premia associated with these securities. Details of both the simulation and optimization models are presented. The model is then applied to the funding of a typical insurance liability stream, and it is shown to generate superior results to the standard portfolio immunization approach.

Journal ArticleDOI
TL;DR: A rigorous mathematical programming framework for the scheduling of multipurpose batch plants operated in a cyclic mode is presented and it is shown that it is sufficient for the formulation to characterize a single cycle of the periodic schedule despite the existence of tasks that span two successive cycles.
Abstract: A rigorous mathematical programming framework for the scheduling of multipurpose batch plants operated in a cyclic mode is presented. The proposed formulation can deal with batch operations described by complex processing networks, involving shared intermediates, material recycles, and multiple processing routes to the same end-product or intermediate. Batch aggregation and splitting are also allowed. The formulation permits considerable flexibility in the utilisation of processing equipment and storage capacity, and accommodates problems with limited availability of utilities. The scheduling problem is formulated as a large mixed integer linear program (MILP). For a given cycle time, it is shown that it is sufficient for the formulation to characterize a single cycle of the periodic schedule despite the existence of tasks that span two successive cycles. The optimal cycle time is determined by solving a sequence of fixed cycle time problems. The MILP is solved by a branch-and-bound algorithm modified so as to avoid exploring branches that are cyclic permutations of others already fathomed. The resulting implementation permits the solution of problems of realistic size within reasonable computational effort. Several examples are used to illustrate the applicability of the algorithm.

Journal ArticleDOI
Alan J. King1
TL;DR: The Levy-Markowitz argument is extended to account for asymmetric risk by basing the local approximation on piecewise linear-quadratic risk measures, which can be tuned to express a wide range of preferences and adjusted to reject outliers in the data.
Abstract: Traditional asset allocation of the Markowitz type defines risk to be the variance of the return, contradicting the common-sense intuition that higher returns should be preferred to lower. An argument of Levy and Markowitz justifies the mean/variance selection criteria by deriving it from a local quadratic approximation to utility functions. We extend the Levy-Markowitz argument to account for asymmetric risk by basing the local approximation on piecewise linear-quadratic risk measures, which can be tuned to express a wide range of preferences and adjusted to reject outliers in the data. The implications of this argument lead us to reject the commonly proposed asymmetric alternatives, the mean/lower partial moment efficient frontiers, in favor of the “risk tolerance” frontier. An alternative model that allows for asymmetry is the tracking model, where a portfolio is sought to reproduce a (possibly) asymmetric distribution at lowest cost.

Journal ArticleDOI
TL;DR: A multiobjective model is developed to depict the tradeoffs involved when locating one or more undesirable facilities to service a region; the set of efficient solutions is generated using an enumeration algorithm.
Abstract: In this paper, we develop a multiobjective model to depict the tradeoffs involved when locating one or more undesirable facilities to service a region. We assume that the region requires a certain capacity of service, and that this capacity can be met by building a combination of different-sized facilities. Examples could include sanitary landfills, incinerators, and power-generating stations. Our objectives are to minimize the total cost of the facilities located, the total opposition to the facilities, and the maximum disutility imposed on any individual. Opposition and disutility are assumed to be nonlinearly decreasing functions of distance, and increasing functions of facility size. We formulate our model as a multiobjective mixed-integer program, and generate the set of efficient solutions using an enumeration algorithm. Our code can solve realistically sized problems on a microcomputer. We give an example to illustrate the tradeoffs between the three objectives, which are inevitable in such a location problem.

Journal ArticleDOI
TL;DR: While implicitly simulating the original Markov chain with the original cooling schedule, this work speeds up both stand-alone simulated annealing and the hybrid combination by a factor going to infinity as the number of transitions generated goes to infinity.
Abstract: We integrate tabu search, simulated annealing, genetic algorithms, and random restarting. In addition, while simulating the original Markov chain (defined on a state space tailored either to stand-alone simulated annealing or to the hybrid scheme) with the original cooling schedule implicitly, we speed up both stand-alone simulated annealing and the combination by a factor going to infinity as the number of transitions generated goes to infinity. Beyond this, speedup nearly linear in the number of independent parallel processors often can be expected.