
Showing papers in "Annals of Operations Research in 2010"


Journal ArticleDOI
TL;DR: The Generalized Nash Equilibrium Problem is an important model that has its roots in the economic sciences but is being fruitfully used in many different fields; its main properties and solution algorithms are discussed, along with useful topics for future research.
Abstract: The Generalized Nash Equilibrium Problem is an important model that has its roots in the economic sciences but is being fruitfully used in many different fields. In this survey paper we aim at discussing its main properties and solution algorithms, pointing out what could be useful topics for future research in the field.

838 citations
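
Among the solution algorithms such surveys cover, the simplest to picture is a Gauss-Seidel best-response iteration: players take turns optimizing their own decision while respecting a shared constraint. Below is a minimal sketch on a made-up two-player game; the quadratic costs and the shared budget x1 + x2 <= 1 are illustrative assumptions, not the paper's example.

```python
# Hypothetical toy GNEP: each player i minimizes its own cost over its own
# variable, subject to the shared resource constraint x1 + x2 <= cap.
from scipy.optimize import minimize_scalar

def best_response(cost, x_other, cap=1.0):
    # player's feasible set shrinks as the other player uses the resource
    res = minimize_scalar(cost, bounds=(0.0, max(1e-9, cap - x_other)),
                          method="bounded")
    return res.x

f1 = lambda x1: (x1 - 0.8) ** 2   # player 1 would like x1 = 0.8
f2 = lambda x2: (x2 - 0.6) ** 2   # player 2 would like x2 = 0.6

x1 = x2 = 0.0
for _ in range(20):               # iterate best responses until stable
    x1 = best_response(f1, x2)
    x2 = best_response(f2, x1)
print(round(x1, 3), round(x2, 3))  # -> 0.8 0.2, one generalized Nash point
```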


Journal ArticleDOI
TL;DR: Variable neighbourhood search (VNS) is a metaheuristic, or a framework for building heuristics, based upon systematic changes of neighbourhoods, both in a descent phase, to find a local minimum, and in a perturbation phase, to emerge from the corresponding valley.
Abstract: Variable neighbourhood search (VNS) is a metaheuristic, or a framework for building heuristics, based upon systematic changes of neighbourhoods both in descent phase, to find a local minimum, and in perturbation phase, to emerge from the corresponding valley. It was first proposed in 1997 and has since then rapidly developed both in its methods and its applications. In the present paper, these two aspects are thoroughly reviewed and an extensive bibliography is provided. Moreover, one section is devoted to newcomers. It consists of steps for developing a heuristic for any particular problem. Those steps are common to the implementation of other metaheuristics.

718 citations
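
As a concrete companion to the review, here is one common way to sketch the basic VNS loop: shake in neighbourhood k, descend, and return to the first neighbourhood upon improvement. The callables (`neighborhoods`, `local_search`, objective `f`) are placeholders to be supplied per problem, not code from the paper.

```python
def vns(x0, f, neighborhoods, local_search, max_rounds=100):
    """Basic VNS skeleton: neighborhoods[k](x) draws a random point from
    the k-th (increasingly large) neighbourhood of x (the shaking step)."""
    x, fx = x0, f(x0)
    for _ in range(max_rounds):
        k = 0
        while k < len(neighborhoods):
            x_shaken = neighborhoods[k](x)        # perturbation phase
            x_local = local_search(x_shaken, f)   # descent phase
            f_local = f(x_local)
            if f_local < fx:                      # improved: recenter and
                x, fx, k = x_local, f_local, 0    # restart from the
            else:                                 # smallest neighbourhood
                k += 1
    return x, fx
```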


Journal ArticleDOI
TL;DR: The main advances regarding the use of the Choquet and Sugeno integrals in multi-criteria decision aid over the last decade are reviewed, which concern mainly a bipolar extension of both the Choquet integral and the Sugeno integral.
Abstract: The main advances regarding the use of the Choquet and Sugeno integrals in multi-criteria decision aid over the last decade are reviewed. They concern mainly a bipolar extension of both the Choquet integral and the Sugeno integral, interesting particular submodels, new learning techniques, a better interpretation of the models and a better use of the Choquet integral in multi-criteria decision aid. Parallel to these theoretical works, the Choquet integral has been applied to many new fields, and several softwares and libraries dedicated to this model have been developed.

439 citations
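
For readers new to the model: the discrete Choquet integral aggregates criterion scores with respect to a capacity mu defined on coalitions of criteria, via C_mu(x) = sum_i (x_(i) - x_(i-1)) * mu(A_(i)). A small self-contained example follows; the two-criterion capacity is invented to show positive interaction.

```python
# Standard discrete Choquet integral (textbook formula, not the paper's
# code): the capacity `mu` maps frozensets of criteria to [0, 1].
def choquet(x, mu):
    # sort criteria by increasing score; A_i is the coalition of criteria
    # whose score is at least the i-th smallest one
    order = sorted(range(len(x)), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        a_i = frozenset(order[pos:])          # criteria still "above"
        total += (x[i] - prev) * mu[a_i]
        prev = x[i]
    return total

# toy capacity on two criteria expressing positive interaction:
mu = {frozenset(): 0.0, frozenset({0}): 0.3,
      frozenset({1}): 0.3, frozenset({0, 1}): 1.0}
print(choquet([0.4, 0.9], mu))  # 0.4*1.0 + 0.5*0.3 = 0.55
```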


Journal ArticleDOI
TL;DR: The purpose of this paper is to introduce the area of Green Logistics and to describe some of the problems that arise in this subject which can be formulated as combinatorial optimization problems.
Abstract: The purpose of this paper is to introduce the area of Green Logistics and to describe some of the problems that arise in this subject which can be formulated as combinatorial optimization problems. The paper particularly considers the topics of reverse logistics, waste management and vehicle routing and scheduling.

266 citations


Journal ArticleDOI
TL;DR: This paper presents a systematic investigation on model building of DEA without transforming undesirable data, and describes the disposability assumptions and a number of different performance measures in the presence of undesirable inputs and outputs.
Abstract: Data Envelopment Analysis (DEA) models with undesirable inputs and outputs have been frequently discussed in the DEA literature, e.g., via data transformation. These studies were scattered in the literature, and often confined to particular applications. In this paper we present a systematic investigation on model building of DEA without transforming undesirable data. We first describe the disposability assumptions and a number of different performance measures in the presence of undesirable inputs and outputs, and then discuss different combinations of the disposability assumptions and the metrics. This approach leads to a unified presentation of several classes of DEA models with undesirable inputs and/or outputs.

241 citations
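
To make one cell of that taxonomy concrete, the sketch below solves an input-oriented envelopment model in which the undesirable output must contract radially together with the inputs. This is just one of the disposability/metric combinations the paper classifies, coded with scipy and made-up data for three DMUs.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [4.0], [4.0]])   # inputs (rows = DMUs)
Y = np.array([[1.0], [2.0], [1.0]])   # desirable outputs
B = np.array([[1.0], [2.0], [2.0]])   # undesirable outputs

def efficiency(k):
    n = len(X)
    c = np.r_[1.0, np.zeros(n)]       # z = (theta, lambda); minimize theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):       # sum_j lam_j x_ij <= theta * x_ik
        A_ub.append(np.r_[-X[k, i], X[:, i]]); b_ub.append(0.0)
    for r in range(Y.shape[1]):       # sum_j lam_j y_rj >= y_rk
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[k, r])
    for t in range(B.shape[1]):       # bad output scales down like an input
        A_ub.append(np.r_[-B[k, t], B[:, t]]); b_ub.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub))
    return res.fun

for k in range(3):
    print(f"DMU {k}: efficiency = {efficiency(k):.3f}")  # DMU 2 -> 0.500
```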


Journal ArticleDOI
TL;DR: A survey of recent contributions to robust portfolio strategies from operations research and finance, covering the standard mean-variance objective, the mean-VaR and mean-CVaR risk measures, and optimal estimation methods and Bayesian robust approaches.
Abstract: In this paper we provide a survey of recent contributions from operations research and finance to the theory of robust portfolio selection. Our survey covers results derived not only in terms of the standard mean-variance objective, but also in terms of two of the most popular risk measures developed recently, mean-VaR and mean-CVaR. In addition, we review optimal estimation methods and Bayesian robust approaches.

227 citations
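
As background for the risk measures named above: mean-CVaR models penalize the average of the worst losses beyond the VaR quantile. Below is a plain sample estimator, a standard textbook construction rather than the paper's optimization model.

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)   # Value-at-Risk cutoff
    tail = losses[losses >= var]       # the worst (1 - alpha) share
    return var, tail.mean()            # (VaR, CVaR)

rng = np.random.default_rng(0)
v, c = var_cvar(rng.normal(0.0, 1.0, 100_000), alpha=0.95)
print(v, c)  # roughly 1.645 and 2.06 for standard normal losses
```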


Journal ArticleDOI
TL;DR: A common feature among the several models is that the efficiency evaluation of the DMU depends on the efficiency values of its subunits, thereby increasing the discrimination power of DEA methodology with respect to the black-box approach.
Abstract: We classify the contributions of the DEA literature assessing Decision Making Units (DMUs) whose internal structure is known. Starting from an elementary framework, we define the main research areas as shared flow, multilevel and network models, depending on the assumptions they are subject to. For each model category, the principal mathematical formulations are introduced along with their main variants, extensions and applications. We also discuss the results of aggregating efficiency measures and of considering DMUs as subject to a central authority that imposes constraints or targets on them. A common feature among the several models is that the efficiency evaluation of the DMU depends on the efficiency values of its subunits, thereby increasing the discrimination power of DEA methodology with respect to the black-box approach.

193 citations


Journal ArticleDOI
TL;DR: This paper identifies properties of an optimal schedule with heterogeneous patients, proposes a local search algorithm to find locally optimal schedules, and performs a set of numerical experiments to provide managerial insights for health care practitioners.
Abstract: Clinical overbooking is intended to reduce the negative impact of patient no-shows on clinic operations and performance. In this paper, we study the clinical scheduling problem with overbooking for heterogeneous patients, i.e. patients who have different no-show probabilities. We consider the objective of maximizing expected profit, which includes revenue from patients and costs associated with patient waiting times and physician overtime. We show that the objective function with homogeneous patients, i.e. patients with the same no-show probability, is multimodular. We also show that this property does not hold when patients are heterogeneous. We identify properties of an optimal schedule with heterogeneous patients and propose a local search algorithm to find locally optimal schedules. Then, we extend our results to sequential scheduling and propose two sequential scheduling procedures. Finally, we perform a set of numerical experiments and provide managerial insights for health care practitioners.

178 citations
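
The objective such a local search climbs on can be estimated by straightforward simulation. Here is a toy version with one server, unit-length slots, and invented cost parameters; the paper's analytical treatment is of course richer.

```python
import random

def expected_profit(slot_of_patient, show_prob, n_slots,
                    revenue=100.0, wait_cost=2.0, ot_cost=5.0, reps=10_000):
    total = 0.0
    for _ in range(reps):
        arrivals = [0] * n_slots
        for p, s in enumerate(slot_of_patient):   # heterogeneous no-shows
            if random.random() < show_prob[p]:
                arrivals[s] += 1
        profit, queue = 0.0, 0
        for s in range(n_slots):
            queue += arrivals[s]
            profit += revenue * arrivals[s]
            queue = max(queue - 1, 0)             # serve one patient per slot
            profit -= wait_cost * queue           # waiting cost for backlog
        profit -= ot_cost * queue                 # leftover queue = overtime
        total += profit
    return total / reps

# two patients overbooked into slot 0, one scheduled in slot 2:
print(expected_profit([0, 0, 2], [0.9, 0.5, 0.7], n_slots=3))
```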


Journal ArticleDOI
TL;DR: This paper provides a review of the most recent developments that had a major impact on the current state of the art of exact algorithms for vehicle routing problems under capacity constraints, with a focus on the basic Capacitated Vehicle Routing Problem (CVRP) and on heterogeneous vehicle routing problems.
Abstract: The solution of a vehicle routing problem calls for the determination of a set of routes, each performed by a single vehicle which starts and ends at its own depot, such that all the requirements of the customers are fulfilled and the global transportation cost is minimized. The routes have to satisfy several operational constraints which depend on the nature of the transported goods, on the quality of the service level, and on the characteristics of the customers and of the vehicles. One of the most common operational constraints addressed in the scientific literature is that the vehicle fleet is capacitated and the total load transported by a vehicle cannot exceed its capacity.

156 citations


Journal ArticleDOI
TL;DR: This paper develops a decision support system, based on the Erlang loss model, which can be used to evaluate the current size of nursing units, and validates the model with hospital data over the years 2004–2006.
Abstract: How many beds must be allocated to a specific clinical ward to meet production targets? When budgets get tight, what are the effects of downsizing a nursing unit? These questions are often discussed by medical professionals, hospital consultants, and managers. In these discussions the occupancy rate is of great importance and often used as an input parameter. Most hospitals use the same target occupancy rate for all wards, often 85%. Sometimes an exception is made for critical care and intensive care units. In this paper we demonstrate that this equity assumption is unrealistic and that it might result in an excessive number of refused admissions, particularly for smaller units. Queuing theory is used to quantify this impact. We developed a decision support system, based on the Erlang loss model, which can be used to evaluate the current size of nursing units. We validated this model with hospital data over the years 2004–2006. Finally, we demonstrate the efficiency of merging departments.

143 citations
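
The engine of such a decision support tool is the Erlang B (loss) formula, and its standard stable recursion makes the paper's point about scale in a few lines: at the same 85% nominal occupancy, the small ward refuses far more admissions than the large one.

```python
# Erlang B via the recursion B(0) = 1, B(c) = a*B(c-1) / (c + a*B(c-1)),
# where a = lambda/mu is the offered load.
def erlang_b(beds, offered_load):
    b = 1.0
    for c in range(1, beds + 1):
        b = offered_load * b / (c + offered_load * b)
    return b  # fraction of arrivals refused admission

for beds in (10, 50):
    load = 0.85 * beds               # 85% target occupancy, as in the text
    print(beds, "beds:", round(erlang_b(beds, load), 3))
```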


Journal ArticleDOI
TL;DR: A variation of the p-center problem is developed under the additional assumption that the facility at a node fails to respond to demands from that node, together with an efficient algorithm for optimal locations on a general network.
Abstract: In the p-center problem, it is assumed that the facility located at a node responds to demands originating from the node. This assumption is suitable for emergency and health care services. However, it is not valid for large-scale emergencies, where most facilities in a whole city may become functionless. Consequently, residents in some areas cannot rely on their nearest facilities. These observations lead to the development of a variation of the p-center problem with the additional assumption that the facility at a node fails to respond to demands from the node. We use a dynamic programming approach for locating the facilities on a path network and further develop an efficient algorithm for optimal locations on a general network.

Journal ArticleDOI
TL;DR: The paper shows that the market model is non-linear in general and that the sensitivity of asset returns to return on the market portfolio is not the same as the conventional beta, although this measure does arise in special cases.
Abstract: The returns on most financial assets exhibit kurtosis and many also have probability distributions that possess skewness as well. In this paper a general multivariate model for the probability distribution of asset returns, which incorporates both kurtosis and skewness, is described. It is based on the multivariate extended skew-Student-t distribution. Salient features of the distribution are described and these are applied to the task of asset pricing. The paper shows that the market model is non-linear in general and that the sensitivity of asset returns to return on the market portfolio is not the same as the conventional beta, although this measure does arise in special cases. It is shown that the variance of asset returns is time varying and depends on the squared deviation of market portfolio return from its location parameter. The first-order conditions for portfolio selection are described. Expected utility maximisers will select portfolios from an efficient surface, which is an analogue of the familiar mean-variance frontier, and which may be implemented using quadratic programming.

Journal ArticleDOI
TL;DR: In this paper, the use of optimization models, methods, and theories in intensity modulated radiotherapy treatment design are surveyed.
Abstract: The design of an intensity modulated radiotherapy treatment includes the selection of beam angles (geometry problem), the computation of an intensity map for each selected beam angle (intensity problem), and finding a sequence of configurations of a multileaf collimator to deliver the treatment (realization problem). Until the end of the last century research on radiotherapy treatment design has been published almost exclusively in the medical physics literature. However, since then, the attention of researchers in mathematical optimization has been drawn to the area and important progress has been made. In this paper we survey the use of optimization models, methods, and theories in intensity modulated radiotherapy treatment design.
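
For the intensity problem specifically, a first mental model is a nonnegative least-squares fit of beamlet intensities to a prescribed dose. The 3x3 dose-influence matrix below is invented for illustration; clinical models add dose-volume constraints and far larger geometry.

```python
import numpy as np
from scipy.optimize import nnls

D = np.array([[1.0, 0.2, 0.0],     # rows: voxels, cols: beamlets
              [0.3, 1.0, 0.3],     # entry = dose per unit beamlet intensity
              [0.0, 0.2, 1.0]])
d = np.array([2.0, 1.0, 0.0])      # prescribed dose (0 = healthy tissue)

x, residual = nnls(D, d)           # nonnegative beamlet intensities
print(x, residual)
```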

Journal ArticleDOI
TL;DR: An outlier detection procedure is suggested which applies a nonparametric model accounting for undesired outputs and exogenous influences in the sample, exploiting the leverage, the peer count, the super-efficiency and order-m methods, and the peer index.
Abstract: This paper suggests an outlier detection procedure which applies a nonparametric model accounting for undesired outputs and exogenous influences in the sample. Although efficiency is estimated in a deterministic frontier approach, each potential outlier is initially given the benefit of the doubt of not being an outlier. We survey several outlier detection procedures and select five complementary methodologies which, taken together, are able to detect all influential observations. To exploit the singularity of the leverage and the peer count, the super-efficiency and the order-m method, and the peer index, it is proposed to select as outliers those observations which are simultaneously revealed as atypical by at least two of the procedures. A simulated example demonstrates the usefulness of this approach. The model is applied to the Portuguese drinking water sector, for which we have an unusually rich data set.

Journal ArticleDOI
TL;DR: The combined hubbing and routing problem in postal delivery systems is considered and an iterative two-stage solution procedure is developed, which updates the distances used in hub location in order to produce a route-compatible hub configuration.
Abstract: We consider the combined hubbing and routing problem in postal delivery systems and develop an iterative two-stage solution procedure for the problem. In the first stage, hub locations are determined and postal offices are multiply allocated to the hubs. The second stage gives the routes in hub regions that alter the distances between points used in the hub-location problem. The procedure then iterates between two stages by updating the distances used in hubbing in order to produce a route-compatible hub configuration. Computational experience is reported for the test problems taken from the literature. For a case study, Turkish postal delivery system data are utilized. As the case study is applied on a road network, a final stage, seeking improvements based on special structures in the routed network, is appended to the two-stage solution procedure.

Journal ArticleDOI
TL;DR: The aim of this paper is to provide an asymptotic analysis of the conditional FDH and conditional DEA estimators, which have been applied in the literature without any theoretical background about their statistical properties.
Abstract: Cazals et al. (J. Econom. 106: 1-25, 2002), Daraio and Simar (J. Prod. Anal. 24: 93-121, 2005; Advanced Robust and Nonparametric Methods in Efficiency Analysis, 2007a; J. Prod. Anal. 28: 13-32, 2007b) developed a conditional frontier model which incorporates the environmental factors into measuring the efficiency of a production process in a fully nonparametric setup. They also provided the corresponding nonparametric efficiency measures: conditional FDH estimator, conditional DEA estimator. The two estimators have been applied in the literature without any theoretical background about their statistical properties. The aim of this paper is to provide an asymptotic analysis (i.e. asymptotic consistency and limit sampling distribution) of the conditional FDH and conditional DEA estimators.
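
For orientation, the estimators under study are simple to state: input-oriented FDH takes the best radial contraction over observations dominating the evaluated output level, and the conditional version additionally keeps only observations whose environmental variable lies in a bandwidth window. A sketch with a uniform kernel follows; the data and bandwidth are invented.

```python
import numpy as np

def fdh_input_efficiency(X, Y, x0, y0, Z=None, z0=None, h=None):
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    mask = (Y >= y0).all(axis=1)          # observations producing >= y0
    if Z is not None:                     # conditional restriction on Z
        mask &= np.abs(np.ravel(Z) - z0) <= h
    if not mask.any():
        return np.inf                     # no comparable observation
    ratios = (X[mask] / x0).max(axis=1)   # radial contraction factors
    return ratios.min()                   # < 1 means x0 is dominated

X = [[4.0], [2.0], [3.0]]; Y = [[1.0], [1.2], [0.9]]; Z = [0.1, 0.9, 0.2]
print(fdh_input_efficiency(X, Y, x0=np.array([4.0]), y0=np.array([0.9])))
print(fdh_input_efficiency(X, Y, x0=np.array([4.0]), y0=np.array([0.9]),
                           Z=Z, z0=0.15, h=0.2))  # unit 2 now excluded
```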

Journal ArticleDOI
TL;DR: A 0–1 linear programming model and a heuristic algorithm are developed to solve the so-called Master Surgical Schedule Problem (MSSP); each patient is assigned to a subset of days depending on his/her expected length of stay, which allows closing some stay areas during the weekend and hence reduces the overall hospitalisation cost of the department.
Abstract: In this paper a 0–1 linear programming model and a heuristic solution algorithm are developed to solve the so-called Master Surgical Schedule Problem (MSSP). Given a hospital department made up of different surgical units (i.e. wards) sharing a given number of Operating Rooms (ORs), the problem addressed here is determining the assignment between wards and ORs during a given planning horizon, together with the subset of patients to be operated on during each day. Different resource constraints are considered, related to operating block time length, maximum OR overtime allowed by collective labour agreements and legislation, patient length of stay (LOS), available OR equipment, number of surgeons, and number of stay and ICU beds. First, a 0–1 linear programming model is developed, intended to minimise a cost function based upon a priority score which takes into proper account both the waiting time and the urgency status of each patient. Then, a heuristic algorithm is presented that enables us to embody some pre-assignment rules to solve this NP-hard combinatorial optimisation problem. In particular, we force the assignment of each patient to a subset of days depending on his/her expected length of stay, in order to allow closing some stay areas during the weekend and hence reduce the overall hospitalisation cost of the department. The results of an extensive computational experimentation, aimed at showing the algorithm's efficiency in terms of computational time and solution effectiveness, are given and analysed.
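
A deliberately tiny 0–1 core of an MSS-type model, covering only the ward-to-OR-day assignment with a stand-in objective, can be written with PuLP as below. All data and the early-week preference are invented; the paper's full model adds LOS, bed, overtime and patient-selection constraints.

```python
import pulp

wards, days, ors = ["W1", "W2"], list(range(5)), ["OR1"]
demand = {"W1": 2, "W2": 1}                     # OR sessions needed per week

m = pulp.LpProblem("mini_MSS", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (wards, days, ors), cat="Binary")

# stand-in objective: prefer early-week sessions (proxy for a priority score)
m += pulp.lpSum((d + 1) * x[w][d][o]
                for w in wards for d in days for o in ors)
# each OR-day hosts at most one ward
for d in days:
    for o in ors:
        m += pulp.lpSum(x[w][d][o] for w in wards) <= 1
# each ward receives its weekly number of sessions
for w in wards:
    m += pulp.lpSum(x[w][d][o] for d in days for o in ors) == demand[w]

m.solve(pulp.PULP_CBC_CMD(msg=False))
for w in wards:
    print(w, [d for d in days for o in ors if x[w][d][o].value() == 1])
```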

Journal ArticleDOI
TL;DR: A particular form of convex risk measure, which includes the entropic risk measure as a particular case, is adopted as the measure of risk in a Markovian regime-switching financial model modulated by a continuous-time, observable and finite-state Markov chain whose states represent different market regimes.
Abstract: We consider a risk minimization problem in a continuous-time Markovian regime-switching financial model modulated by a continuous-time, observable and finite-state Markov chain whose states represent different market regimes. We adopt a particular form of convex risk measure, which includes the entropic risk measure as a particular case, as a measure of risk. The risk-minimization problem is formulated as a Markovian regime-switching version of a two-player, zero-sum stochastic differential game. One important feature of our model is to allow the flexibility of controlling both the diffusion process representing the financial risk and the Markov chain representing macro-economic risk. This is novel and interesting from both the perspectives of stochastic differential game and stochastic control. A verification theorem for the Hamilton-Jacobi-Bellman (HJB) solution of the game is provided and some particular cases are discussed.

Journal ArticleDOI
TL;DR: The resulting hybrid GRASP/VND algorithm is simple and quite fast, and the extensive computational results on test instances from the literature show that the quality of the solutions is equal to or better than that obtained by the best existing heuristic procedures.
Abstract: The three-dimensional bin packing problem consists of packing a set of boxes into the minimum number of bins. In this paper we propose a new GRASP algorithm for solving three-dimensional bin packing problems which can also be directly applied to the two-dimensional case. The constructive phase is based on a maximal-space heuristic developed for the container loading problem. In the improvement phase, several new moves are designed and combined in a VND structure. The resulting hybrid GRASP/VND algorithm is simple and quite fast, and the extensive computational results on test instances from the literature show that the quality of the solutions is equal to or better than that obtained by the best existing heuristic procedures.
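
The overall control flow described above fits in a few lines. In the sketch below, the problem-specific pieces (the randomized maximal-space constructor and the neighbourhood moves) are abstract callables, not the paper's implementation.

```python
def grasp_vnd(randomized_construct, neighborhoods, cost, iters=100):
    best, best_cost = None, float("inf")
    for _ in range(iters):
        s = randomized_construct()          # greedy-randomized construction
        c, k = cost(s), 0
        while k < len(neighborhoods):       # VND improvement phase
            s2 = neighborhoods[k](s)        # best move in structure k
            c2 = cost(s2)
            if c2 < c:
                s, c, k = s2, c2, 0         # improvement: restart VND
            else:
                k += 1
        if c < best_cost:
            best, best_cost = s, c
    return best, best_cost
```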

Journal ArticleDOI
TL;DR: This article presents a GA-based method with a variable-length representation that produces general hyper-heuristics for solving two-dimensional regular (rectangular) and irregular (convex polygonal) bin-packing problems.
Abstract: The idea behind hyper-heuristics is to discover some combination of straightforward heuristics to solve a wide range of problems. To be worthwhile, such a combination should outperform the single heuristics. This article presents a GA-based method that produces general hyper-heuristics that solve two-dimensional regular (rectangular) and irregular (convex polygonal) bin-packing problems. A hyper-heuristic is used to define a high-level heuristic that controls low-level heuristics. The hyper-heuristic should decide when and where to apply each single low-level heuristic, depending on the given problem state. In this investigation two kinds of heuristics were considered: for selecting the figures (pieces) and objects (bins), and for placing the figures into the objects. Some of the heuristics were taken from the literature, others were adapted, and some other variations developed by us. We chose the most representative heuristics of their type, considering their individual performance in various studies and also in an initial experimentation on a collection of benchmark problems. The GA included in the proposed model uses a variable-length representation, which evolves combinations of condition-action rules producing hyper-heuristics after going through a learning process which includes training and testing phases. Such hyper-heuristics, when tested with a large set of benchmark problems, produce outstanding results for most of the cases. The testbed is composed of problems used in other similar studies in the literature. Some additional instances for the testbed were randomly generated.

Journal ArticleDOI
TL;DR: It is shown how the linear ordering problem can be used to model an aggregation problem consisting of going from individual preferences defined on a set of candidates to a collective ranking of these candidates.
Abstract: In this paper, we survey some results, conjectures and open problems dealing with the combinatorial and algorithmic aspects of the linear ordering problem. This problem consists in finding a linear order which is at minimum distance from a (weighted or not) tournament. We show how it can be used to model an aggregation problem consisting of going from individual preferences defined on a set of candidates to a collective ranking of these candidates.
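
The aggregation reading is easy to demonstrate by brute force on a tiny profile: find the order of candidates minimizing the total weight of reversed pairwise preferences. The 3-candidate weight matrix below is invented; realistic instances need the combinatorial machinery the survey discusses.

```python
from itertools import permutations

# w[a][b] = number of voters preferring candidate a to candidate b
w = [[0, 4, 2],
     [1, 0, 5],
     [3, 0, 0]]

def disagreement(order):
    # total weight of preferences b > a that placing a before b reverses
    return sum(w[b][a]
               for i, a in enumerate(order)
               for b in order[i + 1:])

best = min(permutations(range(3)), key=disagreement)
print(best, disagreement(best))   # (0, 1, 2) with disagreement 4
```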

Journal ArticleDOI
TL;DR: A multi-objective evolutionary algorithm that incorporates the concept of Pareto optimality is proposed for solving the multi- objective BAP by optimizing the berth schedule so as to minimize concurrently the three objectives of makespan, waiting time, and degree of deviation from a predetermined priority schedule.
Abstract: This paper considers a berth allocation problem (BAP) which requires the determination of exact berthing times and positions of incoming ships in a container port. The problem is solved by optimizing the berth schedule so as to minimize concurrently the three objectives of makespan, waiting time, and degree of deviation from a predetermined priority schedule. These objectives represent the interests of both port and ship operators. Unlike most existing approaches in the literature which are single-objective-based, a multi-objective evolutionary algorithm (MOEA) that incorporates the concept of Pareto optimality is proposed for solving the multi-objective BAP. The MOEA is equipped with three primary features which are specifically designed to target the optimization of the three objectives. The features include a local search heuristic, a hybrid solution decoding scheme, and an optimal berth insertion procedure. The effects that each of these features has on the quality of berth schedules are studied.
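
The Pareto-optimality notion the MOEA builds on reduces to a dominance test over the three objective values. A minimal filter is shown below; the schedule objective vectors are invented.

```python
# a schedule is kept only if no other schedule is at least as good on all
# objectives (makespan, waiting time, deviation) and strictly better on one
def dominates(u, v):
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def non_dominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

sched = [(10, 5, 2), (12, 3, 2), (11, 6, 3), (10, 5, 1)]
print(non_dominated(sched))   # [(12, 3, 2), (10, 5, 1)]
```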

Journal ArticleDOI
TL;DR: A Memetic Algorithm based on the NSGA-II (Non-Dominated Sorting Genetic Algorithm II) and acting on two chromosomes is introduced, which adds a local search procedure (Simulated Annealing) to the genetic stage.
Abstract: The Flexible Job-Shop Scheduling Problem is concerned with the determination of a sequence of jobs, consisting of many operations, on different machines, satisfying several parallel goals. We introduce a Memetic Algorithm, based on the NSGA-II (Non-Dominated Sorting Genetic Algorithm II) and acting on two chromosomes, to solve this problem. The algorithm adds a local search procedure (Simulated Annealing) to the genetic stage. We have assessed its efficiency by running the algorithm on multiple-objective instances of the problem. The statistics we draw from those runs indicate that this Memetic Algorithm yields good, low-cost solutions.

Journal ArticleDOI
TL;DR: A polynomial-time algorithm for obtaining the best possible partition of an arbitrary graph into supernodes is given, which makes it possible to use any formulation of vertex multicolouring to encode vertex colouring.
Abstract: For formulating many timetabling problems in mathematical programming, the choice of formulation of the component given by graph vertex colouring is important. In this paper we survey known formulations of vertex colouring and present a new formulation based on "supernodes". In the definition of George and McIntyre (SIAM J. Numer. Anal. 15(1):90-112, 1978), a "supernode" is a complete subgraph S in which every two vertices have the same neighbourhood outside S. A partition into the smallest possible number of supernodes is easy to obtain. This allows us to formulate graph colouring as a multicolouring of this partition. Positive results of experiments with a commonly used collection of graphs (DIMACS) and with instances of a timetabling problem (Udine Course Timetabling) are discussed.
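
The supernode partition itself comes down to a simple grouping: two vertices belong to a common supernode exactly when their closed neighbourhoods coincide (they are adjacent and see the same vertices elsewhere). A sketch on an invented 4-vertex graph:

```python
from collections import defaultdict

def supernode_partition(adj):
    groups = defaultdict(list)
    for v, nbrs in adj.items():
        groups[frozenset(nbrs) | {v}].append(v)  # key = closed neighbourhood
    return list(groups.values())

# vertices 0 and 1 are adjacent twins, so {0, 1} forms a supernode
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1}, 3: {0, 1}}
print(supernode_partition(adj))   # [[0, 1], [2], [3]]
```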

Journal ArticleDOI
TL;DR: A computational comparison on the relative strength of the functions presented in this paper for deriving lower bounds for the bin-packing problem and valid cutting planes for the pattern minimization problem is performed.
Abstract: Dual-feasible functions are valuable tools that can be used to compute both lower bounds for different combinatorial problems and valid inequalities for integer programs. Several families of functions have been used in the literature. Some of them were defined explicitly, and others not. One of the main objectives of this paper is to survey these functions, and to state results concerning their quality. We clearly identify dominant subsets of functions, i.e. those which may lead to better bounds or stronger cuts. We also describe different frameworks that can be used to create dual-feasible functions. With these frameworks, one can get a dominant function based on other ones. Two new families of dual-feasible functions obtained by applying these methods are proposed in this paper. We also performed a computational comparison on the relative strength of the functions presented in this paper for deriving lower bounds for the bin-packing problem and valid cutting planes for the pattern minimization problem. Extensive experiments on instances generated using methods described in the literature are reported. In many cases, the lower bounds are improved, and the linear relaxations are strengthened.
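
The defining property — if item sizes x_i sum to at most 1, then so do the values f(x_i) — is what makes ceil(sum_i f(w_i/C)) a valid bin-packing lower bound for any dual-feasible f. Two classical examples follow; the instance is invented.

```python
import math

def f_identity(x):
    return x

def f_half(x):               # classical step DFF: valid because two items
    if x > 0.5: return 1.0   # larger than half a bin cannot share a bin
    if x == 0.5: return 0.5
    return 0.0

def dff_lower_bound(weights, capacity, dffs=(f_identity, f_half)):
    xs = [w / capacity for w in weights]
    return max(math.ceil(round(sum(f(x) for x in xs), 9)) for f in dffs)

print(dff_lower_bound([6, 6, 6], 10))  # 3: f_half beats the area bound of 2
```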

Journal ArticleDOI
TL;DR: The article reviews the concept of and further develops phi-functions (Φ-functions) as an efficient tool for mathematical modeling of two-dimensional geometric optimization problems, such as cutting and packing problems and covering problems.
Abstract: The article reviews the concept of and further develops phi-functions (Φ-functions) as an efficient tool for mathematical modeling of two-dimensional geometric optimization problems, such as cutting and packing problems and covering problems. The properties of the phi-function technique and its relationship with Minkowski sums and the no-fit polygon are discussed. We also describe the advantages of phi-functions over these approaches. A clear definition of the set of objects for which phi-functions may be derived is given, and some exceptions are illustrated. A step-by-step procedure for deriving phi-functions, illustrated with examples, is provided, including the case of continuous rotation.
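
The canonical first example of a phi-function is the one for two circles: positive when the objects are separated, zero when their boundaries touch, and negative when they overlap — precisely the sign behaviour the modelling technique relies on.

```python
def phi_circles(x1, y1, r1, x2, y2, r2):
    # squared centre distance minus squared sum of radii
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 - (r1 + r2) ** 2

print(phi_circles(0, 0, 1, 3, 0, 1))   #  5 > 0: disjoint
print(phi_circles(0, 0, 1, 2, 0, 1))   #  0    : touching
print(phi_circles(0, 0, 1, 1, 0, 1))   # -3 < 0: overlapping
```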

Journal ArticleDOI
TL;DR: The rough set approach gives a representation of decision maker’s time-dependent preferences under uncertainty in terms of “if…, then…” decision rules induced from rough approximations of sets of exemplary decisions.
Abstract: We consider a problem of decision under uncertainty with outcomes distributed over time. We propose a rough set model based on a combination of time dominance and stochastic dominance. For the sake of simplicity we consider the case of traditional additive probability distribution over the set of states of the world, however, we show that the model is rich enough to handle non-additive probability distributions, and even qualitative ordinal distributions. The rough set approach gives a representation of decision maker’s time-dependent preferences under uncertainty in terms of “if…, then…” decision rules induced from rough approximations of sets of exemplary decisions.
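
One of the ingredients being combined, first-order stochastic dominance, has a compact computational test: X dominates Y when X's cumulative distribution never exceeds Y's over outcomes sorted in increasing order. A sketch with invented lotteries:

```python
def fsd_dominates(p_x, p_y):
    # p_x, p_y: probabilities over the same outcomes, sorted ascending
    cdf_x = cdf_y = 0.0
    for px, py in zip(p_x, p_y):
        cdf_x += px
        cdf_y += py
        if cdf_x > cdf_y + 1e-12:   # X's CDF above Y's: no dominance
            return False
    return True

print(fsd_dominates([0.1, 0.3, 0.6], [0.2, 0.4, 0.4]))  # True
print(fsd_dominates([0.2, 0.4, 0.4], [0.1, 0.3, 0.6]))  # False
```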

Journal ArticleDOI
TL;DR: Master surgical scheduling (MSS) is a promising approach for hospitals to optimize resource utilization and patient flows and its suitability for hospitals with different organizational foci and culture is discussed.
Abstract: Operating room (OR) planning and scheduling is a popular and challenging subject within the operational research applied to health services research (ORAHS). However, the impact in practice is very limited. The organization and culture of a hospital and the inherent characteristics of its processes impose specific implementation issues that affect the success of planning approaches. Current tactical OR planning approaches often fail to account for these issues. Master surgical scheduling (MSS) is a promising approach for hospitals to optimize resource utilization and patient flows. We discuss the pros and cons of MSS and compare MSS with centralized and decentralized planning approaches. Finally, we address various implementation issues of MSS and discuss its suitability for hospitals with different organizational foci and culture.

Journal ArticleDOI
TL;DR: The Real Options method is used to analyze the value of the flex fuel option; the option value is significant under either price process and twice as high as the flex premium charged by the car manufacturers, which helps explain the success this type of automobile has gained in Brazil since 2003.
Abstract: Renewable energy sources are becoming more important as the world's supply of fossil fuels decreases and also due to environmental concerns. Since 2003, when the ethanol-gasoline flex fuel car became commercially available in Brazil, the growth of this market has been significant, to the point where currently more than 50% of the fuel consumption of cars in Brazil is from renewable biofuels (ethanol). This has been made possible by the success of the flex fuel car, which can run on ethanol, gasoline, or any mix of these in the same fuel tank, and which is sold at a premium over the non-flex models. In return, flex fuel cars provide the owner with the flexibility to choose fuels at each refueling stop. Given the uncertainty in future prices of ethanol and gasoline, this option adds value to the owner, since he can always opt for the cheaper fuel whenever he fills up his car. We use the Real Options method to analyze the value of the flex fuel option, assuming both Geometric Brownian Motion and Mean Reverting diffusion processes for the prices of gasoline and ethanol, and compare the results arising from both methods. We conclude that the flex option value is significant under either method and twice as high as the flex premium charged by the car manufacturers, which helps explain the success this type of automobile has gained in Brazil since 2003. Our results also indicate that consumers should be willing to purchase flex fuel cars even if manufacturers increase the flex premium.
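
A stripped-down Monte Carlo version of the GBM case conveys the idea: at each refueling the owner pays the minimum of the gasoline price and the energy-equivalent ethanol price, and the option value is the discounted expected saving. All parameters below are illustrative assumptions, not the paper's calibration (which also treats mean reversion).

```python
import numpy as np

rng = np.random.default_rng(42)

def flex_option_value(p_gas, p_eth, mu, sigma_g, sigma_e, rho,
                      years=10, refuels_per_year=52, r=0.08, paths=20_000):
    dt = 1.0 / refuels_per_year
    n = years * refuels_per_year
    # correlated GBM log-increments for (gasoline, ethanol)
    cov = [[sigma_g**2, rho * sigma_g * sigma_e],
           [rho * sigma_g * sigma_e, sigma_e**2]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=(paths, n)) * np.sqrt(dt)
    drift = (mu - 0.5 * np.array([sigma_g**2, sigma_e**2])) * dt
    log_p = np.log([p_gas, p_eth]) + np.cumsum(drift + z, axis=1)
    gas, eth = np.exp(log_p[..., 0]), np.exp(log_p[..., 1])
    saving = np.maximum(gas - eth, 0.0)       # buy ethanol when cheaper
    t = np.arange(1, n + 1) * dt
    return (np.exp(-r * t) * saving).sum(axis=1).mean()

# cost of a tankful in gasoline vs energy-equivalent ethanol, per refuel:
print(flex_option_value(100.0, 95.0, 0.0, 0.2, 0.3, 0.5))
```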

Journal ArticleDOI
TL;DR: This work implements an efficient algorithm for Vehicle Routing Problems which combines non-monotonic Simulated Annealing with Hill-Climbing and Random Restart, and shows how statistical methods can be used to boost the performance of the method.
Abstract: Vehicle Routing Problems have been extensively analyzed to reduce transportation costs. More particularly, the Vehicle Routing Problem with Time Windows (VRPTW) imposes the period of customer availability as a constraint, a common characteristic in real-world situations. Using minimization of the total distance as the main objective, this work implements an efficient algorithm which combines non-monotonic Simulated Annealing with Hill-Climbing and Random Restart. The algorithm is compared to the best results published in the literature for the 56 Solomon instances, and it is shown how statistical methods can be used to boost the performance of the method.
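
The non-monotonic twist is the part worth seeing in code: instead of cooling monotonically, the temperature is periodically raised again, which helps the search escape deep local minima. Below is a generic skeleton; the `neighbour` and `cost` callables and all parameters are placeholders, and the paper's VRPTW-specific moves are not reproduced here.

```python
import math
import random

def non_monotonic_sa(s0, cost, neighbour, t0=100.0, alpha=0.98,
                     reheat_every=500, reheat_factor=5.0, iters=10_000):
    s, c = s0, cost(s0)
    best, best_c = s, c
    t = t0
    for i in range(1, iters + 1):
        s2 = neighbour(s)
        c2 = cost(s2)
        # accept improvements always, deteriorations with Boltzmann prob.
        if c2 < c or random.random() < math.exp(-(c2 - c) / t):
            s, c = s2, c2
        if c < best_c:
            best, best_c = s, c
        t *= alpha                          # cool...
        if i % reheat_every == 0:           # ...but periodically reheat:
            t = min(t0, t * reheat_factor)  # the non-monotonic step
    return best, best_c
```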