
Showing papers in "Journal of Heuristics in 2007"


Journal ArticleDOI
TL;DR: In this article, two variants of a generalized tabu search algorithm and a variable neighborhood search algorithm are proposed for the solution of the TOP, and each of these algorithms is shown to beat the previously known heuristics.
Abstract: The Team Orienteering Problem (TOP) is the generalization to the case of multiple tours of the Orienteering Problem, known also as Selective Traveling Salesman Problem. A set of potential customers is available and a profit is collected from the visit to each customer. A fleet of vehicles is available to visit the customers, within a given time limit. The profit of a customer can be collected by one vehicle at most. The objective is to identify the customers which maximize the total collected profit while satisfying the given time limit for each vehicle. We propose two variants of a generalized tabu search algorithm and a variable neighborhood search algorithm for the solution of the TOP and show that each of these algorithms beats the already known heuristics. Computational experiments are made on standard instances.

192 citations


Journal ArticleDOI
TL;DR: A family of local-search-based heuristics for Quadratic Unconstrained Binary Optimization (QUBO), all of which start with a (possibly fractional) initial point, sequentially improving its quality by rounding or switching the value of one variable, until arriving at a local optimum.
Abstract: We present a family of local-search-based heuristics for Quadratic Unconstrained Binary Optimization (QUBO), all of which start with a (possibly fractional) initial point, sequentially improving its quality by rounding or switching the value of one variable, until arriving at a local optimum. The effects of various parameters on the efficiency of these methods are analyzed through computational experiments carried out on thousands of randomly generated problems having 20 to 2500 variables. Tested on numerous benchmark problems, the performance of the most competitive variant (ACSIOM) was shown to compare favorably with that of other published procedures.
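The one-variable switching move described above can be sketched generically. The following is a minimal illustration of a greedy one-flip local search for QUBO (our own sketch with hypothetical names, not the paper's ACSIOM variant):

```python
def qubo_local_search(Q, x):
    """Greedy one-flip local search for maximising f(x) = x^T Q x, x binary.

    Q is a symmetric matrix (list of lists). Repeatedly switch the value of
    any single variable whose flip increases f, until no improving flip
    exists, i.e. until a local optimum is reached.
    """
    n = len(x)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            # Objective change from flipping x[i], using the symmetry of Q:
            # delta = (1 - 2*x_i) * (Q_ii + 2 * sum_{j != i} Q_ij x_j)
            delta = (1 - 2 * x[i]) * (
                Q[i][i] + 2 * sum(Q[i][j] * x[j] for j in range(n) if j != i)
            )
            if delta > 0:
                x[i] = 1 - x[i]
                improved = True
    return x
```

On termination no single flip improves the objective, which is exactly the local-optimality condition the abstract refers to.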

144 citations


Journal ArticleDOI
TL;DR: The first application of a metaheuristic technique to the very popular and NP-complete puzzle known as 'sudoku' is presented; this stochastic search-based algorithm is able to complete logic-solvable puzzle instances that feature daily in many of the UK's national newspapers.
Abstract: In this paper we present, to our knowledge, the first application of a metaheuristic technique to the very popular and NP-complete puzzle known as 'sudoku'. We see that this stochastic search-based algorithm, which uses simulated annealing, is able to complete logic-solvable puzzle-instances that feature daily in many of the UK's national newspapers. We also introduce a new method for producing sudoku problem instances (that are not necessarily logic-solvable) and use this together with the proposed SA algorithm to try and discover for what types of instances this algorithm is best suited. Consequently we notice the presence of an 'easy-hard-easy' style phase-transition similar to other problems encountered in operational research.
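The simulated-annealing machinery the paper relies on follows the standard Metropolis acceptance scheme. A minimal, problem-agnostic skeleton (our own sketch with hypothetical names; the paper's sudoku-specific cost and neighbourhood are not reproduced here) looks like:

```python
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=1.0, cooling=0.99, iters=10000):
    """Generic simulated-annealing skeleton.

    Worse neighbours are accepted with probability exp(-delta / t), which
    lets the search escape local minima; the temperature t is cooled
    geometrically. A cost of 0 is treated as "all constraints satisfied".
    """
    current, best = initial, initial
    c_cur = c_best = cost(initial)
    t = t0
    for _ in range(iters):
        cand = neighbour(current)
        c_cand = cost(cand)
        delta = c_cand - c_cur
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, c_cur = cand, c_cand
            if c_cur < c_best:
                best, c_best = current, c_cur
        t *= cooling                 # geometric cooling schedule
        if c_best == 0:              # e.g. a completed sudoku grid
            break
    return best
```

For sudoku, `cost` would typically count digit repetitions in rows and columns while `neighbour` swaps two non-fixed cells inside a 3×3 box, so a cost of zero corresponds to a completed grid.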

133 citations


Journal ArticleDOI
TL;DR: A mixed integer programming (MIP) model is developed, taking into account sequence-dependent setup costs and times, and then adapted for rolling horizon use, and a relax-and-fix solution heuristic is proposed and computationally tested against a high-performance MIP solver.
Abstract: A lot sizing and scheduling problem from a foundry is considered in which key materials are produced and then transformed into many products on a single machine. A mixed integer programming (MIP) model is developed, taking into account sequence-dependent setup costs and times, and then adapted for rolling horizon use. A relax-and-fix (RF) solution heuristic is proposed and computationally tested against a high-performance MIP solver. Three variants of local search are also developed to improve the RF method and tested. Finally the solutions are compared with those currently practiced at the foundry.

108 citations


Journal ArticleDOI
TL;DR: In this paper, a meta-heuristic procedure for the nurse scheduling problem based on the framework proposed by Birbil and Fang (J. Glob. Opt. 25, 263–282, 2003) is presented.
Abstract: In this paper, we present a novel meta-heuristic technique for the nurse scheduling problem (NSP). This well-known scheduling problem assigns nurses to shifts per day, maximizing the overall quality of the roster while taking various constraints into account. The problem is known to be NP-hard. Due to its complexity and relevance, many algorithms have been developed to solve practical and often case-specific models of the NSP. The huge variety of constraints and the several objective function possibilities have led to exact and meta-heuristic procedures in various guises, and hence comparison and state-of-the-art reporting of standard results seem to be a utopian idea. We present a meta-heuristic procedure for the NSP based on the framework proposed by Birbil and Fang (J. Glob. Opt. 25, 263–282, 2003). The electromagnetism (EM) approach is inspired by the attraction and repulsion of charged particles, and simulates attraction and repulsion of sample points in order to move towards a promising solution. Moreover, we present computational experiments on a standard benchmark dataset, and solve problem instances under different assumptions. We show that the proposed procedure performs consistently well under many different circumstances, and hence can be considered robust against case-specific constraints.

80 citations


Journal ArticleDOI
TL;DR: A family of tabu search solvers for the solution of the TTP is proposed that makes use of a complex combination of many neighborhood structures; the algorithm is shown to be competitive with those in the literature.
Abstract: The Traveling Tournament Problem (TTP) is a combinatorial problem that combines features from the traveling salesman problem and the tournament scheduling problem. We propose a family of tabu search solvers for the solution of the TTP that make use of a complex combination of many neighborhood structures. The different neighborhoods have been thoroughly analyzed and experimentally compared. We evaluate the solvers on three sets of publicly available benchmarks and we show a comparison of their outcomes with previous results presented in the literature. The results show that our algorithm is competitive with those in the literature.

79 citations


Journal ArticleDOI
TL;DR: Several extensions to the simple edit distance are introduced that can be used when a solution cannot be represented as a simple permutation, together with algorithms to calculate them efficiently.
Abstract: In this paper, we discuss distance measures for a number of different combinatorial optimization problems of which the solutions are best represented as permutations of items, sometimes composed of several permutation (sub)sets. The problems discussed include single-machine and multiple-machine scheduling problems, the traveling salesman problem, vehicle routing problems, and many others. Each of these problems requires a different distance measure that takes the specific properties of the representation into account. The distance measures discussed in this paper are based on a general distance measure for string comparison called the edit distance. We introduce several extensions to the simple edit distance that can be used when a solution cannot be represented as a simple permutation, and develop algorithms to calculate them efficiently.
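The base measure these extensions build on is the classic edit (Levenshtein) distance, computable by dynamic programming. A minimal sketch:

```python
def edit_distance(a, b):
    """Classic Levenshtein edit distance between two sequences.

    dp[i][j] holds the minimum number of insertions, deletions and
    substitutions needed to turn a[:i] into b[:j].
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # match / substitution
    return dp[m][n]
```

Because the code indexes its arguments generically, it applies to permutations as well as strings, e.g. comparing two single-machine schedules represented as job orders.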

70 citations


Journal ArticleDOI
TL;DR: Assessments with standard metrics on classical benchmarks demonstrate the importance of hybridization as well as the efficiency of the Target Aiming Pareto Search.
Abstract: In this paper, we present a solution method for a bi-objective vehicle routing problem, called the vehicle routing problem with route balancing (VRPRB), in which the total length and balance of the route lengths are the objectives under consideration. The method, called Target Aiming Pareto Search, is defined to hybridize a multi-objective genetic algorithm for the VRPRB using local searches. The method is based on repeated local searches with their own appropriate goals. We also propose an implementation of the Target Aiming Pareto Search using tabu searches, which are efficient meta-heuristics for the vehicle routing problem. Assessments with standard metrics on classical benchmarks demonstrate the importance of hybridization as well as the efficiency of the Target Aiming Pareto Search.

70 citations


Journal ArticleDOI
TL;DR: A heuristic for 0-1 mixed-integer linear programming problems, focusing on a "stand-alone" implementation, is built around concave "merit functions" measuring solution integrality and consists of four layers: gradient-based pivoting, probing pivoting, convexity/intersection cutting, and diving on blocks of variables.
Abstract: This paper describes a heuristic for 0-1 mixed-integer linear programming problems, focusing on "stand-alone" implementation. Our approach is built around concave "merit functions" measuring solution integrality, and consists of four layers: gradient-based pivoting, probing pivoting, convexity/intersection cutting, and diving on blocks of variables. The concavity of the merit function plays an important role in the first and third layers, as well as in connecting the four layers. We present both the mathematical and software details of a test implementation, along with computational results for several variants.

64 citations


Journal ArticleDOI
TL;DR: The results indicate that the use of diversity control with a correct parameter setting helps to prevent premature convergence in single-objective optimisation and promotes the emergence of multi-objective solutions that are close to the true Pareto optimal solutions while maintaining a uniform solution distribution along the Pareto front.
Abstract: This paper covers an investigation on the effects of diversity control in the search performances of single-objective and multi-objective genetic algorithms. The diversity control is achieved by means of eliminating duplicated individuals in the population and dictating the survival of non-elite individuals via either a deterministic or a stochastic selection scheme. In the case of single-objective genetic algorithm, onemax and royal road R1 functions are used during benchmarking. In contrast, various multi-objective benchmark problems with specific characteristics are utilised in the case of multi-objective genetic algorithm. The results indicate that the use of diversity control with a correct parameter setting helps to prevent premature convergence in single-objective optimisation. Furthermore, the use of diversity control also promotes the emergence of multi-objective solutions that are close to the true Pareto optimal solutions while maintaining a uniform solution distribution along the Pareto front.

48 citations


Journal ArticleDOI
TL;DR: A computational experience is reported validating the usefulness of the proposed approach, which combines Simulated Annealing with a Very Large-Scale Neighborhood search in which the neighborhood is explored by solving an Integer Programming problem.
Abstract: In this paper we report on a computational experience with a local search algorithm for High-school Timetabling Problems. The timetable has to satisfy "hard" requirements, that are mandatory, and should minimize the violation of "soft" constraints. In our approach, we combine Simulated Annealing with a Very Large-Scale Neighborhood search where the neighborhood is explored by solving an Integer Programming problem. We report on a computational experience validating the usefulness of the proposed approach.

Journal ArticleDOI
TL;DR: Different GRASP heuristics for the maximum diversity problem are proposed, using distinct construction procedures and including a path-relinking technique; a performance comparison with related work is provided.
Abstract: The maximum diversity problem (MDP) consists of identifying, in a population, a subset of elements, characterized by a set of attributes, that present the most diverse characteristics among the elements of the subset. The identification of such solution is an NP-hard problem. Some heuristics are available to obtain approximate solutions for this problem. In this paper, we propose different GRASP heuristics for the MDP, using distinct construction procedures and including a path-relinking technique. Performance comparison among related work and the proposed heuristics is provided. Experimental results show that the new GRASP heuristics are quite robust and are able to find high-quality solutions in reasonable computational times.
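A GRASP construction phase for the MDP can be sketched as repeated greedy-randomised selection from a restricted candidate list (RCL). The code below is a generic illustration under our own naming, not one of the paper's specific construction procedures:

```python
import random

def grasp_construct(dist, k, alpha=0.3, rng=random):
    """Greedy randomised construction for the maximum diversity problem.

    dist[i][j] is the pairwise diversity between elements i and j. We pick
    k elements; at each step a candidate is drawn at random from the RCL,
    the set of unselected elements whose diversity contribution to the
    partial solution is within alpha of the best contribution.
    """
    n = len(dist)
    selected = [rng.randrange(n)]          # seed with a random element
    while len(selected) < k:
        # Diversity each unselected candidate would add to the partial solution
        cand = [(sum(dist[c][s] for s in selected), c)
                for c in range(n) if c not in selected]
        best = max(g for g, _ in cand)
        worst = min(g for g, _ in cand)
        threshold = best - alpha * (best - worst)
        rcl = [c for g, c in cand if g >= threshold]
        selected.append(rng.choice(rcl))
    return selected
```

In a full GRASP, this construction would be run many times, each result improved by local search (and, as in the paper, linked to elite solutions via path-relinking), keeping the best solution found.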

Journal ArticleDOI
TL;DR: This work proposes a new strategy to improve the performance of this crossover operator by the creation of virtual parents obtained from the population parameters of localisation and dispersion of the best individuals.
Abstract: The crossover operator is the most innovative and relevant operator in real-coded genetic algorithms. In this work we propose a new strategy to improve the performance of this operator by the creation of virtual parents obtained from the population parameters of localisation and dispersion of the best individuals. The idea consists of mating these virtual parents with individuals of the population. In this way, the offspring are created in the most promising regions. This strategy has been incorporated into several crossover operators. After analysing the results we can conclude that this strategy significantly improves the performance of the algorithm in most problems analysed.

Journal ArticleDOI
TL;DR: This paper suggests several alternative automated heuristics, relying on terrain preprocessing to find educated potential points for positioning relay stations, and indicates that for small networks, the results obtained by human experts are adequate although they rarely exceed the quality of automated alternatives.
Abstract: This article addresses a real-life problem: obtaining communication links between multiple base station sites by positioning a minimal set of fixed-access relay antenna sites on a given terrain. Reducing the number of relay antenna sites is considered critical due to substantial installation and maintenance costs. Despite the significant cost saved by eliminating even a single antenna site, an inefficient manual approach is employed due to the computational complexity of the problem. From the theoretical point of view we show that this problem is not only NP-hard, but also does not admit a constant-factor approximation. In this paper we suggest several alternative automated heuristics, relying on terrain preprocessing to find educated potential points for positioning relay stations. A large-scale computer-based experiment consisting of approximately 7,000 different scenarios was conducted. The quality of alternative solutions was compared by isolating and displaying factors that were found to affect the standard deviation of the solutions supplied by the tested heuristics. The results of the simulation-based experiments show that the saving potential increases when more base stations need to be interconnected. The designs of a human expert were compared to the automatically generated solutions for a small subset of the experiment scenarios. Our studies indicate that for small networks (e.g., connecting up to ten base stations), the results obtained by human experts are adequate, although they rarely exceed the quality of the automated alternatives; however, obtaining these results takes longer than with the automated heuristics. In addition, when more base station sites need to be interconnected, the human approach is easily outperformed by our heuristics, both in terms of better results (fewer antennas) and in significantly shorter calculation times.

Journal ArticleDOI
TL;DR: This work proposes a Lagrangian decomposition (LD) heuristic that exploits the special structure of pharmaceutical firms' new product development scheduling problems, efficiently allocating their analytical, clinical testing and manufacturing resources across various drug development projects.
Abstract: To stay ahead of their competition, pharmaceutical firms must make effective use of their new product development (NPD) capabilities by efficiently allocating their analytical, clinical testing and manufacturing resources across various drug development projects. The resulting project scheduling problems involve coordinating hundreds of testing and manufacturing activities over a period of several quarters. Most conventional integer programming approaches are computationally impractical for problems of this size, while priority rule-driven heuristics seldom provide consistent solution quality. We propose a Lagrangian decomposition (LD) heuristic that exploits the special structure of these problems. Some resources (typically manpower) are shared across all on-going projects while others (typically equipment) are specific to individual project categories. Our objective function is a weighted discounted cost expressed in terms of activity completion times. The LD heuristics were subjected to a comprehensive experimental study based on typical operational instances. While the conventional "Reward–Risk" priority rule heuristic generates duality gaps of 47–58%, the best LD heuristic achieves duality gaps of 10–20%. The LD heuristics also yield makespan reductions of over 30% over the Reward–Risk priority rule.

Journal ArticleDOI
TL;DR: A genetic algorithm approach and a simulated annealing approach utilizing a greedy local search and three well-known properties in the area of common due date scheduling are developed; the algorithms allow the first job to start at a time other than zero.
Abstract: This study addresses a class of single-machine scheduling problems involving a common due date where the objective is to minimize the total job earliness and tardiness penalties. A genetic algorithm (GA) approach and a simulated annealing (SA) approach utilizing a greedy local search and three well-known properties in the area of common due date scheduling are developed. The developed algorithms allow the starting time of the first job to be other than zero and were tested using a set of benchmark problems. From the viewpoints of solution quality and computational expense, the proposed approaches are efficient and effective for problems involving different numbers of jobs, as well as different processing times and earliness and tardiness penalties.
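The objective being minimised can be stated compactly: with a common due date d, each job incurs a weighted earliness plus a weighted tardiness penalty at its completion time. A minimal evaluator (names are ours, for illustration) that also supports a non-zero start time:

```python
def total_penalty(sequence, p, d, alpha, beta, start=0):
    """Total weighted earliness/tardiness for a single-machine sequence
    with a common due date d.

    Jobs run back to back from `start`; job j completing at time C_j incurs
    alpha[j] * max(0, d - C_j) + beta[j] * max(0, C_j - d).
    """
    t, cost = start, 0
    for j in sequence:
        t += p[j]                    # completion time C_j of job j
        cost += alpha[j] * max(0, d - t) + beta[j] * max(0, t - d)
    return cost
```

Allowing `start > 0` matters because, with earliness-heavy weights, shifting the whole schedule towards the due date can reduce the total penalty, which is exactly why the algorithms above do not fix the first job's start at zero.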

Journal ArticleDOI
TL;DR: Markov decision models and learning methods from Artificial Intelligence are used to find decision policies under uncertainty; it is demonstrated that employing an RL-trained agent is a robust, flexible approach that can in addition support the detection of good heuristics.
Abstract: Order Acceptance (OA) is one of the main functions in business control. Accepting an order when capacity is available could disable the system to accept more profitable orders in the future with opportunity losses as a consequence. Uncertain information is also an important issue here. We use Markov decision models and learning methods from Artificial Intelligence to find decision policies under uncertainty. Reinforcement Learning (RL) is quite a new approach in OA. It is shown here that RL works well compared with heuristics. It is demonstrated that employing an RL trained agent is a robust, flexible approach that in addition can be used to support the detection of good heuristics.

Journal ArticleDOI
TL;DR: The minimisation of edge crossings in a book drawing of a graph is one of the important goals for a linear VLSI design, and the 2-page crossing number of a graph provides an upper bound for the standard planar crossing number.
Abstract: The minimisation of edge crossings in a book drawing of a graph is one of the important goals for a linear VLSI design, and the 2-page crossing number of a graph provides an upper bound for the standard planar crossing number. We design genetic algorithms for the 2-page drawings, and test them on the benchmark test suites, Rome graphs and Random Connected Graphs. We also test some circulant graphs, and get better results than previously presented in the literature. Moreover, we formalise three conjectures for certain kinds of circulant graphs, supported by our experimental results.
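The fitness such genetic algorithms optimise is the number of crossings in a 2-page book drawing: vertices are placed on a line (the spine) and each edge is assigned to one of two half-planes; two same-page edges cross exactly when their endpoints interleave along the spine. A minimal crossing counter (our own sketch, not the paper's implementation):

```python
def two_page_crossings(order, edges, page):
    """Count crossings in a 2-page book drawing.

    `order` is the spine order of the vertices, `edges` a list of vertex
    pairs, and `page[i]` in {0, 1} assigns edges[i] to a half-plane.
    Two edges on the same page cross iff their endpoints interleave
    along the spine (a < c < b < d for spine positions).
    """
    pos = {v: i for i, v in enumerate(order)}
    crossings = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if page[i] != page[j]:
                continue               # different half-planes never cross
            a, b = sorted((pos[edges[i][0]], pos[edges[i][1]]))
            c, d = sorted((pos[edges[j][0]], pos[edges[j][1]]))
            if a < c < b < d or c < a < d < b:
                crossings += 1
    return crossings
```

A genetic algorithm for this problem would evolve the pair (spine order, page assignment) with this count as the fitness to minimise.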

Journal ArticleDOI
TL;DR: The problem is formulated as an objective function with constraints and shown to be NP-complete by translation to a known problem, and exact and heuristic solution methods are introduced, discussed and compared and computational results given.
Abstract: This paper considers an optimisation problem encountered in the implementation of traffic policies on network routers, namely the ordering of rules in an access control list to minimise or reduce processing time and hence packet latency. The problem is formulated as an objective function with constraints and shown to be NP-complete by translation to a known problem. Exact and heuristic solution methods are introduced, discussed and compared and computational results given. The emphasis throughout is on practical implementation of the optimisation process, that is within the tight constraints of a production network router seeking to reduce latency, on-line, in real-time but without the overhead of significant extra computation.

Journal ArticleDOI
TL;DR: To reinforce the case on behalf of approaches that endorse infeasible/feasible search trajectories, it is possible to formulate simple theorems disclosing useful properties of such trajectories in the context of integer programming.
Abstract: The notion that strategies in non-linear and combinatorial optimization can benefit by purposefully and systematically navigating between feasible and infeasible space has been around for many years, but still is sometimes dismissed as having little relevance for creating more effective methods. To reinforce the case on behalf of approaches that endorse infeasible/feasible search trajectories, it is possible to formulate simple theorems disclosing useful properties of such trajectories in the context of integer programming. These results motivate a closer examination of integer programming search processes based on directional rounding processes, including a special variant called conditional directional rounding. From these foundations a variety of new strategies emerge for exploiting connections between feasible and infeasible space.


Journal ArticleDOI
TL;DR: In order to quantitatively analyze the selective pressure, the concept of selection degree and a simple linear control equation are introduced; the diversity of the evolutionary population can be maintained by controlling the value of the selection degree.
Abstract: Based on phenomena from human society and nature, we propose a binary affinity genetic algorithm (aGA) that adopts the following strategies: the population is adaptively updated to avoid stagnation; newly generated individuals are guaranteed to survive for some generations so that they have time to show their good genes; and new and old individuals are balanced to obtain the advantages of both. In order to quantitatively analyze the selective pressure, the concept of selection degree and a simple linear control equation are introduced. We can maintain the diversity of the evolutionary population by controlling the value of the selection degree. The performance of aGA is further enhanced by incorporating local search strategies.

Journal ArticleDOI
TL;DR: Experimental results carried out on real DNA data show the advantages of using the proposed algorithm, and statistical tests confirm the superiority of the proposed variant over the state-of-the-art heuristics.
Abstract: This paper presents a genetic algorithm for an important computational biology problem. The problem appears in the computational part of a new proposal for DNA sequencing denominated sequencing by hybridization. The general usage of this method for real sequencing purposes depends mainly on the development of good algorithmic procedures for solving its computational phase. The proposed genetic algorithm is a modified version of a previously proposed hybrid genetic algorithm for the same problem. It is compared with two well suited meta-heuristic approaches reported in the literature: the hybrid genetic algorithm, which is the origin of our proposed variant, and a tabu-scatter search algorithm. Experimental results carried out on real DNA data show the advantages of using the proposed algorithm. Furthermore, statistical tests confirm the superiority of the proposed variant over the state-of-the-art heuristics.

Journal ArticleDOI
TL;DR: The tomographic reconstruction method based on Monte Carlo random searching guided by the information contained in the projections of radiographed objects is presented, and a multiscale algorithm is proposed to reduce computation.
Abstract: A tomographic reconstruction method based on Monte Carlo random searching guided by the information contained in the projections of radiographed objects is presented. In order to solve the optimization problem, a multiscale algorithm is proposed to reduce computation. The reconstruction is performed in a coarse-to-fine multigrid scale that initializes each resolution level with the reconstruction of the previous coarser level, which substantially improves the performance. The method was applied to a real case reconstructing the internal structure of a small metallic object with internal components, showing excellent results.

Journal ArticleDOI
TL;DR: The island confinement method can be incorporated in, and significantly improve, the search performance of two successful local search procedures, DLM and ESG, on SAT problems arising from binary CSPs.
Abstract: Typically local search methods for solving constraint satisfaction problems such as GSAT, WalkSAT, DLM, and ESG treat the problem as an optimisation problem. Each constraint contributes part of a penalty function in assessing trial valuations. Local search examines the neighbours of the current valuation, using the penalty function to determine a "better" neighbour valuation to move to, until finally a solution which satisfies all the constraints is found. In this paper we investigate using some of the constraints as "hard" constraints, that are always satisfied by every trial valuation visited, rather than as part of a penalty function. In this way these constraints reduce the possible neighbours in each move and also the overall search space. The treating of some constraints as hard requires that the space of valuations that are satisfied is "connected" in order to guarantee that a solution can be found from any starting position within the region; thus the concept of islands and the name "island confinement method" arises. Treating some constraints as hard provides new difficulties for the search mechanism since the search space becomes more jagged, and there are more deep local minima. A new escape strategy is needed. To demonstrate the feasibility and generality of our approach, we show how the island confinement method can be incorporated in, and significantly improve, the search performance of two successful local search procedures, DLM and ESG, on SAT problems arising from binary CSPs.
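The penalty function such methods minimise is simply the number of unsatisfied constraints; for SAT this can be sketched as (our own minimal illustration):

```python
def penalty(clauses, assignment):
    """Number of unsatisfied clauses: the basic penalty function that
    GSAT/WalkSAT-style local search minimises (0 means a solution).

    Clauses are lists of signed literals: +v means variable v must be
    true, -v means variable v must be false; assignment maps v -> bool.
    A clause is satisfied when at least one of its literals holds.
    """
    unsat = 0
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            unsat += 1
    return unsat
```

The island confinement method described above would instead satisfy a chosen subset of clauses at every step (the "hard" ones), and apply a penalty like this only to the remainder, shrinking the neighbourhood and the search space.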

Journal ArticleDOI
TL;DR: The local value distribution is used to define a Markov model of the dynamics of a corresponding stochastic local search algorithm for k-sat; the model is evaluated by comparing the predicted algorithm dynamics to experimental results.
Abstract: A new analytical tool is presented to provide a better understanding of the search space of k-sat. This tool, termed the local value distribution, describes the probability of finding assignments of any value q′ in the neighbourhood of assignments of value q. The local value distribution is then used to define a Markov model to model the dynamics of a corresponding stochastic local search algorithm for k-sat. The model is evaluated by comparing the predicted algorithm dynamics to experimental results. In most cases the fit of the model to the experimental results is very good, but limitations are also recognised.