
Showing papers on "Heuristic (computer science)" published in 2001


Journal ArticleDOI
TL;DR: This paper surveys recent results in coverage path planning, a new path planning approach that determines a path for a robot to pass over all points in its free space, and organizes the coverage algorithms into heuristic, approximate, partial-approximate and exact cellular decompositions.
Abstract: This paper surveys recent results in coverage path planning, a new path planning approach that determines a path for a robot to pass over all points in its free space. Unlike conventional point-to-point path planning, coverage path planning enables applications such as robotic de-mining, snow removal, lawn mowing, car-body painting, machine milling, etc. This paper will focus on coverage path planning algorithms for mobile robots constrained to operate in the plane. These algorithms can be classified as either heuristic or complete. It is our conjecture that most complete algorithms use an exact cellular decomposition, either explicitly or implicitly, to achieve coverage. Therefore, this paper organizes the coverage algorithms into four categories: heuristic, approximate, partial-approximate and exact cellular decompositions. The final section describes some provably complete multi-robot coverage algorithms.

1,206 citations


Journal ArticleDOI
TL;DR: A unified tabu search heuristic is presented for the vehicle routing problem with time windows and for two important generalizations: the periodic and the multi-depot vehicle routing problems with time windows.
Abstract: This paper presents a unified tabu search heuristic for the vehicle routing problem with time windows and for two important generalizations: the periodic and the multi-depot vehicle routing problems with time windows. The major benefits of the approach are its speed, simplicity and flexibility. The performance of the heuristic is assessed by comparing it to alternative methods on benchmark instances of the vehicle routing problem with time windows. Computational experiments are also reported on new randomly generated instances for each of the two generalizations.
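
The abstract gives only a high-level description, but the overall shape of such a method is the classic tabu search loop. The following is a minimal, illustrative skeleton, not the authors' procedure: the neighborhood moves, cost function, and tabu tenure are placeholders to be supplied per problem.

```python
def tabu_search(initial, neighbors, cost, tenure=10, max_iters=1000):
    """Generic tabu search skeleton (illustrative, not the paper's algorithm).

    neighbors(s) -> iterable of (move, new_solution) pairs
    cost(s)      -> objective value to minimize
    """
    current = best = initial
    best_cost = cost(best)
    tabu = {}  # move -> iteration until which it stays forbidden
    for it in range(max_iters):
        candidates = [
            (cost(s), move, s)
            for move, s in neighbors(current)
            # a tabu move is still allowed if it beats the best (aspiration)
            if tabu.get(move, -1) < it or cost(s) < best_cost
        ]
        if not candidates:
            break
        c, move, current = min(candidates, key=lambda t: t[0])
        tabu[move] = it + tenure
        if c < best_cost:
            best, best_cost = current, c
    return best
```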

857 citations


Journal ArticleDOI
TL;DR: This tutorial describes algorithms that are representative of each category (basic search, geometric, heuristic, and hardware-specific search algorithms), and discusses which type of algorithm might be suitable for different applications.
Abstract: The process of categorizing packets into "flows" in an Internet router is called packet classification. All packets belonging to the same flow obey a predefined rule and are processed in a similar manner by the router. For example, all packets with the same source and destination IP addresses may be defined to form a flow. Packet classification is needed for non-best-effort services, such as firewalls and quality of service; services that require the capability to distinguish and isolate traffic in different flows for suitable processing. In general, packet classification on multiple fields is a difficult problem. Hence, researchers have proposed a variety of algorithms which, broadly speaking, can be categorized as basic search algorithms, geometric algorithms, heuristic algorithms, or hardware-specific search algorithms. In this tutorial we describe algorithms that are representative of each category, and discuss which type of algorithm might be suitable for different applications.
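
The baseline against which all four categories are measured is a linear search over the rule list. A minimal, illustrative sketch (the two-field prefix rule format below is an assumption, not taken from the tutorial):

```python
from ipaddress import ip_address, ip_network

# Hypothetical rule table: (source prefix, destination prefix, flow label),
# listed in priority order; the first matching rule wins.
RULES = [
    (ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24"), "flow-A"),
    (ip_network("0.0.0.0/0"), ip_network("0.0.0.0/0"), "default"),
]

def classify(src: str, dst: str) -> str:
    """O(N) linear search over N rules -- the baseline that geometric,
    heuristic, and hardware-specific schemes aim to beat."""
    s, d = ip_address(src), ip_address(dst)
    for src_pfx, dst_pfx, flow in RULES:
        if s in src_pfx and d in dst_pfx:
            return flow
    raise LookupError("no matching rule")

print(classify("10.1.2.3", "192.168.1.7"))  # -> flow-A
```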

774 citations


Journal ArticleDOI
TL;DR: This study compares the hybrid algorithms in terms of solution quality and computation time on a number of packing problems of different sizes, and shows the effectiveness of the design of the different algorithms.

487 citations


Proceedings ArticleDOI
22 Apr 2001
TL;DR: This work proposes an efficient heuristic algorithm, H_MCOP, which attempts to minimize both a nonlinear cost function (for the feasibility part of the problem) and a primary cost function (for the optimality part), and proves that H_MCOP guarantees at least the performance of GLA and often improves upon it.
Abstract: Providing quality-of-service (QoS) guarantees in packet networks gives rise to several challenging issues. One of them is how to determine a feasible path that satisfies a set of constraints while maintaining high utilization of network resources. The latter objective implies the need to impose an additional optimality requirement on the feasibility problem. This can be done through a primary cost function (e.g., administrative weight, hop count) according to which the selected feasible path is optimal. In general, multi-constrained path selection, with or without optimization, is an NP-complete problem that cannot be solved exactly in polynomial time. Heuristics and approximation algorithms with polynomial and pseudo-polynomial-time complexities are often used to deal with this problem. However, existing solutions suffer either from excessive computational complexities that cannot be used for online network operation or from low performance. Moreover, they only deal with special cases of the problem (e.g., two constraints without optimization, one constraint with optimization, etc.). For the feasibility problem under multiple constraints, some researchers have proposed a nonlinear cost function whose minimization provides a continuous spectrum of solutions ranging from a generalized linear approximation (GLA) to an asymptotically exact solution. We propose an efficient heuristic algorithm for the most general form of the problem. We first formalize the theoretical properties of the above nonlinear cost function. We then introduce our heuristic algorithm (H_MCOP), which attempts to minimize both the nonlinear cost function (for the feasibility part) and the primary cost function (for the optimality part). We prove that H_MCOP guarantees at least the performance of GLA and often improves upon it. H_MCOP has the same order of complexity as Dijkstra's algorithm. Using extensive simulations on random graphs with correlated and uncorrelated link weights, we show that under the same level of computational complexity, H_MCOP outperforms its (less general) contenders in its success rate in finding feasible paths and in the cost of such paths.
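
The nonlinear cost function referred to above combines the K additive path weights w_i(p) and their bounds c_i into a single measure. A sketch of the commonly cited form (an assumption here, since the listing omits the paper's notation):

$$ g_\lambda(p) \;=\; \sum_{i=1}^{K} \left( \frac{w_i(p)}{c_i} \right)^{\lambda}, \qquad \lambda \ge 1 $$

For \lambda = 1 this reduces to the generalized linear approximation (GLA), while as \lambda \to \infty minimizing g_\lambda approaches an exact feasibility test (a path is feasible exactly when every ratio w_i(p)/c_i is at most 1).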

414 citations


Proceedings ArticleDOI
06 Jul 2001
TL;DR: This paper analyzes local search heuristics for the k-median and facility location problems and proves tight bounds on the locality gap of several natural swap-based local search procedures.
Abstract: In this paper, we analyze local search heuristics for the k-median and facility location problems. We define the locality gap of a local search procedure as the maximum ratio of a locally optimum solution (obtained using this procedure) to the global optimum. For k-median, we show that local search with swaps has a locality gap of exactly 5. When we permit p facilities to be swapped simultaneously, the locality gap of the local search procedure is exactly 3+2/p. This is the first analysis of local search for k-median that provides a bounded performance guarantee with only k medians. This also improves the previously known 4-approximation for this problem. For uncapacitated facility location, we show that local search, which permits adding, dropping and swapping a facility, has a locality gap of exactly 3. This improves the 5 bound of Korupolu et al. We also consider a capacitated facility location problem where each facility has a capacity and we are allowed to open multiple copies of a facility. For this problem we introduce a new operation which opens one or more copies of a facility and drops zero or more facilities. We prove that local search which permits this new operation has a locality gap between 3 and 4.
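
The single-swap procedure analyzed above is simple to state in code. A minimal sketch, assuming a distance function d and finite client and facility sets (all names here are illustrative):

```python
def kmedian_cost(medians, clients, d):
    """Sum over clients of the distance to the nearest open median."""
    return sum(min(d(c, m) for m in medians) for c in clients)

def single_swap_local_search(facilities, clients, d, k):
    """Single-swap local search for k-median: repeatedly close one median
    and open one facility whenever that strictly lowers the cost.  Per the
    paper, this procedure has a locality gap of exactly 5."""
    medians = set(list(facilities)[:k])  # arbitrary initial solution
    improved = True
    while improved:
        improved = False
        for out in list(medians):
            for inn in set(facilities) - medians:
                candidate = (medians - {out}) | {inn}
                if kmedian_cost(candidate, clients, d) < kmedian_cost(medians, clients, d):
                    medians, improved = candidate, True
                    break
            if improved:
                break
    return medians
```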

366 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: This paper presents a fast and scalable algorithm for determining whether part or all of a query can be computed from materialized views, and describes how it can be incorporated in transformation-based optimizers.
Abstract: Materialized views can provide massive improvements in query processing time, especially for aggregation queries over large tables. To realize this potential, the query optimizer must know how and when to exploit materialized views. This paper presents a fast and scalable algorithm for determining whether part or all of a query can be computed from materialized views and describes how it can be incorporated in transformation-based optimizers. The current version handles views composed of selections, joins and a final group-by. Optimization remains fully cost based, that is, a single “best” rewrite is not selected by heuristic rules but multiple rewrites are generated and the optimizer chooses the best alternative in the normal way. Experimental results based on an implementation in Microsoft SQL Server show outstanding performance and scalability. Optimization time increases slowly with the number of views but remains low even with up to a thousand views.

338 citations


Journal ArticleDOI
TL;DR: A heuristic solution algorithm is developed that applies successive linear programming, based on a reformulation and relaxation of the original problem, to produce feasible solutions with very small gaps between the solutions and their upper bound.

336 citations


Journal ArticleDOI
TL;DR: The methodological issues that must be confronted by researchers undertaking experimental evaluations of heuristics, including experimental design, sources of test instances, measures of algorithmic performance, analysis of results, and presentation in papers and talks are highlighted.
Abstract: Heuristic optimization algorithms seek good feasible solutions to optimization problems in circumstances where the complexities of the problem or the limited time available for solution do not allow exact solution. Although worst-case and probabilistic analysis of algorithms have produced insight on some classic models, most of the heuristics developed for large optimization problems must be evaluated empirically—by applying procedures to a collection of specific instances and comparing the observed solution quality and computational burden. This paper focuses on the methodological issues that must be confronted by researchers undertaking such experimental evaluations of heuristics, including experimental design, sources of test instances, measures of algorithmic performance, analysis of results, and presentation in papers and talks. The questions are difficult, and there are no clear right answers. We seek only to highlight the main issues, present alternative ways of addressing them under different circumstances, and caution about pitfalls to avoid.

319 citations


Journal ArticleDOI
TL;DR: This work uses extremal optimization to elucidate the phase transition in the 3-coloring problem, and provides independent confirmation of previously reported extrapolations for the ground-state energy of +/-J spin glasses in d = 3 and 4.
Abstract: We explore a new general-purpose heuristic for finding high-quality solutions to hard discrete optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. Extremal optimization successively updates extremely undesirable variables of a single suboptimal solution, assigning them new, random values. Large fluctuations ensue, efficiently exploring many local optima. We use extremal optimization to elucidate the phase transition in the 3-coloring problem, and we provide independent confirmation of previously reported extrapolations for the ground-state energy of ±J spin glasses in d = 3 and 4.
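
The update rule is compact enough to sketch. Below is an illustrative τ-EO step for 3-coloring, the problem studied in the paper; the adjacency-dict graph format and the value τ = 1.4 are assumptions, not taken from the paper.

```python
import random

def eo_step(colors, graph, tau=1.4):
    """One tau-EO update: rank vertices from worst to best by local fitness
    (minus the number of same-colored neighbors), pick a rank with
    probability proportional to rank**(-tau), and recolor it at random."""
    def fitness(v):
        return -sum(colors[v] == colors[u] for u in graph[v])
    ranked = sorted(graph, key=fitness)            # most conflicted first
    weights = [(j + 1) ** -tau for j in range(len(ranked))]
    v = random.choices(ranked, weights=weights)[0]
    colors[v] = random.randrange(3)                # new, random value
    return colors

# Toy usage: a triangle with a pendant vertex, adjacency as dict of sets.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
colors = {v: random.randrange(3) for v in graph}
for _ in range(100):
    colors = eo_step(colors, graph)
```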

300 citations


Journal ArticleDOI
TL;DR: In this paper, a hybrid method is proposed that draws upon the tabu search approach, extended with features taken from other combinatorial approaches such as genetic algorithms and simulated annealing, and from practical heuristic approaches.
Abstract: The capacitor placement (replacement) problem for radial distribution networks determines capacitor types, sizes, locations, and control schemes. Optimal capacitor placement is a hard combinatorial problem that can be formulated as a mixed integer nonlinear program. Since this is an NP-complete problem, the solution approach uses a combinatorial search algorithm. The paper proposes a hybrid method that draws upon the tabu search approach, extended with features taken from other combinatorial approaches such as genetic algorithms and simulated annealing, and from practical heuristic approaches. The proposed method has been tested on a range of networks available in the literature with superior results regarding both quality and cost of solutions.

Journal ArticleDOI
TL;DR: The algorithmic techniques used in FF in comparison to HSP are described and their benefits in terms of run-time and solution-length behavior are evaluated.
Abstract: Fast-Forward (FF) was the most successful automatic planner in the Fifth International Conference on Artificial Intelligence Planning and Scheduling (AIPS '00) planning systems competition. Like the well-known HSP system, FF relies on forward search in the state space, guided by a heuristic that estimates goal distances by ignoring delete lists. It differs from HSP in a number of important details. This article describes the algorithmic techniques used in FF in comparison to HSP and evaluates their benefits in terms of run-time and solution-length behavior.
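
The core heuristic idea, estimating goal distance by ignoring delete lists, can be sketched independently of FF's other machinery. The following is a minimal layered-reachability estimate in the spirit of the delete relaxation (not FF's actual relaxed-plan extraction); the STRIPS encoding as (preconditions, add-effects) pairs is an assumption.

```python
def relaxed_distance(state, goals, actions):
    """Estimate goal distance under the delete relaxation: apply actions,
    keep only their add effects, and return the first layer at which all
    goals are reachable (None if unreachable even without deletes)."""
    facts, layer = set(state), 0
    while not goals <= facts:
        new = {f for pre, add in actions if pre <= facts for f in add}
        if new <= facts:
            return None          # fixpoint: goals unreachable
        facts |= new
        layer += 1
    return layer

# Toy domain with a single action: have_cake -> eaten.
actions = [(frozenset({"have_cake"}), frozenset({"eaten"}))]
print(relaxed_distance({"have_cake"}, {"eaten"}, actions))  # -> 1
```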

Journal ArticleDOI
TL;DR: By exploring the inherent nature of the CNDP, the marginal function for the lower-level user equilibrium problem is proved to be continuously differentiable, and its functional value and derivative with respect to link capacity enhancement can be obtained efficiently by implementing a user equilibrium assignment subroutine.
Abstract: The continuous network design problem (CNDP) is characterized by a bilevel programming model and recognized to be one of the most difficult and challenging problems in transportation. The main difficulty stems from the fact that the bilevel formulation for the CNDP is nonconvex and nondifferentiable, and indeed only some heuristic methods have been proposed so far. In this paper, the bilevel programming model for CNDPs is transformed into a single-level optimization problem by virtue of a marginal function tool. By exploring the inherent nature of the CNDP, the marginal function for the lower-level user equilibrium problem is proved to be continuously differentiable, and its functional value and derivative with respect to link capacity enhancement can be obtained efficiently by implementing a user equilibrium assignment subroutine. Thus a continuously differentiable but still nonconvex optimization formulation of the CNDP is created, and a locally convergent augmented Lagrangian method is applied to solve this equivalent problem. The descent direction in each step of the inner loop of the solution method can be found by doing an all-or-nothing assignment. These favorable characteristics indicate the potential of the algorithm to solve large CNDPs. Numerical examples are presented to compare the proposed method with some existing algorithms.

Journal ArticleDOI
TL;DR: In this paper, a greedy randomized adaptive search procedure (GRASP) is applied to solve the transmission network expansion problem, and the best solution over all GRASP iterations is chosen as the result.
Abstract: A greedy randomized adaptive search procedure (GRASP) is a heuristic method that has been shown to be very powerful in solving combinatorial problems. In this paper we apply GRASP to solve the transmission network expansion problem. This procedure is an iterative sampling technique with two phases per iteration. The first, the construction phase, finds a feasible solution for the problem. The second, a local search phase, seeks improvements on the construction-phase solution. The best solution over all GRASP iterations is chosen as the result.
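
The two-phase structure described above is the standard GRASP template. A minimal sketch (the construction and local-search routines for network expansion are problem-specific and omitted here):

```python
def grasp(construct_greedy_randomized, local_search, cost, iterations=100):
    """Generic GRASP loop: each iteration builds a randomized greedy
    solution (phase 1), improves it by local search (phase 2), and the
    best solution over all iterations is returned."""
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        solution = local_search(construct_greedy_randomized())
        c = cost(solution)
        if c < best_cost:
            best, best_cost = solution, c
    return best
```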

Journal ArticleDOI
TL;DR: It is proved that, under certain conditions, having equality degree constraints with multiple edges allowed in the design of logical topologies does not affect congestion and helps in reducing the dimensionality of the search space and hence speeds up the search for an optimal solution of the linear formulation.
Abstract: We consider the problem of constructing logical topologies over a wavelength-routed optical network with no wavelength changers. We present a general linear formulation which considers routing traffic demands, and routing and assigning wavelengths to lightpaths, as a combined optimization problem. The formulation also takes into account the maximum number of hops a lightpath is permitted to take, multiple logical links in the logical topology, multiple physical links in the physical topology, and symmetry/asymmetry restrictions in designing logical topologies. The objective is to minimize congestion. We show by examples how equality and inequality logical degree constraints have a bearing on congestion. We prove that, under certain conditions, having equality degree constraints with multiple edges allowed in the design of logical topologies does not affect congestion. This helps in reducing the dimensionality of the search space and hence speeds up the search for an optimal solution of the linear formulation. We solve the linear formulation for small examples and show the tradeoff between congestion, number of wavelengths available and the maximum number of hops a lightpath is allowed to take. For large networks, we solve the linear formulation by relaxing the integer constraints. We develop topology design algorithms for large networks based on rounding the solutions obtained by solving the relaxed problem. Since the whole problem is linearizable, the solution obtained by relaxation of the integer constraints yields a lower bound on congestion. This is useful in comparing the efficiency of our heuristic algorithms. Following Bienstock and Gunluk (1995), we introduce a cutting plane which helps in obtaining better lower bounds on congestion and also enables us to reduce the previously obtained upper bounds on congestion.

Journal ArticleDOI
TL;DR: A new pruning algorithm is presented that uses sensitivity analysis to quantify the relevance of input and hidden units, and a new statistical pruning heuristic is proposed, based on variance analysis, to decide which units to prune.
Abstract: Architecture selection is a very important aspect in the design of neural networks (NNs) to optimally tune performance and computational complexity. Sensitivity analysis has been used successfully to prune irrelevant parameters from feedforward NNs. This paper presents a new pruning algorithm that uses sensitivity analysis to quantify the relevance of input and hidden units. A new statistical pruning heuristic is proposed, based on variance analysis, to decide which units to prune. The basic idea is that a parameter with a variance in sensitivity not significantly different from zero is irrelevant and can be removed. Experimental results show that the new pruning algorithm correctly prunes irrelevant input and hidden units. The new pruning algorithm is also compared with standard pruning algorithms.
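
A hedged sketch of the decision rule the heuristic implies: given per-pattern sensitivities of the network output with respect to each unit, a unit whose sensitivity mean and variance are statistically indistinguishable from zero is a pruning candidate. The normal-approximation z-test and the variance threshold below are illustrative stand-ins for the paper's variance analysis, not its actual test.

```python
import numpy as np

def prune_candidates(sensitivities, z_crit=1.96, var_eps=1e-6):
    """sensitivities: array of shape (n_patterns, n_units); entry (p, u)
    is d(output)/d(unit u) on training pattern p.  Returns indices of
    units whose sensitivity is indistinguishable from zero."""
    n = sensitivities.shape[0]
    mean = sensitivities.mean(axis=0)
    std = sensitivities.std(axis=0, ddof=1)
    z = np.abs(mean) / np.maximum(std / np.sqrt(n), 1e-12)
    return np.where((z < z_crit) & (std**2 < var_eps))[0]

# Toy usage: unit 1 has near-zero sensitivity on every pattern.
S = np.array([[0.9, 1e-5], [1.1, -1e-5], [0.8, 2e-5]])
print(prune_candidates(S))  # -> [1]
```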

Journal ArticleDOI
TL;DR: The objective of this paper is to present and categorise the solution approaches in the literature for 2D regular and irregular strip packing problems, with a focus on the analysis of the methods involving genetic algorithms.
Abstract: This paper is a review of the approaches developed to solve 2D packing problems with meta-heuristic algorithms. As packing tasks are combinatorial problems with very large search spaces, the recent literature encourages the use of meta-heuristic search methods, in particular genetic algorithms. The objective of this paper is to present and categorise the solution approaches in the literature for 2D regular and irregular strip packing problems. The focus is hereby on the analysis of the methods involving genetic algorithms. An overview of the methods applying other meta-heuristic algorithms, including simulated annealing, tabu search, and artificial neural networks, is also given.

Journal ArticleDOI
TL;DR: A comparative study among GA, SA, and TS shows that these algorithms have many similarities, but that they also possess distinctive features, mainly in their strategies for searching the solution state space.

Journal ArticleDOI
TL;DR: A new hybrid algorithm is described that exploits a compact genetic algorithm in order to generate high-quality tours, which are then refined by means of the Lin-Kernighan (LK) local search.
Abstract: The combination of genetic and local search heuristics has been shown to be an effective approach to solving the traveling salesman problem (TSP). This paper describes a new hybrid algorithm that exploits a compact genetic algorithm in order to generate high-quality tours, which are then refined by means of the Lin-Kernighan (LK) local search. The local optima found by the LK local search are in turn exploited by the evolutionary part of the algorithm in order to improve the quality of its simulated population. The results of several experiments conducted on different TSP instances with up to 13,509 cities show the efficacy of the symbiosis between the two heuristics.
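
The compact GA component admits a short sketch in its textbook binary form (the paper adapts it to tours and couples it with LK; the OneMax fitness below is just a toy):

```python
import random

def compact_ga(fitness, n_bits, pop_size=50, iterations=5000):
    """Textbook compact GA: a probability vector stands in for a whole
    population.  Each step samples two individuals and shifts the vector
    toward the winner by 1/pop_size wherever the two disagree."""
    p = [0.5] * n_bits
    for _ in range(iterations):
        a = [random.random() < pi for pi in p]
        b = [random.random() < pi for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:
                step = (1 if winner[i] else -1) / pop_size
                p[i] = min(1.0, max(0.0, p[i] + step))
    return p

# Toy usage: OneMax -- the probability vector drifts toward all ones.
print(compact_ga(sum, n_bits=8))
```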

Journal ArticleDOI
TL;DR: Compared to an optimal method, the improved heuristic is shown to be a very efficient algorithm which allocates shelf space at near-optimal levels.

Journal ArticleDOI
TL;DR: The proposed algorithm is particularly effective when the facility reopening and closing costs are relatively significant in the multi-period problem, and can be implemented to solve the composite problem.

Proceedings ArticleDOI
14 Oct 2001
TL;DR: This work proposes a heuristic for allocation in combinatorial auctions that can provide excellent solutions for problems with over 1000 items and 10,000 bids and achieves an average approximation error of less than 1%.
Abstract: We propose a heuristic for allocation in combinatorial auctions. We first run an approximation algorithm on the linear programming relaxation of the combinatorial auction. We then run a sequence of greedy algorithms, starting with the order on the bids determined by the approximate linear program and continuing in a hill-climbing fashion using local improvements in the order of bids. We have implemented the algorithm and have tested it on the complete corpus of instances provided by Vohra and de Vries as well as on instances drawn from the distributions of Leyton-Brown, Pearson, and Shoham. Our algorithm typically runs two to three orders of magnitude faster than the reported running times of Vohra and de Vries, while achieving an average approximation error of less than 1%. This algorithm can provide, in less than a minute of CPU time, excellent solutions for problems with over 1000 items and 10,000 bids. We thus believe that combinatorial auctions for most purposes face no practical computational hurdles.
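
The greedy phase is easy to sketch: scan the bids in a given order and accept any bid whose items are all still free. The LP-derived ordering and the hill-climbing over orderings are omitted; this is illustrative only.

```python
def greedy_allocation(bids):
    """bids: list of (price, items) pairs, already sorted in the order to
    be tried (e.g., by scores from the LP relaxation).  Returns total
    revenue and the accepted bids."""
    allocated, winners, revenue = set(), [], 0.0
    for price, items in bids:
        if not allocated & set(items):   # every item still unallocated
            allocated |= set(items)
            winners.append((price, items))
            revenue += price
    return revenue, winners

# Toy usage: three bids on overlapping bundles.
bids = [(10.0, {"a", "b"}), (7.0, {"b", "c"}), (5.0, {"c"})]
print(greedy_allocation(bids))  # -> (15.0, [...])
```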

Journal ArticleDOI
TL;DR: It is found that although all methods are able to generate significant in-sample and out-of-sample profits when transaction costs are zero, the genetic programming approach is superior for non-zero transaction costs, although none of the methods produce significant profits at realistic transaction costs.
Abstract: We consider strategies which use a collection of popular technical indicators as input and seek a profitable trading rule defined in terms of them. We consider two popular computational learning approaches, reinforcement learning and genetic programming, and compare them to a pair of simpler methods: the exact solution of an appropriate Markov decision problem, and a simple heuristic. We find that although all methods are able to generate significant in-sample and out-of-sample profits when transaction costs are zero, the genetic programming approach is superior for non-zero transaction costs, although none of the methods produce significant profits at realistic transaction costs. We also find that there is a substantial danger of overfitting if in-sample learning is not constrained.

Book ChapterDOI
05 Sep 2001
TL;DR: An algorithm, FastFDs, based on a depth-first, heuristic-driven (DFHD) search for finding minimal covers of hypergraphs; experimental results indicate that DFHD search is more efficient than Dep-Miner's levelwise search or TANE's partitioning approach for many benchmark instances.
Abstract: The problem of discovering functional dependencies (FDs) from an existing relation instance has received considerable attention in the database research community. To date, even the most efficient solutions have exponential complexity in the number of attributes of the instance. We develop an algorithm, FastFDs, for solving this problem based on a depth-first, heuristic-driven (DFHD) search for finding minimal covers of hypergraphs. The technique of reducing the FD discovery problem to the problem of finding minimal covers of hypergraphs was applied previously by Lopes et al. in the algorithm Dep-Miner. Dep-Miner employs a levelwise search for minimal covers, whereas FastFDs uses DFHD search. We report several tests on distinct benchmark relation instances involving Dep-Miner, FastFDs, and TANE. Our experimental results indicate that DFHD search is more efficient than Dep-Miner's levelwise search or TANE's partitioning approach for many of these benchmark instances.
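
The paper's contribution is the DFHD search over difference sets, but the underlying primitive, testing whether a candidate FD X → A holds in an instance, is easy to sketch (illustrative, not FastFDs itself):

```python
def fd_holds(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in a
    relation instance given as a list of dicts (attribute -> value)."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False   # equal lhs values, different rhs values
    return True

# Toy instance: city -> zip holds below, zip -> city does not.
rows = [
    {"city": "Springfield", "zip": "11111"},
    {"city": "Springfield", "zip": "11111"},
    {"city": "Shelbyville", "zip": "11111"},
]
print(fd_holds(rows, ["city"], ["zip"]))  # -> True
print(fd_holds(rows, ["zip"], ["city"]))  # -> False
```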

Journal ArticleDOI
TL;DR: The approach, which is based on the cyclic transfer neighborhood structure due to Thompson and Psaraftis and to Thompson and Orlin, transforms a profitable exchange into a negative-cost subset-disjoint cycle in a graph, called an improvement graph, and identifies these cycles using variants of shortest-path label-correcting algorithms.
Abstract: The capacitated minimum spanning tree (CMST) problem is to find a minimum cost spanning tree with an additional cardinality constraint on the sizes of the subtrees incident to a given root node. The CMST problem is an NP-complete problem, and existing exact algorithms can solve only small size problems. Currently, the best available heuristic procedures for the CMST problem are tabu search algorithms due to Amberg et al. and Sharaiha et al. These algorithms use two-exchange neighborhood structures that are based on exchanging a single node or a set of nodes between two subtrees. In this paper, we generalize their neighborhood structures to allow exchanges of nodes among multiple subtrees simultaneously; we refer to such neighborhood structures as multi-exchange neighborhood structures. Our first multi-exchange neighborhood structure allows exchanges of single nodes among several subtrees. Our second multi-exchange neighborhood structure allows exchanges that involve multiple subtrees. The size of each of these neighborhood structures grows exponentially with the problem size without any substantial increase in the computational times needed to find improved neighbors. Our approach, which is based on the cyclic transfer neighborhood structure due to Thompson and Psaraftis and to Thompson and Orlin, transforms a profitable exchange into a negative-cost subset-disjoint cycle in a graph, called an improvement graph, and identifies these cycles using variants of shortest-path label-correcting algorithms. Our computational results with GRASP and tabu search algorithms based on these neighborhood structures reveal that (i) for the unit demand case our algorithms obtained the best available solutions for all benchmark instances and improved some; and (ii) for the heterogeneous demand case our algorithms improved the best available solutions for most of the benchmark instances, with improvements by as much as 18%. The running times of our multi-exchange neighborhood search algorithms are comparable to those taken by two-exchange neighborhood search algorithms.
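
Setting aside the subset-disjointness side constraint, the core subproblem, detecting a negative-cost cycle in the improvement graph, is classical. A label-correcting sketch (illustrative, not the paper's specialized variant):

```python
def find_negative_cycle(n, edges):
    """Bellman-Ford label correction on nodes 0..n-1 with edges given as
    (u, v, cost) triples.  Returns the nodes of some negative-cost cycle,
    or None if none exists."""
    dist, pred = [0.0] * n, [None] * n   # virtual source at distance 0
    x = None
    for _ in range(n):
        x = None
        for u, v, c in edges:
            if dist[u] + c < dist[v]:
                dist[v], pred[v] = dist[u] + c, u
                x = v
        if x is None:
            return None                  # converged: no negative cycle
    for _ in range(n):                   # step back until we are on the cycle
        x = pred[x]
    cycle, v = [x], pred[x]
    while v != x:
        cycle.append(v)
        v = pred[v]
    return cycle[::-1]

# Toy usage: cycle 0 -> 1 -> 2 -> 0 with total cost -1.
print(find_negative_cycle(3, [(0, 1, 1.0), (1, 2, -3.0), (2, 0, 1.0)]))
```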

Journal ArticleDOI
TL;DR: The solution method, which is based upon Atkinson's greedy look-ahead heuristic, enhances traditional vehicle routing approaches, and provides surprisingly good performance results with respect to a set of standard test problems from the literature.
Abstract: In this paper we consider the problem of physically distributing finished goods from a central facility to geographically dispersed customers, which pose daily demands for items produced in the facility and act as sales points for consumers. The management of the facility is responsible for satisfying all demand, and promises deliveries to the customers within fixed time intervals that represent the earliest and latest times during the day that a delivery can take place. We formulate a comprehensive mathematical model to capture all aspects of the problem, and incorporate in the model all critical practical concerns such as vehicle capacity, delivery time intervals and all relevant costs. The model, which is a case of the vehicle routing problem with time windows, is solved using a new heuristic technique. Our solution method, which is based upon Atkinson's greedy look-ahead heuristic, enhances traditional vehicle routing approaches, and provides surprisingly good performance results with respect to a set of standard test problems from the literature. The approach is used to determine the vehicle fleet size and the daily route of each vehicle in an industrial example from the food industry. This actual problem, with approximately two thousand customers, is presented and solved by our heuristic, using an interface to a Geographical Information System to determine inter-customer and depot–customer distances. The results indicate that the method is well suited for determining the required number of vehicles and the delivery schedules on a daily basis, in real life applications.

Book ChapterDOI
28 May 2001
TL;DR: Experimental results suggest that M-HEU finds 96% optimal solutions on average with much reduced computational complexity and performs favorably relative to other heuristic algorithms for MMKP.
Abstract: The Multiple-Choice Multi-Dimension Knapsack Problem (MMKP) is a variant of the 0-1 Knapsack Problem, an NP-Hard problem. Hence algorithms for finding the exact solution of MMKP are not suitable for application in real time decision-making applications, like quality adaptation and admission control of an interactive multimedia system. This paper presents two new heuristic algorithms, M-HEU and I-HEU for solving MMKP. Experimental results suggest that M-HEU finds 96% optimal solutions on average with much reduced computational complexity and performs favorably relative to other heuristic algorithms for MMKP. The scalability property of I-HEU makes this heuristic a strong candidate for use in real time applications.
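
The paper's algorithms are not specified in this listing, but the general shape of a greedy MMKP heuristic can be sketched: start from a cheap feasible pick in each group, then hill-climb on feasible single-group upgrades. This sketch is illustrative, not M-HEU.

```python
def greedy_mmkp(groups, capacity):
    """groups: list of groups, each a list of (value, resources) items,
    resources being a tuple with one entry per dimension; exactly one
    item must be picked per group.  Returns chosen indices or None."""
    dims, used = len(capacity), [0.0] * len(capacity)

    def fits(extra):
        return all(used[d] + extra[d] <= capacity[d] for d in range(dims))

    choice = []
    for g in groups:                      # phase 1: cheapest feasible picks
        i = min(range(len(g)), key=lambda j: sum(g[j][1]))
        if not fits(g[i][1]):
            return None
        used = [used[d] + g[i][1][d] for d in range(dims)]
        choice.append(i)

    improved = True
    while improved:                       # phase 2: feasible value upgrades
        improved = False
        for gi, g in enumerate(groups):
            cur_v, cur_r = g[choice[gi]]
            for j, (v, r) in enumerate(g):
                delta = [r[d] - cur_r[d] for d in range(dims)]
                if v > cur_v and fits(delta):
                    used = [used[d] + delta[d] for d in range(dims)]
                    choice[gi], improved = j, True
                    break
    return choice

# Toy usage: two groups, two resource dimensions, capacity (10, 10).
groups = [[(3, (2, 2)), (5, (4, 4))], [(2, (1, 1)), (9, (8, 8))]]
print(greedy_mmkp(groups, (10, 10)))  # -> [1, 0]
```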

Journal ArticleDOI
TL;DR: In this paper, a branch-and-bound algorithm is proposed to solve the problem of batching orders in a parallel-aisle warehouse, with the objective of minimizing the maximum lead time of any of the batches.
Abstract: In this paper we address the problem of batching orders in a parallel-aisle warehouse, with the objective to minimize the maximum lead time of any of the batches. This is a typical objective for a wave picking operation. Many heuristics have been suggested to solve order batching problems. We present a branch-and-bound algorithm to solve this problem exactly. An initial upper bound for the branch-and-bound algorithm is obtained using a 2-opt heuristic. We present a basic version of the algorithm and show that major improvements are obtained by a simple but very powerful preprocessing step and an improved lower bound. The improvements for the algorithm are developed and tested using a relatively small test set. Next, the improved algorithm is tested on an extensive test set. It appears that problems of moderate size can be solved to optimality in practical time, especially when the number of batches is of importance. The 2-opt heuristic appears to be very powerful, providing tight upper bounds. Therefore, a truncated branch-and-bound algorithm would suffice in practice.

Journal ArticleDOI
TL;DR: The design and implementation of a PC-based computer system to aid the construction of a combined university course–examination timetable is reported, which allows the easy construction and testing of alternative schedules which are pre-conditioned according to requirements specified by the user.

Journal ArticleDOI
TL;DR: Several new techniques for dealing with the Steiner problem in (undirected) networks are presented, including heuristics that achieve sharper upper bounds than the strongest known heuristic for this problem despite running times which are smaller by orders of magnitude.