Showing papers on "Heuristic published in 2002"


Journal ArticleDOI
TL;DR: This work presents a simple and efficient implementation of Lloyd's k-means clustering algorithm, which it calls the filtering algorithm, and establishes the practical efficiency of the algorithm's running time.
Abstract: In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k, and the problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's (1982) algorithm. We present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation.

5,288 citations
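
The filtering algorithm itself organizes the data points in a kd-tree to prune candidate centers; the sketch below shows only the plain Lloyd iteration it accelerates, on synthetic data, and is an illustration rather than the paper's implementation.

```python
# Minimal sketch of the plain Lloyd iteration underlying the filtering
# algorithm; the kd-tree-based candidate pruning of the paper is omitted.
import numpy as np

def lloyd_kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the centroid of its assigned points.
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Example on synthetic 2-D data (illustrative only).
data = np.random.default_rng(1).normal(size=(200, 2))
centers, labels = lloyd_kmeans(data, k=3)
```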


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper proposes an extension of the heuristic called "particle swarm optimization" (PSO) to multiobjective optimization problems; the approach maintains previously found nondominated vectors in a global repository that other particles later use to guide their own flight.
Abstract: This paper introduces a proposal to extend the heuristic called "particle swarm optimization" (PSO) to deal with multiobjective optimization problems. Our approach uses the concept of Pareto dominance to determine the flight direction of a particle and it maintains previously found nondominated vectors in a global repository that is later used by other particles to guide their own flight. The approach is validated using several standard test functions from the specialized literature. Our results indicate that our approach is highly competitive with current evolutionary multiobjective optimization techniques.

1,842 citations
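
A minimal sketch of the two ingredients the abstract highlights, Pareto dominance and a repository-guided velocity update; the function names, parameters, and the uniformly random choice of leader are assumptions of this sketch, not the paper's exact procedure.

```python
# Hedged sketch of MOPSO-style bookkeeping: Pareto dominance, an external
# repository of nondominated positions, and a velocity update that draws its
# global guide from that repository. Parameter values are placeholders.
import random

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def update_repository(repo, x, f):
    """Insert position x into the repository if no stored position dominates it."""
    fx = f(x)
    if any(dominates(f(r), fx) for r in repo):
        return repo
    return [r for r in repo if not dominates(fx, f(r))] + [x]

def update_velocity(v, x, pbest, repo, w=0.4, c1=1.0, c2=1.0):
    leader = random.choice(repo)  # a nondominated position guides the flight
    return [w * vi + c1 * random.random() * (pb - xi) + c2 * random.random() * (ld - xi)
            for vi, xi, pb, ld in zip(v, x, pbest, leader)]
```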


Proceedings Article
01 Jan 2002
TL;DR: The proposed extensions of the Support Vector Machine learning approach lead to mixed integer quadratic programs that can be solved heuristically, and this generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods.
Abstract: This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristically. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization.

1,556 citations
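
A rough sketch of the kind of alternating heuristic such mixed integer quadratic programs invite: train a standard SVM on current instance labels, relabel the instances, and enforce that every positive bag keeps at least one positive instance. It assumes scikit-learn and simplifies the paper's mi-SVM/MI-SVM formulations considerably.

```python
# Hedged sketch of an alternating heuristic for multiple-instance SVM learning;
# bags is a list of (n_i, d) arrays, bag_labels holds +1/-1 per bag.
import numpy as np
from sklearn.svm import SVC

def mi_svm_heuristic(bags, bag_labels, iters=10):
    X = np.vstack(bags)
    # Initialize every instance with its bag's label.
    y = np.concatenate([np.full(len(b), lab) for b, lab in zip(bags, bag_labels)])
    offsets = np.cumsum([0] + [len(b) for b in bags])
    for _ in range(iters):
        clf = SVC(kernel="rbf").fit(X, y)
        scores = clf.decision_function(X)
        new_y = np.where(scores >= 0, 1, -1)
        for i, lab in enumerate(bag_labels):
            s, e = offsets[i], offsets[i + 1]
            if lab == 1 and not np.any(new_y[s:e] == 1):
                new_y[s + np.argmax(scores[s:e])] = 1   # keep one witness per positive bag
            elif lab == -1:
                new_y[s:e] = -1                          # negative bags contain only negatives
        if np.array_equal(new_y, y):
            break
        y = new_y
    return clf
```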


Journal ArticleDOI
TL;DR: The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge and can lead to the counterintuitive less-is-more effect, in which less knowledge is better than more for making accurate inferences.
Abstract: One view of heuristics is that they are imperfect versions of optimal statistical procedures considered too complicated for ordinary minds to carry out. In contrast, the authors consider heuristics to be adaptive strategies that evolved in tandem with fundamental psychological mechanisms. The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge. This heuristic exploits a fundamental adaptation of many organisms: the vast, sensitive, and reliable capacity for recognition. The authors specify the conditions under which the recognition heuristic is successful and when it leads to the counterintuitive less-is-more effect in which less knowledge is better than more for making accurate inferences.

1,227 citations
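
The decision rule itself is simple enough to state as code; the sketch below renders the two-alternative case described in the abstract, with the example objects chosen arbitrarily for illustration.

```python
# Minimal sketch of the recognition heuristic for a two-alternative inference
# (e.g., "which city is larger?"): if exactly one object is recognized, infer
# that it has the higher criterion value; otherwise the heuristic does not apply.
def recognition_heuristic(a, b, recognized):
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None  # both or neither recognized: fall back to other knowledge or guess

choice = recognition_heuristic("San Diego", "San Antonio", recognized={"San Diego"})
```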


Journal ArticleDOI
TL;DR: The effectiveness of the proposed PSO-based algorithm is demonstrated by comparing it with the genetic algorithm, a well-known population-based probabilistic heuristic, on randomly generated task interaction graphs.

649 citations


Proceedings ArticleDOI


28 Jul 2002
TL;DR: This paper applies Lifelong Planning A* to robot navigation in unknown terrain, including goal-directed navigation in unknown terrain and mapping of unknown terrain, and develops the resulting D* Lite algorithm, which implements the same behavior as Stentz' Focussed Dynamic A* but is algorithmically different.
Abstract: Incremental heuristic search methods use heuristics to focus their search and reuse information from previous searches to find solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. In this paper, we apply Lifelong Planning A* to robot navigation in unknown terrain, including goal-directed navigation in unknown terrain and mapping of unknown terrain. The resulting D* Lite algorithm is easy to understand and analyze. It implements the same behavior as Stentz' Focussed Dynamic A* but is algorithmically different. We prove properties about D* Lite and demonstrate experimentally the advantages of combining incremental and heuristic search for the applications studied. We believe that these results provide a strong foundation for further research on fast replanning methods in artificial intelligence and robotics.

576 citations
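
As a small illustration of the incremental-search machinery, the sketch below shows the priority key D* Lite uses to order vertex expansions; the dictionary-based representation of g and rhs is an assumption of this sketch, and the rest of the algorithm is omitted.

```python
# Sketch of the D* Lite priority key: g and rhs are per-vertex value estimates,
# h is the heuristic, and km accumulates heuristic change as the robot moves.
def calculate_key(s, g, rhs, h, s_start, km):
    v = min(g[s], rhs[s])
    return (v + h(s_start, s) + km, v)
```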


Journal ArticleDOI
TL;DR: In this article, the authors present a trade-off strategy where multiple negotiation decision variables are traded off against one another (e.g., paying a higher price in order to obtain an earlier delivery date, or waiting longer to obtain a higher quality service).

566 citations


Journal ArticleDOI
TL;DR: It is shown that several key properties, used to design heuristic procedures, do not hold in the blocking and no-wait cases, while some of the most effective ideas used to develop branch and bound algorithms can be easily extended.

448 citations


Journal ArticleDOI
TL;DR: This work provides an overview of heuristic algorithms for constraint-based path selection, focusing on restricted shortest path and multi-constrained path algorithms.
Abstract: Constraint-based path selection aims at identifying a path that satisfies a set of quality of service (QoS) constraints. In general, this problem is known to be NP-complete, leading to the proposal of many heuristic algorithms. We provide an overview of these algorithms, focusing on restricted shortest path and multi-constrained path algorithms.

337 citations


Book ChapterDOI
10 Dec 2002
TL;DR: This work builds a path planning system based on RRTs that interleaves planning and execution, first evaluating it in simulation and then applying it to physical robots, and demonstrates that ERRT is significantly more efficient for replanning than a basic RRT planner.
Abstract: Mobile robots often must find a trajectory to another position in their environment, subject to constraints. This is the problem of planning a path through a continuous domain. Rapidly-exploring random trees (RRTs) are a recently developed representation on which fast continuous domain path planners can be based. In this work, we build a path planning system based on RRTs that interleaves planning and execution, first evaluating it in simulation and then applying it to physical robots. Our planning algorithm, ERRT (execution extended RRT), introduces two novel extensions of previous RRT work, the waypoint cache and adaptive cost penalty search, which improve replanning efficiency and the quality of generated paths. ERRT is successfully applied to a real-time multi-robot system. Results demonstrate that ERRT is significantly more efficient for replanning than a basic RRT planner, performing competitively with or better than existing heuristic and reactive real-time path planning approaches. ERRT is a significant step forward with the potential for making path planning common on real robots, even in challenging continuous, highly dynamic domains.

317 citations
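
A hedged sketch of the waypoint-cache idea described above: when growing the tree, ERRT biases target selection toward the goal and toward waypoints cached from previous plans. The probability values and function names here are placeholders, not the paper's tuned settings.

```python
# Sketch of ERRT-style biased target selection with a waypoint cache.
import random

def choose_target(goal, waypoint_cache, sample_uniform, p_goal=0.1, p_waypoint=0.6):
    r = random.random()
    if r < p_goal:
        return goal                              # bias toward the goal
    if r < p_goal + p_waypoint and waypoint_cache:
        return random.choice(waypoint_cache)     # reuse a waypoint from a previous plan
    return sample_uniform()                      # otherwise sample the free space uniformly
```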


Journal ArticleDOI
TL;DR: The paper builds on earlier work, in which the second-best tax rule for this problem was derived for general static networks, so that the solution presented is valid for any graph of the network, and for any set of tolling points available on that network.
Abstract: This paper considers the second-best problem where not all links of a congested transportation network can be tolled. This paper builds on earlier work, in which the second-best tax rule for this problem was derived for general static networks, so that the solution presented is valid for any graph of the network, and for any set of tolling points available on that network. An algorithm is presented for finding second-best tolls, based on this general solution. A simulation model is used for studying its performance for various archetype pricing schemes: a toll-cordon, area licences, parking policies in the city centre, pricing of a single major highway, and pay-lanes and 'free-lanes' on major highways. Furthermore, an exploratory analysis is given of a method for selecting the optimal location of toll points when not all links can be tolled.

Journal ArticleDOI
TL;DR: This article provides an overview of recent results on lexicographic, linear, and Bayesian models for paired comparison from a cognitive psychology perspective and identifies the optimal model in each class, where optimality is defined with respect to performance when fitting known data.
Abstract: This article provides an overview of recent results on lexicographic, linear, and Bayesian models for paired comparison from a cognitive psychology perspective. Within each class, we distinguish subclasses according to the computational complexity required for parameter setting. We identify the optimal model in each class, where optimality is defined with respect to performance when fitting known data. Although not optimal when fitting data, simple models can be astonishingly accurate when generalizing to new data. A simple heuristic belonging to the class of lexicographic models is Take The Best (Gigerenzer & Goldstein (1996) Psychol. Rev. 102: 684). It is more robust than other lexicographic strategies which use complex procedures to establish a cue hierarchy. In fact, it is robust due to its simplicity, not despite it. Similarly, Take The Best looks up only a fraction of the information that linear and Bayesian models require; yet it achieves performance comparable to that of models which integrate information. Due to its simplicity, frugality, and accuracy, Take The Best is a plausible candidate for a psychological model in the tradition of bounded rationality. We review empirical evidence showing the descriptive validity of fast and frugal heuristics.
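
Take The Best is simple enough to write down directly; the sketch below is an illustrative rendering of the lexicographic rule (inspect cues in order of validity, decide on the first discriminating cue, otherwise guess), with the cue encoding an assumption of this sketch.

```python
# Sketch of the Take The Best lexicographic heuristic for paired comparison.
import random

def take_the_best(cues_a, cues_b, cue_order):
    """cues_a/cues_b map cue names to 1 (positive), -1 (negative), or None (unknown);
    cue_order is sorted by descending cue validity."""
    for cue in cue_order:
        va, vb = cues_a.get(cue), cues_b.get(cue)
        if va == 1 and vb != 1:
            return "a"          # first discriminating cue decides
        if vb == 1 and va != 1:
            return "b"
    return random.choice(["a", "b"])  # no cue discriminates: guess
```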

Journal ArticleDOI
TL;DR: A new heuristic called the self-adapting genetic algorithm is proposed to solve the classical resource-constrained project scheduling problem (RCPSP); it employs the well-known activity list representation and considers two different decoding procedures.
Abstract: This paper deals with the classical resource-constrained project scheduling problem (RCPSP). There, the activities of a project have to be scheduled subject to precedence and resource constraints. The objective is to minimize the makespan of the project. We propose a new heuristic called self-adapting genetic algorithm to solve the RCPSP. The heuristic employs the well-known activity list representation and considers two different decoding procedures. An additional gene in the representation determines which of the two decoding procedures is actually used to compute a schedule for an individual. This allows the genetic algorithm to adapt itself to the problem instance actually solved. That is, the genetic algorithm learns which of the alternative decoding procedures is the more successful one for this instance. In other words, not only the solution for the problem, but also the algorithm itself is subject to genetic optimization. Computational experiments show that the mechanism of self-adaptation is capable of exploiting the benefits of both decoding procedures. Moreover, the tests show that the proposed heuristic is among the best ones currently available for the RCPSP. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 433–448, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10029
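
A hedged sketch of the self-adaptation mechanism: each individual carries an activity list plus one extra gene naming which decoding procedure turns the list into a schedule, so selection also acts on the choice of decoder. The decoder bodies are placeholders here, not the paper's schedule generation schemes.

```python
# Sketch of an individual for the self-adapting GA: activity list + decoder gene.
import random

def decode_a(activity_list, instance):  # placeholder for one decoding procedure
    ...

def decode_b(activity_list, instance):  # placeholder for the other decoding procedure
    ...

DECODERS = {"a": decode_a, "b": decode_b}

def make_individual(activities):
    return {"list": random.sample(activities, len(activities)),
            "decoder": random.choice(list(DECODERS))}

def evaluate(individual, instance):
    # The decoder named by the extra gene builds the schedule; its makespan
    # would be the fitness to minimize.
    return DECODERS[individual["decoder"]](individual["list"], instance)
```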


Journal ArticleDOI
I-Ming Chao
TL;DR: A solution construction method and a tabu search improvement heuristic coupled with the deviation concept found in deterministic annealing are developed to solve the truck and trailer routing problem.

Journal ArticleDOI
TL;DR: In this article, a rank-two relaxation is proposed for the MAX-CUT problem, and a specialized version of the Goemans-Williamson technique is developed to achieve better practical performance.
Abstract: The Goemans-Williamson randomized algorithm guarantees a high-quality approximation to the MAX-CUT problem, but the cost associated with such an approximation can be excessively high for large-scale problems due to the need for solving an expensive semidefinite relaxation. In order to achieve better practical performance, we propose an alternative, rank-two relaxation and develop a specialized version of the Goemans-Williamson technique. The proposed approach leads to continuous optimization heuristics applicable to MAX-CUT as well as other binary quadratic programs, for example the MAX-BISECTION problem. A computer code based on the rank-two relaxation heuristics is compared with two state-of-the-art semidefinite programming codes that implement the Goemans-Williamson randomized algorithm, as well as with a purely heuristic code for effectively solving a particular MAX-CUT problem arising in physics. Computational results show that the proposed approach is fast and scalable and, more importantly, attains a higher approximation quality in practice than that of the Goemans-Williamson randomized algorithm. An extension to MAX-BISECTION is also discussed, as is an important difference between the proposed approach and the Goemans-Williamson algorithm; namely, that the new approach does not guarantee an upper bound on the MAX-CUT optimal value.
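
A rough sketch of the rank-two idea: relax each binary variable to an angle on the unit circle, locally optimize the relaxed cut value, then recover a cut by slicing the circle at a candidate angle (a 2-D analogue of Goemans-Williamson rounding). The use of SciPy's general-purpose minimizer is an assumption of this sketch, not the paper's specialized procedure.

```python
# Illustrative rank-two relaxation heuristic for MAX-CUT on a symmetric weight matrix W.
import numpy as np
from scipy.optimize import minimize

def relaxed_cut(theta, W):
    diff = theta[:, None] - theta[None, :]
    return 0.25 * np.sum(W * (1.0 - np.cos(diff)))

def cut_value(x, W):
    return 0.25 * np.sum(W * (1.0 - np.outer(x, x)))

def rank_two_maxcut(W, seed=0):
    n = W.shape[0]
    theta0 = np.random.default_rng(seed).uniform(0, 2 * np.pi, n)
    theta = minimize(lambda t: -relaxed_cut(t, W), theta0).x  # maximize relaxed cut
    best_x, best_val = None, -np.inf
    for alpha in theta:                     # try each angle as the cutting line
        x = np.where(np.cos(theta - alpha) >= 0, 1, -1)
        val = cut_value(x, W)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```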

Journal ArticleDOI
TL;DR: Results indicate that the limited path heuristic is relatively insensitive to the number of constraints and is superior to the limited granularity heuristic in solving k-constrained QoS routing problems when k > 3.
Abstract: Multiconstrained quality-of-service (QoS) routing deals with finding routes that satisfy multiple independent QoS constraints. This problem is NP-hard. In this paper, two heuristics, the limited granularity heuristic and the limited path heuristic, are investigated. Both heuristics extend the Bellman-Ford shortest path algorithm and solve general k-constrained QoS routing problems. Analytical and simulation studies are conducted to compare the time/space requirements of the heuristics and the effectiveness of the heuristics in finding paths that satisfy the QoS constraints. The major results of this paper are the following. For a network with N nodes, E edges, and k (a small constant) independent QoS constraints, the limited granularity heuristic must maintain a table of size O(|N|^(k-1)) in each node to be effective, which results in a time complexity of O(|N|^k |E|); while the limited path heuristic can achieve very high performance by maintaining O(|N|^2 lg(|N|)) entries in each node. These results indicate that the limited path heuristic is relatively insensitive to the number of constraints and is superior to the limited granularity heuristic in solving k-constrained QoS routing problems when k > 3.
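
A simplified sketch of a limited-path, Bellman-Ford-style search for multiconstrained routing: each node keeps a bounded set of nondominated weight vectors, and a path is feasible if every accumulated weight meets its constraint. The pruning and tie-breaking details follow the spirit of the paper, not its exact pseudocode.

```python
# Illustrative limited-path heuristic for k-constrained QoS routing.
def dominated(w1, w2):
    return all(a >= b for a, b in zip(w1, w2)) and w1 != w2

def limited_path_routing(n_nodes, edges, src, dst, constraints, max_paths=8):
    # edges: list of (u, v, (w_1, ..., w_k)) with additive QoS weights
    k = len(constraints)
    paths = {v: [] for v in range(n_nodes)}        # weight vectors kept per node
    paths[src] = [tuple(0 for _ in range(k))]
    for _ in range(n_nodes - 1):                   # Bellman-Ford rounds
        for u, v, w in edges:
            for pw in paths[u]:
                cand = tuple(a + b for a, b in zip(pw, w))
                if any(dominated(cand, q) or cand == q for q in paths[v]):
                    continue
                paths[v] = [q for q in paths[v] if not dominated(q, cand)] + [cand]
                paths[v] = sorted(paths[v], key=sum)[:max_paths]   # keep a bounded set
    return [p for p in paths[dst] if all(a <= c for a, c in zip(p, constraints))]
```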

Proceedings ArticleDOI
07 Nov 2002
TL;DR: This paper considers resource allocation and pricing for the downlink of a wireless network, and considers a suboptimal scheme which does not require knowledge of the users' utility functions, and shows that this scheme is asymptotically optimal, in the limit of large demand.
Abstract: This paper considers resource allocation and pricing for the downlink of a wireless network. We describe a model that applies to either a time-slotted system (e.g. Qualcomm's HDR proposal) or a CDMA system; the main feature of this model is that the channel quality varies across the users. We study using a pricing scheme for the allocation of radio resources. We show that to maximize revenue in such a system, the base station should allocate resources in a discriminatory manner, where different users are charged different prices based in part on their channel quality. However, optimally allocating resources in this way is shown to require knowledge about each user's utility function. We consider a suboptimal scheme which does not require knowledge of the users' utility functions, and show that this scheme is asymptotically optimal, in the limit of large demand. Moreover, such a scheme is shown to maximize social welfare. We also consider a heuristic scheme for the case of small demand, which does not require perfect knowledge about the users' utility functions. We provide numerical results that illustrate the performance of this heuristic.

Proceedings Article
23 Apr 2002
TL;DR: LPG, a fast planner using local search for solving planning graphs, is presented; its basic heuristic is inspired by Walksat, which in Kautz and Selman's Blackbox can be used to solve the SAT-encoding of a planning graph.
Abstract: We present LPG, a fast planner using local search for solving planning graphs. LPG can use various heuristics based on a parametrized objective function. These parameters weight different types of inconsistencies in the partial plan represented by the current search state, and are dynamically evaluated during search using Lagrange multipliers. LPG's basic heuristic was inspired by Walksat, which in Kautz and Selman's Blackbox can be used to solve the SAT-encoding of a planning graph. An advantage of LPG is that its heuristics exploit the structure of the planning graph, while Blackbox relies on general heuristics for SAT-problems, and requires the translation of the planning graph into propositional clauses. Another major difference is that LPG can handle action costs to produce good quality plans. This is achieved by an "anytime" process minimizing an objective function based on the number of inconsistencies in the partial plan and on its overall cost. The objective function can also take into account the number of parallel steps and the overall plan duration. Experimental results illustrate the efficiency of our approach showing, in particular, that for a set of well-known benchmark domains LPG is significantly faster than existing Graphplan-style planners.

Journal ArticleDOI
TL;DR: Eight types of heuristic planning techniques were applied to three increasingly difficult forest planning problems where the objective function sought to maximize the amount of land in certain types of wildlife habitat, in order to understand the relative challenges and opportunities each technique presents when more complex, difficult goals are desired.
Abstract: As both spatial and temporal characteristics of desired future conditions are becoming important measures of forest plan success, forest plans and forest planning goals are becoming complex. Heuristic techniques are becoming popular for developing alternative forest plans that include spatial constraints. Eight types of heuristic planning techniques were applied to three increasingly difficult forest planning problems where the objective function sought to maximize the amount of land in certain types of wildlife habitat. The goal of this research was to understand the relative challenges and opportunities each technique presents when more complex, difficult goals are desired. The eight heuristic techniques were random search, simulated annealing, great deluge, threshold accepting, tabu search with 1-opt moves, tabu search with 1-opt and 2-opt moves, genetic algorithm, and a hybrid tabu search / genetic algorithm search process. While our results should not be viewed as universal truths, we determined that for the problems we examined, there were three classes of techniques: very good (simulated annealing, threshold accepting, great deluge, tabu search with 1-opt and 2-opt moves, and tabu search / genetic algorithm), adequate (tabu search with 1-opt moves, genetic algorithm), and less than adequate (random search). The relative advantages in terms of solution time and complexity of programming code are discussed and should provide planners and researchers a guide to help match the appropriate technique to their planning problem. The hypothetical landscape model used to evaluate the techniques can also be used by others to further compare their techniques to the ones described here.
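
As a concrete illustration of one of the "very good" techniques listed above, the sketch below is a generic threshold-accepting loop in maximization form; the neighborhood move, objective, and threshold schedule are placeholders, and the paper's forest-specific moves and habitat objective are not reproduced.

```python
# Generic threshold-accepting search (maximization): accept any neighbor whose
# objective is not worse than the current solution by more than a shrinking threshold.
import random

def threshold_accepting(initial, neighbor, objective, threshold=10.0, decay=0.999, iters=10000):
    current, best = initial, initial
    for _ in range(iters):
        cand = neighbor(current)
        if objective(cand) >= objective(current) - threshold:
            current = cand
            if objective(current) > objective(best):
                best = current
        threshold *= decay   # placeholder cooling schedule
    return best
```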

01 Jan 2002
TL;DR: In this article, the authors develop a model for analyzing complex games with repeated interactions, for which a full game-theoretic analysis is intractable, and compute a heuristic-payoff table specifying the expected payoffs of the joint heuristic strategy space.
Abstract: We develop a model for analyzing complex games with repeated interactions, for which a full game-theoretic analysis is intractable. Our approach treats exogenously specified, heuristic strategies, rather than the atomic actions, as primitive, and computes a heuristic-payoff table specifying the expected payoffs of the joint heuristic strategy space. We analyze two games based on (i) automated dynamic pricing and (ii) continuous double auction. For each game we compute Nash equilibria of previously published heuristic strategies. To determine the most plausible equilibria, we study the replicator dynamics of a large population playing the strategies. In order to account for errors in estimation of payoffs or improvements in strategies, we also analyze the dynamics and equilibria based on perturbed payoffs.
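
A small sketch of the replicator dynamics used to study which heuristic strategies a large population would settle on, given a heuristic-payoff table: a strategy's share grows when its expected payoff exceeds the population average. The expected-payoff function stands in for a lookup against the table and is a placeholder.

```python
# Discrete-time replicator dynamics step over strategy shares x (summing to 1).
import numpy as np

def replicator_step(x, expected_payoff, dt=0.01):
    """expected_payoff(x) returns the expected payoff of each strategy against population x."""
    f = expected_payoff(x)
    avg = float(np.dot(x, f))
    x_new = x + dt * x * (f - avg)     # above-average strategies gain share
    x_new = np.clip(x_new, 0, None)
    return x_new / x_new.sum()
```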

MonographDOI
01 Feb 2002
TL;DR: Data Mining: A Heuristic Approach is a repository for the applications of these techniques in the area of data mining.
Abstract: From the Publisher: Real-life problems are known to be messy, dynamic and multi-objective, and involve high levels of uncertainty and constraints. Because traditional problem-solving methods are no longer capable of handling this level of complexity, heuristic search methods have attracted increasing attention in recent years for solving such problems. Inspired by nature, biology, statistical mechanics, physics and neuroscience, heuristic techniques are used to solve many problems where traditional methods have failed. Data Mining: A Heuristic Approach is a repository for the applications of these techniques in the area of data mining.

Journal ArticleDOI
TL;DR: In this paper, a modified shifting bottleneck heuristic is developed for minimizing the total weighted tardiness in a semiconductor wafer fabrication facility, which is characterized by re-entrant or re-circulating product flow through a number of different tool groups (one or more machines operating in parallel).
Abstract: Increases in the demand for integrated circuits have highlighted the importance of meeting customer quality and on-time delivery expectations in the semiconductor industry. A modified shifting bottleneck heuristic is developed for minimizing the total weighted tardiness in a semiconductor wafer fabrication facility. This ‘complex’ job shop is characterized by re-entrant or re-circulating product flow through a number of different tool groups (one or more machines operating in parallel). These tool groups typically contain batching machines, as well as machines that are subject to sequence-dependent setups. The disjunctive graph of the complex job shop is presented, along with a description of the proposed heuristic. Preliminary results indicate the heuristic's potential for promoting on-time deliveries by semiconductor manufacturers for their customers' orders. Copyright © 2002 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper develops a solution procedure that considers each objective separately and searches for a set of efficient solutions instead of a single optimum, within the framework of the evolutionary approach known as scatter search.
Abstract: In this paper we address the problem of routing school buses in a rural area. We approach this problem with a node routing model with multiple objectives that arise from conflicting viewpoints. From the point of view of cost, it is desirable to minimise the number of buses used to transport students from their homes to school and back. From the point of view of service, it is desirable to minimise the time that a given student spends en route. The current literature deals primarily with single-objective problems and the models with multiple objectives typically employ a weighted function to combine the objectives into a single one. We develop a solution procedure that considers each objective separately and searches for a set of efficient solutions instead of a single optimum. Our solution procedure is based on constructing, improving and then combining solutions within the framework of the evolutionary approach known as scatter search. Experimental testing with real data is used to assess the merit of our proposed procedure.

Journal ArticleDOI
TL;DR: A Tabu Search framework is introduced exploiting a new constructive heuristic for the evaluation of the neighborhood of 3BP, showing the effectiveness of the approach with respect to exact and heuristic algorithms from the literature.

Journal ArticleDOI
TL;DR: An Ant Colony Optimization approach is proposed to solve the 2-machine flowshop scheduling problem with the objective of minimizing both the total completion time and the makespan criteria.

Journal ArticleDOI
TL;DR: Results on a set of benchmark test problems show that the proposed heuristic produces excellent solutions in short computing times and has produced new best-known solutions for three of the test problems.

Journal Article
TL;DR: In this paper, the authors present two algorithms to prove termination of programs by synthesizing linear ranking functions, using an invariant generator based on iterative forward propagation with widening and extracting ranking functions from the generated invariants.
Abstract: We present two algorithms to prove termination of programs by synthesizing linear ranking functions. The first uses an invariant generator based on iterative forward propagation with widening and extracts ranking functions from the generated invariants by manipulating polyhedral cones. It is capable of finding subtle ranking functions which are linear combinations of many program variables, but is limited to programs with few variables. The second, more heuristic, algorithm targets the class of structured programs with single-variable ranking functions. Its invariant generator uses a heuristic extrapolation operator to avoid iterative forward propagation over program loops. For the programs we have considered, this approach converges faster and the invariants it discovers are sufficiently strong to imply the existence of ranking functions.
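
A worked example (not taken from the paper) of what a linear ranking function certifies for a single loop, to make the termination argument concrete:

```latex
% Illustrative example: a single loop with a linear ranking function.
% Loop:  while (x > 0) { x := x - y; }   under the invariant  y >= 1.
\[
  f(x, y) = x \ \text{ is a ranking function, since }\
  x > 0 \Rightarrow f(x, y) > 0
  \quad \text{and} \quad
  y \ge 1 \Rightarrow f(x - y,\, y) \le f(x, y) - 1 .
\]
% Every iteration strictly decreases a quantity that is bounded below on entry,
% so the loop terminates; synthesizing such linear functions automatically from
% generated invariants is what the two algorithms above aim to do.
```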

Patent
Kumar Gajjar, Jim Collins, Richard Meyer, Chandra Prasad, Dipam Patel
13 Feb 2002
TL;DR: In this article, a storage provisioning policy is created by specifying storage heuristics for storage attributes using storage heuristic metadata, which are defined to express a rule or constraint as a function of a storage attribute.
Abstract: A storage provisioning policy is created by specifying storage heuristics for storage attributes using storage heuristic metadata. Storage attributes characterize a storage device and storage heuristic metadata describe how to specify a storage heuristic. Using the storage heuristic metadata, storage heuristics are defined to express a rule or constraint as a function of a storage attribute. In addition, the storage provisioning policy may also specify mapping rules for exporting the storage to a consumer of the storage, such as the server or server cluster.

Proceedings ArticleDOI
12 May 2002
TL;DR: The problem of scheduling geographically distributed training staff and courses can be solved successfully by a genetic algorithm based hyperheuristic (hyper-GA); results are presented for four versions of the hyper-GA and for a range of simpler heuristics, applied to five test data sets.
Abstract: This paper investigates a genetic algorithm based hyperheuristic (hyper-GA) for scheduling geographically distributed training staff and courses. The aim of the hyper-GA is to evolve a good-quality heuristic for each given instance of the problem and use this to find a solution by applying a suitable ordering from a set of low-level heuristics. Since the user only supplies a number of low-level problem-specific heuristics and an evaluation function, the hyperheuristic can easily be reimplemented for a different type of problem, and we would expect it to be robust across a wide range of problem instances. We show that the problem can be solved successfully by a hyper-GA, presenting results for four versions of the hyper-GA as well as a range of simpler heuristics and applying them to five test data sets.
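
A brief sketch of the hyperheuristic representation the abstract describes: a chromosome is an ordering of low-level heuristics, and its fitness is the quality of the solution obtained by applying them in that order. The low-level heuristics and the evaluation function are user-supplied placeholders in this sketch.

```python
# Sketch of a hyper-GA chromosome: a sequence of indices into a set of
# user-supplied low-level heuristics, applied in order to build a solution.
import random

def random_chromosome(n_heuristics, length):
    return [random.randrange(n_heuristics) for _ in range(length)]

def apply_chromosome(chromosome, low_level_heuristics, solution):
    for idx in chromosome:
        solution = low_level_heuristics[idx](solution)   # each heuristic transforms the solution
    return solution

def fitness(chromosome, low_level_heuristics, initial_solution, evaluate):
    return evaluate(apply_chromosome(chromosome, low_level_heuristics, initial_solution))
```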