
Showing papers on "Heuristic" published in 2006


Journal ArticleDOI
TL;DR: Fast Downward as discussed by the authors uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators.
Abstract: Fast Downward is a classical planning system based on heuristic search. It can deal with general deterministic planning problems encoded in the propositional fragment of PDDL2.2, including advanced features like ADL conditions and effects and derived predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a progression planner, searching the space of world states of a planning task in the forward direction. However, unlike other PDDL planning systems, Fast Downward does not use the propositional PDDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multivalued planning tasks, which makes many of the implicit constraints of a propositional planning task explicit. Exploiting this alternative representation, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward's approach to solving multivalued planning tasks. We extend our earlier discussion of the causal graph heuristic to tasks involving axioms and conditional effects and present some novel techniques for search control that are used within Fast Downward's best-first search algorithm: preferred operators transfer the idea of helpful actions from local search to global best-first search, deferred evaluation of heuristic functions mitigates the negative effect of large branching factors on search performance, and multiheuristic best-first search combines several heuristic evaluation functions within a single search algorithm in an orthogonal way. We also describe efficient data structures for fast state expansion (successor generators and axiom evaluators) and present a new non-heuristic search algorithm called focused iterative-broadening search, which utilizes the information encoded in causal graphs in a novel way. Fast Downward has proven remarkably successful: It won the "classical" (i.e., propositional, non-optimising) track of the 4th International Planning Competition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions and provide some insights about the usefulness of the new search enhancements.
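
The deferred-evaluation idea generalizes beyond Fast Downward. Below is a minimal sketch in Python, assuming hashable states and user-supplied h, successors, and is_goal functions (all hypothetical names): each successor is queued under its parent's heuristic value, so h is computed only once per expanded state rather than once per generated state.

    import heapq
    from itertools import count

    def deferred_best_first(initial, successors, h, is_goal):
        """Greedy best-first search with deferred heuristic evaluation:
        successors are queued with their parent's h-value and h is
        computed only on expansion, which pays off when branching
        factors are large."""
        tie = count()                        # FIFO tie-breaking for equal keys
        queue = [(h(initial), next(tie), initial, [])]
        closed = set()
        while queue:
            _, _, state, path = heapq.heappop(queue)
            if state in closed:
                continue
            closed.add(state)
            if is_goal(state):
                return path
            h_state = h(state)               # the only heuristic evaluation
            for action, succ in successors(state):
                if succ not in closed:
                    # children inherit the parent's heuristic value
                    heapq.heappush(queue, (h_state, next(tie), succ, path + [action]))
        return None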

1,400 citations


Journal ArticleDOI
TL;DR: This paper gives an up-to-date and comprehensive survey of SALBP research with a special emphasis on recent outstanding and guiding contributions to the field.

833 citations


Journal ArticleDOI
TL;DR: An extensively revised heuristic-analytic theory of reasoning is presented incorporating three principles of hypothetical thinking: the singularity principle, the relevance principle, and the satisficing principle.
Abstract: An extensively revised heuristic-analytic theory of reasoning is presented incorporating three principles of hypothetical thinking. The theory assumes that reasoning and judgment are facilitated by the formation of epistemic mental models that are generated one at a time (singularity principle) by preconscious heuristic processes that contextualize problems in such a way as to maximize relevance to current goals (relevance principle). Analytic processes evaluate these models but tend to accept them unless there is good reason to reject them (satisficing principle). At a minimum, analytic processing of models is required so as to generate inferences or judgments relevant to the task instructions, but more active intervention may result in modification or replacement of default models generated by the heuristic system. Evidence for this theory is provided by a review of a wide range of literature on thinking and reasoning.

550 citations


Journal ArticleDOI
TL;DR: A unified model is developed that is capable of handling most variants of the Vehicle Routing Problem with Backhauls, and the proposed heuristic has improved the best known solutions for 227 benchmark instances.

421 citations


Journal ArticleDOI
TL;DR: The honey-bees mating optimization algorithm (HBMO) is presented and tested with a few benchmark examples consisting of highly non-linear constrained and/or unconstrained real-valued mathematical models; results obtained are promising and compare well with those of other well-known heuristic approaches.
Abstract: Over the last decade, evolutionary and meta-heuristic algorithms have been extensively used as search and optimization tools in various problem domains, including science, commerce, and engineering. Their broad applicability, ease of use, and global perspective may be considered the primary reasons for their success. The honey-bees mating process may also be considered as a typical swarm-based approach to optimization, in which the search algorithm is inspired by the process of real honey-bees mating. In this paper, the honey-bees mating optimization algorithm (HBMO) is presented and tested with a few benchmark examples consisting of highly non-linear constrained and/or unconstrained real-valued mathematical models. The performance of the algorithm is quite comparable with the results of the well-developed genetic algorithm. The HBMO algorithm is also applied to the operation of a single reservoir with 60 periods with the objective of minimizing the total square deviation from target demands. Results obtained are promising and compare well with the results of other well-known heuristic approaches.
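
As a rough illustration of the mating-flight structure common to HBMO descriptions (not the authors' exact procedure), the sketch below assumes user-supplied objective, random_solution, crossover, and local_search functions; the annealing-like acceptance rule and decaying flight speed are modelled on typical presentations of the algorithm.

    import math, random

    def hbmo(objective, random_solution, crossover, local_search,
             n_drones=50, n_broods=30, n_flights=100,
             speed0=1.0, decay=0.98):
        """Skeleton of honey-bees mating optimization: the queen (best-so-far
        solution) performs mating flights, probabilistically collecting drone
        solutions, then broods are generated by crossover and improved by
        workers (local search); the best brood replaces the queen if better."""
        queen = random_solution()
        f_queen = objective(queen)
        for _ in range(n_flights):
            drones = [random_solution() for _ in range(n_drones)]
            spermatheca, speed = [], speed0
            for d in drones:
                delta = abs(objective(d) - f_queen)
                if random.random() < math.exp(-delta / speed):  # annealing-like rule
                    spermatheca.append(d)
                speed *= decay                                   # queen slows down
            if not spermatheca:
                continue
            for _ in range(n_broods):
                brood = crossover(queen, random.choice(spermatheca))
                brood = local_search(brood)                      # worker improvement
                f_brood = objective(brood)
                if f_brood < f_queen:                            # minimization
                    queen, f_queen = brood, f_brood
        return queen, f_queen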

340 citations


Journal ArticleDOI
TL;DR: This paper combines elements from scatter search, a generic population-based evolutionary search method, and a recently introduced heuristic method for the optimisation of unconstrained continuous functions based on an analogy with electromagnetism theory to provide near-optimal heuristic solutions for resource-constrained project scheduling.

332 citations


Journal ArticleDOI
TL;DR: The goal is to identify the main features of different heuristic strategies, develop a unifying classification framework, and summarize relevant computational experience with the various heuristic shortest path algorithms developed in the past.

324 citations


Journal ArticleDOI
TL;DR: This study suggests two methods for determining the locations of relocated blocks: a branch-and-bound (B&B) algorithm and a decision rule based on an estimator of the expected number of additional relocations for a stack.

292 citations


Proceedings ArticleDOI
18 Sep 2006
TL;DR: A heuristic is presented that runs extremely fast while providing excellent (almost optimal) solutions to the general optimization problem of selecting Web services for each task so that the overall QoS and cost requirements of the composition are satisfied.
Abstract: This paper discusses the Quality of Service (QoS)- aware composition of Web Services. The work is based on the assumption that for each task in a workflow a set of alternative Web Services with similar functionality is available and that these Web Services have different QoS parameters and costs. This leads to the general optimization problem of how to select Web Services for each task so that the overall QoS and cost requirements of the composition are satisfied. Current proposals use exact algorithms or complex heuristics (e.g. genetic algorithms) to solve this problem. An actual implementation of a workflow engine (like our WSQoSX architecture), however, has to be able to solve these optimization problems in real-time and under heavy load. Therefore, we present a heuristic that performs extremely well while providing excellent (almost optimal) solutions. Using simulations, we show that in most cases our heuristic is able to calculate solutions that come as close as 99% to the optimal solution while taking less than 2% of the time of a standard exact algorithm. Further, we also investigate how much and under which circumstances the solution obtained by our heuristic can be further improved by other heuristics.
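
The paper's own heuristic (built for the WSQoSX engine) is not reproduced here; the sketch below is only a generic illustration of the underlying selection problem for a sequential workflow, with hypothetical inputs: per task a list of (cost, time) candidates and an end-to-end response-time budget. It starts from the cheapest choice per task and greedily upgrades wherever the time saving per unit of extra cost is largest.

    def select_services(candidates, max_time):
        """Greedy sketch for QoS-aware composition: start from the cheapest
        service per task, then, while the end-to-end response time exceeds
        the budget, upgrade the task offering the best time saving per unit
        of extra cost.  candidates: list (per task) of (cost, time) tuples."""
        choice = [min(range(len(c)), key=lambda i: c[i][0]) for c in candidates]
        total_time = sum(c[i][1] for c, i in zip(candidates, choice))
        while total_time > max_time:
            best, best_ratio = None, 0.0
            for t, c in enumerate(candidates):
                cur_cost, cur_time = c[choice[t]]
                for i, (cost, time) in enumerate(c):
                    saving, extra = cur_time - time, cost - cur_cost
                    if saving > 0:
                        ratio = saving / max(extra, 1e-9)
                        if ratio > best_ratio:
                            best, best_ratio = (t, i, saving), ratio
            if best is None:
                return None                      # budget cannot be met
            t, i, saving = best
            choice[t] = i
            total_time -= saving
        return choice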

273 citations


Journal ArticleDOI
TL;DR: A heuristic method for learning error correcting output codes matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion is presented, validated using the UCI database and applied to a real problem, the classification of traffic sign images.
Abstract: We present a heuristic method for learning error correcting output codes matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, the optimal codeword separation is sacrificed in favor of a maximum class discrimination in the partitions. The creation of the hierarchical partition set is performed using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
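
A sketch of the hierarchical-partition idea follows; the discriminative criterion is left abstract as a user-supplied score function (a hypothetical stand-in for the paper's criterion). The class set is recursively bipartitioned, and every internal node of the resulting binary tree contributes one column of the ECOC matrix.

    from itertools import combinations

    def build_ecoc(classes, score):
        """Build an ECOC matrix from a hierarchical binary partition of the
        class set.  Each internal node of the tree adds one column: classes
        in the first part get +1, in the second part -1, others 0.
        `score(a, b)` is a user-supplied discriminative criterion."""
        columns = []

        def best_split(group):
            best, best_s = None, float('-inf')
            members = list(group)
            # enumerate nontrivial bipartitions (fine for small class counts)
            for r in range(1, len(members) // 2 + 1):
                for part in combinations(members, r):
                    a, b = set(part), group - set(part)
                    s = score(a, b)
                    if s > best_s:
                        best, best_s = (a, b), s
            return best

        def split(group):
            if len(group) < 2:
                return
            a, b = best_split(group)
            columns.append({c: (1 if c in a else -1) for c in group})
            split(a)
            split(b)

        split(set(classes))
        # assemble matrix rows in class order, 0 where a class is not involved
        return [[col.get(c, 0) for col in columns] for c in classes]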

270 citations


Journal ArticleDOI
21 Aug 2006
TL;DR: A concrete approach to multirobot mapping is presented in the form of a special similarity metric and a stochastic search algorithm, in which the heuristic similarity metric guides the search toward optimal solutions.
Abstract: Mapping can potentially be sped up in a significant way by using multiple robots exploring different parts of the environment. But the core question of multirobot mapping is how to integrate the data of the different robots into a single global map. A significant amount of research exists in the area of multirobot mapping that deals with techniques to estimate the relative poses of the robots at the start or during the mapping process. With map merging, in contrast, the robots individually build local maps without any knowledge about their relative positions. The goal is then to identify regions of overlap at which the local maps can be joined together. A concrete approach to this idea is presented in the form of a special similarity metric and a stochastic search algorithm. Given two maps m and m', the search algorithm transforms m' by rotations and translations to find a maximum overlap between m and m'. In doing so, the heuristic similarity metric guides the search algorithm toward optimal solutions. Results from experiments with up to six robots are presented based on simulated as well as real-world map data.
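
A minimal sketch of the search component under stated assumptions: similarity(dx, dy, theta) is a hypothetical user-supplied function scoring the overlap of m and m' after transforming m'. The paper's actual algorithm is an adaptive random walk with a purpose-built metric; this sketch only hill-climbs with Gaussian moves and occasional restarts from the best transform found.

    import math, random

    def merge_search(similarity, steps=20000,
                     sigma_xy=10.0, sigma_theta=math.radians(5)):
        """Stochastic search for the rigid transform (dx, dy, theta) that
        maximizes a map-overlap similarity metric."""
        best = cur = (0.0, 0.0, 0.0)
        best_s = cur_s = similarity(*cur)
        for _ in range(steps):
            dx = cur[0] + random.gauss(0, sigma_xy)
            dy = cur[1] + random.gauss(0, sigma_xy)
            th = cur[2] + random.gauss(0, sigma_theta)
            s = similarity(dx, dy, th)
            if s >= cur_s:                      # accept improving (or equal) moves
                cur, cur_s = (dx, dy, th), s
                if s > best_s:
                    best, best_s = cur, s
            elif random.random() < 0.01:        # occasional restart from the best
                cur, cur_s = best, best_s
        return best, best_s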

Proceedings ArticleDOI
21 Oct 2006
TL;DR: This work investigates variants of Lloyd's heuristic for clustering high dimensional data in an attempt to explain its popularity (a half century after its introduction) among practitioners, and proposes and justifies a clusterability criterion for data sets.
Abstract: We investigate variants of Lloyd's heuristic for clustering high dimensional data in an attempt to explain its popularity (a half century after its introduction) among practitioners, and in order to suggest improvements in its application. We propose and justify a clusterability criterion for data sets. We present variants of Lloyd's heuristic that quickly lead to provably near-optimal clustering solutions when applied to well-clusterable instances. This is the first performance guarantee for a variant of Lloyd's heuristic. The provision of a guarantee on output quality does not come at the expense of speed: some of our algorithms are candidates for being faster in practice than currently used variants of Lloyd's method. In addition, our other algorithms are faster on well-clusterable instances than recently proposed approximation algorithms, while maintaining similar guarantees on clustering quality. Our main algorithmic contribution is a novel probabilistic seeding process for the starting configuration of a Lloyd-type iteration.
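
The flavour of a distance-weighted probabilistic seeding can be sketched as follows (a hedged illustration, not the paper's exact procedure): after a uniform first pick, each further center is sampled with probability proportional to its squared distance from the nearest center chosen so far. dist2 is a user-supplied squared-distance function.

    import random

    def d2_seed(points, k, dist2):
        """Probabilistic seeding for a Lloyd-type iteration: the first center
        is uniform; each further center is drawn with probability proportional
        to its squared distance from the nearest center chosen so far."""
        centers = [random.choice(points)]
        d2 = [dist2(p, centers[0]) for p in points]
        while len(centers) < k:
            r = random.uniform(0, sum(d2))
            acc = 0.0
            for i, w in enumerate(d2):
                acc += w
                if acc >= r:
                    centers.append(points[i])
                    break
            d2 = [min(w, dist2(p, centers[-1])) for w, p in zip(d2, points)]
        return centers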

Journal ArticleDOI
TL;DR: A new metaheuristic to solve the LRP with capacitated routes and depots is presented; it is based on an extended and randomized version of the Clarke and Wright algorithm and is competitive with a metaheuristic published for the case of uncapacitated depots.
Abstract: As shown in recent research, the costs in distribution systems may be excessive if routes are ignored when locating depots. The location routing problem (LRP) overcomes this drawback by simultaneously tackling location and routing decisions. This paper presents a new metaheuristic to solve the LRP with capacitated routes and depots. A first phase executes a GRASP, based on an extended and randomized version of the Clarke and Wright algorithm. This phase is implemented with a learning process on the choice of depots. In a second phase, new solutions are generated by a post-optimization using path relinking. The method is evaluated on sets of randomly generated instances, and compared to other heuristics and a lower bound. Solutions are obtained in a reasonable amount of time for such a strategic problem. Furthermore, the algorithm is competitive with a metaheuristic published for the case of uncapacitated depots.
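
A compact sketch of a randomized savings construction in the spirit of the first phase (assumptions: dist is a matrix with the depot at index 0, demand is indexed by customer, and the restricted-candidate-list rule with parameter alpha is illustrative, not the authors' exact setting):

    import random

    def grasp_savings(dist, n, capacity, demand, alpha=0.3):
        """Randomized Clarke-Wright savings heuristic (GRASP flavour):
        s(i, j) = d(0, i) + d(0, j) - d(i, j); each step picks a merge at
        random from the top alpha-fraction of remaining savings and applies
        it if both endpoints are route extremities and capacity permits."""
        routes = {i: [i] for i in range(1, n)}          # customer -> its route
        load = {i: demand[i] for i in range(1, n)}      # keyed by route head
        savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                          for i in range(1, n) for j in range(i + 1, n)),
                         reverse=True)
        while savings:
            pick = random.randrange(max(1, int(len(savings) * alpha)))
            _, i, j = savings.pop(pick)
            ri, rj = routes[i], routes[j]
            if ri is rj or load[ri[0]] + load[rj[0]] > capacity:
                continue
            if ri[-1] == i and rj[0] == j:              # ...-i joined to j-...
                merged = ri + rj
            elif rj[-1] == j and ri[0] == i:            # ...-j joined to i-...
                merged = rj + ri
            else:
                continue
            load[merged[0]] = load[ri[0]] + load[rj[0]]
            for c in merged:
                routes[c] = merged
        # collect distinct routes (each to be wrapped with depot 0 at both ends)
        return list({id(r): r for r in routes.values()}.values())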

Journal ArticleDOI
TL;DR: It is shown that case-based reasoning can act effectively as an intelligent approach to learn which heuristics work well for particular timetabling situations.
Abstract: This paper presents a case-based heuristic selection approach for automated university course and exam timetabling. The method described in this paper is motivated by the goal of developing timetabling systems that are fundamentally more general than the current state of the art. Heuristics that worked well in previous similar situations are memorized in a case base and are retrieved for solving the problem at hand. Knowledge discovery techniques are employed in two distinct scenarios. Firstly, we model the problem and the problem solving situations along with specific heuristics for those problems. Secondly, we refine the case base and discard cases which prove to be non-useful in solving new problems. Experimental results are presented and analyzed. It is shown that case-based reasoning can act effectively as an intelligent approach to learn which heuristics work well for particular timetabling situations. We conclude by outlining and discussing potential research issues in this critical area of knowledge discovery for different difficult timetabling problems.

Journal ArticleDOI
TL;DR: In this paper, the authors show how to reduce revenue management problems to a common formulation in which the firm controls the aggregate rate at which all products jointly consume resource capacity, highlighting their common structure, and in some cases leading to algorithmic simplifications through the reduction in the control dimension of the associated optimization problems.
Abstract: Consider a firm that owns a fixed capacity of a resource that is consumed in the production or delivery of multiple products. The firm strives to maximize its total expected revenues over a finite horizon, either by choosing a dynamic pricing strategy for each product or, if prices are fixed, by selecting a dynamic rule that controls the allocation of capacity to requests for the different products. This paper shows how these well-studied revenue management problems can be reduced to a common formulation in which the firm controls the aggregate rate at which all products jointly consume resource capacity, highlighting their common structure, and in some cases leading to algorithmic simplifications through the reduction in the control dimension of the associated optimization problems. In the context of their associated deterministic (fluid) formulations, this reduction leads to a closed-form characterization of the optimal controls, and suggests several natural static and dynamic pricing heuristics. These are analyzed asymptotically and through an extensive numerical study. In the context of the former, we show that resolving the fluid heuristic achieves asymptotically optimal performance under fluid scaling.

Book ChapterDOI
09 Sep 2006
TL;DR: A general mathematical framework is proposed that is suited to answering three questions: whether all objectives are necessary to preserve the problem characteristics, under which conditions an objective reduction is feasible, and how a minimum set of objectives can be computed. Corresponding exact and heuristic algorithms are also proposed.
Abstract: Most of the available multiobjective evolutionary algorithms (MOEA) for approximating the Pareto set have been designed for and tested on low dimensional problems (≤3 objectives). However, it is known that problems with a high number of objectives cause additional difficulties in terms of the quality of the Pareto set approximation and running time. Furthermore, the decision making process becomes harder as more objectives are involved. In this context, the question arises whether all objectives are necessary to preserve the problem characteristics. One may also ask under which conditions such an objective reduction is feasible, and how a minimum set of objectives can be computed. In this paper, we propose a general mathematical framework, suited to answer these three questions, and corresponding algorithms, both exact and heuristic. The heuristic variants are geared towards direct integration into the evolutionary search process. Moreover, extensive experiments for four well-known test problems show that substantial dimensionality reductions are possible on the basis of the proposed methodology.
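
The error-free core of the objective-reduction question can be sketched directly (a simplified illustration of the framework, assuming minimization and a finite sample of objective vectors): an objective subset is redundant if dropping it leaves the pairwise dominance relation on the sample unchanged.

    def dominates(a, b, objs):
        """Pareto dominance of a over b w.r.t. objective indices `objs`
        (minimization): no worse everywhere, strictly better somewhere."""
        return (all(a[k] <= b[k] for k in objs)
                and any(a[k] < b[k] for k in objs))

    def relation(points, objs):
        """The dominance relation induced on a solution sample by `objs`."""
        return {(i, j) for i, a in enumerate(points)
                       for j, b in enumerate(points)
                       if i != j and dominates(a, b, objs)}

    def greedy_reduce(points, n_obj):
        """Greedily drop objectives as long as the dominance relation on the
        sample is preserved exactly (the error-free case; the paper also
        treats approximate, tolerance-based preservation)."""
        keep = set(range(n_obj))
        full = relation(points, keep)
        for k in range(n_obj):
            trial = keep - {k}
            if trial and relation(points, trial) == full:
                keep = trial
        return sorted(keep)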

Posted Content
TL;DR: It is shown that the corresponding decision version of modularity maximization is NP-complete in the strong sense; as a consequence, any efficient algorithm is only heuristic and yields suboptimal partitions on many instances.
Abstract: Several algorithms have been proposed to compute partitions of networks into communities that score high on a graph clustering index called modularity. While publications on these algorithms typically contain experimental evaluations to emphasize the plausibility of results, none of these algorithms has been shown to actually compute optimal partitions. We here settle the unknown complexity status of modularity maximization by showing that the corresponding decision version is NP-complete in the strong sense. As a consequence, any efficient, i.e. polynomial-time, algorithm is only heuristic and yields suboptimal partitions on many instances.
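
For reference, the quantity being maximized is the standard Newman-Girvan modularity of a partition assigning each vertex i to a community c_i:

    Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)

where A is the adjacency matrix, k_i the degree of vertex i, m the number of edges, and \delta(c_i, c_j) equals 1 when vertices i and j share a community and 0 otherwise.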

Journal ArticleDOI
TL;DR: In this paper, the authors tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically and found that making correct analytic inferences demanded more processing time than did making heuristic inferences.
Abstract: Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).

Journal ArticleDOI
TL;DR: It has been shown that the HMOEA is effective in solving multi-objective combinatorial optimization problems, such as finding useful trade-off solutions for the TTVRP.

Journal ArticleDOI
TL;DR: This paper considers the problem of arranging and rearranging manufacturing facilities such that the sum of the material handling and rearrangement costs is minimized and develops two simulated annealing heuristics for the dynamic facility layout problem.

Journal ArticleDOI
TL;DR: It is proved that even in this simple case the optimization problem is NP-hard; some efficient, scalable, and distributed heuristic approximation algorithms are proposed for solving it, and the total transmission cost can be significantly improved over direct transmission or the shortest path tree.
Abstract: We consider the problem of correlated data gathering by a network with a sink node and a tree-based communication structure, where the goal is to minimize the total transmission cost of transporting the information collected by the nodes, to the sink node. For source coding of correlated data, we consider a joint entropy-based coding model with explicit communication where coding is simple and the transmission structure optimization is difficult. We first formulate the optimization problem in the general case and then study a network setting where the entropy conditioning at nodes does not depend on the amount of side information, but only on its availability. We prove that even in this simple case, the optimization problem is NP-hard. We propose some efficient, scalable, and distributed heuristic approximation algorithms for solving this problem and show by numerical simulations that the total transmission cost can be significantly improved over direct transmission or the shortest path tree. We also present an approximation algorithm that provides a tree transmission structure with total cost within a constant factor from the optimal.

Journal ArticleDOI
TL;DR: A numerical model selection heuristic based on a convex hull is proposed and results show that this heuristic performs almost perfectly, except for Tucker3 data arrays with at least one small mode and a relatively large amount of error.
Abstract: Several three-mode principal component models can be considered for the modelling of three-way, three-mode data, including the Candecomp/Parafac, Tucker3, Tucker2, and Tucker1 models. The following question then may be raised: given a specific data set, which of these models should be selected, and at what complexity (i.e. with how many components)? We address this question by proposing a numerical model selection heuristic based on a convex hull. Simulation results show that this heuristic performs almost perfectly, except for Tucker3 data arrays with at least one small mode and a relatively large amount of error.
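
A sketch of a convex-hull-based selection rule under stated assumptions (models given as (complexity, fit) pairs with fit increasing in complexity; the slope-ratio tie-break is one common formalization, not necessarily the authors' exact rule): retain only models on the upper convex hull of the complexity-fit plot, then pick the hull point where the gain in fit flattens most.

    def chull_select(models):
        """models: list of (complexity, fit) pairs.  Keep points on the upper
        convex hull, then choose the hull point whose preceding slope most
        exceeds its following slope (a numerical scree-like ratio).
        Returns None if fewer than three hull points remain."""
        pts = sorted(models)
        hull = []
        for p in pts:                      # upper hull by cross-product test
            while len(hull) >= 2:
                (x1, y1), (x2, y2) = hull[-2], hull[-1]
                if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                    hull.pop()             # hull[-1] lies on/below the chord
                else:
                    break
            hull.append(p)
        best, best_ratio = None, 0.0
        for k in range(1, len(hull) - 1):
            (x0, y0), (x1, y1), (x2, y2) = hull[k - 1], hull[k], hull[k + 1]
            pre = (y1 - y0) / (x1 - x0)
            post = (y2 - y1) / (x2 - x1)
            ratio = pre / post if post > 0 else float('inf')
            if ratio > best_ratio:
                best, best_ratio = hull[k], ratio
        return best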

Journal ArticleDOI
TL;DR: This work considers the case when customers can call in orders during the daily operations, and a heuristic solution method is developed where sample scenarios are generated, solved heuristically and combined iteratively to form a solution to the overall problem.
Abstract: The statement of the standard vehicle routing problem cannot always capture all aspects of real-world applications. As a result, extensions or modifications to the model are warranted. Here we consider the case when customers can call in orders during the daily operations; i.e., both customer locations and demands may be unknown in advance. This is modeled as a combined dynamic and stochastic programming problem, and a heuristic solution method is developed where sample scenarios are generated, solved heuristically, and combined iteratively to form a solution to the overall problem.
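
A skeleton of the sample-scenario idea with hypothetical helpers (sample_future draws call-in customers from the demand distribution; solve_vrp is any static VRP heuristic): each scenario is solved independently, and the plans are combined here by a simple consensus vote on the first decision, a simplification of the iterative combination described above.

    import random
    from collections import Counter

    def sample_scenario_plan(known_requests, sample_future, solve_vrp,
                             n_scenarios=30):
        """Multiple-scenario skeleton for dynamic and stochastic routing:
        solve several sampled scenarios heuristically, then return the plan
        whose first decision (next customer per vehicle) is backed by the
        largest number of scenario solutions."""
        votes, plans = Counter(), {}
        for _ in range(n_scenarios):
            scenario = known_requests + sample_future()   # sampled call-ins
            plan = solve_vrp(scenario)                    # any VRP heuristic
            key = tuple(route[0] for route in plan if route)
            votes[key] += 1
            plans.setdefault(key, plan)
        consensus_key, _ = votes.most_common(1)[0]
        return plans[consensus_key]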

Journal ArticleDOI
TL;DR: It is shown that the problem of routing messages in a wireless sensor network so as to maximize network lifetime is NP-hard, and an online heuristic that performs two shortest path computations to route each message is developed, resulting in greater lifetime.
Abstract: We show that the problem of routing messages in a wireless sensor network so as to maximize network lifetime is NP-hard. In our model, the online model, each message has to be routed without knowledge of future route requests. We also develop an online heuristic to maximize network lifetime. Our heuristic, which performs two shortest path computations to route each message, is superior to previously published heuristics for lifetime maximization: it results in greater lifetime and its performance is less sensitive to the selection of heuristic parameters. Additionally, our heuristic is superior on the capacity metric.
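
The exact edge weights of the published heuristic are not reproduced here; the sketch below only illustrates the general two-pass pattern under assumptions: pass 1 finds the minimum-energy path, pass 2 re-routes while avoiding senders whose residual energy would drop below a fraction gamma of the current network maximum, falling back to pass 1 if that fails.

    import heapq

    def dijkstra(adj, src, dst, weight):
        """Shortest path src -> dst; adj: node -> list of (nbr, energy)."""
        dist, prev = {src: 0.0}, {}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float('inf')):
                continue
            for v, e in adj[u]:
                w = weight(u, v, e)
                if w == float('inf'):
                    continue                      # link excluded by the weight rule
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        if dst not in dist:
            return None
        path, u = [dst], dst
        while u != src:
            u = prev[u]
            path.append(u)
        return path[::-1]

    def route_message(adj, residual, src, dst, gamma=0.5):
        """Two shortest-path computations per message: pass 1 minimizes total
        energy; pass 2 additionally protects nodes with low residual energy."""
        path1 = dijkstra(adj, src, dst, lambda u, v, e: e)
        if path1 is None:
            return None
        cap = gamma * max(residual.values())
        path2 = dijkstra(adj, src, dst,
                         lambda u, v, e: e if residual[u] - e >= cap
                                         else float('inf'))
        return path2 or path1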

Journal ArticleDOI
TL;DR: The tolerance-based DUO principle is introduced; its solution existence and uniqueness are discussed, a solution heuristic is developed, and its properties are demonstrated through numerical examples.
Abstract: Dynamic Traffic Assignment (DTA) is long recognized as a key component for network planning and transport policy evaluations as well as for real-time traffic operation and management. How traffic is encapsulated in a DTA model has important implications on the accuracy and fidelity of the model results. This study compares and contrasts the properties of DTA modelled with point queues versus those with physical queues, and discusses their implications. One important finding is that with the more accurate physical queue paradigm, under certain congested conditions, solutions for the commonly adopted dynamic user optimal (DUO) route choice principle just do not exist. To provide some initial thinking to accommodate this finding, this study introduces the tolerance-based DUO principle. This paper also discusses its solution existence and uniqueness, develops a solution heuristic, and demonstrates its properties through numerical examples. Finally, we conclude by presenting some prospective future research directions.

Journal ArticleDOI
TL;DR: This paper addresses the assembly flowshop scheduling problem with respect to a due date-based performance measure, i.e., maximum lateness, and proposes three heuristics for the problem: particle swarm optimization, tabu search, and the earliest due date (EDD) rule.

Journal ArticleDOI
TL;DR: This work argues that the retrieval of subjective recognition precedes that of an objective probabilistic cue and occurs at little to no cognitive cost, which gives rise to 2 predictions, both of which have been empirically supported: inferences in line with the recognition heuristic are made faster than inferences inconsistent with it and are more prevalent under time pressure.
Abstract: The recognition heuristic is a prime example of a boundedly rational mind tool that rests on an evolved capacity, recognition, and exploits environmental structures. When originally proposed, it was conjectured that no other probabilistic cue reverses the recognition-based inference (D. G. Goldstein & G. Gigerenzer, 2002). More recent studies challenged this view and gave rise to the argument that recognition enters inferences just like any other probabilistic cue. By linking research on the heuristic with research on recognition memory, the authors argue that the retrieval of recognition information is not tantamount to the retrieval of other probabilistic cues. Specifically, the retrieval of subjective recognition precedes that of an objective probabilistic cue and occurs at little to no cognitive cost. This retrieval primacy gives rise to 2 predictions, both of which have been empirically supported: Inferences in line with the recognition heuristic (a) are made faster than inferences inconsistent with it and (b) are more prevalent under time pressure. Suspension of the heuristic, in contrast, requires additional time, and direct knowledge of the criterion variable, if available, can trigger such suspension.

Journal ArticleDOI
TL;DR: A new insertion-based construction heuristic to solve the multi-vehicle pickup and delivery problem with time windows that considers not only the classical incremental distance measure in the insertion evaluation criteria but also the cost of reducing the time window slack due to the insertion.

Journal ArticleDOI
TL;DR: The results demonstrate the capability of the proposed algorithm to deal with the multi-objective nature of the re-balancing problem, yielding solutions with advantages both in workload re-assignment and in completion costs.

Journal ArticleDOI
TL;DR: An insertion-based procedure to generate good initial solutions and a heuristic based on record-to-record travel, tabu lists, and route improvement procedures are proposed to solve the vehicle routing problem with simultaneous deliveries and pickups.
Abstract: The vehicle routing problem with backhauls involves the delivery and pickup of goods at different customer locations. In many practical situations, however, the same customer may require both a delivery of goods from the distribution centre and a pickup of recycled items simultaneously. In this paper, an insertion-based procedure to generate good initial solutions and a heuristic based on record-to-record travel, tabu lists, and route improvement procedures are proposed to solve the vehicle routing problem with simultaneous deliveries and pickups. Computational characteristics of the insertion-based procedure and the hybrid heuristic are evaluated through computational experiments. Computational results show that the insertion-based procedure obtained better solutions than those found in the literature. Computational experiments also show that the proposed hybrid heuristic is able to reduce the gap between initial solutions and optimal solutions effectively and is capable of obtaining optimal solutions very efficiently for small-sized problems.
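
The feasibility core that makes insertion non-trivial here is the load profile: the vehicle leaves the depot carrying every remaining delivery and accumulates pickups along the route, so each candidate position must be re-checked against capacity. A hedged sketch (data layouts are assumptions, not the authors' implementation):

    def load_profile_feasible(route, delivery, pickup, capacity):
        """A route with simultaneous deliveries and pickups is feasible iff
        the running load never exceeds capacity: the vehicle starts with all
        deliveries on board, then each stop removes its delivery and adds
        its pickup."""
        load = sum(delivery[c] for c in route)
        if load > capacity:
            return False
        for c in route:
            load += pickup[c] - delivery[c]
            if load > capacity:
                return False
        return True

    def cheapest_insertion(route, customer, dist, delivery, pickup, capacity):
        """Return (extra_distance, position) of the cheapest feasible
        insertion of `customer` into `route` (depot = node 0 at both ends),
        or None if no position is feasible."""
        best = None
        for pos in range(len(route) + 1):
            trial = route[:pos] + [customer] + route[pos:]
            if not load_profile_feasible(trial, delivery, pickup, capacity):
                continue
            before = route[pos - 1] if pos > 0 else 0
            after = route[pos] if pos < len(route) else 0
            extra = (dist[before][customer] + dist[customer][after]
                     - dist[before][after])
            if best is None or extra < best[0]:
                best = (extra, pos)
        return best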