Proceedings Article

New optimization functions for potential heuristics

07 Jun 2015-pp 193-201
TL;DR: Objectives are explored that attempt to maximize heuristic estimates for all states (reachable and unreachable), maximize heuristics for a sample of reachable states, maximize the number of detected dead ends, or minimize search effort.
Abstract: Potential heuristics, recently introduced by Pommerening et al., characterize admissible and consistent heuristics for classical planning as a set of declarative constraints. Every feasible solution for these constraints defines an admissible heuristic, and we can obtain heuristics that optimize certain criteria such as informativeness by specifying suitable objective functions. The original paper only considered one such objective function: maximizing the heuristic value of the initial state. In this paper, we explore objectives that attempt to maximize heuristic estimates for all states (reachable and unreachable), maximize heuristic estimates for a sample of reachable states, maximize the number of detected dead ends, or minimize search effort. We also search for multiple heuristics with complementary strengths that can be combined to obtain even better heuristics.
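
To make the constraint view concrete, here is a minimal sketch of such a linear program for a toy task, assuming the task is in transition normal form (every effect variable also occurs in the operator's precondition) and the goal assigns every variable; the toy task data and the use of scipy.optimize.linprog are illustrative choices, not the paper's implementation.

```python
# Minimal potential-heuristic LP sketch for a toy task (illustrative only).
from scipy.optimize import linprog

# Toy SAS+-like task: two binary variables; a fact is a (variable, value) pair.
facts = [("v0", 0), ("v0", 1), ("v1", 0), ("v1", 1)]
col = {f: i for i, f in enumerate(facts)}

initial_state = {"v0": 0, "v1": 0}
goal = {"v0": 1, "v1": 1}
operators = [  # (precondition, effect, cost)
    ({"v0": 0}, {"v0": 1}, 1.0),
    ({"v1": 0}, {"v1": 1}, 1.0),
]

n = len(facts)
A_ub, b_ub = [], []

# Consistency: h(s) - h(s[o]) <= cost(o) for every operator o, which for
# potentials reduces to sum over changed variables of pot(pre) - pot(eff).
for pre, eff, cost in operators:
    row = [0.0] * n
    for var, new_val in eff.items():
        row[col[(var, pre[var])]] += 1.0
        row[col[(var, new_val)]] -= 1.0
    A_ub.append(row)
    b_ub.append(cost)

# Goal-awareness: the heuristic value of the goal state must be <= 0.
row = [0.0] * n
for var, val in goal.items():
    row[col[(var, val)]] += 1.0
A_ub.append(row)
b_ub.append(0.0)

# Objective from the original paper: maximize h(initial state). linprog
# minimizes, so we negate. The objectives explored in this paper (all states,
# samples, dead ends, search effort) would only change this vector.
c = [0.0] * n
for var, val in initial_state.items():
    c[col[(var, val)]] -= 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n, method="highs")
potentials = dict(zip(facts, res.x))
h_init = sum(potentials[(v, val)] for v, val in initial_state.items())
print(potentials, h_init)  # h(initial state) should come out as 2.0

```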
Citations
Journal Article
TL;DR: This work proposes a greedy algorithm to generate orders and shows how to use hill-climbing search to optimize a given order; combining both techniques leads to significantly better heuristic estimates than using the best random order generated in the same time.
Abstract: Cost partitioning is a method for admissibly combining a set of admissible heuristic estimators by distributing operator costs among the heuristics. Computing an optimal cost partitioning, i.e., the operator cost distribution that maximizes the heuristic value, is often prohibitively expensive to compute. Saturated cost partitioning is an alternative that is much faster to compute and has been shown to yield high-quality heuristics. However, its greedy nature makes it highly susceptible to the order in which the heuristics are considered. We propose a greedy algorithm to generate orders and show how to use hill-climbing search to optimize a given order. Combining both techniques leads to significantly better heuristic estimates than using the best random order that is generated in the same time. Since there is often no single order that gives good guidance on the whole state space, we use the maximum of multiple orders as a heuristic that is significantly better informed than any single-order heuristic, especially when we actively search for a set of diverse orders.
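
The order-improvement step can be sketched independently of the component heuristics: treat the quality of an order as a black box (for example, the sum of saturated cost partitioning heuristic values over sampled states) and keep pairwise swaps that increase it. The hill climber and the toy evaluation function below are schematic assumptions, not the authors' implementation.

```python
from typing import Callable, List, Sequence

def hill_climb_order(order: List[int],
                     evaluate_order: Callable[[Sequence[int]], float]) -> List[int]:
    """Greedily improve an order of component heuristics by pairwise swaps."""
    best = list(order)
    best_value = evaluate_order(best)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(i + 1, len(best)):
                candidate = list(best)
                candidate[i], candidate[j] = candidate[j], candidate[i]
                value = evaluate_order(candidate)
                if value > best_value:  # keep a swap only if it helps
                    best, best_value = candidate, value
                    improved = True
    return best

# Toy usage: pretend the order's quality is just its first element, so the
# climber should move the largest index to the front.
print(hill_climb_order([0, 1, 2], lambda o: float(o[0])))
```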

24 citations


Cites methods from "New optimization functions for potential heuristics"

  • ..., 2007), h^LM-cut (Helmert & Domshlak, 2009), the operator counting heuristic with the state equation and LM-Cut constraints h^SEQ+LM-cut (Pommerening, Röger, Helmert, & Bonet, 2014), the diverse potentials heuristic h^pot (Seipp et al., 2015), merge-and-shrink (h^M&S) using bisimulation and the SCC-DFP merge strategy (Helmert, Haslum, Hoffmann, & Nissim, 2014; Sievers, Wehrle, & Helmert, 2016), and the state-equation heuristic h^SEQ (Bonet, 2013)....

Journal Article
TL;DR: In cases where forward search can identify dead-ends, and where h^C dead-end detection is effective, the techniques reduce the depth-first search space size by several orders of magnitude, and often result in state-of-the-art performance.

19 citations


Cites methods from "New optimization functions for potential heuristics"

  • ...tics [51, 52, 53], and critical-path heuristics [39, 54, 55]....

  • ...We run potential heuristics [51, 52] for dead-end detection, another component of Aidos, and the one...

  • ...Here, we will empirically investigate the combination of u with the aforementioned dead-end detectors based on merge-and-shrink abstraction [36, 49] respectively potential heuristics [52, 53]....

Proceedings Article
09 Jul 2016
TL;DR: This work analyzes how complex a heuristic function must be to directly guide a state-space search algorithm towards the goal and examines functions that evaluate states with a weighted sum of state features.
Abstract: We analyze how complex a heuristic function must be to directly guide a state-space search algorithm towards the goal. As a case study, we examine functions that evaluate states with a weighted sum of state features. We measure the complexity of a domain by the complexity of the required features. We analyze conditions under which the search algorithm runs in polynomial time and show complexity results for several classical planning domains.
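
Read literally, the heuristics studied here have the form h(s) = w_1·f_1(s) + ... + w_n·f_n(s); the snippet below builds such an evaluator from feature functions and weights. The example features and weights are made up for illustration.

```python
from typing import Callable, Dict, Sequence

State = Dict[str, int]
Feature = Callable[[State], float]

def linear_heuristic(features: Sequence[Feature],
                     weights: Sequence[float]) -> Callable[[State], float]:
    """h(s) = sum_i w_i * f_i(s): a weighted sum of state features."""
    return lambda s: sum(w * f(s) for f, w in zip(features, weights))

# Example features: one indicator per unsatisfied goal fact of a toy task.
h = linear_heuristic(
    [lambda s: float(s["v0"] != 1), lambda s: float(s["v1"] != 1)],
    [1.0, 1.0],
)
print(h({"v0": 0, "v1": 1}))  # -> 1.0
```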

17 citations

Journal Article
07 Feb 2020
TL;DR: It is shown that the previously proposed fam-groups-based pruning techniques for the STRIPS representation can be utilized during the grounding process with lifted fam-groups, i.e., before the full STRIPS representation is known.
Abstract: In this paper, we focus on the inference of mutex groups in the lifted (PDDL) representation. We formalize the inference and prove that the most commonly used translator from the Fast Downward (FD) planning system infers a certain subclass of mutex groups, called fact-alternating mutex groups (fam-groups). Based on that, we show that the previously proposed fam-groups-based pruning techniques for the STRIPS representation can be utilized during the grounding process with lifted fam-groups, i.e., before the full STRIPS representation is known. Furthermore, we propose an improved inference algorithm for lifted fam-groups that produces a richer set of fam-groups than the FD translator and we demonstrate a positive impact on the number of pruned operators and overall coverage.

16 citations


Cites methods from "New optimization functions for potential heuristics"

  • ...…shrink strategy (Helmert et al. 2014; Sievers, Wehrle, and Helmert 2016), the potential (pot) heuristic optimized for all syntactic states (Seipp, Pommerening, and Helmert 2015), and two non-portfolio winners of the last IPC 2018, Complementary1 (comp1) (Franco et al. 2018), and…...

References
Journal Article
TL;DR: How heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching is described and an optimality property of a class of search strategies is demonstrated.
Abstract: Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
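
This is the paper that introduced A* and proved its optimality property; a compact generic sketch of the algorithm follows. With an admissible heuristic, the first goal state removed from the open list closes a cost-optimal path. The graph encoding and the toy example are assumptions for illustration.

```python
import heapq
import itertools
from typing import Callable, Dict, Hashable, Iterable, List, Optional, Tuple

def a_star(start: Hashable,
           is_goal: Callable[[Hashable], bool],
           successors: Callable[[Hashable], Iterable[Tuple[Hashable, float]]],
           h: Callable[[Hashable], float]) -> Optional[List[Hashable]]:
    """Return a cheapest path from start to a goal state, or None."""
    tie = itertools.count()              # breaks ties between equal f-values
    open_list = [(h(start), next(tie), start)]
    g: Dict[Hashable, float] = {start: 0.0}
    parent: Dict[Hashable, Hashable] = {}
    closed = set()
    while open_list:
        _, _, state = heapq.heappop(open_list)
        if state in closed:
            continue
        closed.add(state)
        if is_goal(state):
            path = [state]
            while state in parent:       # walk back to the start state
                state = parent[state]
                path.append(state)
            return path[::-1]
        for succ, cost in successors(state):
            new_g = g[state] + cost
            if new_g < g.get(succ, float("inf")):
                g[succ] = new_g
                parent[succ] = state
                heapq.heappush(open_list, (new_g + h(succ), next(tie), succ))
    return None

# Toy graph 0 -> 1 -> 3 and 0 -> 2 -> 3 with different costs, zero heuristic.
edges = {0: [(1, 1.0), (2, 5.0)], 1: [(3, 1.0)], 2: [(3, 1.0)], 3: []}
print(a_star(0, lambda s: s == 3, lambda s: edges[s], lambda s: 0.0))  # [0, 1, 3]
```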

10,366 citations


"New optimization functions for pote..." refers methods in this paper

  • ...The A∗ algorithm (Hart, Nilsson, and Raphael 1968) finds optimal solutions if its heuristic is admissible....

Journal Article
TL;DR: Fast Downward as discussed by the authors uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators.
Abstract: Fast Downward is a classical planning system based on heuristic search. It can deal with general deterministic planning problems encoded in the propositional fragment of PDDL2.2, including advanced features like ADL conditions and effects and derived predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a progression planner, searching the space of world states of a planning task in the forward direction. However, unlike other PDDL planning systems, Fast Downward does not use the propositional PDDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multivalued planning tasks, which makes many of the implicit constraints of a propositional planning task explicit. Exploiting this alternative representation, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward's approach to solving multivalued planning tasks. We extend our earlier discussion of the causal graph heuristic to tasks involving axioms and conditional effects and present some novel techniques for search control that are used within Fast Downward's best-first search algorithm: preferred operators transfer the idea of helpful actions from local search to global best-first search, deferred evaluation of heuristic functions mitigates the negative effect of large branching factors on search performance, and multiheuristic best-first search combines several heuristic evaluation functions within a single search algorithm in an orthogonal way. We also describe efficient data structures for fast state expansion (successor generators and axiom evaluators) and present a new non-heuristic search algorithm called focused iterative-broadening search, which utilizes the information encoded in causal graphs in a novel way. Fast Downward has proven remarkably successful: It won the "classical" (i. e., propositional, non-optimising) track of the 4th International Planning Competition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions and provide some insights about the usefulness of the new search enhancements.
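
One of the search-control techniques described above, deferred evaluation, is easy to isolate: a successor is queued under its parent's heuristic value and only evaluated itself when it is expanded, which saves many evaluations under large branching factors. The sketch below grafts this onto plain greedy best-first search over a generic graph; it is a simplified illustration, not Fast Downward's code.

```python
import heapq
import itertools
from typing import Callable, Hashable, Iterable, Optional

def gbfs_deferred(start: Hashable,
                  is_goal: Callable[[Hashable], bool],
                  successors: Callable[[Hashable], Iterable[Hashable]],
                  h: Callable[[Hashable], float]) -> Optional[Hashable]:
    """Greedy best-first search with deferred heuristic evaluation."""
    tie = itertools.count()
    open_list = [(0.0, next(tie), start)]    # the start node is expanded first anyway
    seen = {start}
    while open_list:
        _, _, state = heapq.heappop(open_list)
        if is_goal(state):
            return state
        parent_h = h(state)                  # the only place h is evaluated
        for succ in successors(state):
            if succ not in seen:
                seen.add(succ)
                # Deferred evaluation: queue the child under the parent's h value.
                heapq.heappush(open_list, (parent_h, next(tie), succ))
    return None

# Toy usage on a small graph with a constant "heuristic".
edges = {0: [1, 2], 1: [3], 2: [], 3: []}
print(gbfs_deferred(0, lambda s: s == 3, lambda s: edges[s], lambda s: 1.0))  # 3
```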

1,400 citations

Journal Article
01 Nov 1995
TL;DR: The complexity of finding a minimal plan and of finding any plan in the SAS+ formalism is mapped under all combinations of the previously considered restrictions, identifying the maximal tractable problems in both cases.
Abstract: We have previously reported a number of tractable planning problems defined in the SAS+ formalism. This article complements these results by providing a complete map over the complexity of SAS+ planning under all combinations of the previously considered restrictions. We analyze the complexity of both finding a minimal plan and finding any plan. In contrast to other complexity surveys of planning, we study not only the complexity of the decision problems but also the complexity of the generation problems. We prove that the SAS+-PUS problem is the maximal tractable problem under the restrictions we have considered if we want to generate minimal plans. If we are satisfied with any plan, then we can generalize further to the SAS+-US problem, which we prove to be the maximal tractable problem in this case.

551 citations

Proceedings Article
19 Sep 2009
TL;DR: A new admissible heuristic called the landmark cut heuristic is introduced, which compares favourably with the state of the art in terms of heuristic accuracy and overall performance.
Abstract: Current heuristic estimators for classical domain-independent planning are usually based on one of four ideas: delete relaxations, critical paths, abstractions, and, most recently, landmarks. Previously, these different ideas for deriving heuristic functions were largely unconnected. We prove that admissible heuristics based on these ideas are in fact very closely related. Exploiting this relationship, we introduce a new admissible heuristic called the landmark cut heuristic, which compares favourably with the state of the art in terms of heuristic accuracy and overall performance.

410 citations


"New optimization functions for pote..." refers background in this paper

  • ...On many domains h^pot_diverse is even competitive with the state-of-the-art h^LM-cut heuristic (Helmert and Domshlak 2009)....

Proceedings Article
22 Jul 2007
TL;DR: A novel way of constructing good patterns automatically from the specification of planning problem instances is presented, which allows a domain-independent planner to solve planning problems optimally in some very challenging domains, including a STRIPS formulation of the Sokoban puzzle.
Abstract: Heuristic search is a leading approach to domain-independent planning. For cost-optimal planning, however, existing admissible heuristics are generally too weak to effectively guide the search. Pattern database heuristics (PDBs), which are based on abstractions of the search space, are currently one of the most promising approaches to developing better admissible heuristics. The informedness of PDB heuristics depends crucially on the selection of appropriate abstractions (patterns). Although PDBs have been applied to many search problems, including planning, there are not many insights into how to select good patterns, even manually. What constitutes a good pattern depends on the problem domain, making the task even more difficult for domain-independent planning, where the process needs to be completely automatic and general. We present a novel way of constructing good patterns automatically from the specification of planning problem instances. We demonstrate that this allows a domain-independent planner to solve planning problems optimally in some very challenging domains, including a STRIPS formulation of the Sokoban puzzle.
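
Independent of how patterns are selected, a pattern database itself can be sketched as: project the task onto a pattern (a subset of variables), enumerate the abstract states, compute exact abstract goal distances, and look up concrete states by their projection. The task encoding, helper names and toy task below are illustrative assumptions, not the construction procedure of this paper.

```python
import heapq
from itertools import product
from typing import Dict, List, Tuple

Partial = Dict[str, int]                      # partial variable assignment
Operator = Tuple[Partial, Partial, float]     # (precondition, effect, cost)

def build_pdb(domains: Dict[str, List[int]], operators: List[Operator],
              goal: Partial, pattern: List[str]) -> Dict[Tuple[int, ...], float]:
    """Map every abstract state (projected onto `pattern`) to its goal distance."""
    def project(assignment: Partial) -> Partial:
        return {v: val for v, val in assignment.items() if v in pattern}

    abstract_states = [dict(zip(pattern, vals))
                       for vals in product(*(domains[v] for v in pattern))]
    key = lambda s: tuple(s[v] for v in pattern)

    # Build reverse abstract transitions: succ_key -> [(pred_key, cost), ...].
    reverse: Dict[Tuple[int, ...], List[Tuple[Tuple[int, ...], float]]] = {}
    for s in abstract_states:
        for pre, eff, cost in operators:
            pre_p, eff_p = project(pre), project(eff)
            if all(s.get(v) == val for v, val in pre_p.items()):
                succ = dict(s, **eff_p)
                reverse.setdefault(key(succ), []).append((key(s), cost))

    # Dijkstra backwards from all abstract goal states.
    goal_p = project(goal)
    dist = {key(s): 0.0 for s in abstract_states
            if all(s[v] == val for v, val in goal_p.items())}
    heap = [(0.0, k) for k in dist]
    heapq.heapify(heap)
    while heap:
        d, k = heapq.heappop(heap)
        if d > dist.get(k, float("inf")):
            continue
        for pred, cost in reverse.get(k, []):
            if d + cost < dist.get(pred, float("inf")):
                dist[pred] = d + cost
                heapq.heappush(heap, (d + cost, pred))
    return dist  # abstract states missing from the map are dead ends

# Toy task: the PDB over pattern ["v0"] ignores v1 entirely.
domains = {"v0": [0, 1], "v1": [0, 1]}
operators = [({"v0": 0}, {"v0": 1}, 1.0), ({"v1": 0}, {"v1": 1}, 1.0)]
pdb = build_pdb(domains, operators, {"v0": 1, "v1": 1}, ["v0"])
state = {"v0": 0, "v1": 0}
print(pdb[tuple(state[v] for v in ["v0"])])   # admissible estimate: 1.0
```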

202 citations


"New optimization functions for pote..." refers background or methods in this paper

  • ...This explains the result by Haslum et al. (2007), who show that the average heuristic value is not the best predictor for search effort....

  • ...We adopt the sampling procedure from Haslum et al. (2007): we first calculate a heuristic value for the initial state using the potential heuristic optimized for this value....
