
Showing papers on "Incremental heuristic search" published in 1991


Proceedings Article
24 Aug 1991
TL;DR: It is proved that if the average speed of the target is slower than that of the problem solver, then the problem solver is guaranteed to eventually reach the target.
Abstract: We consider the case of heuristic search where the location of the goal may change during the course of the search. For example, the goal may be a target that is actively avoiding the problem solver. We present a moving target search algorithm (MTS) to solve this problem. We prove that if the average speed of the target is slower than that of the problem solver, then the problem solver is guaranteed to eventually reach the target. An implementation with randomly positioned obstacles confirms that the MTS algorithm is highly effective in various situations.

111 citations
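
To make the MTS idea concrete, here is a minimal sketch of a pursuit on an obstacle-free 4-connected grid, assuming Manhattan distance as the initial heuristic. The class name, grid representation, and demo are illustrative, and the update rules only paraphrase the flavor of the published algorithm (raise an estimate that a local minimum proves too low; allow it to drop by at most one when the target moves).

```python
# Illustrative MTS-style pursuit on an empty grid (not the authors' code).
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

class MTS:
    def __init__(self):
        self.h = {}  # learned lower bounds, keyed by (solver state, target state)

    def h_val(self, s, t):
        return self.h.get((s, t), manhattan(s, t))

    def neighbors(self, s):
        x, y = s
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def solver_move(self, s, t):
        # Move toward the neighbor that looks closest to the target, raising
        # h(s, t) when the local minimum proves it an underestimate.
        best = min(self.neighbors(s), key=lambda n: self.h_val(n, t))
        self.h[(s, t)] = max(self.h_val(s, t), self.h_val(best, t) + 1)
        return best

    def target_move(self, s, t_old, t_new):
        # When the target moves one step, the stored estimate can drop by at most 1.
        self.h[(s, t_new)] = max(self.h_val(s, t_new), self.h_val(s, t_old) - 1)
        return t_new

mts = MTS()
solver, target = (0, 0), (6, 4)
steps = 0
while solver != target:
    solver = mts.solver_move(solver, target)
    if steps % 2 == 0 and solver != target:   # target is slower: it moves only
        target = mts.target_move(solver, target, (target[0], target[1] + 1))
    steps += 1                                # every other step, fleeing upward
print("caught target after", steps, "moves")
```

Because the target here advances only every other step, the demo also illustrates the paper's speed condition: the gap shrinks by a net one step per two moves, so the pursuer always catches up.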


01 Jun 1991
TL;DR: Taburoute as discussed by the authors is a new tabu search heuristic for the vehicle routing problem with capacity and route length restrictions, which considers a sequence of adjacent solutions obtained by repeatedly removing a vertex from its current route, and reinserting it into another route.
Abstract: The purpose of this paper is to describe TABUROUTE, a new tabu search heuristic for the vehicle routing problem with capacity and route length restrictions. The algorithm considers a sequence of adjacent solutions obtained by repeatedly removing a vertex from its current route and reinserting it into another route. This is done by means of a generalized insertion procedure previously developed by the authors. During the course of the algorithm, infeasible solutions are allowed. Numerical tests on a set of benchmark problems indicate that tabu search outperforms the best existing heuristics, and TABUROUTE often produces the best known solutions.

106 citations
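
The remove-and-reinsert neighborhood is easy to sketch. The following toy tabu search, with invented instance data, a simple capacity penalty (so infeasible solutions are allowed, as in the paper), and a fixed tabu tenure, only illustrates the move structure; TABUROUTE's generalized insertion procedure and its actual parameter choices are not reproduced here.

```python
import math
import random

random.seed(1)
DEPOT = (0.0, 0.0)
CUSTOMERS = {i: (random.uniform(-10, 10), random.uniform(-10, 10)) for i in range(10)}
DEMAND = {i: random.randint(1, 5) for i in CUSTOMERS}
CAPACITY = 15

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(route):
    stops = [DEPOT] + [CUSTOMERS[c] for c in route] + [DEPOT]
    return sum(dist(stops[k], stops[k + 1]) for k in range(len(stops) - 1))

def cost(routes, penalty=10.0):
    # Infeasible (over-capacity) solutions are allowed but penalized.
    excess = sum(max(0, sum(DEMAND[c] for c in r) - CAPACITY) for r in routes)
    return sum(route_cost(r) for r in routes) + penalty * excess

def moves(routes):
    # Remove a customer from its route and reinsert it into another route.
    for i, r in enumerate(routes):
        for c in r:
            for j in range(len(routes)):
                if j != i:
                    for pos in range(len(routes[j]) + 1):
                        new = [list(x) for x in routes]
                        new[i].remove(c)
                        new[j].insert(pos, c)
                        yield c, i, j, new

def taburoute_sketch(routes, iters=150, tenure=7):
    best, best_cost = routes, cost(routes)
    cur, tabu = routes, {}  # (customer, destination route) -> expiry iteration
    for it in range(iters):
        candidates = []
        for c, i, j, new in moves(cur):
            new_cost = cost(new)
            # Aspiration: accept a tabu move if it beats the best known cost.
            if tabu.get((c, j), -1) <= it or new_cost < best_cost:
                candidates.append((new_cost, c, i, new))
        if not candidates:
            break
        new_cost, c, i, cur = min(candidates)
        tabu[(c, i)] = it + tenure  # don't move c straight back into route i
        if new_cost < best_cost:
            best, best_cost = cur, new_cost
    return best, best_cost

print(taburoute_sketch([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]))
```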


Journal ArticleDOI
Rok Sosic1, Jun Gu
01 Nov 1991
TL;DR: QS2 and QS3 are probabilistic local search algorithms, based on a gradient-based heuristic, capable of finding a solution for extremely large n-queens problems.
Abstract: The n-queens problem is to place n queens on an n*n chessboard so that no two queens attack each other. The authors present two new algorithms, called queen search 2 (QS2) and queen search 3 (QS3). QS2 and QS3 are probabilistic local search algorithms, based on a gradient-based heuristic. These algorithms, running in almost linear time, are capable of finding a solution for extremely large n-queens problems. For example, QS3 can find a solution for 500000 queens in approximately 1.5 min.

58 citations
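
A toy version of the gradient-based idea: keep the queens as a permutation (so rows and columns never clash) and greedily swap pairs whenever the swap reduces the total number of diagonal collisions. This is a sketch of the general approach with random restarts, not the authors' QS2/QS3 code, whose initialization and incremental conflict bookkeeping are what make the near-linear running time possible.

```python
import random

def total_conflicts(perm):
    """Number of attacking pairs; perm[i] is the row of the queen in column i."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(perm[i] - perm[j]) == j - i)

def queen_search(n, max_restarts=100):
    for _ in range(max_restarts):
        perm = list(range(n))
        random.shuffle(perm)          # permutation: no row/column conflicts
        cur, improved = total_conflicts(perm), True
        while cur > 0 and improved:
            improved = False
            for i in range(n):
                for j in range(i + 1, n):
                    perm[i], perm[j] = perm[j], perm[i]   # tentative swap
                    new = total_conflicts(perm)
                    if new < cur:                         # downhill: keep it
                        cur, improved = new, True
                    else:                                 # otherwise undo
                        perm[i], perm[j] = perm[j], perm[i]
        if cur == 0:
            return perm
    return None                       # stuck in a local optimum every restart

print(queen_search(8))
```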


Proceedings Article
14 Jul 1991
TL;DR: A model is developed to analyze the time and space complexity of three heuristic search algorithms in terms of the heuristic branching factor and solution density; a new algorithm, DFS*, a hybrid of iterative deepening and depth-first branch-and-bound, is also presented and shown to outperform the other three algorithms on some problems.
Abstract: We present a comparison of three well known heuristic search algorithms: best-first search (BFS), iterative-deepening (ID), and depth-first branch-and-bound (DFBB). We develop a model to analyze the time and space complexity of these three algorithms in terms of the heuristic branching factor and solution density. Our analysis identifies the types of problems on which each of the search algorithms performs better than the other two. These analytical results are validated through experiments on different problems. We also present a new algorithm, DFS*, which is a hybrid of iterative deepening and depth-first branch-and-bound, and show that it outperforms the other three algorithms on some problems.

56 citations
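
The hybrid is easy to convey in compact Python. Below is a sketch under my own interface assumptions (a successors function returning (child, edge cost) pairs and an admissible heuristic h): each iteration is a depth-first branch-and-bound pass with a cost threshold, and the threshold grows aggressively (here it doubles) rather than to the minimum pruned value, which is the essential DFS* twist. The sketch omits cycle checking, so it is intended for acyclic spaces.

```python
import math

def dfs_star(start, successors, is_goal, h, grow=2.0):
    """Sketch of DFS*: iterative deepening whose final pass acts as DFBB."""
    threshold = h(start)
    while True:                       # assumes a solution exists
        best = [math.inf, None]       # best cost / goal state this pass

        def dfbb(s, g):
            f = g + h(s)
            if f > threshold or f >= best[0]:
                return                # pruned by threshold or by incumbent
            if is_goal(s):
                best[0], best[1] = g, s
                return
            for child, c in successors(s):
                dfbb(child, g + c)

        dfbb(start, 0)
        if best[1] is not None:
            # With admissible h, once threshold >= the optimal cost, the
            # branch-and-bound pass returns an optimal solution.
            return best[0], best[1]
        threshold = max(threshold * grow, threshold + 1)

# Toy demo: cheapest path to D in a small weighted DAG, with h = 0.
edges = {'A': [('B', 3), ('C', 1)], 'B': [('D', 1)], 'C': [('D', 5)], 'D': []}
print(dfs_star('A', lambda s: edges[s], lambda s: s == 'D', lambda s: 0))
```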


Journal ArticleDOI
TL;DR: This paper presents a scheme to deal efficiently with incremental search problems that allows the incremental addition and deletion of constraints and is based on re-execution, using parts of computation paths stored during previous computations.
Abstract: Incremental search consists of adding new constraints or deleting old ones once a solution to a search problem has been found. Although incremental search is of primary importance in application areas such as scheduling, planning, troubleshooting, and interactive problem-solving, it is not presently supported by logic programming languages and little research has been devoted to this topic. This paper presents a scheme to deal efficiently with incremental search problems. The scheme allows the incremental addition and deletion of constraints and is based on re-execution, using parts of computation paths stored during previous computations. The scheme has been implemented as part of the constraint logic programming language CHIP and applied to practical problems. It has shown arbitrarily large (i.e. unbounded) speedups compared with previous approaches on practical problems.

40 citations
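
The re-execution idea can be illustrated with a toy backtracking solver; this is a simplification for intuition, not CHIP's mechanism, and all names and the constraint representation are invented. The solver stores the sequence of value choices that led to a solution, and when a constraint is added it replays that path as far as it remains consistent before resuming ordinary search.

```python
def solve(domains, constraints, oracle=()):
    """Backtracking search over {var: [values]}. Each constraint is a function
    of a partial assignment that returns False only once it is violated.
    `oracle` is a stored choice path from a previous run, replayed first."""
    order = sorted(domains)

    def consistent(assign):
        return all(c(assign) for c in constraints)

    def extend(assign, depth):
        if depth == len(order):
            return dict(assign), []
        var = order[depth]
        # Prefer the remembered choice so an unchanged prefix is re-executed
        # without search; diverge only where the new constraints force it.
        preferred = list(oracle[depth:depth + 1])
        for v in preferred + [v for v in domains[var] if v not in preferred]:
            assign[var] = v
            if consistent(assign):
                result = extend(assign, depth + 1)
                if result is not None:
                    sol, path = result
                    return sol, [v] + path
            del assign[var]
        return None

    return extend({}, 0)

domains = {'x': [1, 2, 3], 'y': [1, 2, 3]}
diff = lambda a: 'x' not in a or 'y' not in a or a['x'] != a['y']
sol, path = solve(domains, [diff])
ge2 = lambda a: 'x' not in a or a['x'] >= 2      # constraint added incrementally
sol2, _ = solve(domains, [diff, ge2], oracle=path)
print(sol, '->', sol2)
```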


Proceedings Article
01 Feb 1991
TL;DR: A new heuristic is presented, a proof that it is admissible (for certain successor functions), and some experimental results suggesting that it is a significant improvement over the currently used heuristic.
Abstract: Finding best explanations is often formalized in AI in terms of minimal-cost proofs. Finding such proofs is naturally characterized as a best-first search of the proof-tree (actually a proof dag). Unfortunately, the only known search heuristic for this task is quite poor. In this paper we present a new heuristic, a proof that it is admissible (for certain successor functions), and some experimental results suggesting that it is a significant improvement over the currently used heuristic.

39 citations


01 Jan 1991
TL;DR: This paper examines parallel algorithms for performing a depth-first search of a directed or undirected graph in sub-linear time and surveys three seminal papers on the subject, the first of which proves that a special case of DFS is (in all likelihood) inherently sequential.
Abstract: In this paper we examine parallel algorithms for performing a depth-first search (DFS) of a directed or undirected graph in sub-linear time. This subject is interesting in part because DFS seemed at first to be an inherently sequential process, and for a long time many researchers believed that no such algorithms existed. We survey three seminal papers on the subject. The first one proves that a special case of DFS is (in all likelihood) inherently sequential; the second shows that DFS for planar undirected graphs is in NC; and the third shows that DFS for general undirected graphs is in RNC. We also discuss randomized algorithms, P-completeness, and matching, three topics that are essential for understanding and appreciating the results in these papers. (University of Pennsylvania Department of Computer and Information Science Technical Report MS-CIS-91-71, "Parallel Algorithms for Depth-First Search", available at http://repository.upenn.edu/cis_reports/428.)

25 citations


Book
01 Jan 1991

24 citations


Proceedings Article
14 Jul 1991
TL;DR: It is shown that finding an optimal solution is NP-hard in an important variant of the domain and in popular extensions, which enlarges the range of model domains whose complexity has been explored mathematically and demonstrates that the complexity of search in blocks world is on the same level as for sliding-block problems, the traveling salesperson problem, bin-packing problems, and the like.
Abstract: Blocks world (cube world) has been one of the most popular model domains in artificial intelligence search and planning. The operation and effectiveness of alternative heuristic strategies, both basic and complex, can be observed easily in this domain. We show that finding an optimal solution is NP-hard in an important variant of the domain and in popular extensions. This enlarges the range of model domains whose complexity has been explored mathematically, and it demonstrates that the complexity of search in blocks world is on the same level as for sliding-block problems, the traveling salesperson problem, bin-packing problems, and the like. These results also support the practice of using blocks world as a tutorial search domain in courses on artificial intelligence, to reveal both the value and limitations of heuristic search when seeking optimal solutions.

21 citations


Proceedings Article
24 Aug 1991
TL;DR: This paper describes the localized search mechanism of the GEMPLAN multiagent planner, and formal complexity results and empirical results are provided, demonstrating the benefits of localized search.
Abstract: This paper describes the localized search mechanism of the GEMPLAN multiagent planner. Both formal complexity results and empirical results are provided, demonstrating the benefits of localized search. A localized domain description is one that decomposes domain activities and requirements into a set of regions. This description is used to infer how domain requirements are semantically localized and, as a result, to enable the decomposition of the planning search space into a set of spaces, one for each domain region. Benefits of localization include a smaller and cheaper overall search space as well as heuristic guidance in controlling search. Such benefits are critical if current planning technologies and other types of reasoning are to be scaled up to large, complex domains.

Journal ArticleDOI
01 Apr 1991
TL;DR: In this paper, the authors extended the well-known economic search model to take into account the fact that all search takes place in two-dimensional space and showed that these two problems are interdependent and can only be solved simultaneously.
Abstract: The paper extends the well-known economic search model to take into account the fact that all search takes place in two-dimensional space. This adds a routing problem to the stopping problem usually discussed in the search literature. The paper shows that these two problems are interdependent and can only be solved simultaneously. This relates the spatial search problem as it is discussed in this paper to NP-complete problems like the traveling salesman problem, some of the most complex problems in mathematics. The paper discusses this relationship and closes with some suggestions about how to circumvent this complexity.

Journal ArticleDOI
TL;DR: In this article, an integer programming model and a heuristic were developed to effectively assign machines to cells, considering component volumes, costs related to movement of components between and within cells, and a penalty for not using all machines in a cell visited by a component.

Journal ArticleDOI
TL;DR: This paper proposes a new front-to-front algorithm that is computationally much less expensive; it does not always guarantee optimality, but its solution quality and execution time can be controlled by external parameters.

01 May 1991
TL;DR: This work constitutes the first empirical comparison of simulated annealing, genetic algorithms, and tabu search, and finds that GA and TS are the superior techniques for solving the GPP, both with respect to solution quality and computational requirements.
Abstract: In this research we develop a framework for applying some abstract heuristic search (AHS) methods to a well-known NP-complete problem: the graph partitioning problem (GPP). The uniform graph partitioning problem can be described as partitioning the nodes of a graph into two sets of equal size to minimize the sum of the cost of arcs having end-points in different sets. This problem has important applications in VLSI design, computer compiler design, and in placement and layout problems. Within this research we demonstrate that the solutions obtained from using the traditional GPP heuristic are often of poor quality. We introduce an extended local search procedure, which performed extremely well for the test problems. A solution method based on mathematical programming and Lagrangian relaxation is introduced. The Lagrangian problem was used to (1) obtain good feasible solutions, and (2) derive lower bounds for the graph partitioning problem. The lower bounds facilitate a thorough empirical analysis of the performance of the heuristic procedures. We further introduce a new procedure for solving the GPP based on tabu search (TS). This procedure includes a new technique for diversification in TS. A new procedure for solving the GPP based on genetic algorithms (GA) is also presented. This method includes a new technique for selection, known as the "queen bee" strategy. We provide a common ground for a thorough comparison of solution procedures for the GPP by studying a worst-case measure of the various solutions' closeness to optimality, and an average empirical worst-case measure for each solution technique. This work constitutes the first empirical comparison of simulated annealing, genetic algorithms, and tabu search. The AHS techniques constitute a major step forward as general problem-solving techniques in that they overcome some of the limitations of the traditional problem-solving paradigms of operations research and artificial intelligence. The AHS methods do not generally make strong assumptions regarding the shape of the feasible region, nor regarding the form of the objective function. In addition, we found that GA and TS are the superior techniques for solving the GPP, both with respect to solution quality and computational requirements.
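
For reference, the baseline neighborhood underlying both the traditional GPP heuristics and the tabu and genetic procedures studied here is the balanced swap. A minimal descent on that neighborhood, with invented random instance data, looks like the sketch below; the dissertation's TS and GA procedures layer diversification and selection strategies on top of moves of this kind.

```python
import random

random.seed(0)
n = 12
# Random weighted graph; absent pairs simply have weight 0.
edges = {(i, j): random.randint(1, 9)
         for i in range(n) for j in range(i + 1, n) if random.random() < 0.4}

def cut_cost(side):
    """Total weight of edges whose endpoints lie in different sets."""
    return sum(w for (i, j), w in edges.items() if side[i] != side[j])

side = [i % 2 for i in range(n)]        # balanced starting partition
cost = cut_cost(side)
improved = True
while improved:                          # plain local descent on swap moves
    improved = False
    for i in range(n):
        for j in range(i + 1, n):
            if side[i] == side[j]:
                continue
            side[i], side[j] = side[j], side[i]   # swap preserves balance
            new = cut_cost(side)
            if new < cost:
                cost, improved = new, True
            else:
                side[i], side[j] = side[j], side[i]
print("swap-descent cut cost:", cost)
```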

Journal ArticleDOI
Yuval Lirov1
01 May 1991
TL;DR: This work proposes an efficient algorithm for constructing multi-objective heuristics and develops some sufficiency conditions for the admissibility of the heuristic.
Abstract: Merging multi-objective optimization and expert systems technology results in reduced modeling efforts and enhanced problem-solving tools. Search is one of the ways to combine multi-objective optimization and knowledge-intensive computation schemes. Search is usually associated with prohibitive computational costs, and heuristics are often used to alleviate the computational burden. We propose an efficient algorithm for constructing multi-objective heuristics. We also develop some sufficiency conditions for the admissibility of the heuristic. Our multi-objective A* algorithm has been implemented and experimentally evaluated. Its time performance is comparable to, and often better than, that of other more conventional algorithms.

Proceedings Article
24 Aug 1991
TL;DR: A method is presented that causes A* to return high-quality solutions while solving a set of problems using a non-admissible heuristic, and it is shown how one may construct heuristics for finding high-quality solutions at lower cost than those returned by A* using available admissible heuristics.
Abstract: A method is presented that causes A* to return high-quality solutions while solving a set of problems using a non-admissible heuristic. The heuristic guiding the search changes as new information is learned during the search, and it converges to an admissible heuristic which 'contains the insight' of the original non-admissible one. After a finite number of problems, A* returns only optimal solutions. Experiments on sliding tile problems suggest that learning occurs very fast. Beginning with hundreds of randomly generated problems and an overestimating heuristic, the system learned sufficiently fast that only the first problem was solved non-optimally. As an application we show how one may construct heuristics for finding high-quality solutions at lower cost than those returned by A* using available admissible heuristics.
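
The flavor of the method can be sketched as follows, with the caveat that this is only a crude stand-in for the paper's actual learning rule and convergence argument: keep a table of corrected heuristic values, and after each search clip the stored estimate of every state on the returned path down to its observed cost-to-go, so that repeated solving removes overestimates along the paths actually visited. All names here are invented.

```python
import heapq

learned = {}                      # state -> corrected heuristic value

def h(state, h0):
    return learned.get(state, h0(state))

def astar(start, goal, successors, h0):
    """A* that records the g-value of every state on the returned path."""
    frontier = [(h(start, h0), 0, start, [(start, 0)])]
    best_g = {}
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s == goal:
            return g, path
        if best_g.get(s, float('inf')) <= g:
            continue
        best_g[s] = g
        for child, c in successors(s):
            heapq.heappush(frontier, (g + c + h(child, h0), g + c, child,
                                      path + [(child, g + c)]))
    return None

def solve_and_learn(start, goal, successors, h0):
    cost, path = astar(start, goal, successors, h0)
    for state, g in path:
        # The found solution proves the true cost-to-go from `state` is at
        # most cost - g, so clip any overestimate down to that bound.
        learned[state] = min(h(state, h0), cost - g)
    return cost

edges = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1)], 'C': []}
h0 = lambda s: {'A': 5, 'B': 3, 'C': 0}[s]      # overestimates at A and B
print(solve_and_learn('A', 'C', lambda s: edges[s], h0), learned)
```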

Markus G. Wloka1
01 May 1991
TL;DR: The P-hardness results make it unlikely, however, that an exact parallel equivalent of a local search heuristic will be found that can reach even a local minimum-cost solution in polylogarithmic time.
Abstract: We investigate the parallel complexity of VLSI (very large scale integration) CAD (computer aided design) synthesis problems. Parallel algorithms are very appropriate in VLSI CAD because computationally intensive optimization methods are needed to derive good chip layouts. We find that for many problems with polynomial-time serial complexity, it is possible to find an efficient parallel algorithm that runs in polylogarithmic time. We illustrate this by parallelizing the "left-edge" channel routing algorithm and the one-dimensional constraint-graph compaction algorithm. Curiously enough, we find P-hard, or inherently difficult-to-parallelize, algorithms when certain key heuristics are used to get approximate solutions to NP-complete problems. In particular, we show that many local search heuristics for problems related to VLSI placement are P-hard or P-complete. These include the Kernighan-Lin heuristic and the simulated annealing heuristic for graph partitioning. We show that local search heuristics for grid embeddings and hypercube embeddings based on vertex swaps are P-hard, as are any local search heuristics that minimize the number of column conflicts in a channel routing by accepting cost-improving swaps of tracks or subtracks. We believe that the P-hardness reductions we provide in this thesis can be extended to include many other important applications of local search heuristics. Local search heuristics have been established as the method of choice for many optimization problems whenever very good solutions are desired. They are also among the most time-consuming bottlenecks in VLSI CAD systems, and would benefit greatly from parallel speedup. Our P-hardness results make it unlikely, however, that an exact parallel equivalent of a local search heuristic will be found that can reach even a local minimum-cost solution in polylogarithmic time. The P-hardness results also put into perspective experimental results reported in the literature: attempts to construct the exact parallel equivalent of serial simulated-annealing-based heuristics for graph embedding have yielded disappointing parallel speedups. We introduce the massively parallel Mob heuristic and report on experiments on the CM-2 Connection Machine. The design of the Mob heuristic was influenced by the P-hardness results. Mob can execute many moves of a local search heuristic in parallel. We applied our heuristic to the graph-partitioning, grid-embedding, and hypercube-embedding problems, and report on an extensive series of experiments on the 32K-processor CM-2 Connection Machine that shows impressive reductions in edge costs. To obtain solutions that are within 5%, the Mob heuristic needed less than nine minutes of computation time on random graphs of two million edges, and the Mob grid and hypercube embedding heuristics needed less than 30 minutes on random graphs of one million edges.

Journal ArticleDOI
TL;DR: The game of chess is an ideal task for comparing the search heuristics of individuals with a range of skill levels, as discussed by the authors, and the deGroot (1965) protocols yield evidence of a potentially powerful heuristic for pruning branches of the search tree.
Abstract: The game of chess is an ideal task for comparing search heuristics of individuals with a range of skill levels. Reanalysis of the deGroot (1965) protocols yields evidence of a potentially powerful heuristic for pruning branches of the search tree. The strategy involves expanding search immediately following a negative evaluation, and narrowing search immediately following a positive evaluation. Application of this homing heuristic was found in grand masters and masters, but not in lower rated chess players.

01 Sep 1991
TL;DR: A theoretical analysis is presented to explain why the heuristic method for solving large-scale constraint satisfaction and scheduling problems works so well on certain types of problems and to predict when it is likely to be most effective.
Abstract: This paper describes a simple heuristic method for solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment for the variables in a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. We demonstrate empirically that the method performs orders of magnitude better than traditional backtracking techniques on certain standard problems. For example, the one million queens problem can be solved rapidly using our approach. We also describe practical scheduling applications where the method has been successfully applied. A theoretical analysis is presented to explain why the method works so well on certain types of problems and to predict when it is likely to be most effective.
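
The repair loop itself is short. Here is a minimal n-queens version of the min-conflicts heuristic with randomized tie-breaking; the scheduling applications described in the paper use the same loop over domain-specific constraints.

```python
import random

def min_conflicts_queens(n, max_steps=100000):
    # Initial assignment: one queen per column, at a random row.
    rows = [random.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows                           # fully repaired: solved
        col = random.choice(conflicted)           # pick any violated variable
        # Repair it with a value minimizing conflicts (ties broken at random).
        rows[col] = min(range(n),
                        key=lambda r: (conflicts(col, r), random.random()))
    return None

print(min_conflicts_queens(50))
```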

Journal ArticleDOI
TL;DR: This paper develops search algorithms, NSA and Successive NSA, and analyzes their performance on a uniform m-ary search tree G with a single goal located at an unknown depth d, using non-parametric statistical inference methods.
Abstract: The computational complexity of the widely used heuristic search algorithm A* can be exponential in the length of the path from the start node to a goal node. In several interesting cases, for example in a tree model, we can improve the average-case performance of the search by making use of some global behavior of a suitable statistic derived from the heuristic estimator. Until now, only the utilization of parametric statistical inference methods has been studied. These methods assume that the probability distribution of the underlying statistic and also its parameters are precisely known. However, such precise knowledge may not be available for some practical situations. In this paper, we propose an approach based on non-parametric statistical inference methods that do not require such precise information about the distribution. We develop search algorithms, NSA and Successive NSA, and analyze their performance on a uniform m-ary search tree G with a single goal located at an unknown depth d.

Proceedings ArticleDOI
08 Jan 1991
TL;DR: A software system called XVRP-GA is developed that demonstrates an integrated framework for this synergism in the domain of computer-aided vehicle routing and scheduling; it assists researchers and decision makers in applying mathematical algorithms to a specific routing problem instance by intelligently adapting the algorithm to the problem description.
Abstract: Research into alternative ways of employing artificial intelligence techniques to direct mathematical algorithms is described. The authors have developed a software system called XVRP-GA that demonstrates an integrated framework for this synergism, in the domain of computer-aided vehicle routing and scheduling problems. The system assists researchers and decision makers in applying mathematical algorithms to a specific routing problem instance by intelligently adapting the algorithm to the problem description. The genetic search adaptively refines the parameters that control the work of the underlying algorithm. The resultant solutions are uniformly superior to the best known algorithms working alone. To reduce the computational overhead of genetic search, a mechanism for improving the performance of the search is employed. Several evaluation functions that permit the parallel investigation of multiple peaks in the search space are utilized, resulting in significantly increased efficiency in the genetic search.


Journal ArticleDOI
TL;DR: A methodology for transforming the original search space into an equivalent but minimal search space is proposed, and a pi-lambda transformation is introduced to reduce the parallel search space.
Abstract: A methodology for transforming the original search space into an equivalent but minimal search space is proposed. First, the concept of dependences leads to a procedure for reduction of the search space. The search procedure using this method can produce a minimal and complete search space. It is shown that this method is applicable to parallel search as well. An added advantage of this method is that it does not exclude the use of heuristics. A pi-lambda transformation is introduced to reduce the parallel search space.


Proceedings ArticleDOI
13 Oct 1991
TL;DR: An intelligent search strategy for solving the symmetric traveling salesman problem is proposed, which focuses on learning while searching, i.e., the ability to discover interesting characteristics of solutions generated by an unconstrained search strategy.
Abstract: An intelligent search strategy for solving the symmetric traveling salesman problem is proposed. The strategy captures two additional features over traditional heuristic search methods: sustained exploration and learning while searching. Sustained exploration refers to a strategy of not stopping when a local optimum is reached but continuing the search process until some prespecified criteria are satisfied. Learning while searching refers to the mechanism of learning, during sustained exploration, about characteristics of edges which are likely to be in an optimal tour. The authors focus on learning while searching, i.e., the ability to discover interesting characteristics of solutions generated by an unconstrained search strategy. The discovery of interesting characteristics of solutions can be used to improve system performance.
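
As a rough illustration of the two ingredients (this is generic 2-opt machinery with invented bookkeeping, not the authors' strategy): keep searching past each local optimum via a perturbation, and record how often each edge occurs in the good tours encountered, which is the kind of edge statistic that "learning while searching" aims to exploit.

```python
import collections
import math
import random

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(15)]

def tour_len(t):
    return sum(math.dist(pts[t[i]], pts[t[(i + 1) % len(t)]])
               for i in range(len(t)))

def two_opt(t):
    """Plain 2-opt descent to a local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(t) - 1):
            for j in range(i + 1, len(t)):
                new = t[:i] + t[i:j][::-1] + t[j:]
                if tour_len(new) < tour_len(t):
                    t, improved = new, True
    return t

def double_bridge(t):
    """Perturbation that a single 2-opt move cannot undo."""
    a, b, c = sorted(random.sample(range(1, len(t)), 3))
    return t[:a] + t[b:c] + t[a:b] + t[c:]

edge_counts = collections.Counter()     # learned statistics over good tours
best = tour = two_opt(list(range(len(pts))))
for _ in range(20):                     # sustained exploration past local optima
    tour = two_opt(double_bridge(tour))
    for k in range(len(tour)):
        edge = tuple(sorted((tour[k], tour[(k + 1) % len(tour)])))
        edge_counts[edge] += 1
    if tour_len(tour) < tour_len(best):
        best = tour
print(round(tour_len(best), 3), edge_counts.most_common(3))
```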

Book ChapterDOI
01 Oct 1991
TL;DR: The concept of wave-shaping for parallel bidirectional heuristic island search is introduced, and the resulting algorithm improves the performance of PBA* by dynamically redirecting the local search processes that run concurrently in PBA* toward quick path establishment.
Abstract: Parallel bidirectional heuristic island search combines forward chaining, backward chaining, and parallelism to search a state space. The only algorithm in this category to date (PBA*) has been demonstrated to exhibit excellent performance in practice (superlinear speedup in all tested cases) [Nels90d]. This paper introduces the concept of wave-shaping for parallel bidirectional heuristic island search. The resulting algorithm improves the performance of PBA* by dynamically redirecting the local search processes that run concurrently in PBA* toward quick path establishment. Experimental results on a uniprocessor, as well as on a multiprocessor machine (Intel iPSC/2 hypercube), demonstrate the viability of the proposed method.

Proceedings ArticleDOI
03 Apr 1991
TL;DR: Two heuristic search algorithms are presented, namely BDA* (breadth-depth-A*) and CA* (controlled A*), which can overcome both time and storage limitations at the expense of not guaranteeing optimal solutions at all times.
Abstract: Many heuristic search algorithms are available for solving combinatorial optimization problems in artificial intelligence and operations research applications. However, most of these algorithms do not scale up in practice because of their time and/or storage limitations. The paper presents two algorithms, namely BDA* (breadth-depth-A*) and CA* (controlled A*), which can overcome both time and storage limitations at the expense of not guaranteeing optimal solutions at all times. The paper demonstrates the working of BDA* and CA* on two well-known state-space problems, the 15-puzzle and the 3-machine flow-shop scheduling problem. A new inadmissible heuristic is suggested for sliding-tile puzzles. Detailed experimental results showing the effectiveness of the algorithms are also presented.
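
The trade-off both algorithms make can be illustrated generically: bound the OPEN list so that memory (and, indirectly, time) stays fixed, accepting the loss of guaranteed optimality. The beam-style sketch below is not BDA* or CA* themselves, whose control schemes are more specific; it only shows the shape of the compromise.

```python
import heapq

def beam_astar(start, is_goal, successors, h, width=100):
    """A* with OPEN truncated to `width` entries: bounded memory, but no
    optimality guarantee (a generic variant of the idea behind BDA*/CA*)."""
    frontier = [(h(start), 0, start)]
    while frontier:
        f, g, s = heapq.heappop(frontier)
        if is_goal(s):
            return g
        for child, c in successors(s):
            heapq.heappush(frontier, (g + c + h(child), g + c, child))
        # Enforce the storage limit: keep only the most promising nodes.
        frontier = heapq.nsmallest(width, frontier)   # sorted, hence a heap
    return None

# Toy demo: walk along a line graph from 0 to 999 with unit step costs.
succ = lambda s: [(s + 1, 1)] + ([(s - 1, 1)] if s > 0 else [])
print(beam_astar(0, lambda s: s == 999, succ, lambda s: 999 - s, width=50))
```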