
Showing papers on "Best-first search published in 1989"


Journal ArticleDOI
TL;DR: A variation of the Beam Search method, called Filtered Beam Search, is proposed; it uses priority functions to search a number of solution paths in parallel and proved not only efficient but also consistent in providing near-optimal solutions with a relatively small search tree.
Abstract: We examine the problem of scheduling a given set of jobs on a single machine to minimize total early and tardy costs. Two dispatch priority rules are proposed and tested for this NP-complete problem. These were found to perform far better than known heuristics that ignored early costs. For situations where the potential cost savings are sufficiently high to justify more sophisticated techniques, we propose a variation of the Beam Search method developed by researchers in artificial intelligence. This variant, called Filtered Beam Search, is able to use priority functions to search a number of solution paths in parallel. A computational study showed that this search method was not only efficient but also consistent in providing near-optimal solutions with a relatively small search tree. The study also includes an investigation of the impacts of Beam Search parameters on three variations of Beam Search for this problem.

266 citations
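
To make the filtered beam search scheme concrete, here is a minimal generic sketch. It is not the authors' implementation: the beam_width and filter_width parameters, the toy earliness/tardiness cost, and the split between a cheap priority function and a fuller evaluation are all illustrative assumptions.

```python
import heapq

def filtered_beam_search(jobs, cheap_priority, full_evaluation,
                         beam_width=3, filter_width=10):
    """Generic filtered beam search over job sequences (illustrative sketch).

    cheap_priority  : fast priority function f(partial_sequence) -> float,
                      used to filter children before full evaluation
    full_evaluation : more expensive evaluation g(partial_sequence) -> float
    """
    beam = [()]                                   # start from the empty sequence
    while beam and len(beam[0]) < len(jobs):
        # 1. Expand every partial sequence currently in the beam.
        children = [parent + (job,)
                    for parent in beam
                    for job in jobs if job not in parent]
        # 2. Filter step: keep only the filter_width most promising children
        #    according to the cheap priority function.
        survivors = heapq.nsmallest(filter_width, children, key=cheap_priority)
        # 3. Beam step: fully evaluate the survivors, keep the best beam_width.
        beam = heapq.nsmallest(beam_width, survivors, key=full_evaluation)
    return min(beam, key=full_evaluation) if beam else None

# Toy usage: total earliness + tardiness on a single machine (unit penalties).
proc = {'A': 3, 'B': 2, 'C': 4}
due  = {'A': 4, 'B': 9, 'C': 6}

def early_tardy_cost(seq):
    t = cost = 0
    for j in seq:
        t += proc[j]
        cost += abs(due[j] - t)
    return cost

best = filtered_beam_search(list(proc), early_tardy_cost, early_tardy_cost,
                            beam_width=2, filter_width=4)
```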


Proceedings Article
20 Aug 1989
TL;DR: The concepts of topology and texture are introduced to characterize problem structure and to identify areas on which to focus attention, respectively; the resulting model reduces search complexity and provides a more principled explanation of the nature and power of heuristics in problem solving.
Abstract: We propose a model of problem solving that provides both structure and focus to search. The model achieves this by combining constraint satisfaction with heuristic search. We introduce the concepts of topology and texture to characterize problem structure and areas on which to focus attention, respectively. The resulting model reduces search complexity and provides a more principled explanation of the nature and power of heuristics in problem solving. We demonstrate the model of Constrained Heuristic Search in two domains: spatial planning and factory scheduling. In the former we demonstrate significant reductions in search.

139 citations
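
The combination of constraint satisfaction with heuristic search can be pictured with a small sketch. This is only one reading of the general idea, not the paper's model: propagate() stands in for exploiting problem structure, while the caller-supplied texture function focuses attention on the next variable to commit to. A typical stand-in for texture is the most-constrained-variable heuristic, i.e., pick the unassigned variable with the smallest remaining domain.

```python
def constrained_heuristic_search(variables, domains, constraints, texture):
    """Interleave constraint propagation with heuristic search (sketch).

    domains     : dict var -> iterable of candidate values
    constraints : list of (var_a, var_b, allowed) with allowed a set of
                  (value_a, value_b) pairs; assumed listed in both directions
    texture     : f(domains, assignment) -> the unassigned variable to try next
    """
    def propagate(doms):
        # Prune values with no support until a fixed point (or a wipe-out).
        changed = True
        while changed:
            changed = False
            for a, b, allowed in constraints:
                supported = {x for x in doms[a]
                             if any((x, y) in allowed for y in doms[b])}
                if supported != doms[a]:
                    doms[a], changed = supported, True
        return all(doms[v] for v in variables)   # False if some domain is empty

    def search(doms, assignment):
        if len(assignment) == len(variables):
            return assignment
        var = texture(doms, assignment)          # focus attention
        for value in sorted(doms[var]):
            trial = {v: set(d) for v, d in doms.items()}
            trial[var] = {value}
            if propagate(trial):                 # exploit problem structure
                result = search(trial, {**assignment, var: value})
                if result is not None:
                    return result
        return None

    return search({v: set(d) for v, d in domains.items()}, {})
```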


Journal ArticleDOI
TL;DR: These parameterized algorithms are found to encompass a wide class of best-first search algorithms and guarantee admissible solutions within specified memory limitations (above the minimum required).

134 citations


Journal ArticleDOI
TL;DR: A description is given of an implementation of a novel frame-synchronous network search algorithm for recognizing continuous speech as a connected sequence of words according to a specified grammar that is inherently based on hidden Markov model (HMM) representations.
Abstract: A description is given of an implementation of a novel frame-synchronous network search algorithm for recognizing continuous speech as a connected sequence of words according to a specified grammar. The algorithm, which has all the features of earlier methods, is inherently based on hidden Markov model (HMM) representations and is described in an easily understood, easily programmable manner. The new features of the algorithm include the capability of recording and determining (unique) word sequences corresponding to the several best paths to each grammar node, and the capability of efficiently incorporating a range of word and state duration scoring techniques directly into the forward search of the algorithm, thereby eliminating the need for a postprocessor as in previous implementations. It is also simple and straightforward to incorporate deterministic word transition rules and statistical constraints (probabilities) from a language model into the forward search of the algorithm.

131 citations
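
For readers unfamiliar with frame-synchronous decoding, the following is a bare-bones, time-synchronous Viterbi pass over a precompiled HMM state network. It is a generic sketch, not the paper's algorithm: the several-best-paths-per-grammar-node bookkeeping and the duration scoring are omitted, and all names are illustrative.

```python
def frame_synchronous_viterbi(states, transitions, emission_logprob,
                              observations, initial, finals):
    """Time-synchronous Viterbi over an HMM state network (illustrative sketch).

    transitions      : dict state -> list of (next_state, transition_log_prob)
    emission_logprob : f(state, frame) -> log-likelihood of the frame in state
    initial, finals  : non-emitting start state and set of accepting states
    """
    NEG_INF = float('-inf')
    score = {s: NEG_INF for s in states}
    score[initial] = 0.0
    backptr = [{} for _ in observations]

    for t, frame in enumerate(observations):          # one update per frame
        new_score = {s: NEG_INF for s in states}
        for s, s_score in score.items():
            if s_score == NEG_INF:
                continue
            for nxt, trans_lp in transitions.get(s, []):
                cand = s_score + trans_lp + emission_logprob(nxt, frame)
                if cand > new_score[nxt]:             # keep the single best path
                    new_score[nxt] = cand
                    backptr[t][nxt] = s
        score = new_score

    best_final = max(finals, key=lambda s: score[s])
    if score[best_final] == NEG_INF:
        return None, NEG_INF
    path = [best_final]                               # trace the best state sequence
    for t in range(len(observations) - 1, -1, -1):
        path.append(backptr[t][path[-1]])
    return list(reversed(path)), score[best_final]
```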


Journal ArticleDOI
TL;DR: An improved algorithm BS∗ is described which expands significantly fewer nodes on average than any other algorithm in the same class of non-wave-shaping admissible bidirectional algorithms, and is the first staged search algorithm which preserves admissibility.

87 citations


Proceedings Article
20 Aug 1989
TL;DR: Extensive runs on a variety of search problems, involving search graphs that may or may not be trees, indicate that MREC with M = 0 is as good as IDA* on problems such as the 15-puzzle for which IDA* is suitable, while MREC with large M is as fast as A* on problems for which node expansion time is not negligible.
Abstract: MREC is a new recursive best-first search algorithm which combines the good features of A* and IDA*. It is closer in operation to IDA*, and does not use an OPEN list. In order to execute, all MREC needs is sufficient memory for its implicit stack. But it can also be fed at runtime a parameter M which tells it how much additional memory is available for use. In this extra memory, MREC stores as much as possible of the explicit graph. When M = 0, MREC is identical to IDA*. But when M > 0, it can make far fewer node expansions than IDA*. This can be advantageous for problems where the time to expand a node is significant. Extensive runs on a variety of search problems, involving search graphs that may or may not be trees, indicate that MREC with M = 0 is as good as IDA* on problems such as the 15-puzzle for which IDA* is suitable, while MREC with large M is as fast as A* on problems for which node expansion time is not negligible.

85 citations
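
The contract described in the abstract, namely IDA*-like operation, an optional memory allowance M, and behavior identical to IDA* at M = 0, can be sketched as cost-bounded depth-first search with a bounded cache of backed-up bounds. This is only an approximation of MREC's actual bookkeeping (the paper stores part of the explicit graph, whereas memory_limit here simply counts cached entries), and all names are illustrative.

```python
import math

def mrec_style_search(start, successors, h, is_goal, memory_limit=0):
    """IDA*-like cost-bounded search with an optional bounded cache (sketch).

    With memory_limit == 0 this reduces to an IDA*-style search; with
    memory_limit > 0, up to memory_limit nodes retain their backed-up cost
    bound between iterations, so their subtrees prune more quickly.
    """
    cache = {}                              # node -> backed-up lower bound on cost-to-go

    def bounded_dfs(node, g, bound, path):
        f = g + cache.get(node, h(node))    # prefer the stored (tighter) bound
        if f > bound:
            return f, None
        if is_goal(node):
            return f, list(path)
        next_bound = math.inf
        for child, step_cost in successors(node):
            if child in path:               # avoid cycles on the current path
                continue
            path.append(child)
            t, solution = bounded_dfs(child, g + step_cost, bound, path)
            path.pop()
            if solution is not None:
                return t, solution
            next_bound = min(next_bound, t)
        # Back up the tightest bound found below this node, but only while the
        # extra memory allowance (M) is not exhausted.
        if len(cache) < memory_limit or node in cache:
            cache[node] = next_bound - g
        return next_bound, None

    bound = h(start)
    while True:
        bound, solution = bounded_dfs(start, 0.0, bound, [start])
        if solution is not None:
            return solution, bound
        if bound == math.inf:
            return None, math.inf
```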


Proceedings Article
20 Aug 1989
TL;DR: A general approach to the study of problem-solving, in which search steps are considered decisions in the same sense as actions in the world, is outlined, and a formula for the expected value of a search step in a game-playing context is developed using the single-step assumption.
Abstract: In this paper we outline a general approach to the study of problem-solving, in which search steps are considered decisions in the same sense as actions in the world. Unlike other metrics in the literature, the value of a search step is defined as a real utility rather than as a quasi-utility, and can therefore be computed directly from a model of the base-level problem-solver. We develop a formula for the expected value of a search step in a game-playing context using the single-step assumption, namely that a computation step can be evaluated as if it were the last to be taken. We prove some meta-level theorems that enable the development of a low-overhead algorithm, MGSS*, that chooses search steps in order of highest estimated utility. Although we show that the single-step assumption is untenable in general, a program implemented for the game of Othello soundly beats an alpha-beta search while expanding significantly fewer nodes, even though both programs use the same evaluation function.

84 citations


Journal ArticleDOI
TL;DR: Two hybrid search strategies for the efficient solution of the data clustering problem based on the minimum variance approach are presented and are consistently superior to the popular K-MEANS algorithm as well as to other techniques based on a single search strategy.

76 citations


Proceedings Article
20 Aug 1989
TL;DR: A heuristic search algorithm which finds the set of node aggregations that makes a network singly connected and allows inference to execute in minimum time, and a "graph-directed" algorithm which is guaranteed to find a feasible but not necessarily optimal solution with less computation than the search algorithm.
Abstract: This study describes a general framework and several algorithms for reducing Bayesian networks with loops (i.e., undirected cycles) into equivalent networks which are singly connected. The purpose of this conversion is to take advantage of a distributed inference algorithm (6). The framework and algorithms center around one basic operation, node aggregation. In this operation, a cluster of nodes in a network is replaced with a single node without changing the underlying joint distribution of the network. The framework for using this operation includes a node aggregation theorem which describes whether a cluster of nodes can be combined, and a complexity analysis which estimates the computational requirements for the resulting networks. The algorithms described include a heuristic search algorithm which finds the set of node aggregations that makes a network singly connected and allows inference to execute in minimum time, and a "graph-directed" algorithm which is guaranteed to find a feasible but not necessarily optimal solution with less computation than the search algorithm.

27 citations


01 Mar 1989
TL;DR: This paper describes the mixing of simulated annealing and tabu search into a new hybrid search algorithm applied to the traveling salesman problem; comparisons indicate consistently better results.
Abstract: A new hybrid algorithm technique (HAT) based on the idea of mixing two or more algorithms is proposed. Though the algorithm is general and may be applied to the majority of optimization problems, a hybrid algorithm search technique (HAST) is the focus of this paper. As an example of HAST, this paper describes the mixing of simulated annealing and tabu search algorithms into a new hybrid search algorithm applied to the traveling salesman problem. A brief introduction to the simulated annealing and tabu search algorithms is given, followed by a description of how we mixed these algorithms to form a new parallel hybrid search technique. Comparison of our algorithm mixer with simulated annealing and tabu search indicates consistently better results. Examples include 33-, 42-, 50-, 57-, 75-, and 100-city problems from the literature. Solutions for the 50- and 75-city problems outperform the best results published to date.

20 citations
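
As an illustration of how simulated annealing and tabu search might be mixed for the TSP, here is one possible sketch. It is not HAT/HAST itself (in particular it is sequential, not parallel), and the 2-opt move signature, tabu tenure, temperature schedule, and aspiration rule are assumptions made for the example.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hybrid_sa_tabu_tsp(dist, n_iters=20000, start_temp=100.0, cooling=0.999,
                       tabu_tenure=50, seed=0):
    """One way to mix simulated annealing with a tabu list for the TSP (sketch).

    A random 2-opt move is proposed each iteration; a move whose signature is
    tabu is rejected outright (unless it beats the best tour found so far),
    otherwise the usual annealing acceptance rule applies.
    """
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    cur_len, temp = best_len, start_temp
    tabu = {}                                    # move signature -> expiry iteration

    for it in range(n_iters):
        i, j = sorted(rng.sample(range(n), 2))
        if j - i < 2 or (i == 0 and j == n - 1):
            continue                             # skip degenerate 2-opt moves
        move = (tour[i], tour[j])                # signature of the reversed segment
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(candidate, dist)
        if tabu.get(move, -1) > it and cand_len >= best_len:
            continue                             # tabu, and no aspiration by best
        delta = cand_len - cur_len
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            tour, cur_len = candidate, cand_len
            tabu[move] = it + tabu_tenure        # forbid undoing this move for a while
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        temp *= cooling
    return best, best_len
```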


Proceedings Article
20 Aug 1989
TL;DR: This algorithm is capable of achieving almost linear speedup on a large number of processors with a relatively small problem size and an informal analysis of its load balancing, scalability, and speedup is given.
Abstract: In this paper, a distributed heuristic search algorithm is presented. We show that the algorithm is admissible and give an informal analysis of its load balancing, scalability, and speedup. A flow-shop scheduling problem has been implemented on a BBN Butterfly Multicomputer using up to 80 processors to empirically test this algorithm. From our experiments, this algorithm is capable of achieving almost linear speedup on a large number of processors with a relatively small problem size.

Journal Article
TL;DR: It is shown that such an algorithm reduces the search space by reducing the number of nodes and links and providing a tighter bound during the tree search, and the performance of the algorithm is favorably compared with a network-extraction network-design model.
Abstract: Two variants of a network design problem are solved by application of the tree search method. The first formulation aims to reduce a specified vehicle-minutes of traffic congestion at the least possible budget expenditure, and the second minimizes traffic congestion for a given budget. Both involve system-optimizing traffic assignment models with multipath flows. The solution method consists of network abstraction, tree search, and network disaggregation--collectively referred to as the "hierarchical search algorithm." It is shown that such an algorithm reduces the search space by reducing the number of nodes and links and providing a tighter bound during the tree search. It also groups detailed links according to the function they perform--whether it be access/egress, line-haul, bypass, or internal circulation. However, the algorithm yields only a suboptimal solution, the quality of which is measured by an error function. The metropolitan network of Taipei, Taiwan, Republic of China, is used as a case study to verify some of the algorithmic properties, confirming its role in real-world applications. Finally, the performance of the algorithm, which is based on network abstraction, is favorably compared with a network-extraction network-design model.

Journal ArticleDOI
TL;DR: In comparison with A* search, the hypothesis, performance, and computational complexity of SA are discussed, further unraveling the principle and performance of the statistical heuristic search algorithm SA.
Abstract: In order to further unravel the principle and performance of the statistical heuristic search algorithm SA, this paper discusses the hypothesis, performance, and computational complexity of SA in comparison with A* search.

Journal ArticleDOI
TL;DR: An application of the optimal algorithm for searching for a minimum of a discrete periodic bimodal function of period P to the clustering analysis of a data communication network is provided.



Proceedings ArticleDOI
13 Dec 1989
TL;DR: Two dual algorithms for solving quadratic transportation problems are discussed and compared, and the performance of the algorithms is improved by combining two known line search techniques, the Bitran-Hax line search algorithm and the sequential line search algorithm.
Abstract: Two dual algorithms for solving quadratic transportation problems are discussed and compared. Both algorithms include a similar line search subproblem. The performance of the algorithms is improved by combining two known line search techniques, the Bitran-Hax line search algorithm and the sequential line search algorithm. Results of a computational study carried out to identify the best method to solve quadratic transportation problems are presented.

01 Jan 1989
TL;DR: By modeling vision problems as state-space search problems, this dissertation improves two critical aspects of vision algorithms: (1) generality, and (2) ease of developing algorithms for new applications.
Abstract: Computer vision is a task of information processing which can be modeled as a canonical sequence of subtasks: preprocessing, labeling, grouping, extracting and matching. A complete vision algorithm can be constructed by synthesizing individual operators performing the subtasks. Two problems in this synthesis are the selection of operators at each step and the coordination between operators. The former has been studied extensively, but little has been done about the latter. This dissertation presents the state-space search method as a coordination scheme for vision algorithms. The approach regards the input image as the start state, image operators as search operators, models recognized by the system as goal states; and the state space consists of start state, goal states and intermediate states. In addition to these components, an algorithm graph is included in the state-space search model. An algorithm graph of a vision problem acts as a specification of the problem and part of a move generator of the search routine. The first contribution made in this dissertation is a new methodology for the design of cost functions based on information distortion. This method reflects the fact that the output of an image recognition process should present what the input presents; the only difference between the input and the output is that they represent the information in different ways. Therefore, information distortion should be used to measure the cost of a path. The second contribution of this dissertation is the development of a parallel search algorithm. The computational costs of state-space search can be reduced by parallelizing the search. A parallel search algorithm V* is presented in this dissertation. The V* algorithm is special in that it provides multiple levels of parallelism. An empirical study is presented which shows what performance improvements can be obtained with the V* algorithm. By modeling vision problems as state-space search problems, this dissertation improves two critical aspects of vision algorithms: (1) generality, and (2) ease of developing algorithms for new applications.

Book ChapterDOI
Jack Mostow1
01 Dec 1989
TL;DR: This work describes an object-oriented representation for heuristic search algorithms that supports inference and transformation, providing a suitable representation for learning by reasoning about the problem solver.
Abstract: Learning by reasoning about the problem solver requires a suitable representation of it. We describe an object-oriented representation for heuristic search algorithms that supports inference and transformation.

Journal ArticleDOI
TL;DR: In this paper, a heuristic search algorithm for finding a planning horizon for a production smoothing problem is proposed, which is simple and straightforward, and hence, is readily understandable by practitioners.

Journal ArticleDOI
TL;DR: The priority system is an attempt to simulate best-first search by means of depth-first iterative-deepening search in a sequent-style back-chaining theorem prover, and it has been shown to be effective.


Book ChapterDOI
26 Sep 1989
TL;DR: This work proposes a formal method to derive optimal and adaptive communication strategies based on Dynamic Programming in Markov Chains to tackle the problem of the diffusion of commonly useful intermediate results between processors linked by “slow” communication lines without shared memory.
Abstract: Algorithms for combinatorial search problems such as the travelling salesman problem, the polynomial problem or game-tree searching have been prime candidates for a distributed implementation as these problems offer substantial parallelism on the program level. However, a number of experimental studies show that naive approaches to distributed combinatorial search tend to yield only a moderate speedup. The problem is to find a communication strategy that is able to limit standstills and superfluous searching of individual processors by distributing the workload and intermediate results among the processors effectively, and that causes only moderate communication overhead. To tackle the problem of the diffusion of commonly useful intermediate results between processors linked by “slow” communication lines without shared memory, we propose a formal method to derive optimal and adaptive communication strategies based on Dynamic Programming in Markov Chains. The key idea of our approach is to compare the running time to be expected under the current contents of the local copy of the shared state with the search and communication effort when acquiring the intermediate result of another processor via an exchange of messages. Using this method we study communication strategies for various distributed combinatorial search problems: for the distributed determination of the maximum of a vector, for a distributed version of a simple variant of alpha-beta pruning, and for distributed branch and bound methods, where we examine the set partitioning problem.
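
The key comparison described in the abstract can be written down directly. The paper derives such strategies formally via dynamic programming in Markov chains, so the fragment below only illustrates the quantities being traded off; all names are assumed for the example.

```python
def should_request_remote_bound(expected_time, local_bound, expected_remote_bound,
                                comm_cost):
    """Decision rule sketched from the paper's key idea (names are illustrative).

    expected_time(bound) estimates the remaining search effort when pruning
    with the given bound.  Fetch the remote intermediate result only when the
    expected saving outweighs the communication (and waiting) overhead.
    """
    time_if_we_keep_searching = expected_time(local_bound)
    time_if_we_ask = comm_cost + expected_time(min(local_bound, expected_remote_bound))
    return time_if_we_ask < time_if_we_keep_searching
```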

Proceedings ArticleDOI
08 Feb 1989
TL;DR: Optical implementations of Boolean matrix operations are described for manipulating the constraint matrices to perform forward-checking and thereby increase the search efficiency.
Abstract: Artificial Intelligence problems are often formulated as search trees. These problems suffer from exponential growth of the solution space with problem size. Consequently, solutions based on exhaustive search are generally impractical. To overcome the combinatorial explosion, forward-checking techniques are employed to prune the search space down to a manageable size before or during the actual search procedure. Boolean matrices are a natural data representation for those problems in which the initial problem information is given in the form of a set of binary constraints. In this paper optical implementations of Boolean matrix operations are described for manipulating the constraint matrices to perform forward-checking and thereby increase the search efficiency. Architectural issues affecting the speed of these techniques are discussed.
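
To illustrate the data representation independently of the optical implementation: a binary constraint between two variables is a Boolean matrix, forward-checking after an assignment is a row selection followed by an elementwise AND, and constraint composition is a Boolean matrix product. The sketch below is a plain software rendering with assumed names.

```python
def forward_check(domains, constraint, value_index):
    """Prune the second variable's domain after fixing the first (sketch).

    domains    : dict var -> list of bool (True = value still allowed)
    constraint : (var_a, var_b, matrix) where matrix[i][j] is True iff
                 value i of var_a is compatible with value j of var_b
    """
    a, b, matrix = constraint
    row = matrix[value_index]                 # values of var_b consistent with var_a = i
    domains[b] = [allowed and row[j] for j, allowed in enumerate(domains[b])]
    return any(domains[b])                    # False signals an empty domain (dead end)

def bool_matmul(A, B):
    """Boolean matrix product: composes two binary constraints."""
    return [[any(A[i][k] and B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]
```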

01 Jan 1989
TL;DR: The paper proposes the idea of combining nonparametric statistical inference methods with heuristic search and presents an algorithm NSA, which may find the goal asymptotically with probability one while retaining a low-order computational complexity.
Abstract: Introducing statistical inference methods into heuristic search may greatly improve its efficiency and decrease the computational complexity. Until now, however, only parametric statistical inference methods have been considered. The effectiveness of statistical heuristic search then depends on the assumption made about the distribution of the statistic U(n), which characterizes the subtree T(n) rooted at node n. As a consequence, the model of statistical heuristic search is largely limited, and its applicability is unduly constrained. To overcome this shortcoming, the paper proposes combining nonparametric statistical inference methods with heuristic search. A nonparametric statistical heuristic search algorithm, NSA, is presented, and its computational complexity is discussed. It is shown that in a uniform m-ary search tree G with a single goal S located at an unknown position at depth N, NSA may find the goal asymptotically with probability one, and the complexity remains O(N·lnN), where N is the length of the solution path.

Book ChapterDOI
11 Dec 1989
TL;DR: The use of approximate algorithms in aiding heuristic search is explored, and it is found that the resulting algorithm expands considerably fewer nodes and yet very often produces optimal solutions.
Abstract: The use of approximate algorithms in aiding heuristic search is explored. Approximate algorithms are coupled with algorithm A to generate an algorithm BFPR which maintains provably good quality of solutions. Experiments were performed with the Euclidean Traveling Salesman problem and with the problem of Scheduling Independent Tasks. It is found that the algorithm expands considerably fewer nodes and yet very often produces optimal solutions.
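
One way to read "coupling an approximate algorithm with algorithm A" is to let the approximate solution's cost act as an upper bound that prunes the best-first frontier. The sketch below shows only that idea; the actual BFPR bookkeeping and its quality guarantee are in the paper, and every name here is illustrative.

```python
import heapq
import itertools

def bound_pruned_best_first(start, successors, h, is_goal, approx_solution_cost):
    """Best-first (A-style) search pruned by an approximate solution's cost (sketch).

    approx_solution_cost : cost of a feasible solution from a fast approximate
                           algorithm; a node with f(n) = g(n) + h(n) above it
                           cannot lead to a strictly better solution and is cut.
    """
    counter = itertools.count()                 # tie-breaker for the heap
    frontier = [(h(start), next(counter), start, 0.0, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, _, node, g, path = heapq.heappop(frontier)
        if is_goal(node):
            return path, g
        for child, cost in successors(node):
            g_child = g + cost
            f_child = g_child + h(child)
            if f_child > approx_solution_cost:  # pruning by the approximate bound
                continue
            if g_child < best_g.get(child, float('inf')):
                best_g[child] = g_child
                heapq.heappush(frontier, (f_child, next(counter), child,
                                          g_child, path + [child]))
    return None, approx_solution_cost           # fall back on the approximate solution
```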

Proceedings Article
01 Jan 1989
TL;DR: The authors have developed a method to reduce the code search effort in Vector Quantization by using the continuity of physical signals and their geometric relation, while keeping the minimal distortion condition.
Abstract: The authors have developed a method to reduce the code search effort in Vector Quantization by using the continuity of physical signals and their geometric relation, while keeping the minimal distortion condition. Suppose a codebook designed by the LBG algorithm is given. The algorithm proposed here is based on the assumption that there is some, hopefully good, estimate of the current code, for example, the previous code in the case of a speech signal. The distance between the input vector and the previous code becomes an upper bound on the distortion. Distances among code vectors are tabulated in increasing order for each of them, and used to decrease the number of code vectors examined in the quantization phase. Experimental results on speech and picture data showed that the computational effort was reduced severalfold without increasing distortion compared with the original full search algorithm.
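
The abstract's search-reduction idea (use the previous code as an estimate, take its distance to the input as an upper bound, and walk a precomputed distance table in increasing order with a triangle-inequality cutoff) can be sketched as follows. The function and variable names are assumptions, not the authors' notation.

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def build_neighbor_table(codebook):
    """For each code index, all other codes sorted by increasing distance."""
    return {i: sorted((euclidean(ci, cj), j)
                      for j, cj in enumerate(codebook) if j != i)
            for i, ci in enumerate(codebook)}

def fast_code_search(x, codebook, table, estimate_index):
    """Exact nearest-code search anchored at an estimated code (sketch)."""
    best_j = estimate_index
    best_d = euclidean(x, codebook[estimate_index])   # upper bound on distortion
    d_anchor = best_d                                  # distance from x to the anchor
    for d_anchor_to_cj, j in table[estimate_index]:
        # Triangle inequality: d(x, cj) >= d(anchor, cj) - d(x, anchor).
        if d_anchor_to_cj - d_anchor > best_d:
            break                                      # no remaining code can win
        d = euclidean(x, codebook[j])
        if d < best_d:
            best_d, best_j = d, j
    return best_j, best_d
```

Because the remaining entries in the table are even farther from the anchor, the early exit cannot discard the true nearest code, which is consistent with the minimal-distortion condition mentioned in the abstract.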

Journal ArticleDOI
TL;DR: A new, restricted type of network model is proposed that calculates the probability distribution of output costs generated by the heuristic search algorithm A of Bagchi and Mahanti in terms of the heuristic distribution.

Proceedings ArticleDOI
09 Nov 1989
TL;DR: An edge detection technique that utilizes human heuristic knowledge and integrates local threshold and local contrast information was developed and implemented; it was concluded that heuristic information reduces the total search effort and that heuristic search has the flexibility to adjust itself to changes in the local contrast and gradient.
Abstract: An edge detection technique that utilizes human heuristic knowledge and integrates local threshold and local contrast information was developed and implemented. A rule-based expert system was used to represent the logic of the heuristic search method and the rule-of-thumb knowledge of a human observer. The result from preliminary testing showed that the ratio of the solution path to the total search effort is approximately one half. The cell edges were accurately detected, even when weak edges were present. It was concluded that heuristic information reduces the total search effort and that heuristic search has the flexibility to adjust itself to changes in the local contrast and gradient.

Proceedings Article
20 Aug 1989
TL;DR: A data structure called a threaded decision graph is discussed, which can be created by preprocessing the search space for some problems and which captures the control knowledge for problem solving; it is shown that by using this method, a great deal of time can be saved during problem solving.
Abstract: Heuristic search procedures are useful in a large number of problems of practical importance. Such procedures operate by searching several paths in a search space at the same time, expanding some paths more quickly than others depending on which paths look most promising. Often large amounts of time are required in keeping track of the control knowledge. For some problems, this overhead can be greatly reduced by preprocessing the problem in appropriate ways. In particular, we discuss a data structure called a threaded decision graph, which can be created by preprocessing the search space for some problems, and which captures the control knowledge for problem solving. We show how this can be done, and we present an analysis showing that by using such a method, a great deal of time can be saved during problem solving processes.