
Showing papers on "Incremental heuristic search published in 2000"


Journal ArticleDOI
TL;DR: In this article, a simple local search heuristic was proposed to obtain polynomial-time approximation bounds for metric versions of the k-median problem and the uncapacitated facility location problem.

441 citations
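
The local search idea analyzed in this paper can be illustrated with a small sketch. The Python below is not the authors' algorithm, just a minimal single-swap local search for k-median on hypothetical data; the swap rule (replace one median whenever the swap strictly lowers total cost) is the kind of move whose approximation guarantee the paper studies.

```python
import random

def kmedian_cost(points, medians, dist):
    """Total cost: each point pays the distance to its nearest median."""
    return sum(min(dist(p, m) for m in medians) for p in points)

def local_search_kmedian(points, k, dist, seed=0):
    """Single-swap local search: start from k random medians and repeatedly
    swap one median for a non-median whenever that strictly lowers the cost."""
    rng = random.Random(seed)
    medians = set(rng.sample(points, k))
    improved = True
    while improved:
        improved = False
        current = kmedian_cost(points, medians, dist)
        for out in list(medians):
            for cand in points:
                if cand in medians:
                    continue
                trial = (medians - {out}) | {cand}
                if kmedian_cost(points, trial, dist) < current:
                    medians, improved = trial, True
                    break
            if improved:
                break
    return medians

# Toy usage: points on a line, Euclidean distance, k = 2.
pts = [0.0, 1.0, 1.5, 8.0, 8.2, 20.0]
d = lambda a, b: abs(a - b)
best = local_search_kmedian(pts, k=2, dist=d)
print(sorted(best), kmedian_cost(pts, best, d))
```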


Journal ArticleDOI
16 May 2000
TL;DR: This paper proposes three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic that incorporates novel optimizations that improve efficiency greatly.
Abstract: Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive, exploring a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.

414 citations
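
As a rough illustration of the greedy idea (not the Volcano-based algorithms themselves, which operate over an AND-OR plan DAG), the sketch below repeatedly materializes the shared sub-expression with the largest net cost saving. All names and cost numbers are hypothetical.

```python
def greedy_materialize(candidates, mat_cost, benefit):
    """Greedy sketch for multi-query optimization: repeatedly pick the shared
    sub-expression whose materialization yields the largest net reduction in
    total evaluation cost, stopping when no candidate pays for itself.

    candidates : set of shareable sub-expression ids
    mat_cost   : mat_cost(s) -> cost of materializing s once
    benefit    : benefit(s, chosen) -> total cost saved across the query
                 batch if s is materialized in addition to `chosen`
    """
    chosen = set()
    while True:
        best, best_gain = None, 0.0
        for s in candidates - chosen:
            gain = benefit(s, chosen) - mat_cost(s)
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:          # no candidate helps any more
            break
        chosen.add(best)
    return chosen

# Hypothetical numbers: s1 saves 100 across the batch, s2 saves only 30.
costs = {"s1": 20.0, "s2": 40.0}
saves = {"s1": 100.0, "s2": 30.0}
print(greedy_materialize(set(costs), lambda s: costs[s],
                         lambda s, chosen: saves[s]))   # -> {'s1'}
```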


Proceedings ArticleDOI
14 Apr 2000
TL;DR: A new admissible heuristic for planning is formulated and used to guide an IDA* search, and the resulting optimal planner is empirically evaluated over a number of domains.
Abstract: HSP and HSPr are two recent planners that search the state-space using a heuristic function extracted from Strips encodings. HSP does a forward search from the initial state recomputing the heuristic in every state, while HSPr does a regression search from the goal computing a suitable representation of the heuristic only once. Both planners have shown good performance, often producing solutions that are competitive in time and number of actions with the solutions found by Graphplan and SAT planners. HSP and HSPr, however, are not optimal planners. This is because the heuristic function is not admissible and the search algorithms are not optimal. In this paper we address this problem. We formulate a new admissible heuristic for planning, use it to guide an IDA* search, and empirically evaluate the resulting optimal planner over a number of domains. The main contribution is the idea underlying the heuristic that yields not one but a whole family of polynomial and admissible heuristics that trade accuracy for efficiency. The formulation is general and sheds some light on the heuristics used in HSP and Graphplan, and their relation. It exploits the factored (Strips) representation of planning problems, mapping shortest-path problems in state-space into suitably defined shortest-path problems in atom-space. The formulation applies with little variation to sequential and parallel planning, and problems with different action costs.

370 citations
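
The planner described above guides IDA* with an admissible heuristic. The sketch below is a generic IDA* skeleton on an explicit graph, with a hypothetical admissible heuristic table standing in for the atom-space heuristics the paper derives.

```python
import math

def ida_star(start, goal, successors, h):
    """Generic IDA*: iteratively deepen on f = g + h with an admissible h.
    successors(state) yields (next_state, step_cost) pairs."""
    def dfs(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f, None                  # cutoff: report smallest overflow
        if state == goal:
            return f, list(path)
        next_bound = math.inf
        for nxt, cost in successors(state):
            if nxt in path:                 # avoid cycles on the current path
                continue
            path.append(nxt)
            t, plan = dfs(nxt, g + cost, bound, path)
            path.pop()
            if plan is not None:
                return t, plan
            next_bound = min(next_bound, t)
        return next_bound, None

    bound = h(start)
    while True:
        bound, plan = dfs(start, 0, bound, [start])
        if plan is not None or bound == math.inf:
            return plan

# Usage on a tiny graph; h is a hypothetical admissible lower bound.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 2)], "D": []}
h = {"A": 3, "B": 2, "C": 2, "D": 0}
print(ida_star("A", "D", lambda s: graph[s], lambda s: h[s]))  # optimal path A-B-C-D
```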


Proceedings Article
14 Apr 2000
TL;DR: The formulation of planning as heuristic search with heuristics derived from problem representations is made explicit in the context of planning with incomplete information, tested over a number of domains, and extended to tasks like planning with sensing where the standard search algorithms do not apply.
Abstract: The formulation of planning as heuristic search with heuristics derived from problem representations has turned out to be a fruitful approach for classical planning. In this paper, we pursue a similar idea in the context of planning with incomplete information. Planning with incomplete information can be formulated as a problem of search in belief space, where belief states can be either sets of states or, more generally, probability distributions over states. While the formulation (like the formulation of classical planning as heuristic search) is not particularly novel, the contribution of this paper is to make it explicit, to test it over a number of domains, and to extend it to tasks like planning with sensing where the standard search algorithms do not apply. The resulting planner appears to be competitive with the most recent conformant and contingent planners (e.g., CGP, SGP, and CMBP) while at the same time being more general, as it can handle probabilistic actions and sensing, different action costs, and epistemic goals.

354 citations


Journal ArticleDOI
TL;DR: The proposed local search method is based on a tabu search technique and on the shifting bottleneck procedure, which is used to generate the initial solution and to refine successive solutions.

220 citations


Patent
07 Dec 2000
TL;DR: A server-based peer-to-peer search is proposed to augment conventional search engine results with P2P search results: the results from both search processes are combined so that the user receives an augmented search result with more information than either process would return by itself.
Abstract: A method and system for augmenting conventional search engine results with peer-to-peer search results. Rather than relying solely on an index search in a database that has only indexed a minor portion of the entire World Wide Web, a server-based, peer-to-peer search is initiated in conjunction with the index search. The results from both search processes can be combined so that the user receives an augmented search result with more information than a search result from either process by itself. The entities that are involved in the search can also establish financially rewarding relationships. The server operator agrees to share a percentage of its revenue with peer-to-peer nodes as an incentive to join its registered set of root nodes and expand its peer-to-peer connections. The identified sources of information that provided the search hits can be used by the operator of the search engine in a compensation transaction.

186 citations


Journal ArticleDOI
TL;DR: This article presents a detailed comparative analysis of two particularly well-known families of local search algorithms for SAT, the GSAT and WalkSAT architectures, using a benchmark set that contains instances from randomized distributions as well as SAT-encoded problems from various domains.
Abstract: Local search algorithms are among the standard methods for solving hard combinatorial problems from various areas of artificial intelligence and operations research. For SAT, some of the most successful and powerful algorithms are based on stochastic local search, and in the past 10 years a large number of such algorithms have been proposed and investigated. In this article, we focus on two particularly well-known families of local search algorithms for SAT, the GSAT and WalkSAT architectures. We present a detailed comparative analysis of these algorithms' performance using a benchmark set that contains instances from randomized distributions as well as SAT-encoded problems from various domains. We also investigate the robustness of the observed performance characteristics as algorithm-dependent and problem-dependent parameters are changed. Our empirical analysis gives a very detailed picture of the algorithms' performance for various domains of SAT problems; it also reveals a fundamental weakness in some of the best-performing algorithms and shows how this can be overcome.

160 citations
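
For readers unfamiliar with the WalkSAT architecture compared in this article, here is a minimal sketch. It simplifies the flip-selection rule (it minimizes the total number of unsatisfied clauses after the flip rather than the pure break count), so it should be read as an illustration of the idea, not a reference implementation.

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    """WalkSAT-style search: start from a random assignment; repeatedly pick
    an unsatisfied clause and flip either a random variable from it (with
    probability p) or the variable whose flip leaves the fewest clauses
    unsatisfied.  Literals are non-zero ints in DIMACS style."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def sat(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    def unsat_after_flip(var):
        assign[var] = not assign[var]
        broken = sum(1 for c in clauses if not sat(c))
        assign[var] = not assign[var]
        return broken

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                      # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))      # random-walk move
        else:
            var = min((abs(l) for l in clause), key=unsat_after_flip)
        assign[var] = not assign[var]
    return None                                # no solution within the budget

# Usage: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```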


Book ChapterDOI
11 Oct 2000
TL;DR: A new heuristic method to evaluate planning states, based on solving a relaxation of the planning problem, that outperforms all state-of-the-art planners on a large range of domains.
Abstract: We present a new heuristic method to evaluate planning states, which is based on solving a relaxation of the planning problem. The solutions to the relaxed problem give a good estimate for the length of a real solution, and they can also be used to guide action selection during planning. Using this information, we employ a search strategy that combines hill-climbing with systematic search. The algorithm is complete on what we call deadlock-free domains. Though it does not guarantee the solution plans to be optimal, it does find close to optimal plans in most cases. Often, it solves the problems almost without any search at all. In particular, it outperforms all state-of-the-art planners on a large range of domains.

79 citations
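
A very rough sketch of the kind of state evaluation described above: ignore delete effects and count how many action applications are needed before all goal atoms become reachable. The STRIPS-style encoding and the example facts are hypothetical, and the real planner extracts an explicit relaxed plan rather than this coarse count.

```python
def relaxed_plan_length(state, goal, actions):
    """Goal-distance estimate from the delete relaxation: ignore delete
    effects and count action applications needed to make all goal atoms
    reachable.  `actions` are (preconditions, add-effects) pairs of
    frozensets of atoms.  Returns None if the goal is relaxed-unreachable."""
    reached = set(state)
    count = 0
    while not goal <= reached:
        progress = False
        for pre, add in actions:
            if pre <= reached and not add <= reached:
                reached |= add
                count += 1
                progress = True
        if not progress:
            return None
    return count

# Hypothetical chain of actions: at-a -> at-b -> at-c -> have-key.
acts = [
    (frozenset({"at-a"}), frozenset({"at-b"})),
    (frozenset({"at-b"}), frozenset({"at-c"})),
    (frozenset({"at-c"}), frozenset({"have-key"})),
]
print(relaxed_plan_length({"at-a"}, {"have-key"}, acts))   # -> 3
```

In a hill-climbing search of the kind sketched in the abstract, this estimate would be computed for each successor state and the search would greedily move to a successor with a lower value.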


Proceedings ArticleDOI
03 Sep 2000
TL;DR: A new sub-optimal subset search method for feature selection is introduced that is not dependent on a pre-specified direction of search (forward or backward) and is usable in real-time systems.
Abstract: A new sub-optimal subset search method for feature selection is introduced. As opposed to other subset selection methods, the oscillating search is not dependent on a pre-specified direction of search (forward or backward). The generality of the oscillating search concept allowed us to define several different algorithms suitable for different purposes. We can specify the need to obtain good results in a very short time, or let the algorithm search more thoroughly to obtain near-optimum results. In many cases the oscillating search outperformed all the other tested methods. The oscillating search may be restricted by a preset time limit, which makes it usable in real-time systems.

76 citations
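
A simplified sketch of the oscillating idea: keep a subset of fixed size d and alternately apply remove-then-add and add-then-remove perturbations of growing depth, resetting the depth whenever the criterion improves. The greedy add/remove steps and the toy scoring function below are my own simplifications, not the algorithms defined in the paper.

```python
def oscillating_search(features, d, score, max_depth=2):
    """Oscillating search sketch: keep a working subset of fixed size d and
    alternately try remove-then-add (down-swing) and add-then-remove
    (up-swing) perturbations of depth o, resetting o to 1 whenever the
    criterion improves and increasing it otherwise.  `score` maps a frozenset
    of features to a quality value (higher is better).  Assumes
    d + max_depth <= number of features."""
    pool = set(features)

    def grow(subset, k):
        # greedily add k features, one at a time, maximizing the score
        s = set(subset)
        for _ in range(k):
            s.add(max(pool - s, key=lambda f: score(frozenset(s | {f}))))
        return frozenset(s)

    def shrink(subset, k):
        # greedily remove k features, one at a time, maximizing the score
        s = set(subset)
        for _ in range(k):
            s.remove(max(s, key=lambda f: score(frozenset(s - {f}))))
        return frozenset(s)

    current = grow(frozenset(), d)          # greedy initial subset of size d
    o = 1
    while o <= max_depth:
        down = grow(shrink(current, o), o)  # down-swing
        up = shrink(grow(current, o), o)    # up-swing
        best = max((down, up), key=score)
        if score(best) > score(current):
            current, o = best, 1            # improvement: restart oscillation
        else:
            o += 1                          # no improvement: oscillate deeper
    return current

# Toy criterion: (hypothetical) relevance minus a redundancy penalty.
relevance = {"a": 3.0, "b": 2.6, "c": 2.5, "d": 1.0}
redundant = [frozenset({"a", "b"}), frozenset({"a", "c"})]
def toy_score(s):
    return sum(relevance[f] for f in s) - sum(2.0 for r in redundant if r <= s)
print(sorted(oscillating_search(relevance, d=2, score=toy_score)))  # ['b', 'c']
```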


01 Jan 2000
TL;DR: The main point of the paper is a simple design of the parallel search engine that achieves parallelism by distribution across networked computers.
Abstract: Search in constraint programming is a time consuming task. Search can be speeded up by exploring subtrees of a search tree in parallel. This paper presents distributed search engines that achieve parallelism by distribution across networked computers. The main point of the paper is a simple design of the parallel search engine. Simplicity comes as an immediate consequence of clearly separating search, concurrency, and distribution. The obtained distributed search engines are simple yet offer substantial speedup on standard network computers.

61 citations


Journal ArticleDOI
TL;DR: This paper returns to the hypothesis that understanding of problem structure plays a critical role in successful heuristic search even in the presence of powerful propagators. It examines three heuristic commitment techniques and shows that the two techniques based on dynamic problem-structure analysis achieve superior performance across all experiments.

Proceedings ArticleDOI
14 Aug 2000
TL;DR: A divide-and-conquer geometric approach for constructing optimal search paths for arbitrarily-shaped regions of interest that is both generalizable to multiple search agents and extensible in that additional real-life search requirements can be incorporated into the existing framework.
Abstract: The problem of optimal (or near-optimal) exhaustive search for a moving target is of importance in many civilian and military applications. Search-and-rescue in open sea or in sparsely-populated areas and search missions for previously-spotted enemy targets are just a few examples. Yet, few known algorithms exist for solving this problem and none of them combine the optimal allocation of search effort with the actual computation of trajectories that a searcher must (and physically can) follow. We propose a divide-and-conquer geometric approach for constructing optimal search paths for arbitrarily-shaped regions of interest. The technique is both generalizable to multiple search agents and extensible in that additional real-life search requirements (maneuverability constraints, additional information about the sensor, etc.) can be incorporated into the existing framework. Another novelty of our approach is the ability to optimally deal with a search platform which, due to design constraints, can only perform detection while moving along straight-line sweeps.

Book ChapterDOI
01 Jan 2000
TL;DR: The proposed framework allows designing algorithms to solve a wide range of distribution restoration problems by heuristic search armed with practical rules (say, based on operator experience) to guide the search.
Abstract: Service restoration, system reconfiguration, and other related problems are formulated and solved by heuristic search, which is a search strategy (e.g. depth-first search) armed with practical rules (say, based on operator experience) to guide the search. The proposed framework allows designing algorithms to solve a wide range of distribution restoration problems. System operator procedures can be accommodated as part of the search process. Test results are presented. An illustrative example is given in the Appendix.

Proceedings ArticleDOI
01 Sep 2000
TL;DR: Four different search methods are presented and evaluated on two- and three-dimensional digital images, and it is shown that for a specific application the use of a simple heuristic function leads to a considerable reduction in the number of evaluated nodes as compared with the traditional unidirectional approach.
Abstract: Describes the use of heuristics in the determination of a minimum cost path between two points in digital images. Four different search methods are presented and evaluated on two- and three-dimensional digital images. Experiments show that the number of nodes that are addressed in the search process strongly depends on the discriminative power of the feature used. Furthermore, it is shown that for a specific application, the use of a simple heuristic function leads to a considerable reduction in the number of evaluated nodes as compared with the traditional unidirectional approach.
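
A minimal sketch of heuristic minimum-cost path search on a 2-D cost image, in the spirit of the methods evaluated above: A*-style search over a 4-connected pixel grid with an admissible heuristic (Manhattan distance times the smallest pixel cost). The toy image and cost model are hypothetical.

```python
import heapq

def min_cost_path(cost, start, goal):
    """A*-style search for a minimum-cost 4-connected path through a 2-D
    array of per-pixel costs.  The heuristic never overestimates the
    remaining cost, so the returned path is optimal."""
    rows, cols = len(cost), len(cost[0])
    cmin = min(min(row) for row in cost)
    h = lambda p: cmin * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))

    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue                              # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier,
                                   (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

# Toy "image": low-cost valley along the top row and right column.
img = [[1, 1, 1],
       [9, 9, 1],
       [9, 9, 1]]
print(min_cost_path(img, (0, 0), (2, 2)))   # cost 4 via the top-right corner
```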

Journal ArticleDOI
TL;DR: Computational results indicate that the best versions of the latter heuristics consistently produce optimal or near optimal solutions on test problems.
Abstract: This article considers the preventive flow interception problem (FIP) on a network. Given a directed network with known origin-destination path flows, each generating a certain amount of risk, the preventive FIP consists of optimally locating m facilities on the network in order to maximize the total risk reduction. A greedy search heuristic as well as several variants of an ascent search heuristic and of a tabu search heuristic are presented for the FIP. Computational results indicate that the best versions of the latter heuristics consistently produce optimal or near optimal solutions on test problems. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 287–303, 2000
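
A sketch of the greedy construction mentioned above: place facilities one at a time at the node that intercepts the largest additional amount of path risk. The path and risk data are hypothetical, and the ascent and tabu variants from the article are not shown.

```python
def greedy_flow_interception(paths, risk, m):
    """Greedy sketch for preventive flow interception: place m facilities one
    at a time at the node intercepting the largest additional path risk.
    `paths` maps a path id to the set of nodes it traverses; `risk` maps a
    path id to the risk generated by its flow."""
    located, covered = set(), set()
    candidates = set().union(*paths.values())

    def gain(node):
        return sum(risk[p] for p, nodes in paths.items()
                   if p not in covered and node in nodes)

    for _ in range(m):
        best = max(candidates - located, key=gain)
        if gain(best) <= 0:
            break                              # nothing left to intercept
        located.add(best)
        covered |= {p for p, nodes in paths.items() if best in nodes}
    return located

# Hypothetical origin-destination paths and their risks.
paths = {"p1": {"a", "b", "c"}, "p2": {"b", "d"}, "p3": {"e"}}
risk = {"p1": 5.0, "p2": 3.0, "p3": 2.0}
print(greedy_flow_interception(paths, risk, m=2))   # picks 'b' first, then 'e'
```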

Journal ArticleDOI
TL;DR: A suggestion is made to compare intelligence between contestants (in solving fully enclosed life/death problems) by comparing how much their time to solve problems increases as the difficulty of the problems increases.

Book ChapterDOI
26 Oct 2000
TL;DR: A safe method to select the interesting moves using game definition functions, which has multiple advantages over basic alpha-beta search: it solves more problems, the answers it finds are always correct, it solves problems faster and with fewer nodes, and it is simpler to program than usual heuristic methods.
Abstract: In complex games with a large branching factor such as Go, programs usually use highly selective search methods, heuristically expanding just a few plausible moves in each position. As in early Chess programs, these methods have shortcomings: they often neglect good moves or overlook a refutation. We propose a safe method to select the interesting moves using game definition functions. This method has multiple advantages over basic alpha-beta search: it solves more problems, the answers it finds are always correct, it solves problems faster and with fewer nodes, and it is simpler to program than usual heuristic methods. The only small drawback is the requirement for an abstract analysis of the game. This could be avoided by keeping track of the intersections tested during the search, maybe with a loss of efficacy but with a gain in generality. We give examples and experimental results for the capture game, an important sub-game of the game of Go. The principles underlying the method are not specific to the capture game. The method can also be used with different search algorithms. This algorithm is important for every Go programmer, and is likely to interest other game programmers.

Journal ArticleDOI
TL;DR: This paper presents a tabu search approach to a combinatorial optimization problem, in which the objective is to maximize the production throughput of a high-speed automated placement machine.
Abstract: Combinatorial optimization represents a wide range of real-life manufacturing optimization problems. Due to the high computational complexity, and the usually high number of variables, the solution of these problems imposes considerable challenges. This paper presents a tabu search approach to a combinatorial optimization problem, in which the objective is to maximize the production throughput of a high-speed automated placement machine. Tabu search is a modern heuristic technique widely employed to cope with large search spaces, for which classical search methods would not provide satisfactory solutions in a reasonable amount of time. The developed TS strategies are tailored to address the different issues caused by the modular structure of the machine.
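
A generic tabu search skeleton, for readers unfamiliar with the technique; it is not the machine-specific strategy developed in the paper. The move labels, tenure, and toy objective below are placeholders.

```python
from collections import deque

def tabu_search(initial, neighbors, cost, tenure=7, max_iters=200):
    """Generic tabu search sketch: always move to the best non-tabu neighbor
    (even if it is worse than the current solution), keep recently used moves
    in a fixed-length tabu list, and remember the best solution seen.
    `neighbors(x)` yields (move, new_solution) pairs; a move is any hashable
    label used for the tabu list."""
    current = best = initial
    best_cost = cost(initial)
    tabu = deque(maxlen=tenure)
    for _ in range(max_iters):
        candidates = [(move, x) for move, x in neighbors(current)
                      if move not in tabu or cost(x) < best_cost]  # aspiration
        if not candidates:
            break
        move, current = min(candidates, key=lambda mx: cost(mx[1]))
        tabu.append(move)
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy usage: minimize a bumpy 1-D function over the integers by +/-1 moves,
# labelling each move with the value it reaches so recent values become tabu.
f = lambda x: (x - 7) ** 2 + 5 * (x % 3)
nbrs = lambda x: [(x + 1, x + 1), (x - 1, x - 1)]
print(tabu_search(0, nbrs, f))   # finds the minimum f(6) = 1
```

Allowing worsening moves while forbidding recent ones is what lets the method escape local optima in the large search spaces the abstract refers to.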

Proceedings ArticleDOI
01 Dec 2000
TL;DR: The goal programming model integrated with the genetic algorithm and the stochastic search presents a new approach able to lead the search towards a multi-objective solution.
Abstract: This study presents a new approach to solve multi-response simulation optimization problems. This approach integrates a simulation model with a genetic algorithm heuristic and a goal programming model. The genetic algorithm technique offers a very flexible and reliable tool able to search for a solution within a global context. This method was modified to perform the search considering the mean and the variance of the responses. In this way, the search is performed stochastically, and not deterministically like most of the approaches reported in the literature. The goal programming model integrated with the genetic algorithm and the stochastic search presents a new approach able to lead the search towards a multi-objective solution.

Book ChapterDOI
26 Jul 2000
TL;DR: This work shows how to accurately predict the running time of admissible heuristic search algorithms, as a function of the solution depth and the heuristic evaluation function.
Abstract: In the past several years, significant progress has been made in finding optimal solutions to combinatorial problems. In particular, random instances of both Rubik's Cube, with over 10^19 states, and the 5 × 5 sliding-tile puzzle, with almost 10^25 states, have been solved optimally. This progress is not the result of better search algorithms, but more effective heuristic evaluation functions. In addition, we have learned how to accurately predict the running time of admissible heuristic search algorithms, as a function of the solution depth and the heuristic evaluation function. One corollary of this analysis is that an admissible heuristic function reduces the effective depth of search, rather than the effective branching factor.
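
The running-time prediction mentioned above is, as I understand it, of roughly the following form (my own notation, not the paper's):

```latex
% N_i: number of nodes at depth i of the brute-force search tree (roughly b^i
% for branching factor b); P(v): fraction of states with heuristic value <= v.
E[\text{nodes expanded at cost threshold } d] \;\approx\; \sum_{i=0}^{d} N_i \, P(d - i)
```

One reading of this formula is that the heuristic effectively subtracts a constant (related to its typical value) from the depth d rather than shrinking the branching factor b, which matches the corollary stated in the abstract.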

Posted Content
TL;DR: In this paper, a new neighborhood search heuristic that makes effective use of memory structures in a way that is different from tabu search is introduced, which is applied to the traveling salesperson problem and the subset sum problem.
Abstract: Neighborhood search heuristics like local search and its variants are some of the most popular approaches to solve discrete optimization problems of moderate to large size. Apart from tabu search, most of these heuristics are memoryless. In this paper we introduce a new neighborhood search heuristic that makes effective use of memory structures in a way that is different from tabu search. We report computational experiments with this heuristic on the traveling salesperson problem and the subset sum problem.

Book ChapterDOI
01 Jan 2000
TL;DR: This work exploits the similarities in problems to produce a more general problem formulation and associated solution methods that apply to a broad range of problems.
Abstract: Ring network design problems have many important applications, especially in the field of telecommunications and vehicle routing. Those problems generally consist of constructing a ring network by selecting a node subset and corresponding direct links. Different requirements and objectives lead to various specific types of NP-hard ring network design problems reported in the literature, each with its own algorithms. We exploit the similarities in problems to produce a more general problem formulation and associated solution methods that apply to a broad range of problems. Computational results are reported for an implementation using a meta-heuristics framework with generic components for heuristic search.

Journal ArticleDOI
TL;DR: In this article, a number of new evolutionary algorithms are proposed and modifications are made to several constructive algorithms to cope with non-unique jobs or jobs with multiple demands, and a numerical comparison of these algorithms on real data from a truck assembly line is presented.

01 Jan 2000
TL;DR: The departure operations at a major US hub airport are analyzed, and a discrete event simulation of the departure operations is constructed, finding that to minimize the average time spent in the queue for different traffic conditions, a queue assignment algorithm is needed to maintain an even balance of aircraft in the queues.
Abstract: Two problems relating to the departure problem in air traffic control automation are examined. The first problem that is addressed is the scheduling of aircraft for departure. The departure operations at a major US hub airport are analyzed, and a discrete event simulation of the departure operations is constructed. Specifically, the case where there is a single departure runway is considered. The runway is fed by two queues of aircraft. Each queue, in turn, is fed by a single taxiway. Two salient areas regarding scheduling are addressed. The first is the construction of optimal departure sequences for the aircraft that are queued. Several greedy search algorithms are designed to minimize the total time to depart a set of queued aircraft. Each algorithm has a different set of heuristic rules to resolve situations within the search space whenever two branches of the search tree with equal edge costs are encountered. These algorithms are then compared and contrasted with a genetic search algorithm in order to assess the performance of the heuristics. This is done in the context of a static departure problem where the length of the departure queue is fixed. A greedy algorithm which deepens the search whenever two branches of the search tree with non-unique costs are encountered is shown to outperform the other heuristic algorithms. This search strategy is then implemented in the discrete event simulation. A baseline performance level is established, and a sensitivity analysis is performed by implementing changes in traffic mix, routing, and miles-in-trail restrictions for comparison. It is concluded that to minimize the average time spent in the queue for different traffic conditions, a queue assignment algorithm is needed to maintain an even balance of aircraft in the queues. A necessary consideration is to base queue assignment upon traffic management restrictions such as miles-in-trail constraints. The second problem addresses the technical challenges associated with merging departure aircraft onto their filed routes in a congested airspace environment. Conflicts between departures and en route aircraft within the Center airspace are analyzed. Speed control, holding the aircraft at an intermediate altitude, re-routing, and vectoring are posed as possible deconfliction maneuvers. A cost assessment of these merge strategies, which are based upon 4D flight management and conflict detection and resolution principles, is given. Several merge conflicts are studied and a cost for each resolution is computed. It is shown that vectoring tends to be the most expensive resolution technique. Altitude hold is simple, costs less than vectoring, but may require a long time for the aircraft to achieve separation. Re-routing is the simplest, and provides the most cost benefit since the aircraft flies a shorter distance than if it had followed its filed route. Speed control is shown to be ineffective as a means of increasing separation, but is effective for maintaining separation between aircraft. In addition, the effects of uncertainties on the cost are assessed. The analysis shows that cost is invariant with the decision time.

01 Jan 2000
TL;DR: Results of benchmark tests indicate that this technique performs better than genetic algorithms on a wide range of problems, and the application of the technique to non-parametric optimisation problems is further illustrated using an example from conceptual structural design.
Abstract: "Evolutionary search techniques such as Genetic Algorithms (GA) have recently gained considerable attention. They have been used for solving a wide range of problems including function optimisation and learning. In this paper, a new global search technique, called Probabilistic Global Search (PGS), is presented. Results of benchmark tests indicate that this technique performs better than genetic algorithms on a wide range of problems. PGS is a stochastic search technique. It works by generating points in the search space according to a probability distribution function (PDF) defined over the search space. Each axis is divided into a fixed number of intervals with equal probability density. The probability densities of intervals are modified dynamically so that points are generated with higher probability in regions containing good solutions. The algorithm includes four nested cycles: 1. Sampling 2. Probability updating 3. Focusing 4. Subdomain cycle In the sampling cycle (innermost cycle) a certain number of points are generated randomly according to the current PDF. Each point is evaluated by the user defined objective function and the best point is selected. In the next cycle, probabilities of regions containing good solutions are increased and probabilities decreased in regions containing less attractive solutions. In the third cycle, search is focused on the interval containing the best solution after a number of probability updating cycles, by further subdivision of the interval. In the subdomain cycle, the search space is progressively narrowed by selecting a subdomain of smaller size centred on the best point after each focusing cycle. Each cycle serves a different purpose in the search for a global optimum. The sampling cycle permits a more uniform and exhaustive search over the entire search space than other cycles. Probability updating and focusing cycles refine search in the neighbourhood of good solutions. Convergence is achieved by means of the subdomain cycle. The algorithm was tested on highly non-linear, non-separable functions in ten to hundred variables. Results are compared with those from three versions of GAs. In most cases PGS gives better results in terms of the number of times global optima were found and the number of evaluations required to find them. The application of the technique to non-parametric optimisation problems is further illustrated using an example from conceptual structural design."

Book ChapterDOI
01 Mar 2000
TL;DR: A branch and bound algorithm and a highly effective heuristic search algorithm are proposed; the heuristic usually finds the global minimum of problems of similar size in about one second and should comfortably handle much larger problem instances.
Abstract: We consider algorithms for a simple one-dimensional point placement problem: given N points on a line, and noisy measurements of the distances between many pairs of them, estimate the relative positions of the points. Problems of this flavor arise in a variety of contexts. The particular motivating example that inspired this work comes from molecular biology; the points are markers on a chromosome and the goal is to map their positions. The problem is NP-hard under reasonable assumptions. We present two algorithms for computing least squares estimates of the ordering and positions of the markers: a branch and bound algorithm and a highly effective heuristic search algorithm. The branch and bound algorithm is able to solve to optimality problems of 18 markers in about an hour, visiting about 10^6 nodes out of a search space of 10^16 nodes. The local search algorithm usually was able to find the global minimum of problems of similar size in about one second, and should comfortably handle much larger problem instances.
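
A speculative sketch of the flavor of this problem, not the authors' algorithms: given a candidate ordering, positions are estimated by least squares from the pairwise measurements, and a simple local search over orderings swaps adjacent markers while the residual decreases. All names and measurements below are hypothetical.

```python
import itertools
import random
import numpy as np

def ls_positions(order, dists):
    """Least-squares marker positions for a given ordering: anchor the first
    marker at 0 and solve x_b - x_a ~ d for every measured pair (a, b) with a
    placed before b in the ordering.  Returns (positions, squared residual)."""
    idx = {m: i for i, m in enumerate(order)}
    rows, rhs = [], []
    for (a, b), d in dists.items():
        lo, hi = sorted((a, b), key=idx.get)
        row = np.zeros(len(order))
        row[idx[hi]], row[idx[lo]] = 1.0, -1.0
        rows.append(row)
        rhs.append(d)
    anchor = np.zeros(len(order)); anchor[0] = 1.0   # fix first marker at 0
    rows.append(anchor); rhs.append(0.0)
    A, y = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x, float(np.sum((A @ x - y) ** 2))

def local_search_order(markers, dists, seed=0):
    """Start from a random ordering and keep swapping adjacent markers while
    the least-squares residual decreases."""
    rng = random.Random(seed)
    order = list(markers)
    rng.shuffle(order)
    best = ls_positions(order, dists)[1]
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            cand = order[:i] + [order[i + 1], order[i]] + order[i + 2:]
            res = ls_positions(cand, dists)[1]
            if res < best:
                order, best, improved = cand, res, True
    return order, best

# Hypothetical noisy measurements for true positions a=0, b=2, c=5, d=9.
true = {"a": 0.0, "b": 2.0, "c": 5.0, "d": 9.0}
dists = {(p, q): abs(true[p] - true[q]) + 0.1
         for p, q in itertools.combinations(true, 2)}
print(local_search_order(list(true), dists))
```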

Journal Article
TL;DR: In this paper, the power of a new scheme that generates search heuristics mechanically, first evaluated in the context of optimization in belief networks, is extended to Max-CSP.
Abstract: This paper evaluates the power of a new scheme that generates search heuristics mechanically. This approach was presented and evaluated first in the context of optimization in belief networks. In this paper we extend this work to Max-CSP. The approach involves extracting heuristics from a parameterized approximation scheme called Mini-Bucket elimination that allows a controlled trade-off between computation and accuracy. The heuristics are used to guide Branch-and-Bound and Best-First search, whose performance is compared on a number of constraint problems. Our results demonstrate that both search schemes exploit the heuristics effectively, permitting a controlled trade-off between preprocessing (for heuristic generation) and search. These algorithms are compared with a state-of-the-art complete algorithm as well as with the stochastic local search anytime approach, demonstrating superiority in some problem cases.

Journal ArticleDOI
TL;DR: The architecture of LOCAL++ provides a principled modularization for the solution of combinatorial search problems, and helps the designer derive a neat conceptual scheme of the application, thus facilitating the development and debugging phases.
Abstract: Local search is an emerging paradigm for combinatorial search which has recently been shown to be very effective for a large number of combinatorial problems. It is based on the idea of navigating the search space by iteratively stepping from one solution to one of its neighbors, which are obtained by applying a simple local change to it. In this paper we present LOCAL++, an object-oriented framework to be used as a general tool for the development and implementation of local search algorithms in C++. The framework comprises a hierarchy of abstract template classes, one for each local search technique taken into account (i.e. hill-climbing, simulated annealing and tabu search). Each class specifies and implements the invariant part of the algorithm built according to the technique, and is supposed to be specialized by a concrete class once a given search problem is considered, so as to implement the problem-dependent part of the algorithm. LOCAL++ also comprises a set of abstract classes for creating new techniques by combining different search techniques and different neighborhood relations. The architecture of LOCAL++ provides a principled modularization for the solution of combinatorial search problems, and helps the designer derive a neat conceptual scheme of the application, thus facilitating the development and debugging phases. LOCAL++ proved to be flexible enough for the implementation of algorithms solving various scheduling problems. Copyright © 2000 John Wiley & Sons, Ltd.

Book ChapterDOI
26 Oct 2000
TL;DR: This paper investigates how temporal difference learning and genetic algorithms can be used to improve various decisions made during game-tree search and proposes a modified update rule that uses the TD error of the evaluation function to shorten the lag between two rewards.
Abstract: The strength of a game-playing program is mainly based on the adequacy of the evaluation function and the efficacy of the search algorithm. This paper investigates how temporal difference learning and genetic algorithms can be used to improve various decisions made during game-tree search. The existing TD algorithms are not directly suitable for learning search decisions. Therefore we propose a modified update rule that uses the TD error of the evaluation function to shorten the lag between two rewards. The genetic algorithms can be applied directly to learn search decisions. For our experiments we selected the problem of time allocation from the set of search decisions. On each move the player can decide on a certain search depth, being constrained by the amount of time left. As testing ground, we used the game of Lines of Action, which has roughly the same complexity as Othello. From the results we conclude that both the TD and the genetic approach lead to good results when compared to the existing time-allocation techniques. Finally, a brief discussion of the issues that can emerge when the algorithms are applied to more complex search decisions is given.

Proceedings ArticleDOI
21 Aug 2000
TL;DR: A genetic search algorithm for motion estimation (GSAME) which applies genetic operations to motion estimation is proposed, and the results show that the method not only solves the problem of being trapped in local optima, but also has a speed close to that of TSS.
Abstract: Motion estimation is essential for many interframe video coding techniques, but most of the fast search algorithms for motion estimation are suboptimal and they are susceptible to being trapped in local optima. In this paper, we propose a genetic search algorithm for motion estimation (GSAME) which applies genetic operations to motion estimation, and we also compare GSAME to the three-step search (TSS) and full-search algorithms (FSA). The results show that the method not only solves the problem of being trapped in local optima, but also has a speed close to that of TSS.
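
To make the idea concrete, here is a toy genetic search for a single block's motion vector; the encoding, operators, and parameters are my own simplifications and should not be read as the GSAME algorithm itself.

```python
import random

def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between a block of the current frame at
    (bx, by) and the reference frame displaced by the motion vector (dx, dy)."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(bs):
        for x in range(bs):
            ry, rx = by + y + dy, bx + x + dx
            if not (0 <= ry < h and 0 <= rx < w):
                return float("inf")          # vector leaves the reference frame
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

def ga_motion_vector(cur, ref, bx, by, bs=4, search_range=7,
                     pop_size=12, generations=15, seed=0):
    """Genetic-search sketch for one block's motion vector: chromosomes are
    (dx, dy) displacements, fitness is the SAD matching error, selection is
    by truncation, and mutation jitters a vector by one pixel."""
    rng = random.Random(seed)
    rand_vec = lambda: (rng.randint(-search_range, search_range),
                        rng.randint(-search_range, search_range))
    fitness = lambda v: sad(cur, ref, bx, by, v[0], v[1], bs)
    pop = [rand_vec() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                      # crossover on the two genes
            if rng.random() < 0.3:                    # mutation: one-pixel jitter
                child = (child[0] + rng.choice((-1, 1)),
                         child[1] + rng.choice((-1, 1)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Toy frames: the current frame is the reference shifted by (dx, dy) = (2, 1).
ref = [[(x * 7 + y * 13) % 50 for x in range(16)] for y in range(16)]
cur = [[ref[y + 1][x + 2] if y + 1 < 16 and x + 2 < 16 else 0
        for x in range(16)] for y in range(16)]
print(ga_motion_vector(cur, ref, bx=4, by=4))   # expect a vector near (2, 1)
```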