
Showing papers on "Best-first search published in 2008"


Proceedings ArticleDOI
01 Oct 2008
TL;DR: This paper introduces a novel approach for searching in high-dimensional spaces taking into account behaviors drawn from fish schools, and presents simulations where the FSS algorithm is compared with, and in some cases outperforms, well-known intelligent algorithms such as particle swarm optimization in high-dimensional searches.
Abstract: Search problems are sometimes hard to compute. This is mainly due to the high dimensionality of some search spaces. Unless suitable approaches are used, search processes can be time-consuming and ineffective. Nature has evolved many complex systems able to deal with such difficulties. Fish schools, for instance, benefit greatly from the large number of constituent individuals in order to increase mutual survivability. In this paper we introduce a novel approach for searching in high-dimensional spaces taking into account behaviors drawn from fish schools. The derived algorithm - fish-school search (FSS) - is mainly composed of three operators: feeding, swimming and breeding. Together these operators give the resulting computation: (i) wide-ranging search abilities, (ii) automatic capability to switch between exploration and exploitation, and (iii) self-adaptable global guidance for the search process. This paper includes a detailed description of the novel algorithm. Finally, we present simulations where the FSS algorithm is compared with, and in some cases outperforms, well-known intelligent algorithms such as particle swarm optimization in high-dimensional searches.
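The flavor of the operators can be caricatured in a few lines. The sketch below is hypothetical, not the authors' code: it keeps only the individual-swimming and feeding ideas, reducing feeding to greedy acceptance of improving moves, and omits the collective swimming, breeding, and weight-based step adaptation of the published algorithm.

```python
import random

def fss_minimize(f, dim=2, n_fish=20, iters=300, step=0.1, seed=0):
    # Simplified FSS-like sketch (hypothetical): a school of candidate
    # solutions, each taking random "individual swimming" steps that are
    # kept only when they improve f ("feeding" as greedy acceptance).
    rng = random.Random(seed)
    school = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_fish)]
    best = min(school, key=f)
    for _ in range(iters):
        for i, x in enumerate(school):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            if f(cand) < f(x):          # feeding: successful moves are kept
                school[i] = cand
                if f(cand) < f(best):
                    best = cand
    return best
```

On a simple sphere function this sketch already converges near the optimum; the real FSS adds the breeding and collective operators precisely to balance exploration against this kind of exploitation.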

185 citations


Journal ArticleDOI
TL;DR: This paper proposes a way to combine the Mesh Adaptive Direct Search (MADS) algorithm, which extends the Generalized Pattern Search algorithm, with the Variable Neighborhood Search (VNS) metaheuristic, for nonsmooth constrained optimization.
Abstract: This paper proposes a way to combine the Mesh Adaptive Direct Search (MADS) algorithm, which extends the Generalized Pattern Search (GPS) algorithm, with the Variable Neighborhood Search (VNS) metaheuristic, for nonsmooth constrained optimization. The resulting algorithm retains the convergence properties of MADS, and allows the far reaching exploration features of VNS to move away from local solutions. The paper also proposes a generic way to use surrogate functions in the VNS search. Numerical results illustrate advantages and limitations of this method.

182 citations


Journal ArticleDOI
TL;DR: This work generalizes the standard vehicle routing problem with time windows by allowing both traveling times and traveling costs to be time-dependent functions and proposes an algorithm that evaluates solutions in these neighborhoods more efficiently than the ones computing the dynamic programming from scratch.

160 citations


Proceedings Article
13 Jul 2008
TL;DR: It is shown that for a number of common planning benchmark domains, including ones that admit optimal solutions in polynomial time, general search algorithms such as A* must necessarily explore an exponential number of search nodes even under the optimistic assumption of almost perfect heuristic estimators, whose heuristic error is bounded by a small additive constant.

Abstract: Heuristic search using algorithms such as A* and IDA* is the prevalent method for obtaining optimal sequential solutions for classical planning tasks. Theoretical analyses of these classical search algorithms, such as the well-known results of Pohl, Gaschnig and Pearl, suggest that such heuristic search algorithms can obtain better than exponential scaling behaviour, provided that the heuristics are accurate enough. Here, we show that for a number of common planning benchmark domains, including ones that admit optimal solutions in polynomial time, general search algorithms such as A* must necessarily explore an exponential number of search nodes even under the optimistic assumption of almost perfect heuristic estimators, whose heuristic error is bounded by a small additive constant. Our results shed some light on the comparatively bad performance of optimal heuristic search approaches in "simple" planning domains such as GRIPPER. They suggest that in many applications, further improvements in run-time require changes to other parts of the search algorithm than the heuristic estimator.
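The A* procedure whose node expansions the paper counts is standard; a compact textbook version makes the argument concrete. This is a generic implementation, not tied to the planning benchmarks discussed.

```python
import heapq

def astar(start, goal, neighbors, h):
    """Textbook A*: neighbors(n) yields (successor, edge_cost) pairs,
    h is the heuristic estimate to the goal. Returns the cost of a
    cheapest path from start to goal, or None if unreachable."""
    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                      # stale heap entry, skip
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```

The paper's point is that even when h is wrong by only a small additive constant, the loop above can still be forced to pop exponentially many nodes in some benchmark domains.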

118 citations


Journal ArticleDOI
TL;DR: In this paper, a new local search methodology, called Variable Space Search (VSS), was proposed to solve the k-coloring problem, which considers several search spaces, with various neighborhoods and objective functions, and moves from one to another when the search is blocked at a local optimum in a given search space.

105 citations


Journal ArticleDOI
TL;DR: This paper unifies the view of graph search algorithms by showing simple, closely related characterizations of various well-known search paradigms, including BFS and DFS, and these characterizations naturally lead to other search paradigsms, namely, maximal neighborhood search and LexDFS.
Abstract: Graph searching is perhaps one of the simplest and most widely used tools in graph algorithms. Despite this, few theoretical results are known about the vertex orderings that can be produced by a specific search algorithm. A simple characterizing property, such as is known for LexBFS, can aid greatly in devising algorithms, writing proofs of correctness, and showing impossibility results. This paper unifies our view of graph search algorithms by showing simple, closely related characterizations of various well-known search paradigms, including BFS and DFS. Furthermore, these characterizations naturally lead to other search paradigms, namely, maximal neighborhood search and LexDFS.
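The unification the authors describe can be illustrated by the familiar observation that BFS and DFS share one generic template, differing only in the frontier discipline. The sketch below shows that template; it is an illustration of the setting, not the paper's vertex-ordering characterizations.

```python
from collections import deque

def graph_search(adj, start, order="bfs"):
    """Generic graph search over an adjacency dict: BFS and DFS differ
    only in whether the frontier is read as a FIFO queue or a LIFO
    stack. Returns the order in which vertices are visited. (Note the
    stack variant with marking-at-push is one valid DFS-style order,
    not necessarily identical to recursive DFS preorder.)"""
    frontier = deque([start])
    seen = {start}
    visit = []
    while frontier:
        node = frontier.popleft() if order == "bfs" else frontier.pop()
        visit.append(node)
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return visit
```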

82 citations


Journal ArticleDOI
TL;DR: Simulation results indicate the combination of the new reduction algorithm and the new search algorithm can be much more efficient than the existing algorithms, in particular when the least squares residual is large.
Abstract: A box-constrained integer least squares problem (BILS) arises from several wireless communications applications. Solving a BILS problem usually has two stages: reduction (or preprocessing) and search. This paper presents a reduction algorithm and a search algorithm. Unlike the typical reduction algorithms, which use only the information of the lattice generator matrix, the new reduction algorithm also uses the information of the given input vector and the box constraint and is very effective for search. The new search algorithm overcomes some shortcomings of the existing search algorithms and provides further improvements. Simulation results indicate the combination of the new reduction algorithm and the new search algorithm can be much more efficient than the existing algorithms, in particular when the least squares residual is large.
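For intuition, the problem itself can be stated in a few lines. The brute-force enumeration below defines BILS directly; it is exactly the computation that the paper's reduction and search stages are designed to avoid at realistic problem sizes.

```python
from itertools import product

def bils_bruteforce(A, y, lo, hi):
    """Brute-force baseline for box-constrained integer least squares:
    minimize ||A x - y||^2 over integer vectors x with
    lo[i] <= x[i] <= hi[i]. Exponential in the dimension; real solvers
    use lattice reduction plus a sphere-decoding-style search."""
    def residual(x):
        return sum((sum(a * xj for a, xj in zip(row, x)) - yi) ** 2
                   for row, yi in zip(A, y))
    boxes = [range(l, h + 1) for l, h in zip(lo, hi)]
    return min(product(*boxes), key=residual)
```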

78 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: The effectiveness of Nelder-Mead Simplex, Genetic Algorithms, Simulated Annealing, Particle Swarm Optimization, Orthogonal search, and Random search are evaluated in terms of the performance of the best candidate found under varying time limits.
Abstract: This paper describes the application of various search techniques to the problem of automatic empirical code optimization. The search process is a critical aspect of auto-tuning systems because the large size of the search space and the cost of evaluating the candidate implementations makes it infeasible to find the true optimum point by brute force. We evaluate the effectiveness of Nelder-Mead Simplex, Genetic Algorithms, Simulated Annealing, Particle Swarm Optimization, Orthogonal search, and Random search in terms of the performance of the best candidate found under varying time limits.
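One of the compared baselines, random search, fits in a few lines and shows the shape of the tuning loop shared by all of these methods: sample a configuration, pay for an expensive evaluation, keep the best. The parameter names below are made up for illustration.

```python
import random

def random_search_tune(evaluate, space, budget=100, seed=1):
    """Random-search auto-tuning sketch over a discrete parameter
    space. `evaluate(cfg)` is assumed to be expensive (e.g. compile
    and time a candidate implementation) and returns a cost to
    minimize; `budget` caps the number of evaluations."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        cost = evaluate(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost
```

The smarter methods in the paper (simplex, GA, annealing, PSO, orthogonal search) differ only in how the next configuration is chosen from the history of evaluations.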

64 citations


Patent
23 Sep 2008
TL;DR: A search service provides a set of search results in response to a search query from a user, and provides one or more suggested search queries, for selection by the user to generate more search results.
Abstract: A search service provides a set of search results in response to a search query from a user, and provides one or more suggested search queries, for selection by the user to generate more search results, at least one of the suggested search queries having a correspondence with a corresponding subset of one or more of the search results. The set of search results are sent with an indication of the corresponding suggested search queries, for presentation to the user with a visual representation of the correspondence. Such a visual representation can mean locating the suggested search query adjacent to its corresponding search result, and can enable a user to select an appropriate further search query more quickly, with fewer clicks, less time spent reviewing less relevant information, and more efficient use of screen space.

64 citations


Book ChapterDOI
20 Oct 2008
TL;DR: This paper presents a new property-driven pruning algorithm for dynamic model checking that efficiently detects race conditions in multithreaded programs and is both sound and complete (as precise as the dynamic partial order reduction algorithm by Flanagan and Godefroid).
Abstract: We present a new property driven pruning algorithm in dynamic model checking to efficiently detect race conditions in multithreaded programs. The main idea is to use a lockset based analysis of observed executions to help prune the search space to be explored by the dynamic search. We assume that a stateless search algorithm is used to systematically execute the program in a depth-first search order. If our conservative lockset analysis shows that a search subspace is race-free, it can be pruned away by avoiding backtracks to certain states in the depth-first search. The new dynamic race detection algorithm is both sound and complete (as precise as the dynamic partial order reduction algorithm by Flanagan and Godefroid). The algorithm is also more efficient in practice, allowing it to scale much better to real-world multithreaded C programs.

57 citations


Patent
07 Aug 2008
TL;DR: A method of processing a search query includes, for each search context and each scoring primitive, determining a correlation between the scoring primitive and actual user selections of results of previously executed search queries by a plurality of users.
Abstract: A method of processing a search query includes, for each search context of a plurality of search contexts, for each scoring primitive of a plurality of scoring primitives, and for a set of previously executed search queries that are consistent with the search context, determining a correlation between the scoring primitive and actual user selections of results of the previously executed search queries by a plurality of users. For each search context, machine learning is performed on the correlations to identify a predicted performance function comprising a weighted subset of the scoring primitives that meet predefined predictive quality criteria. Executing a user submitted search query includes associating the user submitted search query with a respective search context, and ordering at least a portion of the search results in accordance with the predicted performance function for the search context for the user submitted search query.

Journal ArticleDOI
TL;DR: This paper extends classic and modern real-time search algorithms with an automated mechanism for dynamic depth and subgoal selection, yielding nearly a three-fold improvement in suboptimality over the existing state-of-the-art algorithms and a 15-fold improvement in the amount of planning per action.
Abstract: Real-time heuristic search is a challenging type of agent-centered search because the agent's planning time per action is bounded by a constant independent of problem size. A common problem that imposes such restrictions is pathfinding in modern computer games where a large number of units must plan their paths simultaneously over large maps. Common search algorithms (e.g., A*, IDA*, D*, ARA*, AD*) are inherently not real-time and may lose completeness when a constant bound is imposed on per-action planning time. Real-time search algorithms retain completeness but frequently produce unacceptably suboptimal solutions. In this paper, we extend classic and modern real-time search algorithms with an automated mechanism for dynamic depth and subgoal selection. The new algorithms remain real-time and complete. On large computer game maps, they find paths within 7% of optimal while on average expanding roughly a single state per action. This is nearly a three-fold improvement in suboptimality over the existing state-of-the-art algorithms and, at the same time, a 15-fold improvement in the amount of planning per action.
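The real-time constraint can be made concrete with a minimal LRTA*-style agent: bounded one-step lookahead, a learned heuristic table, and one committed move per planning slice. This is the classic scheme that the paper extends with dynamic depth and subgoal selection, not the paper's algorithm itself.

```python
def lrta_star(start, goal, neighbors, h0, max_moves=1000):
    """Minimal LRTA*-style real-time search: per action, look one step
    ahead, raise the stored heuristic of the current state to the best
    successor f-value (the learning update), and move. h0 gives the
    initial heuristic; neighbors(s) yields (successor, cost) pairs."""
    h = {}                                # learned heuristic overrides
    def h_val(s):
        return h.get(s, h0(s))
    state, path = start, [start]
    while state != goal and len(path) <= max_moves:
        succ = [(cost + h_val(nxt), nxt) for nxt, cost in neighbors(state)]
        best_f, best_next = min(succ)
        h[state] = max(h_val(state), best_f)   # learning update
        state = best_next
        path.append(state)
    return path
```

Because planning per action is constant, solution quality depends on how much the heuristic has been learned; the paper's depth and subgoal mechanisms attack exactly that suboptimality.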

Journal ArticleDOI
TL;DR: The proposed multi-criteria iterated greedy search algorithm iterates over a multicriteria constructive heuristic approach to yield a set of Pareto-efficient solutions (a posteriori approach) and is compared against the best-so-far heuristic for the problem under consideration.
Abstract: In this paper, we tackle the problem of total flowtime and makespan minimisation in a permutation flowshop. For this, we introduce a multi-criteria iterated greedy search algorithm. This algorithm iterates over a multicriteria constructive heuristic approach to yield a set of Pareto-efficient solutions (a posteriori approach). The proposed algorithm is compared against the best-so-far heuristic for the problem under consideration. The comparison shows the proposal to be very efficient for a wide range of multicriteria performance measures. In addition, an extensive computational experiment is carried out in order to analyse the different parameters of the algorithm. The analysis shows the algorithm to be robust for most of the considered performance measures.
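The iterated greedy skeleton (destruction followed by greedy reconstruction) is simple to state. The sketch below is a hypothetical single-objective makespan version for intuition; the paper's algorithm is multi-criteria and maintains a Pareto set rather than a single incumbent.

```python
import random

def makespan(perm, ptimes):
    """Completion time of the last job on the last machine for a
    permutation flowshop; ptimes[job][machine] is a processing time."""
    m = len(ptimes[0])
    comp = [0.0] * m
    for job in perm:
        comp[0] += ptimes[job][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + ptimes[job][k]
    return comp[-1]

def iterated_greedy(ptimes, iters=100, d=2, seed=0):
    """Iterated greedy sketch: remove d random jobs (destruction),
    greedily reinsert each at its best position (construction), and
    keep the result when it does not worsen the incumbent."""
    rng = random.Random(seed)
    best = list(range(len(ptimes)))
    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for job in removed:
            pos = min(range(len(partial) + 1),
                      key=lambda p: makespan(partial[:p] + [job] + partial[p:],
                                             ptimes))
            partial.insert(pos, job)
        if makespan(partial, ptimes) <= makespan(best, ptimes):
            best = partial
    return best
```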

Book ChapterDOI
01 Jan 2008
TL;DR: This chapter describes how self-adaptation may be used to control not only the parameters defining crossover and mutation, but also the very definition of the local search operators used within hybrid evolutionary algorithms (so-called memetic algorithms).

Abstract: It is well known that the choice of parameter settings for meta-heuristic algorithms has a dramatic impact on their search performance and this has led to considerable interest in various mechanisms that in some way attempt to automatically adjust the algorithm’s parameters for a given problem. Of course this raises the spectre of unsuitable parameters arising from a poor choice of learning/adaptation technique. Within the field of Evolutionary Algorithms, many approaches have been tried, most notably that of “Self-Adaptation”, whereby the heuristic’s parameters are encoded alongside the candidate solution, and acted on by the same forces of evolution. Many successful applications have been reported, particularly in the sub-field of Evolution Strategies for problems in the continuous domain. In this chapter we examine the motivation and features necessary for successful self-adaptive learning to occur. Since a number of works have dealt with the continuous domain, this chapter focusses particularly on the aspects that arise when self-adaptation is applied to combinatorial problems. We describe how self-adaptation may be used to control not only the parameters defining crossover and mutation, but also the very definition of the local search operators used within hybrid evolutionary algorithms (so-called memetic algorithms). On this basis we end by drawing some conclusions and suggestions about how this phenomenon might be translated to work within other search metaheuristics.

Book ChapterDOI
15 Dec 2008
TL;DR: The bidirectional time-dependent search algorithm is extended in order to allow core routing, which is a very effective technique introduced for static graphs that consists in carrying out most of the search on a subset of the original node set.
Abstract: In a recent work [1], we proposed a point-to-point shortest paths algorithm which applies bidirectional search on time-dependent road networks. The algorithm is based on A* and runs a backward search in order to bound the set of nodes that have to be explored by the forward search. In this paper we extend the bidirectional time-dependent search algorithm in order to allow core routing, which is a very effective technique introduced for static graphs that consists in carrying out most of the search on a subset of the original node set. Moreover, we tackle the dynamic scenario where the piecewise linear time-dependent arc cost functions are not fixed, but can have their coefficients updated. We provide extensive computational results to show that our approach is a significant speed-up with respect to the original algorithm, and it is able to deal with the dynamic scenario requiring only a small computational effort to update the cost functions and related data structures.

Journal ArticleDOI
TL;DR: A new heuristic algorithm for the resource-constrained project scheduling problem is proposed, based on the filter-and-fan method incorporating a local search that explores the defined neighborhood space.

Journal ArticleDOI
TL;DR: This work considers the class of Depth First Search algorithms, proposes using upper tolerances to guide the search for optimal solutions, and shows that in most situations tolerance-based algorithms outperform cost-based algorithms.

Proceedings ArticleDOI
30 Nov 2008
TL;DR: A novel heuristic branch and bound search algorithm is introduced to explore the possible priority ordering in real-time on-chip communication and can effectively reduce the search space.
Abstract: Wormhole switching with fixed priority preemption has been proposed as a possible solution for real-time on-chip communication. However, none of the current priority assignment policies works well in on-chip networks due to some inherent properties of the protocol. In this paper, a novel heuristic branch and bound search algorithm is introduced to explore the possible priority orderings. Differing from the traditional exhaustive algorithm, which has exponential complexity, our algorithm can effectively reduce the search space. In addition, this algorithm can ensure that if a priority ordering exists that makes the traffic-flows schedulable, this priority ordering will be found by the search algorithm. By combining it with schedulability analysis, a broad class of real-time communication with different QoS requirements can be explored and developed in a SoC/NoC communication platform.

Journal ArticleDOI
TL;DR: The CSP is represented as a meta-tree CSP structure that is used as a hierarchy of communication by the authors' distributed algorithm and it is shown that the distributed algorithm outperforms well-known centralized algorithms.

Book ChapterDOI
TL;DR: A comparison of the performance between PN, PN2, PDS, and αβ is given, and it is shown that PN-search algorithms clearly outperform αβ in solving endgame positions in the game of Lines of Action (LOA).
Abstract: Proof-Number (PN) search is a best-first adversarial search algorithm especially suited for finding the game-theoretical value in game trees. The strategy of the algorithm may be described as developing the tree in the direction where the opposition, characterised by value and branching factor, is expected to be weakest. In this chapter we start by providing a short description of the original PN-search method, and two main successors of the original PN search, i.e., PN2 search and the depth-first variant Proof-number and Disproof-number Search (PDS). A comparison of the performance between PN, PN2, PDS, and αβ is given. It is shown that PN-search algorithms clearly outperform αβ in solving endgame positions in the game of Lines of Action (LOA). However, memory problems make the plain PN search a weaker solver for harder problems. PDS and PN2 are able to solve significantly more problems than PN and αβ. But PN2 is restricted by its working memory, and PDS is considerably slower than PN2. Next, we present a new proof-number search algorithm, called PDS-PN. It is a two-level search (like PN2), which performs at the first level a depth-first PDS, and at the second level a best-first PN search. Hence, PDS-PN selectively exploits the power of both PN2 and PDS. Experiments show that within an acceptable time frame PDS-PN is more effective for really hard endgame positions. Finally, we discuss the depth-first variant df-pn. As a follow up of the comparison of the four PN variants, we compare the algorithms PDS and df-pn. However, the hardware conditions of the comparison were different. Yet, experimental results provide promising prospects for df-pn. We conclude the article with seven observations, three conclusions, and four suggestions for future research.

Proceedings ArticleDOI
Xian-Jun Shi1, Hong Lei1
19 Dec 2008
TL;DR: Experimental results show that the genetic algorithm proposed in this paper is suitable for classification rule mining and that the rules discovered by the algorithm have higher classification performance on unknown data.

Abstract: The goal of data mining is to extract knowledge from large databases. To extract this knowledge, a database may be considered as a large search space, and a mining algorithm as a search strategy. In general, a search space consists of an enormous number of elements, which makes it infeasible to search exhaustively. As a search strategy, genetic algorithms have been applied successfully in many fields. In this paper, we present a genetic algorithm-based approach for mining classification rules from large databases. To emphasize the predictive accuracy, comprehensibility and interestingness of the rules, and to simplify the implementation of the genetic algorithm, we discuss in detail the design of the encoding, genetic operators and fitness function for this task. Experimental results show that the genetic algorithm proposed in this paper is suitable for classification rule mining and that the rules discovered by the algorithm have higher classification performance on unknown data.
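A generic GA skeleton shows where the paper's design choices (encoding, operators, fitness) plug in. In the sketch below, the chromosome is a plain bitstring and the fitness is an arbitrary function supplied by the caller; in the paper's setting the bitstring would encode a classification rule and the fitness would combine accuracy, comprehensibility and interestingness.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, gens=60,
                      p_cx=0.9, p_mut=0.02, seed=0):
    """Generic GA sketch: tournament selection, one-point crossover,
    bit-flip mutation. Returns the fittest individual of the final
    population. Parameter values are illustrative defaults."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < p_cx:                 # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):             # bit-flip mutation
                    if rng.random() < p_mut:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)
```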

Proceedings ArticleDOI
21 Apr 2008
TL;DR: This study shows, through mathematical proof, that the optimal setting of BF in terms of traffic cost is determined by the global statistical information of keywords, not the minimized false positive rate as claimed by previous methods.
Abstract: Current search mechanisms of DHT-based P2P systems can well handle a single keyword search problem. Other than single keyword search, multi-keyword search is quite popular and useful in many real applications. Simply using the solution for single keyword search will require distributed intersection/union operations in wide area networks, leading to unacceptable traffic cost. As it is well known that Bloom Filter (BF) is effective in reducing traffic, we would like to use BF encoding to handle multi-keyword search. Applying BF is not difficult, but how to get optimal results is not trivial. In this study we show, through mathematical proof, that the optimal setting of BF in terms of traffic cost is determined by the global statistical information of keywords, not the minimized false positive rate as claimed by previous methods. Through extensive experiments, we demonstrate how to obtain optimal settings. We further argue that the intersection order between sets is important for multi-keyword search. Thus, we design optimal order strategies based on BF for both "and" and "or" queries. To better evaluate the performance of this design, we conduct extensive simulations on the TREC WT10G test collection and the query log of a commercial search engine. Results show that our design significantly reduces the search traffic of the existing approach by 73%.
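A minimal Bloom filter shows the mechanism being tuned. This generic sketch (sha1-derived hash positions, fixed m and k, all values illustrative) says nothing about the paper's contribution, which is how to size and order the filters using global keyword statistics.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over an m-bit integer with k hash
    positions derived from sha1. In the multi-keyword setting, a peer
    would BF-encode its keyword's document-ID set and ship the filter
    instead of the set; membership tests may yield false positives,
    which are filtered out in a final exact pass."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))
```

For an "and" query, one peer's candidate IDs are checked against the other keyword's filter, so only probable matches cross the network; the paper proves the traffic-optimal m and k depend on global keyword statistics, not just on minimizing the false-positive rate.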

Book ChapterDOI
07 Jul 2008
TL;DR: This work presents two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created, using a deterministic and a randomized method.
Abstract: We present two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm takes O(m^{1/2}) amortized time per arc and our second algorithm takes O(n^{2.5}/m) amortized time per arc, where n is the number of vertices and m is the total number of arcs. For sparse graphs, our O(m^{1/2}) bound improves the best previous bound by a factor of log n and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the two-way search method of previous algorithms does not require an ordered search, but can be more general, allowing us to avoid the use of heaps (priority queues). Instead, the deterministic version of our algorithm uses (approximate) median-finding; the randomized version of our algorithm uses uniform random sampling. For dense graphs, our O(n^{2.5}/m) bound improves the best previously published bound by a factor of n^{1/4} and a recent bound obtained independently of our work by a factor of log n. Our main insight is that graph search is wasteful when the graph is dense and can be avoided by searching the topological order space instead. Our algorithms extend to the maintenance of strong components, in the same asymptotic time bounds.
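To fix the setting, a simple online algorithm of the local-reordering kind (in the style of Pearce and Kelly) is sketched below: when a new arc u->v violates the current order, search the affected region between v and u and shift the vertices reachable from v past u. This naive sketch illustrates the problem only; it does not achieve the amortized bounds of the paper.

```python
def add_arc(order, adj, u, v):
    """Insert arc u->v into a DAG kept in topological order `order`
    (a list) with adjacency dict `adj`. Returns True on success after
    locally repairing the order, or False (and rolls back) if the arc
    would create a cycle. Naive O(affected region) per insertion."""
    adj.setdefault(u, set()).add(v)
    pos = {node: i for i, node in enumerate(order)}
    if pos[u] < pos[v]:
        return True                       # order still valid, nothing to do
    # Forward search from v restricted to the window [pos[v], pos[u]].
    stack, seen = [v], set()
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        if n == u:                        # u reachable from v: a cycle
            adj[u].discard(v)
            return False
        for w in adj.get(n, ()):
            if pos[w] <= pos[u] and w not in seen:
                stack.append(w)
    # Shift vertices reachable from v to just after the rest of the
    # window, preserving relative order within each group.
    window = order[pos[v]:pos[u] + 1]
    region = [n for n in window if n in seen]
    rest = [n for n in window if n not in seen]
    order[pos[v]:pos[u] + 1] = rest + region
    return True
```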

Book ChapterDOI
20 May 2008
TL;DR: This paper introduces a simple hybrid algorithm for job-shop scheduling that leverages both the fast, broad search capabilities of modern tabu search and the scheduling-specific inference capabilities of constraint programming.
Abstract: Since their introduction, local search algorithms - and in particular tabu search algorithms - have consistently represented the state-of-the-art in solution techniques for the classical job-shop scheduling problem. This is despite the availability of powerful search and inference techniques for scheduling problems developed by the constraint programming community. In this paper, we introduce a simple hybrid algorithm for job-shop scheduling that leverages both the fast, broad search capabilities of modern tabu search and the scheduling-specific inference capabilities of constraint programming. The hybrid algorithm significantly improves the performance of a state-of-the-art tabu search for the job-shop problem, and represents the first instance in which a constraint programming algorithm obtains performance competitive with the best local search algorithms. Further, the variability in solution quality obtained by the hybrid is significantly lower than that of pure local search algorithms. As an illustrative example, we identify twelve new best-known solutions on Taillard's widely studied benchmark problems.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a greedy search algorithm for determining optimal break points for time-of-day based coordinated actuated traffic signal operations using a feature vector of optimal cycle length per time interval instead of traffic volume.
Abstract: This paper presents the development and evaluation of a procedure for determining optimal break points for time-of-day based coordinated actuated traffic signal operations. The proposed procedure uses a feature vector of optimal cycle length per time interval instead of traffic volume itself. Initial break points determined by the proposed feature vector are used in the greedy search algorithm to obtain optimal break points. By using the greedy search algorithm the number of evaluations in the search are dramatically reduced when compared to an exhaustive search or other common heuristic search methods such as a genetic algorithm. The proposed procedure was evaluated using a hypothetical network consisting of four signalized intersections and the results indicated that the proposed procedure effectively improved the performance of the coordinated actuated signal control. In addition, sensitivity analyses results on randomly varying demand conditions up to ±20% and randomly increased demand conditions up to 30% indicated that the newly developed break points were robust for such varying demand fluctuations.

Journal ArticleDOI
01 Sep 2008
TL;DR: This work identifies the optimum search set for expanding ring search (ERS) in wireless networks for the scenarios where the source is at the center of a circular region and the destination is randomly chosen within the entire network.
Abstract: We focus on the problem of finding the best search set for expanding ring search (ERS) in wireless networks. ERS is widely used to locate randomly selected destinations or information in wireless networks such as wireless sensor networks. In ERS, controlled flooding is employed to search for the destinations in a region limited by a time-to-live (TTL) before the searched region is expanded. The performance of such ERS schemes depends largely on the search set, the set of TTL values that are used sequentially to search for one destination. Using a cost function of searched area size, we identify, through analysis and numerical calculations, the optimum search set for the scenarios where the source is at the center of a circular region and the destination is randomly chosen within the entire network. When the location of the source node and the destination node are both randomly distributed, we provide an almost-optimal search set. This search set guarantees the search cost to be at most 1% higher than the minimum search cost, when the network radius is relatively large.
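The trade-off a search set optimizes can be seen in a toy expected-cost model: flooding with TTL t costs on the order of t^2 (the covered area), and a ring's cost is paid only when the previous ring missed the destination. The calculator below is illustrative only; it is not the paper's cost function or analysis.

```python
def ers_cost(ttls, R=10):
    """Toy expected-cost model for an ERS search set (assumptions:
    destination uniform in a disk of radius R around the source,
    flooding with TTL t costs t^2, rings tried in increasing order,
    the last TTL must cover the whole disk)."""
    expected = 0.0
    prev = 0
    for t in sorted(ttls):
        p_not_found = 1 - (prev / R) ** 2   # destination outside previous ring
        expected += t * t * p_not_found
        prev = t
    return expected
```

Even this crude model shows why the choice of TTL values matters: different search sets over the same disk give different expected costs, and the paper derives the set minimizing the true cost function analytically.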

Journal ArticleDOI
TL;DR: EZSearch is a hierarchical approach that organizes the network into a hierarchy in a way fundamentally different from existing search techniques, and is based on Zigzag, a P2P overlay architecture known for its scalability and robustness under network growth and dynamics.

Patent
01 Oct 2008
TL;DR: A criterion for organizing search results into clusters that present their content is determined and applied, and the results are presented in clusters, each representing a category of the applied criterion.
Abstract: Intelligently sorting search results includes retrieving search results according to a search. A criterion to apply to the search results to organize the search results into clusters that present content of the search results is determined. The criterion is applied to the search results, and the search results are presented in clusters. Each cluster represents a category of the criterion applied to the search results.

Proceedings ArticleDOI
18 Oct 2008
TL;DR: This paper uses a hybrid genetic learning algorithm, previously applied to a function optimization problem, to train a Pi-sigma neural network, and the algorithm is proved to converge to the global optimum with probability 1.

Abstract: This paper uses a hybrid genetic learning algorithm to train a Pi-sigma neural network; the algorithm was previously applied to a function optimization problem. The hybrid genetic learning algorithm incorporates the strong global search of a genetic algorithm into the strong local search of the flexible polyhedron method, and can find the global optimum faster than a standard genetic algorithm. The experiments show that the hybrid genetic algorithm achieves better performance. Finally, the hybrid genetic algorithm is proved to converge to the global optimum with probability 1.

Book ChapterDOI
TL;DR: This work presents an algorithm that is inspired by theoretical and empirical results in social learning and swarm intelligence research, and provides experimental evidence that shows that the algorithm can find good solutions very rapidly without compromising its global search capabilities.
Abstract: We present an algorithm that is inspired by theoretical and empirical results in social learning and swarm intelligence research. The algorithm is based on a framework that we call incremental social learning. In practical terms, the algorithm is a hybrid between a local search procedure and a particle swarm optimization algorithm with growing population size. The local search procedure provides rapid convergence to good solutions while the particle swarm algorithm enables a comprehensive exploration of the search space. We provide experimental evidence that shows that the algorithm can find good solutions very rapidly without compromising its global search capabilities.