
Showing papers on "Incremental heuristic search published in 2014"


21 May 2014
TL;DR: Given a fixed state description based on instantiated predicates, a general abstraction scheme is provided to automatically create admissible domain-independent memory-based heuristics for planning problems, where abstractions are found by factorizing the planning space.
Abstract: Heuristic search planning effectively finds solutions for large planning problems, but since the estimates are either not admissible or too weak, optimal solutions are found in rare cases only. In contrast, heuristic pattern databases are known to significantly improve lower bound estimates for optimally solving challenging single-agent problems like the 24-Puzzle or Rubik’s Cube. This paper studies the effect of pattern databases in the context of deterministic planning. Given a fixed state description based on instantiated predicates, we provide a general abstraction scheme to automatically create admissible domain-independent memory-based heuristics for planning problems, where abstractions are found by factorizing the planning space. We evaluate the impact of pattern database heuristics in A* and hill climbing algorithms for a collection of benchmark domains.
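
The factored-abstraction idea can be illustrated with a minimal sketch (not the paper's implementation): project states of a toy grid domain onto a single variable, compute exact abstract goal distances by backward breadth-first search, and use the resulting table as an admissible pattern-database heuristic inside A*. The domain, walls, and names below are illustrative assumptions.

```python
from collections import deque
from heapq import heappush, heappop

# Toy domain: 4-connected grid, state = (x, y); walls block movement.
W = H = 4
WALLS = {(1, 1), (1, 2), (2, 1)}
GOAL = (3, 3)

def neighbors(s):
    x, y = s
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        n = (x + dx, y + dy)
        if 0 <= n[0] < W and 0 <= n[1] < H and n not in WALLS:
            yield n

def build_pdb():
    """Abstract a state to its x coordinate and BFS back from the goal.

    Abstract distances never exceed concrete ones, so the resulting
    table is an admissible, memory-based heuristic."""
    adj = {x: set() for x in range(W)}
    for x in range(W):
        for y in range(H):
            if (x, y) not in WALLS:
                for n in neighbors((x, y)):
                    if n[0] != x:
                        adj[x].add(n[0])
    dist, q = {GOAL[0]: 0}, deque([GOAL[0]])
    while q:
        a = q.popleft()
        for b in adj[a]:
            if b not in dist:
                dist[b] = dist[a] + 1
                q.append(b)
    return dist

PDB = build_pdb()

def astar(start):
    open_list = [(PDB.get(start[0], 0), 0, start)]
    g_best = {start: 0}
    while open_list:
        f, g, s = heappop(open_list)
        if s == GOAL:
            return g
        for n in neighbors(s):
            if n not in g_best or g + 1 < g_best[n]:
                g_best[n] = g + 1
                heappush(open_list, (g + 1 + PDB.get(n[0], 0), g + 1, n))
    return None
```

Here `astar((0, 0))` returns the optimal cost 6, while the abstract table only stores one entry per x-column, which is the space saving that factorization buys.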

299 citations


Proceedings ArticleDOI
18 Jun 2014
TL;DR: This paper proposes a local search strategy, which searches in the neighborhood of a vertex to find the best community for the vertex, and shows that, because the minimum degree measure used to evaluate the goodness of a community is not monotonic, designing efficient local search solutions is a very challenging task.
Abstract: Community search is important in social network analysis. For a given vertex in a graph, the goal is to find the best community the vertex belongs to. Intuitively, the best community for a given vertex should be in the vicinity of the vertex. However, existing solutions use global search to find the best community. These algorithms, although straightforward, are very costly, as all vertices in the graph may need to be visited. In this paper, we propose a local search strategy, which searches in the neighborhood of a vertex to find the best community for the vertex. We show that, because the minimum degree measure used to evaluate the goodness of a community is not monotonic, designing efficient local search solutions is a very challenging task. We present theories and algorithms of local search to address this challenge. The efficiency of our local search strategy is verified by extensive experiments on both synthetic networks and a variety of real networks with millions of nodes.
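
The minimum-degree goodness measure and its non-monotonicity are easy to demonstrate. The sketch below uses a made-up graph (this is not the paper's local search algorithm): growing a candidate community can leave the measure unchanged, or collapse it to zero.

```python
# Adjacency sets of a small undirected graph (a made-up example).
G = {
    'q': {'a', 'b'},
    'a': {'q', 'b', 'c'},
    'b': {'q', 'a', 'c'},
    'c': {'a', 'b'},
    'd': {'e'},
    'e': {'d'},
}

def min_degree(S):
    """Goodness of a candidate community S: minimum degree in the
    subgraph induced by S."""
    return min(sum(1 for v in G[u] if v in S) for u in S)
```

For the triangle `{'q', 'a', 'b'}` the measure is 2; adding `'c'` keeps it at 2, but adding the unrelated vertex `'d'` drops it to 0. This non-monotonic behaviour is exactly what makes pruning in local community search non-trivial.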

252 citations


Journal ArticleDOI
TL;DR: This paper presents a verifiable privacy-preserving multi-keyword text search (MTS) scheme with similarity-based ranking and proposes two secure index schemes to meet the stringent privacy requirements under strong threat models.
Abstract: With the growing popularity of cloud computing, huge amounts of documents are outsourced to the cloud for reduced management cost and ease of access. Although encryption helps protect user data confidentiality, it makes well-functioning yet practically efficient secure search over encrypted data a challenging problem. In this paper, we present a verifiable privacy-preserving multi-keyword text search (MTS) scheme with similarity-based ranking to address this problem. To support multi-keyword search and search result ranking, we propose to build the search index based on term frequency and the vector space model with cosine similarity measure to achieve higher search result accuracy. To improve the search efficiency, we propose a tree-based index structure and various adaptive methods for multi-dimensional (MD) algorithm so that the practical search efficiency is much better than that of linear search. To further enhance the search privacy, we propose two secure index schemes to meet the stringent privacy requirements under strong threat models, i.e., known ciphertext model and known background model. In addition, we devise a scheme upon the proposed index tree structure to enable authenticity check over the returned search results. Finally, we demonstrate the effectiveness and efficiency of the proposed schemes through extensive experimental evaluation.
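
The ranking backbone, term-frequency vectors compared by cosine similarity, can be sketched in plaintext; in the paper the comparable computation happens over encrypted index vectors. The documents and query below are made up.

```python
import math
from collections import Counter

# Tiny plaintext corpus (illustrative only).
docs = {
    "d1": "cloud storage cloud security",
    "d2": "keyword search over encrypted cloud data",
    "d3": "graph algorithms and shortest paths",
}

def tf(text):
    """Raw term-frequency vector of a whitespace-tokenized text."""
    return Counter(text.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    if not dot:
        return 0.0
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def rank(query):
    """Return document ids sorted by similarity to the query."""
    q = tf(query)
    return sorted(docs, key=lambda d: cosine(tf(docs[d]), q), reverse=True)
```

`rank("encrypted cloud search")` places `d2` first (three shared terms) and the unrelated `d3` last; a secure index must preserve this ordering without revealing the vectors.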

243 citations


21 May 2014
TL;DR: In this paper, an algorithm for planning with time and resources based on heuristic search is presented, which minimizes makespan using an admissible heuristic derived automatically from the problem instance.
Abstract: We present an algorithm for planning with time and resources, based on heuristic search. The algorithm minimizes makespan using an admissible heuristic derived automatically from the problem instance. Estimators for resource consumption are derived in the same way. The goals are twofold: to show the flexibility of the heuristic search approach to planning and to develop a planner that combines expressivity and performance. Two main issues are the definition of regression in a temporal setting and the definition of the heuristic estimating completion time. A number of experiments are presented for assessing the performance of the resulting planner.

163 citations


Proceedings Article
02 Jul 2014
TL;DR: In this article, the authors present a novel heuristic search framework, called Multi-Heuristic A* (MHA*), which simultaneously uses multiple, arbitrarily inadmissible heuristic functions and one consistent heuristic to search for complete and bounded suboptimal solutions.
Abstract: We present a novel heuristic search framework, called Multi-Heuristic A* (MHA*), that simultaneously uses multiple, arbitrarily inadmissible heuristic functions and one consistent heuristic to search for complete and bounded suboptimal solutions. This simplifies the design of heuristics and enables the search to effectively combine the guiding powers of different heuristic functions. We support these claims with experimental results on full-body manipulation for PR2 robots.
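
A minimal sketch of the MHA* control loop, under assumed simplifications: a toy grid, one consistent anchor heuristic, and one deliberately inadmissible guide. The anchor queue gates expansions from the other queue, which is what preserves the bounded suboptimality (a factor of w1*w2 here); the domain and both heuristics are illustrative assumptions.

```python
import heapq

GOAL = (4, 4)

def succ(s):
    x, y = s
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        n = (x + dx, y + dy)
        if 0 <= n[0] <= 4 and 0 <= n[1] <= 4:
            yield n, 1                      # unit edge costs

def h_anchor(s):                            # consistent: Manhattan distance
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def h_inad(s):                              # arbitrarily inadmissible guide
    return 3 * max(abs(s[0] - GOAL[0]), abs(s[1] - GOAL[1]))

def mha_star(start, w1=2.0, w2=2.0):
    hs = [h_anchor, h_inad]                 # hs[0] must be consistent
    g = {start: 0}
    opens = [[(w1 * h(start), start)] for h in hs]
    expanded = set()
    while opens[0]:
        for i in range(1, len(hs)):
            if not opens[0]:
                break
            # Expand from an inadmissible queue only while its best key
            # stays within a factor w2 of the anchor queue's best key.
            q = i if opens[i] and opens[i][0][0] <= w2 * opens[0][0][0] else 0
            key, s = heapq.heappop(opens[q])
            if s == GOAL:
                return g[s]                 # cost <= w1 * w2 * optimal
            if s in expanded:
                continue                    # stale entry from another queue
            expanded.add(s)
            for n, c in succ(s):
                if n not in g or g[s] + c < g[n]:
                    g[n] = g[s] + c
                    for j, h in enumerate(hs):
                        heapq.heappush(opens[j], (g[n] + w1 * h(n), n))
    return None
```

On this open grid the optimal cost from (0, 0) is 8, so the returned cost is guaranteed to lie in [8, 32] for w1 = w2 = 2.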

118 citations


Proceedings ArticleDOI
03 Jul 2014
TL;DR: This work mathematically models the dynamics of session search, including decision states, query changes, clicks, and rewards, as a cooperative game between the user and the search engine, framed as a dual-agent stochastic game.
Abstract: Session search is a complex search task that involves multiple search iterations triggered by query reformulations. We observe a Markov chain in session search: user's judgment of retrieved documents in the previous search iteration affects user's actions in the next iteration. We thus propose to model session search as a dual-agent stochastic game: the user agent and the search engine agent work together to jointly maximize their long term rewards. The framework, which we term "win-win search", is based on Partially Observable Markov Decision Process. We mathematically model dynamics in session search, including decision states, query changes, clicks, and rewards, as a cooperative game between the user and the search engine. The experiments on TREC 2012 and 2013 Session datasets show a statistically significant improvement over the state-of-the-art interactive search and session search algorithms.

98 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a heuristic search-based approach to motion planning for manipulation that deals effectively with the high dimensionality of the problem and achieves the necessary efficiency by exploiting the following three key principles: (a) representation of the planning problem with what they call a manipulation lattice graph; (b) use of the ARA* search with provable bounds on solution suboptimality; and (c) use of informative yet fast-to-compute heuristics.
Abstract: Heuristic searches such as the A* search are a popular means of finding least-cost plans due to their generality, strong theoretical guarantees on completeness and optimality, simplicity in implementation and consistent behavior. In planning for robotic manipulation, however, these techniques are commonly thought of as impractical due to the high dimensionality of the planning problem. In this paper, we present a heuristic search-based approach to motion planning for manipulation that does deal effectively with the high dimensionality of the problem. Our approach achieves the necessary efficiency by exploiting the following three key principles: (a) representation of the planning problem with what we call a manipulation lattice graph; (b) use of the ARA* search which is an anytime heuristic search with provable bounds on solution suboptimality; and (c) use of informative yet fast-to-compute heuristics. The paper presents the approach together with its theoretical properties and shows how to apply it to single-arm and dual-arm motion planning with upright constraints on a PR2 robot operating in non-trivial cluttered spaces. An extensive experimental analysis in both simulation and on a physical PR2 shows that, in terms of runtime, our approach is on par with the most common sampling-based approaches despite the high dimensionality of the problems. In addition, the experimental analysis shows that due to its deterministic cost minimization, the approach generates motions that are of good quality and are consistent, in other words, the resulting plans tend to be similar for similar tasks. For many problems, the consistency of the generated motions is important as it helps make the actions of the robot more predictable for a human controlling or interacting with the robot.

81 citations


Proceedings ArticleDOI
06 Feb 2014
TL;DR: This paper develops a heuristic search algorithm called Binary Guided Random Testing (BGRT), and shows that while concrete-testing-based error estimation methods based on maintaining shadow values at higher precision can search out higher error-inducing inputs, suitable heuristic search guidance is key to finding higher errors.
Abstract: Tools for floating-point error estimation are fundamental to program understanding and optimization. In this paper, we focus on tools for determining the input settings to a floating point routine that maximizes its result error. Such tools can help support activities such as precision allocation, performance optimization, and auto-tuning. We benchmark current abstraction-based precision analysis methods, and show that they often do not work at scale, or generate highly pessimistic error estimates, often caused by non-linear operators or complex input constraints that define the set of legal inputs. We show that while concrete-testing-based error estimation methods based on maintaining shadow values at higher precision can search out higher error-inducing inputs, suitable heuristic search guidance is key to finding higher errors. We develop a heuristic search algorithm called Binary Guided Random Testing (BGRT). In 45 of the 48 total benchmarks, including many real-world routines, BGRT returns higher guaranteed errors. We also evaluate BGRT against two other heuristic search methods called ILS and PSO, obtaining better results.
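
A simplified, hypothetical rendition of the idea (not the actual BGRT algorithm): evaluate an expression in single precision against a double-precision shadow value, and combine random sampling with binary interval narrowing to seek inputs that maximize the error. The expression `(x + 1) - x` is a classic case that silently loses the 1 once x exceeds 2^24 in float32.

```python
import random
import struct

random.seed(42)

def f32(x):
    """Round a double to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def expr64(x):
    return (x + 1.0) - x                   # double-precision "shadow" value

def expr32(x):
    x = f32(x)
    return f32(f32(x + 1.0) - x)           # same expression in float32

def error(x):
    return abs(expr64(x) - expr32(x))

def search(lo=0.0, hi=1e8, rounds=12, samples=20):
    """Binary-guided random testing, heavily simplified: repeatedly keep
    the half-interval whose random samples showed the larger error."""
    best = (0.0, lo)                       # (error, input)
    for _ in range(rounds):
        mid = (lo + hi) / 2.0
        halves = [(lo, mid), (mid, hi)]
        scores = []
        for a, b in halves:
            xs = [random.uniform(a, b) for _ in range(samples)]
            scores.append(max((error(x), x) for x in xs))
        best = max(best, *scores)
        lo, hi = halves[0] if scores[0] >= scores[1] else halves[1]
    return best
```

On this range the search reliably finds inputs above 2^24 where the float32 result drops to 0 while the shadow value is exactly 1, i.e. an absolute error of at least 1.0.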

79 citations


Journal ArticleDOI
TL;DR: A general approach to distributed state-space search in which each agent performs only the part of the state expansion relevant to it, which yields a distributed version of the A* algorithm that is the first cost-optimal distributed algorithm for privacy-preserving planning.
Abstract: This paper deals with the problem of classical planning for multiple cooperative agents who have private information about their local state and capabilities they do not want to reveal. Two main approaches have recently been proposed to solve this type of problem - one is based on reduction to distributed constraint satisfaction, and the other on partial-order planning techniques. In classical single-agent planning, constraint-based and partial-order planning techniques are currently dominated by heuristic forward search. The question arises whether it is possible to formulate a distributed heuristic forward search algorithm for privacy-preserving classical multi-agent planning. Our work provides a positive answer to this question in the form of a general approach to distributed state-space search in which each agent performs only the part of the state expansion relevant to it. The resulting algorithms are simple and efficient - outperforming previous algorithms by orders of magnitude - while offering similar flexibility to that of forward-search algorithms for single-agent planning. Furthermore, one particular variant of our general approach yields a distributed version of the A* algorithm that is the first cost-optimal distributed algorithm for privacy-preserving planning.

78 citations


Proceedings Article
21 Jun 2014
TL;DR: The paper presents the approach together with its theoretical properties and shows how to apply it to single-arm and dual-arm motion planning with upright constraints on a PR2 robot operating in non-trivial cluttered spaces.
Abstract: Heuristic searches such as A* search are a popular means of finding least-cost plans due to their generality, strong theoretical guarantees on completeness and optimality, simplicity in implementation and consistent behavior. In planning for robotic manipulation, however, these techniques are commonly thought of as impractical due to the high-dimensionality of the planning problem. In this paper, we present a heuristic search-based approach to motion planning for manipulation that does deal effectively with the high-dimensionality of the problem. The paper presents a summary of the approach along with applications to single-arm and dual-arm motion planning with upright constraints on a PR2 robot operating in non-trivial cluttered spaces. An extensive experimental analysis in both simulation and on a physical PR2 shows that, in terms of runtime, our approach is on par with the most common sampling-based approaches, and due to its deterministic cost-minimization, the computed motions are of good quality and are consistent, i.e. the resulting plans tend to be similar for similar tasks. For complete details of our approach, please refer to (Cohen, Chitta, and Likhachev 2013).

61 citations


Journal ArticleDOI
TL;DR: A novel approach to automatically defining an effective search space over structured outputs, which is able to leverage the availability of powerful classification learning algorithms, is described and the limited-discrepancy search space is defined and related to the quality of learned classifiers.
Abstract: We consider a framework for structured prediction based on search in the space of complete structured outputs. Given a structured input, an output is produced by running a time-bounded search procedure guided by a learned cost function, and then returning the least cost output uncovered during the search. This framework can be instantiated for a wide range of search spaces and search procedures, and easily incorporates arbitrary structured-prediction loss functions. In this paper, we make two main technical contributions. First, we describe a novel approach to automatically defining an effective search space over structured outputs, which is able to leverage the availability of powerful classification learning algorithms. In particular, we define the limited-discrepancy search space and relate the quality of that space to the quality of learned classifiers. We also define a sparse version of the search space to improve the efficiency of our overall approach. Second, we give a generic cost function learning approach that is applicable to a wide range of search procedures. The key idea is to learn a cost function that attempts to mimic the behavior of conducting searches guided by the true loss function. Our experiments on six benchmark domains show that a small amount of search in limited discrepancy search space is often sufficient for significantly improving on state-of-the-art structured-prediction performance. We also demonstrate significant speed improvements for our approach using sparse search spaces with little or no loss in accuracy.

Journal ArticleDOI
TL;DR: This paper constructs ImageWeb, a sparse graph consisting of all the images in the database, in which two images are connected if and only if one is ranked among the top of another’s initial search result, and uses HITS, a query-dependent algorithm to re-rank the images according to the affinity values.

Journal ArticleDOI
TL;DR: An improved search strategy and its application to FMS scheduling in the P-timed Petri net framework is proposed and it is proved that the resulting combinational heuristic function is still admissible and more informed than any of its constituents.

Journal ArticleDOI
TL;DR: A new fast search algorithm based on the hierarchical search approach, in which the number of searched locations is reduced compared to the Full Search; the performance of the proposed hierarchical search algorithm is close to the full search, with an 83.4% reduction in complexity and a matching quality over 98%.

Journal ArticleDOI
TL;DR: A population based Local Search (PB-LS) heuristic that is embedded within a local search algorithm (as a mechanism to exploit the search space) is proposed, which is able to both diversify and intensify the search more effectively, when compared to other local search and population based algorithms.
Abstract: Population based algorithms are generally better at exploring a search space than local search algorithms (i.e. searches based on a single heuristic). However, the limitation of many population based algorithms is in exploiting the search space. We propose a population based Local Search (PB-LS) heuristic that is embedded within a local search algorithm (as a mechanism to exploit the search space). PB-LS employs two operators. The first is applied to a single solution to determine the force between the incumbent solution and the trial current solution (i.e. a single direction force), whilst the second operator is applied to all solutions to determine the force in all directions. The progress of the search is governed by these forces, either in a single direction or in all directions. Our proposed algorithm is able to both diversify and intensify the search more effectively, when compared to other local search and population based algorithms. We use university course timetabling (Socha benchmark datasets) as a test domain. In order to evaluate the effectiveness of PB-LS, we perform a comparison between the performances of PB-LS with other approaches drawn from the scientific literature. Results demonstrate that PB-LS is able to produce statistically significantly higher quality solutions, outperforming many other approaches on the Socha dataset.

Proceedings Article
27 Jul 2014
TL;DR: A search algorithm is introduced that utilizes type systems in a new way, for exploration within a GBFS multi-queue framework in satisficing planning, and shows the benefits of such exploration for overcoming deficiencies of the heuristic.
Abstract: Utilizing multiple queues in Greedy Best-First Search (GBFS) has been proven to be a very effective approach to satisficing planning. Successful techniques include extra queues based on Helpful Actions (or Preferred Operators), as well as using Multiple Heuristics. One weakness of all standard GBFS algorithms is their lack of exploration. All queues used in these methods work as priority queues sorted by heuristic values. Therefore, misleading heuristics, especially early in the search process, can cause the search to become ineffective. Type systems, as introduced for heuristic search by Lelis et al., are a development of ideas for exploration related to the classic stratified sampling approach. The current work introduces a search algorithm that utilizes type systems in a new way, for exploration within a GBFS multi-queue framework in satisficing planning. A careful case study shows the benefits of such exploration for overcoming deficiencies of the heuristic. The proposed new baseline algorithm Type-GBFS solves almost 200 more problems than baseline GBFS over all International Planning Competition problems. Type-LAMA, a new planner which integrates Type-GBFS into LAMA-2011, solves 36.8 more problems than LAMA-2011.
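
A toy sketch of the multi-queue idea, under assumed simplifications: one queue sorted by h, plus an exploration queue that buckets nodes by an (h, g) type and samples uniformly, first a type, then a node within it, so a misleading heuristic cannot starve the search. The grid, wall, and type definition are illustrative assumptions, not the paper's planner.

```python
import heapq
import random

random.seed(1)

# 5x5 grid; a wall column creates a plateau that misleads plain greedy search.
WALLS = {(2, 0), (2, 1), (2, 2), (2, 3)}
GOAL = (4, 0)

def succ(s):
    x, y = s
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        n = (x + dx, y + dy)
        if 0 <= n[0] < 5 and 0 <= n[1] < 5 and n not in WALLS:
            yield n

def h(s):
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def type_gbfs(start):
    open_h = [(h(start), start)]            # standard greedy queue
    buckets = {(h(start), 0): [start]}      # type system: bucket by (h, g)
    g = {start: 0}
    seen = {start}
    step = 0
    while open_h or buckets:
        step += 1
        if open_h and (step % 2 or not buckets):
            _, s = heapq.heappop(open_h)    # greedy expansion
        else:
            t = random.choice(list(buckets))             # uniform over types...
            nodes = buckets[t]
            s = nodes.pop(random.randrange(len(nodes)))  # ...then over nodes
            if not nodes:
                del buckets[t]
        if s == GOAL:
            return g[s]
        for n in succ(s):
            if n not in seen:
                seen.add(n)
                g[n] = g[s] + 1
                heapq.heappush(open_h, (h(n), n))
                buckets.setdefault((h(n), g[n]), []).append(n)
    return None
```

The shortest route around the wall costs 12 moves; the exploration queue keeps feeding the search nodes off the heuristic plateau until that detour is found (GBFS makes no optimality promise, so the returned cost may exceed 12).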

Proceedings ArticleDOI
26 Aug 2014
TL;DR: The main aim of this paper is to bridge the gap between multistage information seeking models, documenting the search process on a general level, and search systems and interfaces, serving as the concrete tools to perform searches.
Abstract: The ever expanding digital information universe makes us rely on search systems to sift through immense amounts of data to satisfy our information needs. Our searches using these systems range from simple lookups to complex and multifaceted explorations. A multitude of models of the information seeking process, for example Kuhlthau's ISP model, divide the information seeking process for complex search tasks into multiple stages. Current search systems, in contrast, still predominantly use a "one-size-fits-all" approach: one interface is used for all stages of a search, even for complex search endeavors. The main aim of this paper is to bridge the gap between multistage information seeking models, documenting the search process on a general level, and search systems and interfaces, serving as the concrete tools to perform searches. To find ways to reduce the gap, we look at existing models of the information seeking process, at search interfaces supporting complex search tasks, and at the use of interface features over time. Our main contribution is that we conceptually bring together macro level information seeking stages and micro level search system features. We highlight the impact of search stages on the flow of interaction with user interface features, providing new handles for the design of multistage search systems.

Proceedings Article
02 Jul 2014
TL;DR: A reasonable theoretical model of heuristics is discussed and it is shown that, under this model, the expected size of local minima is higher for a cost-to-go heuristic than a distance-to-go heuristic, offering a possible explanation as to why distance-to-go heuristics tend to outperform cost-to-go heuristics.
Abstract: In work on satisficing search, there has been substantial attention devoted to how to solve problems associated with local minima or plateaus in the heuristic function. One technique that has been shown to be quite promising is using an alternative heuristic function that does not estimate cost-to-go, but rather estimates distance-to-go. Empirical results generally favor using the distance-to-go heuristic over the cost-to-go heuristic, but there is currently little beyond intuition to explain the difference. We begin by empirically showing that the success of the distance-to-go heuristic appears related to its having smaller local minima. We then discuss a reasonable theoretical model of heuristics and show that, under this model, the expected size of local minima is higher for a cost-to-go heuristic than a distance-to-go heuristic, offering a possible explanation as to why distance-to-go heuristics tend to outperform cost-to-go heuristics.

Posted Content
TL;DR: A new branching heuristic is presented, which generalizes existing work on this class of random 3-SAT formulae and introduces a variant of discrepancy search, called ALDS, which traverses the search tree in a near-optimal order when combined with the new heuristic.
Abstract: Delft University of Technology, Delft, The Netherlands. When combined properly, search techniques can reveal the full potential of sophisticated branching heuristics. We demonstrate this observation on the well-known class of random 3-SAT formulae. First, a new branching heuristic is presented, which generalizes existing work on this class. Much smaller search trees can be constructed by using this heuristic. Second, we introduce a variant of discrepancy search, called ALDS. Theoretical and practical evidence support that ALDS traverses the search tree in a near-optimal order when combined with the new heuristic. Both techniques, search and heuristic, have been implemented in the look-ahead solver march. The SAT 2009 competition results show that march is by far the strongest complete solver on random k-SAT formulae.
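
For orientation, classic limited-discrepancy search, of which ALDS is a refined variant, can be sketched on a binary tree: follow the heuristically preferred branch by default and allow at most k deviations, iterating over k. Everything below (the tree of bit-strings, the preference for '0') is an illustrative assumption, not the paper's ALDS.

```python
def lds_probe(path, k, depth, target):
    """Depth-first probe that allows at most k discrepancies."""
    if len(path) == depth:
        return path if path == target else None
    # Heuristically preferred child first ('0' by assumption)...
    found = lds_probe(path + "0", k, depth, target)
    if found:
        return found
    # ...then a discrepancy, if any budget remains.
    if k > 0:
        return lds_probe(path + "1", k - 1, depth, target)
    return None

def lds(depth, target):
    """Iteratively widen the discrepancy budget until the target leaf
    is found; returns (discrepancies needed, leaf)."""
    for k in range(depth + 1):
        found = lds_probe("", k, depth, target)
        if found:
            return k, found
    return None
```

A leaf like "0110" deviates from the all-'0' preference twice, so it is reached exactly at budget k = 2; leaves the heuristic would pick outright are found at k = 0. ALDS refines the order in which these discrepancy levels are visited.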

Journal ArticleDOI
TL;DR: It is shown that under mild conditions regarding the randomness of the search and the use of a time-out, the search agent will always find the object in spite of the fact that the search space is infinite.
Abstract: Searching in the Internet for some object characterised by its attributes in the form of data, such as a hotel in a certain city whose price is less than something, is one of our most common activities when we access the Web. We discuss this problem in a general setting, and compute the average amount of time and the energy it takes to find an object in an infinitely large search space. We consider the use of N search agents which act concurrently, covering both the case where the search agent knows which way it needs to go to find the object, and the case where the search agent is perfectly ignorant and may even head away from the object being sought. We show that under mild conditions regarding the randomness of the search and the use of a time-out, the search agent will always find the object despite the fact that the search space is infinite. We obtain a formula for the average search time and the average energy expended by N search agents acting concurrently and independently of each other. We see that the time-out itself can be used to minimise the search time and the amount of energy that is consumed to find an object. An approximate formula is derived for the number of search agents that can help us guarantee that an object is found in a given time, and we discuss how the competition between search agents and other agents that try to hide the data object can be used by opposing parties to guarantee their own success. Index Terms: The Internet; Big Data; the Web; Search Time; Energy Consumption; Diffusion Process; Brownian Motion; Lévy Flights.

Journal ArticleDOI
TL;DR: This paper proposes a parallel generic approach based on multithreading for solving the 15 puzzle problem and finds that the parallel multithreaded A* heuristic search algorithm, in particular, outperforms the sequential approach in terms of time complexity and speedup.
Abstract: Heuristic search is used in many problems and applications, such as the 15 puzzle problem, the travelling salesman problem and web search engines. In this paper, the A* heuristic search algorithm is reconsidered by proposing a parallel generic approach based on multithreading for solving the 15 puzzle problem. Using multithreading, sequential computers are provided with virtual parallelization, yielding faster execution and easy communication. These advantageous features are provided through creating a dynamic number of concurrent threads at the run time of an application. The proposed approach is evaluated analytically and experimentally and compared with its sequential counterpart in terms of various performance metrics. It is revealed by the experimental results that multithreading is a viable approach for parallel A* heuristic search. For instance, it has been found that the parallel multithreaded A* heuristic search algorithm, in particular, outperforms the sequential approach in terms of time complexity and speedup.
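
A sketch of the shared-open-list structure on an 8-puzzle stand-in for the 15 puzzle (smaller, so the example runs instantly). Note the entire pop-expand-push cycle sits inside one lock, so this shows the coordination pattern rather than real speedup; the paper's approach (and Python's GIL) make the actual parallel performance a separate matter.

```python
import heapq
import threading

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # 8-puzzle goal; 0 is the blank

def succ(s):
    i = s.index(0)
    x, y = i % 3, i // 3
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 3 and 0 <= ny < 3:
            j = ny * 3 + nx
            t = list(s)
            t[i], t[j] = t[j], t[i]     # slide a tile into the blank
            yield tuple(t)

def manhattan(s):
    d = 0
    for i, v in enumerate(s):
        if v:
            d += abs(i % 3 - (v - 1) % 3) + abs(i // 3 - (v - 1) // 3)
    return d

def parallel_astar(start, n_threads=2):
    open_list = [(manhattan(start), 0, start)]
    g_best = {start: 0}
    lock = threading.Lock()
    result = []

    def worker():
        while True:
            with lock:                  # whole cycle is one critical section
                if result or not open_list:
                    return
                f, g, s = heapq.heappop(open_list)
                if s == GOAL:
                    result.append(g)
                    return
                for n in succ(s):
                    if n not in g_best or g + 1 < g_best[n]:
                        g_best[n] = g + 1
                        heapq.heappush(
                            open_list, (g + 1 + manhattan(n), g + 1, n))

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result[0] if result else None
```

A real multithreaded variant would expand outside the critical section and hold the lock only around heap operations; the trade-off between lock granularity and duplicated work is exactly what such implementations have to tune.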

Journal ArticleDOI
TL;DR: A new heuristic is proposed, based on the two-phase Pareto local search, with the aim of generating a good approximation of the Pareto efficient solutions.
Abstract: In this paper, we study the multiobjective version of the set covering problem. To our knowledge, this problem has only been addressed in two papers before, and with two objectives and heuristic methods. We propose a new heuristic, based on the two-phase Pareto local search, with the aim of generating a good approximation of the Pareto efficient solutions. In the first phase of this method, the supported efficient solutions or a good approximation of these solutions is generated. Then, a neighborhood embedded in the Pareto local search is applied to generate non-supported efficient solutions. In order to get high quality results, two elaborate local search techniques are considered: a large neighborhood search and a variable neighborhood search. We intensively study the parameters of these two techniques. We compare our results with state-of-the-art results and we show that with our method, better results are obtained for different indicators.
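
The core filter behind any Pareto local search, keeping only non-dominated solutions, is tiny. Below is a generic minimization sketch with made-up bi-objective points, not the paper's set covering instances.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in any objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(points):
    """Keep the points no other point dominates (input order preserved)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For the objective vectors `[(1, 5), (2, 2), (5, 1), (3, 3), (2, 6)]`, the filter drops `(3, 3)` (dominated by `(2, 2)`) and `(2, 6)` (dominated by `(1, 5)`); the two-phase method repeatedly applies such a filter to the archive while neighborhood moves generate candidates.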

Journal ArticleDOI
TL;DR: A novel heuristic search algorithm called HSAMMV is proposed to solve the multiple measurement vectors problem, which is modeled as a combinatorial optimization problem in the framework of the simulated annealing algorithm.

Proceedings ArticleDOI
23 Oct 2014
TL;DR: A novel algorithm for multi-target search, inspired by water vortex dynamics and based on the principle of pheromone-based communication, is described; it improves the search performance in comparison with random walk and S-random walk (stigmergic random walk) strategies.
Abstract: We explore the on-line problem of coverage where multiple agents have to find a target whose position is unknown, and without a prior global information about the environment. In this paper a novel algorithm for multi-target search is described; it is inspired by water vortex dynamics and based on the principle of pheromone-based communication. According to this algorithm, called S-MASA (Stigmergic Multi Ant Search Area), the agents search near their base incrementally, using turns around their center and around each other, until the target is found, with only a group of simple distributed cooperative ant-like agents, which communicate indirectly via depositing/detecting markers. This work improves the search performance in comparison with random walk and S-random walk (stigmergic random walk) strategies; we show the obtained results using computer simulations.

21 May 2014
TL;DR: This paper performs an empirical evaluation of two existing variants of LRTA* that were developed to speed up its convergence, namely HLRTA* and FALCONS and shows that these two real-time search methods have complementary strengths and can be combined.
Abstract: Real-time search methods, such as LRTA*, have been used to solve a wide variety of planning problems because they can make decisions fast and still converge to a minimum-cost plan if they solve the same planning task repeatedly. In this paper, we perform an empirical evaluation of two existing variants of LRTA* that were developed to speed up its convergence, namely HLRTA* and FALCONS. Our experimental results demonstrate that these two real-time search methods have complementary strengths and can be combined. We call the new real-time search method eFALCONS and show that it converges with fewer actions to a minimum-cost plan than LRTA*, HLRTA*, and FALCONS.
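
LRTA*'s core step, raise h(s) to the best one-step lookahead value and then move greedily, fits in a few lines. The tiny weighted graph below is a made-up example (not from the paper) showing how repeated trials converge to the minimum-cost plan; HLRTA* and FALCONS modify this update rule to converge faster.

```python
# Weighted directed graph; 'G' is the goal.
EDGES = {
    'A': {'B': 1, 'C': 1},
    'B': {'A': 1, 'G': 3},
    'C': {'A': 1, 'G': 1},
}
h = {'A': 0, 'B': 0, 'C': 0, 'G': 0}    # admissible all-zero initialization

def trial(start='A', goal='G'):
    """One trial of LRTA*: learn h along the way, return the trial's cost."""
    s, cost = start, 0
    while s != goal:
        best = min(EDGES[s], key=lambda n: EDGES[s][n] + h[n])
        h[s] = max(h[s], EDGES[s][best] + h[best])   # learning step
        cost += EDGES[s][best]
        s = best                                     # greedy move
    return cost

costs = [trial() for _ in range(5)]
```

The first trial wanders through the overpriced B branch (cost 4), but the updated h values steer every later trial along the optimal A-C-G plan of cost 2, and h('A') settles at the true goal distance.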

Proceedings ArticleDOI
22 Mar 2014
TL;DR: This paper introduces in this paper a meta-controller that automates the run-time selection of heuristic search techniques and their parameters and examines two different meta- controller implementations that each use online learning.
Abstract: This paper builds on SASSY, a system for automatically generating SOA software architectures that optimize a given utility function of multiple QoS metrics. In SASSY, SOA software systems are automatically re-architected when services fail or degrade. Optimizing both architecture and service provider selection presents a pair of nested NP-hard problems. Here we adapt hill-climbing, beam search, simulated annealing, and evolutionary programming to both architecture optimization and service provider selection. Each of these techniques has several parameters that influence their efficiency. We introduce in this paper a meta-controller that automates the run-time selection of heuristic search techniques and their parameters. We examine two different meta-controller implementations that each use online learning. The first implementation identifies the best heuristic search combination from various prepared combinations. The second implementation analyzes the current self-architecting problem (e.g. changes in performance metrics, service degradations/failures) and looks for similar, previously encountered re-architecting problems to find an effective heuristic search combination for the current problem. A large set of experiments demonstrates the effectiveness of the first meta-controller implementation and indicates opportunities for improving the second meta-controller implementation.

Proceedings Article
21 Jun 2014
TL;DR: This paper presents Multipath Adaptive A* (MPAA*), a simple, easy-to-implement modification of Adaptive A* (AA*) that reuses paths found in previous searches to speed up subsequent searches, and that almost always outperforms D*-Lite.
Abstract: Focused D* and D*-Lite are two popular incremental heuristic search algorithms amenable to goal-directed navigation in partially known terrain. Recently it has been shown that, contrary to common belief, a version of A* is in many cases faster than D*-Lite, raising the question of whether there exist other variants of A* that could outperform algorithms in the D* family on most problems. In this paper we present Multipath Adaptive A* (MPAA*), a simple, easy-to-implement modification of Adaptive A* (AA*) that reuses paths found in previous searches to speed up subsequent searches, and that almost always outperforms D*-Lite. We evaluate MPAA* against D*-Lite on random maps and standard game, room, and maze maps, assuming partially known terrain. In environments comparable to indoor and outdoor navigation (room and game maps) MPAA* is 35% faster than D*-Lite on average, while on random maps MPAA* is over 3 times faster than D*-Lite. D*-Lite is faster than MPAA* only in mazes; notwithstanding, we show that if a small percentage of obstacle cells in a maze are made traversable, MPAA* outperforms D*-Lite. In addition, we prove that MPAA* is optimal and that it finds a solution if one exists. We conclude that for most real-life goal-directed navigation applications MPAA* should be preferred to D*-Lite.
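The Adaptive A* idea underlying MPAA* is that after a search finds the goal with cost g(goal), the heuristic of every expanded state s can be raised to g(goal) - g(s), which remains admissible (assuming the initial heuristic is consistent) and makes subsequent searches faster. A minimal sketch on a tiny illustrative graph:

```python
# Adaptive A*: run A*, then lift h for all expanded states.
import heapq

def adaptive_astar(graph, h, start, goal):
    """Return the cost to goal, updating h in place for expanded states."""
    g = {start: 0}
    frontier = [(h.get(start, 0), start)]
    closed = set()
    while frontier:
        _, s = heapq.heappop(frontier)
        if s in closed:
            continue                          # skip stale heap entries
        closed.add(s)
        if s == goal:
            for t in closed:                  # Adaptive A* update step
                h[t] = max(h.get(t, 0), g[s] - g[t])
            return g[s]
        for t, c in graph[s]:
            if g[s] + c < g.get(t, float("inf")):
                g[t] = g[s] + c
                heapq.heappush(frontier, (g[t] + h.get(t, 0), t))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {}
cost = adaptive_astar(graph, h, "A", "D")
print(cost, h["A"])  # after the search, h("A") equals the true cost, 3
```

MPAA* additionally stores the paths found by previous searches and reuses them when they remain valid, which is where its speedup over plain AA* and over D*-Lite comes from.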

Proceedings ArticleDOI
21 Sep 2014
TL;DR: The problem of optimal search is formulated for the following two cases: a node holds exactly matching content with some probability, and some content partially matching the query; and how unreliable response paths affect the optimal search depth and the corresponding search performance is investigated.
Abstract: Searching content in mobile opportunistic networks is a difficult problem due to the dynamically changing topology and intermittent connections. Moreover, due to the lack of a global view of the network, it is arduous to determine whether the best response has been discovered or the search should be spread to other nodes. A node that has received a search query has to take two decisions: (i) whether to continue the search further or stop it at the current node (current search depth) and, independently of that, (ii) whether to send a response back or not. As each transmission and extra hop incurs costs in terms of energy, bandwidth, and time, a balance between the expected value of the response and the costs incurred must be sought. In order to better understand this inherent trade-off, we assume a simplified setting where both the query and response follow the same path. We formulate the problem of optimal search for the following two cases: a node holds (i) exactly matching content with some probability, and (ii) some content partially matching the query. We design static search, in which the search depth is set at query initiation; dynamic search, in which the search depth is determined locally during query forwarding; and learning dynamic search, which leverages observations to estimate the suitability of content for the query. Additionally, we show how unreliable response paths affect the optimal search depth and the corresponding search performance. Finally, we investigate the principal factors affecting the optimal search strategy.
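The static-search trade-off can be made concrete with a back-of-the-envelope model: if each of the n nodes on the query path independently holds exactly matching content with probability p, each hop costs c, and a hit is worth V, the expected utility of depth n is V(1 - (1-p)^n) - cn. All parameter values below are illustrative, not the paper's:

```python
# Expected utility of a fixed search depth n (illustrative parameters).
def expected_utility(n, p=0.2, value=10.0, hop_cost=0.8):
    p_hit = 1 - (1 - p) ** n      # at least one node on the path matches
    return p_hit * value - hop_cost * n

best_depth = max(range(1, 21), key=expected_utility)
print(best_depth, round(expected_utility(best_depth), 3))
```

The diminishing probability of a first hit at deeper hops versus the linear hop cost is what makes a finite optimal depth exist; dynamic and learning variants re-evaluate this decision at each forwarding node instead of fixing n at query initiation.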

Proceedings Article
02 Jul 2014
TL;DR: In this paper, the authors examine several popular search algorithms for MIN problems and discover the curious ways in which they misbehave on MAX problems, and propose modifications that preserve the original intentions behind the algorithms but allow them to solve MAX problems.
Abstract: Most work in heuristic search considers problems where a low-cost solution is preferred (MIN problems). In this paper, we investigate the complementary setting where a solution of high reward is preferred (MAX problems). Example MAX problems include finding the longest simple path in a graph, maximal coverage, and various constraint optimization problems. We examine several popular search algorithms for MIN problems (optimal, suboptimal, and bounded suboptimal) and discover the curious ways in which they misbehave on MAX problems. We propose modifications that preserve the original intentions behind the algorithms but allow them to solve MAX problems, and compare them theoretically and empirically. Interesting results include the failure of bidirectional search and close relationships discovered between Dijkstra's algorithm, weighted A*, and depth-first search. This work demonstrates that MAX problems demand their own heuristic search algorithms, which are worthy objects of study in their own right.
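A quick illustration of why MAX problems resist MIN machinery: for the longest simple path, Dijkstra-style pruning by g-value is unsound, because a longer (worse-looking in MIN terms) prefix can lead to a better completion, so a depth-first enumeration over simple paths is the natural baseline. The weighted graph below is illustrative:

```python
# Longest simple path by depth-first enumeration (illustrative graph).
def longest_simple_path(graph, start):
    best = [0]
    def dfs(node, visited, length):
        best[0] = max(best[0], length)
        for nxt, w in graph[node]:
            if nxt not in visited:        # simplicity constraint: no revisits
                dfs(nxt, visited | {nxt}, length + w)
    dfs(start, {start}, 0)
    return best[0]

graph = {1: [(2, 3), (3, 1)], 2: [(3, 4)], 3: [(4, 2)], 4: [(2, 6)]}
print(longest_simple_path(graph, 1))  # both 1-2-3-4 and 1-3-4-2 have length 9
```

This dependence on the visited set is exactly the kind of behavior that breaks duplicate detection and bidirectional search in the MIN algorithms the paper examines.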

Journal ArticleDOI
TL;DR: This paper presents an alternative data structure, the multi-level link list, and applies heuristic techniques to solve the shortest path problem; the results indicate that using this data structure drastically improves the performance of the algorithms.
Abstract: A couple of decades back, there was tremendous development in the field of algorithms aimed at finding efficient solutions for widespread applications. The benefits of these algorithms were observed in their optimality and simplicity with speed. Many of the algorithms were readdressed to solve the problem of finding the shortest path. Heuristic search techniques make use of problem-specific knowledge to find efficient solutions. Most of these techniques determine the next best possible state leading towards the goal state by using an evaluation function. This paper shows the practical performance of the following algorithms for finding the shortest path: Hill Climbing, Steepest-Ascent, Best-First, and A*. While implementing these algorithms, we used the data structures indicated in the original papers. In this paper we present an alternative data structure, the multi-level link list, and apply the heuristic technique to solve the shortest path problem. This was tested for a class of the heuristic search family: the A* and Best-First Search approaches. The results indicate that the use of this type of data structure helps in drastically improving the performance of the algorithms.
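For reference, a compact A* shortest-path sketch using a binary heap (Python's `heapq`) as the open list; the paper's multi-level link list would replace this priority queue, which is where its performance claims apply. The grid, walls, and Manhattan heuristic below are illustrative:

```python
# A* on a grid with a heapq open list (illustrative map).
import heapq

def astar_grid(size, walls, start, goal):
    h = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])  # Manhattan
    g = {start: 0}
    open_list = [(h(start), start)]
    while open_list:
        _, (x, y) = heapq.heappop(open_list)
        if (x, y) == goal:
            return g[(x, y)]
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls
                    and g[(x, y)] + 1 < g.get(nxt, size * size)):
                g[nxt] = g[(x, y)] + 1
                heapq.heappush(open_list, (g[nxt] + h(nxt), nxt))
    return None

print(astar_grid(5, {(1, 1), (1, 2), (1, 3)}, (0, 0), (4, 4)))  # prints 8
```

Greedy Best-First Search differs only in ordering the open list by h(s) alone rather than g(s) + h(s), trading optimality for speed, which is why the two are natural companions for benchmarking an open-list data structure.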