Topic

Iterative deepening depth-first search

About: Iterative deepening depth-first search (IDDFS) is a state-space search strategy that runs a series of depth-limited depth-first searches with increasing depth limits, combining the modest memory footprint of depth-first search with the completeness and shallowest-goal guarantee of breadth-first search. Over the lifetime of the topic, 1,139 publications have been published, receiving 32,574 citations.
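For orientation, here is a minimal sketch of the technique itself, assuming only a goal test and a successor function; the depth cap and the toy tree in the demo are illustrative choices, not taken from any paper below. IDDFS re-runs a depth-limited depth-first search with limits 0, 1, 2, ..., so it keeps memory proportional to the search depth while still finding a shallowest goal.

```python
def depth_limited_search(node, goal, limit, children):
    """Depth-first search that refuses to expand below the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in children(node):
        path = depth_limited_search(child, goal, limit - 1, children)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening_search(start, goal, children, max_depth=50):
    """Re-run depth-limited search with limits 0, 1, 2, ... until the goal appears."""
    for limit in range(max_depth + 1):
        path = depth_limited_search(start, goal, limit, children)
        if path is not None:
            return path
    return None

# Demo on an implicit infinite binary tree where node k has children 2k and 2k+1;
# the shallowest occurrence of 11 is found at depth 3.
print(iterative_deepening_search(1, 11, lambda k: [2 * k, 2 * k + 1]))
# -> [1, 2, 5, 11]
```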


Papers
Journal Article
TL;DR: This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid, and that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms.
Abstract: Grid search and manual search are the most widely used strategies for hyper-parameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. A Gaussian process analysis of the function from hyper-parameters to validation set performance reveals that for most data sets only a few of the hyper-parameters really matter, but that different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets. Our analysis casts some light on why recent "High Throughput" methods achieve surprising success--they appear to search through a large number of hyper-parameters because most hyper-parameters do not matter much. We anticipate that growing interest in large hierarchical models will place an increasing burden on techniques for hyper-parameter optimization; this work shows that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms.

6,935 citations
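As a hedged illustration of the abstract's central comparison, the sketch below gives grid search and random search the same budget of 16 trials over a two-dimensional space. The objective validation_score is a hypothetical stand-in for an expensive train-and-validate run (its shape, the ranges, and the seed are all assumptions for the demo), deliberately built so that only the learning rate matters and its optimum falls between grid points.

```python
import math
import random

def validation_score(lr, hidden):
    """Hypothetical stand-in for an expensive train-and-validate run.
    Only lr really matters here, and its optimum lr = 10**-1.5 is off-grid."""
    return -(math.log10(lr) + 1.5) ** 2 - 1e-6 * (hidden - 128) ** 2

# Grid search: 16 trials on a fixed 4 x 4 lattice.
grid = [(10.0 ** e, h) for e in (-4, -3, -2, -1) for h in (32, 64, 128, 256)]
best_grid = max(grid, key=lambda p: validation_score(*p))

# Random search: the same budget of 16 independently drawn trials.
random.seed(0)
rand = [(10.0 ** random.uniform(-4, -1), random.randint(16, 512))
        for _ in range(16)]
best_rand = max(rand, key=lambda p: validation_score(*p))

print("grid best:  ", best_grid, validation_score(*best_grid))
print("random best:", best_rand, validation_score(*best_rand))
```

Each grid row re-tests the same four learning rates at a different hidden size, while the 16 random trials probe 16 distinct learning rates; on most seeds random search therefore lands closer to the off-grid optimum, mirroring the low effective dimensionality the abstract describes.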

Journal ArticleDOI
TL;DR: This heuristic depth-first iterative-deepening algorithm (IDA*) is the only known algorithm capable of finding optimal solutions to randomly generated instances of the Fifteen Puzzle within practical resource limits.

1,698 citations
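The TL;DR describes IDA* (iterative-deepening A*): a depth-first search whose cutoff is the estimated total cost f = g + h rather than the depth, with the cutoff raised after each failed pass to the smallest f-value that exceeded it. Below is a minimal generic sketch under those assumptions; the toy graph, zero heuristic, and path-based cycle check are illustrative, and this is not the paper's Fifteen Puzzle solver.

```python
def ida_star(start, goal, h, neighbors):
    """IDA*: depth-first search bounded by f = g + h, deepening the bound each pass."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                       # candidate for the next bound
        if node == goal:
            return path
        minimum = float("inf")
        for nxt, cost in neighbors(node):
            if nxt not in path:            # avoid cycles on the current path
                result = search(path + [nxt], g + cost, bound)
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
        return minimum

    bound = h(start)
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None                    # goal unreachable
        bound = result                     # raise the f-bound and try again

# Demo on a tiny weighted graph with a zero heuristic.
edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
print(ida_star("A", "D", lambda n: 0, lambda n: edges[n]))  # ['A', 'B', 'C', 'D']
```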

Posted Content
TL;DR: It is shown that all algorithms that search for an extremum of a cost function perform exactly the same when averaged over all possible cost functions; this allows mathematical benchmarks to be derived for assessing a particular search algorithm's performance.
Abstract: We show that all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions. In particular, if algorithm A outperforms algorithm B on some cost functions, then loosely speaking there must exist exactly as many other functions where B outperforms A. Starting from this we analyze a number of the other a priori characteristics of the search problem, like its geometry and its information-theoretic aspects. This analysis allows us to derive mathematical benchmarks for assessing a particular search algorithm's performance. We also investigate minimax aspects of the search problem, the validity of using characteristics of a partial search over a cost function to predict future behavior of the search algorithm on that cost function, and time-varying cost functions. We conclude with some discussion of the justifiability of biologically inspired search methods.

1,098 citations
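The headline claim can be checked exhaustively on a tiny space. The sketch below (an illustrative construction, not taken from the paper) compares two fixed, non-repeating visit orders on all 2^3 cost functions from a three-point domain to {0, 1}, scoring each by how many evaluations it needs to first observe the minimum; the averages come out identical.

```python
from itertools import product

X = [0, 1, 2]                           # a three-point search space
orders = {"A": [0, 1, 2],               # algorithm A: one fixed visit order
          "B": [2, 0, 1]}               # algorithm B: a different fixed order

def evals_to_optimum(order, f):
    trace = [f[x] for x in order]       # cost values in the order observed
    return trace.index(min(trace)) + 1  # evaluations until the minimum is seen

for name, order in orders.items():
    total = sum(evals_to_optimum(order, dict(zip(X, ys)))
                for ys in product([0, 1], repeat=3))  # all 8 cost functions
    print(name, total / 8)              # both algorithms average exactly 1.5
```

Over all cost functions, any non-repeating visit order produces the same multiset of observed value sequences, which is the mechanism behind the theorem.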

Proceedings Article
06 Jan 2007
TL;DR: The key contribution of this paper is the introduction of an effective solver for computing success probabilities, which essentially combines SLD-resolution with methods for computing the probability of Boolean formulae.
Abstract: We introduce ProbLog, a probabilistic extension of Prolog. A ProbLog program defines a distribution over logic programs by specifying for each clause the probability that it belongs to a randomly sampled program, and these probabilities are mutually independent. The semantics of ProbLog is then defined by the success probability of a query, which corresponds to the probability that the query succeeds in a randomly sampled program. The key contribution of this paper is the introduction of an effective solver for computing success probabilities. It essentially combines SLD-resolution with methods for computing the probability of Boolean formulae. Our implementation further employs an approximation algorithm that combines iterative deepening with binary decision diagrams. We report on experiments in the context of discovering links in real biological networks, demonstrating the practical usefulness of the approach.

685 citations
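The semantics described above can be made concrete by brute force: treat each probabilistic fact as independently present in a sampled program, and sum the probabilities of every possible world in which the query succeeds. The sketch below does this for a hypothetical three-edge network and the query path(a, c); real ProbLog avoids the exponential enumeration by combining SLD-resolution, iterative deepening, and binary decision diagrams, as the abstract notes.

```python
from itertools import product

# Hypothetical probabilistic edges: each is independently present.
prob_edges = {("a", "b"): 0.8, ("b", "c"): 0.7, ("a", "c"): 0.5}

def reachable(edges, src, dst):
    """Does a path from src to dst exist using only the chosen edges?"""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for (u, v) in edges:
            if u == node and v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

# Success probability of path(a, c): sum the probability of every
# sampled program (edge subset) in which the query succeeds.
success = 0.0
items = list(prob_edges.items())
for mask in product([False, True], repeat=len(items)):
    world_prob, chosen = 1.0, []
    for (edge, p), present in zip(items, mask):
        world_prob *= p if present else (1 - p)
        if present:
            chosen.append(edge)
    if reachable(chosen, "a", "c"):
        success += world_prob
print(success)  # 1 - (1 - 0.5) * (1 - 0.8 * 0.7) = 0.78
```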

Journal ArticleDOI
TL;DR: LPA*, an incremental version of A* that combines ideas from the artificial intelligence and algorithms literatures, is developed; it repeatedly finds shortest paths from a given start vertex to a given goal vertex as the edge costs of a graph change or vertices are added or deleted.

584 citations
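LPA* maintains two estimates per vertex, g (the cost currently assigned) and rhs (a one-step lookahead over predecessors), and re-expands only vertices where the two disagree, so after an edge change it repairs just the affected part of the previous search. The following is a compact sketch under stated assumptions (lazy priority-queue deletion, an O(V) predecessor scan, a zero heuristic in the demo), not the authors' implementation.

```python
import heapq

INF = float("inf")

class LPAStar:
    """Minimal LPA* sketch. graph[u][v] is the cost of edge u -> v; every vertex
    (including the goal) must appear as a key. h must be a consistent heuristic."""

    def __init__(self, graph, start, goal, h):
        self.graph, self.start, self.goal, self.h = graph, start, goal, h
        self.g = {u: INF for u in graph}          # best cost found so far
        self.rhs = {u: INF for u in graph}        # one-step lookahead cost
        self.rhs[start] = 0
        self.queue = [(self.key(start), start)]   # lazy-deletion priority queue

    def key(self, u):
        m = min(self.g[u], self.rhs[u])
        return (m + self.h(u), m)

    def update_vertex(self, u):
        if u != self.start:                       # recompute rhs from predecessors
            self.rhs[u] = min((self.g[p] + c for p in self.graph
                               for v, c in self.graph[p].items() if v == u),
                              default=INF)
        if self.g[u] != self.rhs[u]:              # inconsistent vertices get queued
            heapq.heappush(self.queue, (self.key(u), u))

    def compute_shortest_path(self):
        while self.queue and (self.queue[0][0] < self.key(self.goal)
                              or self.rhs[self.goal] != self.g[self.goal]):
            k_old, u = heapq.heappop(self.queue)
            if k_old < self.key(u):               # stale entry: reinsert, fresh key
                heapq.heappush(self.queue, (self.key(u), u))
            elif self.g[u] > self.rhs[u]:         # overconsistent: settle u
                self.g[u] = self.rhs[u]
                for v in self.graph[u]:
                    self.update_vertex(v)
            elif self.g[u] < self.rhs[u]:         # underconsistent: invalidate u
                self.g[u] = INF
                self.update_vertex(u)
                for v in self.graph[u]:
                    self.update_vertex(v)
            # else: u already consistent; drop the leftover entry
        return self.g[self.goal]
```

A small usage run, with one edge cost raised between queries:

```python
graph = {"s": {"a": 1, "b": 4}, "a": {"b": 1, "t": 6}, "b": {"t": 1}, "t": {}}
lpa = LPAStar(graph, "s", "t", h=lambda u: 0)  # zero heuristic keeps the demo simple
print(lpa.compute_shortest_path())             # 3, via s -> a -> b -> t
graph["a"]["b"] = 9                            # an edge cost rises ...
lpa.update_vertex("b")                         # ... so repair only the head vertex
print(lpa.compute_shortest_path())             # 5, via s -> b -> t
```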


Network Information
Related Topics (5)
Genetic algorithm: 67.5K papers, 1.2M citations, 81% related
Graph (abstract data type): 69.9K papers, 1.2M citations, 79% related
Optimization problem: 96.4K papers, 2.1M citations, 78% related
Scheduling (computing): 78.6K papers, 1.3M citations, 77% related
Server: 79.5K papers, 1.4M citations, 77% related
Performance Metrics
Number of papers in the topic in previous years:
Year    Papers
2023    4
2022    18
2021    5
2020    8
2019    10
2018    7