Journal ArticleDOI

Partitioning Search Spaces of a Randomized Search

01 Apr 2011 - Fundamenta Informaticae (IOS Press) - Vol. 107, Iss. 2, pp. 289-311
TL;DR: This paper studies the following question: given an instance of the propositional satisfiability problem, a randomized satisfiability solver, and a cluster of n computers, what is the best way to use the computers to solve the instance?
Abstract: This paper studies the following question: given an instance of the propositional satisfiability problem, a randomized satisfiability solver, and a cluster of n computers, what is the best way to use the computers to solve the instance? Two approaches, simple distribution and search space partitioning, as well as their combinations, are investigated both analytically and empirically. It is shown that the results depend heavily on the type of the problem (unsatisfiable, satisfiable with few solutions, and satisfiable with many solutions) as well as on how good the search space partitioning function is. In addition, the behavior of a real search space partitioning function is evaluated in the same framework. The results suggest that in practice one should combine the simple distribution and search space partitioning approaches.

Summary (1 min read)


  • In this work the authors develop distributed techniques for solving challenging instances of the propositional satisfiability problem (SAT).
  • Section 3 analytically studies the expected run time of a plain partitioning approach, where a SAT instance is partitioned and a randomized SAT solver is then used to solve the resulting instances.
  • Again, the authors denote the random variable describing the run time of the resulting plain partitioning approach by T^n_part.
  • The authors denote the random variable describing the run time of this approach by T^n_rep-part (a sketch of these quantities follows this list).
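
As a brief sketch of the kind of quantities such an analysis compares (the precise definitions are given in the report and may differ), assume the solver's run times are independent and identically distributed, let T_1, ..., T_n be the run times of n independent runs on the original instance, and let T_{Phi_1}, ..., T_{Phi_n} be the run times on the n derived instances produced by a partitioning function. Then

    T^n_dist = min(T_1, ..., T_n)              (simple distribution: stop when the first run finishes)
    T^n_part = max(T_{Phi_1}, ..., T_{Phi_n})  (plain partitioning, unsatisfiable instance: every derived instance must be refuted)

For a satisfiable instance, plain partitioning can instead stop as soon as some satisfiable derived instance is solved, which is one reason the comparison depends so strongly on the instance type and on the partitioning function.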





TKK Reports in Information and Computer Science
Espoo 2009 TKK-ICS-R22
PARTITIONING SEARCH SPACES OF A RANDOMIZED SEARCH
Antti E. J. Hyvärinen, Tommi Junttila and Ilkka Niemelä
Helsinki University of Technology
Faculty of Information and Natural Sciences
Department of Information and Computer Science
Teknillinen korkeakoulu
Informaatio- ja luonnontieteiden tiedekunta
Tietojenkäsittelytieteen laitos

Distribution:
Helsinki University of Technology
Faculty of Information and Natural Sciences
Department of Information and Computer Science
P.O.Box 5400
FI-02015 TKK
FINLAND
URL: http://ics.tkk.fi
Tel. +358 9 470 01
Fax +358 9 470 23369
E-mail: series@ics.tkk.fi
© Antti E. J. Hyvärinen, Tommi Junttila and Ilkka Niemelä
ISBN 978-952-248-230-3 (Print)
ISBN 978-952-248-231-0 (Online)
ISSN 1797-5034 (Print)
ISSN 1797-5042 (Online)
URL: http://lib.tkk.fi/Reports/2009/isbn9789522482310.pdf
TKK ICS
Espoo 2009

ABSTRACT: This work studies the following question: given an instance of the propositional
satisfiability problem, a randomized satisfiability solver, and a cluster of n computers,
what is the best way to use the computers to solve the instance? Two approaches, simple
distribution and search space partitioning, as well as their combinations, are investigated
both analytically and empirically. It is shown that the results depend heavily on the type
of the problem (unsatisfiable, satisfiable with few solutions, and satisfiable with many
solutions) as well as on how good the search space partitioning function is. In addition,
the behavior of a real search space partitioning function is evaluated in the same
framework. The results suggest that in practice one should combine the simple distribution
and search space partitioning approaches.
KEYWORDS: Constraint-based Search, Distributed Search, Randomized Search,
Search-Space Partitioning, SAT Solvers
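
As a rough illustration of how the two approaches could be compared empirically, the following is a minimal Monte Carlo sketch, not the authors' implementation; the run-time distribution, the per-instance speedup of partitioning, and all function names are assumptions made for illustration only.

    import random

    def run_time():
        # Assumed run-time distribution of one randomized solver run (illustrative only).
        return random.lognormvariate(2.0, 1.5)

    def simple_distribution(n):
        # n independent runs on the original instance; stop when the first run finishes.
        return min(run_time() for _ in range(n))

    def plain_partitioning_unsat(n, speedup):
        # n derived instances, each assumed a factor `speedup` easier than the original;
        # an unsatisfiable instance is solved only when every derived instance is refuted.
        return max(run_time() / speedup for _ in range(n))

    random.seed(1)
    trials, n = 10_000, 16
    print(sum(simple_distribution(n) for _ in range(trials)) / trials)
    print(sum(plain_partitioning_unsat(n, 4.0) for _ in range(trials)) / trials)

Depending on the assumed run-time distribution and on how much easier the derived instances are, either approach can come out ahead, which mirrors the abstract's point that the answer depends on the instance type and on the partitioning function.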

Citations
Book ChapterDOI
05 Jul 2016
TL;DR: A major revision of the OpenSMT solver developed since 2008 is described, providing a design that supports extensions, several critical bug fixes and performance improvements.
Abstract: This paper describes a major revision of the OpenSMT solver developed since 2008. Version 2 significantly improves on its predecessor by providing a design that supports extensions, several critical bug fixes, and performance improvements. The distinguishing feature of the new version is the support for a wide range of parallelization algorithms in both multi-core and cloud-computing environments. Presently the solver implements the quantifier-free theories of uninterpreted functions and equalities and linear real arithmetic, and is released under the MIT license.

39 citations

Book ChapterDOI
12 Sep 2011
TL;DR: An abstract framework is presented which extends a previously presented iterative partitioning approach with clause learning, a key technique applied in modern SAT solvers, and two techniques that alter the clause learning of modern SAT solvers to fit the framework are presented.
Abstract: This work studies the solving of challenging SAT problem instances in distributed computing environments that have massive amounts of parallel resources but place limits on individual computations. We present an abstract framework which extends a previously presented iterative partitioning approach with clause learning, a key technique applied in modern SAT solvers. In addition we present two techniques that alter the clause learning of modern SAT solvers to fit the framework. An implementation of the proposed framework is then analyzed experimentally using a well-known set of benchmark instances. The results are very encouraging. For example, the implementation is able to solve challenging SAT instances not solvable in reasonable time by state-of-the-art sequential and parallel SAT solvers.

32 citations

Journal ArticleDOI
TL;DR: The new multi-threaded version of the state-of-the-art answer set solver clasp is presented; the authors detail its component and communication architecture and illustrate how they support the principal functionalities of clasp.
Abstract: We present the new multi-threaded version of the state-of-the-art answer set solver clasp. We detail its component and communication architecture and illustrate how they support the principal functionalities of clasp. Also, we provide some insights into the data representation used for different constraint types handled by clasp. All this is accompanied by an extensive experimental analysis of the major features related to multi-threading in clasp.

30 citations


Cites background from "Partitioning Search Spaces of a Randomized Search"

  • ...This indicates the difficulty of making fair splits in view of irregular search spaces, while running different configurations in parallel improves the chance of success (cf. (Hyvärinen et al. 2011))....


Journal ArticleDOI
TL;DR: A novel open-source SAT solver, the tawSolver, is introduced; it performs best on the SAT instances studied here and is in fact the original DLL solver by Davis et al. (1962), but with an efficient implementation and a modern heuristic typical of look-ahead solvers, applying the theory developed by the second author (Kullmann, 2009).

28 citations
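
Since the entry above builds on the original DLL (DPLL) procedure, here is a minimal recursive sketch of that procedure; it is an illustration only and reproduces neither tawSolver's implementation nor its look-ahead branching heuristic.

    def dpll(clauses, assignment=()):
        # Clauses are lists of non-zero integers; a negative literal is the negation
        # of the corresponding variable. `assignment` holds the literals set to true.
        simplified = []
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                                   # clause already satisfied
            reduced = [lit for lit in clause if -lit not in assignment]
            if not reduced:
                return None                                # empty clause: conflict
            simplified.append(reduced)
        if not simplified:
            return assignment                              # all clauses satisfied
        lit = simplified[0][0]                             # a look-ahead solver would probe both values here
        for choice in (lit, -lit):
            result = dpll(simplified, assignment + (choice,))
            if result is not None:
                return result
        return None

    print(dpll([[1, 2], [-1, 3], [-2, -3]]))               # e.g. (1, 3, -2)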


Cites background from "Partitioning Search Spaces of a Randomized Search"

  • ...Often these approaches are combined in various ways; see [63,22,38,39] for recent examples....


Book ChapterDOI
17 Nov 2009
TL;DR: This paper studies the following question: given an instance of the propositional satisfiability problem, a randomized satisfiability solver, and a cluster of n computers, what is the best way to use the computers to solve the instance?
Abstract: This paper studies the following question: given an instance of the propositional satisfiability problem, a randomized satisfiability solver, and a cluster of n computers, what is the best way to use the computers to solve the instance? Two approaches, simple distribution and search space partitioning, as well as their combinations, are investigated both analytically and empirically. It is shown that the results depend heavily on the type of the problem (unsatisfiable, satisfiable with few solutions, and satisfiable with many solutions) as well as on how good the search space partitioning function is. In addition, the behavior of a real search space partitioning function is evaluated in the same framework. The results suggest that in practice one should combine the simple distribution and search space partitioning approaches.

21 citations


Cites background from "Partitioning Search Spaces of a Randomized Search"

  • ...This is an extended version of the paper to appear in the 11th Conference of the Italian Association for Artificial Intelligence (AI*IA 2009) [6] with two new appendices containing proofs for the propositions and additional experimental results....


References
Book ChapterDOI
05 May 2003
TL;DR: This article presents a small, complete, and efficient SAT-solver in the style of conflict-driven learning, as exemplified by Chaff, and includes among other things a mechanism for adding arbitrary boolean constraints.
Abstract: In this article, we present a small, complete, and efficient SAT-solver in the style of conflict-driven learning, as exemplified by Chaff. We aim to give sufficient details about implementation to enable the reader to construct his or her own solver in a very short time. This will allow users of SAT-solvers to make domain-specific extensions or adaptions of current state-of-the-art SAT-techniques, to meet the needs of a particular application area. The presented solver is designed with this in mind, and includes among other things a mechanism for adding arbitrary boolean constraints. It also supports solving a series of related SAT-problems efficiently by an incremental SAT-interface.

2,985 citations
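
The incremental interface mentioned above can be sketched as follows using the PySAT bindings to MiniSat-style solvers; PySAT is an assumption here (it is not part of the cited article), and the clauses are arbitrary examples.

    from pysat.solvers import Minisat22

    with Minisat22() as solver:
        solver.add_clause([1, 2])      # (x1 or x2)
        solver.add_clause([-1, 3])     # (not x1 or x3)
        # Solve a series of related problems without rebuilding the solver by
        # passing temporary assumptions: first assume x1, then assume not x1.
        print(solver.solve(assumptions=[1]), solver.get_model())
        print(solver.solve(assumptions=[-1]), solver.get_model())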

Proceedings ArticleDOI
07 Jun 1993
TL;DR: The authors describe a simple universal strategy S_univ with the property that, for any algorithm A, T(A, S_univ) = O(ℓ_A log ℓ_A), which is the best performance that can be achieved, up to a constant factor, by any universal strategy.
Abstract: Let A be a Las Vegas algorithm, i.e., A is a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. The authors consider the problem of minimizing the expected time required to obtain an answer from A using strategies which simulate A as follows: run A for a fixed amount of time t_1, then run A independently for a fixed amount of time t_2, etc. The simulation stops if A completes its execution during any of the runs. Let S = (t_1, t_2, ...) be a strategy, and let ℓ_A = inf_S T(A, S), where T(A, S) is the expected value of the running time of the simulation of A under strategy S. The authors describe a simple universal strategy S_univ with the property that, for any algorithm A, T(A, S_univ) = O(ℓ_A log ℓ_A). Furthermore, they show that this is the best performance that can be achieved, up to a constant factor, by any universal strategy.

460 citations
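
The universal strategy S_univ described above is usually given as the Luby sequence of run lengths 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, ...; the following is a small sketch of that sequence (an illustration of the idea, not the paper's exact formulation).

    def luby(i):
        # i-th term (1-indexed) of the Luby sequence: if i == 2^k - 1 the term is
        # 2^(k-1); otherwise recurse on the position within the current block.
        k = 1
        while (1 << k) - 1 < i:
            k += 1
        if (1 << k) - 1 == i:
            return 1 << (k - 1)
        return luby(i - ((1 << (k - 1)) - 1))

    print([luby(i) for i in range(1, 16)])   # [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]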

Journal ArticleDOI
TL;DR: It will be seen how, in a portfolio setting, it can be advantageous to use a more “risk-seeking” strategy with a high variance in run time, such as a randomized depth-first search approach in mixed integer programming versus the more traditional best-bound approach.

457 citations

Journal ArticleDOI
TL;DR: It is shown that these runtime distributions of backtrack procedures for propositional satisfiability and constraint satisfaction are best characterized by a general class of distributions that can have infinite moments (i.e., an infinite mean, variance, etc.).
Abstract: We study the runtime distributions of backtrack procedures for propositional satisfiability and constraint satisfaction. Such procedures often exhibit a large variability in performance. Our study reveals some intriguing properties of such distributions: They are often characterized by very long tails or “heavy tails”. We will show that these distributions are best characterized by a general class of distributions that can have infinite moments (i.e., an infinite mean, variance, etc.). Such nonstandard distributions have recently been observed in areas as diverse as economics, statistical physics, and geophysics. They are closely related to fractal phenomena, whose study was introduced by Mandelbrot. We also show how random restarts can effectively eliminate heavy-tailed behavior. Furthermore, for harder problem instances, we observe long tails on the left-hand side of the distribution, which is indicative of a non-negligible fraction of relatively short, successful runs. A rapid restart strategy eliminates heavy-tailed behavior and takes advantage of short runs, significantly reducing expected solution time. We demonstrate speedups of up to two orders of magnitude on SAT and CSP encodings of hard problems in planning, scheduling, and circuit synthesis.

433 citations
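
A toy simulation (not the paper's experiments) can illustrate the effect described above: with a heavy-tailed run-time distribution the sample mean is dominated by rare very long runs, while restarting with a fixed cutoff exploits the non-negligible fraction of short successful runs. The Pareto parameter and the cutoff below are arbitrary choices for illustration.

    import random

    def single_run():
        # Pareto(alpha = 0.9) run time: alpha <= 1 gives an infinite mean.
        return random.paretovariate(0.9)

    def with_restarts(cutoff):
        # Abort any run exceeding `cutoff` and restart from scratch.
        total = 0.0
        while True:
            t = single_run()
            if t <= cutoff:
                return total + t
            total += cutoff

    random.seed(0)
    trials = 100_000
    print(sum(single_run() for _ in range(trials)) / trials)        # huge, unstable sample mean
    print(sum(with_restarts(10.0) for _ in range(trials)) / trials) # modest, stable mean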

Journal ArticleDOI
03 Jan 1997-Science
TL;DR: This method, based on notions of risk in economics, offers a computational portfolio design procedure that can be used for a wide range of problems, including the combinatorics of DNA sequencing and the completion of tasks in environments with resource contention, such as the World Wide Web.
Abstract: A general method for combining existing algorithms into new programs that are unequivocally preferable to any of the component algorithms is presented. This method, based on notions of risk in economics, offers a computational portfolio design procedure that can be used for a wide range of problems. Tested by solving a canonical NP-complete problem, the method can be used for problems ranging from the combinatorics of DNA sequencing to the completion of tasks in environments with resource contention, such as the World Wide Web.

407 citations
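
As a hedged illustration of the portfolio idea (the exact model in the article may differ), consider two algorithms with random run times T_1 and T_2 sharing one machine, with fractions f and 1 - f of the processing time allotted to each. The portfolio's run time and its "risk" can then be written as

    T_portfolio = min( T_1 / f , T_2 / (1 - f) ),    risk = sqrt( Var(T_portfolio) )

and varying f traces out a frontier of expected run time versus risk from which a preferred mix can be chosen, in analogy with portfolios of financial assets.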

Frequently Asked Questions (1)
Q1. What are the contributions mentioned in the paper "Partitioning Search Spaces of a Randomized Search"?

This work studies the following question: given an instance of the propositional satisfiability problem, a randomized satisfiability solver, and a cluster of n computers, what is the best way to use the computers to solve the instance? The results suggest that in practice one should combine the simple distribution and search space partitioning approaches.