
Showing papers on "Computability published in 2010"


Posted Content
TL;DR: In this article, the authors give a poly-logarithmic lower bound on the complexity of local computation for a large class of optimization problems including minimum vertex cover, minimum dominating set, maximum matching, maximal independent set, and maximal matching.
Abstract: The question of what can be computed, and how efficiently, are at the core of computer science. Not surprisingly, in distributed systems and networking research, an equally fundamental question is what can be computed in a \emph{distributed} fashion. More precisely, if nodes of a network must base their decision on information in their local neighborhood only, how well can they compute or approximate a global (optimization) problem? In this paper we give the first poly-logarithmic lower bound on such local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching. In addition we present a new distributed algorithm for solving general covering and packing linear programs. For some problems this algorithm is tight with the lower bounds, for others it is a distributed approximation scheme. Together, our lower and upper bounds establish the local computability and approximability of a large class of problems, characterizing how much local information is required to solve these tasks.

134 citations


Book
Yuri I. Manin
29 Apr 2010
TL;DR: This book treats provability and computability side by side, covering formal languages, truth and deducibility, the Continuum Problem and forcing, recursive functions and Church's Thesis, Gödel's incompleteness theorem, and model theory.
Abstract: PROVABILITY.- Introduction to Formal Languages.- Truth and Deducibility.- The Continuum Problem and Forcing.- The Continuum Problem and Constructible Sets.- COMPUTABILITY.- Recursive Functions and Church's Thesis.- Diophantine Sets and Algorithmic Undecidability.- PROVABILITY AND COMPUTABILITY.- Gödel's Incompleteness Theorem.- Recursive Groups.- Constructive Universe and Computation.- MODEL THEORY.- Model Theory.

67 citations


Proceedings ArticleDOI
20 Oct 2010
TL;DR: This paper presents a new technique for satisfiability solving of Boolean combinations of non-linear constraints that are convex, and applies fundamental results from the theory of convex programming to realize a satisfiability modulo theory (SMT) solver.
Abstract: Certain formal verification tasks require reasoning about Boolean combinations of non-linear arithmetic constraints over the real numbers. In this paper, we present a new technique for satisfiability solving of Boolean combinations of non-linear constraints that are convex. Our approach applies fundamental results from the theory of convex programming to realize a satisfiability modulo theory (SMT) solver. Our solver, CalCS, uses a lazy combination of SAT and a theory solver. A key step in our algorithm is the use of complementary slackness and duality theory to generate succinct infeasibility proofs that support conflict-driven learning. Moreover, whenever non-convex constraints are produced from Boolean reasoning, we provide a procedure that generates conservative approximations of the original set of constraints by using geometric properties of convex sets and supporting hyperplanes. We validate CalCS on several benchmarks including formulas generated from bounded model checking of hybrid automata and static analysis of floating-point software.

59 citations
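As a rough illustration of the theory-solver step described above, the sketch below checks the feasibility of a conjunction of convex constraints with the cvxpy package. It is not CalCS itself; the constraints, variable names, and use of cvxpy are assumptions made only for the example.

```python
# Minimal sketch (not CalCS): checking satisfiability of a conjunction of
# convex constraints with cvxpy, the role played by the theory solver
# inside a lazy SMT loop.  The constraints below are illustrative.
import cvxpy as cp

x = cp.Variable()
y = cp.Variable()

# A conjunction of convex constraints handed over by the SAT engine.
constraints = [
    cp.square(x) + cp.square(y) <= 1.0,   # inside the unit disc
    x + y >= 1.8,                         # a half-plane (incompatible here)
]

# Feasibility problem: minimize the constant 0 subject to the constraints.
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

if prob.status in (cp.INFEASIBLE, cp.INFEASIBLE_INACCURATE):
    # In a lazy SMT loop, this is where an infeasibility certificate
    # (e.g. from duality) would be turned into a conflict clause.
    print("UNSAT: the whole conjunction of convex constraints is infeasible")
else:
    print("SAT with witness x =", x.value, "y =", y.value)
```

In a lazy combination such as the one the abstract describes, the SAT engine proposes such conjunctions and each infeasible one is generalized, via duality, into a learned conflict clause.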


Proceedings ArticleDOI
01 Jan 2010
TL;DR: The result strengthens the evidence that the complexity of a rewrite system is truthfully represented through the length of derivations and allows the classification of nondeterministic polytime-computation based on runtime complexity analysis of rewrite systems.
Abstract: In earlier work, we have shown that for confluent TRSs, innermost polynomial runtime complexity induces polytime computability of the functions defined. In this paper, we generalise this result to full rewriting; for that, we exploit graph rewriting. We give a new proof of the adequacy of graph rewriting for full rewriting that allows for a precise control of the resources copied. In sum, we completely describe an implementation of rewriting on a Turing machine (TM for short). We show that the runtime complexity of the TRS and the runtime complexity of the TM are polynomially related. Our result strengthens the evidence that the complexity of a rewrite system is truthfully represented through the length of derivations. Moreover, our result allows the classification of nondeterministic polytime-computation based on runtime complexity analysis of rewrite systems.

58 citations


Posted Content
TL;DR: This work proposes a model for deterministic distributed function computation by a network of identical and anonymous nodes; in this model, each node has bounded computation and storage capabilities that do not grow with the network size.
Abstract: We propose a model for deterministic distributed function computation by a network of identical and anonymous nodes. In this model, each node has bounded computation and storage capabilities that do not grow with the network size. Furthermore, each node only knows its neighbors, not the entire graph. Our goal is to characterize the class of functions that can be computed within this model. In our main result, we provide a necessary condition for computability which we show to be nearly sufficient, in the sense that every function that satisfies this condition can at least be approximated. The problem of computing suitably rounded averages in a distributed manner plays a central role in our development; we provide an algorithm that solves it in time that grows quadratically with the size of the network.

56 citations
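Since distributed averaging is the central building block in the abstract above, here is a minimal sketch of the standard synchronous neighbor-averaging (consensus) iteration on an anonymous network. It is not the paper's rounded-averaging algorithm; the ring topology, round count, and initial values are illustrative assumptions.

```python
# Minimal sketch of synchronous neighbor averaging on an anonymous network.
# This is the standard consensus iteration, not the quadratic-time rounded
# averaging algorithm of the paper; topology and values are illustrative.

# Adjacency list of a ring of 5 nodes (each node only knows its neighbors).
neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
values = [1.0, 0.0, 0.0, 0.0, 0.0]   # initial local observations

for _ in range(200):                  # fixed number of synchronous rounds
    new_values = []
    for v in range(len(values)):
        nbr_vals = [values[u] for u in neighbors[v]]
        # Each node moves toward the average of itself and its neighbors,
        # using only locally available information.
        new_values.append((values[v] + sum(nbr_vals)) / (1 + len(nbr_vals)))
    values = new_values

print(values)   # all entries approach the global average 0.2
```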


Book ChapterDOI
20 Sep 2010
TL;DR: This paper studies deterministic computations under unstructured mobility, that is, when the edges of the graph appear infinitely often but without any (known) pattern, and draws a complete computability map for this problem when mobility is unstructured.
Abstract: Most highly dynamic infrastructure-less networks have in common that the assumption of connectivity does not necessarily hold at a given instant. Still, communication routes can be available between any pair of nodes over time and space. These networks (variously called delay-tolerant, disruptive-tolerant, challenged) are naturally modeled as time-varying graphs (or evolving graphs), where the existence of an edge is a function of time. In this paper we study deterministic computations under unstructured mobility, that is when the edges of the graph appear infinitely often but without any (known) pattern. In particular, we focus on the problem of broadcasting with termination detection. We explore the problem with respect to three possible metrics: the date of message arrival (foremost), the time spent doing the broadcast (fastest), and the number of hops used by the broadcast (shortest). We prove that the solvability and complexity of this problem vary with the metric considered, as well as with the type of knowledge a priori available to the entities. These results draw a complete computability map for this problem when mobility is unstructured.

53 citations


Proceedings Article
01 Jan 2010
TL;DR: This article shows that for local-effect actions, progression is always first-order definable and computable, and gives a simple proof of this via the concept of forgetting.
Abstract: In a seminal paper, Lin and Reiter introduced the notion of progression for basic action theories in the situation calculus. Unfortunately, progression is not first-order definable in general. Recently, Vassos, Lakemeyer, and Levesque showed that in the case where actions have only local effects, progression is first-order representable. However, they could show computability of the first-order representation only for a restricted class. Also, their proofs were quite involved. In this paper, we present a result stronger than theirs: for local-effect actions, progression is always first-order definable and computable. We give a very simple proof for this via the concept of forgetting. We also show first-order definability and computability results for a class of knowledge bases and actions with non-local effects. Moreover, for a certain class of local-effect actions and knowledge bases for representing disjunctive information, we show that progression is not only first-order definable but also efficiently computable.

43 citations
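The proof technique mentioned above relies on forgetting. The sketch below shows the much simpler propositional analogue, forget(φ, p) = φ[p↦⊤] ∨ φ[p↦⊥], using sympy; it is only meant to illustrate the concept, not Lin and Reiter's first-order progression, and the example knowledge base is an assumption.

```python
# A propositional analogue of forgetting, to illustrate the concept the
# paper's proof relies on; first-order progression itself is more involved.
# forget(phi, p) = phi[p -> True] OR phi[p -> False]
from sympy import symbols
from sympy.logic.boolalg import Or, simplify_logic

p, q, r = symbols('p q r')

def forget(phi, atom):
    """Strongest consequence of phi that does not mention `atom`."""
    return simplify_logic(Or(phi.subs(atom, True), phi.subs(atom, False)))

phi = (p | q) & (~p | r)          # a small knowledge base
print(forget(phi, p))             # prints q | r -- p no longer occurs
```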


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of computing invariant measures from an abstract point of view, where computing a measure means finding an algorithm which can output descriptions of the measure up to any precision.
Abstract: We consider the question of computing invariant measures from an abstract point of view. Here, computing a measure means finding an algorithm which can output descriptions of the measure up to any precision. We work in a general framework (computable metric spaces) where this problem can be posed precisely. We will find invariant measures as fixed points of the transfer operator. In this case, a general result ensures the computability of isolated fixed points of a computable map. We give general conditions under which the transfer operator is computable on a suitable set. This implies the computability of many “regular enough” invariant measures, among them many physical measures. On the other hand, not all computable dynamical systems have a computable invariant measure. We exhibit two examples of computable dynamics, one having a physical measure which is not computable and one for which no invariant measure is computable, showing some of the subtlety of this kind of problem.

42 citations
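The fixed-point view of invariant measures mentioned above can be illustrated numerically with Ulam's classical discretization of the transfer operator; the sketch below applies it to the logistic map. This is only a standard numerical illustration, not the paper's computable-analysis construction, and the map, bin count, and sample counts are assumptions.

```python
# Ulam's method: a standard numerical illustration of finding an invariant
# measure as a fixed point of a discretized transfer operator.  Map and
# resolution are illustrative: T(x) = 4x(1-x) on [0, 1].
import numpy as np

def T(x):
    return 4.0 * x * (1.0 - x)

n_bins = 200
samples_per_bin = 500
P = np.zeros((n_bins, n_bins))
edges = np.linspace(0.0, 1.0, n_bins + 1)

# Row i of P approximates where mass in bin i is sent by T.
rng = np.random.default_rng(0)
for i in range(n_bins):
    xs = rng.uniform(edges[i], edges[i + 1], samples_per_bin)
    js = np.minimum((T(xs) * n_bins).astype(int), n_bins - 1)
    for j in js:
        P[i, j] += 1.0 / samples_per_bin

# Power iteration: the invariant density is a fixed point of the operator.
mu = np.full(n_bins, 1.0 / n_bins)
for _ in range(1000):
    mu = mu @ P
    mu /= mu.sum()

# For the logistic map the invariant density is 1 / (pi * sqrt(x(1-x))),
# so mu should be largest near the endpoints of [0, 1].
print(mu[:3], mu[n_bins // 2])
```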


Posted Content
TL;DR: A measure of shape is proposed which is appropriate for the study of a complicated geometric structure, defined using the topology of neighborhoods of the structure, and one aspect of this measure gives a new notion of fractal dimension.
Abstract: We propose a measure of shape which is appropriate for the study of a complicated geometric structure, defined using the topology of neighborhoods of the structure. One aspect of this measure gives a new notion of fractal dimension. We demonstrate the utility and computability of this measure by applying it to branched polymers, Brownian trees, and self-avoiding random walks.

38 citations
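For comparison with the new fractal-dimension notion mentioned above, the sketch below estimates the classical box-counting dimension of a two-dimensional random-walk trace. This is a baseline illustration only, not the paper's neighborhood-topology measure, and the data and box sizes are assumptions.

```python
# Classical box-counting dimension estimate for a 2-D point cloud.  This is
# a baseline illustration, not the neighborhood-topology measure the paper
# proposes; the random-walk data below is an assumption.
import numpy as np

rng = np.random.default_rng(1)
steps = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)], size=20_000)
walk = np.cumsum(steps, axis=0).astype(float)          # a 2-D random walk
walk -= walk.min(axis=0)
walk /= walk.max()                                     # scale into the unit square

sizes = [1 / 2, 1 / 4, 1 / 8, 1 / 16, 1 / 32, 1 / 64]
counts = []
for eps in sizes:
    boxes = {tuple(np.floor(p / eps).astype(int)) for p in walk}
    counts.append(len(boxes))

# Slope of log N(eps) against log(1/eps) estimates the box-counting dimension
# (the range of a long 2-D random walk is essentially two-dimensional, so the
# estimate should come out close to 2 for a long enough walk).
slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
print("estimated dimension:", slope)
```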


Posted Content
TL;DR: In this article, an algorithm based on the discrete Fourier transform (DFT) and its fast computability via the fast Fourier transformation (FFT) was proposed for lattice-supported distributions.
Abstract: Object orientation provides a flexible framework for the implementation of the convolution of arbitrary distributions of real-valued random variables. We discuss an algorithm which is based on the discrete Fourier transformation (DFT) and its fast computability via the fast Fourier transformation (FFT). It directly applies to lattice-supported distributions. In the case of continuous distributions an additional discretization to a linear lattice is necessary, and the resulting lattice-supported distributions are suitably smoothed after convolution. We compare our algorithm, with respect to accuracy and speed, to other approaches aiming at a similar generality. In situations where the exact results are known, several checks confirm a high accuracy of the proposed algorithm, which is also illustrated by approximations of non-central $\chi^2$-distributions. By means of object orientation this default algorithm can be overloaded by more specific algorithms where possible, in particular where explicit convolution formulae are available. Our focus is on R package distr which includes an implementation of this approach overloading operator "+" for convolution; based on this convolution, we define a whole arithmetics of mathematical operations acting on distribution objects, comprising, among others, operators "+", "-", "*", "/", and "^".

37 citations
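The DFT/FFT convolution step for lattice-supported distributions can be sketched in a few lines of numpy; the example below convolves two fair dice. It omits the discretization and smoothing needed for continuous distributions and uses numpy rather than the R package distr discussed in the abstract.

```python
# Minimal sketch of the FFT-based convolution step for two lattice-supported
# distributions (here: two dice), using numpy rather than the R package
# `distr` discussed in the abstract.  Zero-padding avoids circular wrap-around.
import numpy as np

# Probability vector on the lattice 0, 1, 2, ...; a fair die takes values
# 1..6, so index 0 has probability 0.
die = np.array([0.0] + [1.0 / 6.0] * 6)

n = 2 * len(die) - 1                      # length needed for a linear convolution
fft_len = int(2 ** np.ceil(np.log2(n)))   # round up to a power of two for the FFT

conv = np.fft.irfft(np.fft.rfft(die, fft_len) ** 2, fft_len)[:n]
conv = np.clip(conv, 0.0, None)           # remove tiny negative round-off

# conv[k] is P(X + Y = k) for two independent dice; e.g. P(sum = 7) = 1/6.
print(conv[7], conv.sum())
```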


Book ChapterDOI
25 Apr 2010
TL;DR: This work shows how counting and enumeration problems can be tackled by an appropriate extension of the datalog approach to close the gap between theoretical tractability and practical computability for MSO-definable decision problems.
Abstract: By Courcelle's Theorem we know that any property of finite structures definable in monadic second-order logic (MSO) becomes tractable over structures with bounded treewidth. This result was extended to counting problems by Arnborg et al. and to enumeration problems by Flum et al. Despite the undisputed importance of these results for proving fixed-parameter tractability, they do not directly yield implementable algorithms. Recently, Gottlob et al. presented a new approach using monadic datalog to close the gap between theoretical tractability and practical computability for MSO-definable decision problems. In the current work we show how counting and enumeration problems can be tackled by an appropriate extension of the datalog approach.

Journal ArticleDOI
TL;DR: In this article, a new deductive system CL12 is presented, which is based on the semantics of computability logic (CL) and proves its soundness and completeness with respect to the semantics.
Abstract: Computability logic (CL) is a recently launched program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth that logic has more traditionally been. Formulas in it represent computational problems, "truth" means existence of an algorithmic solution, and proofs encode such solutions. Within the line of research devoted to finding axiomatizations for ever more expressive fragments of CL, the present paper introduces a new deductive system CL12 and proves its soundness and completeness with respect to the semantics of CL. Conservatively extending classical predicate calculus and offering considerable additional expressive and deductive power, CL12 presents a reasonable, computationally meaningful, constructive alternative to classical logic as a basis for applied theories. To obtain a model example of such theories, this paper rebuilds the traditional, classical-logic based Peano arithmetic into a computability-logic-based counterpart. Among the purposes of the present contribution is to provide a starting point for what, as the author wishes to hope, might become a new line of research with a potential of interesting findings: an exploration of the presumably quite unusual metatheory of CL-based arithmetic and other CL-based applied systems.

01 Jan 2010
TL;DR: Computation with advice as discussed by the authors is a generalization of both computation with discrete advice and Type-2 Nondeterminism; computability with random advice corresponds to correct solutions being guessable with positive probability.
Abstract: Computation with advice is suggested as a generalization of both computation with discrete advice and Type-2 Nondeterminism. Several embodiments of the generic concept are discussed, and the close connection to Weihrauch reducibility is pointed out. As a novel concept, computability with random advice is studied, which corresponds to correct solutions being guessable with positive probability. In the framework of computation with advice, it is possible to define computational complexity for certain concepts of hypercomputation. Finally, some examples are given which illuminate the interplay of uniform and non-uniform techniques in order to investigate both computability with advice and the Weihrauch lattice.

Book
01 Jan 2010
TL;DR: The author shows that theoretical computer science is a fascinating discipline, full of spectacular contributions and miracles, and the development of the computer scientist's way of thinking as well as fundamental concepts such as approximation and randomization in algorithmics and the basic ideas of cryptography and interconnection network design.
Abstract: Juraj Hromkovic takes the reader on an elegant route through the theoretical fundamentals of computer science. The author shows that theoretical computer science is a fascinating discipline, full of spectacular contributions and miracles. The book also presents the development of the computer scientist's way of thinking as well as fundamental concepts such as approximation and randomization in algorithmics, and the basic ideas of cryptography and interconnection network design.

Proceedings ArticleDOI
20 Oct 2010
TL;DR: This paper provides more refined translations of equivalence checking problems arising from hardware verification into EPR formulas, designed in such a way that models of EPR problems can be translated into bit-vector models demonstrating non-equivalence.
Abstract: Word-level bounded model checking and equivalence checking problems are naturally encoded in the theory of bit-vectors and arrays. The standard practice of deciding formulas of such theories in the hardware industry is either SAT- (using bit-blasting) or SMT-based methods. These methods perform reasoning on a low level but perform it very efficiently. To find alternative potentially promising model checking and equivalence checking methods, a natural idea is to lift reasoning from the bit and bit-vector levels to higher levels. In such an attempt, in [14] we proposed translating memory designs into the Effectively PRopositional (EPR) fragment of first-order logic. The first experiments with using such a translation have been encouraging but raised some questions. Since the high-level encoding we used was incomplete (yet avoiding bit-blasting) some equivalences could not be proved. Another problem was that there was no natural correspondence between models of EPR formulas and bit-vector based models that would demonstrate non-equivalence and hence design errors. This paper addresses these problems by providing more refined translations of equivalence checking problems arising from hardware verification into EPR formulas. We provide three such translations and formulate their properties. All three translations are designed in such a way that models of EPR problems can be translated into bit-vector models demonstrating non-equivalence. We also evaluate the best EPR solvers on industrial equivalence checking problems and compare them with SMT solvers designed and tuned for such formulas specifically. We present empirical evidence demonstrating that EPR-based methods and solvers are competitive.

Journal ArticleDOI
TL;DR: In this article, the authors consider reachability games over general hybrid systems, and distinguish between two possible observation frameworks for those games: either the precise dynamics of the system is seen by the players (this is the perfect observation framework), or only the starting point and the delays are known to the players (this is the partial observation framework).
Abstract: In this paper, we consider reachability games over general hybrid systems, and distinguish between two possible observation frameworks for those games: either the precise dynamics of the system is seen by the players (this is the perfect observation framework), or only the starting point and the delays are known to the players (this is the partial observation framework). In the first, more classical framework, we show that time-abstract bisimulation is not adequate for solving this problem, although it is sufficient in the case of timed automata. That is why we consider another equivalence, namely the suffix equivalence, based on the encoding of trajectories through words. We show that this suffix equivalence is in general a correct abstraction for games. We apply this result to o-minimal hybrid systems, and get decidability and computability results in this framework. For the second framework, which assumes a partial observation of the dynamics of the system, we propose another abstraction, called the superword encoding, which is suitable for solving the games under that assumption. In that framework, we also provide decidability and computability results.

Book
16 Dec 2010
TL;DR: Computability Theory: An Introduction to Recursion Theory provides a concise, comprehensive, and authoritative introduction to contemporary computability theory, techniques, and results.
Abstract: Computability Theory: An Introduction to Recursion Theory provides a concise, comprehensive, and authoritative introduction to contemporary computability theory, techniques, and results. The basic concepts and techniques of computability theory are placed in their historical, philosophical and logical context. This presentation is characterized by an unusual breadth of coverage and the inclusion of advanced topics not to be found elsewhere in the literature at this level. The text includes both the standard material for a first course in computability and more advanced looks at degree structures, forcing, priority methods, and determinacy. The final chapter explores a variety of computability applications to mathematics and science. Computability Theory is an invaluable text, reference, and guide to the direction of current research in the field. Nowhere else will you find the techniques and results of this beautiful and basic subject brought alive in such an approachable way. Key features: frequent historical information presented throughout; more extensive motivation for each of the topics than other texts currently available; connections with topics not included in other textbooks, such as complexity theory.

Book ChapterDOI
19 Apr 2010
TL;DR: It is shown that polynomial (innermost) runtime complexity of TRSs induces polytime computability of the functions defined, and it is proved the adequacy of (innerest) graph rewriting for (inner most) term rewriting.
Abstract: Recently, many techniques have been introduced that allow the (automated) classification of the runtime complexity of term rewrite systems (TRSs for short). In this paper we show that polynomial (innermost) runtime complexity of TRSs induces polytime computability of the functions defined. In this way we show a tight correspondence between the number of steps performed in a given rewrite system and the computational complexity of an implementation of rewriting. The result uses graph rewriting as a first step towards the implementation of term rewriting. In particular, we prove the adequacy of (innermost) graph rewriting for (innermost) term rewriting.

Book
30 May 2010
TL;DR: The suggested axiomatic methodology is applied to the evaluation of the possibilities of computers and their networks, with the main emphasis on such properties as computability, decidability, and acceptability.
Abstract: We are living in a world where the complexity of systems created and studied by people grows beyond all imaginable limits. Computers, their software and their networks are among the most complicated systems of our time. Science is the only efficient tool for dealing with this overwhelming complexity. One of the methodologies developed in science is the axiomatic approach. It proved to be very powerful in mathematics. In this paper, we develop further an axiomatic approach in computer science initiated by Manna, Blum and other researchers. In the traditional constructive setting, different classes of algorithms (programs, processes or automata) are studied separately, with some indication of relations between these classes. Thus, the constructive approach gave birth to the theory of Turing machines, the theory of partial recursive functions, the theory of finite automata, and other theories of constructive models of algorithms. The axiomatic context allows one to study classes of classes of algorithms, automata, and processes. As a result, the axiomatic approach goes higher in the hierarchy of computer and network models, reducing in this way the complexity of their study. The suggested axiomatic methodology is applied to the evaluation of the possibilities of computers and their networks. People rely more and more on computers and other information processing systems. So, it is vital to know better than we do now what computers and other information processing systems can do and what they can’t do. The main emphasis is on such properties as computability, decidability, and acceptability.

Proceedings Article
01 Jan 2010
TL;DR: In this paper, a geometric language for quantum protocols and algorithms is presented, which can also be used to explore simple nonstandard models, such as quantum channels and hidden subgroup algorithms.
Abstract: Modern cryptography is based on various assumptions about computational hardness and feasibility. But while computability is a very robust notion (cf. Church's Thesis), feasibility seems quite sensitive to the available computational resources. Prime examples are, of course, quantum channels, which provide feasible solutions of some otherwise hard problems; but ants' pheromones, used as a computational resource, also provide feasible solutions of other hard problems. So at least in principle, modern cryptography is concerned with the power and availability of computational resources. The standard models, used in cryptography and in quantum computation, leave a lot to be desired in this respect. They do, of course, support many interesting solutions of deep problems; but besides the fundamental computational structures, they also capture some low level features of particular implementations. In technical terms of program semantics, our standard models are not *fully abstract*. (Related objections can be traced back to von Neumann's "I don't believe in Hilbert spaces" letters from 1937.) I shall report on some explorations towards extending the modeling tools of program semantics to develop a geometric language for quantum protocols and algorithms. Besides hiding the irrelevant implementation details, its abstract descriptions can also be used to explore simple nonstandard models. If the time permits, I shall describe a method to implement teleportation, as well as the hidden subgroup algorithms, using just abelian groups and relations.


Journal ArticleDOI
TL;DR: The paper investigates the structural tractability of the problem of enumerating (possibly projected) solutions, where tractability means here computable with polynomial delay (WPD), since in general exponentially many solutions may be computed.
Abstract: The problem of deciding whether CSP instances admit solutions has been deeply studied in the literature, and several structural tractability results have been derived so far. However, constraint satisfaction comes in practice as a computation problem where the focus is either on finding one solution, or on enumerating all solutions, possibly projected to some given set of output variables. The paper investigates the structural tractability of the problem of enumerating (possibly projected) solutions, where tractability means here computable with polynomial delay (WPD), since in general exponentially many solutions may be computed. A general framework based on the notion of tree projection of hypergraphs is considered, which generalizes all known decomposition methods. Tractability results have been obtained both for classes of structures where output variables are part of their specification, and for classes of structures where computability WPD must be ensured for any possible set of output variables. These results are shown to be tight, by exhibiting dichotomies for classes of structures having bounded arity and where the tree decomposition method is considered.

Proceedings Article
01 Jun 2010
TL;DR: It is shown that a function is securely computable if and only if its entropy is smaller than the secret key capacity.
Abstract: We study a problem of secure computation by multiple parties of a given function of their cumulative observations, using public communication but without revealing the value of the function to an eavesdropper with access to this communication. A Shannon theoretic formulation is introduced to characterize necessary and sufficient conditions for secure computability. Drawing on innate connections of this formulation to the problem of secret key generation by the same parties using public communication, we show that a function is securely computable if and only if its entropy is smaller than the secret key capacity. Conditions for secure computability at a lone terminal are also derived by association with an appropriate secret key generation problem.
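As a toy numeric illustration of the criterion above, the sketch below compares H(g) with the two-terminal secret key capacity, which for two terminals with public discussion and no eavesdropper side information equals I(X;Y); the joint distribution and the function g are assumptions chosen only for the example.

```python
# Toy numeric illustration of the criterion "securely computable iff
# H(g) < secret key capacity".  For two terminals observing (X, Y) and no
# eavesdropper side information, the SK capacity with public discussion is
# I(X; Y); the joint pmf and the function g below are illustrative.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint pmf of (X, Y), X, Y in {0, 1}: mostly equal, sometimes different.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)
mutual_info = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())  # I(X;Y)

# Function to be computed by both terminals: g(X, Y) = X AND Y.
p_g = np.zeros(2)
for x in range(2):
    for y in range(2):
        p_g[x & y] += p_xy[x, y]
h_g = entropy(p_g)

# Here H(g) exceeds I(X;Y), so the criterion says g is not securely
# computable for this source.
print(f"H(g) = {h_g:.3f} bits, I(X;Y) = {mutual_info:.3f} bits")
print("securely computable (per the criterion):", h_g < mutual_info)
```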

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper presents a logic restructuring technique named node addition and removal (NAR), which works by adding a node into a circuit to replace an existing node and then removing the replaced node and applies the NAR approach to circuit minimization together with two techniques: redundancy removal and mandatory assignment reuse.
Abstract: This paper presents a logic restructuring technique named node addition and removal (NAR). It works by adding a node into a circuit to replace an existing node and then removing the replaced node. Previous node-merging techniques focus on replacing one node with an existing node in a circuit, but fail to replace a node that has no substitute node. To enhance the node-merging techniques on logic restructuring and optimization, we propose an NAR approach in this work. We first present two sufficient conditions that state the requirements of added nodes for safely replacing a target node. Then, an NAR approach is proposed to fast detect the added nodes by performing logic implications based on these conditions. We also apply the NAR approach to circuit minimization together with two techniques: redundancy removal and mandatory assignment reuse. We conduct experiments on a set of IWLS 2005 benchmarks. The experimental results show that our approach can enhance the state-of-the-art ATPG-based node-merging approach. Additionally, our approach has a competitive capability of circuit minimization with 44 times speedup compared to a SAT-based node-merging approach.

Journal ArticleDOI
TL;DR: By giving an appropriate representation to objects, based on a hierarchical coding of information, this work exemplifies how it is remarkably easy to compute complex objects.
Abstract: How best to quantify the information of an object, whether natural or artifact, is a problem of wide interest. A related problem is the computability of an object. We present practical examples of a new way to address this problem. By giving an appropriate representation to our objects, based on a hierarchical coding of information, we exemplify how it is remarkably easy to compute complex objects. Our algorithmic complexity is related to the length of the class of objects, rather than to the length of the object.

Book ChapterDOI
06 Sep 2010
TL;DR: This paper investigates the structural tractability of the problem of enumerating (possibly projected) solutions, where tractability means here computable with polynomial delay (WPD), since in general exponentially many solutions may be computed.
Abstract: The problem of deciding whether CSP instances admit solutions has been deeply studied in the literature, and several structural tractability results have been derived so far. However, constraint satisfaction comes in practice as a computation problem where the focus is either on finding one solution, or on enumerating all solutions, possibly projected over some given set of output variables. The paper investigates the structural tractability of the problem of enumerating (possibly projected) solutions, where tractability means here computable with polynomial delay (WPD), since in general exponentially many solutions may be computed. A general framework based on the notion of tree projection of hypergraphs is considered, which generalizes all known decomposition methods. Tractability results have been obtained both for classes of structures where output variables are part of their specification, and for classes of structures where computability WPD must be ensured for any possible set of output variables. These results are shown to be tight, by exhibiting dichotomies for classes of structures having bounded arity and where the tree decomposition method is considered.
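To make the enumeration setting concrete, the sketch below enumerates the solutions of a toy CSP with a Python generator, emitting each solution before resuming the search. Plain backtracking of this kind does not guarantee polynomial delay in general, unlike the tree-projection-based algorithms of the paper, and the instance is an assumption for illustration.

```python
# Generator-style enumeration of CSP solutions.  This plain backtracking
# sketch only illustrates the enumeration interface ("emit one solution,
# then resume work"); unlike the paper's tree-projection-based algorithms,
# it does not guarantee polynomial delay in general.

def enumerate_solutions(variables, domains, constraints, assignment=None):
    """Yield every complete assignment satisfying all binary constraints."""
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        yield dict(assignment)
        return
    var = variables[len(assignment)]
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment.get(u), assignment.get(v))
               for (u, v), check in constraints.items()
               if u in assignment and v in assignment):
            yield from enumerate_solutions(variables, domains, constraints, assignment)
        del assignment[var]

# Toy instance: properly 2-color the path a - b - c.
variables = ["a", "b", "c"]
domains = {v: [0, 1] for v in variables}
constraints = {("a", "b"): lambda x, y: x != y,
               ("b", "c"): lambda x, y: x != y}

for solution in enumerate_solutions(variables, domains, constraints):
    print(solution)   # {'a': 0, 'b': 1, 'c': 0} and {'a': 1, 'b': 0, 'c': 1}
```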

Posted Content
TL;DR: A complete characterisation is given for each type of termination detection, and it is shown that they define a strict hierarchy; these results emphasise the difference between computability of a distributed task and termination detection.
Abstract: Contrary to the sequential world, the processes involved in a distributed system do not necessarily know when a computation is globally finished. This paper investigates the problem of the detection of the termination of local computations. We define four types of termination detection: no detection, detection of the local termination, detection by a distributed observer, and detection of the global termination. We give a complete characterisation (except in the local termination detection case, where a partial one is given) for each of these types of termination detection and show that they define a strict hierarchy. These results emphasise the difference between computability of a distributed task and termination detection. Furthermore, these characterisations encompass all standard criteria that are usually formulated: topological restriction (trees, rings, or triangulated networks ...), topological knowledge (size, diameter ...), and local knowledge to distinguish nodes (identities, sense of direction). These results are now presented as corollaries of generalising theorems. As a very special and important case, the techniques are also applied to the election problem. Though given in the model of local computations, these results can give qualitative insight for similar results in other standard models. The necessary conditions involve graph coverings and quasi-coverings; the sufficient conditions (constructive local computations) are based upon an enumeration algorithm of Mazurkiewicz and a stable properties detection algorithm of Szymanski, Shi and Prywes.