
Showing papers on "Computability published in 2016"


Journal ArticleDOI
TL;DR: The first polylogarithmic lower bound on local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching is given.
Abstract: The question of what can be computed, and how efficiently, is at the core of computer science. Not surprisingly, in distributed systems and networking research, an equally fundamental question is what can be computed in a distributed fashion. More precisely, if nodes of a network must base their decision on information in their local neighborhood only, how well can they compute or approximate a global (optimization) problem? In this paper we give the first polylogarithmic lower bound on such local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching. In addition, we present a new distributed algorithm for solving general covering and packing linear programs. For some problems this algorithm is tight with the lower bounds, whereas for others it is a distributed approximation scheme. Together, our lower and upper bounds establish the local computability and approximability of a large class of problems, characterizing how much local information is required to solve these tasks.
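
The covering and packing linear programs referred to above have a standard generic form; as a reference point (textbook formulation, not the paper's own notation), the primal-dual pair is:

```latex
% Generic covering/packing LP pair (all entries of A, b, c nonnegative)
\[
\text{Covering:}\quad \min\; c^{\mathsf T}x \quad \text{s.t.}\quad Ax \ge b,\; x \ge 0
\qquad\qquad
\text{Packing:}\quad \max\; b^{\mathsf T}y \quad \text{s.t.}\quad A^{\mathsf T}y \le c,\; y \ge 0
\]
```

The two programs are linear-programming duals of each other; fractional minimum vertex cover and fractional maximum matching are the textbook instances.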

177 citations


Journal Article
TL;DR: In this paper, the authors argue for minimal conditions that must be met if two frameworks are to be compared; if frameworks are radical enough, comparison becomes hopeless, since the meanings of basic mathematical terms (like set, function, and number) are not stable across frameworks.
Abstract: The Church-Turing Thesis is widely regarded as true, because of evidence that there is only one genuine notion of computation. By contrast, there are nowadays many different formal logics, and different corresponding foundational frameworks. Which ones can deliver a theory of computability? This question sets up a difficult challenge: the meanings of basic mathematical terms (like "set", "function", and "number") are not stable across frameworks. While it is easy to compare what different frameworks say, it is not so easy to compare what they mean. We argue for some minimal conditions that must be met if two frameworks are to be compared; if frameworks are radical enough, comparison becomes hopeless. Our aim is to clarify the dialectical situation in this burgeoning area of research, shedding light on the nature of non-classical logic and the notion of computation alike.

83 citations


Book
21 Jun 2016
TL;DR: This book presents classical computability theory from Turing and Post to current results and methods, and their use in studying the information content of algebraic structures, models, and their relation to Peano arithmetic.
Abstract: Turing's famous 1936 paper introduced a formal definition of a computing machine, a Turing machine. This model led to both the development of actual computers and to computability theory, the study of what machines can and cannot compute. This book presents classical computability theory from Turing and Post to current results and methods, and their use in studying the information content of algebraic structures, models, and their relation to Peano arithmetic. The author presents the subject as an art to be practiced, and an art in the aesthetic sense of inherent beauty which all mathematicians recognize in their subject. Part I gives a thorough development of the foundations of computability, from the definition of Turing machines up to finite injury priority arguments. Key topics include relative computability, and computably enumerable sets, those which can be effectively listed but not necessarily effectively decided, such as the theorems of Peano arithmetic. Part II includes the study of computably open and closed sets of reals and basis and nonbasis theorems for effectively closed sets. Part III covers minimal Turing degrees. Part IV is an introduction to games and their use in proving theorems. Finally, Part V offers a short history of computability theory. The author has honed the content over decades according to feedback from students, lecturers, and researchers around the world. Most chapters include exercises, and the material is carefully structured according to importance and difficulty. The book is suitable for advanced undergraduate and graduate students in computer science and mathematics and researchers engaged with computability and mathematical logic.

69 citations


Book ChapterDOI
01 Jan 2016
TL;DR: An extended survey of the different strands of research on higher type computability to date is given, bringing together material from recursion theory, constructive logic and computer science.
Abstract: We discuss the conceptual problem of identifying the natural notions of computability at higher types (over the natural numbers). We argue for an eclectic approach, in which one considers a wide range of possible approaches to defining higher type computability and then looks for regularities. As a first step in this programme, we give an extended survey of the different strands of research on higher type computability to date, bringing together material from recursion theory, constructive logic and computer science. The paper thus serves as a reasonably complete overview of the literature on higher type computability. Two sequel papers will be devoted to developing a more systematic account of the material reviewed here.

42 citations


Journal ArticleDOI
TL;DR: It is shown that a Martin-Löf random set for which the effective version of the Lebesgue density theorem fails computes every K-trivial set, and that any witness for the solution of the covering problem, namely an incomplete random set which computes all K-trivial sets, must be very close to being Turing complete.
Abstract: We show that a Martin-Löf random set for which the effective version of the Lebesgue density theorem fails computes every K-trivial set. Combined with a recent result by Day and Miller, this gives a positive solution to the ML-covering problem (Question 4.6 in Randomness and computability: Open questions. Bull. Symbolic Logic, 12(3):390–410, 2006). On the other hand, we settle stronger variants of the covering problem in the negative. We show that any witness for the solution of the covering problem, namely an incomplete random set which computes all K-trivial sets, must be very close to being Turing complete. For example, such a random set must be LR-hard. Similarly, not every K-trivial set is computed by the two halves of a random set. The work passes through a notion of randomness which characterises computing K-trivial sets by random sets. This gives a "smart" K-trivial set: every random set that computes it must compute all K-trivial sets.
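
For readers outside the area, the two notions at play have concise standard definitions in terms of prefix-free Kolmogorov complexity K (the usual textbook definitions, not anything specific to this paper):

```latex
% Levin--Schnorr characterisation of Martin-L\"of randomness, and K-triviality
\[
A \in 2^{\omega} \text{ is Martin-L\"of random} \iff \exists c\;\forall n\;\; K(A\restriction n) \ge n - c
\]
\[
A \in 2^{\omega} \text{ is } K\text{-trivial} \iff \exists c\;\forall n\;\; K(A\restriction n) \le K(n) + c
\]
```

So K-trivial sets are those whose initial segments are as compressible as possible, the opposite extreme from randomness; the covering problem asks which incomplete random sets can nevertheless compute them.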

31 citations


Posted Content
TL;DR: In this paper, the authors investigate the topological aspects of algebraic computation models, in particular the BSS-model, and establish that the solvability complexity index is (mostly) independent of the computational model.
Abstract: We investigate the topological aspects of some algebraic computation models, in particular the BSS-model. Our results can be seen as bounds on how different BSS-computability and computability in the sense of computable analysis can be. The framework for this is Weihrauch reducibility. As a consequence of our characterizations, we establish that the solvability complexity index is (mostly) independent of the computational model, and that there thus is common ground in the study of non-computability between the BSS and TTE setting.
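
Weihrauch reducibility, the framework used here, has a standard definition on represented spaces (stated for orientation; this is the usual formulation, not a detail of the paper):

```latex
% f and g are (partial, multi-valued) functions on represented spaces;
% a realizer of f maps names of inputs to names of admissible outputs.
\[
f \le_{\mathrm W} g \;\iff\; \exists\, \text{computable } H, K :\subseteq \mathbb{N}^{\mathbb{N}} \to \mathbb{N}^{\mathbb{N}}
\ \text{such that}\ p \mapsto H\langle p,\, G(K(p))\rangle \ \text{realizes } f
\ \text{whenever } G \text{ realizes } g.
\]
```

Intuitively, K translates an f-instance into a g-instance, and H translates the g-answer back while still being allowed to consult the original instance.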

21 citations


Journal ArticleDOI
TL;DR: This work defines infinitely many possible relaxations of several traditional data structures and objects: queues, stacks, multisets and registers, and examines their relative computational power.
Abstract: Most concurrent data structures being designed today are versions of known sequential data structures. However, in various cases it makes sense to relax the semantics of traditional concurrent data structures in order to get simpler and possibly more efficient and scalable implementations. For example, when solving the classical producer-consumer problem by implementing a concurrent queue, it might be enough to allow the dequeue operation (by a consumer) to return and remove one of the two oldest values in the queue, and not necessarily the oldest one. We define infinitely many possible relaxations of several traditional data structures and objects: queues, stacks, multisets and registers, and examine their relative computational power.
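
To make the relaxation concrete, here is a minimal sequential sketch (my own illustration, not the paper's formal definition) of a queue whose dequeue may return any of the k oldest elements; the producer-consumer example above corresponds to k = 2:

```python
import random

class KRelaxedQueue:
    """Sequential sketch of a k-relaxed FIFO queue: dequeue may return
    (and remove) any one of the k oldest elements; k = 1 is a normal queue."""

    def __init__(self, k=2):
        self.k = k
        self.items = []          # oldest element at index 0

    def enqueue(self, value):
        self.items.append(value)

    def dequeue(self):
        if not self.items:
            return None          # queue is empty
        # pick nondeterministically among the k oldest elements
        index = random.randrange(min(self.k, len(self.items)))
        return self.items.pop(index)

# With k = 2, the first dequeue returns 'a' or 'b', but never 'c'.
q = KRelaxedQueue(k=2)
for x in "abc":
    q.enqueue(x)
print(q.dequeue(), q.dequeue(), q.dequeue())
```

A concurrent implementation would exploit exactly this slack: consumers contending on the head can be spread over the k admissible positions instead of serializing on a single one.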

21 citations


Journal ArticleDOI
TL;DR: Three additional sound and complete systems in the same style and sense are elaborated: one for polynomial space computability, one for elementary recursive time (and/or space) computability, and one for primitive recursive time computability.
Abstract: The earlier paper "Introduction to clarithmetic I" constructed an axiomatic system of arithmetic based on computability logic, and proved its soundness and extensional completeness with respect to polynomial time computability. The present paper elaborates three additional sound and complete systems in the same style and sense: one for polynomial space computability, one for elementary recursive time (and/or space) computability, and one for primitive recursive time (and/or space) computability.

20 citations


Book ChapterDOI
01 Jan 2016
TL;DR: A nontechnical account of the mathematical theory of randomness, which connects computability and complexity theory with mathematical logic, proof theory, probability and measure theory, analysis, computer science, and philosophy, and has surprising applications in a variety of fields.
Abstract: We give a nontechnical account of the mathematical theory of randomness. The theory of randomness is founded on computability theory, and it is nowadays often referred to as algorithmic randomness. It comes in two varieties: a theory of finite objects, which emerged in the 1960s through the work of Solomonoff, Kolmogorov, Chaitin and others, and a theory of infinite objects (starting with von Mises in the early 20th century, culminating in the notions introduced by Martin-Löf and Schnorr in the 1960s and 1970s); there are many deep and beautiful connections between the two. Research in algorithmic randomness connects computability and complexity theory with mathematical logic, proof theory, probability and measure theory, analysis, computer science, and philosophy. It also has surprising applications in a variety of fields, including biology, physics, and linguistics. Founded on the theory of computation, the study of randomness has itself profoundly influenced computability theory in recent years.
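
On the finite-object side, the basic yardstick is plain Kolmogorov complexity; as a standard illustration (general background, not drawn from this chapter):

```latex
\[
C(x) = \min\{\, |p| \;:\; U(p) = x \,\}, \qquad
x \text{ is } c\text{-incompressible} \iff C(x) \ge |x| - c,
\]
```

where U is a fixed universal machine. A counting argument shows that fewer than a 2^{-c} fraction of the strings of any given length can be compressed by c or more bits, so incompressible ("random-looking") strings are plentiful, even though one cannot effectively exhibit a specific long incompressible string.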

20 citations


Book ChapterDOI
27 Sep 2016
TL;DR: The t-resilient asynchronous computability theorem stated here characterizes the tasks that have t-resilient protocols in a shared-memory model and generalizes the prior (wait-free) asynchronous computability theorem of Herlihy and Shavit to a broader class of failure models.
Abstract: A task is a distributed coordination problem where processes start with private inputs, communicate with one another, and then halt with private outputs. A protocol that solves a task is t-resilient if it tolerates halting failures by t or fewer processes. The t-resilient asynchronous computability theorem stated here characterizes the tasks that have t-resilient protocols in a shared-memory model. This result generalizes the prior (wait-free) asynchronous computability theorem of Herlihy and Shavit to a broader class of failure models, and requires introducing several novel concepts.

20 citations


Proceedings ArticleDOI
25 Jul 2016
TL;DR: This paper presents a hierarchy of synchronization instructions, classified by their space complexity in solving obstruction-free consensus, and proves an essentially tight characterization of the power of buffered read and write instructions.
Abstract: For many years, Herlihy's elegant computability-based Consensus Hierarchy has been our best explanation of the relative power of various types of multiprocessor synchronization objects when used in deterministic algorithms. However, key to this hierarchy is treating synchronization instructions as distinct objects, an approach that is far from the real world, where multiprocessor programs apply synchronization instructions to collections of arbitrary memory locations. We were surprised to realize that, when considering instructions applied to memory locations, the computability-based hierarchy collapses. This leaves open the question of how to better capture the power of various synchronization instructions. In this paper, we provide an approach to answering this question. We present a hierarchy of synchronization instructions, classified by their space complexity in solving obstruction-free consensus. Our hierarchy provides a classification of combinations of known instructions that seems to fit with our intuition of how useful some are in practice, while questioning the effectiveness of others. We prove an essentially tight characterization of the power of buffered read and write instructions. Interestingly, we show a similar result for multi-location atomic assignments.
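
For context on what applying instructions to memory locations buys, here is a minimal sketch (my own illustration, not from the paper) of how a single location supporting compare-and-swap solves consensus among any number of processes; the paper's contribution is to measure, via space complexity, how many locations weaker instruction sets need for obstruction-free consensus.

```python
import threading

class CASLocation:
    """One shared memory location supporting atomic compare-and-swap."""
    def __init__(self):
        self._value = None
        self._lock = threading.Lock()   # stands in for hardware atomicity

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def read(self):
        with self._lock:
            return self._value

def propose(loc, my_value):
    """Consensus with a single CAS location: the first successful CAS
    fixes the decision; every process returns whatever the location holds."""
    loc.compare_and_swap(None, my_value)
    return loc.read()

# Example: three threads propose different values; all decide the same one.
loc = CASLocation()
decisions = []
threads = [threading.Thread(target=lambda v=v: decisions.append(propose(loc, v)))
           for v in ("a", "b", "c")]
for t in threads: t.start()
for t in threads: t.join()
print(decisions)   # all three entries are equal
```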

Proceedings ArticleDOI
05 Jul 2016
TL;DR: The main results relate Kolmogorov’s entropy of a compact metric space X polynomially to the uniform relativized complexity of approximating various families of continuous functions on X, and offer some guidance towards suitable notions of complexity for higher types.
Abstract: We promote the theory of computational complexity on metric spaces: as natural common generalization of (i) the classical discrete setting of integers, binary strings, graphs etc. as well as of (ii) the bit-complexity theory on real numbers and functions according to Friedman, Ko (1982ff), Cook, Braverman et al.; as (iii) resource-bounded refinement of the theories of computability on, and representations of, continuous universes by Pour-El & Richards (1989) and Weihrauch (1993ff); and as (iv) computational perspective on quantitative concepts from classical Analysis. Our main results relate (i.e. upper and lower bound) Kolmogorov's entropy of a compact metric space X polynomially to the uniform relativized complexity of approximating various families of continuous functions on X. The upper bounds are attained by carefully crafted oracles and bit-cost analyses of algorithms perusing them. They all employ the same representation (i.e. encoding, as infinite binary sequences, of the elements) of such spaces, which thus may be of independent interest. The lower bounds adapt adversary arguments from unit-cost Information-Based Complexity to the bit model. They extend to, and indicate perhaps surprising limitations even of, encodings via binary string functions (rather than sequences) as introduced by Kawamura & Cook (STOC 2010, §3.4). These insights offer some guidance towards suitable notions of complexity for higher types.
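
The entropy notion invoked is classical metric entropy; for orientation (standard definition, notation not the paper's):

```latex
\[
\mathcal C_X(\varepsilon) \;=\; \min\{\, m \;:\; X \text{ is covered by } m \text{ closed balls of radius } \varepsilon \,\},
\qquad
\eta_X(n) \;=\; \big\lceil \log_2 \mathcal C_X(2^{-n}) \big\rceil,
\]
```

so that eta_X(n) measures how many bits are needed to identify a point of the compact space X up to error 2^{-n}; relating this quantity polynomially to the complexity of function approximation is the quantitative heart of the results.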

Journal ArticleDOI
TL;DR: This paper introduces the family of generalized symmetry breaking (GSB) tasks, which includes election, renaming, and many other symmetry breaking tasks, and studies how nondeterminism properties of objects solving tasks affect the computability power of GSB tasks.
Abstract: Processes in a concurrent system need to coordinate using an underlying shared memory or a message-passing system in order to solve agreement tasks such as, for example, consensus or set agreement. However, coordination is often needed to break the symmetry of processes that are initially in the same state---for example, to get exclusive access to a shared resource, to get distinct names, or to elect a leader. This paper introduces and studies the family of generalized symmetry breaking (GSB) tasks, which includes election, renaming, and many other symmetry breaking tasks. The aim is to develop the understanding of symmetry breaking tasks and their relation with agreement tasks, and to study nondeterminism properties of objects solving tasks and how these properties affect the computability power of symmetry breaking tasks. Among various results characterizing the family of GSB tasks, it is shown...

Posted Content
TL;DR: Three algorithms and two impossibility results are provided that characterize, for any ring size, the necessary and sufficient number of robots to perform perpetual exploration of highly dynamic rings.
Abstract: We consider systems made of autonomous mobile robots evolving in highly dynamic discrete environments, i.e., graphs where edges may appear and disappear unpredictably without any recurrence, stability, or periodicity assumption. Robots are uniform (they execute the same algorithm), they are anonymous (they are devoid of any observable ID), they have no means of communicating with one another, they share no common sense of direction, and they have no global knowledge related to the size of the environment. However, each of them is endowed with persistent memory and is able to detect whether it stands alone at its current location. A highly dynamic environment is modeled by a graph whose topology keeps continuously changing over time. In this paper, we consider only dynamic graphs in which nodes are anonymous, each of them is infinitely often reachable from any other one, and whose underlying graph (i.e., the static graph made of the same set of nodes and that includes all edges that are present at least once over time) forms a ring of arbitrary size. In this context, we consider the fundamental problem of perpetual exploration: each node is required to be infinitely often visited by a robot. This paper analyzes the computability of this problem in (fully) synchronous settings, i.e., we study the deterministic solvability of the problem with respect to the number of robots. We provide three algorithms and two impossibility results that characterize, for any ring size, the necessary and sufficient number of robots to perform perpetual exploration of highly dynamic rings.

Posted Content
TL;DR: The 2015 Logic Blog contains a large variety of results connected to logic, some of them unlikely to be submitted to a journal.
Abstract: The blog focusses on algorithmic randomness and its connections to quantum information theory, group theory and its connections to logic, and computability analogs of cardinal characteristics.

Posted Content
TL;DR: It is proved that the computable Busy Beaver Plus function defined on any Turing submachine is not computable by any program running on this submachine, demonstrating the existence of a "paradox" of computability à la Skolem.
Abstract: In this article, we will show that uncomputability is a relative property not only of oracle Turing machines, but also of subrecursive classes. We will define the concept of a Turing submachine, and a recursive relative version of the Busy Beaver function, which we will call the Busy Beaver Plus function. We will then prove that the computable Busy Beaver Plus function defined on any Turing submachine is not computable by any program running on this submachine. We will thereby demonstrate the existence of a "paradox" of computability à la Skolem: a function is computable when "seen from outside" the subsystem, but uncomputable when "seen from within" it. Finally, we will raise the possibility of defining universal submachines, and a hierarchy of negative Turing degrees.
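
For background, the classical (non-relativised) fact that the paper transfers to submachines is the uncomputability of the Busy Beaver function; a standard way to state it (general background, not the paper's exact BB-plus definition) is:

```latex
\[
\mathrm{BB}(n) \;=\; \max\{\, s(M) \;:\; M \text{ an } n\text{-state Turing machine halting on empty input} \,\},
\]
```

where s(M) is the number of steps M runs before halting. No computable f can satisfy f(n) >= BB(n) for all n: otherwise one could decide the halting problem by running a given n-state machine for f(n) steps. The paper's Busy Beaver Plus function plays the analogous role one level down, relative to a Turing submachine.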

BookDOI
22 Jan 2016
TL;DR: This book provides an overview of the confluence of ideas in Turing's era and work and examines the impact of his work on mathematical logic and theoretical computer science.
Abstract: This book provides an overview of the confluence of ideas in Turing's era and work and examines the impact of his work on mathematical logic and theoretical computer science. It combines contributions by well-known scientists on the history and philosophy of computability theory as well as on generalised Turing computability. By looking at the roots and at the philosophical and technical influence of Turing's work, it is possible to gather new perspectives and new research topics which might be considered as a continuation of Turing's working ideas well into the 21st century.

Journal ArticleDOI
TL;DR: In this article, an integrated mathematical model of multi-period cell formation and part operation tradeoff in a dynamic cellular manufacturing system is proposed in consideration of multiple part process routes; the model simultaneously generates machine cells and part families and selects the optimum process route instead of the user specifying predetermined routes.
Abstract: In this paper, an integrated mathematical model of multi-period cell formation and part operation tradeoff in a dynamic cellular manufacturing system is proposed in consideration of multiple part process routes. The paper puts emphasis on production flexibility (production/subcontracting part operation) to satisfy the product demand requirement in different period segments of the planning horizon, considering production capacity shortage and/or sudden machine breakdown. The proposed model simultaneously generates machine cells and part families and selects the optimum process route instead of the user specifying predetermined routes. Conventional optimization methods for the optimal cell formation problem require a substantial amount of time and memory space. Hence a simulated annealing based genetic algorithm is proposed to explore the solution regions efficiently and to expedite the search of the solution space. To evaluate the computability of the proposed algorithm, different problem scenarios are adopted from the literature. The results confirm the effectiveness of the proposed approach in designing the manufacturing cell and minimizing the overall cost, considering various manufacturing aspects such as production volume, multiple process routes, production capacity, machine duplication, system reconfiguration, material handling and subcontracting part operation.
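
The abstract does not detail the metaheuristic; purely as a generic illustration of the shape of a simulated annealing based genetic algorithm (a sketch with placeholder, problem-specific operators and a toy cost function, not the authors' model):

```python
import math
import random

def sa_genetic_algorithm(cost, random_solution, mutate, crossover,
                         pop_size=30, generations=200, t0=1.0, cooling=0.95):
    """Generic hybrid: a genetic algorithm whose offspring are accepted with a
    simulated-annealing (Metropolis) criterion. The arguments `cost`,
    `random_solution`, `mutate`, `crossover` are problem-specific placeholders."""
    population = [random_solution() for _ in range(pop_size)]
    best = min(population, key=cost)
    temperature = t0
    for _ in range(generations):
        next_population = []
        for _ in range(pop_size):
            parent1, parent2 = random.sample(population, 2)
            child = mutate(crossover(parent1, parent2))
            incumbent = min(parent1, parent2, key=cost)
            delta = cost(child) - cost(incumbent)
            # SA acceptance: always keep improvements, sometimes keep worse children
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                next_population.append(child)
            else:
                next_population.append(incumbent)
        population = next_population
        best = min([best] + population, key=cost)
        temperature *= cooling          # cool down between generations
    return best

# Toy usage: minimise a quadratic over small integer vectors (placeholder problem).
sol = sa_genetic_algorithm(
    cost=lambda x: sum(v * v for v in x),
    random_solution=lambda: [random.randint(-10, 10) for _ in range(5)],
    mutate=lambda x: [v + random.choice([-1, 0, 1]) for v in x],
    crossover=lambda a, b: [random.choice(p) for p in zip(a, b)],
)
print(sol)
```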

Journal ArticleDOI
TL;DR: In this paper, the notion of computability of Følner sets for finitely generated amenable groups is defined, and Følner sets are shown to be computable for the Kharlampovich group, a finitely presented solvable group with unsolvable word problem, as well as for certain extensions of amenable groups with solvable word problem.
Abstract: We define the notion of computability of Følner sets for finitely generated amenable groups. We prove, by an explicit description, that the Kharlampovich group, a finitely presented solvable group with unsolvable word problem, has computable Følner sets. We also prove computability of Følner sets for a group that is an extension of an amenable group with solvable word problem by a finitely generated group with computable Følner sets and subrecursive distortion function. Moreover, we obtain some known and some new upper bounds for the Følner function in these particular extensions.
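
For reference, the Følner condition being made effective is the standard one (notation mine): a group G generated by a finite set S is amenable iff for every epsilon > 0 there is a finite set F contained in G with

```latex
\[
\frac{\lvert\, sF \,\triangle\, F \,\rvert}{\lvert F \rvert} \;\le\; \varepsilon \qquad \text{for every } s \in S.
\]
```

Computability of Følner sets then asks, roughly, for an algorithm that on input n outputs such a (1/n)-Følner set as an explicit finite list of words over S; the paper's precise definition is what handles the interaction with the (unsolvable) word problem.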

Posted Content
TL;DR: Two notions of effective reducibility for set-theoretical statements, based on computability with Ordinal Turing Machines (OTMs), are introduced, one of which resembles Turing reducibility while the other is modelled after Weihrauch reducibility.
Abstract: We introduce two notions of effective reducibility for set-theoretical statements, based on computability with Ordinal Turing Machines (OTMs), one of which resembles Turing reducibility while the other is modelled after Weihrauch reducibility. We give sample applications by showing that certain (algebraic) constructions are not effective in the OTM-sense and by considering the effective equivalence of various versions of the axiom of choice.

Journal ArticleDOI
TL;DR: It seems that a robust and well-defined notion of time complexity exists for the GPAC, or equivalently for computations by polynomial ordinary differential equations, as well as a rather nice and robust notion of ODE programming.

Journal ArticleDOI
TL;DR: In this article, it is shown that the Julia set of the Feigenbaum map is computable in polynomial time; this is the first example of a poly-time computable Julia set with a recurrent critical point.
Abstract: We present the first example of a poly-time computable Julia set with a recurrent critical point: we prove that the Julia set of the Feigenbaum map is computable in polynomial time.
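
The notion of polynomial-time computability used for planar sets is the standard one from the computability-of-Julia-sets literature (recalled here for orientation): a compact set K in the complex plane is poly-time computable if some machine, given a dyadic rational point d and a precision parameter n in unary, answers in time polynomial in n with a value phi(d, n) in {0, 1} such that

```latex
\[
\varphi(d,n) = 1 \ \text{ if } \operatorname{dist}(d,K) \le 2^{-n},
\qquad
\varphi(d,n) = 0 \ \text{ if } \operatorname{dist}(d,K) \ge 2\cdot 2^{-n},
\]
```

with either answer allowed in the intermediate zone. Informally, the set can be drawn on a screen at resolution 2^{-n} in time polynomial in n.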

Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm to compute the exact value of the packing measure of self-similar sets satisfying the so-called Strong Separation Condition (SSC) and prove its convergence to the value of the packing measure.
Abstract: We present an algorithm to compute the exact value of the packing measure of self-similar sets satisfying the so-called Strong Separation Condition (SSC) and prove its convergence to the value of the packing measure. We also test the algorithm with examples that show both the accuracy of the algorithm for the most regular cases and the possibility of using the additional information provided by it to obtain formulas for the packing measure of certain self-similar sets. For example, we are able to obtain a formula for the packing measure of any Sierpinski gasket with contraction factor in a certain interval.
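
For reference, the packing measure in question is defined in two stages (standard definitions, notation not the paper's): a premeasure via delta-packings, followed by an outer-measure step:

```latex
\[
P^{s}_{\delta}(E) = \sup \sum_i (2r_i)^{s}, \qquad
P^{s}_{0}(E) = \lim_{\delta \to 0} P^{s}_{\delta}(E), \qquad
P^{s}(E) = \inf\Big\{ \sum_i P^{s}_{0}(E_i) \;:\; E \subseteq \bigcup_i E_i \Big\},
\]
```

where the supremum runs over countable collections of pairwise disjoint closed balls centred in E with radii r_i at most delta; the Strong Separation Condition means that the images S_1(E), ..., S_m(E) of the attractor E under the defining similarities are pairwise disjoint.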

Journal ArticleDOI
TL;DR: It is shown that the CEP is equivalent to the statement that every type II1 tracial von Neumann algebra has a computable universal theory.
Abstract: The Connes Embedding Problem (CEP) asks whether every separable II1 factor embeds into an ultrapower of the hyperfinite II1 factor. We show that the CEP is equivalent to the statement that every type II1 tracial von Neumann algebra has a computable universal theory.

Posted Content
TL;DR: In this article, the authors argue that as long as possible world semantics is left out, one can compute the semantic representation(s) of a given statement, including aspects of lexical meaning.
Abstract: This paper is a reflection on the computability of natural language semantics. It does not contain a new model or new results in the formal semantics of natural language: it is rather a computational analysis of the logical models and algorithms currently used in natural language semantics, defined as the mapping of a statement to logical formulas (formulas in the plural, because a statement can be ambiguous). We argue that as long as possible world semantics is left out, one can compute the semantic representation(s) of a given statement, including aspects of lexical meaning. We also discuss the algorithmic complexity of this process.
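
A standard illustration of the "formulas in the plural" point (my example, not taken from the paper) is quantifier scope: "Every student read a book" has two non-equivalent first-order representations,

```latex
\[
\forall x\,\big(\mathrm{student}(x) \rightarrow \exists y\,(\mathrm{book}(y) \wedge \mathrm{read}(x,y))\big)
\qquad\text{and}\qquad
\exists y\,\big(\mathrm{book}(y) \wedge \forall x\,(\mathrm{student}(x) \rightarrow \mathrm{read}(x,y))\big),
\]
```

and a semantic parser is expected to compute the finite set of such readings rather than a single formula.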

Book ChapterDOI
01 Mar 2016
TL;DR: In this paper, a survey of the development of recursion theory over the last 60 years is presented, with a focus on the application of the notion of effective computability in practice.
Abstract: We offer here some historical notes on the conceptual routes taken in the development of recursion theory over the last 60 years, and their possible significance for computational practice. These illustrate, incidentally, the vagaries to which mathematical ideas may be susceptible on the one hand, and – once keyed into a research program – their endless exploitation on the other. At the hands primarily of mathematical logicians, the subject of effective computability, or recursion theory as it has come to be called (for historical reasons to be explained in the next section), has developed along several interrelated but conceptually distinctive lines. While this began with what were offered as analyses of the absolute limits of effective computability, the immediate primary aim was to establish negative results of the effective unsolvability of various problems in logic and mathematics. From this the subject turned to refined classifications of unsolvability for which a myriad of techniques were developed. The germinal step, conceptually, was provided by Turing's notion of computability relative to an 'oracle'. At the hands of Post, this provided the beginning of the subject of degrees of unsolvability, which became a massive research program of great technical difficulty and combinatorial complexity. Less directly provided by Turing's notion, but implicit in it, were notions of uniform relative computability, which led to various important theories of recursive functionals. Finally the idea of computability has been relativized by extension, in various ways, to more or less arbitrary structures, leading to what has come to be called generalized recursion theory. Marching in under the banner of degree theory, these strands were to some extent woven together by the recursion theorists, but the trend has been to pull the subject of effective computability even farther away from questions of actual computation. The rise in recent years of computation theory as a subject with that as its primary concern forces a reconsideration of notions of computability theory both in theory and practice. Following the historical sections, I shall make the case for the primary significance for practice of the various notions of relative (rather than absolute) computability, but not of most methods or results obtained thereto in recursion theory.

22 Nov 2016
TL;DR: It is shown that obligatory hybrid networks of evolutionary processors have the same computability power as Turing machines while using only one operation per node, no rewriting, and no filters.
Abstract: In this paper obligatory hybrid networks of evolutionary processors (a variant of the hybrid networks of evolutionary processors model) are proposed. In an obligatory hybrid network of evolutionary processors, a node discards the strings to which no operation is applicable. We show that such networks have the same computability power as Turing machines while using only one operation per node (deletion at the left end and insertion at the right end of the string), no rewriting, and no filters.
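
As a toy illustration of the node discipline described above (a sketch under my reading of the abstract, not the authors' formal model): each node applies its single operation, either deleting a fixed symbol at the left end or inserting a fixed symbol at the right end, and discards any string the operation does not apply to.

```python
def deletion_node(symbol):
    """Node whose only operation deletes `symbol` at the left end of a string.
    Strings the operation does not apply to are discarded (obligatory variant)."""
    def step(strings):
        return {s[1:] for s in strings if s.startswith(symbol)}
    return step

def insertion_node(symbol):
    """Node whose only operation inserts `symbol` at the right end of a string.
    Insertion applies to every string, so nothing is discarded."""
    def step(strings):
        return {s + symbol for s in strings}
    return step

# Toy run: delete a leading 'a', then append a 'b'.
strings = {"abc", "bca", "aa"}
strings = deletion_node("a")(strings)   # "bca" is discarded -> {"bc", "a"}
strings = insertion_node("b")(strings)  # -> {"bcb", "ab"}
print(strings)
```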

Journal ArticleDOI
TL;DR: It is proved that the discreteness problem for two-generated nonelementary subgroups of SL(2, ℂ) is undecidable in the Blum–Shub–Smale (BSS) computability model.
Abstract: We prove that the discreteness problem for two-generated nonelementary subgroups of SL(2, ℂ) is undecidable in the Blum–Shub–Smale (BSS) computability model.

Book ChapterDOI
27 Jun 2016
TL;DR: The model of interactive Turing machines (ITMs) has been proposed to characterise which stream translations are interactively computable; the model of reactive Turing machines (RTMs) has been proposed to characterise which behaviours are reactively executable.
Abstract: The model of interactive Turing machines (ITMs) has been proposed to characterise which stream translations are interactively computable; the model of reactive Turing machines (RTMs) has been proposed to characterise which behaviours are reactively executable. In this article we provide a comparison of the two models. We show, on the one hand, that the behaviour exhibited by ITMs is reactively executable, and, on the other hand, that the stream translations naturally associated with RTMs are interactively computable. We conclude from these results that the theory of reactive executability subsumes the theory of interactive computability. Inspired by the existing model of ITMs with advice, which provides a model of evolving computation, we also consider RTMs with advice and we establish that a facility of advice considerably upgrades the behavioural expressiveness of RTMs: every countable transition system can be simulated by some RTM with advice up to a fine notion of behavioural equivalence.

Journal ArticleDOI
TL;DR: This paper examines maximal computability structures on subspaces of Euclidean space, gives their characterization, and investigates conditions under which a maximal computability structure on such a space is unique.
Abstract: A computability structure on a metric space is a set of sequences which satisfy certain conditions. Of particular interest are those computability structures which contain a dense sequence, so-called separable computability structures. In this paper we consider maximal computability structures, which are more general than separable computability structures, and we examine their properties. In particular, we examine maximal computability structures on subspaces of Euclidean space, give their characterization, and investigate conditions under which a maximal computability structure on such a space is unique. We also give a characterization of separable computability structures on a segment.
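
The underlying building block (standard in this line of work, recalled here for orientation) is the notion of a computable sequence in a metric space (X, d): a sequence (x_i) is computable if the distances between its terms can be effectively approximated, i.e. there is a computable

```latex
\[
q : \mathbb{N}^{3} \to \mathbb{Q} \quad \text{with} \quad \lvert\, q(i,j,k) - d(x_i, x_j)\,\rvert \le 2^{-k} \ \text{ for all } i, j, k.
\]
```

A computability structure is then a set of such sequences satisfying natural compatibility and closure axioms (spelled out in the paper), and it is separable when it contains a sequence that is dense in X.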