
Showing papers on "Computability published in 2017"


Book ChapterDOI
27 Sep 2017
TL;DR: The strong (uniform computability) Turing completeness of chemical reaction networks over a finite set of molecular species under the differential semantics is derived, solving a long-standing open problem.
Abstract: When seeking to understand how computation is carried out in the cell to maintain itself in its environment, process signals and make decisions, the continuous nature of protein interaction processes forces us to consider also analog computation models and mixed analog-digital computation programs. However, recent results in the theory of analog computability and complexity establish fundamental links with classical programming. In this paper, we derive from these results the strong (uniform computability) Turing completeness of chemical reaction networks over a finite set of molecular species under the differential semantics, solving a long-standing open problem. Furthermore, we derive from the proof a compiler of mathematical functions into elementary chemical reactions. We illustrate the reaction code generated by our compiler on trigonometric functions, and on various sigmoid functions which can serve as markers of presence or absence for implementing program control instructions in the cell and imperative programs. Then we start comparing our compiler-generated circuits to the natural circuit of the MAPK signaling network, which plays the role of an analog-digital converter in the cell with Hill-type sigmoid input/output functions.

63 citations
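
To make the compilation idea concrete: computing sin(t) only requires the polynomial ODE system y1' = y2, y2' = -y1, and a standard dual-rail trick (each signed variable is represented as the difference of two nonnegative concentrations, kept small by a fast annihilation reaction) turns such a system into elementary mass-action reactions. The sketch below is a hand-rolled illustration of that trick, not the output of the paper's compiler; species names, rate constants and the Euler integrator are all assumptions made for the example.

```python
import math

# Dual-rail encoding: a real variable y is represented by two nonnegative
# concentrations, y = yp - ym.  sin(t) is the first component of the system
#   y1' = y2,  y2' = -y1,  y1(0) = 0, y2(0) = 1,
# which the encoding turns into catalytic production plus fast annihilation:
#   y2p -> y2p + y1p     y2m -> y2m + y1m     (y1' gains +y2)
#   y1p -> y1p + y2m     y1m -> y1m + y2p     (y2' gains -y1)
#   y1p + y1m -> 0       y2p + y2m -> 0       (annihilation, rate k)
def simulate(t_end, dt=1e-4, k=100.0):
    y1p, y1m, y2p, y2m = 0.0, 0.0, 1.0, 0.0
    t = 0.0
    while t < t_end:
        # Mass-action ODEs induced by the reactions above.
        d_y1p = y2p - k * y1p * y1m
        d_y1m = y2m - k * y1p * y1m
        d_y2p = y1m - k * y2p * y2m
        d_y2m = y1p - k * y2p * y2m
        y1p += dt * d_y1p; y1m += dt * d_y1m
        y2p += dt * d_y2p; y2m += dt * d_y2m
        t += dt
    return y1p - y1m          # the encoded value, which tracks sin(t_end)

print(simulate(1.0), math.sin(1.0))   # both are approximately 0.84
```

In exact arithmetic the differences y1p - y1m and y2p - y2m satisfy the original linear ODE regardless of the annihilation rate; annihilation only keeps the individual rails from growing.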


Posted ContentDOI
07 Sep 2017-bioRxiv
TL;DR: In this article, the authors introduce a conceptual framework and an interventional calculus to steer, manipulate, and reconstruct the dynamics and generating mechanisms of non-linear dynamical systems from partial and disordered observations, based on the algorithmic contribution of each of the system's elements to the whole, by exploiting principles from the theory of computability and algorithmic randomness.
Abstract: We introduce a conceptual framework and an interventional calculus to steer, manipulate, and reconstruct the dynamics and generating mechanisms of non-linear dynamical systems from partial and disordered observations, based on the algorithmic contribution of each of the system's elements to the whole, by exploiting principles from the theory of computability and algorithmic randomness. This calculus entails finding and applying controlled interventions to an evolving object to estimate how its algorithmic information content is affected in terms of positive or negative shifts towards and away from randomness in connection to causation. The approach is an alternative to statistical approaches for inferring causal relationships and formulating theoretical expectations from perturbation analysis. We find that the algorithmic information landscape of a system runs parallel to its dynamic landscape, affording an avenue for moving systems on one plane so they may have controlled effects on the other plane. Based on these methods, we advance tools for reprogramming a system that do not require full knowledge or access to the system's actual kinetic equations or to probability distributions. This new approach yields a suite of powerful parameter-free algorithms of wide applicability, including causal discovery, dimension reduction, feature selection, model generation, a maximal algorithmic-randomness principle, and a system's (re)programmability index. We apply these methods to a static system (the E. coli transcription factor network) and to evolving genetic regulatory networks (differentiating naive from Th17 cells, and the CellNet database). We highlight the capability of the methods to pinpoint key elements related to cell function and cell development, conforming with biological knowledge and experimentally validated data, and demonstrate how the method can reshape a system's dynamics in a controlled manner through algorithmic causal mechanisms.

28 citations
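
The flavour of the interventional calculus can be imitated with ordinary tools: perturb a network one element at a time and record how an estimate of its algorithmic information content shifts. The sketch below uses a lossless compressor as a crude stand-in for the algorithmic-probability estimators the authors rely on, and a made-up toy graph; it illustrates the workflow, not the paper's actual method.

```python
import itertools, zlib

def complexity(graph_edges, nodes):
    """Crude proxy for algorithmic complexity: compressed size (bytes) of the
    adjacency matrix.  Coarse for small graphs, and only a stand-in for the
    algorithmic-probability estimates used in the paper."""
    bits = "".join("1" if (u, v) in graph_edges or (v, u) in graph_edges else "0"
                   for u, v in itertools.product(nodes, nodes))
    return len(zlib.compress(bits.encode()))

def edge_information_shifts(edges, nodes):
    """Shift in the complexity proxy caused by deleting each single edge;
    large shifts towards or away from randomness flag candidate elements."""
    base = complexity(edges, nodes)
    return sorted((complexity(edges - {e}, nodes) - base, e) for e in edges)

# Toy example (hypothetical data): a 12-node ring with two shortcut edges.
nodes = list(range(12))
edges = {(i, (i + 1) % 12) for i in range(12)} | {(0, 6), (3, 9)}
for shift, e in edge_information_shifts(edges, nodes):
    print(e, shift)
```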


Journal ArticleDOI
TL;DR: This work specifies an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure), and analyses the associated predictions in both cases.
Abstract: Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, to infer a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is the problem of inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterised by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is similarly shown to hold for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We analyse the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology.

25 citations
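
A toy version conveys what "identification in the limit" means here. The actual result covers a countable class of computable distributions and an enumeration-based learner; the sketch below shrinks the hypothesis class to a handful of hypothetical Bernoulli parameters so the role of the strong law of large numbers is visible: the learner's guess changes only finitely often and then stays on the true parameter almost surely.

```python
import random

# Hypothetical finite hypothesis class standing in for a class of computable
# distributions; the true parameter is assumed to be one of these.
CANDIDATES = [0.1, 0.25, 0.5, 0.7, 0.9]

def guess(sample):
    """Output the candidate closest to the empirical mean.  By the strong law
    of large numbers the empirical mean converges almost surely to the true
    parameter, so the guess eventually stabilises on the truth."""
    mean = sum(sample) / len(sample)
    return min(CANDIDATES, key=lambda p: abs(p - mean))

true_p = 0.7
random.seed(0)
sample = []
for n in range(1, 5001):
    sample.append(1 if random.random() < true_p else 0)
    if n in (10, 100, 1000, 5000):
        print(n, guess(sample))
```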


Proceedings ArticleDOI
01 Jun 2017
TL;DR: In this paper, the authors determine the necessary and sufficient number of robots to solve the perpetual exploration problem on highly dynamic rings of arbitrary size, where each node is required to be infinitely often visited by a robot.
Abstract: We consider systems made of autonomous mobile robots evolving in highly dynamic discrete environments, i.e., graphs where edges may appear and disappear unpredictably without any recurrence, stability, or periodicity assumption. Robots are uniform (they execute the same algorithm), they are anonymous (they are devoid of any observable ID), they have no means allowing them to communicate together, they share no common sense of direction, and they have no global knowledge related to the size of the environment. However, each of them is endowed with persistent memory and is able to detect whether it stands alone at its current location. A highly dynamic environment is modeled by a graph whose topology keeps continuously changing over time. In this paper, we consider only dynamic graphs in which nodes are anonymous, each of them is infinitely often reachable from any other one, and whose underlying graph (i.e., the static graph made of the same set of nodes and that includes all edges that are present at least once over time) forms a ring of arbitrary size. In this context, we consider the fundamental problem of perpetual exploration: each node is required to be infinitely often visited by a robot. This paper analyzes the computability of this problem in (fully) synchronous settings, i.e., we study the deterministic solvability of the problem with respect to the number of robots. We provide three algorithms and two impossibility results that characterize, for any ring size, the necessary and sufficient number of robots to perform perpetual exploration of highly dynamic rings.

18 citations


Journal ArticleDOI
TL;DR: It is shown that the so-obtained extension of the language of formulas with restricted quantifiers over structures with hereditary finite lists is a conservative enrichment.
Abstract: For constructing an enrichment of the language with restricted quantifiers, we extend the construction of conditional terms. We show that the so-obtained extension of the language of formulas with restricted quantifiers over structures with hereditary finite lists is a conservative enrichment.

16 citations


Journal ArticleDOI
TL;DR: This paper validates, from an epistemological viewpoint, models and simulations of phenomenological emergence in which the sequence of events constitutes the natural, analogical non-Turing computation that a cognitive complex system can reproduce through learning.
Abstract: We consider processes of emergence within the conceptual framework of the Information Loss principle and the concepts of (1) systems conserving information; (2) systems compressing information; and (3) systems amplifying information. We deal with the supposed incompatibility between emergence and computability tout-court. We distinguish between computational emergence, when computation acquires properties, and emergent computation, when computation emerges as a property. The focus is on emergence processes occurring within computational processes. Violations of Turing-computability such as non-explicitness and incompleteness are intended to represent partially the properties of phenomenological emergence, such as logical openness, given by the observer’s cognitive role; structural dynamics where change regards rules rather than only values; and multi-modelling where multiple non-equivalent models are required to model such structural dynamics. In this way, we validate, from an epistemological viewpoint, models and simulations of phenomenological emergence where the sequence of events constitutes the natural, analogical non-Turing computation which a cognitive complex system can reproduce through learning. Reproducibility through learning is different from Turing-like computational iteration. This paper aims to open a new, non-reductionist understanding of the conceptual relationship between emergence and computability.

16 citations


Journal ArticleDOI
TL;DR: In this paper, two combinatorial algorithms are presented to compute the exact Tukey depth of a single point; the first has complexity $O(n^{p-1}\log(n))$, while the second exploits the quasiconcavity of the Tukey depth function and is more efficient.
Abstract: The Tukey depth function is one of the most famous multivariate tools serving robust purposes. It is also very well known for its computability problems in dimensions $p \ge 3$. In this paper, we address this computing issue by presenting two combinatorial algorithms. The first is naive and calculates the Tukey depth of a single point with complexity $O\left(n^{p-1}\log(n)\right)$, while the second further utilizes the quasiconcavity of the Tukey depth function and hence is more efficient than the first. Both require very minimal memory and run much faster than the existing ones. All experiments indicate that they compute the exact Tukey depth.

15 citations
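
For readers unfamiliar with the quantity being computed: the Tukey (halfspace) depth of a point θ is the smallest number of sample points contained in any closed halfspace through θ. The sketch below is a self-contained brute-force computation in dimension p = 2 only, where the minimising halfplane can be taken to have its boundary pass through θ and a sample point; it illustrates the definition and is not one of the paper's algorithms, which target p ≥ 3.

```python
import numpy as np

def tukey_depth_2d(theta, points):
    """Exact halfspace (Tukey) depth of a 2-D point `theta` in `points`.

    Brute force in O(n^2): only halfplanes whose boundary line passes through
    theta and a sample point need to be inspected; for each such line the two
    one-sided counts on each side are candidates for the minimum.
    """
    pts = np.asarray(points, dtype=float)
    d = pts - np.asarray(theta, dtype=float)          # shift so theta is the origin
    at_theta = np.all(np.isclose(d, 0.0), axis=1)     # sample points equal to theta
    others = d[~at_theta]
    if len(others) == 0:                              # every sample point equals theta
        return len(pts)

    best = len(pts)
    base = int(np.sum(at_theta))                      # such points lie in every halfplane
    for p in others:
        v = p / np.linalg.norm(p)                     # boundary line direction
        u = np.array([-v[1], v[0]])                   # its normal
        s = others @ u                                # signed distance to the line
        t = others @ v                                # position along the line
        pos, neg = np.sum(s > 1e-12), np.sum(s < -1e-12)
        on_line = np.abs(s) <= 1e-12
        ray_plus, ray_minus = np.sum(on_line & (t > 0)), np.sum(on_line & (t < 0))
        # Rotating the halfplane infinitesimally keeps exactly one ray of the
        # boundary points; the four combinations are the one-sided counts.
        best = min(best,
                   base + pos + ray_plus, base + pos + ray_minus,
                   base + neg + ray_plus, base + neg + ray_minus)
    return int(best)

# Usage with a few made-up sample points.
sample = [(0.1, 0.0), (1.0, 0.2), (-0.8, 0.9), (0.3, -1.1), (0.9, 0.8)]
print(tukey_depth_2d((0.2, 0.1), sample))
```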


Journal Article
TL;DR: In this paper, the authors investigate second-order representations for spaces of integrable functions and show that these representations are computably equivalent to the standard representations of these spaces as metric spaces while still rendering integration polynomial-time computable.
Abstract: This paper investigates second-order representations in the sense of Kawamura and Cook for spaces of integrable functions that regularly show up in analysis. It builds upon prior work about the space of continuous functions on the unit interval: Kawamura and Cook introduced a representation inducing the right complexity classes and proved that it is the weakest second-order representation such that evaluation is polynomial-time computable. The first part of this paper provides a similar representation for the space of integrable functions on a bounded subset of Euclidean space: the weakest representation rendering integration over boxes polynomial-time computable. In contrast to the representation of continuous functions, however, this representation turns out to be discontinuous with respect to both the norm and the weak topology. The second part modifies the representation to be continuous and generalizes it to Lp-spaces. The arising representations are proven to be computably equivalent to the standard representations of these spaces as metric spaces and to still render integration polynomial-time computable. The family is extended to cover Sobolev spaces on the unit interval, where less basic operations like differentiation and some Sobolev embeddings are shown to be polynomial-time computable. Finally, as a further justification, quantitative versions of the Arzelà–Ascoli and Fréchet–Kolmogorov theorems are presented and used to argue that these representations fulfill a minimality condition. To provide tight bounds for the Fréchet–Kolmogorov theorem, a form of exponential-time computability of the Lp-norm is proven.

14 citations


Journal ArticleDOI
06 Sep 2017
TL;DR: A new concept of “low-entropy cloud computing systems” is proposed, and they are contrasted to virtualization clouds and partitioned clouds, in terms of user experience, application development efficiency, execution efficiency, and resource matching.
Abstract: Current cloud computing systems, whether virtualization clouds or partitioned clouds, face the challenge of simultaneously satisfying user experience and system efficiency requirements. Both the industry and the academia are investigating next-generation cloud computing systems to address this problem. This paper points out a main cause of this problem: existing cloud systems have high computing system entropy (i.e., disorder and uncertainty), which manifests as four classes of disorders. We propose a new concept of “low-entropy cloud computing systems,” and contrast them to virtualization clouds and partitioned clouds in terms of user experience, application development efficiency, execution efficiency, and resource matching. We discuss four new features and techniques of low-entropy clouds: (1) a notion of production computability that, unlike Turing computability and algorithmic tractability, formalizes the user experience requirements of cloud computing practices; (2) a conjecture, named the DIP (differentiation, isolation, prioritization) conjecture, that tries to capture the necessary and sufficient conditions for a cloud computing system to realize production computability; (3) the labeled von Neumann architecture, which has the potential to support the DIP capabilities and thus simultaneously satisfy user experience and system efficiency requirements; and (4) a co-design technique allowing a cloud computing system to adaptively match deep-learning workloads to neural network accelerator hardware.

14 citations


Book ChapterDOI
01 Jan 2017
TL;DR: This paper surveys recent work on how classical asymptotic density interacts with the theory of computability and includes a few easy proofs to illustrate the flavor of the subject.
Abstract: The purpose of this paper is to survey recent work on how classical asymptotic density interacts with the theory of computability. We have tried to make the survey accessible to those who are not specialists in computability theory and we mainly state results without proof, but we include a few easy proofs to illustrate the flavor of the subject.

13 citations


Posted Content
TL;DR: In this article, the existence, uniqueness and computability of solutions for a class of discrete-time recursive utility models were studied, and it was shown that the natural iterative algorithm is convergent if and only if a solution exists.
Abstract: We study existence, uniqueness and computability of solutions for a class of discrete-time recursive utility models. By combining two streams of the recent literature on recursive preferences---one that analyzes principal eigenvalues of valuation operators and another that exploits the theory of monotone concave operators---we obtain conditions that are both necessary and sufficient for existence and uniqueness of solutions. We also show that the natural iterative algorithm is convergent if and only if a solution exists. Consumption processes are allowed to be nonstationary.
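
The "natural iterative algorithm" is ordinary successive approximation of the valuation operator. The sketch below runs it for a simple illustrative specification (Epstein–Zin preferences with unit elasticity of intertemporal substitution on a two-state Markov chain); the transition matrix, consumption values and parameters are hypothetical, and this specification is chosen only because the iteration provably converges for it, not because it is the exact class studied in the paper.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # transition matrix (hypothetical)
c = np.array([1.0, 2.0])            # consumption in each state (hypothetical)
beta, alpha = 0.95, 0.5             # discount factor and risk-adjustment parameter

def T(V):
    """Valuation operator V(x) = c(x)^(1-beta) * (E_x[V^alpha])^(beta/alpha);
    after taking logs this map is a sup-norm contraction with modulus beta."""
    return c ** (1 - beta) * (P @ V ** alpha) ** (beta / alpha)

V = np.ones_like(c)
for k in range(2000):
    V_next = T(V)
    if np.max(np.abs(np.log(V_next) - np.log(V))) < 1e-12:
        break
    V = V_next
print(k, V)   # the fixed point approximates the recursive utility in each state
```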

Posted Content
TL;DR: Set-Constrained Delivery Broadcast (SCD-broadcast), introduced in this paper, is a new communication abstraction that allows each process to broadcast messages and deliver a sequence of sets of messages in such a way that, if a process delivers a set of messages including a message m and later delivers a set of messages including a message m', then no process delivers first a set of messages including m' and later a set of messages including m.
Abstract: This paper introduces a new communication abstraction, called Set-Constrained Delivery Broadcast (SCD-broadcast), whose aim is to provide its users with an appropriate abstraction level when they have to implement objects or distributed tasks in an asynchronous message-passing system prone to process crash failures. This abstraction allows each process to broadcast messages and deliver a sequence of sets of messages in such a way that, if a process delivers a set of messages including a message m and later delivers a set of messages including a message m', no process delivers first a set of messages including m' and later a set of messages including m. After having presented an algorithm implementing SCD-broadcast, the paper investigates its programming power and its computability limits. On the "power" side it presents SCD-broadcast-based algorithms, which are both simple and efficient, building objects (such as snapshot and conflict-free replicated data), and distributed tasks. On the "computability limits" side it shows that SCD-broadcast and read/write registers are computationally equivalent.
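
The delivery constraint is easier to grasp as a predicate over an execution. The checker below takes, for each process, the sequence of message sets it delivered and reports whether any two processes ordered some pair of messages in opposite ways; the process names and messages in the example are hypothetical.

```python
from itertools import combinations

def respects_scd_order(deliveries):
    """Check the SCD-broadcast delivery constraint.

    `deliveries` maps each process to the list of message *sets* it delivered,
    in delivery order.  Forbidden: messages m, m2 and processes p, q such that
    p delivers m strictly before m2 while q delivers m2 strictly before m.
    """
    # Index of the delivered set containing each message, per process.
    pos = {p: {m: i for i, batch in enumerate(seq) for m in batch}
           for p, seq in deliveries.items()}
    msgs = {m for seq in deliveries.values() for batch in seq for m in batch}
    for m, m2 in combinations(msgs, 2):
        orders = set()
        for idx in pos.values():
            if m in idx and m2 in idx and idx[m] != idx[m2]:
                orders.add(idx[m] < idx[m2])
        if orders == {True, False}:        # two processes disagree on the order
            return False
    return True

# Hypothetical executions with messages a, b, c.
ok  = {"p": [{"a"}, {"b", "c"}], "q": [{"a", "b"}, {"c"}]}
bad = {"p": [{"a"}, {"b"}],       "q": [{"b"}, {"a"}]}
print(respects_scd_order(ok), respects_scd_order(bad))   # True False
```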

Posted Content
TL;DR: In this paper, a rigorous model of human computation and associated measures of complexity are proposed to better understand what humans can compute in their heads, where the adversary is restricted to at most 10^24 (Avogadro's number of) steps.
Abstract: What can humans compute in their heads? We are thinking of a variety of Crypto Protocols, games like Sudoku, Crossword Puzzles, Speed Chess, and so on. The intent of this paper is to apply the ideas and methods of theoretical computer science to better understand what humans can compute in their heads. For example, can a person compute a function in their head so that an eavesdropper with a powerful computer --- who sees the responses to random input --- still cannot infer responses to new inputs? To address such questions, we propose a rigorous model of human computation and associated measures of complexity. We apply the model and measures first and foremost to the problem of (1) humanly computable password generation, and then consider related problems of (2) humanly computable "one-way functions" and (3) humanly computable "pseudorandom generators". The theory of Human Computability developed here plays by different rules than standard computability, and this takes some getting used to. For reasons to be made clear, the polynomial versus exponential time divide of modern computability theory is irrelevant to human computation. In human computability, the step-counts for both humans and computers must be more concrete. Specifically, we restrict the adversary to at most 10^24 (Avogadro's number of) steps. An alternate view of this work is that it deals with the analysis of algorithms and counting steps for the case that inputs are small as opposed to the usual case of inputs large-in-the-limit.

Journal ArticleDOI
TL;DR: In this article, the notion of computability of Følner sets for finitely generated amenable groups is defined, and it is shown that the Kharlampovich groups, finitely presented solvable groups with unsolvable Word Problem, have computable Følner sets.
Abstract: We define the notion of computability of Følner sets for finitely generated amenable groups. We prove, by an explicit description, that the Kharlampovich groups, finitely presented solvable groups with unsolvable Word Problem, have computable Følner sets. We also prove computability of Følner sets for extensions — with subrecursive distortion functions — of amenable groups with solvable Word Problem by finitely generated groups with computable Følner sets. Moreover, we obtain some known and some new upper bounds for the Følner function for these particular extensions.
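
For reference, the notions involved are standard (the paper's precise conventions may differ in inessential ways): a Følner sequence witnesses amenability, and computability asks for an algorithm producing such sets with a prescribed invariance bound.

```latex
% G is generated by a finite set S; F_n ranges over finite subsets of G.
\[
  (F_n)_{n\in\mathbb{N}} \text{ is a F{\o}lner sequence} \iff
  \frac{|sF_n \,\triangle\, F_n|}{|F_n|} \xrightarrow[\;n\to\infty\;]{} 0
  \quad \text{for every } s \in S .
\]
% "Computable F{\o}lner sets" then asks for an algorithm that, on input n,
% outputs a finite set F_n (as a list of words over S) with
\[
  \frac{|sF_n \,\triangle\, F_n|}{|F_n|} \le \frac{1}{n}
  \quad \text{for every } s \in S .
\]
```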

DOI
01 Sep 2017
TL;DR: The notion of strong polynomial-time computability of functionals on Baire space is introduced in this article and proven to be a strictly stronger requirement than polynomial-time computability.
Abstract: This paper introduces a more restrictive notion of feasibility of functionals on Baire space than the established one from second-order complexity theory, thereby making it possible to consider functions on the natural numbers as running times of oracle Turing machines and avoiding second-order polynomials, which are notoriously difficult to handle. Furthermore, all machines that witness this stronger kind of feasibility can be clocked, and the different traditions of treating partial functionals from computable analysis and second-order complexity theory are equated in a precise sense. The new notion is named "strong polynomial-time computability" and proven to be a strictly stronger requirement than polynomial-time computability. It is proven that within the framework for complexity of operators from analysis introduced by Kawamura and Cook the classes of strongly polynomial-time computable functionals and polynomial-time computable functionals coincide.

Journal ArticleDOI
TL;DR: This work obtains characterizations of the algorithmically random reals in higher randomness classes as probabilities of certain events that can happen when an oracle universal machine runs probabilistically on a random oracle.

Journal ArticleDOI
TL;DR: This paper studies the computability of unifiability and the unification types in several epistemic logics, which are essential to the design of logical systems that capture elements of reasoning about knowledge.
Abstract: Epistemic logics are essential to the design of logical systems that capture elements of reasoning about knowledge. In this paper, we study the computability of unifiability and the unification types in several epistemic logics.

Dissertation
28 Jun 2017
TL;DR: In this paper, shift spaces are explored with a dual objective: on the one hand, the authors are interested in their dynamical properties, and on the other hand, they study these objects as computational models.
Abstract: Shift spaces are sets of colorings of a group which avoid a set of forbidden patterns and are endowed with a shift action. These spaces appear naturally as discrete versions of dynamical systems: they are obtained by partitioning the phase space and mapping each element into the sequence of partitions visited by its orbit. Several breakthroughs in this domain have pointed out the intricate relationship between dynamics of shift spaces and their computability properties. One remarkable example is the classification of the entropies of multidimensional subshifts of finite type as the set of right recursively enumerable numbers. This work explores shift spaces with a dual approach: on the one hand we are interested in their dynamical properties and on the other hand we study these objects as computational models. Four salient results have been obtained as a result of this approach: (1) a combinatorial condition ensuring non-emptiness of subshifts on arbitrary countable groups; (2) a simulation theorem which realizes effective actions of finitely generated groups as factors of a subaction of a subshift of finite type; (3) a characterization of effectiveness with oracles using generalized Turing machines; and (4) the undecidability of the torsion problem for two group invariants of shift spaces. As byproducts of these results we obtain a simple proof of the existence of strongly aperiodic subshifts in countable groups. Furthermore, we realize them as subshifts of finite type in the case of a semidirect product of a d-dimensional integer lattice with a finitely generated group with decidable word problem whenever d > 1.

Journal ArticleDOI
TL;DR: It is shown that AIXI is not limit computable and thus cannot be approximated using finite computation; however, there are limit-computable ε-optimal approximations to AIXI, and computation bounds for knowledge-seeking agents are derived.


Proceedings ArticleDOI
04 Aug 2017
TL;DR: This talk discusses the intertwining importance and connections of the three principles of data science in the title (prediction, stability, and computability) in data-driven decisions, and demonstrates them in the context of two neuroscience projects and through analytical connections.
Abstract: In this talk, I'd like to discuss the intertwining importance and connections of three principles of data science in the title in data-driven decisions. Making prediction as its central task and embracing computation as its core, machine learning has enabled wide-ranging data-driven successes. Prediction is a useful way to check with reality. Good prediction implicitly assumes stability between past and future. Stability (relative to data and model perturbations) is also a minimum requirement for interpretability and reproducibility of data-driven results (cf. Yu, 2013). It is closely related to uncertainty assessment. Obviously, both prediction and stability principles cannot be employed without feasible computational algorithms, hence the importance of computability. The three principles will be demonstrated in the context of two neuroscience projects and through analytical connections. In particular, the first project adds stability to predictive modeling used for reconstruction of movies from fMRI brain signals to obtain interpretable models. The second project uses predictive transfer learning that combines AlexNet, GoogLeNet and VGG with single V4 neuron data for state-of-the-art prediction performance. Our results lend support, to a certain extent, to the resemblance of these CNNs to the brain and at the same time provide stable pattern interpretations of neurons in the difficult primate visual cortex V4.

Proceedings ArticleDOI
01 Feb 2017
TL;DR: In this article, it is shown that Rubel's universality result holds for polynomial ordinary differential equations (ODEs), which, unlike DAEs, have unique solutions for a given initial condition, thereby positively answering Rubel's open problem and shedding light on computability theory for continuous-time models of computation.
Abstract: An astonishing fact was established by Lee A. Rubel (1981): there exists a fixed non-trivial fourth-order polynomial differential algebraic equation (DAE) such that for any positive continuous function phi on the reals, and for any positive continuous function epsilon(t), it has a C^infinity solution with |y(t) - phi(t)| < epsilon(t) for all t. Lee A. Rubel provided an explicit example of such a polynomial DAE. Other examples of universal DAEs have later been proposed by other authors. However, while these results may seem very surprising, their proofs are quite simple and are frustrating for a computability theorist, or for people interested in modeling systems in experimental sciences. First, the notion of universality involved is far from the usual notions of universality in computability theory. In particular, the proofs heavily rely on the fact that the constructed DAE does not have unique solutions for a given initial data. This is very different from usual notions of universality, where one would expect a clear, unambiguous notion of evolution for a given initial data, as in computability theory. Second, the proofs usually rely on solutions that are piecewise defined. Hence they cannot be analytic, while analyticity is often a key expected property in experimental sciences. Third, the proofs of these results can be interpreted more as the fact that (fourth-order) polynomial algebraic differential equations are too loose a model compared to classical ordinary differential equations. In particular, one may challenge whether the result is really a universality result. The question whether one can require the solution that approximates phi to be the unique solution for a given initial data is a well-known open problem [Rubel 1981, page 2], [Boshernitzan 1986, Conjecture 6.2]. In this article, we solve it and show that Rubel's statement holds for polynomial ordinary differential equations (ODEs), and since polynomial ODEs have a unique solution given an initial data, this positively answers Rubel's open problem. More precisely, we show that there exists a fixed polynomial ODE such that for any phi and epsilon(t) there exists some initial condition that yields a solution that is epsilon-close to phi at all times. The proof uses ordinary differential equation programming. We believe it sheds some light on computability theory for continuous-time models of computation. It also demonstrates that ordinary differential equations are indeed universal in the sense of Rubel and hence suffer from the same problem as DAEs for modelization: a single equation is capable of modelling any phenomenon with arbitrary precision, meaning that trying to fit a model based on polynomial DAEs or ODEs is too general (if it has a sufficient dimension).
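
Spelled out as a formula, the statement proved here has the following shape, as far as the abstract indicates (the dimension d and the polynomial p are fixed once and for all, and the key difference from Rubel's DAE is the uniqueness of the solution for each initial value):

```latex
\[
  \exists\, d \in \mathbb{N},\ \exists\, p \in \mathbb{R}[y_1,\dots,y_d]^d
  \ \ \forall\, \varphi, \varepsilon \in C^0(\mathbb{R}, \mathbb{R}_{>0})
  \ \ \exists\, \alpha \in \mathbb{R}^d :
\]
\[
  y' = p(y),\ y(0) = \alpha \ \text{ has a unique solution } y,
  \quad\text{and}\quad
  |y_1(t) - \varphi(t)| < \varepsilon(t) \ \text{ for all } t .
\]
```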

Posted Content
TL;DR: It is discovered that all claims that certain UCOMP devices can perform hypercomputation, super-Turing computation or solve NP-complete problems in polynomial time rely on the provision of one or more unphysical resources.
Abstract: We discuss some claims that certain UCOMP devices can perform hypercomputation (compute Turing-uncomputable functions) or perform super-Turing computation (solve NP-complete problems in polynomial time). We discover that all these claims rely on the provision of one or more unphysical resources.

Journal ArticleDOI
TL;DR: In this article, a new paradigm for model updating is proposed that formulates the structured inverse eigenvalue problem as a Constraint Satisfaction Problem, making it possible to solve under-determined and non-unique inverse problems and to accommodate measurement uncertainty through relaxation of the constraint equations.

Journal ArticleDOI
TL;DR: This paper provides a proof of the folklore result that an object is random with respect to the image distribution F(P) if and only if it has a P-random F-preimage, and discusses the related quantitative results and applications.
Abstract: Algorithmic randomness theory starts with a notion of an individual random object. To be reasonable, this notion should have some natural properties; in particular, an object should be random with respect to the image distribution F(P) (for some distribution P and some mapping F) if and only if it has a P-random F-preimage. This result (for computable distributions and mappings, and Martin-Löf randomness) was known for a long time (folklore); for layerwise computable mappings it was mentioned in Hoyrup and Rojas (2009, Proposition 5) (even for the more general case of computable metric spaces). In this paper we provide a proof and discuss the related quantitative results and applications.

Book ChapterDOI
01 Jan 2017
TL;DR: This work considers the development of computability in the 1930s from what has been called the formalism-free point of view and employs a dual conceptual framework: confluence together with grounding.
Abstract: We consider the development of computability in the 1930s from what we have called the formalism-free point of view. We employ a dual conceptual framework: confluence together with grounding.

Posted Content
TL;DR: A more restrictive notion of feasibility of functionals on Baire space than the established one from second-order complexity theory is introduced, making it possible to consider functions on the natural numbers as running times of oracle Turing machines and avoiding second-order polynomials, which are notoriously difficult to handle.
Abstract: This paper introduces a more restrictive notion of feasibility of functionals on Baire space than the established one from second-order complexity theory, thereby making it possible to consider functions on the natural numbers as running times of oracle Turing machines and avoiding second-order polynomials, which are notoriously difficult to handle. Furthermore, all machines that witness this stronger kind of feasibility can be clocked, and the different traditions of treating partial operators from computable analysis and second-order complexity theory are equated in a precise sense. The new notion is named "strong polynomial-time computability" and proven to be a strictly stronger requirement than polynomial-time computability. It is proven that within the framework for complexity of operators from analysis introduced by Kawamura and Cook the classes of strongly polynomial-time computable operators and polynomial-time computable operators coincide.

Posted Content
TL;DR: It turns out that the availability of interpreters and specializers that make a monoidal category into a monoidal computer is equivalent to the existence of a universal state space that carries a weakly final state machine for all types of input and output.
Abstract: Monoidal computer is a categorical model of intensional computation, where many different programs correspond to the same input-output behavior. The upshot of yet another model of computation is that a categorical formalism should provide a much needed high-level language for the theory of computation, flexible enough to allow abstracting away the low-level implementation details when they are irrelevant, or taking them into account when they are genuinely needed. A salient feature of the approach through monoidal categories is the formal graphical language of string diagrams, which supports visual reasoning about programs and computations. In the present paper, we provide a coalgebraic characterization of monoidal computer. It turns out that the availability of interpreters and specializers, which make a monoidal category into a monoidal computer, is equivalent to the existence of a *universal state space*, which carries a weakly final state machine for any pair of input and output types. Being able to program state machines in monoidal computers allows us to represent Turing machines, to capture their execution, count their steps, and track, e.g., the memory cells that they use. The coalgebraic view of monoidal computer thus provides a convenient diagrammatic language for studying computability and complexity.

Book
31 Aug 2017
TL;DR: In this book, Hirschfeldt et al. show that the Homogeneous Model Theorem (HMT) and the Atomic Model Theorem (AMT) are equivalent in the sense of reverse mathematics, as well as in a strong computability-theoretic sense.
Abstract: Goncharov and Peretyat’kin independently gave necessary and sufficient conditions for when a set of types of a complete theory T is the type spectrum of some homogeneous model of T. Their result can be stated as a principle of second-order arithmetic, which we call the Homogeneous Model Theorem (HMT), and analyzed from the points of view of computability theory and reverse mathematics. Previous computability-theoretic results by Lange suggested a close connection between HMT and the Atomic Model Theorem (AMT), which states that every complete atomic theory has an atomic model. We show that HMT and AMT are indeed equivalent in the sense of reverse mathematics, as well as in a strong computability-theoretic sense. We do the same for an analogous result of Peretyat’kin giving necessary and sufficient conditions for when a set of types is the type spectrum of some model. Along the way, we analyze a number of related principles. Some of these turn out to fall into well-known reverse mathematical classes, such as $\mathsf{ACA}_0$, $\mathrm{I}\Sigma_2$, and $\mathrm{B}\Sigma_2$. Others, however, exhibit complex interactions with first-order induction and bounding principles. In particular, we isolate several principles that are provable from $\mathrm{I}\Sigma_2$, are (more than) arithmetically conservative over $\mathsf{RCA}_0$, and imply $\mathrm{I}\Sigma^0_2$ over $\mathrm{B}\Sigma_2$. In an attempt to capture the combinatorics of this class of principles, we introduce the principle $\Pi^0_1\mathsf{GA}$, as well as its generalization $\Pi^0_n\mathsf{GA}$, which is conservative over $\mathsf{RCA}_0$ and equivalent to $\mathrm{I}\Sigma^0_{n+1}$ over $\mathrm{B}\Sigma^0_{n+1}$.