
Showing papers on "Turing machine published in 2014"


Alex Graves, Greg Wayne, Ivo Danihelka
20 Oct 2014
TL;DR: The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.
Abstract: We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
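Not from the paper's code, but the read/write mechanics it describes can be sketched in a few lines of NumPy: content-based addressing yields a softmax weighting over memory rows, reads are weighted sums, and writes are soft erase-then-add updates, so every operation is differentiable. All names, dimensions, and the toy usage below are illustrative.

```python
import numpy as np

def content_addressing(memory, key, beta):
    """Softmax weighting over memory rows by cosine similarity to a query key.

    memory: (N, M) array of N rows of width M; key: (M,); beta: sharpness.
    Returns w with w >= 0 and w.sum() == 1 (a differentiable "address").
    """
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    scores = beta * sims
    w = np.exp(scores - scores.max())
    return w / w.sum()

def read(memory, w):
    """Read vector: weighted sum of memory rows."""
    return w @ memory

def write(memory, w, erase, add):
    """Soft write: erase each row in proportion to w, then add in proportion to w."""
    memory = memory * (1.0 - np.outer(w, erase))
    return memory + np.outer(w, add)

# Toy cycle on a 4x3 memory: address by content, read, then write.
M = np.random.randn(4, 3)
k = np.random.randn(3)
w = content_addressing(M, k, beta=5.0)
r = read(M, w)                                             # shape (3,)
M = write(M, w, erase=np.full(3, 0.5), add=np.random.randn(3))
```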

1,471 citations


Posted Content
Alex Graves, Greg Wayne, Ivo Danihelka
TL;DR: Neural Turing Machines as discussed by the authors extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes; the combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end.
Abstract: We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.

1,328 citations


Book
12 Mar 2014
TL;DR: This introduction to the basic theoretical models of computability develops their rich and varied structure and culminates in discussions of effective computability, decidability, and Gödel's incompleteness theorems.
Abstract: From the Publisher: This introduction to the basic theoretical models of computability develops their rich and varied structure. The first part is devoted to finite automata and their properties. Afterwards, pushdown automata are utilized as a broader class of models, enabling the analysis of context-free languages. In the remaining chapters, Turing machines are introduced, and the book culminates in discussions of effective computability, decidability, and Gödel's incompleteness theorems.

372 citations


Journal ArticleDOI
08 May 2014-PLOS ONE
TL;DR: In this article, the authors present a numerical approach to the problem of approximating the Kolmogorov-Chaitin complexity of short strings, motivated by the notion of algorithmic probability.
Abstract: Drawing on various notions from theoretical computer science, we present a novel numerical approach, motivated by the notion of algorithmic probability, to the problem of approximating the Kolmogorov-Chaitin complexity of short strings. The method is an alternative to the traditional lossless compression algorithms, which it may complement, the two being serviceable for different string lengths. We provide a thorough analysis for all short binary strings, and for most strings of somewhat greater length, by running all Turing machines with 5 states and 2 symbols (with reduction techniques), using the most standard formalism of Turing machines, used, for example, in the Busy Beaver problem. We address the question of stability and error estimation, the sensitivity of the continued application of the method for wider coverage and better accuracy, and provide statistical evidence suggesting robustness. As with compression algorithms, this work promises to deliver a range of applications, and to provide insight into the question of complexity calculation of finite (and short) strings. Additional material can be found at the Algorithmic Nature Group website at http://www.algorithmicnature.org. An Online Algorithmic Complexity Calculator implementing this technique and making the data available to the research community is accessible at http://www.complexitycalculator.com.
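The link between output frequency and complexity that this approach relies on is Levin's coding theorem; a compact statement in standard notation (my summary, not quoted from the paper) is:

```latex
% m(s): algorithmic probability of a string s over a universal prefix-free machine U.
% Coding theorem (Levin): K(s) = -log2 m(s) + O(1).
% Replacing m by the empirical output-frequency distribution D(s), obtained by running
% all small Turing machines and recording how often each halts with output s, gives
% the estimate used by the method.
\[
  m(s) \;=\; \sum_{p \,:\, U(p)=s} 2^{-|p|},
  \qquad
  K(s) \;=\; -\log_2 m(s) + O(1),
  \qquad
  \widehat{K}(s) \;\approx\; -\log_2 D(s).
\]
```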

125 citations


Book ChapterDOI
17 Aug 2014
TL;DR: The notion of differing-inputs obfuscation is studied in the setting where the attacker is also given some auxiliary information related to the circuits, showing that this notion leads to many interesting applications.
Abstract: The notion of differing-inputs obfuscation (diO) was introduced by Barak et al. (CRYPTO 2001). It guarantees that, for any two circuits C0, C1, if it is difficult to come up with an input x on which C0(x) ≠ C1(x), then it should also be difficult to distinguish the obfuscation of C0 from that of C1. This is a strengthening of indistinguishability obfuscation, where the above is only guaranteed for circuits that agree on all inputs: C0(x) = C1(x) for all x. Two recent works of Ananth et al. (ePrint 2013) and Boyle et al. (TCC 2014) study the notion of diO in the setting where the attacker is also given some auxiliary information related to the circuits, showing that this notion leads to many interesting applications.
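Informally, and in notation of my own choosing rather than the paper's exact formulation, the auxiliary-input flavour of diO can be stated as follows: for any efficient sampler Sam outputting (C0, C1, aux),

```latex
% If no efficient adversary A can find a differing input, even given aux ...
\[
  \Pr\!\big[(C_0,C_1,\mathsf{aux}) \leftarrow \mathsf{Sam};\ x \leftarrow \mathcal{A}(C_0,C_1,\mathsf{aux}) \,:\, C_0(x) \neq C_1(x)\big] \;\le\; \mathrm{negl}(\lambda)
\]
% ... then no efficient distinguisher D can tell the two obfuscations apart, given aux:
\[
  \big|\Pr[\mathcal{D}(\mathsf{diO}(C_0),\mathsf{aux})=1] - \Pr[\mathcal{D}(\mathsf{diO}(C_1),\mathsf{aux})=1]\big| \;\le\; \mathrm{negl}(\lambda).
\]
```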

113 citations


Proceedings ArticleDOI
05 Jan 2014
TL;DR: In this paper, it was shown that Winfree's abstract tile assembly model is not intrinsically universal when restricted to use non-cooperative tile binding, and the result holds in both two and three dimensions.
Abstract: We prove a negative result on the power of a model of algorithmic self-assembly for which finding general techniques and results has been notoriously difficult. Specifically, we prove that Winfree's abstract Tile Assembly Model is not intrinsically universal when restricted to use noncooperative tile binding. This stands in stark contrast to the recent result that the abstract Tile Assembly Model is indeed intrinsically universal when cooperative binding is used (FOCS 2012). Noncooperative self-assembly, also known as "temperature 1", is where all tiles bind to each other if they match on at least one side. On the other hand, cooperative self-assembly requires that some tiles bind on at least two sides. Our result shows that the change from non-cooperative to cooperative binding qualitatively improves the range of dynamics and behaviors found in these models of nanoscale self-assembly. The result holds in both two and three dimensions; the latter being quite surprising given that three-dimensional noncooperative tile assembly systems simulate Turing machines. This shows that Turing universal behavior in self-assembly does not imply the ability to simulate all algorithmic self-assembly processes. In addition to the negative result, we exhibit a three-dimensional noncooperative self-assembly tile set capable of simulating any two-dimensional noncooperative self-assembly system. This tile set implies that, in a restricted sense, non-cooperative self-assembly is intrinsically universal for itself.

105 citations


Posted Content
TL;DR: Several reconfiguration problems known to be PSPACE-complete remain PSPACE-complete even when limited to graphs of bounded bandwidth, resolving a question left open by Bonsma (2012), while a large class of reconfiguration problems becomes tractable on graphs of bounded treedepth.
Abstract: We show that several reconfiguration problems known to be PSPACE-complete remain so even when limited to graphs of bounded bandwidth. The essential step is noticing the similarity to very limited string rewriting systems, whose ability to directly simulate Turing Machines is classically known. This resolves a question left open in [Bonsma P., 2012]. On the other hand, we show that a large class of reconfiguration problems becomes tractable on graphs of bounded treedepth, and that this result is in some sense tight.

75 citations


Book ChapterDOI
01 Jan 2014
TL;DR: The notion of degree of unsolvability was introduced by Post in [Post, 1944] and has been used extensively in computability theory, as mentioned in this paper; if a set A is computable relative to a set B, then A is Turing reducible to B.
Abstract: Modern computability theory began with Turing [Turing, 1936], where he introduced the notion of a function computable by a Turing machine. Soon after, it was shown that this definition was equivalent to several others that had been proposed previously and the Church-Turing thesis that Turing computability captured precisely the informal notion of computability was commonly accepted. This isolation of the concept of computable function was one of the greatest advances of twentieth century mathematics and gave rise to the field of computability theory. Among the first results in computability theory was Church and Turing’s work on the unsolvability of the decision problem for first-order logic. Computability theory to a great extent deals with noncomputable problems. Relativized computation, which also originated with Turing, in [Turing, 1939], allows the comparison of the complexity of unsolvable problems. Turing formalized relative computation with oracle Turing machines. If a set A is computable relative to a set B, we say that A is Turing reducible to B. By identifying sets that are reducible to each other, we are led to the notion of degree of unsolvability first introduced by Post in [Post, 1944]. The degrees form a partially ordered set whose study is called degree theory. Most of the unsolvable problems that have arisen outside of computability theory are computably enumerable (c.e.). The c.e. sets can intuitively be viewed as unbounded search problems, a typical example being those formulas provable in some effectively given formal system. Reducibility allows us to isolate the most difficult c.e. problems, the complete problems. The standard method for showing that a c.e. problem is undecidable is to show that it is complete. Post [Post, 1944] asked if this technique always works, i.e., whether there is a noncomputable, incomplete c.e. set. This problem came to be known as Post’s Problem and it was the origin of degree theory. Degree theory became one of the core areas of computability theory and attracted some of the most brilliant logicians of the second half of the twentieth century. The fascination with the field stems from the quite sophisticated techniques needed to solve the problems that arose, many of which are quite easy to state. The hallmark of the field is the priority method introduced by
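For reference, the two notions the chapter keeps returning to, relative computability and degrees of unsolvability, have short standard definitions (not specific to this chapter):

```latex
% A is Turing reducible to B when some oracle Turing machine with oracle B computes A;
% mutual reducibility is an equivalence relation whose classes are the Turing degrees.
\[
  A \le_T B \;\iff\; A \text{ is computable by an oracle Turing machine with oracle } B,
\]
\[
  A \equiv_T B \;\iff\; A \le_T B \text{ and } B \le_T A,
  \qquad
  \deg(A) \;=\; \{\, X : X \equiv_T A \,\}.
\]
```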

71 citations


Journal ArticleDOI
TL;DR: The results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks and show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.
Abstract: We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as the static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

58 citations


Book ChapterDOI
24 Feb 2014
TL;DR: In this paper, a combination of definitional, constructive, and impossibility results is provided regarding obfuscation for evasive functions, where an evasive circuit family is a collection of circuits such that for every input x, a random circuit from the family outputs 0 on x with overwhelming probability.
Abstract: An evasive circuit family is a collection of circuits \(\mathcal{C}\) such that for every input x, a random circuit from \(\mathcal{C}\) outputs 0 on x with overwhelming probability. We provide a combination of definitional, constructive, and impossibility results regarding obfuscation for evasive functions:

49 citations


Proceedings ArticleDOI
14 Jul 2014
TL;DR: This paper gives the first complete positive answer to this long-standing problem of the λ-calculus; its main technical contribution is the definition of useful reductions and the thorough analysis of their properties.
Abstract: Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is λ-calculus a reasonable machine? Is there a way to measure the computational complexity of a λ-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of λ-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating λ-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modelled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the λ-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the λ-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to β-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
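A textbook family (not taken from the paper) illustrates the size explosion problem mentioned above: with the duplicator δ = λx. x x,

```latex
% t_n nests n copies of delta around a free variable y:
%   t_n = delta(delta( ... (delta y) ... ))        (n occurrences of delta)
% Reducing the innermost redex first doubles the size at every step:
%   delta y -> y y,   delta (y y) -> (y y)(y y),   ...
% so t_n reaches its normal form in n beta-steps, yet that normal form contains
% 2^n occurrences of y.
\[
  t_n \;=\; \underbrace{\delta(\delta(\cdots(\delta}_{n}\; y)\cdots))
  \;\longrightarrow_{\beta}^{\,n}\;
  \text{a normal form of size } 2^{n}.
\]
```

This is the obstacle the paper sidesteps by measuring cost in the linear substitution calculus, where sharing keeps intermediate results small.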

Journal ArticleDOI
Dmitry A. Zaitsev
01 Jan 2014
TL;DR: A universal Petri net with 14 places, 42 transitions, and 218 arcs was built in the class of deterministic inhibitor Petri nets (DIPNs); it is based on the minimal Turing machine (TM) of Woods and Neary with 6 states, 4 symbols, and 23 instructions, directly simulated by a PetriNet.
Abstract: A universal Petri net with 14 places, 42 transitions, and 218 arcs was built in the class of deterministic inhibitor Petri nets (DIPNs); it is based on the minimal Turing machine (TM) of Woods and Neary with 6 states, 4 symbols, and 23 instructions, directly simulated by a Petri net. Several techniques were developed, including bi-tag system (BTS) construction on a DIPN, special encoding of TM tape by two stacks, and concise subnets that implement arithmetic encoding operations. The simulation using the BTS has cubic time and linear space complexity, while the resulting universal net runs in exponential time and quadratic space with respect to the target net transitions' firing sequence length. The technique is applicable for simulating any TM by the Petri net.
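The net class used here, deterministic inhibitor Petri nets, extends ordinary place/transition nets with inhibitor arcs that test places for emptiness. The following is a generic sketch of that firing rule in Python (standard inhibitor-net semantics only; the dictionaries and the toy net are made up for illustration, and none of this reproduces the 14-place universal net itself).

```python
def enabled(marking, transition):
    """A transition is enabled iff every input place holds enough tokens
    and every place tested by an inhibitor arc is empty."""
    pre, inhibit = transition["pre"], transition["inhibit"]
    return (all(marking.get(p, 0) >= n for p, n in pre.items())
            and all(marking.get(p, 0) == 0 for p in inhibit))

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in transition["pre"].items():
        m[p] = m.get(p, 0) - n
    for p, n in transition["post"].items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net: t moves a token from p1 to p2 only while p3 is empty (inhibitor arc).
t = {"pre": {"p1": 1}, "post": {"p2": 1}, "inhibit": {"p3"}}
marking = {"p1": 1, "p3": 0}
if enabled(marking, t):
    marking = fire(marking, t)   # -> {"p1": 0, "p3": 0, "p2": 1}
```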

Posted Content
TL;DR: In this paper, the authors show how to build indistinguishability obfuscation (iO) for Turing machines where the overhead is polynomial in the security parameter λ, the machine description |M|, and the input size |x| (with only a negligible correctness error).
Abstract: We show how to build indistinguishability obfuscation (iO) for Turing Machines where the overhead is polynomial in the security parameter λ, machine description |M| and input size |x| (with only a negligible correctness error). In particular, we avoid growing polynomially with the maximum space of a computation. Our construction is based on iO for circuits, one-way functions and injective pseudorandom generators. Our results are based on new “selective enforcement” techniques. Here we first create a primitive called positional accumulators that allows for a small commitment to a much larger storage. The commitment is unconditionally sound for a select piece of the storage. This primitive serves as an “iO-friendly” tool that allows us to make two different programs equivalent at different stages of a proof. The pieces of storage that are selected depend on what hybrid stage we are at in a proof. We first build up our enforcement ideas in a simpler context of “message hiding encodings” and work our way up to indistinguishability obfuscation.

Journal ArticleDOI
TL;DR: It is proved that every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ 2^s if and only if NLL ⊆ LL/polylog.
Abstract: We strengthen a previously known connection between the size complexity of two-way finite automata (2DFAs and 2NFAs) and the space complexity of Turing machines (TMs). Specifically, we prove that every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ 2^s if and only if NLL ⊆ LL/polylog. Here, 2DFAs and 2NFAs are the deterministic and nondeterministic two-way finite automata, NL and L/poly are the standard classes of languages recognizable in logarithmic space by nondeterministic TMs and by deterministic TMs with access to polynomially long advice, and NLL and LL/polylog are the corresponding complexity classes for space O(loglogn) and advice length poly(logn). Our arguments strengthen and extend an old theorem by Berman and Lingas and can be used to obtain variants of the above statements for other modes of computation or other combinations of bounds for the input length, the space usage, and the length of advice.

Book ChapterDOI
01 Jan 2014
TL;DR: The brain is a strongly recurrent structure that suggests a major role of self-feeding dynamics in the processes of perceiving, acting and learning, and in maintaining the organism alive.
Abstract: The brain is a strongly recurrent structure. This massive recurrence suggests a major role of self-feeding dynamics in the processes of perceiving, acting and learning, and in maintaining the organism alive.

Book ChapterDOI
01 Sep 2014
TL;DR: The generalized powerset construction is used to define a generic (trace) semantics for \(\mathbb{T}\)-automata, and it is shown by numerous examples that it correctly instantiates for some known classes of machines/languages captured by the Chomsky hierarchy.
Abstract: The Chomsky hierarchy plays a prominent role in the foundations of theoretical computer science relating classes of formal languages of primary importance. In this paper we use recent developments on coalgebraic and monad-based semantics to obtain a generic notion of a \(\mathbb{T}\)-automaton, where \(\mathbb{T}\) is a monad, which allows the uniform study of various notions of machines (e.g. finite state machines, multi-stack machines, Turing machines, weighted automata). We use the generalized powerset construction to define a generic (trace) semantics for \(\mathbb{T}\)-automata, and we show by numerous examples that it correctly instantiates for some known classes of machines/languages captured by the Chomsky hierarchy. Moreover, our approach provides new generic techniques for studying the expressive power of various machine-based models.

Journal ArticleDOI
TL;DR: A theory of measurement is begun in which an experimenter and his or her experimental procedure are modeled by algorithms that interact with physical equipment through a simple abstract interface, and lower bounds are given on the computational power of Turing machines in polynomial time using nonuniform complexity classes.
Abstract: We have begun a theory of measurement in which an experimenter and his or her experimental procedure are modeled by algorithms that interact with physical equipment through a simple abstract interface. The theory is based upon using models of physical equipment as oracles to Turing machines. This allows us to investigate the computability and computational complexity of measurement processes. We examine eight different experiments that make measurements and, by introducing the idea of an observable indicator, we identify three distinct forms of measurement process and three types of measurement algorithm. We give axiomatic specifications of three forms of interfaces that enable the three types of experiment to be used as oracles to Turing machines, and lemmas that help certify an experiment satisfies the axiomatic specifications. For experiments that satisfy our axiomatic specifications, we give lower bounds on the computational power of Turing machines in polynomial time using nonuniform complexity classes. These lower bounds break the barrier defined by the Church-Turing Thesis.

Book ChapterDOI
01 May 2014
TL;DR: A survey volume on Turing's legacy in logic, edited by Rod Downey, with chapters ranging from computability and analysis to higher generalizations of the Turing model and transfinite machine models.
Abstract: Contents: Turing's legacy: developments from Turing's ideas in logic (Rod Downey); 1. Computability and analysis: the legacy of Alan Turing (Jeremy Avigad and Vasco Brattka); 2. Alan Turing and the other theory of computation (expanded) (Lenore Blum); 3. Turing in Quantumland (Harry Buhrman); 4. Computability theory, algorithmic randomness and Turing's anticipation (Rod Downey); 5. Computable model theory (Ekaterina B. Fokina, Valentina Harizanov and Alexander Melnikov); 6. Towards common-sense reasoning via conditional simulation: legacies of Turing in artificial intelligence (Cameron E. Freer, Daniel M. Roy and Joshua B. Tenenbaum); 7. Mathematics in the age of the Turing machine (Thomas C. Hales); 8. Turing and the development of computational complexity (Steven Homer and Alan L. Selman); 9. Turing machines to word problems (Charles F. Miller, III); 10. Musings on Turing's thesis (Anil Nerode); 11. Higher generalizations of the Turing model (Dag Normann); 12. Step by recursive step: Church's analysis of effective calculability (Wilfried Sieg); 13. Turing and the discovery of computability (Robert Irving Soare); 14. Transfinite machine models (P. D. Welch).

Book ChapterDOI
07 Sep 2014
TL;DR: This work argues that this model closely matches several implementations of time in computer environments, and gives complexity-theoretic security proofs for OTP protocols and HMQV-like one-round AKE protocols.
Abstract: The notion of time plays an important role in many practically deployed cryptographic protocols, ranging from One-Time-Password (OTP) tokens to the Kerberos protocol. However, time is difficult to model in a Turing machine environment. We propose the first such model, where time is modelled as a global counter T. We argue that this model closely matches several implementations of time in computer environments. The usefulness of the model is shown by giving complexity-theoretic security proofs for OTP protocols and HMQV-like one-round AKE protocols.

Journal ArticleDOI
TL;DR: This work models people as finite automata, and provides a simple algorithm that, on a problem that captures a number of settings of interest, provably performs optimally as the number of states in the automaton increases.
Abstract: There have been two major lines of research aimed at capturing resource-bounded players in game theory. The first, initiated by Rubinstein (1986), charges an agent for doing costly computation; the second, initiated by Neyman (1985), does not charge for computation, but limits the computation that agents can do, typically by modeling agents as finite automata. We review recent work on applying both approaches in the context of decision theory. For the first approach, we take the objects of choice in a decision problem to be Turing machines, and charge players for the “complexity” of the Turing machine chosen (e.g., its running time). This approach can be used to explain well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on) and belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions) as the outcomes of quite rational decisions. For the second approach, we model people as finite automata, and provide a simple algorithm that, on a problem that captures a number of settings of interest, provably performs optimally as the number of states in the automaton increases.

Book ChapterDOI
10 Mar 2014
TL;DR: This paper considers the computational power of a new variant of networks of evolutionary processors which seems to be more suitable for a software and hardware implementation and shows that tag systems and Turing machines can be simulated by these networks with a constant number of nodes.
Abstract: In this paper, we consider the computational power of a new variant of networks of evolutionary processors which seems to be more suitable for a software and hardware implementation. Each processor as well as the data navigating throughout the network are now considered to be polarized. While the polarization of every processor is predefined, the data polarization is dynamically computed by means of a valuation mapping. Consequently, the protocol of communication is naturally defined by means of this polarization. We show that tag systems can be simulated by these networks with a constant number of nodes, while Turing machines can be simulated, in a time-efficient way, by these networks with a number of nodes depending linearly on the tape alphabet of the Turing machine.

Journal ArticleDOI
TL;DR: A new constructive proof of the fact that mathematical programming is Turing complete is presented, and its usefulness is showcased by discussing an application to finding the hardest input of any given program running on a Minsky Register Machine.
Abstract: Mathematical programming is Turing complete, and can be used as a general-purpose declarative language. We present a new constructive proof of this fact, and showcase its usefulness by discussing an application to finding the hardest input of any given program running on a Minsky Register Machine. We also discuss an application of mathematical programming to software verification obtained by relaxing one of the properties of Turing complete languages.

Journal ArticleDOI
TL;DR: In this paper, it was shown that a constant amount of space is sufficient to simulate a polynomial-space bounded Turing machine by P systems with active membranes, where the size of the alphabet and the number of membrane labels of each P system are also taken into account.
Abstract: We show that a constant amount of space is sufficient to simulate a polynomial-space bounded Turing machine by P systems with active membranes. We thus obtain a new characterisation of PSPACE, which raises interesting questions about the definition of space complexity for P systems. We then propose an alternative definition, where the size of the alphabet and the number of membrane labels of each P system are also taken into account. Finally we prove that, when less than a logarithmic number of membrane labels is available, moving the input objects around the membrane structure without rewriting them is not enough to even distinguish inputs of the same length.

Journal ArticleDOI
TL;DR: A general polynomial time Church-Turing Thesis is proposed for feasible computations by analogue-digital systems, having the non-uniform complexity class BPP//log* as theoretical upper bound, and it is proved that the higher polytime limit P/poly can be attained via non-computable analogue-digital interface protocols.
Abstract: We argue that dynamical systems involving discrete and continuous data can be modelled by Turing machines with oracles that are physical processes. Using the theory introduced in Beggs et al. [2,3], we consider the scope and limits of polynomial time computations by such systems. We propose a general polynomial time Church-Turing Thesis for feasible computations by analogue-digital systems, having the non-uniform complexity class BPP//log* as theoretical upper bound. We show why BPP//log* should replace P/poly, which was proposed by Siegelmann for neural nets [23,24]. Then we examine whether other sources of hypercomputation can be found in analogue-digital systems besides the oracle itself. We prove that the higher polytime limit P/poly can be attained via non-computable analogue-digital interface protocols.

01 Jan 2014
TL;DR: This thesis explores automata learning, which is an umbrella term for techniques that derive finite automata from external information sources, in the areas of verification and synthesis, and introduces the notion of quantified data automata and develops an active learning algorithm for these automata.
Abstract: The objective of this thesis is to explore automata learning, which is an umbrella term for techniques that derive finite automata from external information sources, in the areas of verification and synthesis. We consider four application scenarios that turn out to be particularly well-suited: Regular Model Checking, quantified invariants of linear data structures, automatic reachability games, and labeled safety games. The former two scenarios stem from the area of verification whereas the latter two stem from the area of synthesis (more precisely, from the area of infinite-duration two-player games over graphs, as popularized by McNaughton). Regular Model Checking is a special kind of Model Checking in which the program to verify is modeled in terms of finite automata. We develop various (semi-)algorithms for computing invariants in Regular Model Checking: a white-box algorithm, which takes the whole program as input; two semi-black-box algorithms, which have access to a part of the program and learn missing information from a teacher; finally, two black-box algorithms, which obtain all information about the program from a teacher. For the black-box algorithms, we employ a novel paradigm, called ICE-learning, which is a generic learning setting for learning invariants. Quantified invariants of linear data structures occur in Floyd-Hoare-style verification of programs manipulating arrays and lists. To learn such invariants, we introduce the notion of quantified data automata and develop an active learning algorithm for these automata. Based on a finite sample of configurations that manifest on executions of the program in question, we learn a quantified data automaton and translate it into a logic formula expressing the invariant. The latter potentially requires an additional abstraction step to ensure that the resulting formula falls into a decidable logic. Automatic reachability games are classical reachability games played over automatic graphs; automatic graphs are defined by means of asynchronous transducers and subsume various types of graphs, such as finite graphs, pushdown graphs, and configuration graphs of Turing machines. We first consider automatic reachability games over finite graphs and present a symbolic fixed-point algorithm for computing attractors that uses deterministic finite automata to represent sets of vertices. Since such a

Book ChapterDOI
17 Oct 2014
TL;DR: This work shows that for any given deterministic MFA, one can construct a reversible MFA with the same number of heads that accepts the same language as the former, and applies this conversion method to a Turing machine.
Abstract: A two-way multi-head finite automaton (MFA) is a variant of a finite automaton consisting of a finite-state control, a finite number of heads that can move in two directions, and a read-only input tape. Here, we show that for any given deterministic MFA we can construct a reversible MFA with the same number of heads that accepts the same language as the former. We then apply this conversion method to a Turing machine. By this, we can obtain, in a simple way, an equivalent reversible Turing machine that is garbage-less, uses the same number of tape symbols, and uses the same amount of the storage tape.

Proceedings ArticleDOI
15 Jul 2014
TL;DR: This work studies protocols so that populations of distributed processes can construct networks and proves several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks.
Abstract: In this work, we study protocols so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol. Moreover, we assume pairwise interactions between the processes that are scheduled by a fair adversary. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network. We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. The expected time to convergence of our protocols is analyzed under a uniform random scheduler. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks. We additionally show how to partition the population into k supernodes, each being a line of log k nodes, for the largest such k. This amount of local memory is sufficient for the supernodes to obtain unique names and exploit their names and their memory to realize nontrivial constructions.
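The interaction model itself is easy to prototype: finite node states, a 0/1 state per connection, and a scheduler that repeatedly picks a random pair and applies a transition table. The sketch below simulates that loop for an arbitrary table; the table `delta` shown is a made-up toy rule, not one of the paper's protocols.

```python
import random

def simulate(n, init_state, delta, steps=10_000, seed=0):
    """Run a network-constructor protocol under a uniform random scheduler.

    delta maps (state_a, state_b, connection) -> (state_a', state_b', connection'),
    where connection is 0 (inactive) or 1 (active). All pairs start inactive.
    """
    rng = random.Random(seed)
    state = [init_state] * n
    conn = {}  # frozenset({i, j}) -> 0/1, defaulting to 0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        key = frozenset((i, j))
        a, b, c = state[i], state[j], conn.get(key, 0)
        if (a, b, c) in delta:
            state[i], state[j], conn[key] = delta[(a, b, c)]
    return state, {k for k, v in conn.items() if v == 1}

# Hypothetical toy rule: two 'free' nodes meeting over an inactive connection
# activate it and one of them becomes 'bound' (illustration only).
delta = {("free", "free", 0): ("free", "bound", 1)}
states, active_edges = simulate(8, "free", delta, steps=2000)
```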

Posted Content
TL;DR: This work argues against the claim that experimental boson-sampling would provide evidence against, or disprove, the Extended Church-Turing thesis -- that any physically realisable system can be efficiently simulated on a Turing machine.
Abstract: Boson-sampling is a highly simplified, but non-universal, approach to implementing optical quantum computation. It was shown by Aaronson and Arkhipov that this protocol cannot be efficiently classically simulated unless the polynomial hierarchy collapses, which would be a shocking result in computational complexity theory. Based on this, numerous authors have made the claim that experimental boson-sampling would provide evidence against, or disprove, the Extended Church-Turing thesis -- that any physically realisable system can be efficiently simulated on a Turing machine. We argue against this claim on the basis that, under a general, physically realistic independent error model, boson-sampling does not implement a provably hard computational problem in the asymptotic limit of large systems.

Journal ArticleDOI
TL;DR: This paper makes use of a known design for a DNA nanorobotic device due to Reif and Sahu for executing FSM computations using DNAzymes, describes in detail a finite-state automaton built on the 10-23 DNAzyme, and gives its design and computation procedure.
Abstract: A finite-state machine (FSM) is an abstract mathematical model of computation used to design both computer programs and sequential logic circuits. Considered as an abstract model of computation, the FSM is weak; it has less computational power than some other models of computation such as the Turing machine. This paper discusses finite-state automata based on Deoxyribonucleic Acid (DNA) and different implementations of DNA FSMs. Moreover, a comparison is made to clarify the advantages and disadvantages of each kind of DNA FSM presented. A major goal of nanoscience, nanotechnology, and supramolecular chemistry is to design synthetic molecular devices that are programmable and run autonomously. Programmable means that the behavior of the device can be modified without redesigning the whole structure. Autonomous means that it runs without externally mediated changes to its work cycle. In this paper we present an odd parity checker prototype using a DNAzyme FSM. Our paper makes use of a known design for a DNA nanorobotic device, due to Reif and Sahu, for executing FSM computations using DNAzymes. The main contribution of our paper is a description of how to program that device to perform an FSM computation known as odd parity checking. We describe in detail a finite-state automaton built on the 10-23 DNAzyme, and give its design and computation procedure. The design procedure has two major phases: designing the DNA strands for the language's alphabet, and, building on the first phase, designing the possible DNAzyme transitions.
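The computation being programmed, odd parity checking, is itself a two-state automaton; here is the abstract machine in plain Python (only the automaton the DNAzyme device realizes, nothing about the molecular implementation):

```python
def odd_parity(bits):
    """Accept iff the input contains an odd number of 1s.

    Two states: 'even' (start, rejecting) and 'odd' (accepting); reading a 1
    toggles the state, reading a 0 leaves it unchanged.
    """
    transition = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    state = "even"
    for b in bits:
        state = transition[(state, b)]
    return state == "odd"

assert odd_parity("1011") is True    # three 1s -> odd parity, accepted
assert odd_parity("1001") is False   # two 1s  -> even parity, rejected
```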

Journal ArticleDOI
TL;DR: It is shown that for such sets of random strings, any finite set of their truth-table degrees does not meet to the degree 0, even within the c.e. truth-table degrees, but when taking the meet over all such truth-table degrees, the infinite meet is indeed 0.
Abstract: We investigate the truth-table degrees of (co-)c.e. sets, in particular, sets of random strings. It is known that the set of random strings with respect to any universal prefix-free machine is Turing complete, but that truth-table completeness depends on the choice of universal machine. We show that for such sets of random strings, any finite set of their truth-table degrees does not meet to the degree 0, even within the c.e. truth-table degrees, but when taking the meet over all such truth-table degrees, the infinite meet is indeed 0. The latter result proves a conjecture of Allender, Friedman and Gasarch. We also show that there are two Turing complete c.e. sets whose truth-table degrees form a minimal pair.