
Showing papers on "Turing machine published in 2010"


01 Jan 2010
TL;DR: The Liquid State Machine has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons.
Abstract: The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. Characteristic features of this new model are (i) that it is a model for adaptive computational systems, (ii) that it provides a method for employing randomly connected circuits, or even “found” physical objects for meaningful computations, (iii) that it provides a theoretical context where heterogeneous, rather than stereotypical, local gates or processors increase the computational power of a circuit, (iv) that it provides a method for multiplexing different computations (on a common input) within the same circuit. This chapter reviews the motivation for this model, its theoretical background, and current work on implementations of this model in innovative artificial computing devices.
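
To make the reservoir idea concrete, here is a minimal rate-based sketch in the spirit of the LSM: a fixed, randomly connected circuit is left untrained and only linear readouts are fitted, with two readouts multiplexed on the same circuit state (feature (iv) above). All sizes and constants are illustrative assumptions, and a tanh echo-state reservoir stands in for the paper's spiking neurons.

```python
# Minimal rate-based sketch of the LSM idea: a fixed random recurrent
# circuit provides a rich state; only linear readouts are trained.
# All sizes and constants are illustrative assumptions, not values from
# the paper; a true LSM would use spiking neurons.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 500                            # reservoir size, time steps
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1
w_in = rng.normal(0, 1, N)

u = rng.uniform(-1, 1, T)                  # scalar input stream
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])       # untrained, randomly connected circuit
    states[t] = x

# Two different readouts multiplexed on the same circuit state:
targets = np.stack([u, np.roll(u, 3)], axis=1)  # identity, and 3-step memory
W_out = np.linalg.lstsq(states, targets, rcond=None)[0]  # train readouts only
print("mean readout error:", np.abs(states @ W_out - targets).mean())
```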

173 citations


Book ChapterDOI
14 Jun 2010
TL;DR: This work proposes a chemical implementation of stack machines -- a Turing-universal model of computation similar to Turing machines -- using DNA strand displacement cascades as the underlying chemical primitive, controlled by strand displacement logic.
Abstract: Bennett's proposed chemical Turing machine is one of the most important thought experiments in the study of the thermodynamics of computation. Yet the sophistication of molecular engineering required to physically construct Bennett's hypothetical polymer substrate and enzymes has deterred experimental implementations. Here we propose a chemical implementation of stack machines -- a Turing-universal model of computation similar to Turing machines -- using DNA strand displacement cascades as the underlying chemical primitive. More specifically, the mechanism described herein is the addition and removal of monomers from the end of a DNA polymer, controlled by strand displacement logic. We capture the motivating feature of Bennett's scheme: that physical reversibility corresponds to logically reversible computation, and arbitrarily little energy per computation step is required. Further, as a method of embedding logic control into chemical and biological systems, polymer-based chemical computation is significantly more efficient than geometry-free chemical reaction networks.
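
As a point of reference for the model being implemented chemically, below is a toy two-stack machine interpreter (two stacks suffice for Turing universality). The instruction set, program encoding, and example are assumptions of this sketch, not the paper's chemical encoding, in which push and pop correspond to adding and removing monomers at the end of a DNA polymer.

```python
# A toy two-stack machine interpreter (two stacks suffice for Turing
# universality). The instruction set and example program are illustrative
# assumptions, not the paper's chemical encoding: there, push/pop are
# realized as adding/removing monomers at the end of a DNA polymer.
def run(program, input_symbols, max_steps=10_000):
    s1, s2 = list(input_symbols), []   # the two stacks
    pc = 0
    for _ in range(max_steps):
        op = program[pc]
        if op[0] == "HALT":
            return s1, s2
        if op[0] == "PUSH":            # ("PUSH", stack_no, symbol)
            (s1 if op[1] == 1 else s2).append(op[2])
        elif op[0] == "POP":           # ("POP", stack_no)
            (s1 if op[1] == 1 else s2).pop()
        elif op[0] == "JMP_IF_TOP":    # ("JMP_IF_TOP", stack_no, symbol, target)
            stack = s1 if op[1] == 1 else s2
            if stack and stack[-1] == op[2]:
                pc = op[3]
                continue
        pc += 1
    raise RuntimeError("step budget exceeded")

# Example: move every 'a' from stack 1 to stack 2 (reversing the string).
prog = [
    ("JMP_IF_TOP", 1, "a", 2),  # 0: if top of stack 1 is 'a', go to 2
    ("HALT",),                  # 1: done
    ("POP", 1),                 # 2
    ("PUSH", 2, "a"),           # 3
    ("JMP_IF_TOP", 1, "a", 2),  # 4: loop while stack 1 still holds an 'a'
    ("HALT",),                  # 5
]
print(run(prog, ["a", "a", "a"]))   # -> ([], ['a', 'a', 'a'])
```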

132 citations


Proceedings ArticleDOI
Miklós Ajtai
05 Jun 2010
TL;DR: In this paper, it was shown that simulation with an oblivious, coin-flipping RAM, with only a polylogarithmic increase in time and space requirements, is possible even without any cryptographic assumptions.
Abstract: We show that simulation with only a polylogarithmic increase in the time and space requirements is possible on a probabilistic (coin-flipping) RAM without using any cryptographic assumptions. The simulation will fail with a negligible probability. If n memory locations are used, then the probability of failure is at most n^(-log n). Pippenger and Fischer have shown in 1979, see [7], that a Turing machine with one-dimensional tapes, performing a computation of length n, can be simulated on-line by an oblivious Turing machine with two-dimensional tapes, in time O(n log n), where a Turing machine is oblivious if the movements of its heads as a function of time are independent of its input. For RAMs the notion of obliviousness was defined by Goldreich in 1987 in [2], and he proved a simulation theorem about it. A RAM is oblivious if the distribution of its memory access pattern (which memory cells are accessed at which time) is independent of the program running on the RAM, provided that the time used by the program is fixed. That is, an adversary watching the memory accesses will not learn anything about the program running on the machine apart from its total time. Ostrovsky, improving Goldreich's theorem, has shown in 1990, see [4], [5], [3], that a RAM using n memory cells can be simulated by an oblivious RAM with a random oracle (where the random bits can be accessed repeatedly) so that the increase in the space and time requirements is only a polylogarithmic factor (Goldreich's factor was about exp[(log n)^(1/2)]). In both cases the oblivious RAM with a random oracle can be replaced by an oblivious probabilistic (coin-flipping) RAM, provided that we accept some unproven cryptographic assumptions, e.g., the existence of a one-way function. In this paper we show that simulation with an oblivious, coin-flipping RAM, with only a polylogarithmic increase in time and space requirements, is possible even without any cryptographic assumptions.
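
The notion of obliviousness is easiest to see in the trivial simulation below, where every logical access scans all n cells in a fixed order, so the physical access pattern is independent of the program. This is only the linear-overhead baseline; Ajtai's construction achieves the same hiding with polylogarithmic overhead by a far more involved randomized scheme. The class and method names are mine.

```python
# The trivial oblivious simulation: every logical read/write touches all n
# physical cells in a fixed order, so the access pattern reveals nothing
# about the program beyond its running time. This is only the linear-overhead
# baseline that defines "oblivious"; the paper achieves the same hiding with
# only polylogarithmic slowdown via a much more involved randomized scheme.
class ObliviousRAM:
    def __init__(self, n):
        self.cells = [0] * n

    def access(self, index, value=None):
        """Read cells[index], or write value there; pattern is index-free."""
        result = 0
        for i in range(len(self.cells)):       # always scan every cell
            old = self.cells[i]
            if i == index:
                result = old
                self.cells[i] = old if value is None else value
            else:
                self.cells[i] = old            # dummy write, same pattern
        return result

ram = ObliviousRAM(8)
ram.access(3, value=42)
print(ram.access(3))   # -> 42
```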

79 citations


Book ChapterDOI
01 Jan 2010
TL;DR: This chapter discusses how to combine higher-order functions with quantum computation, and investigates the interplay between classical objects and quantum objects in a higher-order context.
Abstract: The lambda calculus, developed in the 1930s by Church and Curry, is a formalism for expressing higher-order functions. In a nutshell, a higher-order function is a function that inputs or outputs a “black box”, which is itself a (possibly higher-order) function. Higher-order functions are a computationally powerful tool. Indeed, the pure untyped lambda calculus has the same computational power as Turing machines [Tur36]. At the same time, higher-order functions are a useful abstraction for programmers. They form the basis of functional programming languages such as LISP, ML, Scheme, and Haskell. In this chapter, we discuss how to combine higher-order functions with quantum computation. We believe that this is an interesting question for a number of reasons. First, the combination of higher-order functions with quantum phenomena raises the prospect of entangled functions. Certain well-known quantum phenomena can be naturally described in terms of entangled functions, and we will give some examples of this in Section 1.2. Another interesting aspect of higher-order quantum computation is the interplay between classical objects and quantum objects in a higher-order context. A priori, quantum computation operates on two distinct kinds of data: classical data, which can be read, written, duplicated, and discarded as usual, and quantum data, which has state preparation, unitary maps, and measurements as primitive operations. The higher-order computational paradigm introduces a third kind of data, namely functions, and one may ask whether functions behave like classical data, quantum data, or something intermediate. The answer is that there will actually be two kinds of functions: those
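
For readers unfamiliar with the terminology, a two-line example of a higher-order function; the names are illustrative, not from the chapter.

```python
# A higher-order function inputs or outputs a "black box". Here `twice`
# takes an arbitrary function f and returns a new function applying f twice.
def twice(f):               # f is a black box: we only ever apply it
    return lambda x: f(f(x))

inc = lambda n: n + 1
print(twice(inc)(0))        # -> 2
print(twice(twice)(inc)(0)) # higher-order: twice applied to itself -> 4
```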

79 citations


Proceedings ArticleDOI
30 Sep 2010
TL;DR: A new design is described for a game bot programming competition, the BotPrize, which will make it simpler to run, and, it is hoped, open up new opportunities for innovative use of the testing platform.
Abstract: Interesting, human-like opponents add to the entertainment value of a video game, and creating such opponents is a difficult challenge for programmers. Can artificial intelligence and computational intelligence provide the means to convincingly simulate a human opponent? Or are simple programming tricks and deceptions more effective? To answer these questions, the author designed and organised a game bot programming competition, the BotPrize, in which competitors submit bots that try to pass a “Turing Test for Bots”. In this paper, we describe a new design for the competition, which will make it simpler to run, and, we hope, open up new opportunities for innovative use of the testing platform. We illustrate the potential of the new platform by describing an implementation of a bot that is designed to learn how to appear more human using feedback obtained during play.

78 citations


Journal ArticleDOI
TL;DR: This paper discusses structural-complexity issues of one-tape Turing machines of various types that halt in linear time, where the running time of a machine is defined as the length of a longest computation path.

60 citations


Proceedings ArticleDOI
01 Jan 2010
TL;DR: The result strengthens the evidence that the complexity of a rewrite system is truthfully represented through the length of derivations and allows the classification of nondeterministic polytime-computation based on runtime complexity analysis of rewrite systems.
Abstract: In earlier work, we have shown that for confluent TRSs, innermost polynomial runtime complexity induces polytime computability of the functions defined. In this paper, we generalise this result to full rewriting; to this end we exploit graph rewriting. We give a new proof of the adequacy of graph rewriting for full rewriting that allows for a precise control of the resources copied. In sum, we completely describe an implementation of rewriting on a Turing machine (TM for short). We show that the runtime complexity of the TRS and the runtime complexity of the TM are polynomially related. Our result strengthens the evidence that the complexity of a rewrite system is truthfully represented through the length of derivations. Moreover, our result allows the classification of nondeterministic polytime-computation based on runtime complexity analysis of rewrite systems.
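
A toy illustration of the central quantity, runtime complexity as derivation length: a two-rule TRS for Peano addition, rewritten innermost with the steps counted. The term encoding and helper names are assumptions of this sketch, not the paper's formalism.

```python
# Runtime complexity as derivation length for the two-rule TRS
#   add(0, y) -> y        add(s(x), y) -> s(add(x, y))
# Terms are nested tuples: ("0",), ("s", t), ("add", t1, t2).
def step(t):
    """One innermost rewrite step; returns (new_term, changed_flag)."""
    if t[0] == "add":
        l, r = t[1], t[2]
        for i, sub in ((1, l), (2, r)):          # rewrite inside first
            new, changed = step(sub)
            if changed:
                return (("add", new, r) if i == 1 else ("add", l, new)), True
        if l[0] == "0":                          # add(0, y) -> y
            return r, True
        if l[0] == "s":                          # add(s(x), y) -> s(add(x, y))
            return ("s", ("add", l[1], r)), True
    if t[0] == "s":
        new, changed = step(t[1])
        if changed:
            return ("s", new), True
    return t, False

def derivation_length(t):
    n = 0
    while True:
        t, changed = step(t)
        if not changed:
            return n
        n += 1

def num(k):                                      # build s^k(0)
    t = ("0",)
    for _ in range(k):
        t = ("s", t)
    return t

print([derivation_length(("add", num(m), num(2))) for m in range(4)])
# -> [1, 2, 3, 4]: the runtime complexity of this TRS is linear
```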

58 citations


Journal ArticleDOI
TL;DR: It is shown how the mathematical languages used to describe the machines limit the possibilities to observe them; notions of observable deterministic and non-deterministic Turing machines are introduced, and conditions ensuring that the latter can be simulated by the former are established.
Abstract: The Turing machine is one of the simple abstract computational devices that can be used to investigate the limits of computability. In this paper, they are considered from several points of view that emphasize the importance and the relativity of mathematical languages used to describe the Turing machines. A deep investigation is performed on the interrelations between mechanical computations and their mathematical descriptions emerging when a human (the researcher) starts to describe a Turing machine (the object of the study) by different mathematical languages (the instruments of investigation). Together with traditional mathematical languages using such concepts as ‘enumerable sets’ and ‘continuum’, a new computational methodology allowing one to measure the number of elements of different infinite sets is used in this paper. It is shown how the mathematical languages used to describe the machines limit our possibilities to observe them. In particular, notions of observable deterministic and non-deterministic Turing machines are introduced, and conditions ensuring that the latter can be simulated by the former are established.

57 citations


Posted Content
TL;DR: A statistical comparison of the frequency distributions of data from physical sources and those generated by purely algorithmic means by running abstract computing devices such as Turing machines, cellular automata and Post Tag systems.
Abstract: We propose a test based on the theory of algorithmic complexity and an experimental evaluation of Levin's universal distribution to identify evidence in support of or in contravention of the claim that the world is algorithmic in nature. To this end we have undertaken a statistical comparison of the frequency distributions of data from physical sources on the one hand--repositories of information such as images, data stored in a hard drive, computer programs and DNA sequences--and the frequency distributions generated by purely algorithmic means on the other--by running abstract computing devices such as Turing machines, cellular automata and Post Tag systems. Statistical correlations were found and their significance measured.
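
A miniature of the experimental setup for one class of abstract devices might look as follows: run all 256 elementary cellular automata, then tally the frequency distribution of short binary blocks in their outputs. Comparing such a distribution against one extracted from physical data sources is the paper's actual test; the widths, depths, and block length here are illustrative assumptions.

```python
# Sketch of the method for one machine class: run all 256 elementary
# cellular automata from a single-cell seed and tally the frequency
# distribution of short binary blocks in their final rows. All sizes
# are illustrative; the paper compares such algorithmic distributions
# with distributions extracted from physical data.
from collections import Counter

def eca_row(rule, width=31, steps=15):
    row = [0] * width
    row[width // 2] = 1                       # single-cell seed
    for _ in range(steps):
        row = [(rule >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % width]))
               & 1 for i in range(width)]     # standard ECA update, wrapping
    return row

counts = Counter()
K = 4                                         # block length
for rule in range(256):
    bits = "".join(map(str, eca_row(rule)))
    for i in range(len(bits) - K + 1):
        counts[bits[i:i + K]] += 1

total = sum(counts.values())
for block, c in counts.most_common(5):
    print(block, round(c / total, 3))         # algorithmic block distribution
```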

53 citations


Journal ArticleDOI
TL;DR: This paper examines the ITRMs introduced by the third and fourth author, introduces a notion of ITRM-clockable ordinals corresponding to the running times of computations, and proves a Lost Melody theorem.
Abstract: Infinite time register machines (ITRMs) are register machines which act on natural numbers and which are allowed to run for arbitrarily many ordinal steps. Successor steps are determined by standard register machine commands. At limit times register contents are defined by appropriate limit operations. In this paper, we examine the ITRMs introduced by the third and fourth author (Koepke and Miller in Logic and Theory of Algorithms, LNCS, pp. 306–315, 2008), where a register content at a limit time is set to the lim inf of previous register contents if that limit is finite; otherwise the register is reset to 0. The theory of these machines has several similarities to the infinite time Turing machines (ITTMs) of Hamkins and Lewis. The machines can decide all $\Pi^1_1$ sets, yet are strictly weaker than ITTMs. As in the ITTM situation, we introduce a notion of ITRM-clockable ordinals corresponding to the running times of computations. These form a transitive initial segment of the ordinals. Furthermore we prove a Lost Melody theorem: there is a real r such that there is a program P that halts on the empty input for all oracle contents and outputs 1 iff the oracle number is r, but no program can decide for every natural number n whether or not $n \in r$ with the empty oracle. In an earlier paper, the third author considered another type of machine where registers were not reset at infinite lim infs, and he called them infinite time register machines. Because the resetting machines correspond much better to ITTMs, we hold that in future the resetting register machines should be called ITRMs.

40 citations


Journal ArticleDOI
TL;DR: It is shown that continuous timed Petri nets are able to simulate Turing machines and thus that basic properties become undecidable.
Abstract: State explosion is a fundamental problem in the analysis and synthesis of discrete event systems. Continuous Petri nets can be seen as a relaxation of the corresponding discrete model. The expected gains are twofold: improvements in complexity and in decidability. In the case of autonomous nets we prove that liveness or deadlock-freeness remain decidable and can be checked more efficiently than in Petri nets. Then we introduce time in the model which now behaves as a dynamical system driven by differential equations and we study it w.r.t. expressiveness and decidability issues. On the one hand, we prove that this model is equivalent to timed differential Petri nets which are a slight extension of systems driven by linear differential equations (LDE). On the other hand, (contrary to the systems driven by LDEs) we show that continuous timed Petri nets are able to simulate Turing machines and thus that basic properties become undecidable.
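
The flavour of "a dynamical system driven by differential equations" can be seen in a minimal continuous timed Petri net, integrated by the Euler method below. The two-place cycle, its rates, and the infinite-server-style firing law are assumptions of this sketch, not constructions from the paper.

```python
# A minimal continuous timed Petri net simulated by Euler integration,
# illustrating a net behaving as a dynamical system driven by differential
# equations. The two-transition cycle and its rates are a toy assumption;
# under infinite-server-style semantics each transition fires at a rate
# proportional to the marking of its input place.
def simulate(m1=1.0, m2=0.0, lam1=2.0, lam2=1.0, dt=1e-3, t_end=5.0):
    t = 0.0
    while t < t_end:
        f1 = lam1 * m1          # t1: consumes from p1, produces in p2
        f2 = lam2 * m2          # t2: consumes from p2, produces in p1
        m1 += dt * (f2 - f1)
        m2 += dt * (f1 - f2)
        t += dt
    return m1, m2

print(simulate())  # approaches the steady state where lam1*m1 = lam2*m2
```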

Journal ArticleDOI
TL;DR: In this article, it was shown that modelling an experimenter and an experimental procedure algorithmically imposes a limit on what can be measured using equipment, and that the results established here are representative of a huge class of experiments.
Abstract: We pose the following question: If a physical experiment were to be completely controlled by an algorithm, what effect would the algorithm have on the physical measurements made possible by the experiment? In a programme to study the nature of computation possible by physical systems, and by algorithms coupled with physical systems, we have begun to analyse: (i) the algorithmic nature of experimental procedures; and (ii) the idea of using a physical experiment as an oracle to Turing Machines. To answer the question, we will extend our theory of experimental oracles so that we can use Turing machines to model the experimental procedures that govern the conduct of physical experiments. First, we specify an experiment that measures mass via collisions in Newtonian dynamics and examine its properties in preparation for its use as an oracle. We begin the classification of the computational power of polynomial time Turing machines with this experimental oracle using non-uniform complexity classes. Second, we show that modelling an experimenter and experimental procedure algorithmically imposes a limit on what can be measured using equipment. Indeed, the theorems suggest a new form of uncertainty principle for our knowledge of physical quantities measured in simple physical experiments. We argue that the results established here are representative of a huge class of experiments.
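
The oracle protocol can be sketched as a bisection search in which each bit of the unknown mass costs one physical trial. The oracle interface below is an idealized assumption; the paper's point is precisely that the physical time per trial, absent from this sketch, limits what the experimenter can measure.

```python
# Sketch of the "experiment as oracle" protocol: the Turing machine learns
# an unknown mass bit by bit via bisection, each query standing for one
# physical collision trial. The oracle interface is an idealized assumption;
# in the paper, the physical time each trial takes is what bounds the
# measurable precision.
def collision_oracle(trial_mass, true_mass=0.637):
    """Idealized experiment: reports whether the test mass is too heavy."""
    return trial_mass > true_mass

def measure(bits):
    lo, hi = 0.0, 1.0
    answer = []
    for _ in range(bits):                  # one experiment per bit
        mid = (lo + hi) / 2
        if collision_oracle(mid):
            hi = mid; answer.append(0)
        else:
            lo = mid; answer.append(1)
    return answer, (lo + hi) / 2

bits, estimate = measure(10)
print(bits, estimate)   # ~0.637 recovered to 10 binary digits
```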

Journal ArticleDOI
TL;DR: The main idea of this paper is to show that the logical reasoning of computer programming students can be efficiently developed by using Turing machines, cellular automata (Wolfram rules) and fractal theory together, via Problem-Based Learning (PBL).
Abstract: It is common to start a course on computer programming logic by teaching the algorithm concept from the point of view of natural languages, but in a schematic way. We note that students then have difficulties in understanding and implementing the problems proposed by the teacher. The main idea of this paper is to show that the logical reasoning of computer programming students can be efficiently developed by using Turing machines, cellular automata (the Wolfram rules) and fractal theory together, via Problem-Based Learning (PBL). The results indicate that this approach is useful, but the teacher needs to introduce, in an interdisciplinary context, the basic theory of cellular automata and fractals before the problem implementation.
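
As one classroom-sized example of the kind the paper advocates, the following few lines connect two of its three themes: the rule-90 cellular automaton, whose space-time diagram is the Sierpinski fractal. The ASCII presentation and sizes are my own choices.

```python
# A classroom-sized exercise linking two of the paper's themes: the
# rule-90 elementary cellular automaton draws the Sierpinski fractal.
# The ASCII rendering and grid size are illustrative choices.
rows, width = 16, 33
cells = [0] * width
cells[width // 2] = 1                            # single seed cell
for _ in range(rows):
    print("".join("#" if c else "." for c in cells))
    cells = [cells[i - 1] ^ cells[(i + 1) % width]   # rule 90: left XOR right
             for i in range(width)]
```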

Journal ArticleDOI
TL;DR: It is shown that all non-negative real numbers are l^2-Betti numbers, and that "many" (for example, all non-negative algebraic) real numbers are l^2-Betti numbers of simply connected manifolds with respect to a free cocompact action.
Abstract: Main theorems of the article concern the problem of M. Atiyah on possible values of l^2-Betti numbers. It is shown that all non-negative real numbers are l^2-Betti numbers, and that "many" (for example all non-negative algebraic) real numbers are l^2-Betti numbers of simply connected manifolds with respect to a free cocompact action. Also an explicit example is constructed which leads to a simply connected manifold with a transcendental l^2-Betti number with respect to an action of the threefold direct product of the lamplighter group Z/2 wr Z. The main new idea is embedding Turing machines into integral group rings. The main tool developed generalizes known techniques of spectral computations for certain random walk operators to arbitrary operators in groupoid rings of discrete measured groupoids.

Journal ArticleDOI
TL;DR: It is proved that any Turing machine that uses only a finite computational space for every input cannot solve an uncomputable problem even when it runs in accelerated mode, and two ways to define the language accepted by an accelerated Turing machine are proposed.
Abstract: In this paper we prove that any Turing machine that uses only a finite computational space for every input cannot solve an uncomputable problem even when it runs in accelerated mode. We also propose two ways to define the language accepted by an accelerated Turing machine. Accordingly, the classes of languages accepted by accelerated Turing machines are the closure under Boolean operations of the sets Σ1 and Σ2.
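
As a reminder of what "accelerated mode" means (the standard acceleration scheme, not notation specific to this paper): if the n-th step is executed in time 2^{-n}, the entire infinite run fits into one unit of physical time,

\[ \sum_{n=1}^{\infty} 2^{-n} = 1 . \]

Intuitively, with only finitely many configurations available in finite space, the run must eventually cycle, which is why finite space caps the power of the machine regardless of acceleration.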

Journal ArticleDOI
TL;DR: Turing's analysis of computability has recently been challenged; it is claimed that it is circular to analyse the intuitive concept of numerical computability in terms of the Turing machine.

Journal ArticleDOI
01 Nov 2010-Ubiquity
TL;DR: The Turing model is a good abstraction for most digital computers because the number of steps to execute a Turing machine algorithm is predictive of the running time of the computation on a digital computer.
Abstract: Most people understand a computation as a process evoked when a computational agent acts on its inputs under the control of an algorithm. The classical Turing machine model has long served as the fundamental reference model because an appropriate Turing machine can simulate every other computational model known. The Turing model is a good abstraction for most digital computers because the number of steps to execute a Turing machine algorithm is predictive of the running time of the computation on a digital computer. However, the Turing model is not as well matched for the natural, interactive, and continuous information processes frequently encountered today. Other models whose structures more closely match the information processes involved give better predictions of running time and space. Models based on transforming representations may be useful.

Posted Content
TL;DR: In this article, a type system for an extension of lambda calculus with a conditional construction, named STAB, is presented, which characterizes the PSPACE class and allows one to program polynomial-time Alternating Turing Machines.
Abstract: We present a type system for an extension of lambda calculus with a conditional construction, named STAB, that characterizes the PSPACE class. This system is obtained by extending STA, a type assignment for lambda-calculus inspired by Lafont's Soft Linear Logic and characterizing the PTIME class. We extend STA by means of a ground type and terms for booleans and conditional. The key issue in the design of the type system is to manage the contexts in the rule for conditional in an additive way. Thanks to this rule, we are able to program polynomial time Alternating Turing Machines. From the well-known result APTIME = PSPACE, it follows that STAB is complete for PSPACE. Conversely, inspired by the simulation of Alternating Turing machines by means of deterministic Turing machines, we introduce a call-by-name evaluation machine with two memory devices in order to evaluate programs in polynomial space. As far as we know, this is the first characterization of PSPACE that is based on lambda calculus and light logics.
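
The classical fact the paper builds on, APTIME = PSPACE, can be seen in miniature in a recursive QBF evaluator: existential quantifiers mirror nondeterministic branching, universal quantifiers alternation, and the space used is only the linear recursion depth. This sketch (names mine) illustrates that fact, not the paper's call-by-name machine.

```python
# Not the paper's evaluation machine, but the classical fact it builds on
# (APTIME = PSPACE) in miniature: a recursive QBF evaluator. Existential
# quantifiers mirror nondeterministic states, universal ones alternation;
# recursion depth, hence space, is linear in the number of variables.
def eval_qbf(prefix, matrix, env=()):
    if not prefix:
        return matrix(dict(env))
    q, var = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, env + ((var, b),))
                for b in (False, True))
    return any(branches) if q == "E" else all(branches)

# Example: "for all x there exists y with x != y" is true.
print(eval_qbf([("A", "x"), ("E", "y")],
               lambda e: e["x"] != e["y"]))   # -> True
```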

Book
30 May 2010
TL;DR: The suggested axiomatic methodology is applied to the evaluation of the possibilities of computers and their networks, with the main emphasis on such properties as computability, decidability, and acceptability.
Abstract: We are living in a world where the complexity of systems created and studied by people grows beyond all imaginable limits. Computers, their software and their networks are among the most complicated systems of our time. Science is the only efficient tool for dealing with this overwhelming complexity. One of the methodologies developed in science is the axiomatic approach. It proved to be very powerful in mathematics. In this paper, we develop further an axiomatic approach in computer science initiated by Manna, Blum and other researchers. In the traditional constructive setting, different classes of algorithms (programs, processes or automata) are studied separately, with some indication of relations between these classes. Thus, the constructive approach gave birth to the theory of Turing machines, the theory of partial recursive functions, the theory of finite automata, and other theories of constructive models of algorithms. The axiomatic context allows one to study classes of classes of algorithms, automata, and processes. As a result, the axiomatic approach goes higher in the hierarchy of computer and network models, reducing in this way the complexity of their study. The suggested axiomatic methodology is applied to the evaluation of the possibilities of computers and their networks. People rely more and more on computers and other information processing systems. So, it is vital to know better than we do now what computers and other information processing systems can do and what they can't do. The main emphasis is on such properties as computability, decidability, and acceptability.

Journal ArticleDOI
TL;DR: In this article, the power and limitation of various advice types for weak computational models of one-tape linear-time Turing machines and one-way finite automata were discussed, and it was shown that certain weak machines can be significantly enhanced in computational power when randomized advice is provided in place of deterministic advice.
Abstract: We discuss the power and limitation of various "advice," when it is given particularly to weak computational models of one-tape linear-time Turing machines and one-way finite (state) automata. Of various advice types, we consider deterministically-chosen advice (not necessarily algorithmically determined) and randomly-chosen advice (according to certain probability distributions). In particular, we show that certain weak machines can be significantly enhanced in computational power when randomized advice is provided in place of deterministic advice.

Journal ArticleDOI
TL;DR: It is proved that quantum computers that are augmented with WOM can solve problems that neither a classical computer with WOM nor a quantum computer without WOM can solve, when all other resource bounds are equal.
Abstract: In classical computation, a "write-only memory" (WOM) is little more than an oxymoron, and the addition of WOM to a (deterministic or probabilistic) classical computer brings no advantage. We prove that quantum computers that are augmented with WOM can solve problems that neither a classical computer with WOM nor a quantum computer without WOM can solve, when all other resource bounds are equal. We focus on realtime quantum finite automata, and examine the increase in their power effected by the addition of WOMs with different access modes and capacities. Some problems that are unsolvable by two-way probabilistic Turing machines using sublogarithmic amounts of read/write memory are shown to be solvable by these enhanced automata.

Journal ArticleDOI
TL;DR: The theory proposes that measurability in Physics is subject to laws which are collateral effects of the limits of computability and computational complexity.
Abstract: Earlier, we have studied computations possible by physical systems and by algorithms combined with physical systems. In particular, we have analysed the idea of using an experiment as an oracle to an abstract computational device, such as the Turing machine. The theory of composite machines of this kind can be used to understand (a) a Turing machine receiving extra computational power from a physical process, or (b) an experimenter modelled as a Turing machine performing a test of a known physical theory T. Our earlier work was based upon experiments in Newtonian mechanics. Here we extend the scope of the theory of experimental oracles beyond Newtonian mechanics to electrical theory. First, we specify an experiment that measures resistance using a Wheatstone bridge and start to classify the computational power of this experimental oracle using non-uniform complexity classes. Secondly, we show that modelling an experimenter and experimental procedure algorithmically imposes a limit on our ability to measure resistance by the Wheatstone bridge. The connection between the algorithm and the physical test is mediated by a protocol controlling each query, especially the physical time taken by the experimenter. In our studies we find that physical experiments have an exponential-time protocol; this we formulate as a general conjecture. Our theory proposes that measurability in Physics is subject to laws which are collateral effects of the limits of computability and computational complexity.

Journal ArticleDOI
TL;DR: It is shown here that there is no standard spiking neural P system that simulates Turing machines with less than exponential time and space overheads, and a universal spiking neural P system with exhaustive use of rules is constructed that simulates Turing machines in linear time and has only 10 neurons.
Abstract: It is shown here that there is no standard spiking neural P system that simulates Turing machines with less than exponential time and space overheads. The spiking neural P systems considered here have a constant number of neurons that is independent of the input length. Following this, we construct a universal spiking neural P system with exhaustive use of rules that simulates Turing machines in linear time and has only 10 neurons.

Book ChapterDOI
01 Jan 2010
TL;DR: In this paper, a mathematical theory about using physical experiments as oracles to Turing machines was developed, where an experiment makes measurements according to a physical theory and the queries to the oracle allow the Turing machine to read the value being measured bit by bit.
Abstract: We have developed a mathematical theory about using physical experiments as oracles to Turing machines. We suppose that an experiment makes measurements according to a physical theory and that the queries to the oracle allow the Turing machine to read the value being measured bit by bit. Using this theory of physical oracles, an experimenter performing an experiment can be modelled as a Turing machine governing an oracle that is the experiment. We consider this computational model of physical measurement in terms of the theory of measurement of Hempel and Carnap (see Fundamentals of Concept Formation in Empirical Science, vol. 2, International Encyclopedia of Unified Science, University of Chicago Press, 1952; Philosophical Foundations of Physics, Basic Books, New York, 1966). We note that once a physical quantity is given a real value, Hempel’s axioms of measurement involve undecidabilities. To solve this problem, we introduce time into Hempel’s axiomatization. Focussing on a dynamical experiment for measuring mass, as in Beggs et al. (Proc R Soc Ser A 464(2098): 2777–2801, 2009; 465(2105): 1453–1465; Technical Report; accepted for presentation at the Studia Logica International Conference on Logic and the Foundations of Physics: Space, Time and Quanta (Trends in Logic VI), Brussels, Belgium, 11–12 December 2008; Bull Eur Assoc Theor Comput Sci 17: 137–151, 2009), we show that the computational model of measurement satisfies our generalization of Hempel’s axioms. Our analysis also explains undecidability in measurement and shows that quantities are not always measurable.

Book
01 Jan 2010
TL;DR: In this article, it was shown that the parameterized halting problem p-ACC≤ is not fixed-parameter tractable if "P ≠ NP holds for all time constructible and increasing functions," and that a slightly stronger hypothesis implies that the logic L≤ does not capture polynomial time.
Abstract: In [9] Yuri Gurevich addresses the question whether there is a logic that captures polynomial time. He conjectures that there is no such logic. He considers a logic, which we denote by L≤, that allows one to express precisely the polynomial-time properties of structures; however, apparently, there is no algorithm "that given an L≤-sentence ϕ produces a polynomial time Turing machine that recognizes the class of models of ϕ." In [12] Nash, Remmel, and Vianu have raised the question whether one can prove that there is no such algorithm. They give a reformulation of this question in terms of a parameterized halting problem p-ACC≤ for nondeterministic Turing machines. We analyze the precise relationship between L≤ and p-ACC≤. Moreover, we show that p-ACC≤ is not fixed-parameter tractable if "P ≠ NP holds for all time constructible and increasing functions." A slightly stronger complexity-theoretic hypothesis implies that L≤ does not capture polynomial time. Furthermore, we analyze the complexity of various variants of p-ACC≤ and address the construction problem associated with p-ACC≤.

Journal ArticleDOI
TL;DR: It is proved that AHNEPs with ten nodes can simulate any nondeterministic Turing machine of time complexity f(n) in time O(f(n)); this result significantly improves the best known upper bound for the number of nodes in a network simulating an arbitrary Turing machine in linear time.
Abstract: In this paper, we improve some results regarding the size complexity of accepting hybrid networks of evolutionary processors (AHNEPs). We show that there are universal AHNEPs of size 6, by devising a method for simulating 2-tag systems. This result improves the best upper bound for the size of universal AHNEPs, which was 7. We also propose a computationally and descriptionally efficient simulation of nondeterministic Turing machines with AHNEPs. More precisely, we prove that AHNEPs with ten nodes can simulate any nondeterministic Turing machine of time complexity f(n) in time O(f(n)). This result significantly improves the best known upper bound for the number of nodes in a network simulating an arbitrary Turing machine in linear time, namely 24.

Book ChapterDOI
01 Jan 2010
TL;DR: It is shown that there is an isomorphism-invariant polynomial-time computable function problem on finite vector spaces ("given a finite vector space V, output the set of hyperplanes in V") that is not computable by any CPT+C program.
Abstract: Many natural problems in computer science concern structures like graphs where elements are not inherently ordered. In contrast, Turing machines and other common models of computation operate on strings. While graphs may be encoded as strings (via an adjacency matrix), the encoding imposes a linear order on vertices. This enables a Turing machine operating on encodings of graphs to choose an arbitrary element from any nonempty set of vertices at low cost (the Augmenting Paths algorithm for BIPARTITE MATCHING being an example of the power of choice). However, the outcome of a computation is liable to depend on the external linear order (i.e., the choice of encoding). Moreover, isomorphism-invariance/encoding-independence is an undecidable property of Turing machines. This trouble with encodings led Blass, Gurevich and Shelah [3] to propose a model of computation known as BGS machines that operate directly on structures. BGS machines preserve symmetry at every step in a computation, sacrificing the ability to make arbitrary choices between indistinguishable elements of the input structure (hence "choiceless computation"). Blass et al. also introduced a complexity class CPT+C (Choiceless Polynomial Time with Counting) defined in terms of polynomially bounded BGS machines. While every property of finite structures in CPT+C is polynomial-time computable in the usual sense, it is open whether, conversely, every isomorphism-invariant property in P belongs to CPT+C. In this paper we give evidence that CPT+C ≠ P by proving the separation of the corresponding classes of function problems. Specifically, we show that there is an isomorphism-invariant polynomial-time computable function problem on finite vector spaces ("given a finite vector space V, output the set of hyperplanes in V") that is not computable by any CPT+C program. In addition, we give a new simplified proof of the Support Theorem, which is a key step in the result of [3] that a weak version of CPT+C without counting cannot decide the parity of sets.
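
The separating function problem is concrete enough to state in a few lines of brute force: given F_2^n, output the set of hyperplanes, each the kernel of a nonzero linear functional. The enumeration below merely illustrates the problem; it is order-dependent internally and is in no sense a CPT+C (choiceless) algorithm.

```python
# The separating function problem in miniature: given the vector space
# F_2^n, output the set of hyperplanes. Each hyperplane is the kernel of a
# nonzero linear functional x -> a.x (mod 2). This brute-force enumeration
# only illustrates the problem; it is not a choiceless (CPT+C) algorithm.
from itertools import product

def hyperplanes(n):
    vectors = list(product((0, 1), repeat=n))
    planes = set()
    for a in vectors[1:]:                     # each nonzero functional
        kernel = frozenset(v for v in vectors
                           if sum(ai * vi for ai, vi in zip(a, v)) % 2 == 0)
        planes.add(kernel)
    return planes

hs = hyperplanes(3)
print(len(hs))                        # 7 hyperplanes in F_2^3
print(all(len(h) == 4 for h in hs))   # each has 2^(n-1) = 4 points
```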

Posted Content
TL;DR: It turns out that the PALOMA model can assign unique consecutive ids to the agents and inform them of the population size, which allows us to give a direct simulation of a Deterministic Turing Machine of O(n log n) space, thus establishing that any symmetric predicate in SPACE(n log n) also belongs to PLM.
Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks. We call it the PALOMA model, standing for PAssively mobile LOgarithmic space MAchines. The main modification w.r.t. the Population Protocol model is that the agents now, instead of being automata, are Turing Machines whose memory is logarithmic in the population size n. Note that the new model is still easily implementable with current technology. We focus on complete communication graphs. We define the complexity class PLM, consisting of all symmetric predicates on input assignments that are stably computable by the PALOMA model. We assume that the agents are initially identical. Surprisingly, it turns out that the PALOMA model can assign unique consecutive ids to the agents and inform them of the population size! This allows us to give a direct simulation of a Deterministic Turing Machine of O(n log n) space, thus establishing that any symmetric predicate in SPACE(n log n) also belongs to PLM. We next prove that the PALOMA model can simulate the Community Protocol model, thus improving the previous lower bound to all symmetric predicates in NSPACE(n log n). Going one step further, we generalize the simulation of the deterministic TM to prove that the PALOMA model can simulate a Nondeterministic TM of O(n log n) space. Although providing the same lower bound, the important remark here is that the bound is now obtained in a direct manner, in the sense that it does not depend on the simulation of a TM by a Pointer Machine. Finally, by showing that a Nondeterministic TM of O(n log n) space decides any language stably computable by the PALOMA model, we end up with an exact characterization of PLM: it is precisely the class of all symmetric predicates in NSPACE(n log n).

Journal ArticleDOI
TL;DR: A survey is provided of the technique that allows giving a simple proof that all Turing machines with two letters and two states have a decidable halting problem.
Abstract: In this paper we provide a survey of the technique that allows giving a simple proof that all Turing machines with two letters and two states have a decidable halting problem. The result was proved by L. Pavlotskaya in 1973.
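
The machines in question are tiny: with two states and two letters a transition table has at most four entries. The simulator below (representation mine) can only run a machine for a step budget; the content of Pavlotskaya's result is that halting for this entire class is decidable by case analysis, not by running.

```python
# The machines in question are tiny: 2 states x 2 tape letters, so a
# transition table has at most 4 entries. This simulator (representation
# mine) only runs a machine within a step budget; Pavlotskaya's result is
# that halting for this whole class is decidable by analysis, not by running.
from collections import defaultdict

def run(delta, budget=1000):
    """delta: (state, symbol) -> (write, move, next_state); missing = halt."""
    tape = defaultdict(int)                # blank tape of 0s
    state, pos = 0, 0
    for t in range(budget):
        action = delta.get((state, tape[pos]))
        if action is None:
            return t                       # halted after t steps
        write, move, state = action
        tape[pos] = write
        pos += move
    return None                            # no halt within budget

# Example machine: writes 1s while shuttling, halts on reading a 1 in state 0.
delta = {(0, 0): (1, +1, 1),
         (1, 0): (1, -1, 0)}               # (0,1) and (1,1) undefined -> halt
print(run(delta))                          # -> 2
```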

Book ChapterDOI
06 Jul 2010
TL;DR: In this article, it was shown that TAUT has a p-optimal proof system if and only if a logic related to least fixed-point logic captures polynomial time on all finite structures.
Abstract: We prove that TAUT has a p-optimal proof system if and only if a logic related to least fixed-point logic captures polynomial time on all finite structures. Furthermore, we show that TAUT has no effective p-optimal proof system if NTIME(h^O(1)) ⊈ DTIME(h^O(log h)) for every time constructible and increasing function h.