
Showing papers on "Turing machine published in 1997"


Journal ArticleDOI
TL;DR: This paper gives the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church--Turing thesis, and proves that $O(\log T)$ bits of precision suffice to support a $T$-step computation.
Abstract: In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch's model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97--117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that $O(\log T)$ bits of precision suffice to support a $T$-step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church--Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus does not lie in the class BPP relative to that oracle. The class BQP of languages that are efficiently decidable (with small error probability) on a quantum Turing machine satisfies $\mathrm{BPP} \subseteq \mathrm{BQP} \subseteq \mathrm{P}^{\#\mathrm{P}}$. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.
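To see why logarithmic precision suffices, recall that errors in (nearly) unitary steps accumulate at most additively; the following one-line bound is a standard sketch of this argument, not the paper's exact proof. If each ideal step $U_t$ is replaced by an approximation $\tilde{U}_t$ with $\|\tilde{U}_t - U_t\| \le \varepsilon$, then after $T$ steps

\[
\bigl\| \tilde{U}_T \cdots \tilde{U}_1 |\psi\rangle - U_T \cdots U_1 |\psi\rangle \bigr\| \;\le\; T\varepsilon,
\]

so specifying amplitudes to $b = O(\log T)$ bits (i.e., $\varepsilon = 2^{-b}$) keeps the total deviation below any fixed constant.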

1,706 citations


Journal ArticleDOI
01 Apr 1997
TL;DR: It is constructively proved that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks, and thus as Turing machines; this raises the issue of how much feedback or recurrence is necessary for a network to be Turing equivalent and what restrictions on feedback limit computational power.
Abstract: Recently, fully connected recurrent neural networks have been proven to be computationally rich: at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models) and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by $y(t) = \Psi(u(t-n_u), \ldots, u(t-1), u(t), y(t-n_y), \ldots, y(t-1))$, where $u(t)$ and $y(t)$ represent the input and output of the network at time $t$, $n_u$ and $n_y$ are the input and output orders, and the function $\Psi$ is the mapping performed by a multilayer perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power.
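As a concrete illustration of the model (a minimal sketch of ours, not the authors' code; the MLP weights, the orders $n_u$, $n_y$, and the input sequence are made up), a NARX network computes its next output from a sliding window of past inputs and outputs passed through a multilayer perceptron:

```python
import numpy as np

def narx_step(psi, u_window, y_window):
    """One NARX update: y(t) = Psi(u(t-n_u..t), y(t-n_y..t-1))."""
    return psi(np.concatenate([u_window, y_window]))

# Hypothetical one-hidden-layer MLP standing in for Psi.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 5)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)
psi = lambda x: (W2 @ np.tanh(W1 @ x + b1) + b2)[0]

# Run the recurrence. Feedback comes only from past outputs, never from
# hidden state -- the defining restriction of NARX networks.
n_u, n_y = 2, 2                  # input and output orders (n_u+1+n_y = 5)
u = rng.normal(size=20)          # an arbitrary input sequence
y = np.zeros(len(u))
for t in range(max(n_u, n_y), len(u)):
    y[t] = narx_step(psi, u[t - n_u : t + 1], y[t - n_y : t])
print(np.round(y, 3))
```

The Turing-equivalence result says that, despite this output-only feedback, a suitable $\Psi$ and enough taps let such a loop simulate any fully connected recurrent network.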

462 citations


Journal ArticleDOI
01 Oct 1997
TL;DR: It is shown that when quantum Turing machines are restricted to have transition amplitudes which are algebraic numbers, BQP, EQP, and nondeterministic quantum polynomial time (NQP) are all contained in PP, hence in ${\rm P}^{\#{\rm P}}$ and PSPACE.
Abstract: In this paper some theoretical and (potentially) practical aspects of quantum computing are considered. Using the tools of transcendental number theory it is demonstrated that quantum Turing machines (QTM) with rational amplitudes are sufficient to define the class of bounded error quantum polynomial time (BQP) introduced by Bernstein and Vazirani [Proc. 25th ACM Symposium on Theory of Computation, 1993, pp. 11--20, SIAM J. Comput., 26 (1997), pp. 1277--1339]. On the other hand, if quantum Turing machines are allowed unrestricted amplitudes (i.e., arbitrary complex amplitudes), then the corresponding BQP class has uncountable cardinality and contains sets of all Turing degrees. In contrast, allowing unrestricted amplitudes does not increase the power of computation for error-free quantum polynomial time (EQP). Moreover, with unrestricted amplitudes, BQP is not equal to EQP. The relationship between quantum complexity classes and classical complexity classes is also investigated. It is shown that when quantum Turing machines are restricted to have transition amplitudes which are algebraic numbers, BQP, EQP, and nondeterministic quantum polynomial time (NQP) are all contained in PP, hence in ${\rm P}^{\#{\rm P}}$ and PSPACE. A potentially practical issue of designing "machine independent" quantum programs is also addressed. A single ("almost universal") quantum algorithm based on Shor's method for factoring integers is developed which would run correctly on almost all quantum computers, even if the underlying unitary transformations are unknown to the programmer and the device builder.

309 citations


Journal ArticleDOI
TL;DR: The most distinctive features of human cognition – language and culture – may themselves be viewed as adaptations enabling this representation/computation trade-off to be pursued on an even grander scale.
Abstract: Some regularities enjoy only an attenuated existence in a body of training data. These are regularities whose statistical visibility depends on some systematic recoding of the data. The space of possible recodings is, however, infinitely large - it is the space of applicable Turing machines. As a result, mappings that pivot on such attenuated regularities cannot, in general, be found by brute-force search. The class of problems that present such mappings we call the class of "type-2 problems." Type-1 problems, by contrast, present tractable problems of search insofar as the relevant regularities can be found by sampling the input data as originally coded. Type-2 problems, we suggest, present neither rare nor pathological cases. They are rife in biologically realistic settings and in domains ranging from simple animat (simulated animal or autonomous robot) behaviors to language acquisition. Not only are such problems rife - they are standardly solved! This presents a puzzle. How, given the statistical intractability of these type-2 cases, does nature turn the trick? One answer, which we do not pursue, is to suppose that evolution gifts us with exactly the right set of recoding biases so as to reduce specific type-2 problems to (tractable) type-1 mappings. Such a heavy-duty nativism is no doubt sometimes plausible. But we believe there are other, more general mechanisms also at work. Such mechanisms provide general (not task-specific) strategies for managing problems of type-2 complexity. Several such mechanisms are investigated. At the heart of each is a fundamental ploy - namely, the maximal exploitation of states of representation already achieved by prior, simpler (type-1) learning so as to reduce the amount of subsequent computational search. Such exploitation both characterizes and helps make unitary sense of a diverse range of mechanisms. These include simple incremental learning (Elman 1993), modular connectionism (Jacobs et al. 1991), and the developmental hypothesis of "representational redescription" (Karmiloff-Smith 1979; 1992). In addition, the most distinctive features of human cognition - language and culture - may themselves be viewed as adaptations enabling this representation/computation trade-off to be pursued on an even grander scale.

257 citations


Journal ArticleDOI
TL;DR: In this article, the Choiceless fragment of polynomial time (PTime) is represented by a version of abstract state machines (i.e., evolving algebras), and the idea is to replace arbitrary choice with parallel execution.

129 citations


Journal ArticleDOI
24 Jun 1997
TL;DR: This paper describes the simulation of an S(n) space-bounded deterministic Turing machine by a reversible Turing machine operating in space S(n), and refutes the conjecture, made by M. Li and P. Vitanyi (1996), that any reversible simulation of an irreversible computation must obey Bennett's reversible pebble game rules.
Abstract: This paper describes the simulation of an S(n) space-bounded deterministic Turing machine by a reversible Turing machine operating in space S(n). It thus answers a question posed by C. Bennett (1989) and refutes the conjecture, made by M. Li and P. Vitanyi (1996), that any reversible simulation of an irreversible computation must obey Bennett's reversible pebble game rules.
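For context, a gloss on the background results as we understand them (not text from the paper): Bennett's pebble-game simulation trades space for time, whereas the simulation described here keeps space linear in $S(n)$ at an exponential cost in time,

\[
\text{Bennett (1989):}\ \ \mathrm{time}\ O(T^{1+\epsilon}),\ \mathrm{space}\ O(S \log(T/S))
\qquad\text{versus}\qquad
\text{this paper:}\ \ \mathrm{space}\ O(S),\ \mathrm{time}\ 2^{O(S)},
\]

which is why the pebble-game rules, and the space overhead they impose, can be bypassed entirely.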

118 citations


Journal ArticleDOI
TL;DR: It is shown that Short TM Computation is complete for $W[1]$.
Abstract: A completeness theory for parameterized computational complexity has been studied in a series of recent papers, and has been shown to have many applications in diverse problem domains including familiar graph-theoretic problems, VLSI layout, games, computational biology, cryptography, and computational learning [ADF,BDHW,BFH, DEF,DF1-7,FHW,FK]. We here study the parameterized complexity of two kinds of problems: (1) problems concerning parameterized computations of Turing machines, such as determining whether a nondeterministic machine can reach an accept state in $k$ steps (the Short TM Computation Problem), and (2) problems concerning derivations and factorizations, such as determining whether a word $x$ can be derived in a grammar $G$ in $k$ steps, or whether a permutation has a factorization of length $k$ over a given set of generators. We show hardness and completeness for these problems for various levels of the $W$ hierarchy. In particular, we show that Short TM Computation is complete for $W[1]$ . This gives a new and useful characterization of the most important of the apparently intractable parameterized complexity classes.
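For intuition (our gloss, not from the paper): the obvious algorithm for Short TM Computation explores the nondeterministic computation tree to depth $k$, so with branching degree $b \le n$ it runs in time

\[
O\bigl(b^{k} \cdot \mathrm{poly}(n)\bigr) = n^{O(k)},
\]

which is polynomial for each fixed $k$ but not of the fixed-parameter tractable form $f(k) \cdot n^{O(1)}$; $W[1]$-completeness is strong evidence that no algorithm of the latter form exists.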

99 citations


Journal ArticleDOI
TL;DR: In both classes it is shown that a bijective expansive TMT is conjugate to a subshift of finite type and that the topological entropy of every TMH is zero; it is also conjectured that every TMT has a periodic point.

65 citations


Journal ArticleDOI
TL;DR: Systematic techniques are developed to construct natural complete languages for the classes defined by the "guess-then-check" model GC; this improves a number of previous results in the study of limited nondeterminism.
Abstract: The relationship between nondeterminism and other computational resources is investigated based on the "guess-then-check" model GC. Systematic techniques are developed to construct natural complete languages for the classes defined by this model. This improves a number of previous results in the study of limited nondeterminism. Connections of the model GC to computational optimization problems are exhibited.
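The abstract leaves the model implicit; in the standard formulation, as we recall it (so treat the details as an assumption rather than a quotation), a language $L$ belongs to $\mathrm{GC}(s(n), \mathcal{C})$ if membership can be certified by a guess of $s(n)$ bits checked within $\mathcal{C}$:

\[
x \in L \iff \exists y,\ |y| \le s(|x|),\ (x, y) \in B \qquad \text{for some fixed checking language } B \in \mathcal{C}.
\]

Varying $s(n)$ between $O(\log n)$ and $\mathrm{poly}(n)$ then roughly interpolates between $\mathcal{C}$ itself and full NP-style nondeterminism.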

64 citations


Journal ArticleDOI
TL;DR: A myth has arisen concerning Turing's article of 1936, namely that Turing set forth a fundamental principle concerning the limits of what can be computed by machine; this supposed principle is sometimes incorrectly termed the Church-Turing thesis.
Abstract: A myth has arisen concerning Turing's article of 1936, namely that Turing set forth a fundamental principle concerning the limits of what can be computed by machine: a myth that has passed into cognitive science.

60 citations



Book ChapterDOI
22 Nov 1997
TL;DR: The paper gives a short introduction to basic concepts of TTE (Type 2 Theory of Effectivity), shows its general applicability by some selected examples, and discusses the problem of zero-finding.
Abstract: While for countable sets there is a single well established computability theory (ordinary recursion theory), Computable Analysis is still underdeveloped. Several mutually non-equivalent theories have been proposed for it, none of which, however, has been accepted by the majority of mathematicians or computer scientists. In this contribution one of these theories, TTE (Type 2 Theory of Effectivity), is presented, which at least in the author's opinion has important advantages over the others. TTE intends to characterize and study exactly those functions, operators, etc. known from Analysis which can be realized correctly by digital computers. The paper gives a short introduction to basic concepts of TTE and shows its general applicability by some selected examples. First, Turing computability is generalized from finite to infinite sequences of symbols. Assuming that digital computers can handle (w.l.o.g.) only sequences of symbols, infinite sequences of symbols are used as names for “infinite objects” such as real numbers, open sets, compact sets or continuous functions. Naming systems are called representations. Since only very few representations are of interest in applications, a very fundamental principle for defining effective representations for $T_0$-spaces with countable bases is introduced. The concepts are applied to real numbers, compact sets, continuous functions and measures. The problem of zero-finding is considered. Computational complexity is discussed. We conclude with some remarks on other models for Computable Analysis. The paper is a shortened and revised version of [Wei97].
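To make computing on names concrete, here is a minimal sketch (ours, not the paper's) in which a real number is named by a function returning a rational approximation to within $2^{-n}$, and addition is realized as a computable operation on names:

```python
import math
from fractions import Fraction

# A "name" of a real x is a function n -> Fraction q with |q - x| <= 2**-n
# (a Cauchy-style representation, one of several used in computable analysis).

def real_from_fraction(q):
    """Name of a rational number: every approximation is exact."""
    return lambda n: Fraction(q)

def add(x_name, y_name):
    """Realizer for addition: query each argument one bit more precisely,
    so the two errors sum to at most 2**-n."""
    return lambda n: x_name(n + 1) + y_name(n + 1)

def sqrt2_name(n):
    """Name of sqrt(2): a dyadic approximation with error < 2**-n."""
    k = 2 ** (n + 1)
    return Fraction(math.isqrt(2 * k * k), k)   # floor(sqrt(2)*k) / k

x = add(sqrt2_name, real_from_fraction(Fraction(1, 3)))
print(float(x(20)))   # ~1.7475469, within 2**-20 of sqrt(2) + 1/3
```

A machine that only ever inspects finitely many digits of a name can realize only continuous operations, which is the kind of phenomenon TTE studies via representations.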

Proceedings ArticleDOI
01 Mar 1997
TL;DR: A collection of new and enhanced tools for experimenting with concepts in formal languages and automata theory, written in Java, includes JFLAP for creating and simulating finite automata, pushdown automata and Turing machines, and PumpLemma for proving specific languages are not regular.
Abstract: We present a collection of new and enhanced tools for experimenting with concepts in formal languages and automata theory. New tools, written in Java, include JFLAP for creating and simulating finite automata, pushdown automata and Turing machines; Pâté for parsing restricted and unrestricted grammars and transforming context-free grammars to Chomsky Normal Form; and PumpLemma for proving specific languages are not regular. Enhancements to previous tools LLparse and LRparse, instructional tools for parsing LL(1) and LR(1) grammars, include parsing LL(2) grammars, displaying parse trees, and parsing any context-free grammar with conflict resolution.

Journal ArticleDOI
TL;DR: The main result is that the class of binary sets that can be decided by real Turing machines in parallel polynomial time is exactly the class PSPACE/poly.
Abstract: In this paper, we study the computational power of real Turing machines over binary inputs. Our main result is that the class of binary sets that can be decided by real Turing machines in parallel polynomial time is exactly the class PSPACE/poly.

Journal ArticleDOI
TL;DR: It is believed that computer architectures inspired by molecular biology will allow the development of new FPGAs endowed with quasi-biological properties extremely useful in environments where human intervention is necessarily limited.

Journal ArticleDOI
TL;DR: It is shown that Elman-style networks can simulate any frontier-to-root tree automation, while neither cascade-correlation networks nor neural trees can, and it is obtained that neural trees for sequences cannot simulate any finite state machine.

Proceedings ArticleDOI
13 Apr 1997
TL;DR: It is shown that there effectively exists a universal circular H system which can simulate any circular H system with the same terminal alphabet, strongly suggesting a feasible design for a DNA computer based on circular splicing.
Abstract: From a biological motivation of the interactions between linear and circular DNA sequences, we propose a new type of splicing model called "circular H systems" and show that they have the same computational power as Turing machines. It is also shown that there effectively exists a universal circular H system which can simulate any circular H system with the same terminal alphabet, which strongly suggests a feasible design for a DNA computer based on circular splicing.
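For readers unfamiliar with splicing, here is a minimal sketch of the basic linear splicing operation that H systems iterate; the rule format follows the usual quadruple convention, and the example strings are our own:

```python
def splice(x, y, rule):
    """Apply splicing rule (u1, u2, u3, u4) to strings x and y.

    If x = p + u1 + u2 + q and y = s + u3 + u4 + t, the rule yields
    p + u1 + u4 + t: two strands cut at matching sites and recombined.
    Returns the results over all possible cut positions.
    """
    u1, u2, u3, u4 = rule
    out = []
    for i in range(len(x) + 1):
        if x[:i].endswith(u1) and x[i:].startswith(u2):
            for j in range(len(y) + 1):
                if y[:j].endswith(u3) and y[j:].startswith(u4):
                    out.append(x[:i] + y[j:])
    return out

# Cut x after "ga" (when followed by "tc"), cut y before "tc" (when
# preceded by "ca"), and join the left part of x to the right part of y.
print(splice("aagatcc", "ttcatcg", ("ga", "tc", "ca", "tc")))
# -> ['aagatcg']
```

Circular H systems extend this operation to circular strings, which is what lets interactions between linear and circular DNA sequences be modelled.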

Journal ArticleDOI
TL;DR: A weak version of the Blum-Shub-Smale model of computation over the real numbers is considered, in which only a "moderate" usage of multiplications and divisions is allowed; the class of boolean languages recognizable in polynomial time is shown to be the complexity class P/poly.

Book
01 Jan 1997
TL;DR: Andrew Hodges gives a fresh and interesting analysis of Turing's developing thought, relating it to his extraordinary life and to the principle of the post-war electronic computer.
Abstract: Alan Turing's 1936 paper ON COMPUTABLE NUMBERS, introducing the Turing machine, was a landmark of twentieth century thought. It provided the principle of the post-war electronic computer. Influenced by his crucial codebreaking work in the Second World War, Turing argued that all the operations of the mind could be performed by computers. His thesis, made famous by the wit and drama of the Turing Test, is the cornerstone of modern Artificial Intelligence. Andrew Hodges gives a fresh and interesting analysis of Turing's developing thought, relating it to his extraordinary life.

Journal ArticleDOI
TL;DR: In this paper, the authors use Kleene closure of languages and inversion of formal power series to investigate subclasses of the complexity class GapL. In particular, they show that both operations are hard: Kleene closure can be NL-complete and inversion GapL-complete even for a set in AC^0.
Abstract: In this paper we show how two fundamental operations used in formal language theory provide useful tools for the investigation of arithmetic complexity classes. More precisely, we use Kleene closure of languages and inversion of formal power series to investigate subclasses of the complexity class GapL. (GapL is the complexity class that characterizes the complexity of computing the determinant; it corresponds to the difference between the numbers of accepting and rejecting paths of nondeterministic logspace-bounded Turing machines.) We define a counting version of Kleene closure and show that it is intimately related to inversion within the complexity classes GapL and GapNC^1. In particular, we prove that Kleene closure and inversion are both hard operations in the following sense: there is a set in AC^0 for which Kleene closure is NL-complete and inversion is GapL-complete; there is a finite set for which Kleene closure is NC^1-complete and inversion is GapNC^1-complete. Furthermore, we classify the complexity of the Kleene closure of finite languages. We formulate the problem in terms of finite monoids and relate its complexity to the internal structure of the monoid.
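The inversion operation in question can be made concrete: if $f$ is a formal power series with constant term $f_0 = 1$, comparing coefficients of $z^n$ in $f(z)\,g(z) = 1$ gives the coefficients of $g = 1/f$ by the standard recurrence

\[
g_0 = 1, \qquad g_n = -\sum_{k=1}^{n} f_k \, g_{n-k} \quad (n \ge 1).
\]

The signed sums this recurrence unwinds into are one informal way to see why classes closed under subtraction, such as GapL, are a natural home for inversion (our gloss, not the paper's argument).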


Journal ArticleDOI
TL;DR: It is shown that with respect to a certain class of norms the so-called shortest lattice vector problem is polynomial-time Turing (Cook) reducible to the nearest lattice vector problem.

Posted Content
Philip Maymin
TL;DR: The lambda-q calculus may be strictly stronger than quantum computers because NP-complete problems such as satisfiability are efficiently solvable in the lambda-q calculus, but there is widespread doubt that they are efficiently solvable by quantum computers.
Abstract: We show that the lambda-q calculus can efficiently simulate quantum Turing machines by showing how the lambda-q calculus can efficiently simulate a class of quantum cellular automaton that are equivalent to quantum Turing machines. We conclude by noting that the lambda-q calculus may be strictly stronger than quantum computers because NP-complete problems such as satisfiability are efficiently solvable in the lambda-q calculus but there is a widespread doubt that they are efficiently solvable by quantum computers.

Journal Article
TL;DR: It is shown that the obstruction set for an ideal in the minor order cannot be computed from a description of the ideal in monadic second-order logic.
Abstract: The major results of Robertson and Seymour on graph well-quasi-ordering establish nonconstructively that many natural graph properties that constitute ideals in the minor or immersion orders are characterized by a finite set of forbidden substructures termed the obstructions for the property. This raises the question of what general kinds of information about an ideal are sufficient, or insufficient, to allow the obstruction set for the ideal to be effectively computed. It has been previously shown that it is not possible to compute the obstruction set for an ideal from a description of a Turing machine that recognizes the ideal. This result is significantly strengthened in the case of the minor ordering. It is shown that the obstruction set for an ideal in the minor order cannot be computed from a description of the ideal in monadic second-order logic.

Proceedings Article
17 Jun 1997
TL;DR: It is proved that, for decision problems, the order of oracle queries does not matter, improving upon a previous result of Hemaspaandra, Hemaspaandra, and Hempel; for computing functions, however, the order of queries does matter unless PH collapses.
Abstract: We consider polynomial-time Turing machines that have access to two oracles and investigate when the order of oracle queries is significant. The oracles used here are complete languages for the Polynomial Hierarchy (PH). We prove that, for solving decision problems, the order of oracle queries does not matter. This improves upon the previous result of Hemaspaandra, Hemaspaandra and Hempel, who showed that the order of the queries does not matter if the base machine asks only one query to each oracle. On the other hand, we prove that, for computing functions, the order of oracle queries does matter unless PH collapses.

Journal Article
TL;DR: The main result shows that nondeterminism can be more powerful than randomness for read-once branching programs; it is also shown that there is no "probability amplification" technique for read-once branching programs that allows the error to be decreased to an arbitrarily small constant by iterating probabilistic computations.
Abstract: Randomized branching programs are a probabilistic model of computation defined in analogy to the well-known probabilistic Turing machines. In this paper, we present complexity theoretic results for randomized read-once branching programs. Our main result shows that nondeterminism can be more powerful than randomness for read-once branching programs. We present a function which is computable by nondeterministic read-once branching programs of polynomial size, while on the other hand randomized read-once branching programs for this function with two-sided error at most $21/256$ have exponential size. The same function exhibits an exponential gap between the randomized read-once branching program sizes for different constant worst-case errors, which shows that there is no "probability amplification" technique for read-once branching programs that allows the error to be decreased to an arbitrarily small constant by iterating probabilistic computations.

Journal ArticleDOI
TL;DR: A Turing machine referred to as a Psychrometric Turing Machine (PTM) is constructed to solve all possible psychrometric problems; it selects the optimal equation order based upon a user-specified optimality criterion of CPU cycles.
Abstract: A technique for selecting psychrometric equations and their solution order is presented. The solution order for a given psychrometric problem is not always readily identifiable. Furthermore, because the psychrometric equations can be solved in many different sequences, the solution process can become convoluted. For example, if atmospheric pressure, dry-bulb temperature and relative humidity are known and it is desired to determine the other 12 psychrometric attributes, then there are approximately 37,780 different orders in which to solve the equations to determine the other parameters. The task of identifying these many possible combinations of equations, and selecting an appropriate one, is called a decision problem in computation theory. One technique for solving decision problems is a Turing machine computational model. We have constructed a Turing machine, which we refer to as a Psychrometric Turing Machine (PTM), to solve all possible psychrometric problems. The PTM selects the optimal equation order based upon a user-specified optimality criterion of CPU cycles. A solution is comprised of a series of functions based on equations found in the 1993 ASHRAE Handbook—Fundamentals. The PTM is shown to be a practical application to a non-deterministic, multiple-path problem. It required 700 ms on an engineering workstation (100 MHz Sparc 10) to search all possible combinations and determine the optimal solution route for the most complicated "two-to-all" psychrometric problem. For a particular psychrometric problem, once the equation order is found, these equations can be used to determine the unknown attributes from the known attributes in a consistent manner that is in some sense optimal. We demonstrate the application of the PTM with several examples: a psychrometric calculator, a source code generator, and a listing of the optimal function call sequence for most "two-to-all" psychrometric problems encountered.
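The PTM's search can be pictured as a shortest-path problem over sets of known attributes, with each equation an edge weighted by its CPU cost. The following toy sketch (our illustration; the attribute names, equation set, and costs are made up, not the paper's) finds a minimal-cost solution order with Dijkstra's algorithm:

```python
import heapq

# Hypothetical relations: (known inputs, derived output, cpu_cost).
EQUATIONS = [
    ({"P", "Tdb", "RH"}, "Pw", 5),   # vapour pressure
    ({"P", "Pw"},        "W",  3),   # humidity ratio
    ({"Tdb", "W"},       "h",  4),   # enthalpy
    ({"P", "Tdb", "W"},  "v",  6),   # specific volume
]

def solve_order(known, wanted):
    """Dijkstra over frozensets of known attributes; applying one
    equation is one edge. Returns (total_cost, equation order) or None."""
    start = frozenset(known)
    best = {start: 0}
    pq = [(0, 0, start, [])]          # (cost, tiebreak, knowns, order)
    tiebreak = 1
    while pq:
        cost, _, have, order = heapq.heappop(pq)
        if wanted <= have:
            return cost, order
        for ins, out, c in EQUATIONS:
            if ins <= have and out not in have:
                nxt = have | {out}
                if cost + c < best.get(nxt, float("inf")):
                    best[nxt] = cost + c
                    heapq.heappush(pq, (cost + c, tiebreak, nxt, order + [out]))
                    tiebreak += 1
    return None

print(solve_order({"P", "Tdb", "RH"}, {"h", "v"}))
# -> (18, ['Pw', 'W', 'h', 'v'])
```

With a dozen or more attributes and many more equations, the blow-up of this state space is what produces the tens of thousands of candidate orders the paper enumerates.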

Journal ArticleDOI
TL;DR: It is shown that any computable problem can be realized in a self-stabilizing fashion and the total amount of memory required by the distributed system is equal to the memory used by the Turing machine.

Journal ArticleDOI
TL;DR: The laterality problem, raised in the early seventies (see [9]) and previously solved on the {0,1} alphabet without restriction, is now completely solved in the non-erasing case.
Abstract: In a previous work, [2], we defined a criterion which allowed us to separate cases in which all non-erasing Turing machines on {0,1} have a decidable halting problem from cases in which a universal non-erasing machine can be constructed. Applying a theorem which entails the just-indicated frontier, together with analogous techniques based upon a qualitative study of the motions of the head of a Turing machine on its tape, another frontier result is here proved, based upon a new criterion, namely the number of left instructions. In this paper, a complete proof of the decidability part of the results is supplied. The case of a single left instruction with a finite alphabet in a generalized non-erasing context is also dealt with. Thus, the laterality problem, raised in the early seventies (see [9]) and solved on the {0,1} alphabet without restriction, is now completely solved in the non-erasing case.

Book ChapterDOI
23 Aug 1997
TL;DR: It is shown that there is a unique maximal class Cmax of type 2 functionals satisfying a strengthened version of Cook's conditions, and that if the input functions are restricted to range over the real numbers in the sense of Ko and Friedman, then BFF and Cmax coincide.
Abstract: Cook has proposed three necessary conditions that a class C of type 2 functionals must satisfy in order to be the intuitively correct class of polynomial time computable functionals. We consider a strengthening of Cook's conditions, by replacing the notion of closure under functional substitution with the notion of uniform closure, introduced by Seth, and we show that there is a unique maximal class Cmax of type 2 functionals that satisfies this stronger version of Cook's conditions. We give an Oracle Turing Machine characterization of Cmax. We show Cmax to be different from Cook's and Kapron's BFF and Seth's C2. We also show that when the input functions are restricted to range over PTIME, BFF and Cmax still differ. However, if the input functions are restricted to range over the real numbers in the sense of Ko and Friedman, then BFF and Cmax coincide. Work by Seth suggests that Cook's conditions might not be sufficient, and Seth has proposed to add a new condition. We give a Turing Machine characterization of a class CS that is bigger than BFF and satisfies both Cook's and Seth's conditions. We propose a new condition, in our opinion more natural than Seth's, and show that only BFF satisfies this new condition together with Cook's.