Topic

Turing machine

About: Turing machine is a research topic. Over the lifetime, 5,017 publications have been published within this topic, receiving 125,257 citations. The topic is also known as: deterministic Turing machine.


Papers
Journal Article
TL;DR: Four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols—presumably containing all of the information to be used in the induction.
Abstract: In Part I, four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols, presumably containing all of the information to be used in the induction. Almost all, if not all, problems in induction can be put in this form. Some strong heuristic arguments have been obtained for the equivalence of the last three models. One of these models is equivalent to a Bayes formulation, in which a priori probabilities are assigned to sequences of symbols on the basis of the lengths of the inputs to a universal Turing machine that are required to produce the sequence of interest as output. Though it seems likely, it is not certain whether the first of the four models is equivalent to the other three. Few rigorous results are presented. Informal investigations are made of the properties of these models. There are discussions of their consistency and meaningfulness, of their degree of independence of the exact nature of the Turing machine used, and of the accuracy of their predictions in comparison to those of other induction methods. In Part II these models are applied to the solution of three problems: prediction of the Bernoulli sequence, extrapolation of a certain kind of Markov chain, and the use of phrase structure grammars for induction. Though some approximations are used, the first of these problems is treated most rigorously; the result is Laplace's rule of succession. The solution to the second problem uses less certain approximations, but the properties of the solution that are discussed are fairly independent of these approximations. The third application, using phrase structure grammars, is the least exact of the three. First a formal solution is presented; though it appears to have certain deficiencies, it is hoped that presentation of this admittedly inadequate model will suggest acceptable improvements to it. This formal solution is then applied in an approximate way to the determination of the "optimum" phrase structure grammar for a given set of strings. The results that are obtained are plausible, but subject to the uncertainties of the approximation used.
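In modern terms, the Bayes formulation described above assigns each string a prior weight that decays exponentially with the lengths of the programs producing it; this is now known as the Solomonoff prior. A minimal sketch of the definition (the symbols $M$, $U$, and $p$ are our notation, not the paper's):

```latex
% Algorithmic prior over strings x: sum over every program p on which
% the universal Turing machine U outputs x, weighting each program by
% 2^{-|p|}, so that shorter explanations dominate the prior.
M(x) = \sum_{p \;:\; U(p) = x} 2^{-|p|}
```

Prediction then proceeds by conditioning: the probability that a sequence $x$ continues with symbol $a$ is $M(xa)/M(x)$.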

1,927 citations

Journal Article
TL;DR: This paper shows how to do an on-line simulation of an arbitrary RAM by a probabilistic oblivious RAM with a polylogarithmic slowdown in the running time, and shows that a logarithmic slowdown is a lower bound.
Abstract: Software protection is one of the most important issues concerning computer practice. There exist many heuristics and ad hoc methods for protection, but the problem as a whole has not received the theoretical treatment it deserves. In this paper, we provide a theoretical treatment of software protection. We reduce the problem of software protection to the problem of efficient simulation on oblivious RAM. A machine is oblivious if the sequence in which it accesses memory locations is equivalent for any two inputs with the same running time. For example, an oblivious Turing machine is one for which the movement of the heads on the tapes is identical for each computation. (Thus, the movement is independent of the actual input.) What is the slowdown in the running time of a machine, if it is required to be oblivious? In 1979, Pippenger and Fischer showed how a two-tape oblivious Turing machine can simulate, on-line, a one-tape Turing machine, with a logarithmic slowdown in the running time. We show an analogous result for the random-access machine (RAM) model of computation. In particular, we show how to do an on-line simulation of an arbitrary RAM by a probabilistic oblivious RAM with a polylogarithmic slowdown in the running time. On the other hand, we show that a logarithmic slowdown is a lower bound.
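To make the notion of obliviousness concrete, here is a deliberately naive sketch of our own (not the paper's construction): a read touches every memory cell in a fixed order, so the observable access pattern is independent of the requested index. This costs a linear factor per access, whereas the paper's probabilistic ORAM brings the overhead down to polylogarithmic.

```python
# Naive oblivious read: touch every cell in a fixed order, so the
# physical access sequence reveals nothing about the requested index.
# This only illustrates the definition; the Goldreich-Ostrovsky
# construction achieves polylogarithmic (not linear) overhead.

def oblivious_read(memory: list, index: int):
    result = None
    for i in range(len(memory)):   # identical touch sequence for any index
        value = memory[i]          # every cell is read exactly once
        if i == index:             # the selection happens in private state,
            result = value         # not in the observable access pattern
    return result

print(oblivious_read([10, 20, 30, 40], 2))  # -> 30
```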

1,752 citations

Journal Article
TL;DR: This paper gives the first formal evidence that quantum Turing machines violate the modern (complexity-theoretic) formulation of the Church–Turing thesis, and proves that $O(\log T)$ bits of precision suffice to support a $T$-step computation.
Abstract: In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch's model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that $O(\log T)$ bits of precision suffice to support a $T$-step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church–Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine, but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus is not in the class $BPP$ relative to that oracle. The class $BQP$ of languages that are efficiently decidable (with small error probability) on a quantum Turing machine satisfies $BPP \subseteq BQP \subseteq P^{\#P}$. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.
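A rough way to see the $O(\log T)$ bound (our back-of-the-envelope illustration, not the paper's proof): unitarity keeps per-step amplitude errors from being amplified, so truncating each amplitude to $b$ bits contributes a total error of roughly $T \cdot 2^{-b}$, and solving for a target final error eps gives $b = O(\log T)$:

```python
import math

# If each of T steps perturbs the state by at most 2**-b in norm,
# unitarity bounds the accumulated error by T * 2**-b.  Requiring
# T * 2**-b <= eps gives b = ceil(log2(T / eps)) = O(log T) bits.

def precision_bits(T: int, eps: float) -> int:
    return math.ceil(math.log2(T / eps))

print(precision_bits(10**6, 0.01))  # 27 bits for a million-step computation
```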

1,706 citations

Book Chapter
01 Jan 1990
TL;DR: In this chapter, the authors focus on rewrite systems, which are directed equations used to compute by repeatedly replacing sub-terms of a given formula with equal terms until the simplest form possible is obtained.
Abstract: This chapter focuses on rewrite systems, which are directed equations used to compute by repeatedly replacing sub-terms of a given formula with equal terms until the simplest form possible is obtained. As a formalism, rewrite systems have the full power of Turing machines and may be thought of as nondeterministic Markov algorithms over terms rather than strings. The theory of rewriting is in essence a theory of normal forms. To some extent, it is an outgrowth of the study of A. Church's Lambda Calculus and H. B. Curry's Combinatory Logic. The chapter discusses the syntax and semantics of equations from the algebraic, logical, and operational points of view. To use a rewrite system as a decision procedure, it must be convergent; the chapter describes this fundamental concept as an abstract property of binary relations. To use a rewrite system for computation or as a decision procedure for the validity of identities, the termination property is crucial, and the chapter presents the basic methods for proving termination. It also discusses the question of satisfiability of equations and the convergence property as applied to rewriting.
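As a toy illustration of these ideas (ours, not an example from the chapter): a rewrite system applies directed rules anywhere in a term until no left-hand side occurs. The two rules below strictly shrink the string, so rewriting terminates, and the system is confluent, so every input has a unique normal form; together these properties make it convergent.

```python
# Tiny string rewriting system: apply directed rules until no
# left-hand side occurs anywhere, i.e. a normal form is reached.

RULES = [("aa", "a"), ("bb", "b")]   # collapse repeated letters

def normal_form(term: str) -> str:
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)  # one rewrite step
                changed = True
    return term

print(normal_form("aaabba"))  # -> "aba"
```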

1,551 citations

Alex Graves, Greg Wayne, Ivo Danihelka
20 Oct 2014
TL;DR: The combined system is analogous to a Turing machine or von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.
Abstract: We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing machine or von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
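The attentional read can be sketched in a few lines (our simplification of the mechanism, not the authors' full architecture, which also includes write heads and location-based addressing): memory rows are scored against a controller-emitted key by cosine similarity, a softmax converts the scores into differentiable address weights, and the read is the weighted sum of rows.

```python
import numpy as np

# Content-based read from external memory, Neural-Turing-Machine style:
# every step is smooth in the key and the memory, so gradients can flow
# end-to-end through the addressing.

def content_read(memory: np.ndarray, key: np.ndarray, beta: float = 5.0):
    # cosine similarity between the key and each memory row
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)          # beta sharpens the focus
    w = w / w.sum()                  # softmax: differentiable address weights
    return w @ memory                # read vector: weighted sum of rows

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 4))                # 8 memory slots of width 4
print(np.round(content_read(M, key=M[3]), 2))  # reads back roughly row 3
print(np.round(M[3], 2))
```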

1,471 citations


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (89% related)
Data structure: 28.1K papers, 608.6K citations (87% related)
Approximation algorithm: 23.9K papers, 654.3K citations (84% related)
Concurrency: 13K papers, 347.1K citations (84% related)
Directed graph: 12.2K papers, 302.4K citations (84% related)
Performance Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    56
2022    139
2021    128
2020    148
2019    168
2018    141