
Turing machine

About: Turing machine is a research topic. Over its lifetime, 5,017 publications have been published within this topic, receiving 125,257 citations. The topic is also known as: deterministic Turing machine.
Papers

Journal ArticleDOI
Abstract: It is argued that underlying the Church-Turing hypothesis there is an implicit physical assertion. Here, this assertion is presented explicitly as a physical principle: ‘every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means’. Classical physics and the universal Turing machine, because the former is continuous and the latter discrete, do not obey the principle, at least in the strong form above. A class of model computing machines that is the quantum generalization of the class of Turing machines is described, and it is shown that quantum theory and the ‘universal quantum computer’ are compatible with the principle. Computing machines resembling the universal quantum computer could, in principle, be built and would have many remarkable properties not reproducible by any Turing machine. These do not include the computation of non-recursive functions, but they do include ‘quantum parallelism’, a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it. The intuitive explanation of these properties places an intolerable strain on all interpretations of quantum theory other than Everett’s. Some of the numerous connections between the quantum theory of computation and the rest of physics are explored. Quantum complexity theory allows a physically more reasonable definition of the ‘complexity’ or ‘knowledge’ in a physical system than does classical complexity theory.
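
As a concrete illustration of the 'quantum parallelism' mentioned above, here is a minimal numpy sketch (my own, not from the paper) of Deutsch's algorithm, its simplest instance: one oracle query decides whether f: {0,1} -> {0,1} is constant or balanced, a task for which any classical machine needs two queries.

    # Deutsch's algorithm simulated with plain state vectors.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

    def oracle(f):
        # The unitary U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix.
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch(f):
        state = np.kron([1, 0], [0, 1])        # start in |0>|1>
        state = np.kron(H, H) @ state          # superpose both inputs at once
        state = oracle(f) @ state              # a single evaluation of f
        state = np.kron(H, np.eye(2)) @ state  # interfere the two branches
        p0 = state[0] ** 2 + state[1] ** 2     # probability first qubit is 0
        return "constant" if p0 > 0.5 else "balanced"

    print(deutsch(lambda x: 0))       # constant
    print(deutsch(lambda x: 1 - x))   # balanced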

3,335 citations


Journal ArticleDOI
Charles H. Bennett
TL;DR: This result makes plausible the existence of thermodynamically reversible computers which could perform useful computations at useful speed while dissipating considerably less than kT of energy per logical step.
Abstract: The usual general-purpose computing automaton (e.g., a Turing machine) is logically irreversible: its transition function lacks a single-valued inverse. Here it is shown that such machines may be made logically reversible at every step, while retaining their simplicity and their ability to do general computations. This result is of great physical interest because it makes plausible the existence of thermodynamically reversible computers which could perform useful computations at useful speed while dissipating considerably less than kT of energy per logical step. In the first stage of its computation the logically reversible automaton parallels the corresponding irreversible automaton, except that it saves all intermediate results, thereby avoiding the irreversible operation of erasure. The second stage consists of printing out the desired output. The third stage then reversibly disposes of all the undesired intermediate results by retracing the steps of the first stage in backward order (a process which is only possible because the first stage has been carried out reversibly), thereby restoring the machine (except for the now-written output tape) to its original condition. The final machine configuration thus contains the desired output and a reconstructed copy of the input, but no other undesired data. The foregoing results are demonstrated explicitly using a type of three-tape Turing machine. The biosynthesis of messenger RNA is discussed as a physical example of reversible computation.
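
The three-stage scheme described above is easy to mimic in ordinary code. Here is a toy Python sketch (an illustration assuming each step comes paired with its inverse, not Bennett's three-tape construction): compute forward while saving the history, copy the output, then retrace the history backward so that nothing but the input and the output survives.

    def run_reversibly(steps, inverses, x):
        # Stage 1: compute forward, saving every intermediate result (the
        # 'history') instead of performing irreversible erasures.
        history = [x]
        for step in steps:
            history.append(step(history[-1]))

        # Stage 2: copy the desired output onto blank storage, which is
        # itself a reversible operation.
        output = history[-1]

        # Stage 3: retrace stage 1 in backward order, reversibly disposing
        # of the intermediate results and restoring the initial state.
        state = history.pop()
        for inv in reversed(inverses):
            state = inv(state)
            assert state == history.pop()   # each undo matches the record

        return state, output   # reconstructed input plus the output

    steps    = [lambda v: v + 7, lambda v: v * 3]   # each step is invertible
    inverses = [lambda v: v - 7, lambda v: v // 3]
    print(run_reversibly(steps, inverses, 5))       # -> (5, 36)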

3,242 citations


Journal ArticleDOI
01 Nov 2002 - Neural Computation
TL;DR: A new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks, based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry.
Abstract: A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such a recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
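
A hedged sketch of the idea: the snippet below stands in for the paper's spiking circuits with a discrete-time echo-state-style reservoir (a related but simpler model), showing a fixed random recurrent circuit acting as fading memory while a linear readout, trained by ridge regression, recovers a past input from the current state alone; all sizes and constants are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    N, T, delay = 200, 2000, 5          # circuit size, time steps, recall lag
    W_in = rng.normal(size=N)
    W = rng.normal(size=(N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # fading-memory regime

    u = rng.uniform(-1, 1, size=T)      # time-varying input stream
    states = np.zeros((T, N))
    x = np.zeros(N)
    for t in range(T):                  # transient high-dimensional dynamics
        x = np.tanh(W @ x + W_in * u[t])
        states[t] = x

    # Readout: ridge regression from the current state to the input 'delay'
    # steps in the past; no stable internal states are involved.
    X, y = states[delay:], u[:-delay]
    w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
    print("recall correlation:", np.corrcoef(X @ w, y)[0, 1])  # typically near 1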

2,877 citations


Proceedings Article
01 Jan 1987
TL;DR: Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery.
Abstract: Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. [The players want to] correctly run a given Turing machine M on these x_i's while keeping the maximum possible privacy about them. That is, they want to compute y = M(x_1, ..., x_n) without revealing more about the x_i's than is already contained in the value y itself. For instance, if M computes the sum of the x_i's, every single player should not be able to learn more than the sum of the inputs of the other parties. Here M may very well be a probabilistic Turing machine. In this case, all players want to agree on a single string y, selected with the right probability distribution, as M's output.
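
The sum example above lends itself to a short sketch. The snippet below uses additive secret sharing, a standard construction rather than this paper's protocol, to add private inputs so that each player only ever sees values that are uniformly random on their own; the inputs and modulus are illustrative.

    # Illustrative secure-sum sketch via additive secret sharing.
    import random

    Q = 2**61 - 1        # arithmetic is modulo a large prime

    def share(secret, n):
        # Split 'secret' into n additive shares that sum to it modulo Q.
        shares = [random.randrange(Q) for _ in range(n - 1)]
        shares.append((secret - sum(shares)) % Q)
        return shares

    inputs = [17, 42, 8]                      # the players' private x_i's
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]
    # Player j receives one share of every input and publishes their sum;
    # each share alone is uniformly random and reveals nothing.
    partials = [sum(col) % Q for col in zip(*all_shares)]
    print(sum(partials) % Q)                  # 67: the sum, and nothing more

Any single player thus sees only its own input and uniformly distributed shares, matching the requirement that no player learns more than the sum of the other parties' inputs.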

2,030 citations


Journal ArticleDOI
TL;DR: Four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols—presumably containing all of the information to be used in the induction.
Abstract: In Part I, four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols, presumably containing all of the information to be used in the induction. Almost all, if not all, problems in induction can be put in this form. Some strong heuristic arguments have been obtained for the equivalence of the last three models. One of these models is equivalent to a Bayes formulation, in which a priori probabilities are assigned to sequences of symbols on the basis of the lengths of inputs to a universal Turing machine that are required to produce the sequence of interest as output. Though it seems likely, it is not certain whether the first of the four models is equivalent to the other three. Few rigorous results are presented. Informal investigations are made of the properties of these models. There are discussions of their consistency and meaningfulness, of their degree of independence of the exact nature of the Turing machine used, and of the accuracy of their predictions in comparison to those of other induction methods. In Part II these models are applied to the solution of three problems: prediction of the Bernoulli sequence, extrapolation of a certain kind of Markov chain, and the use of phrase structure grammars for induction. Though some approximations are used, the first of these problems is treated most rigorously; the result is Laplace's rule of succession. The solution to the second problem uses less certain approximations, but the properties of the solution that are discussed are fairly independent of these approximations. The third application, using phrase structure grammars, is the least exact of the three. First a formal solution is presented; though it appears to have certain deficiencies, it is hoped that presentation of this admittedly inadequate model will suggest acceptable improvements to it. This formal solution is then applied in an approximate way to the determination of the “optimum” phrase structure grammar for a given set of strings. The results that are obtained are plausible, but subject to the uncertainties of the approximation used.
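
The Part II result for the Bernoulli sequence is easy to check numerically. The sketch below (my own, not from the paper) confirms that Bayesian prediction with a uniform prior over the Bernoulli parameter reproduces Laplace's rule of succession: after observing k ones in n symbols, the probability that the next symbol is a one is (k + 1) / (n + 2).

    import numpy as np

    def predictive_prob(k, n, grid=100001):
        # Posterior over the Bernoulli parameter p on a grid:
        # uniform prior times likelihood p^k (1-p)^(n-k).
        p = np.linspace(0, 1, grid)
        posterior = p**k * (1 - p)**(n - k)
        posterior /= posterior.sum()
        return float((p * posterior).sum())   # P(next symbol = 1 | data)

    k, n = 7, 10
    print(predictive_prob(k, n))   # ~0.6667
    print((k + 1) / (n + 2))       # Laplace's rule: 8/12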

1,831 citations


Network Information
Related Topics (5)

Topic                        Papers   Citations   Related
Complexity class             2.7K     79.1K       91%
Undecidable problem          3.1K     71.2K       91%
Automaton                    2.3K     53.8K       91%
Nondeterministic algorithm   5.8K     156.6K      91%
PSPACE                       1.6K     46K         91%
Performance Metrics

No. of papers in the topic in previous years:

Year   Papers
2022   4
2021   128
2020   147
2019   168
2018   141
2017   168

Top Attributes

Topic's top 5 most impactful authors

Author             Papers   Citations
Katsushi Inoue     25       190
Oscar H. Ibarra    25       470
Mark Burgin        24       192
Itsuo Takanami     19       337
Claudio Zandron    17       131