
Showing papers on "Turing machine published in 1999"


Book
01 Mar 1999
TL;DR: This book relates neural networks to Turing machines, covering in particular the construction of neural networks based on the explicit specification of a discrete-time Turing machine.
Abstract: 1 Computational Complexity.- 1.1 Neural Networks.- 1.2 Automata: A General Introduction.- 1.2.1 Input Sets in Computability Theory.- 1.3 Finite Automata.- 1.3.1 Neural Networks and Finite Automata.- 1.4 The Turing Machine.- 1.4.1 Neural Networks and Turing Machines.- 1.5 Probabilistic Turing Machines.- 1.5.1 Neural Networks and Probabilistic Machines.- 1.6 Nondeterministic Turing Machines.- 1.6.1 Nondeterministic Neural Networks.- 1.7 Oracle Turing Machines.- 1.7.1 Neural Networks and Oracle Machines.- 1.8 Advice Turing Machines.- 1.8.1 Circuit Families.- 1.8.2 Neural Networks and Advice Machines.- 1.9 Notes.- 2 The Model.- 2.1 Variants of the Network.- 2.1.1 A "System Diagram" Interpretation.- 2.2 The Network's Computation.- 2.3 Integer Weights.- 3 Networks with Rational Weights.- 3.1 The Turing Equivalence Theorem.- 3.2 Highlights of the Proof.- 3.2.1 Cantor-like Encoding of Stacks.- 3.2.2 Stack Operations.- 3.2.3 General Construction of the Network.- 3.3 The Simulation.- 3.3.1 P-Stack Machines.- 3.4 Network with Four Layers.- 3.4.1 A Layout Of The Construction.- 3.5 Real-Time Simulation.- 3.5.1 Computing in Two Layers.- 3.5.2 Removing the Sigmoid From the Main Layer.- 3.5.3 One Layer Network Simulates TM.- 3.6 Inputs and Outputs.- 3.7 Universal Network.- 3.8 Nondeterministic Computation.- 4 Networks with Real Weights.- 4.1 Simulating Circuit Families.- 4.1.1 The Circuit Encoding.- 4.1.2 A Circuit Retrieval.- 4.1.3 Circuit Simulation By a Network.- 4.1.4 The Combined Network.- 4.2 Networks Simulation by Circuits.- 4.2.1 Linear Precision Suffices.- 4.2.2 The Network Simulation by a Circuit.- 4.3 Networks versus Threshold Circuits.- 4.4 Corollaries.- 5 Kolmogorov Weights: Between P and P/poly.- 5.1 Kolmogorov Complexity and Reals.- 5.2 Tally Oracles and Neural Networks.- 5.3 Kolmogorov Weights and Advice Classes.- 5.4 The Hierarchy Theorem.- 6 Space and Precision.- 6.1 Equivalence of Space and Precision.- 6.2 Fixed Precision Variable Sized Nets.- 7 
Universality of Sigmoidal Networks.- 7.1 Alarm Clock Machines.- 7.1.1 Adder Machines.- 7.1.2 Alarm Clock and Adder Machines.- 7.2 Restless Counters.- 7.3 Sigmoidal Networks are Universal.- 7.3.1 Correctness of the Simulation.- 7.4 Conclusions.- 8 Different-limits Networks.- 8.1 At Least Finite Automata.- 8.2 Proof of the Interpolation Lemma.- 9 Stochastic Dynamics.- 9.1 Stochastic Networks.- 9.1.1 The Model.- 9.2 The Main Results.- 9.2.1 Integer Networks.- 9.2.2 Rational Networks.- 9.2.3 Real Networks.- 9.3 Integer Stochastic Networks.- 9.4 Rational Stochastic Networks.- 9.4.1 Rational Set of Choices.- 9.4.2 Real Set of Choices.- 9.5 Real Stochastic Networks.- 9.6 Unreliable Networks.- 9.7 Nondeterministic Stochastic Networks.- 10 Generalized Processor Networks.- 10.1 Generalized Networks: Definition.- 10.2 Bounded Precision.- 10.3 Equivalence with Neural Networks.- 10.4 Robustness.- 11 Analog Computation.- 11.1 Discrete Time Models.- 11.2 Continuous Time Models.- 11.3 Hybrid Models.- 11.4 Dissipative Models.- 12 Computation Beyond the Turing Limit.- 12.1 The Analog Shift Map.- 12.2 Analog Shift and Computation.- 12.3 Physical Relevance.- 12.4 Conclusions.

407 citations


Posted Content
TL;DR: In this article, the authors consider parameterized model-checking problems for various fragments of first-order logic as generic parameterized problems and show how this approach can be useful in studying both fixed-parameter tractability and intractability.
Abstract: In this article, we study parameterized complexity theory from the perspective of logic, or more specifically, descriptive complexity theory. We propose to consider parameterized model-checking problems for various fragments of first-order logic as generic parameterized problems and show how this approach can be useful in studying both fixed-parameter tractability and intractability. For example, we establish the equivalence between the model-checking for existential first-order logic, the homomorphism problem for relational structures, and the substructure isomorphism problem. Our main tractability result shows that model-checking for first-order formulas is fixed-parameter tractable when restricted to a class of input structures with an excluded minor. On the intractability side, for every t >= 0 we prove an equivalence between model-checking for first-order formulas with t quantifier alternations and the parameterized halting problem for alternating Turing machines with t alternations. We discuss the close connection between this alternation hierarchy and Downey and Fellows' W-hierarchy. On a more abstract level, we consider two forms of definability, called Fagin definability and slicewise definability, that are appropriate for describing parameterized problems. We give a characterization of the class FPT of all fixed-parameter tractable problems in terms of slicewise definability in finite variable least fixed-point logic, which is reminiscent of the Immerman-Vardi Theorem characterizing the class PTIME in terms of definability in least fixed-point logic.

108 citations


Journal ArticleDOI
TL;DR: An emerging field, that of nonclassical computability and nonclassical computing machinery, is described, and a philosophical defence of its foundations is provided.
Abstract: (1999). Beyond the universal Turing machine. Australasian Journal of Philosophy: Vol. 77, No. 1, pp. 46-66.

98 citations


Journal ArticleDOI
TL;DR: It is proved that splicing systems with finite components and certain controls on their work are computationally complete (they can simulate any Turing machine); moreover, there are universal splicing systems (systems with all components fixed which can simulate any given splicing system, when an encoding of the particular system is added—as a program—to the universal system).
Abstract: We prove that splicingsystems with finite components and certain controls on their work are computationallycomplete (they can simulate any Turing Machine); moreover, there are universalsplicingsystems (systems with all components fixed which can simulate any given splicing system, when an encoding of the particular system is added—as a program—to the universal system).
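As a concrete illustration, the splicing operation underlying such systems (in the Head/Păun style, a rule (u1, u2; u3, u4) recombines two strings around matching sites) can be sketched as follows; the function name and example strings are illustrative only, not taken from the paper:

```python
def splice(x, y, rule):
    """One splicing step: with rule (u1, u2, u3, u4), find a site
    x = x1·u1u2·x2 and a site y = y1·u3u4·y2, and recombine the
    prefixes and suffixes into x1·u1u4·y2."""
    u1, u2, u3, u4 = rule
    i = x.find(u1 + u2)
    j = y.find(u3 + u4)
    if i < 0 or j < 0:
        return None  # one of the strings lacks the required site
    return x[:i] + u1 + u4 + y[j + len(u3 + u4):]

# cutting "xxabyy" after "a" and "zzcdww" before "d" yields "xxadww"
result = splice("xxabyy", "zzcdww", ("a", "b", "c", "d"))
```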

93 citations


Journal ArticleDOI
TL;DR: It is shown that closed-form analytic functions consisting of a finite number of trigonometric terms can simulate Turing machines, with exponential slowdown in one dimension, or in real time in two or more dimensions.

92 citations


Journal ArticleDOI
TL;DR: It is shown that unbounded-error, space-O(s) bounded quantum Turing machines and probabilistic Turing machines are equivalent in power and, furthermore, that any QTM running in space s can be simulated deterministically in NC^2(2^s) ⊆ DSPACE(s^2) ⊆ DTIME(2^O(s)).

83 citations


Book ChapterDOI
20 Sep 1999
TL;DR: The expressive power of OCL in terms of navigability and computability is examined; it is shown that OCL is not equivalent to the relational calculus, and an algorithm computing the transitive closure of a binary relation is expressed in OCL.
Abstract: This paper examines the expressive power of OCL in terms of navigability and computability. First the expressive power of OCL is compared with the relational calculus; it is shown that OCL is not equivalent to the relational calculus. Then an algorithm computing the transitive closure of a binary relation (an operation that cannot be encoded in the relational calculus) is expressed in OCL. Finally the equivalence of OCL with a Turing machine is pondered.
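The transitive-closure computation at issue can be sketched as a naive fixpoint iteration; the Python below is an illustrative stand-in for the paper's OCL expression, not a transcription of it:

```python
def transitive_closure(rel):
    """Compute the transitive closure of a binary relation (a set of
    pairs) by repeatedly composing with rel until a fixpoint is reached."""
    closure = set(rel)
    while True:
        # one composition step: (a, b) in closure and (b, d) in rel
        new = {(a, d) for (a, b) in closure for (c, d) in rel if b == c}
        if new <= closure:       # fixpoint: nothing new was derived
            return closure
        closure |= new

pairs = transitive_closure({(1, 2), (2, 3), (3, 4)})
```

The fixpoint loop is exactly what a single relational-calculus query cannot express, since the number of composition steps depends on the input.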

78 citations


Journal Article
TL;DR: In this article, the authors consider a model analogous to Turing machines with a read-only input tape and propose two different space measures, corresponding to the maximal number of bits and clauses/monomials that need to be kept in the memory simultaneously.
Abstract: We study space complexity in the framework of propositional proofs. We consider a natural model analogous to Turing machines with a read-only input tape and such popular propositional proof systems as resolution, polynomial calculus, and Frege systems. We propose two different space measures, corresponding to the maximal number of bits and of clauses/monomials that need to be kept in memory simultaneously. We prove a number of lower and upper bounds in these models, as well as some structural results concerning the clause space for resolution and Frege systems.

75 citations


Book ChapterDOI
01 Jan 1999
TL;DR: John von Neumann conceived the first cellular automaton and left interesting views on the mathematics involved, including logic and probability, leading from the discrete to the continuous.
Abstract: At the beginning of this story is John von Neumann. As far back as 1948 he introduced the idea of a theory of automata in a conference at the Hixon Symposium, September 1948 (von Neumann, 1951). From that time on, he worked toward what he himself described not as a theory, but as an "imperfectly articulated and hardly formalized 'body of experience'" (introduction to "The Computer and the Brain", written around 1955-56 and published after his death (von Neumann, 1958)). He went on to conceive the first cellular automaton (he is also said to have introduced the "cellular" epithet (Burks, 1972)). He also left interesting views about the mathematics involved, including logic and probability, leading from the discrete to the continuous (von Neumann, 1951; von Neumann, 1956; von Neumann, 1966).

72 citations


Book ChapterDOI
01 Jan 1999
TL;DR: This paper proves the Game of Life’s universality with respect to several computational models: boolean circuits, Turing machines, and two-dimensional cellular automata.
Abstract: The Game of Life was created by J.H. Conway. One of the main features of this game is its universality. In this paper we prove this universality with respect to several computational models: boolean circuits, Turing machines, and two-dimensional cellular automata. These different points of view on Life's universality are chosen in order to clarify the situation and to simplify the original proof. We also present precise definitions of these three universality properties and explain the relations between them.
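A minimal sparse-set implementation of Life's update rule, together with the standard glider (a basic ingredient of such universality constructions), might look like this; it is a sketch of the game itself, not of the paper's construction:

```python
from collections import Counter

def life_step(cells):
    """One synchronous Game of Life update on a sparse set of live cells:
    a cell is alive next step iff it has exactly 3 live neighbours, or
    has 2 live neighbours and is currently alive."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# the standard glider: after four generations it reappears shifted
# one cell diagonally, which is what makes it useful as a "signal"
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
```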

69 citations


Book ChapterDOI
06 Jul 1999
TL;DR: Nested Petri nets are Petri nets using other Petri nets as tokens, thereby allowing easy description of hierarchical systems.
Abstract: Nested Petri nets are Petri nets using other Petri nets as tokens, thereby allowing easy description of hierarchical systems. Their nested structure makes some important verification problems undecidable (reachability, boundedness, ...) while some other problems remain decidable (termination, inevitability, ...).

Journal ArticleDOI
TL;DR: A is computable from (or recursive in) B if there is a Turing machine which, when equipped with an oracle for B, computes (the characteristic function of) A, i.e. for some e, φe^B = A.
Abstract: The primary notion of effective computability is that provided by Turing machines (or equivalently any of the other common models of computation). We denote the partial function computed by the eth Turing machine in some standard list by φe. When these machines are equipped with an “oracle” for a subset A of the natural numbers ω, i.e. an external procedure that answers questions of the form “is n in A”, they define the basic notion of relative computability or Turing reducibility (from Turing (1939)). We say that A is computable from (or recursive in) B if there is a Turing machine which, when equipped with an oracle for B, computes (the characteristic function of) A, i.e. for some e, φe^B = A. We denote this relation by A ≤T B which we read as A is (Turing) reducible to B or A is recursive (computable) in B. This relation is transitive and reflexive and so induces an equivalence relation ≡T (A ≡T B ⇔ A ≤T B ∧ B ≤T A) and a partial order also denoted by ≤T on the equivalence classes. These equivalence classes are called (Turing) degrees and the equivalence class of a set A ⊆ ω is called its degree. It is typically denoted by a or deg(A).
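A toy sketch of the oracle idea: deciding membership in the set A below needs only finitely many queries to an oracle for B, so A ≤T B. The sets and helper names here are illustrative only:

```python
def chi_A(n, oracle_B):
    """Characteristic function of A = {n : n and n+2 are both in B},
    computed relative to an oracle for B: the machine itself never
    inspects B directly, it only asks membership questions."""
    return oracle_B(n) and oracle_B(n + 2)

# a finite stand-in for B (here, some small primes) and its oracle
PRIMES = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31}
in_B = lambda m: m in PRIMES
```

Replacing the oracle changes which set A is computed, without changing the reducing machine, which is exactly the point of relative computability.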

Proceedings ArticleDOI
01 Mar 1999
TL;DR: JFLAP is enhanced to allow one to study the proofs of several theorems that focus on conversions of languages from one form to another, such as converting an NFA to a DFA and then to a minimum state DFA.
Abstract: An automata theory course can be taught in an interactive, hands-on manner using a computer. At Duke we have been using the software tool JFLAP to provide interaction and feedback in CPS 140, our automata theory course. JFLAP is a tool for designing and running nondeterministic versions of finite automata, pushdown automata, and Turing machines. Recently, we have enhanced JFLAP to allow one to study the proofs of several theorems that focus on conversions of languages, from one form to another, such as converting an NFA to a DFA and then to a minimum state DFA. In addition, our enhancements combined with other tools allow one to interactively study LL and LR parsing methods.
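The NFA-to-DFA conversion that JFLAP animates is the classical subset construction, in which each DFA state is a set of NFA states. A minimal sketch (not JFLAP's code; the example automaton and names are illustrative):

```python
from itertools import chain

def nfa_to_dfa(alphabet, delta, start, accept):
    """Subset construction: explore reachable sets of NFA states.
    delta maps (state, symbol) -> set of successor states."""
    start_set = frozenset({start})
    trans, seen, work = {}, {start_set}, [start_set]
    while work:
        S = work.pop()
        for a in alphabet:
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            trans[(S, a)] = T
            if T not in seen:
                seen.add(T)
                work.append(T)
    # a DFA state accepts iff it contains some accepting NFA state
    return trans, start_set, {S for S in seen if S & accept}

def dfa_accepts(trans, start, accepting, word):
    S = start
    for a in word:
        S = trans[(S, a)]
    return S in accepting

# toy NFA over {a, b} accepting exactly the strings ending in "ab"
delta = {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"}, ("q1", "b"): {"q2"}}
trans, start, accepting = nfa_to_dfa("ab", delta, "q0", {"q2"})
```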

Book ChapterDOI
TL;DR: The present paper gives a mathematically precise form to the impression of great complexity of the decision problem for R →, the implication fragment of R, by showing that any Turing machine which solves this decision problem must use an exponential amount of space on infinitely many inputs.
Abstract: Relevance logic is distinguished among non-classical logics by the richness of its mathematical as well as philosophical structure. This richness is nowhere more evident than in the difficulty of the decision problem for the main relevant propositional logics. These logics are in general undecidable [22]; however, there are important decidable subsystems. The decision procedures for these subsystems convey the impression of great complexity. The present paper gives a mathematically precise form to this impression by showing that the decision problem for R →, the implication fragment of R, is exponential space hard. This means that any Turing machine which solves this decision problem must use an exponential amount of space (relative to the input size) on infinitely many inputs.

Journal ArticleDOI
John H. Reif1
TL;DR: Techniques for executing lengthy computations using short DNA strands by more or less conventional biotechnology engineering techniques within a small number of lab steps are described, in the context of well defined abstract models of biomolecular computation.
Abstract: This paper is concerned with the development of techniques for massively parallel computation at the molecular scale, which we refer to as molecular parallelism. While this may at first appear to be purely science fiction, Adleman [Ad1] has already employed molecular parallelism in the solution of the Hamiltonian path problem, and successfully tested his techniques in a lab experiment on DNA for a small graph. Lipton [L] showed that finding the satisfying inputs to a Boolean expression of size n can be done in O(n) lab steps using DNA of length O(n log n) base pairs. This recent work by Adleman and Lipton in molecular parallelism considered only the solution of NP search problems, and provided no way of quickly executing lengthy computations by purely molecular means; the number of lab steps depended linearly on the size of the simulated expression. See [Re3] for further recent work on molecular parallelism and see [Re4] for an extensive survey of molecular parallelism. Our goal is to execute lengthy computations quickly by the use of molecular parallelism. We wish to execute these biomolecular computations using short DNA strands by more or less conventional biotechnology engineering techniques within a small number of lab steps. This paper describes techniques for achieving this goal, in the context of well defined abstract models of biomolecular computation. Although our results are of theoretical consequence only, due to the large amount of molecular parallelism (i.e., large test tube volume) required, we believe that our theoretical models and results may be a basis for more practical later work, just as was done in the area of parallel computing. We propose two abstract models of biomolecular computation.
The first, the Parallel Associative Memory (PAM) model, is a very high-level model which includes a Parallel Associative Matching (PA-Match) operation that appears to improve the power of molecular parallelism beyond the operations previously considered by Lipton [L]. We give some simulations of conventional sequential and parallel computational models by our PAM model. Each of the simulations uses strings of length O(s) over an alphabet of size O(s) (which correspond to DNA of length O(s log s) base pairs). Using O(s log s) PAM operations that are not PA-Match (or O(s) operations assuming a ligation operation) and t PA-Match operations, we can: 1. simulate a nondeterministic Turing machine computation with space bound s and time bound 2^O(s), with t = O(s); 2. simulate a CREW PRAM with time bound D, with M memory cells, and processor bound P, where here s = O(log(PM)) and t = O(D+s); 3. find the satisfying inputs to a Boolean circuit constructible in space s with n inputs, unbounded fan-out, and depth D, where here t = O(D+s). We also propose a Recombinant DNA (RDNA) model which is a low-level model that allows operations that are abstractions of very well understood recombinant DNA operations and provides a representation, which we call the complex, for the relevant structural properties of DNA. The PA-Match operation for lengthy strings of length s cannot be feasibly implemented by recombinant DNA techniques directly by a single step of complementary pairing in DNA; nevertheless we show this Matching operation can be simulated in the RDNA model with O(s) slowdown by multiple steps of complementary pairing of substrings of length 2 (corresponding to logarithmic length DNA subsequences). Each of the other operations of the PAM model can be executed in our RDNA model, without slowdown.
We further show that, with a further O(s)/log(1/e) slowdown, the simulations can be done correctly with probability 1/2 even if certain recombinant DNA operations (e.g., Separation) can err with probability e. We also observe that efficient simulations of our molecular models can be done by PRAMs and thus Turing machines.

Book ChapterDOI
TL;DR: This paper reviews single-stream and multiple-stream interaction machines, extensions of set theory and algebra for models of sequential interaction, and interactive extensions of the Turing test to motivate the use of interactive models as a basis for applications to computer architecture, software engineering, and artificial intelligence.
Abstract: The irreducibility of interactive to algorithmic computing requires fundamental questions concerning models of computation to be reexamined. This paper reviews single-stream and multiple-stream interaction machines, extensions of set theory and algebra for models of sequential interaction, and interactive extensions of the Turing test. It motivates the use of interactive models as a basis for applications to computer architecture, software engineering, and artificial intelligence.

Proceedings Article
13 Jul 1999
TL;DR: Experimental results show that, in this domain, the new graph-based operator provides a clear advantage over two-point crossover.
Abstract: The success of the application of evolutionary approaches depends, to a large extent, on problem representation and on the genetic operators used. In this paper we introduce a new graph-based crossover operator and compare it with classical two-point crossover. The study was carried out using a theoretically hard problem known as Busy Beaver. This problem involves the search for the Turing machine that produces the maximum number of ones when started on a blank tape. Experimental results show that, in this domain, the new graph-based operator provides a clear advantage over two-point crossover.
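The Busy Beaver search space can be explored with a tiny Turing machine simulator. The sketch below (names are illustrative, not from the paper) runs Radó's 2-state, 2-symbol champion, which writes four 1s and halts after six steps:

```python
def run_tm(rules, state="A", max_steps=10_000):
    """Simulate a one-tape Turing machine on an all-blank (0) tape.
    rules maps (state, symbol) -> (write, move, next_state)."""
    tape, pos, steps = {}, 0, 0
    while state != "HALT" and steps < max_steps:
        sym = tape.get(pos, 0)
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += move
        steps += 1
    return sum(tape.values()), steps  # (ones written, steps taken)

# Rado's 2-state, 2-symbol busy beaver champion: BB(2) = 4 ones, 6 steps
BB2 = {("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
       ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "HALT")}
```

An evolutionary search over machines like these evaluates each candidate with such a simulator, using a step cap since halting cannot be decided in general.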

Journal ArticleDOI
TL;DR: A framework for an algorithmic analysis of dissipative flows is presented, enabling the comparison of the performance of discrete and continuous time analog computation models.
Abstract: Dissipative flows model a large variety of physical systems. In this Letter the evolution of such systems is interpreted as a process of computation; the attractor of the dynamics represents the output. A framework for an algorithmic analysis of dissipative flows is presented, enabling the comparison of the performance of discrete and continuous time analog computation models. A simple algorithm for finding the maximum of n numbers is analyzed, and shown to be highly efficient. The notion of tractable (polynomial) computation in the Turing model is conjectured to correspond to computation with tractable (analytically solvable) dynamical systems having polynomial complexity. The computation of a digital computer, and its mathematical abstraction, the Turing machine is described by a map on a discrete configuration space. In recent years scientists have developed new approaches to computation, some of them based on continuous time analog systems. The most promising are neuromorphic systems [1], models of human memory [2], and experimentally realizable quantum computers [3]. Although continuous time systems are widespread in experimental realizations, no theory exists for their algorithmic analysis. The standard theory of computation and computational complexity [4] deals with computation in discrete time and in a discrete configuration space, and is inadequate for the description of such systems. This Letter describes an attempt to fill this gap. Our model of a computer is based on dissipa
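As a toy caricature of computing with a flow (a hedged sketch; these are not the Letter's actual equations), one can Euler-integrate winner-take-all dynamics in which the weight on the largest of n input values grows at the expense of the others, so the attractor encodes the answer:

```python
def analog_max(values, dt=0.01, steps=20_000):
    """Euler-integrate replicator-style dynamics
        dx_i/dt = x_i * (c_i - sum_j x_j c_j),
    starting from uniform weights; the coordinate with the largest
    value c_i comes to dominate, and its index is the output."""
    n = len(values)
    x = [1.0 / n] * n
    for _ in range(steps):
        avg = sum(xi * ci for xi, ci in zip(x, values))
        x = [xi + dt * xi * (ci - avg) for xi, ci in zip(x, values)]
    return max(range(n), key=lambda i: x[i])
```

Here the "computation" is convergence to an attractor, and the discretization step dt plays the role the Letter assigns to comparing discrete and continuous time models.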

Posted Content
TL;DR: A general model of multi-tape, multi-head quantum Turing machines with multiple final states that also allows tape heads to stay still is discussed.
Abstract: The notion of quantum Turing machines is a basis of quantum complexity theory. We discuss a general model of multi-tape, multi-head quantum Turing machines with multiple final states that also allows tape heads to stay still.

Journal ArticleDOI
TL;DR: This work investigates cellular automata on two-dimensional arrays as language recognizers, and some limitations of the power of real-time recognition are shown.

Journal ArticleDOI
TL;DR: This special issue contains both material on non-computable aspects of Kolmogorov complexity and material on many fascinating applications based on different ways of approximating Kolmogorov complexity.
Abstract: 1. UNIVERSALITY. The theory of Kolmogorov complexity is based on the discovery, by Alan Turing in 1936, of the universal Turing machine. After proposing the Turing machine as an explanation of the notion of a computing machine, Turing found that there exists one Turing machine which can simulate any other Turing machine. Complexity, according to Kolmogorov, can be measured by the length of the shortest program for a universal Turing machine that correctly reproduces the observed data. It has been shown that, although there are many universal Turing machines (and therefore many possible 'shortest' programs), the corresponding complexities differ by at most an additive constant. The main thrust of the theory of Kolmogorov complexity is its 'universality'; it strives to construct universal learning methods based on universal coding methods. This approach was originated by Solomonoff and made more appealing to mathematicians by Kolmogorov. Typically these universal methods will be computable only in some weak sense. In applications, therefore, we can only hope to approximate Kolmogorov complexity and related notions (such as randomness deficiency and algorithmic information mentioned below). This special issue contains both material on non-computable aspects of Kolmogorov complexity and material on many fascinating applications based on different ways of approximating Kolmogorov complexity. 2. BEGINNINGS. As we have already mentioned, the two main originators of the theory of Kolmogorov complexity were Ray Solomonoff (born 1926) and Andrei Nikolaevich Kolmogorov (1903-1987). The motivations behind their work were completely different; Solomonoff was interested in inductive inference and artificial intelligence and Kolmogorov was interested in the foundations of probability theory and, also, of information theory. They arrived, nevertheless, at the same mathematical notion, which is now known as Kolmogorov complexity. In 1964 Solomonoff published his model of inductive inference. He argued that any inference problem can be presented as a problem of extrapolating a very long sequence of binary symbols; 'given a very long sequence, represented by T, what is the probability that it will be followed by a
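The theme of approximating Kolmogorov complexity can be illustrated with a standard compression-based upper bound (an illustration of the general idea, not code from the issue): for any fixed compressor, the compressed length of a string is a computable upper bound on its Kolmogorov complexity, up to an additive constant.

```python
import hashlib
import zlib

def K_upper_bound(data: bytes) -> int:
    """Length of a zlib-compressed encoding of data: a computable
    upper bound (up to an additive constant) on Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

# a highly regular string compresses to almost nothing...
regular = b"ab" * 500
# ...while pseudo-random bytes (concatenated SHA-256 digests, an
# illustrative stand-in for incompressible data) barely compress at all
irregular = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))
```

The gap between the two compressed lengths is exactly the kind of computable proxy for randomness deficiency that the applications in the issue rely on.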

Book ChapterDOI
06 Sep 1999
TL;DR: In this paper, a general model of multi-tape, multi-head quantum Turing machines with multiple final states that also allows tape heads to stay still is discussed; the notion of quantum Turing machines is a basis of quantum complexity theory.
Abstract: The notion of quantum Turing machines is a basis of quantum complexity theory. We discuss a general model of multi-tape, multi-head quantum Turing machines with multiple final states that also allows tape heads to stay still.

Journal ArticleDOI
TL;DR: It is proved that all recursively enumerable languages can be generated by context-free returning parallel communicating grammar systems by showing how the parallel communicating grammars can simulate two-counter machines.

Proceedings ArticleDOI
01 Jun 1999
TL;DR: The basic predicates and operations on solids are computable in this model which admits regular and non-regular sets and supports a design methodology for actual robust algorithms.
Abstract: Solid modelling and computational geometry are based on classical topology and geometry in which the basic predicates and operations, such as membership, subset inclusion, union and intersection, are not continuous and therefore not computable. But a sound computational framework for solids and geometry can only be built in a framework with computable predicates and operations. In practice, correctness of algorithms in computational geometry is usually proved using the unrealistic Real RAM machine model of computation, which allows comparison of real numbers, with the undesirable result that correct algorithms, when implemented, turn into unreliable programs. Here, we use a domain-theoretic approach to recursive analysis to develop the basis of an effective and realistic framework for solid modelling. This framework is equipped with a well defined and realistic notion of computability which reflects the observable properties of real solids. The basic predicates and operations on solids are computable in this model which admits regular and non-regular sets and supports a design methodology for actual robust algorithms. Moreover, the model is able to capture the uncertainties of input data in actual CAD situations.

Book ChapterDOI
01 Jan 1999
TL;DR: The modified model, the reversible guided recombination system, has the computational power of a Turing machine, which indicates that, in principle, some unicellular organisms may have the capacity to perform any computation carried out by an electronic computer.
Abstract: In [10] we proved that a model for the guided homologous recombinations that take place during gene rearrangement in ciliates has the computational power of a Turing machine, the accepted formal model of computation. In this paper we change some of the assumptions and propose a new model that (i) allows recombinations between and within circular strands in addition to recombinations between linear and circular strands and within linear strands, (ii) relies on a commutative splicing scheme, and (iii) where all recombinations are reversible. We prove that the modified model, the reversible guided recombination system, has the computational power of a Turing machine. This indicates that, in principle, some unicellular organisms may have the capacity to perform any computation carried out by an electronic computer.

Journal ArticleDOI
TL;DR: Only computational explanations of a content-involving sort can answer certain 'how'-questions; can support content- involving counterfactuals; and have the generality characteristic of psychological explanations.
Abstract: Only computational explanations of a content-involving sort can answer certain 'how'-questions; can support content-involving counterfactuals; and have the generality characteristic of psychological explanations. Purely formal characterizations of computations have none of these properties, and do not determine content. These points apply not only to psychological explanation, but to Turing machines themselves. Computational explanations which involve content are not opposed to naturalism. They are also required if we are to explain the content-involving properties of mental states.

Proceedings ArticleDOI
01 Mar 1999
TL;DR: The Java Computability Toolkit (JCT) is introduced here as a new teaching aide and as an exploratory student's supplement to a course on theory of computation.
Abstract: Interactive visualization tools for models of computation provide a more compelling means of exploration and feedback than traditional paper and pencil methods in theory of computation courses. The Java Computability Toolkit (JCT) is introduced here as a new teaching aide and as an exploratory student's supplement to a course on theory of computation. JCT consists of two Java multiple-window, web-accessible, graphical environments, allowing the construction and simulation of finite automata and Turing machines. This paper discusses JCT's use, design, and applications in teaching.

01 Sep 1999
TL;DR: The goal is to provide multiple perspectives on interactive modeling from the viewpoint of interaction machines and mathematics, and to persuade readers that interaction paradigms can play a significant role in narrowing the gap between theoretical models and software practice.
Abstract: D. Goldin, Univ. of Massachusetts - Boston; P. Wegner, Brown University. Finite computing agents that interact with an environment are shown to be more expressive than Turing machines according to a notion of expressiveness that measures problem-solving ability and is specified by observation equivalence. Sequential interactive models of objects, agents, and embedded systems are shown to be more expressive than algorithms. Multi-agent (distributed) models of coordination, collaboration, and true concurrency are shown to be more expressive than sequential models. The technology shift from algorithms to interaction is expressed by a mathematical paradigm shift that extends inductive definition and reasoning methods for finite agents to coinductive methods of set theory and algebra. An introduction to models of interactive computing is followed by an account of mathematical models of sequential interaction in terms of coinductive methods of non-well-founded set theory, coalgebras, and bisimulation. Models of distributed information flow and multi-agent interaction are developed, and the Turing test is extended to interactive sequential and distributed models of computation. Specification of interactive systems is defined in terms of observable behavior, Gödel incompleteness is shown for interaction machines, and explanatory power of physical theories is shown to correspond to expressiveness for models of computation. Our goal is to provide multiple perspectives on interactive modeling from the viewpoint of interaction machines and mathematics, to persuade readers that interaction paradigms can play a significant role in narrowing the gap between theoretical models and software practice.

Posted Content
TL;DR: There are infinite time computable functions f:R-->R that are not one-tape computable, so the two models of supertask computation are not equivalent; moreover, the class of one-tape computable functions is not closed under composition, but closing it under composition yields the full class of all infinite time computable functions.
Abstract: Infinite time Turing machines with only one tape are in many respects fully as powerful as their multi-tape cousins. In particular, the two models of machine give rise to the same class of decidable sets, the same degree structure and, at least for functions f:R-->N, the same class of computable functions. Nevertheless, there are infinite time computable functions f:R-->R that are not one-tape computable, and so the two models of supertask computation are not equivalent. Surprisingly, the class of one-tape computable functions is not closed under composition; but closing it under composition yields the full class of all infinite time computable functions. Finally, every ordinal which is clockable by an infinite time Turing machine is clockable by a one-tape machine, except certain isolated ordinals that end gaps in the clockable ordinals.

Book ChapterDOI
10 Jan 1999
TL;DR: It is shown that numeric folding over a given vocabulary is sometimes not able to compute the whole class of uniform aggregate functions over the same vocabulary, but this limitation can be partially remedied by the restructuring capabilities of a query language.
Abstract: In this paper we present a new approach for studying aggregations in the context of database query languages. Starting from a broad definition of aggregate function, we address our investigation from two different perspectives. We first propose a declarative notion of uniform aggregate function that refers to a family of scalar functions uniformly constructed over a vocabulary of basic operators by a bounded Turing machine. This notion yields an effective tool to study the effect of the embedding of a class of built-in aggregate functions in a query language. All the aggregate functions most used in practice are included in this classification. We then present an operational notion of aggregate function, by considering a high-order folding constructor, based on structural recursion, devoted to computing numeric aggregations over complex values. We show that numeric folding over a given vocabulary is sometimes not able to compute, by itself, the whole class of uniform aggregate functions over the same vocabulary. It turns out however that this limitation can be partially remedied by the restructuring capabilities of a query language.