
Showing papers on "Turing machine published in 1983"


Journal ArticleDOI
TL;DR: A modified time complexity measure UTIME of Turing machine computations, which is sensitive to multiplication by constants, is introduced, and the results concerning languages over one- and two-letter alphabets are refined.

156 citations


Proceedings ArticleDOI
07 Nov 1983
TL;DR: It is shown that, for multi-tape Turing machines, non-deterministic linear time is more powerful than deterministic linear time; prospects for extending this result to more general Turing machines are discussed.
Abstract: We show that, for multi-tape Turing machines, non-deterministic linear time is more powerful than deterministic linear time. We also discuss the prospects for extending this result to more general Turing machines.

126 citations


Journal ArticleDOI
TL;DR: The well-defined but noncomputable functions Σ(k) and S(k), given by T. Rado as the "score" and "shift number" for the k-state Turing machine "Busy Beaver Game", are determined for k = 4: the previously reported lower bounds are shown to be the actual values, Σ(4) = 13 and S(4) = 107.
Abstract: The well-defined but noncomputable functions Σ(k) and S(k), given by T. Rado as the "score" and "shift number" for the k-state Turing machine "Busy Beaver Game", were previously known only for k ≤ 3. The lower bounds Σ(4) ≥ 13 and S(4) ≥ 107, reported by this author, supported the conjecture that these lower bounds are the actual particular values of the functions for k = 4. The four-state case has previously been reduced to solving the blank input tape halting problem of only 5,820 individual machines. In this final stage of the k = 4 case, one appears to move into a heuristic level of higher order where it is necessary to treat each machine as representing a distinct theorem. The remaining set consists of two primary classes in which a machine and its tape are viewed as the representation of a growing string of cellular automata. The proof techniques, embodied in programs, are entirely heuristic, while the inductive proofs, once established by the computer, are completely rigorous and become the key to the proof of the new and original mathematical results: Σ(4) = 13 and S(4) = 107. "In any case, even though skilled mathematicians and experienced programmers attempted to evaluate Σ(3) and S(3), there is no evidence that any known approach will yield the answer, even if we avail ourselves of high-speed computers and elaborate programs. As regards Σ(4), S(4), the situation seems to be entirely hopeless at present." -- Tibor Rado, 1963. 1. Background and Introduction. The "Busy Beaver Game" was devised by Tibor Rado [8] for the purpose of illustrating the notion of noncomputability. Given the set of Turing machines of exactly k states which operate with the minimum alphabet of two symbols (a space and a mark, or 0 and 1), one considers the problem of the behavior of these machines on a tape which is initially all blank (all 0's).
This is a finite set of machines, there being exactly (4k + 1)^(2k) distinct machines, where, with two table entries per state, each table entry may consist of printing 0 or 1, moving right or left, branching to state 1, 2, ..., k, or simply an entry to declare a halt.* Since the input tape is blank, each machine faces one of two possible fates: either it eventually halts, or else it continues running forever. After a particular machine Received July 22, 1981; revised June 28, 1982 and September 7, 1982. 1980 Mathematics Subject Classification. Primary 03D10, 03B35; Secondary 68C30, 68D20.
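The blank-tape game and the counting argument above are easy to make concrete. A minimal sketch in Python (the simulator and the `bb2` table are illustrative, not the paper's code): it runs the known 2-state champion, which halts after S(2) = 6 steps leaving Σ(2) = 4 ones on the tape, and evaluates the (4k + 1)^(2k) census for k = 4.

```python
# Minimal two-symbol Turing machine simulator.  The transition table maps
# (state, symbol) -> (write, move, next_state); next_state None means halt.

def run_blank_tape(table, start="A", max_steps=10_000):
    """Run a machine on an initially blank (all-0) tape.
    Returns (steps, ones) if it halts within max_steps, else None."""
    tape, pos, state = {}, 0, start
    for step in range(1, max_steps + 1):
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if state is None:
            return step, sum(tape.values())
    return None

# The 2-state Busy Beaver champion.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", None),
}

print(run_blank_tape(bb2))      # (6, 4): S(2) = 6 steps, Sigma(2) = 4 ones

# Size of the search space: (4k + 1)^(2k) machines with k states.
k = 4
print((4 * k + 1) ** (2 * k))   # 6975757441 candidate 4-state machines
```

The dictionary tape avoids fixing a tape length in advance, which matters since a Busy Beaver candidate's head excursion is not known ahead of time.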

74 citations


Book ChapterDOI
21 Aug 1983
TL;DR: The class LOGCFL consists of all sets log space reducible to the class CFL of context free languages.
Abstract: The class LOGCFL consists of all sets log space reducible to the class CFL of context free languages. (Here A is log space reducible to B iff there is some log space computable function f such that for all x, x ∈ A iff f(x) ∈ B.) Sudborough [Su] characterized LOGCFL as those sets accepted by a nondeterministic auxiliary pushdown machine in log space and polynomial time. From this, it follows that NL ⊆ LOGCFL. Ruzzo [Ru2] further characterized LOGCFL as those sets accepted by an ATM in log space and polynomial tree size, and proved LOGCFL ⊆ NC^2. Besides context free languages and members of NL, the class LOGCFL contains the monotone planar circuit value problem [DC], bounded valence subtree isomorphism [Ru3], and basic dynamic programming problems [Gol]. The latter are more naturally expressed as functions, and so it seems that the natural class to consider is CFL* (the closure of CFL under ≤). Proposition 5. LOGCFL ⊆ CFL*. Proof. An inspection of Sudborough's proof [Su] that every set accepted by a nondeterministic auxiliary pushdown machine in log space and polynomial time is log space reducible to CFL shows that the reduction is via an NC^1 computable function. Hence LOGCFL = NC^1(CFL), and the proposition follows.
The above inclusion is proper, not only because CFL* contains functions other than 0-1 functions, but because apparently LOGCFL is not closed under complementation. For example, the complement of the graph accessibility problem does not appear to be in LOGCFL.

58 citations


Journal ArticleDOI
TL;DR: A simple, natural new complexity measure for 2-ATM's (or TR2-ATM's), called ‘leaf-size’, is introduced, and a spectrum of complexity classes based on leaf-size bounded computations is provided, to investigate recognizability of connected patterns by 2-ATM's.

49 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to show how the main results of the Church-Markov-Turing theory of computable functions may quickly be derived and understood without recourse to the largely irrelevant theories of recursive functions, Markov algorithms, or Turing machines.
Abstract: The modern theory of computability is based on the works of Church, Markov and Turing who, starting from quite different models of computation, arrived at the same class of computable functions. The purpose of this paper is to show how the main results of the Church-Markov-Turing theory of computable functions may quickly be derived and understood without recourse to the largely irrelevant theories of recursive functions, Markov algorithms, or Turing machines. We do this by ignoring the problem of what constitutes a computable function and concentrating on the central feature of the Church-Markov-Turing theory: that the set of computable partial functions can be effectively enumerated. In this manner we are led directly to the heart of the theory of computability without having to fuss about what a computable function is. The spirit of this approach is similar to that of [RGRS]. A major difference is that we operate in the context of constructive mathematics in the sense of Bishop [BSH1], so all functions are computable by definition, and the phrase “you can find” implies “by a finite calculation.” In particular if P is some property, then the statement “for each m there is n such that P(m, n)” means that we can construct a (computable) function θ such that P(m, θ(m)) for all m. Church's thesis has a different flavor in an environment like this where the notion of a computable function is primitive. One point of such a treatment of Church's thesis is to make available to Bishop-style constructivists the Markovian counterexamples of Russian constructivism and recursive function theory. The lack of serious candidates for computable functions other than recursive functions makes it quite implausible that a Bishop-style constructivist could refute Church's thesis, or any consequence of Church's thesis.
Hence counterexamples such as Specker's bounded increasing sequence of rational numbers that is eventually bounded away from any given real number [SPEC] may be used, as Brouwerian counterexamples are, as evidence of the unprovability of certain assertions.

27 citations


01 May 1983
TL;DR: A proof by a computer program of the Turing completeness of a computational paradigm akin to Pure LISP is described, which the authors believe is the first instance of a machine proving the Turing completeness of another computational paradigm.
Abstract: The authors describe a proof by a computer program of the Turing completeness of a computational paradigm akin to Pure LISP. That is, they define formally the notions of a Turing machine and of a version of Pure LISP and prove that anything that can be computed by a Turing machine can be computed by LISP. While this result is straightforward, they believe this is the first instance of a machine proving the Turing completeness of another computational paradigm.

27 citations


Journal ArticleDOI
Wolfgang J. Paul1
TL;DR: For d and k > 2, d-dimensional k-tape Turing machines cannot simulate d-dimensional Turing machines with k heads on one tape in real time.

22 citations


Journal ArticleDOI
TL;DR: Conway does it again! He's already given us Life and Sprouts, Phutball and Hackenbush, the Doomsday Rule and Sylver Coinage, and dozens of other things [1], bewildering on first acquaintance, but enticingly arranged and punctuated with pedagogy so that we can't help learning about them.
Abstract: Conway does it again! He's already given us Life and Sprouts, Phutball and Hackenbush, the Doomsday Rule and Sylver Coinage, and dozens of other things [1], bewildering on first acquaintance, but enticingly arranged and punctuated with pedagogy so that we can't help learning about them. It's an old adage that you don't really understand something until you teach it to someone else. Donald Knuth extends this to teaching it to a computer. John Horton Conway is never satisfied with his exposition until he can explain his latest interest to the person-in-the-street, even one without mathematical training. If you understand logic and computers all the way from Turing machines to the implementing of a program that you've written in a high-level language, then this article isn't for you. But before you go, you must at least be intrigued by Conway's machine, a row of fourteen rational numbers (FIGURE 1).
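The "row of fourteen rational numbers" is Conway's PRIMEGAME, a FRACTRAN program. A short interpreter makes the trick visible (the fraction list below is the standard published one; the function names are our own): starting from 2 and repeatedly multiplying by the first fraction that yields an integer, the powers of 2 that ever appear have exactly the prime exponents 2, 3, 5, 7, ...

```python
from fractions import Fraction

# Conway's PRIMEGAME: the row of fourteen rationals.
PRIMEGAME = [Fraction(n, d) for n, d in [
    (17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29), (95, 23),
    (77, 19), (1, 17), (11, 13), (13, 11), (15, 14), (15, 2), (55, 1)]]

def fractran(n, program, steps):
    """Yield successive values of the FRACTRAN computation from n."""
    for _ in range(steps):
        for f in program:
            if (n * f).denominator == 1:   # first fraction giving an integer
                n = int(n * f)
                break
        else:
            return                          # no fraction applies: halt
        yield n

# Exponents of the powers of 2 that the game emits.
primes = [v.bit_length() - 1
          for v in fractran(2, PRIMEGAME, 5000)
          if v & (v - 1) == 0]
print(primes)  # [2, 3, 5, 7, ...]
```

The `v & (v - 1) == 0` test picks out powers of 2; everything in between is the machine's scratch work, encoded in the other prime factors of the current value.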

20 citations


Book ChapterDOI
TL;DR: A new model of parallel computation, the FIFO nets, is introduced; FIFO nets have the power of Turing machines, and regularity is decidable for monogeneous nets.

19 citations


Book ChapterDOI
23 May 1983
TL;DR: A simple general technique for minimal logical implementation of Turing machine programs provides for completeness of logical decision problems which via the implementation correspond to limited or unlimited halting problems.
Abstract: We develop a simple general technique for minimal logical implementation of Turing machine programs. This implementation provides for completeness of logical decision problems which via the implementation correspond to limited or unlimited halting problems.

Book ChapterDOI
21 Aug 1983
TL;DR: Probabilistic 1-way Turing machines are proved to be logarithmically more economical in the sense of space complexity for recognition of specific languages.
Abstract: Probabilistic 1-way Turing machines are proved to be logarithmically more economical in the sense of space complexity for recognition of specific languages. Languages recognizable in o (loglog n) space are proved to be regular. One or two reversals of the head on the work tape of a probabilistic 1-way Turing machine can sometimes be simulated in no less than const n reversals by a deterministic machine.


01 Jan 1983
TL;DR: A Turing machine with two storage tapes cannot simulate a queue in both real-time and with at least one storage tape head always within o(n) squares from the start square as mentioned in this paper.
Abstract: A Turing machine with two storage tapes cannot simulate a queue in both real-time and with at least one storage tape head always within o(n) squares from the start square. This fact may be useful for showing that a two-head tape unit is more powerful in real-time than two one-head tape units, as is commonly conjectured.

Journal ArticleDOI
TL;DR: A hierarchy of such Turing functions and numbers is established, and two classes of transcendent Turing numbers are investigated.

Journal ArticleDOI
TL;DR: The McCulloch‐Pitts formal neural net theory is after the modular neurophysiological counterpart of logical machines, so that it actually provides biologically plausible models for automata, Turing Machines etc and not viceversa.
Abstract: The significance of the McCulloch‐Pitts formal neural net theory is still frequently misunderstood, and its basic units are wrongly considered as factual models for neurons. As a consequence, the whole original theory and its later addenda are unreasonably criticized for their simplicity. But, as was proved then and since, the theory is after the modular neurophysiological counterpart of logical machines, so that it actually provides biologically plausible models for automata, Turing Machines etc., and not vice versa. In its true context, no theory has surpassed its proposals. In McCulloch and Pitts' memoriam, and for the sake of future theoretical research, we stress this important historical point, including also some recent results on the neurophysiological counterparts of modular arbitrary probabilistic automata.

DOI
01 Jan 1983
TL;DR: This is a report on the results of a competition initiated on the occasion of the 6th GI-conference on Theoretical Computer Science, held at the University of Dortmund from January 5th to 7th, 1983, which asked for the best solution of the 5-state Busy-Beaver-Game.
Abstract: This is a report on the results of a competition which was initiated on the occasion of the 6th GI-conference on Theoretical Computer Science, which took place at the University of Dortmund from January 5th to 7th, 1983. Entrants were asked for the best solution of the 5-state Busy-Beaver-Game. At first we make some historical remarks, introduce the formalism, and list some results. Then the two best solutions are described. Next we make some remarks on the behaviour of good beavers and on the strange behaviour of some Turing machines. Zoological names were given to the latter machines. The amusing results are written down in the last chapter. In the appendix you can find a lot of examples.

Journal ArticleDOI
TL;DR: This work shows the existence of a set such that it is recognized by an ONPTM with 1/2 − (log n)/8n bounded error probability in O(n) time but for every e, 0

Journal ArticleDOI
TL;DR: This paper defines a very general, realistic model of planar, digital circuits, which allows for full parallelism and addresses the issue of area-time tradeoffs and shows that the area of a circuit can always be bounded by a polynomial function of the sequential time.


Journal ArticleDOI
TL;DR: Any function f(x_1, ..., x_k) computable by a program using only comparison-based conditional forward branching instructions and the arithmetic operations +, −, and truncating division by integer constants can be computed by an off-line Turing machine in space s(n) and time n^2/s(n).
Abstract: We study the space and time complexity of functions computable by simple loop-free programs operating on integers. In particular, we show that any function f(x_1, ..., x_k) computable by a program using only comparison-based conditional forward branching instructions and the arithmetic operations +, −, and truncating division by integer constants (such programs compute exactly the functions definable in Presburger arithmetic) can be computed by an off-line Turing machine in space s(n) and time n^2/s(n) for any reasonable space bound s(n) between log n and n. Moreover, the space-time trade-off is optimal.
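The program class is concrete enough to illustrate. A hedged sketch in Python (our own toy example, not the paper's code): a loop-free program that uses only addition, subtraction, and truncating division by a constant, and so computes a Presburger-definable function, here x mod 3 for x ≥ 0.

```python
# A loop-free "Presburger program" in the paper's sense, transcribed into
# Python: only +, -, and truncating division by an integer constant are
# used, and there are no loops or general multiplications.
def mod3(x: int) -> int:
    q = x // 3           # truncating division by the constant 3
    r = x - (q + q + q)  # repeated addition instead of multiplication
    return r

print([mod3(x) for x in range(7)])  # [0, 1, 2, 0, 1, 2, 0]
```

Note that `q * 3` is deliberately avoided: general multiplication is not among the allowed operations, but adding `q` to itself a constant number of times is.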

Book ChapterDOI
01 Jan 1983
TL;DR: Under such conditions, if the algorithmic complexity of the pseudo-random input in question exceeds a threshold value, then the described Monte-Carlo method yields the correct value of the estimated measure.
Abstract: The unknown measure of a measurable subset of the unit real interval is estimated using an appropriate Monte-Carlo method. The random sample is simulated by a binary sequence of high algorithmic complexity, and the tested set is supposed to be effectively decidable by a Turing machine. Under such conditions, if the algorithmic complexity of the pseudo-random input in question exceeds a threshold value, then the described Monte-Carlo method yields the correct value of the estimated measure. Moreover, if the computation complexity of the indicator (characteristic function) of the tested set is uniformly bounded, then the mentioned threshold value can be effectively computed.
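The estimator itself is ordinary Monte-Carlo; the paper's contribution is the condition on the input sequence. A sketch in Python, with the caveat that Python's PRNG merely stands in for a sequence of high algorithmic complexity (the set, its indicator, and all names are our illustrative choices): the tested set is {x in [0, 1) : binary expansion begins 01}, decidable from two bits, with measure 1/4.

```python
import random

# Monte-Carlo estimate of the measure of a decidable subset of [0, 1).
# The PRNG is only a stand-in: the paper's correctness guarantee needs a
# pseudo-random input of sufficiently high algorithmic complexity.

def indicator(bits):
    """Decidable indicator: inspects only finitely many binary digits."""
    return bits[0] == 0 and bits[1] == 1

def estimate_measure(indicator, n_bits, n_samples, rng):
    hits = 0
    for _ in range(n_samples):
        bits = [rng.randrange(2) for _ in range(n_bits)]
        hits += indicator(bits)
    return hits / n_samples

print(estimate_measure(indicator, 2, 100_000, random.Random(0)))  # near 0.25
```

Because the indicator reads a uniformly bounded number of bits, it fits the paper's last condition, under which the complexity threshold for the input sequence is effectively computable.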


Book ChapterDOI
21 Aug 1983
TL;DR: This paper deals with the class RBQ (also sometimes called BNP) of all languages acceptable in linear time by reversal-bounded nondeterministic multitape Turing machines.
Abstract: First, we deal with the class RBQ (also sometimes called BNP) of all languages acceptable in linear time by reversal-bounded nondeterministic multitape Turing machines.


01 Jan 1983
TL;DR: The most significant characteristics of HLLI are analysed in the context of different design styles, and some guidelines are presented on how to identify the most suitable design style for a given high-level language problem.
Abstract: The problem of designing a system for high-level language interpretation (HLLI) is considered. First, a model of the design process is presented in which several styles of design, e.g. Turing machine interpretation, CISC architecture interpretation and RISC architecture interpretation, are treated uniformly. Second, the most significant characteristics of HLLI are analysed in the context of different design styles, and some guidelines are presented on how to identify the most suitable design style for a given high-level language problem.

Proceedings ArticleDOI
01 Dec 1983
TL;DR: To be considered fast, algorithms for operations on large data structures should operate in polylog time, i.e., with the number of steps bounded by a polynomial in log(N) where N is the size of the data structure.
Abstract: To be considered fast, algorithms for operations on large data structures should operate in polylog time, i.e., with the number of steps bounded by a polynomial in log(N), where N is the size of the data structure. Example: an ordered list of reasonably short strings can be searched in log^2(N) time via binary search. To measure the time and space complexity of such operations, the usual Turing machine with its serial-access input tape is replaced by a random access model. To compare such problems and define completeness, the appropriate relation is loglog reducibility: the relation generated by random-access transducers whose work tapes have length at most log(log(N)). The surprise is that instead of being a refinement of the standard log space, polynomial time, polynomial space, ... hierarchy, the complexity classes for these random-access Turing machines form a distinct parallel hierarchy, namely, polylog time, polylog space, exppolylog time, ... . Propositional truth evaluation, context-free language recognition and searching a linked list are complete for polylog space. Searching ordered lists and searching unordered lists are complete for polylog time and nondeterministic polylog time respectively. In the serial-access hierarchy, log-space reducibility is not fine enough to classify polylog-time problems, and there can be no complete problems for polylog space even with polynomial-time Turing reducibility.
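The abstract's binary-search example can be sketched with the standard library's bisect module (list contents and the function name are illustrative): O(log N) comparisons on an ordered list of short strings, in contrast to a serial scan of the whole list.

```python
import bisect

# Binary search over an ordered list of short strings: the polylog-time
# operation the abstract cites.  bisect_left does O(log N) comparisons,
# each touching one short string.
words = sorted(["ash", "birch", "cedar", "elm", "fir", "oak", "pine", "yew"])

def contains(sorted_list, key):
    i = bisect.bisect_left(sorted_list, key)
    return i < len(sorted_list) and sorted_list[i] == key

print(contains(words, "oak"), contains(words, "maple"))  # True False
```

With strings of length O(log N), each comparison costs O(log N) symbol operations, giving the log^2(N) total time quoted above; the random-access input model is what lets the search jump to the middle of the list at all.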

Book ChapterDOI
03 Oct 1983
TL;DR: While pattern-directed computation invokes transformation rules by matching their patterns to an expression and then performing the associated actions, adapter-driven computation only requires adapter/expression fittings, allowing a new representation of AI data bases, LISP functions, hypergraph operations, inference rules, and Turing machines.
Abstract: The generalization of pattern matching to adapter fitting, as implemented in the programming language FIT, is described semantically. Adapters are like patterns that contain functions which during fitting are applied to corresponding arguments contained in data instances. They are more concise, easier to read, and more efficiently implementable than equivalent LAMBDA expressions and pattern-action rules, because they can analyse data, like patterns, and manipulate them, like functions, in one sweep. Variable settings created by pairing adapter elements with data elements are treated as expressions obeying a consistent-assignment rule, generalizing the usual single-assignment. While pattern-directed computation invokes transformation rules by matching their patterns to an expression and then performing the associated actions, adapter-driven computation only requires adapter/expression fittings. This permits a new representation of AI data bases, LISP functions, hypergraph operations, inference rules (incl. Wang’s algorithm), Woods’ RTNs, and Turing machines, showing that adapter-driven computation provides an AI-oriented general computational base. The efficiency of pattern-directed and adapter-driven computation is enhanced by introducing the SECURE operator as a functional alternative to PROLOG’s cut.

Book ChapterDOI
TL;DR: It is claimed that the three notions ‘complexity, ‘bounded rationality’, and ‘problem-solving’ are intrinsically related.
Abstract: We claim that the three notions ‘complexity’, ‘bounded rationality’, and ‘problem-solving’ are intrinsically related.

01 Jan 1983
TL;DR: In this paper, the authors describe a general method for interpreting Turing machine computations in finitely axiomatizable theories, which can be used to construct finitely axiomatizable theories whose properties are determined by Turing machine computations.
Abstract: In this paper we describe a general method for interpreting Turing machine computations in finitely axiomatizable theories. The method can be used to construct finitely axiomatizable theories whose properties are determined by Turing machine computations. This method is used to prove that the Lindenbaum boolean algebra for any recursively enumerable theory T is recursively isomorphic to the Lindenbaum algebra of a suitable finitely axiomatizable theory F. In addition, the axiom system of the theory T and the recursive isomorphism of the boolean algebras of T and F can be found uniformly with respect to a recursively enumerable index for the axiom system of T. This solves a problem due to Hanf [13], given as No. 22 in Friedman's list [12]. Hanf [14] announced the solution of this problem in 1975, but no proof was ever published. We then use the above method of interpretation to study the properties of simple models. We construct a finitely axiomatizable complete theory possessing a nonconstructivizable simple model, thereby solving a problem posed by Harrington [15]. In Sec. 7 we estimate the complexity of some classes of formulas. The interpretations are based on the author's construction of a complete finitely axiomatizable superstable theory. The models for this theory