
Showing papers on "Computability published in 1989"


Book
01 Jun 1989
TL;DR: A book on the theory and practice of neural computing.

1,848 citations


Book
01 Jan 1989
TL;DR: This book represents the first treatment of computable analysis at the graduate level within the tradition of classical mathematical reasoning and is sufficiently detailed to provide an introduction to research in this area.
Abstract: This book represents the first treatment of computable analysis at the graduate level within the tradition of classical mathematical reasoning. Among the topics dealt with are: classical analysis, Hilbert and Banach spaces, bounded and unbounded linear operators, eigenvalues, eigenvectors, and equations of mathematical physics. The book is self-contained, and yet sufficiently detailed to provide an introduction to research in this area.

871 citations



Proceedings ArticleDOI
30 Oct 1989
TL;DR: In this paper, a graph-theoretic analog of the Myhill-Nerode characterization of regular languages is proved, which is used to establish that for many applications obstruction sets are computable by known algorithms.
Abstract: A theorem that is a graph-theoretic analog of the Myhill-Nerode characterization of regular languages is proved. The theorem is used to establish that for many applications obstruction sets are computable by known algorithms. The focus is exclusively on what is computable (by a known algorithm) in principle, as opposed to what is computable in practice.
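The classical Myhill-Nerode characterization that the paper generalizes to graphs can be illustrated on its original territory: two DFA states are equivalent when no suffix distinguishes them, and partition refinement computes the equivalence classes. A minimal sketch under that classical reading (the DFA and all names below are hypothetical, not from the paper):

```python
def minimize(states, alphabet, delta, accept):
    """Return the Myhill-Nerode equivalence classes of DFA states."""
    # Initial partition: accepting vs. non-accepting states.
    partition = [b for b in (set(accept), set(states) - set(accept)) if b]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Group states by which block each input symbol sends them to.
            groups = {}
            for q in block:
                key = tuple(
                    next(i for i, b in enumerate(partition) if delta[q][a] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(q)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# DFA over {0,1} accepting strings with an even number of 1s,
# plus a redundant state 'e2' that behaves exactly like the start state.
delta = {
    'e':  {'0': 'e', '1': 'o'},
    'o':  {'0': 'o', '1': 'e'},
    'e2': {'0': 'e', '1': 'o'},
}
classes = minimize(['e', 'o', 'e2'], ['0', '1'], delta, ['e', 'e2'])
```

Refinement stops as soon as no block splits, so `classes` collapses the redundant state into the class of the start state.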

113 citations


Proceedings ArticleDOI
14 May 1989
TL;DR: It is shown that, as long as the envelope of trajectories generated by the control system can be described algebraically, there is an effective procedure for deciding if a successful n-step plan exists, and the proposed method makes use of recognizable sets as subgoals for multistep planning.
Abstract: It is shown that fine motion plans in the LMT framework developed by T. Lozano-Perez, M. Mason and R. Taylor (1984) are computable, and an algorithm for computing them by reducing fine-motion planning to an algebraic decision problem is presented. Fine-motion planning involves planning a successful motion of a robot at the fine scale of assembly operations, where control and sensor uncertainty are significant. It is shown that, as long as the envelope of trajectories generated by the control system can be described algebraically, there is an effective procedure for deciding if a successful n-step plan exists. The proposed method makes use of recognizable sets as subgoals for multistep planning. These sets are finitely parameterizable, and it is shown that they are the only sets that need be considered as subgoals. Unfortunately, if the full generality of the LMT framework is used, finding a fine-motion plan can take time double exponential in the number of plan steps.

90 citations


Journal ArticleDOI
TL;DR: In this article, the assumption that agents are boundedly rational is made operational by imposing computability constraints on the economy: all equilibrium price functions or forecasts of future equilibrium prices are required to be computable.
Abstract: In this paper we consider how boundedly rational agents learn rational expectations. The assumption that agents are boundedly rational is made operational by imposing computability constraints on the economy: all equilibrium price functions or forecasts of future equilibrium prices are required to be computable. Computable functions are defined, as in the computer science literature, as functions whose values can be calculated using some finite algorithm. The paper examines two learning environments. In the first, agents have perfect information about the state of nature. In this case, the theory of machine inference can be applied to show that there is a broad class of computable economies whose rational expectations equilibria can be learned by inductive inference. In the second environment, agents do not have perfect information about the state of nature. In this case, a version of Gödel's incompleteness theorem applicable to the theory of computable functions yields the conclusion that rational expectations equilibria cannot be learned.

86 citations


Journal ArticleDOI
TL;DR: A calculus is developed that can be used in verifying that lists defined by l where l = f(l) are productive, and the power and the usefulness of the theory are demonstrated by several nontrivial examples.
Abstract: Several related notions of productivity are presented for functional languages with lazy evaluation. The notion of productivity captures the idea of computability, of progress, for infinite-list programs. If an infinite-list program is productive, then every element of the list can be computed in finite “time.” These notions are used to study recursive list definitions, that is, lists defined by l where l = f(l). Sufficient conditions are given in terms of the function f that guarantee either the productivity or the unproductivity of the list. Furthermore, a calculus is developed that can be used in verifying that lists defined by l where l = f(l) are productive. The power and the usefulness of the theory are demonstrated by several nontrivial examples. Several observations are given in conclusion.
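The flavor of such a recursive list definition l = f(l) can be sketched with a memoized self-referential stream; `fix_stream` and `fib_rule` below are illustrative names, not the paper's calculus:

```python
from itertools import islice

def fix_stream(f):
    """Kleene-style l = f(l): a memoized self-referential stream.
    f(nth, n) may consult strictly earlier elements via nth(i)."""
    memo = []
    def nth(i):
        while len(memo) <= i:
            memo.append(f(nth, len(memo)))
        return memo[i]
    def gen():
        i = 0
        while True:
            yield nth(i)
            i += 1
    return gen()

# Productive: element n depends only on strictly earlier elements,
# so each element arrives after finitely many steps.
def fib_rule(nth, n):
    return n if n < 2 else nth(n - 1) + nth(n - 2)

fibs = fix_stream(fib_rule)
first = list(islice(fibs, 7))   # [0, 1, 1, 2, 3, 5, 8]

# An unproductive rule such as `lambda nth, n: nth(n + 1)` (the analog
# of l = tail(l)) demands elements of l that have not been produced yet
# and would loop forever -- the situation the paper's calculus rules out.
```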

81 citations


Book
11 Jan 1989
TL;DR: This book covers formal languages and automata, computability (including recursive function theory and primitive and partial recursive functions), and complexity.
Abstract: Preliminaries Review of Set Theory Grammatical Basis of Language Translation Historical Background Preview of the Remaining Text Chapter Review Problems Finite Automata and Regular Languages Lexical Analysis Deterministic Finite Automata The Limits of Deterministic Finite Automata Nondeterministic Finite Automata Regular Grammars Regular Expressions Closing Comments Chapter Review Problems Pushdown Automata and Context-Free Languages Pushdown Automata Context-Free Grammars The Limits of Pushdown Automata LL(k) Parsers LR(k) Parsers Turing Machines and Phrase-Structure Languages Turing Machines Modular Construction of Turing Machines Turing Machines as Language Accepters Turing-Acceptable Languages Beyond Phrase-Structure Languages Closing Comments Chapter Review Problems Computability Foundations of Recursive Function Theory The Scope of Primitive Recursive Functions Partial Recursive Functions The Power of Programming Languages Closing Comments Chapter Review Problems Complexity Computations The Complexity of Algorithms The Complexity of Problems Time Complexity of Language Recognition Problems Time Complexity of Nondeterministic Machines Closing Comments Chapter Review Problems Appendices A More About Constructing LR(1) Parse Tables B More About Ackermann's Function C Some Important Unsolvable Problems D On the Complexity of the String Comparison Problem E A Sampling of NP Problems Additional Reading

78 citations


Book
01 Jan 1989
TL;DR: A textbook by Sunitha and Kalyani on formal languages and automata theory, covering formal grammars, regular expressions, and finite automata.

44 citations


Proceedings ArticleDOI
30 Oct 1989
TL;DR: The authors define a simple typed while-programming language that generalizes the sort of simple language used in computability texts to define the familiar numerical computable functions and corresponds roughly to the μ-recursion of R.O. Gandy (1967).
Abstract: The authors define a simple typed while-programming language that generalizes the sort of simple language used in computability texts to define the familiar numerical computable functions and corresponds roughly to the μ-recursion of R.O. Gandy (1967). This language does not fully capture the notion of higher-type computability. The authors define run times for their programs and prove that the feasible functionals of S. Cook and A. Urquhart (1988) are precisely those functionals computable by typed while-programs with feasibly length-bounded run times. The authors introduce the notion of a bounded typed loop program and prove that a finite-type functional is feasible if it is computable by a bounded typed loop program.
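The distinction the paper draws can be illustrated in miniature: an unbounded while-loop gives μ-recursion-style search that may diverge, whereas a loop iterated at most a fixed bound always terminates. This is only a sketch of the flavor of the definitions, not the authors' typed language:

```python
def mu(p):
    """Unbounded minimization: the least n with p(n) true -- the
    while-loop core of mu-recursion. Diverges if no such n exists."""
    n = 0
    while not p(n):
        n += 1
    return n

def bounded_loop(body, bound, acc):
    """Bounded loop: iterate body at most `bound` times. Always
    terminates, mirroring the restriction behind bounded typed
    loop programs."""
    for _ in range(bound):
        acc = body(acc)
    return acc

root = mu(lambda n: n * n >= 30)          # least n with n^2 >= 30, i.e. 6
ten = bounded_loop(lambda x: x + 1, 10, 0)  # increment 10 times from 0
```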

44 citations


Journal ArticleDOI
TL;DR: It is proved, under a hypothesis of determinism, that the analytic outputs of a C∞ GPAC are computable by a digital computer.
Abstract: Church's thesis, that all reasonable definitions of “computability” are equivalent, is not usually thought of in terms of computability by a continuous computer, of which the general-purpose analog computer (GPAC) is a prototype. Here we prove, under a hypothesis of determinism, that the analytic outputs of a C∞ GPAC are computable by a digital computer. In [POE, Theorems 5, 6, 7, and 8], Pour-El obtained some related results. (The proof there of Theorem 7 depends on her Theorem 2, for which the proof in [POE] is incorrect, but for which a correct proof is given in [LIR]. Also, the proof in [POE] of Theorem 8 depends on the unproved assertion that a solution of an algebraic differential equation must be analytic on an open subset of its domain. However, this assertion was later proved in [BRR].) As in [POE], we reduce the problem to a problem about solutions of certain systems of algebraic differential equations (ADE's). If such a system is nonsingular (i.e. if the “separant” does not vanish along the given solution), then the argument is very easy (see [VSD] for an even simpler situation), so that the essential difficulties arise from singular systems. Our main tools in handling these difficulties are drawn from the excellent (and difficult) paper [DEL] by Denef and Lipshitz. The author especially wants to thank Leonard Lipshitz for his kind help in the preparation of the present paper. We emphasize here that our proof of the simulation result applies only to the GPAC as described below. The GPAC's form a natural subclass of the class of all analog computers, and are based on certain idealized components (“black boxes”), mostly associated with the technology of past decades. One can easily envisage other kinds of black boxes of an input-output character that would lead to different kinds of analog computers. (For example, one could incorporate delays, or spatial integrators in addition to the present temporal integrators, etc.)
Whether digital simulation is possible for these “extended” analog computers poses a rich and challenging set of research questions.
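The kind of digital simulation at stake can be sketched by discretizing a two-integrator GPAC circuit that generates sin/cos via y1' = y2, y2' = -y1; the Euler step below is an illustrative numerical choice, not the paper's construction:

```python
import math

def simulate(t_end, h=1e-4):
    """Digitally simulate a two-integrator GPAC for sin/cos by
    discretizing time with a forward Euler step of size h."""
    y1, y2 = 0.0, 1.0          # y1 = sin(t), y2 = cos(t) at t = 0
    t = 0.0
    while t < t_end:
        y1, y2 = y1 + h * y2, y2 - h * y1
        t += h
    return y1

approx = simulate(math.pi / 2)   # should approximate sin(pi/2) = 1
```

The global error of this scheme shrinks with h, which is the discrete computer's handle on the continuous machine's output.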

Proceedings ArticleDOI
Garzon, Franklin
01 Jan 1989
TL;DR: The authors show that neural networks are at least as powerful as cellular automata and that the converse is true for finite networks, and suggest that the full classes are probably identical.
Abstract: The authors present a general framework within which the computability of solutions to problems by various types of automata networks (neural networks and cellular automata included) can be compared and their complexity analyzed. Problems solvable by global dynamics of neural networks, cellular automata, and, in general, automata networks are studied as self-maps of the Cantor set. The theory derived from this approach generalizes classical computability theory; it allows a precise definition of equivalent models and thus a meaningful comparison of the computational power of these models. The authors show that neural networks are at least as powerful as cellular automata and that the converse is true for finite networks. Evidence indicates that the full classes are probably identical. The proofs of these results rely on the existence of a universal neural network, of interest in its own right.
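One direction of the comparison can be made concrete: a two-layer threshold (McCulloch-Pitts) network that computes one step of elementary cellular automaton rule 110. This is an illustrative construction in the spirit of the result, not the authors' universal network:

```python
RULE = 110
# Neighborhood patterns (l, c, r) on which rule 110 outputs 1:
# pattern k = 4l + 2c + r is "on" iff bit k of 110 is set.
ON_PATTERNS = [
    tuple(int(b) for b in format(k, '03b'))
    for k in range(8) if (RULE >> k) & 1
]

def step(q):
    """A single threshold unit: fires iff its net input is >= 0."""
    return 1 if q >= 0 else 0

def ca_step_neural(cells):
    """One CA step on a ring, computed cell-by-cell with a
    two-layer threshold network (AND units feeding an OR unit)."""
    n = len(cells)
    out = []
    for i in range(n):
        l, c, r = cells[i - 1], cells[i], cells[(i + 1) % n]
        hidden = []
        for pat in ON_PATTERNS:
            # Weights +1 where the pattern expects 1, -1 where it expects 0;
            # the unit fires iff the neighborhood matches `pat` exactly.
            w = [1 if b else -1 for b in pat]
            s = w[0] * l + w[1] * c + w[2] * r
            hidden.append(step(s - sum(pat)))
        # Output unit: OR of the hidden pattern detectors.
        out.append(step(sum(hidden) - 1))
    return out
```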

Book
01 Jan 1989
TL;DR: A comprehensive treatment of classical recursion theory, covering computability, Post's problem and strong reducibilities, hierarchies and weak reducibilities, and the theory of Turing and other degrees.
Abstract: Recursiveness and Computability. Induction. Systems of Equations. Arithmetical Formal Systems. Turing Machines. Flowcharts. Functions as Rules. Arithmetization. Church's Thesis. Basic Recursion Theory. Partial Recursive Functions. Diagonalization. Partial Recursive Functionals. Effective Operations. Indices and Enumerations. Retraceable and Regressive Sets. Post's Problem and Strong Reducibilities. Post's Problem. Simple Sets and Many-One Degrees. Hypersimple Sets and Truth-Table Degrees. Hyperhypersimple Sets and Q-Degrees. A Solution to Post's Problem. Creative Sets and Completeness. Recursive Isomorphism Types. Variations of Truth-Table Reducibility. The World of Complete Sets. Formal Systems and R.E. Sets. Hierarchies and Weak Reducibilities. The Arithmetical Hierarchy. The Analytical Hierarchy. The Set-Theoretical Hierarchy. The Constructible Hierarchy. Turing Degrees. The Language of Degree Theory. The Finite Extension Method. Baire Category. The Coinfinite Extension Method. The Tree Method. Initial Segments. Global Properties. Degree Theory with Jump. Many-One and Other Degrees. Distributivity. Countable Initial Segments. Uncountable Initial Segments. Global Properties. Comparison of Degree Theories. Structure Inside Degrees. Bibliography. Index.

01 Jan 1989
TL;DR: An asymptotically optimal geometric scheduling scheme that works for any RIA is developed and the so-called computability tree is analyzed to determine linear subspaces in the index space of a given RIA such that all variables lying on the same subspace can be scheduled at the same time.
Abstract: Regular Iterative Algorithms (RIAs) can be used to solve problems in a wide variety of areas including signal processing, matrix algebra, and combinatorics. Notably, RIAs include the class of algorithms implementable on systolic arrays. An attractive feature of RIAs is that these algorithms can be efficiently implemented on locally connected arrays of essentially identical processor modules, with register pipelines of various lengths and/or LIFO buffers in some of the links. Although efficient procedures for analyzing and implementing RIAs on regular processor arrays have recently been developed, issues such as optimal scheduling and parallel implementation of any general RIA have not been fully resolved. Moreover, algorithms are seldom available as RIAs and there are no systematic procedures for deriving RIAs from higher level descriptions of algorithms. In this thesis, some previous work by Karp, Miller, and Winograd (1967) and more recently by Rao, Jagadish and Kailath (1985) is extended to solve the problem of optimal scheduling and parallel implementation of RIAs. It is demonstrated that any RIA defined over a bounded or semi-infinite index space can be scheduled and mapped on to regular processor arrays by solving a set of integer programming problems with a small number of variables. An asymptotically optimal geometric scheduling scheme that works for any RIA is developed; in particular, we analyze the so-called computability tree to determine linear subspaces in the index space of a given RIA such that all variables lying on the same subspace can be scheduled at the same time. Next, procedures for converting algorithms described in fairly general terms (e.g., mathematical expressions) to RIAs are discussed. A class of algorithms called Assignment Codes (ACs) is precisely defined and formal procedures for converting linearly indexed ACs to RIAs are presented. Procedures to directly schedule linearly indexed codes are also discussed.

Journal ArticleDOI
TL;DR: In this article, a theory of physics and cosmology based on the five principles of finiteness, discreteness, finite computability, absolute non-uniqueness, and strict construction is proposed.
Abstract: We base our theory of physics and cosmology on the five principles of finiteness, discreteness, finite computability, absolute non-uniqueness, and strict construction. Our modeling methodology starts from the current practice of physics, constructs a self-consistent representation based on the ordering operator calculus and provides rules of correspondence that allow us to test the theory by experiment. We use program universe to construct a growing collection of bit strings whose initial portions (labels) provide the quantum numbers that are conserved in the events defined by the construction. The labels are followed by content strings which are used to construct event-based finite and discrete coordinates. On general grounds such a theory has a limiting velocity, and positions and velocities do not commute. We therefore reconcile quantum mechanics with relativity at an appropriately fundamental stage in the construction. We show that events in different coordinate systems are connected by the appropriate finite and discrete version of the Lorentz transformation, that 3-momentum is conserved in events, and that this conservation law is the same as the requirement that different paths can “interfere” only when they differ by an integral number of deBroglie wavelengths.

Journal ArticleDOI
TL;DR: This paper shall study the dependence structure of RIAs that are not systolic; examples of such RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms.
Abstract: The study of Regular Iterative Algorithms (RIAs) was introduced in a seminal paper by Karp, Miller, and Winograd in 1967. In more recent years, the study of systolic architectures has led to a renewed interest in this class of algorithms, and the class of algorithms implementable on systolic arrays (as commonly understood) has been identified as a precise subclass of RIAs. In this paper, we shall study the dependence structure of RIAs that are not systolic; examples of such RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms. It has been shown that the so-called hyperplanar scheduling for systolic algorithms can no longer be used to schedule and implement non-systolic RIAs. Based on the analysis of a so-called computability tree we generalize the concept of hyperplanar scheduling and determine linear subspaces in the index space of a given RIA such that all variables lying on the same subspace can be scheduled at the same time. This subspace scheduling technique is shown to be asymptotically optimal, and formal procedures are developed for designing processor arrays that will be compatible with our scheduling schemes. Explicit formulas for the schedule of a given variable are determined whenever possible; subspace scheduling is also applied to obtain lower dimensional processor arrays for systolic algorithms.
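The hyperplanar scheduling that the paper generalizes can be sketched as a search for a vector λ with λ·d ≥ 1 for every dependence vector d, so each computation waits for the values it depends on; when no such λ exists (the non-systolic case), the paper's subspace scheduling is needed. A toy search, with hypothetical names:

```python
from itertools import product

def valid_schedule(lam, deps):
    """lam is a valid linear (hyperplanar) schedule iff every
    dependence vector is delayed by at least one time step."""
    return all(sum(l * di for l, di in zip(lam, d)) >= 1 for d in deps)

def find_schedule(deps, bound=3):
    """Brute-force a small integer scheduling vector, if one exists."""
    for lam in product(range(-bound, bound + 1), repeat=len(deps[0])):
        if valid_schedule(lam, deps):
            return lam
    return None   # no hyperplanar schedule: the non-systolic case

# Systolic-style dependences in a 2-D index space: t(i,j) = i + j works.
lam = find_schedule([(1, 0), (0, 1)])

# Opposing dependences admit no linear schedule at all, the kind of
# structure that motivates subspace scheduling.
no_sched = find_schedule([(1, 0), (-1, 0)])
```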

01 Jan 1989
TL;DR: A fully described program in Pascal has been included and serves a two-fold purpose: it presents concrete practical solutions, which encourages the reader to try other strategies, and it makes it possible to check the correctness of different theoretic concepts emerging from the Method of Maximal Adjacencies.
Abstract: One of the central problems in the physical realization of sequential machines is the selection of binary codes to represent the internal states of the machine. The Method of Maximal Adjacencies can be viewed as an approach to the state assignment problem. This research report concentrates on simple, practical strategies to implement that method. A fully described program in Pascal has been included and serves a two-fold purpose: (1) it presents concrete practical solutions, which encourages the reader to try other strategies on their own; (2) it has been conceived from a general standpoint that makes it possible to check the correctness of different theoretic concepts emerging from the Method of Maximal Adjacencies. A set of industrial sequential machines has been chosen to test the program. Results from other existing methods have also been reported.
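A toy rendition of the adjacency-driven idea (a greedy, hypothetical sketch; the report's Pascal program is far more elaborate): pairs of states that should receive adjacent codes are placed on codes at Hamming distance 1 first, and the remaining states take whatever codes are left.

```python
def hamming(a, b):
    """Hamming distance between two binary codes given as ints."""
    return bin(a ^ b).count('1')

def assign(states, pairs, bits=2):
    """Greedily give each pair in `pairs` Hamming-adjacent codes,
    then fill in the remaining states with leftover codes."""
    codes = {}
    free = set(range(2 ** bits))
    for s, t in pairs:
        if s in codes and t in codes:
            continue
        if s in codes or t in codes:
            anchor, other = (s, t) if s in codes else (t, s)
            for c in sorted(free):
                if hamming(c, codes[anchor]) == 1:
                    codes[other] = c
                    free.remove(c)
                    break
        else:
            done = False
            for c1 in sorted(free):
                for c2 in sorted(free):
                    if hamming(c1, c2) == 1:
                        codes[s], codes[t] = c1, c2
                        free -= {c1, c2}
                        done = True
                        break
                if done:
                    break
    for s in states:
        if s not in codes:
            c = min(free)
            free.remove(c)
            codes[s] = c
    return codes

# States A/B and A/C frequently transition, so they want adjacent codes.
codes = assign(['A', 'B', 'C', 'D'], [('A', 'B'), ('A', 'C')])
```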

Book ChapterDOI
01 Feb 1989
TL;DR: In this paper genuine complexity classes for given operation sets S are defined, following ideas due to Karpinski and the author from [KaM 88], and results on genuine computability, lower bound methods, as well as examples for complexity gaps and separated complexity classes are surveyed.
Abstract: This survey paper presents a complexity theoretical approach to genuinely time-bounded computations. Such computations are executed by random access machines with a given set \(S \subseteq \{ + , - , *, DIV, ...\}\) of arithmetic operations. The uniform cost measure is assumed, and the input is given integer by integer, not bit by bit. “Genuinely” (also called “strongly” in the literature) means that we measure the time complexity T(n) as the worst case runtime over all inputs consisting of n integers (not of n bits). Computability and complexity now heavily depend on the operation set S. In this paper genuine complexity classes for given operation sets S are defined, following ideas due to Karpinski and the author from [KaM 88]. Furthermore, results on genuine computability, lower bound methods, as well as examples for complexity gaps and separated complexity classes are surveyed.

Book ChapterDOI
26 Feb 1989
TL;DR: Some preliminary developments toward a computational theory of systems are presented.
Abstract: Some preliminary developments toward a computational theory of systems are presented.


DOI
21 Feb 1989
TL;DR: This paper settles, in the negative, James Peterson's conjecture that any significant extension of the Petri net model tends to be equivalent to a Turing machine.
Abstract: A new class of "concurrent automata" is constructed. These automata are extensions of pushdown automata and Petri nets. They lie properly between ordinary Petri nets and Turing machines. In the September 1977 issue of Computing Surveys, James Peterson states that "any significant extension of the Petri net model tends to be equivalent to a Turing machine." This paper settles his conjecture in the negative. The formal languages accepted/recognized by these automata lie properly between the context-free languages and the context-sensitive languages.
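The baseline model being extended can be sketched in a few lines. A plain Petri net transition can test that a place holds at least one token, but not that it holds exactly zero; it is that kind of limitation that separates ordinary nets from Turing machines and that extensions work around. Names below are illustrative:

```python
def enabled(marking, transition):
    """A transition fires only if every input place has enough tokens."""
    pre, _ = transition
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, transition):
    """Consume tokens from input places, produce tokens in output places."""
    pre, post = transition
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m[p] - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Producer/consumer fragment: t1 moves one token from 'free' to 'full'.
t1 = ({'free': 1}, {'full': 1})
m0 = {'free': 2, 'full': 0}
m1 = fire(m0, t1)
```

Note there is no way to write a `pre` condition meaning "place `full` is empty"; that zero-test is exactly what plain nets lack.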


01 Jan 1989
TL;DR: In this paper, a fixpoint semantics for Horn programs containing both relations and rewriting rules is defined, and two problems are studied: the computability of the fixpoint in the presence of an infinite universe, and the completeness of the semantics restricted to interpretations containing only bounded-depth terms.
Abstract: This work is devoted to the integration of functions in Datalog. Functions are defined with a rewrite relation. We define a fixpoint semantics for Horn programs containing both relations and rewriting rules. The principal contribution is the bounded semantics. We study the following two problems: the computability of the fixpoint in the presence of an infinite universe, and the completeness of the semantics restricted to interpretations containing only bounded-depth terms. We prove that both problems are undecidable in general, but decidable subcases are presented.
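The relational half of such a fixpoint semantics can be illustrated by naive bottom-up evaluation of a transitive-closure program (path(x,z) :- path(x,y), path(y,z) over a set of base edges); with a finite universe the iteration reaches the least fixpoint in finitely many steps. This is a generic Datalog sketch, not the paper's bounded semantics:

```python
def fixpoint(edges):
    """Naive bottom-up evaluation: apply the transitivity rule until
    no new facts appear, returning the least fixpoint."""
    path = set(edges)
    while True:
        new = {(x, z) for (x, y1) in path for (y2, z) in path if y1 == y2}
        if new <= path:            # nothing new derived: fixpoint reached
            return path
        path |= new

facts = fixpoint({('a', 'b'), ('b', 'c'), ('c', 'd')})
```

With an infinite universe (e.g. term-building function symbols) this loop need not terminate, which is the computability problem the paper studies.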