
Showing papers on "Computability published in 1980"


Book
30 Jun 1980
TL;DR: The author explains how theorems such as Gödel's incompleteness theorem and the second recursion theorem apply within the theory of computable functions.
Abstract: What can computers do in principle? What are their inherent theoretical limitations? These are questions to which computer scientists must address themselves. The theoretical framework which enables such questions to be answered has been developed over the last fifty years from the idea of a computable function: intuitively, a function whose values can be calculated in an effective or automatic way. This book is an introduction to computability theory (or recursion theory, as it is traditionally known to mathematicians). Dr Cutland begins with a mathematical characterisation of computable functions using a simple idealised computer (a register machine); after some comparison with other characterisations, he develops the mathematical theory, including a full discussion of non-computability and undecidability, and the theory of recursive and recursively enumerable sets. The later chapters provide an introduction to more advanced topics such as Gödel's incompleteness theorem, degrees of unsolvability, the recursion theorems and the theory of complexity of computation. Computability is thus a branch of mathematics which is of relevance also to computer scientists and philosophers. Mathematics students with no prior knowledge of the subject and computer science students who wish to supplement their practical expertise with some theoretical background will find this book of use and interest.
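
The register machine Cutland uses is concrete enough to sketch directly. Below is a minimal Python interpreter for an unlimited register machine with the book's four instruction types (zero, successor, transfer, jump); the addition program is a standard exercise, and the step budget is my own guard, not part of the model.

```python
# A minimal sketch of an unlimited register machine (URM) in Cutland's style:
# instructions Z(n), S(n), T(m, n), J(m, n, q). Registers are 1-indexed,
# hold natural numbers, and the result is read from R1 by convention.

def run_urm(program, inputs, max_steps=10_000):
    regs = {}                      # sparse register file, default value 0
    for i, x in enumerate(inputs, start=1):
        regs[i] = x
    pc = 0                         # program counter (0-based internally)
    for _ in range(max_steps):     # guard: URM programs need not halt
        if pc >= len(program):
            return regs.get(1, 0)  # halted: output in R1
        op, *args = program[pc]
        if op == "Z":              # Z(n): R_n := 0
            regs[args[0]] = 0
        elif op == "S":            # S(n): R_n := R_n + 1
            regs[args[0]] = regs.get(args[0], 0) + 1
        elif op == "T":            # T(m, n): R_n := R_m
            regs[args[1]] = regs.get(args[0], 0)
        elif op == "J":            # J(m, n, q): jump to instruction q if R_m = R_n
            m, n, q = args
            if regs.get(m, 0) == regs.get(n, 0):
                pc = q - 1         # instructions are 1-indexed in the book
                continue
        pc += 1
    raise RuntimeError("step budget exhausted (program may not halt)")

# x + y: count R3 up from 0 to R2, incrementing R1 each time.
ADD = [("J", 3, 2, 5), ("S", 1), ("S", 3), ("J", 1, 1, 1)]
print(run_urm(ADD, [3, 4]))        # -> 7
```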

410 citations


Journal ArticleDOI
TL;DR: It is shown that any finite causal and lossless real multivariate system can be realized as a computationally compatible cascade of real degree 1 and degree 2 factors consisting of an orthogonal constant interconnecting part and a lossless dynamic portion.
Abstract: A generally applicable theory for multiport orthogonal digital filter synthesis is discussed, using a direct algebraic method. Standard (complex) degree 1 and irreducible (real) degree 2 chain scattering matrices are derived and the computability problem which normally occurs in such filters is discussed and solved for the general case. It is shown that any finite causal and lossless real multivariate system can be realized as a computationally compatible cascade of real degree 1 and degree 2 factors consisting of an orthogonal constant interconnecting part and a lossless dynamic portion. In order to increase the practical interest of the paper, realizations of these sections are explicitly given. The known scalar wave digital filter ‘adaptors’ emerge as a special case of the theory presented in this paper. The general results are of interest in multiport digital signal processing, in multivariate interpolation theory and in multichannel prediction and modelling theory and techniques.
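
As a toy illustration of the building blocks involved (my own sketch; the paper's degree 1 and degree 2 sections are derived from chain scattering matrices and are not reproduced here), the orthogonal constant interconnecting part can be modelled by a Givens rotation and the lossless dynamic portion by a unit delay, with losslessness visible as an exact energy balance:

```python
# Toy "section": a 2x2 Givens rotation (orthogonal constant part) followed
# by a unit delay on one output (dynamic part). Orthogonality makes the
# section lossless: output energy equals input energy, up to the one sample
# still stored in the delay element.
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])   # orthogonal: R.T @ R = I

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 64))         # two input channels
y = rotation(0.7) @ x                    # constant orthogonal interconnection
y[1] = np.roll(y[1], 1)                  # unit delay on the second channel
y[1, 0] = 0.0                            # delay starts from a zero state

stored = (rotation(0.7) @ x)[1, -1]      # sample held inside the delay
print(np.allclose(np.sum(x**2), np.sum(y**2) + stored**2))   # -> True
```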

117 citations


Book
01 Jan 1980
TL;DR: This book studies the maximal type structure and the countable functionals, treats Ct(n) as a topological space, and compares computability with recursion for the computable structure on Ct(k).
Abstract: Contents:
The maximal type structure
The countable functionals
Ct(n) as a topological space
Computability vs recursion
The computable structure on Ct(k)
Sections
Some further results and topics

93 citations



Journal ArticleDOI
TL;DR: To compute the result of composing functions and to solve fixpoint equations, this work uses a special kind of network of parallel processes, built up in a modular way, and proves that every computable function is continuous.
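
The fixpoint equations mentioned are classically solved by Kleene iteration, applying the function repeatedly from the least element until the chain stabilises. A minimal sketch on a finite powerset lattice (illustrative only; the paper's point is to organise this computation as a network of parallel processes):

```python
# Least fixpoint by Kleene iteration on the powerset lattice (ordered by
# inclusion): lfp(f) is the limit of the chain {} <= f({}) <= f(f({})) <= ...
# Terminates whenever f is monotone and the lattice has no infinite chains.

def lfp(f):
    x = frozenset()              # bottom element of the lattice
    while True:
        y = f(x)
        if y == x:               # chain has stabilised: x is the least fixpoint
            return x
        x = y

# Example: graph reachability as a fixpoint equation
#   R = {1} ∪ { w : v ∈ R, (v, w) an edge }
edges = {1: [2], 2: [3], 3: [3], 4: [1]}
f = lambda r: r | {1} | {w for v in r for w in edges.get(v, [])}
print(sorted(lfp(f)))            # -> [1, 2, 3]
```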

50 citations


Book ChapterDOI
01 Jan 1980
TL;DR: In this article, the authors present a list of open problems concerning regular languages and finite automata, including the star height problem, which has been studied extensively in theoretical computer science.
Abstract: Publisher Summary The theory of regular languages and finite automata was developed in the early 1950s and is one of the oldest branches of theoretical computer science. Regular languages constitute the best known family of formal languages, and finite automata constitute the best known family of abstract machine models. The concepts of regular languages and finite automata appear frequently in theoretical computer science and have several important applications. There is a vast literature on these subjects. Despite the fact that many researchers have worked in this field, there remain several difficult open problems. The chapter discusses six of these problems. These problems are of fundamental importance and considerable difficulty. Most of them are intimately involved with the fundamental property of finite automata, namely finiteness. In a monograph published in 1971, McNaughton and Papert included a collection of open problems concerning regular languages. Their list is headed by the star height problem, and to date no progress has been made on this intriguing question. The known bounds on star height apply only to languages whose syntactic monoids are groups; in that case, the corresponding semiautomata are permutation semiautomata.
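
For reference, the star height h(e) of a regular expression is defined by the standard recursion below (a textbook definition, not quoted from the chapter); the star height problem asks for an algorithm computing the least h(e) over all expressions e denoting a given regular language.

```latex
h(\emptyset) = h(\varepsilon) = h(a) = 0 \qquad (a \in \Sigma) \\
h(e_1 \cdot e_2) = h(e_1 + e_2) = \max\{\, h(e_1),\, h(e_2) \,\} \\
h(e^{*}) = h(e) + 1
```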

47 citations



Journal ArticleDOI
TL;DR: The simplification is based on two new lemmas that are of independent interest; the second makes it possible to guarantee the Church-Rosser property under very weak assumptions.
Abstract: In /2/ a certain type of bases ("Gröbner bases") for polynomial ideals was introduced, whose usefulness stems from the fact that a number of important computability problems in the theory of polynomial ideals are reducible to the construction of bases of this type. The key to an algorithmic construction of Gröbner bases is a characterization theorem for Gröbner bases whose proof in /2/ is rather complex. In this paper a simplified proof is given. The simplification is based on two new lemmas that are of some interest in themselves. The first lemma characterizes the congruence relation modulo a polynomial ideal as the reflexive-transitive closure of a particular reduction relation ("M-reduction"), used in the definition of Gröbner bases, and of its inverse. The second lemma is a lemma on general reduction relations which makes it possible to guarantee the Church-Rosser property under very weak assumptions.
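
The reduction in the first lemma is also how the computability problems get solved in practice: congruence modulo the ideal is decided by reducing to a normal form with respect to a Gröbner basis. A short demonstration with sympy (the library's API, not the paper's notation; the example ideal is my own):

```python
# Deciding ideal membership (equivalently, congruence modulo the ideal) by
# reduction to normal form with respect to a Gröbner basis: f is in I iff
# f reduces to 0 modulo a Gröbner basis of I.
from sympy import groebner, symbols

x, y = symbols("x y")
gens = [x**2 + y**2 - 1, x*y - 1]        # generators of the ideal I
G = groebner(gens, x, y, order="lex")    # basis via Buchberger's algorithm

# f built as an explicit combination of the generators, so f is in I:
f = y*(x**2 + y**2 - 1) + x*(x*y - 1)

quotients, remainder = G.reduce(f)       # normal form of f modulo G
print(remainder == 0)                    # -> True: f reduces to 0
print(G.contains(f))                     # same membership test, built in
```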

21 citations


Journal ArticleDOI
TL;DR: The notion of simple loop programs is introduced, by which the class L, which coincides with the class of Kalmár's elementary functions, is characterized; two hierarchies, B_k and Exp_k, are also characterized as the classes of languages recognizable in O(n^k) time and g_k(p(n)) time, respectively.
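
Loop programs in the Meyer-Ritchie style are easy to make concrete (a sketch of the general flavour, assuming the standard constructs; the paper's "simple" loop programs impose further restrictions not modelled here): the body of LOOP x executes exactly x times, so every program halts, and nesting depth bounds the growth of the computed function, with depth 2 already reaching exponentiation.

```python
# LOOP-style programs: the only control structure is "repeat the body
# r times" for a register r, so termination is guaranteed and nesting
# depth bounds growth rate.

def loop_mult(x, y):
    """Multiplication as a depth-2 loop program."""
    r0 = 0
    for _ in range(x):        # LOOP x
        for _ in range(y):    #   LOOP y
            r0 = r0 + 1       #     r0 := r0 + 1
    return r0

def loop_exp(x):
    """2^x also at depth 2: double an accumulator x times."""
    r0 = 1
    for _ in range(x):        # LOOP x
        r1 = 0
        for _ in range(r0):   #   LOOP r0: r1 := 2 * r0
            r1 = r1 + 2
        r0 = r1               #   r0 := r1
    return r0

print(loop_mult(3, 4), loop_exp(5))   # -> 12 32
```

In the Meyer-Ritchie setting, bounded nesting depth corresponds to bounded growth rate, which is the flavour of the characterization of Kalmár's elementary functions mentioned above.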

15 citations



Proceedings ArticleDOI
28 Apr 1980
TL;DR: This paper inaugurates the study of pebbling with auxiliary pushdowns, which bears to plain pebbling the same relationship as Cook's study of space-bounded machines with auxiliary pushdowns, and is capable of distinguishing among Strong's “languages of maximal power,” a distinction not possible when comparative schematology is based on computability considerations alone.
Abstract: This paper has three claims to interest. First, it combines comparative schematology with complexity theory. This combination is capable of distinguishing among Strong's “languages of maximal power,” a distinction not possible when comparative schematology is based on computability considerations alone, and it is capable of establishing exponential disparities in running times, a capability not currently possessed by complexity theory alone. Secondly, this paper inaugurates the study of pebbling with auxiliary pushdowns, which bears to plain pebbling the same relationship as Cook's study of space-bounded machines with auxiliary pushdowns bears to plain space-bounded machines. This extension of pebbling serves as the key to the problems of comparative schematology mentioned above. Finally, this paper advantageously displays the virtues of recent work by Gabber and Galil giving explicit constructions for certain graphs, for the availability of such explicit constructions is essential to the results of this paper.
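
Plain black pebbling, the game being extended here, has simple rules: a pebble may be placed on a node once all its predecessors carry pebbles (sources at any time), pebbles may be removed freely, and the cost of a strategy is the maximum number of pebbles simultaneously on the graph. For binary in-trees the optimal cost obeys a classical recurrence (a sketch under the standard no-sliding rules; the paper itself works with general DAGs and auxiliary pushdowns):

```python
# Optimal black-pebbling cost of a binary in-tree.

def pebble_cost(node):
    """node = (left, right) for an internal node, None for a leaf."""
    if node is None:
        return 1
    a, b = sorted((pebble_cost(node[0]), pebble_cost(node[1])), reverse=True)
    # Optimal order: pebble the harder subtree first (cost a), keep one
    # pebble on its root while pebbling the other (cost b + 1); finally
    # both children plus the new pebble on the node itself coexist (3).
    return max(a, b + 1, 3)

def complete_tree(height):
    return None if height == 0 else (complete_tree(height - 1),
                                     complete_tree(height - 1))

for h in range(5):
    print(h, pebble_cost(complete_tree(h)))   # cost h + 2 for h >= 1
```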

Journal ArticleDOI
TL;DR: The notion of a computable algebraic system, introduced independently by Mal'cev and Rabin, puts on an algebraic foundation the idea that a system can be combinatorially presented and have an effectively decidable term or word problem.
Abstract: A natural way of studying the computability of an algebraic structure or process is to apply some of the theory of the recursive functions to the algebra under consideration through the manufacture of appropriate coordinate systems from the natural numbers. An algebraic structure A = (A; σ1,…, σk) is computable if it possesses a recursive coordinate system in the following precise sense: associated to A there is a pair (α, Ω) consisting of a recursive set of natural numbers Ω and a surjection α: Ω → A so that (i) the relation defined on Ω by n ≡α m iff α(n) = α(m) in A is recursive, and (ii) each of the operations of A may be effectively followed in Ω, that is, for each (say) r-ary operation σ on A there is an r-argument recursive function σ̄ on Ω which commutes the evident diagram, i.e. σ(α(n1),…, α(nr)) = α(σ̄(n1,…, nr)), wherein the left-hand side applies the r-fold product αr = α × … × α. This concept of a computable algebraic system is the independent technical idea of M. O. Rabin [18] and A. I. Mal'cev [14]. From these first papers one may learn of the strength and elegance of the general method of coordinatising; noteworthy for us is the fact that computability is a finiteness condition of algebra—an isomorphism invariant possessed of all finite algebraic systems—and that it serves to set upon an algebraic foundation the combinatorial idea that a system can be combinatorially presented and have effectively decidable term or word problem.
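
The definition codes up directly. As a toy example (mine, not the paper's), the group (Z, +) acquires a recursive coordinate system with Ω = N by letting even numbers code the non-negative integers and odd numbers the negative ones:

```python
# A recursive coordinate system for A = (Z; +) in the sense defined above:
# Omega = N, alpha surjective, the equivalence ≡_alpha recursive, and a
# recursive function tracking + on codes. Coding: alpha(2k) = k,
# alpha(2k+1) = -(k+1).

def alpha(n):                      # the surjection alpha: Omega -> Z
    return n // 2 if n % 2 == 0 else -(n // 2 + 1)

def code(z):                       # a recursive right inverse of alpha
    return 2 * z if z >= 0 else -2 * z - 1

def equiv(n, m):                   # n ≡_alpha m  iff  alpha(n) = alpha(m)
    return alpha(n) == alpha(m)    # here alpha is injective, so this is =

def add_bar(n, m):                 # recursive function tracking + on codes
    return code(alpha(n) + alpha(m))

# the diagram commutes: alpha(add_bar(n, m)) = alpha(n) + alpha(m)
print(all(alpha(add_bar(n, m)) == alpha(n) + alpha(m)
          for n in range(20) for m in range(20)))   # -> True
```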

Dissertation
01 Jan 1980
TL;DR: The aim of this dissertation is to provide a categorical method for obtaining effective solutions of recursive domain equations, and thereby effective models of denotational semantics and algebraic data types.
Abstract: Solving recursive domain equations is one of the main concerns in the denotational semantics of programming languages, and in the algebraic specification of data types. Because they are solved in order to specify computable objects, effective solutions are needed. Though general methods for obtaining solutions are well known, the effectiveness of the solutions has not been explicitly investigated.* The main objective of this dissertation is to provide a categorical method for obtaining effective solutions of recursive domain equations. From this we provide effective models of denotational semantics and algebraic data types. The importance of considering the effectiveness of solutions is two-fold. First, we can guarantee that for every denotational specification of a programming language and every algebraic data type specification, an implementation exists. Second, we obtain an instance of a computability theory in which higher-type and even infinite-type computability can be discussed very smoothly. *While this dissertation was being written, Plotkin and Smyth obtained an alternative to our method which works only for effectively given categories with universal objects.
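
A typical equation of the kind meant here (a standard example from the denotational-semantics literature, not quoted from the dissertation) is the reflexive domain of the untyped lambda calculus, obtained as an inverse limit of approximations; effectiveness additionally asks that the embeddings and the final isomorphism be computable.

```latex
D \;\cong\; A + [D \to D], \qquad
D_0 = \{\bot\}, \quad D_{n+1} = A + [D_n \to D_n], \quad
D \;=\; \varprojlim_{n} D_n
```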

Journal ArticleDOI
TL;DR: It is concluded that the “imposed problem ignorance” of past complexity research is deleterious to research progress on “computability” or “efficiency of computation.”
Abstract: Through key examples and constructs, exact and approximate, complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal “centered-cutoff” algorithms (Levin, Shor, Khachian, Gács-Lovász) are exponential. By “model approximation,” both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral “constraint contraction” algorithms are proposed for approximate solution and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the “imposed problem ignorance” of past complexity research is deleterious to research progress on “computability” or “efficiency of computation.”
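
For concreteness, the Klee-Minty example referred to is a deformed hypercube on which the simplex method with Dantzig's pivot rule visits all 2^n vertices, even though the optimum is trivial. One standard formulation (my choice of scaling, possibly not the variant used in the paper) can be set up and solved in a few lines:

```python
# Klee-Minty cube, a standard formulation:
#   maximize  sum_j 2^(n-j) x_j
#   s.t.      sum_{j<i} 2^(i-j+1) x_j + x_i <= 5^i   (i = 1..n),  x >= 0
# The optimum puts all weight on the last coordinate: x = (0,...,0, 5^n).
import numpy as np
from scipy.optimize import linprog

n = 6
c = -np.array([2.0 ** (n - j) for j in range(1, n + 1)])   # maximize => minimize -c
A = np.zeros((n, n))
for i in range(1, n + 1):
    for j in range(1, i):
        A[i - 1, j - 1] = 2.0 ** (i - j + 1)
    A[i - 1, i - 1] = 1.0
b = np.array([5.0 ** i for i in range(1, n + 1)])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * n)   # default HiGHS solver
print(res.x)                                   # -> (0, ..., 0, 5^n)
print(np.isclose(-res.fun, 5.0 ** n))          # -> True
```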

Journal ArticleDOI
TL;DR: In this paper, it is shown that effectiveness is epistemically at least as basic as computability in the sense that decisions about computability normally involve judgments concerning effectiveness, and that the present notions of effectiveness can contribute to the clarification and, perhaps, solution of various philosophical problems, confusions and disputes.
Abstract: This paper focuses on two notions of effectiveness which are not treated in detail elsewhere. Unlike the standard computability notion, which is a property of functions themselves, both notions of effectiveness are properties of interpreted linguistic presentations of functions. It is shown that effectiveness is epistemically at least as basic as computability in the sense that decisions about computability normally involve judgments concerning effectiveness. There are many occurrences of the present notions in the writings of logicians; moreover, consideration of these notions can contribute to the clarification and, perhaps, solution of various philosophical problems, confusions and disputes.


Book ChapterDOI
14 Jul 1980
TL;DR: It is observed that locally-computable Baire categories are incomparable with all existing resource-bounded measure notions on small complexity classes, which might explain why those two settings seem to differ so fundamentally.
Abstract: We introduce two resource-bounded Baire category notions on small complexity classes such as P, QUASIPOLY, SUBEXP and PSPACE and on probabilistic classes such as BPP, which differ on how the corresponding finite extension strategies are computed. We give an alternative characterization of small sets via resource-bounded Banach-Mazur games. As an application of the first notion, we show that for almost every language A (i.e. all except a meager class) computable in subexponential time, P^A = BPP. We also show that almost all languages in PSPACE do not have small nonuniform complexity. We then switch to the second Baire category notion (called locally-computable), and show that the class SPARSE is meager in P. We show that in contrast to the resource-bounded measure case, meager-comeager laws can be obtained for many standard complexity classes, relative to locally-computable Baire category on BPP and PSPACE. Another topic where locally-computable Baire categories differ from resource-bounded measure is weak-completeness: we show that there is no weak-completeness notion in P based on locally-computable Baire categories, i.e. every P-weakly-complete set is complete for P. We also prove that the class of complete sets for P under Turing-logspace reductions is meager in P, if P is not equal to DSPACE(log n), and that the same holds unconditionally for QUASIPOLY. Finally we observe that locally-computable Baire categories are incomparable with all existing resource-bounded measure notions on small complexity classes, which might explain why those two settings seem to differ so fundamentally.

Journal ArticleDOI
Jeffrey M. Jaffe
TL;DR: It is shown that data flow schemes have the power to express an arbitrary determinate functional; the proof involves a demonstration that “restricted data flow schemes” can simulate Turing machines.

Book
01 Jan 1980
TL;DR: A proceedings volume whose contributions range from how to get rid of pseudoterminals to properties of local testability.
Abstract: Contents:
How to get rid of pseudoterminals
Test sets for homomorphism equivalence on context free languages
Languages with homomorphic replacements
Functions equivalent to integer multiplication
Languages with reducing reflexive types
Semantics of unbounded nondeterminism
A shifting algorithm for min-max tree partitioning
A characterisation of computable data types by means of a finite equational specification method
A note on sweeping automata
Border rank of a p×q×2 tensor and the optimal approximation of a pair of bilinear forms
Derivations and reductions in algebraic grammars
Semantic analysis of communicating sequential processes
DOS systems and languages
Algebraic implementation of abstract data types: concept, syntax, semantics and correctness
Parameterized data types in algebraic specification languages
Characterizing correctness properties of parallel programs using fixpoints
Formal properties of one-visit and multi-pass attribute grammars (extended abstract)
Cryptocomplexity and NP-completeness
On the analysis of tree-matching algorithms
Generating and searching sets induced by networks
The complexity of the inequivalence problem for regular expressions with intersection
An almost linear time algorithm for computing a dependency basis in a relational data base
Bipolar synchronization systems
Testing of properties of finite algebras
A transaction model
On observing nondeterminism and concurrency
Terminal algebra semantics and retractions for abstract data types
The complexity of semilinear sets
A theory of nondeterminism
A representation theorem for models of *-free PDL
Present-day Hoare-like systems for programming languages with procedures: power, limits and most likely extensions
Symmetric space-bounded computation (extended abstract)
On some properties of local testability
Semantics: algebras, fixed points, axioms
Measuring the expressive power of dynamic logics: an application of abstract model theory
Pebbling mountain ranges and its application to DCFL-recognition
Space-restricted attribute grammars
A constructive approach to compiler correctness
A worst-case analysis of nearest neighbor searching by projection
Syntactic properties of the unambiguous product
On the optimal assignment of attributes to passes in multi-pass attribute evaluators
Optimal unbounded search strategies
A "fast implementation" of a multidimensional storage into a tree storage
Grammatical families
Partitioned chain grammars
An improved program for constructing open hash tables
On the power of commutativity in cryptography
Characterizations of the LL(k) property
Computability in categories
On the size complexity of monotone formulas
Reversible computing
The use of metasystem transition in theorem proving and program optimization
On the power of real-time Turing machines under varying specifications

Book
01 Jan 1980
TL;DR: The concept of an effective weight on an effective cpo is introduced, recursive cpo-elements are defined, and two axioms for cpo-complexity are given.
Abstract: Effective cpo-s are a useful tool for introducing computability on a large class of sets. In this report Blum's theory of computational complexity is generalized to certain effective cpo-s with effective weight. First, as a generalization of Rogers's Isomorphism Theorem for Gödel numberings of the partial recursive functions, it is proved that two "admissible numberings" of the recursively enumerable elements of an effective cpo are recursively isomorphic. The concept of effective weight on an effective cpo is introduced and recursive cpo-elements are defined. In analogy to Blum's approach, two axioms for cpo-complexity are introduced. For cpo-elements with zero weight a hierarchy theorem, the speedup theorem and the gap theorem are proved. Using "extended" cpo-s the results can be applied to the recursive elements of a cpo.
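
For orientation, the classical notion being generalized (Blum's axioms, stated here from the standard literature rather than from this report): Φ is a complexity measure for an acceptable numbering φ of the partial recursive functions when

```latex
\text{(B1)}\quad \operatorname{dom} \Phi_i \;=\; \operatorname{dom} \varphi_i
\quad \text{for all } i, \\
\text{(B2)}\quad M(i,x,y) \;=\;
\begin{cases} 1 & \text{if } \Phi_i(x) = y \\ 0 & \text{otherwise} \end{cases}
\quad \text{is a recursive function.}
```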

Book ChapterDOI
TL;DR: The purpose of this paper is to construct models for the main methods of statistical inference and estimation, using the semantical definition of probability introduced in the author's "A semantical definition of probability", appearing in Non-classical Logics, Model Theory and Computability, North-Holland, 1977.
Abstract: The purpose of this paper is to construct models for the main methods of statistical inference and estimation using the semantical definition of Probability introduced in the author's A semantical definition of Probability appearing in Non-classical Logics, Model Theory and Computability, North-Holland, 1977, pp. 135–167. Using the simple probability structures introduced there as building blocks, a new type of compound probability structures (cps) is defined. Different forms of cps give an account of the classical frequency methods, Bayesian methods, and stochastic processes.