
Showing papers on "Computability published in 1988"


Book
01 Feb 1988
TL;DR: This book develops sound and complete proof systems for straight-line, `while', and recursive programs, relates `while' programs to induction schemes, characterizes the COV inductively definable functions, and closes with a survey of computability in an abstract setting and a generalized Church-Turing thesis.
Abstract: Straight-Line Programs. Preliminaries: Signatures and Structures. The Programming Language. Assertions. Correctness Formulae. A Proof System. Soundness. Predicates. State Transformers. The Weakest Precondition and Strongest Postcondition. Completeness of the Proof System. `While' Programs. Notation for Partial Functions. The Programming Language. Assertions. Correctness Formulae. A Proof System. Soundness. Partial State Transformers. The Weakest Precondition and Strongest Postcondition. Completeness of the Proof System. Appendix: Total Correctness for `While' Programs. Recursive Programs. The Programming Language. Assertions. Correctness Formulae. A Proof System. Soundness. A Look Ahead. Inductive Computability of the Input-Output Relation. Completeness of the Proof System. Appendix: Total Correctness for Recursive Programs. Computability in an Abstract Setting. Induction Schemes. Some Important Properties. From Induction Schemes to `While' Programs. From `While' Programs to Induction Schemes. Course-of-Values Induction. From COV Induction Schemes to `While'-Array Programs. From `While'-Array Programs to COV Induction Schemes. More on Induction. The COV Inductively Definable Functions. A Survey of Computability in an Abstract Setting. A Generalized Church-Turing Thesis. Bibliography.
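The core of the book is a weakest-precondition proof system for `while' programs. As a point of reference, the standard equations read as follows in common textbook notation (a sketch only; the book's own symbols and exact rule set may differ):

```latex
% Standard weakest-precondition equations (common notation; not
% necessarily the book's exact symbols).
\begin{align*}
  wp(x := e,\; Q) &= Q[e/x] \\
  wp(S_1 ; S_2,\; Q) &= wp(S_1,\; wp(S_2,\; Q)) \\
  wp(\mathbf{if}\ b\ \mathbf{then}\ S_1\ \mathbf{else}\ S_2,\; Q)
    &= (b \Rightarrow wp(S_1, Q)) \land (\lnot b \Rightarrow wp(S_2, Q))
\end{align*}
% For loops, partial correctness uses an invariant P:
\[
  \frac{\{P \land b\}\; S\; \{P\}}
       {\{P\}\ \mathbf{while}\ b\ \mathbf{do}\ S\ \{P \land \lnot b\}}
\]
```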

92 citations


01 Jun 1988
TL;DR: The Nonspeedup Theorem is proved, which states that $2^n$ parallel queries to any fixed nonrecursive oracle cannot be answered by an algorithm that makes only n queries to any oracle whatsoever.
Abstract: We study classes of sets and functions computable by algorithms that make a limited number of queries to an oracle. We distinguish between queries made in parallel (each question being independent of the answers to the others, as in a truth-table reduction) and queries made in serial (each question being permitted to depend on the answers to the previous questions, as in a Turing reduction). We define computability by a set of functions, and we show that it captures the information-theoretic aspects of computability by a fixed number of queries to an oracle. Using that concept, we prove a very powerful result, the Nonspeedup Theorem, which states that $2^n$ parallel queries to any fixed nonrecursive oracle cannot be answered by an algorithm that makes only n queries to any oracle whatsoever. This is the tightest general result possible. A corollary is the intuitively obvious, but nontrivial, result that additional parallel queries to an oracle allow us to compute additional functions; the same is true of serial queries. We show that if k + 1 parallel queries to the oracle A can be answered by an algorithm that makes only k serial queries to any oracle B, then n parallel queries to the oracle A can be answered by an algorithm that makes only O(log n) parallel queries to a third oracle C. We also consider polynomial time bounded algorithms that make a fixed number of queries to an oracle. It has been shown that the Nonspeedup Theorem does not apply in the polynomial time bounded framework. However, we prove a Weak Nonspeedup Theorem, which states that if $2^k$ parallel queries to the oracle A can be answered by an algorithm that makes only k serial queries to the oracle B, then any n parallel queries to the oracle A can be answered by an algorithm that makes only $2^k - 1$ of the same queries to A. A corollary is that if A is NP-hard and P $\neq$ NP, then extra parallel queries to A allow us to compute extra functions in polynomial time; the same is true of serial queries.
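The parallel/serial distinction is the paper's organizing idea. Below is a minimal sketch of the two query disciplines, with a Python set standing in for a fixed oracle; `parallel_query`, `serial_query`, and the toy oracle are our names, not the paper's notation:

```python
# Sketch of the parallel vs. serial query disciplines from the abstract.

ORACLE = {2, 3, 5, 7, 11, 13}  # stand-in for a fixed oracle set A

def parallel_query(questions):
    """Truth-table style: all questions are fixed in advance, so no
    question may depend on another's answer."""
    return [q in ORACLE for q in questions]

def serial_query(first_question, next_question, rounds):
    """Turing style: each question may depend on all previous answers."""
    answers = []
    q = first_question
    for _ in range(rounds):
        answers.append(q in ORACLE)
        q = next_question(q, answers)  # adaptivity is the whole point
    return answers

print(parallel_query([4, 5, 6]))  # [False, True, False]
print(serial_query(5, lambda q, ans: q + (1 if ans[-1] else 2), 3))
# [True, False, False]: the second question (6) depended on the first answer
```

The Nonspeedup Theorem bounds exactly how much this adaptivity can compress the answering of parallel questions.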

89 citations


Journal ArticleDOI
01 Jun 1988
TL;DR: In this paper, the authors present a methodology for testing a general logic program containing function symbols and built-in predicates for safety and effective computability, under the assumption that queries are evaluated using a bottom-up fixpoint computation.
Abstract: This paper presents a methodology for testing a general logic program containing function symbols and built-in predicates for safety and effective computability. Safety is the property that the set of answers for a given query is finite. A related issue is whether the evaluation strategy can effectively compute all answers and terminate. We consider these problems under the assumption that queries are evaluated using a bottom-up fixpoint computation. We also approximate the use of function symbols by considering Datalog programs with infinite base relations over which finiteness constraints and monotonicity constraints are specified. One of the main results of this paper is a recursive algorithm, check_clique, to test the safety and effective computability of predicates in arbitrarily complex cliques. This algorithm takes certain procedures as parameters, and its applicability can be strengthened by making these procedures more sophisticated. We specify the properties required of these procedures precisely, and present a formal proof of correctness for algorithm check_clique. This work provides a framework for testing safety and effective computability of recursive programs, and is based on a clique-by-clique analysis. The results reported here form the basis of the safety testing for the LDL language, being implemented at MCC.
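check_clique operates clique by clique, where cliques are the strongly connected components of the predicate dependency graph, processed bottom-up. The sketch below shows only that surrounding driver; the per-clique test is a stub, since the paper parameterizes it by procedures exploiting finiteness and monotonicity constraints that we do not model:

```python
# Hedged sketch of a clique-by-clique driver in the spirit of check_clique.

def tarjan_sccs(graph):
    """Tarjan's algorithm; emits each SCC only after every SCC it can
    reach, i.e. callees before callers."""
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of a new SCC
            scc = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

def clique_is_safe(clique, lower_results):
    """Placeholder for the paper's parameterized per-clique test."""
    return all(lower_results)

def check_program(graph):
    safe = {}
    for clique in tarjan_sccs(graph):  # bottom-up over cliques
        lower = [safe[w] for v in clique for w in graph.get(v, ())
                 if w not in clique] or [True]
        verdict = clique_is_safe(clique, lower)
        for v in clique:
            safe[v] = verdict
    return safe

# Toy dependency graph: p and q are mutually recursive; both reach r.
print(check_program({'p': ['q', 'r'], 'q': ['p'], 'r': []}))
```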

58 citations


Book
01 Jan 1988
TL;DR: The notion of regular rewriting systems is defined, cost series are associated with the operators such systems describe, and analysis methods are applied to compute these costs and give an asymptotic evaluation of an operator's average cost.
Abstract: Algebraic specifications are now widely used for data structuring, and they turn out to be quite useful for various aspects of program development, such as prototyping, assisted program construction, proving properties, etc. [3, 12, 13, 15, 16, 17, 18]. Some of these applications require adding a notion of computation to algebraic specifications, for instance by providing a (convergent) rewrite rule system that expresses the properties of the operators. In this context, it may be of prime interest to define a notion of algorithmic complexity for an algebraic specification, or, more precisely, a notion of complexity for each operator defined in the specification. Computing operator complexity within a given specification helps in understanding how evaluation costs are distributed; it may single out “costly” operators, and motivate the search for an equivalent, but “cheaper”, specification. In [5], the cost of a term is defined as the number of rewriting steps needed to reduce it to its normal form, and the cost of an operator is defined as the general cost of a term obtained by applying this operator to terms in normal form. In this paper, we further formalize this notion of operator complexity and investigate its computation through analysis methods developed, for instance, in [24, 9]. We show how these methods apply to the computation of the enumerative series related to the terms of an algebraic specification. We define the notion of regular rewriting systems, and consider cost series associated with operators that are described by such systems. We show how these analysis methods apply to compute such costs and provide an asymptotic evaluation of the average cost of an operator. Our results allow costs to be computed without any explicit manipulation of series. We provide the user with ready-to-use formulae, where the different parameters only depend on the “geometry” of the system, e.g. the number of constructors in the left-hand side of rules, the number of occurrences of a derived operator in the right-hand side, etc. To our knowledge, quantitative evaluation of rewriting systems had not yet been studied under such an approach (except in [5]). From a different point of view, the complexity of algebraic implementations has been studied in [2, 8], etc., with respect to computability issues.
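To make the cost notion of [5] concrete: the cost of a term is the number of rewriting steps to its normal form. A toy illustration for a small convergent system (Peano addition; our example, not one from the paper):

```python
# Cost of a term = number of rewrite steps to normal form, per [5].
# Terms: ('0',), ('s', t), ('+', t1, t2); rules are Peano addition.

def rewrite_to_normal_form(term):
    """Apply  0 + y -> y  and  s(x) + y -> s(x + y)  innermost,
    returning (normal_form, step_count)."""
    steps = 0

    def step(t):
        nonlocal steps
        if t[0] == '+':
            x, y = step(t[1]), step(t[2])
            if x[0] == '0':            # rule: 0 + y -> y
                steps += 1
                return y
            if x[0] == 's':            # rule: s(x) + y -> s(x + y)
                steps += 1
                return step(('s', ('+', x[1], y)))
            return ('+', x, y)
        if t[0] == 's':
            return ('s', step(t[1]))
        return t

    return step(term), steps

def num(n):  # build the numeral s^n(0)
    t = ('0',)
    for _ in range(n):
        t = ('s', t)
    return t

nf, cost = rewrite_to_normal_form(('+', num(3), num(2)))
print(cost)  # 4: three s-rule steps plus one 0-rule step, i.e. n + 1 for n = 3
```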

36 citations


Proceedings ArticleDOI
01 Mar 1988
TL;DR: It is shown that there is an alternative computability theory in which some of the basic results on unsolvability become more absolute and results on completeness become simpler, and many of the central concepts become more abstract.
Abstract: The theory of computability, often called basic recursive function theory, is usually motivated and developed using Church's thesis. It is shown that there is an alternative computability theory in which some of the basic results on unsolvability become more absolute, results on completeness become simpler, and many of the central concepts become more abstract. In this approach computations are viewed as mathematical objects, and the major theorems in recursion theory may be classified according to which axioms about computation are needed to prove them. The theory is a typed theory of functions over the natural numbers, and there are unsolvable problems in this setting independent of the existence of indexings. The unsolvability results are interpreted to show that the partial function concept serves to distinguish between classical and constructive type theories.

35 citations



Book
01 Jan 1988
TL;DR: Proceedings whose contributions range from planar point location revisited (a guided tour of a decade of research) to a generic algorithm for transaction processing during network partitioning.
Abstract: Planar point location revisited (A guided tour of a decade of research).- Computing a viewpoint of a set of points inside a polygon.- Analysis of preflow push algorithms for maximum network flow.- A new linear algorithm for the two path problem on chordal graphs.- Extending planar graph algorithms to K 3,3-free graphs.- Constant-space string-matching.- Inherent nonslicibility of rectangular duals in VLSI floorplanning.- Path planning with local information.- Linear broadcast routing.- Predicting deadlock in store-and-forward networks.- On parallel sorting and addition with concurrent writes.- An optimal parallel algorithm for sorting presorted files.- Superlinear speedup in parallel state-space search.- Circuit definitions of nondeterministic complexity classes.- Non-uniform proof systems: A new framework to describe non-uniform and probabilistic complexity classes.- Padding, commitment and self-reducibility.- The complexity of a counting finite-state automaton.- A hierarchy theorem for pram-based complexity classes.- A natural deduction treatment of operational semantics.- Uniformly applicative structures, a theory of computability and polyadic functions.- A proof technique for register atomicity.- Relation level semantics.- A constructive set theory for program development.- McCarthy's amb cannot implement fair merge.- GHC - A language for a new age of parallel programming.- Accumulators: New logic variable abstractions for functional languages.- A resolution rule for well-formed formulae.- Algebraic and operational semantics of positive/negative conditional algebraic specifications.- Semi-unification.- A method to check knowledge base consistency.- Knowledgebases as structured theories.- On functional independencies.- A generic algorithm for transaction processing during network partitioning.

16 citations


Journal ArticleDOI
TL;DR: The meaning and relationship of randomness and determinism are discussed, and a fundamental development of chaotic dynamical systems is given with examples.

13 citations



Book ChapterDOI
David Basin
23 May 1988
TL;DR: It is demonstrated that such an environment can be used effectively for proving theorems about computability and for developing partial programs with correctness proofs; this extends the well-known proofs-as-programs paradigm to partial functions.
Abstract: We report on a new environment developed and implemented inside the Nuprl type theory that facilitates proving theorems about partial functions. It is the first such automated type-theoretic account of partiality. We demonstrate that such an environment can be used effectively for proving theorems about computability and for developing partial programs with correctness proofs. This extends the well-known proofs as programs paradigm to partial functions.
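As a loose illustration of why partiality needs special treatment (this is plain Python, not Nuprl, and the fuel device is a standard trick rather than the paper's construction): a possibly-nonterminating recursion can be approximated by a total function that returns None wherever the bound runs out:

```python
# Total "fuel-bounded" approximation of a partial function. Totality of
# the unbounded Collatz iteration is a famous open problem, which makes
# it a natural example of a function we can only treat as partial.
from typing import Optional

def collatz_steps(n: int, fuel: int) -> Optional[int]:
    """Steps for n to reach 1, or None if the fuel bound is exhausted."""
    steps = 0
    while n != 1:
        if fuel == 0:
            return None          # undefined at this approximation
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        fuel -= 1
        steps += 1
    return steps

print(collatz_steps(27, 10))     # None: 10 steps are not enough
print(collatz_steps(27, 200))    # 111
```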

11 citations


Journal ArticleDOI
TL;DR: It is shown that even if the circumscription is expressible as a first-order sentence, it is not computable, and the undecidability of determining whether a given set of first-order sentences has a minimal model, a unique minimal model, or an infinite number of minimal models is proved.

Journal ArticleDOI
TL;DR: A special programming language is introduced in which named data represent the data processed by programs, named functions represent the program semantics, and compositions of named functions represent the program construction tools.
Abstract: Our results can be summarized as follows.
1. We have introduced a special programming language in which named data represent the data processed by programs, named functions represent the program semantics, and compositions of named functions represent the program construction tools.
2. The concepts of determinant computability of operators and natural computability of functions have been used to define the complete classes of computable compositions and functions over various forms of named data.
3. The complete classes of compositions and functions have been represented in simple form as closures of the set of basic compositions under superposition.
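A small sketch of item 3's ingredients, in our own notation rather than the paper's: named data as finite name-to-value maps, named functions as maps from named data to values, and superposition as the basic composition:

```python
# Named data: dicts mapping names to values. Named functions: functions
# from named data to values. Superposition builds new named functions
# from old ones. (Our illustrative encoding, not the paper's formalism.)

def superposition(f, **named_args):
    """Compose f with named functions: applied to data d, feed f the
    values of the named argument functions on d."""
    return lambda d: f(**{name: g(d) for name, g in named_args.items()})

value_x = lambda d: d['x']   # basic naming functions
value_y = lambda d: d['y']
add = lambda a, b: a + b

# (x + y) as a superposition of `add` over the naming functions
f = superposition(add, a=value_x, b=value_y)
print(f({'x': 3, 'y': 4}))  # 7
```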


Book
21 Sep 1988
TL;DR: In this paper, the authors introduce the notion of enumeration and enumeration properties, and demonstrate how to apply enumeration to enumerative functions and predicate classes in the context of functional transformations.
Abstract: 1 Functions and Predicates.- 1. Definitions.- 2. Numerical Functions.- 3. Finitary Rules.- 4. Closure Properties.- 5. Minimal Closure.- 6. More Elementary Functions and Predicates.- 2 Recursive Functions.- 1. Primitive Recursion.- 2. Functional Transformations.- 3. Recursive Specifications.- 4. Recursive Evaluation.- 5. Church's Thesis.- 3 Enumeration.- 1. Predicate Classes.- 2. Enumeration Properties.- 3. Induction.- 4. Nondeterministic Computability.- 4 Reflexive Structures.- 1. Interpreters.- 2. A Universal Interpreter.- 3. Two Constructions.- 4. The Recursion Theorem.- 5. Relational Structures.- 6. Uniform Structures.- 5 Hyperenumeration.- 1. Function Quantification.- 2. Nonfinitary Induction.- 3. Functional Induction.- 4. Ordinal Notations.- 5. Reflexive Systems.- 6. Hyperhyperenumeration.- References.
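For reference, the primitive-recursion scheme at the heart of Chapter 2, in common notation (the book's own symbols may differ): f is defined from g and h by

```latex
\begin{align*}
  f(\vec{x},\, 0)     &= g(\vec{x}) \\
  f(\vec{x},\, n + 1) &= h(\vec{x},\, n,\, f(\vec{x}, n))
\end{align*}
% Example instance: addition, with add(x, 0) = x and
% add(x, n+1) = s(add(x, n)).
```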

Journal ArticleDOI
TL;DR: It follows from this and previous results that for sufficiently large R(n), the addition of a pushdown store to R(n) reversal-bounded multicounter machines has little effect on the computing powers of the machines.

Proceedings ArticleDOI
01 Feb 1988
TL;DR: The course is best positioned within the curriculum at the Junior level, recognizing that Junior level students are rarely mathematically sophisticated, and the treatment is not as rigorous as that of a more advanced course on the theory of computation.
Abstract: Theory of computation courses have traditionally been taught at the advanced-undergraduate/graduate level, primarily due to the level of mathematical rigor associated with the topics involved. The topics covered include automata theory, formal languages, computability, uncomputability, and computational complexity. If the essentials of these topics are introduced earlier in the undergraduate computer science curriculum, students gain deeper insights and better comprehend the underlying computational issues associated with the material covered in subsequent computer science courses. Such a course is required of all computer science majors at the University of North Florida. Experience has demonstrated that a minimum background for the course includes Freshman-Sophomore mathematics (presently calculus) and a typical introduction to computer science. Thus the course is best positioned within the curriculum at the Junior level. Recognizing that Junior-level students are rarely mathematically sophisticated, the treatment is not as rigorous as that of a more advanced course on the theory of computation. To reinforce the “theory” covered in class, an integral portion of the course is devoted to “hands-on” exercises using simulation tools designed for construction of a variety of automata. The exercises generally require the construction of automata of various forms, with observation of their step-by-step operation. Further exercises illustrate the connections between various automata and areas such as hardware design and compiler construction. The paper describes the course and the nature of the simulation tools used in the “hands-on” component of the course.
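A minimal sketch of the kind of step-by-step simulation the exercises describe (our toy code; the course's actual tools are not specified here):

```python
# Stepwise DFA simulator: prints each transition as it is taken.
# Example machine: accepts binary strings with an even number of 1s.

dfa = {
    'start': 'even',
    'accept': {'even'},
    'delta': {('even', '0'): 'even', ('even', '1'): 'odd',
              ('odd', '0'): 'odd', ('odd', '1'): 'even'},
}

def run(dfa, word):
    state = dfa['start']
    for i, ch in enumerate(word):
        nxt = dfa['delta'][(state, ch)]
        print(f"step {i}: {state} --{ch}--> {nxt}")  # observe each move
        state = nxt
    return state in dfa['accept']

print(run(dfa, '10110'))  # False: the word contains three 1s
```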


Proceedings Article
01 Jan 1988
TL;DR: A recursive algorithm, check_clique, tests the safety and effective computability of predicates in arbitrarily complex cliques based on a clique-by-clique analysis; the algorithm takes certain procedures as parameters, and its applicability can be strengthened by making these procedures more sophisticated.
Abstract: This paper presents a methodology for testing a general logic program containing function symbols and built-in predicates for safety and effective computability. Safety is the property that the set of answers for a given query is finite. A related issue is whether the evaluation strategy can effectively compute all answers and terminate. We consider these problems under the assumption that queries are evaluated using a bottom-up fixpoint computation. We also approximate the use of function symbols by considering Datalog programs with infinite base relations over which finiteness constraints and monotonicity constraints are specified. One of the main results of this paper is a recursive algorithm, check_clique, to test the safety and effective computability of predicates in arbitrarily complex cliques. This algorithm takes certain procedures as parameters, and its applicability can be strengthened by making these procedures more sophisticated. We specify the properties required of these procedures precisely, and present a formal proof of correctness for algorithm check_clique. This work provides a framework for testing safety and effective computability of recursive programs, and is based on a clique-by-clique analysis. The results reported here form the basis of the safety testing for the LDL language, being implemented at MCC.

01 Jan 1988
TL;DR: A more precise formulation of this feature of the familiar theories of physics is proposed—one based on the issue of whether or not the physically measurable numbers predicted by the theory are computable in the mathematical sense.
Abstract: The familiar theories of physics have the feature that the application of the theory to make predictions in specific circumstances can be done by means of an algorithm. We propose a more precise formulation of this feature—one based on the issue of whether or not the physically measurable numbers predicted by the theory are computable in the mathematical sense. Applying this formulation to one approach to a quantum theory of gravity, there are found indications that there may exist no such algorithms in this case. Finally, we discuss the issue of whether the existence of an algorithm to implement a theory should be adopted as a criterion for acceptable physical theories. “Can it then be that there is... something of use for unraveling the universe to be learned from the philosophy of computer design?” —J. A. Wheeler

Journal ArticleDOI
TL;DR: It is shown, for a very simplified model, that a number-theoretic function representing an experimental physical setup is general recursive.
Abstract: It is shown (for a very simplified model) that a number-theoretic function representing an experimental physical setup is general recursive.

Journal ArticleDOI
TL;DR: An examination of computation in physics, covering the continuum versus fixed-precision computation, computable irrational numbers, Bernoulli shifts and computable chaotic orbits, the shadowing lemma, and cellular automata, generalizing to physics via integer maps.
Abstract: 5.1. The continuum and analysis vs. the results of fixed precision computation. 5.2. Turing and computable irrational numbers. 5.3. Bernoulli shifts, computable chaotic orbits, and normal numbers. 5.4. The shadowing lemma. 5.5. Cellular automata. 5.6. Generalization: Physics via integer maps.
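The tension in sections 5.1-5.3 has a one-screen demonstration (our illustration, not code from the paper): the Bernoulli/doubling map shifts the binary expansion of its argument, so any fixed-precision orbit collapses to 0 even though almost every real orbit is chaotic:

```python
# The doubling map x -> 2x mod 1 shifts the binary expansion of x left
# one digit per step. A Python float holds ~53 significant bits, so the
# orbit of any float reaches exactly 0.0 once those bits are exhausted,
# while the true orbit of a typical real number never repeats.

x = 0.1  # stored as a dyadic rational close to 1/10
for _ in range(60):
    x = (2.0 * x) % 1.0
print(x)  # 0.0: the finite binary expansion has been shifted away
```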


Book ChapterDOI
Patrick Bellot1
21 Dec 1988
TL;DR: A computability theory developed from the theory of URS is described; it handles polyadicity as a primitive notion and allows a natural representation of functions with variable arity, that is, functions which can be applied to sequences of arguments of any length.
Abstract: This article describes a computability theory developed from the theory of URS described by E.G. Wagner and H.R. Strong and from a combinatory theory named TGE presented by the authors. Its main contribution is that the theory handles polyadicity as a primitive notion and allows a natural representation of functions with variable arity, that is, functions which can be applied to sequences of arguments of any length. Aside from classical computability results, we prove a General Abstraction theorem which allows us to construct representations for a large class of functions with variable arity.
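Python's *args gives a shallow, purely illustrative analogy to the polyadicity the theory takes as primitive (this is not the TGE formalism): a single function object applies uniformly to argument sequences of any length:

```python
# One function object, defined for every arity, including zero; this is
# Python's built-in variadic mechanism, shown only as an analogy to
# functions with variable arity.

def poly_sum(*xs):
    total = 0
    for x in xs:
        total += x
    return total

print(poly_sum())            # 0
print(poly_sum(1, 2))        # 3
print(poly_sum(1, 2, 3, 4))  # 10
```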

05 Oct 1988
TL;DR: In this paper, a theory of physics and cosmology based on the five principles of finiteness, discreteness, finite computability, absolute non-uniqueness, and strict construction is proposed.
Abstract: We base our theory of physics and cosmology on the five principles of finiteness, discreteness, finite computability, absolute non-uniqueness, and strict construction. Our modeling methodology starts from the current practice of physics, constructs a self-consistent representation based on the ordering operator calculus, and provides rules of correspondence that allow us to test the theory by experiment. We use program universe to construct a growing collection of bit strings whose initial portions (labels) provide the quantum numbers that are conserved in the events defined by the construction. The labels are followed by content strings which are used to construct event-based finite and discrete coordinates. On general grounds such a theory has a limiting velocity, and positions and velocities do not commute. We therefore reconcile quantum mechanics with relativity at an appropriately fundamental stage in the construction. We show that events in different coordinate systems are connected by the appropriate finite and discrete version of the Lorentz transformation, that 3-momentum is conserved in events, and that this conservation law is the same as the requirement that different paths can “interfere” only when they differ by an integral number of de Broglie wavelengths.

Book ChapterDOI
04 Jul 1988
TL;DR: This talk examines the strange situation encountered in algebraic topology: on one hand no general algorithm is able to decide whether some topological space is simply connected; this is an easy consequence of the undecidability of the word problem.
Abstract: In this talk we examine the strange situation encountered in algebraic topology: on the one hand, no general algorithm is able to decide whether some topological space is simply connected; this is an easy consequence of the undecidability of the word problem. On the other hand, most of the important results in algebraic topology assume that the spaces under consideration are simply connected! So one can ask for algorithms that use some method or other, and always compute something, in such a way that if the space given as input is simply connected, then the result obtained is the correct one. The problem is to explain what this “something” is in general.

Book ChapterDOI
01 Jan 1988
TL;DR: In this paper, the authors model human choice processes by computational procedures and by representations of computational theory, to the extent that “human rationality” and “human problem-solving” have been taken as an anchor point for constructing “artificial intelligence”.
Abstract: Ever since choice theory established itself as part of economic theory and mathematical economics, there have been attempts to axiomatize it on the basis of set theory and topology. To the extent that “human rationality” and “human problem-solving” have been taken as an anchor point for constructing “artificial intelligence”, it would be natural to model human choice processes by computational procedures and by representations of computational theory.