scispace - formally typeset

Showing papers on "Computability published in 1996"


Book
01 Feb 1996
TL;DR: This text for graduate students discusses the mathematical foundations of statistical inference for building three-dimensional models from image and sensor data that contain noise--a task involving autonomous robots guided by video cameras and sensors.
Abstract: This text for graduate students discusses the mathematical foundations of statistical inference for building three-dimensional models from image and sensor data that contain noise--a task involving autonomous robots guided by video cameras and sensors. The text develops a theoretical accuracy bound for the optimization procedure, which maximizes the reliability of estimates based on noisy data. The numerous mathematical prerequisites for developing the theories are explained systematically in separate chapters. These methods range from linear algebra, optimization, and geometry to a detailed statistical theory of geometric patterns, fitting estimates, and model selection. In addition, examples drawn from both synthetic and real data demonstrate the insufficiencies of conventional procedures and the improvements in accuracy that result from the use of optimal methods.

499 citations


Book
01 Jan 1996
TL;DR: Formal languages, automata, computability, and related matters form the major part of the theory of computation; this textbook is designed for an introductory course for computer science and computer engineering majors who have knowledge of some higher-level programming language.
Abstract: Formal languages, automata, computability, and related matters form the major part of the theory of computation. This textbook is designed for an introductory course for computer science and computer engineering majors who have knowledge of some higher-level programming language, the fundamentals of

322 citations


Book
04 Jan 1996
TL;DR: This chapter discusses Reliable Inquiry, Continuity, Reducibility, and the Game of Science, which aims to clarify the role of faith in the development of science.
Abstract: Introduction 1. Reliable Inquiry 2. The Demons of Passive Observation 3. Topology and Ideal Hypothesis Assessment 4. Continuity, Reducibility, and the Game of Science 5. The Demons of Computability 6. Computers in Search of Truth 7. So Much Time, Such Little Brains 8. The Logic of Ideal Discovery 9. Computerized Discovery 10. Prediction 11. The Assessment and Discovery of First-order Theories 12. Probability and Reliability 13. Experiment and Causal Inference 14. Relativism and Reliability

316 citations


Journal ArticleDOI
TL;DR: After a careful historical and conceptual analysis of computability and recursion, several recommendations are made about preserving the intensional differences between the concepts of “computability” and “recursion.”
Abstract: We consider the informal concept of “computability” or “effective calculability” and two of the formalisms commonly used to define it, “(Turing) computability” and “(general) recursiveness”. We consider their origin, exact technical definition, concepts, history, general English meanings, how they became fixed in their present roles, how they were first and are now used, their impact on nonspecialists, how their use will affect the future content of the subject of computability theory, and its connection to other related areas. After a careful historical and conceptual analysis of computability and recursion we make several recommendations in section §7 about preserving the intensional differences between the concepts of “computability” and “recursion.” Specifically we recommend that: the term “recursive” should no longer carry the additional meaning of “computable” or “decidable;” functions defined using Turing machines, register machines, or their variants should be called “computable” rather than “recursive;” we should distinguish the intensional difference between Church's Thesis and Turing's Thesis, and use the latter particularly in dealing with mechanistic questions; the name of the subject should be “Computability Theory” or simply Computability rather than “Recursive Function Theory.”

218 citations


Book
01 Mar 1996
TL;DR: This book discusses the assessment of the quality of numerical software, backward error analysis in libraries, and techniques for round-off error analysis, several of which are embodied in the PRECISE toolbox.
Abstract: Foreword Iain S. Duff Preface General Presentation Notations. Part I. Computability in Finite Precision: Well-Posed Problems Approximations Convergence in Exact Arithmetic Computability in Finite Precision Gaussian Elimination Forward Error Analysis The Influence of Singularities Numerical Stability in Exact Arithmetic Computability in Finite Precision for Iterative and Approximate Methods The Limit of Numerical Stability in Finite Precision Arithmetically Robust Convergence The Computed Logistic Bibliographical Comments. Part II. Measures of Stability for Regular Problems: Choice of Data and Class of Perturbations Choice of Norms: Scaling Conditioning of Regular Problems Simple Roots of Polynomials Factorizations of a Complex Matrix Solving Linear Systems Functions of a Square Matrix Concluding Remarks Bibliographical Comments. Part III. Computation in the Neighbourhood of a Singularity: Singular Problems Which are Well-Posed Condition Numbers of Hölder-Singularities Computability of Ill-Posed Problems Singularities of z -> A - zI Distances to Singularity Unfolding of Singularity Spectral Portraits Bibliographical Comments. Part IV. Arithmetic Quality of Reliable Algorithms: Forward and Backward Analyses Backward Error Quality of Reliable Software Formulae for Backward Errors Influence of the Class of Perturbations Iterative Refinement for Backward Stability Robust Reliability and Arithmetic Quality Bibliographical Comments. Part V. Numerical Stability in Finite Precision: Iterative and Approximate Methods Numerical Convergence of Iterative Solvers Stopping Criteria in Finite Precision Robust Convergence The Computed Logistic Revisited Care of Use Bibliographical Comments. Part VI. Software Tools for Round-Off Error Analysis in Algorithms:
A Historical Perspective The Assessment of the Quality of the Numerical Software Backward Error Analysis in Libraries Sensitivity Analysis Interval Analysis Probabilistic Models Computer Algebra Bibliographical Comments. Part VII. The Toolbox PRECISE for Computer Experimentation: What is PRECISE? Module for Backward Error Analysis Sample Size Backward Analysis with PRECISE Dangerous Border and Unfolding of a Singularity Summary of Module 1 Bibliographical Comments. Part VIII. Experiments with PRECISE: Format of the Examples Backward Error Analysis for Linear Systems Computer Unfolding of Singularity Dangerous Border and Distance to Singularity Roots of Polynomials Eigenvalue Problems Conclusion Bibliographical Comments. Part IX. Robustness to Nonnormality: Nonnormality and Spectral Instability Nonnormality in Physics and Technology Convergence of Numerical Methods in Exact Arithmetic Influence on Numerical Software Bibliographical Comments. Part X. Qualitative Computing: Sensitivity and Pseudosolutions for F(x) = y Pseudospectra of Matrices Pseudozeroes of Polynomials Divergence Portrait for the Complex Logistic Iteration Qualitative Computation of a Jordan Form Beyond Linear Perturbation Theory Bibliographical Comments. Part XI. More Numerical Illustrations with PRECISE: Annex: The Toolbox PRECISE for MATLAB Index Bibliography.

162 citations


Journal ArticleDOI
TL;DR: It is proved that every Turing machine can be simulated by a system based entirely on contextual insertions and deletions; closure properties of the Chomsky families and decidability of the existence of solutions to equations involving these operations are also studied.
Abstract: We investigate two generalizations of insertion and deletion of words, that have recently become of interest in the context of molecular computing. Given a pair of words (x, y), called a context, the (x, y)-contextual insertion of a word v into a word u is performed as follows. For each occurrence of xy as a subword in u, we include in the result of the contextual insertion the words obtained by inserting v into u, between x and y. The (x, y)-contextual deletion operation is defined in a similar way. We study closure properties of the Chomsky families under the defined operations, contextual ins-closed and del-closed languages, and decidability of existence of solutions to equations involving these operations. Moreover, we prove that every Turing machine can be simulated by a system based entirely on contextual insertions and deletions.
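The two contextual operations are concrete enough to sketch directly. A minimal illustration in Python (the function names and string representation are mine, not the paper's):

```python
def contextual_insert(u, v, x, y):
    """(x, y)-contextual insertion of v into u: for each occurrence of
    xy as a subword of u, insert v between that x and y."""
    results, ctx, i = set(), x + y, 0
    while (i := u.find(ctx, i)) != -1:
        results.add(u[:i + len(x)] + v + u[i + len(x):])
        i += 1
    return results

def contextual_delete(u, v, x, y):
    """(x, y)-contextual deletion: for each occurrence of the subword
    x v y in u, delete that occurrence of v."""
    results, ctx, i = set(), x + v + y, 0
    while (i := u.find(ctx, i)) != -1:
        results.add(u[:i + len(x)] + u[i + len(x) + len(v):])
        i += 1
    return results
```

For example, contextual_insert("abab", "c", "a", "b") yields {"acbab", "abacb"}, one word per occurrence of the context "ab", and contextual_delete inverts each such step.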

134 citations


Book
30 Apr 1996
TL;DR: Sigma-definability and the Gödel Incompleteness Theorem, and computability on admissible sets.
Abstract: Sigma-definability and the Gödel Incompleteness Theorem. Computability on Admissible Sets. Selected Topics. Appendix. Index.

107 citations


Journal ArticleDOI
TL;DR: An important result in this paper is the proof that every computable functional on real numbers is continuous w.r.t. the compact open topology on the function space.
Abstract: We present the different constructive definitions of real number that can be found in the literature. Using domain theory we analyse the notion of computability that is substantiated by these definitions and we give a definition of computability for real numbers and for functions acting on them. This definition of computability turns out to be equivalent to other definitions given in the literature using different methods. Domain theory is a useful tool to study higher order computability on real numbers. An interesting connection between Scott topology and the standard topologies on the real line and on the space of continuous functions on reals is stated. An important result in this paper is the proof that every computable functional on real numbers is continuous w.r.t. the compact open topology on the function space.
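For intuition about the underlying notion: on any of the equivalent definitions, a real number is computable when a single finite algorithm can produce a rational approximation to any requested precision. A small self-contained sketch (bisection for the square root of 2; an illustration of the classical definition, not of the paper's domain-theoretic construction):

```python
from fractions import Fraction

def sqrt2_approx(n):
    """Return a rational q with |q - sqrt(2)| <= 2**-n, by bisection.
    The map n -> q is the finite algorithm witnessing that sqrt(2)
    is a computable real."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:  # keep sqrt(2) inside [lo, hi]
            lo = mid
        else:
            hi = mid
    return lo
```

A computable function on the reals then maps any such approximation procedure for the input to one for the output, which is the setting in which the continuity theorem stated above applies.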

52 citations


Journal ArticleDOI
Qing Zhou1
TL;DR: This paper combines two concepts, computability on abstract metric spaces and computability for continuous functions, and delineates the basic properties of computable open and closed sets.
Abstract: In this paper we study intrinsic notions of “computability” for open and closed subsets of Euclidean space. Here we combine the two concepts of computability on abstract metric spaces and computability for continuous functions, and delineate the basic properties of computable open and closed sets. The paper concludes with a comprehensive examination of the Effective Riemann Mapping Theorem and related questions.

49 citations


Book ChapterDOI
18 Aug 1996
TL;DR: This paper presents a parallel greedy randomized adaptive search procedure (GRASP) for solving MAX-SAT problems and experimental results indicate that almost linear speedup is achieved.
Abstract: The weighted maximum satisfiability (MAX-SAT) problem is central in mathematical logic, computing theory, and many industrial applications. In this paper, we present a parallel greedy randomized adaptive search procedure (GRASP) for solving MAX-SAT problems. Experimental results indicate that almost linear speedup is achieved.
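GRASP alternates a randomized greedy construction with local search. A minimal sequential sketch of that scheme for weighted MAX-SAT (the variable encoding and the simple flip-based local search are my own; the paper's contribution is a parallel implementation):

```python
import random

def grasp_maxsat(clauses, weights, n_vars, iters=50, alpha=0.3, seed=0):
    """Sequential GRASP sketch for weighted MAX-SAT.
    clauses: list of clauses; each clause is a list of nonzero ints,
    literal i > 0 meaning variable i true, i < 0 meaning false.
    Returns (best weight, best assignment as {var: bool})."""
    rng = random.Random(seed)

    def sat_weight(a):
        return sum(w for c, w in zip(clauses, weights)
                   if any((lit > 0) == a[abs(lit)] for lit in c))

    best_w, best_a = -1, None
    for _ in range(iters):
        # Construction: greedy-randomized assignment, variable by variable.
        a = {}
        for v in range(1, n_vars + 1):
            gain = {val: sum(w for c, w in zip(clauses, weights)
                             if any(abs(l) == v and (l > 0) == val for l in c))
                    for val in (True, False)}
            # With probability alpha, diversify instead of taking the greedy value.
            a[v] = max(gain, key=gain.get) if rng.random() > alpha \
                else rng.choice([True, False])
        # Local search: flip single variables while total weight improves.
        improved = True
        while improved:
            improved, cur = False, sat_weight(a)
            for v in range(1, n_vars + 1):
                a[v] = not a[v]
                if sat_weight(a) > cur:
                    improved, cur = True, sat_weight(a)
                else:
                    a[v] = not a[v]
        if sat_weight(a) > best_w:
            best_w, best_a = sat_weight(a), dict(a)
    return best_w, best_a
```

The parallel version in the paper runs many such restart iterations concurrently with different random streams, which is why near-linear speedup is attainable.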

41 citations


Journal ArticleDOI
TL;DR: A possible model, constituting a chaotic dynamical system, is presented, which is termed the analog shift map; when viewed as a computational model it has super-Turing power and is equivalent to neural networks and the class of analog machines.

Journal ArticleDOI
TL;DR: A recursive algorithm is presented for testing the safety and effective computability of predicates in arbitrarily complex cliques in the predicate connection graph, together with a framework for analyzing recursive cliques in programs produced by the Magic Sets transformation.

Journal Article
TL;DR: A number of classes of X-machines are investigated and it is shown that a certain class of these machines, the 2-stack straight move stream X-machines, computes precisely the class of partial recursive functions.
Abstract: The theory of computable functions is well known and has given rise to many classes of computational models of varying power and usefulness. We take another look at this subject using the idea of a generalised machine, the X-machine, to provide some further insights into the issue and to discuss an elegant general approach to the question of classifying computational models, including some of the so-called "Super-Turing" models. This paper investigates a number of classes of X-machines. It considers their relative computational capabilities and contrasts these with other important models. It is shown that a certain class of these machines, the 2-stack straight move stream X-machines, computes precisely the class of partial recursive functions. The importance of this work to the theory of testing of systems is also discussed.

Book ChapterDOI
18 Sep 1996
TL;DR: Computation theories and structures are introduced to represent mathematical objects and applications of algorithms occurring in algorithmic services, providing a formal framework for the specification of symbolic mathematical problem solving by cooperation of algorithms and theorems.
Abstract: Recent research towards integrating symbolic mathematical reasoning and computation has led to prototypes of interfaces and environments. This paper introduces computation theories and structures to represent mathematical objects and applications of algorithms occurring in algorithmic services. The composition of reasoning and computation theories and structures provides a formal framework for the specification of symbolic mathematical problem solving by cooperation of algorithms and theorems.

Journal ArticleDOI
TL;DR: It is proved that the class of functions R^l → R^m (with l, m finite and R a commutative, ordered ring) computable in the BSS model and the functions distributively computable over a natural distributive graph based on the operations of R coincide.

Posted Content
TL;DR: In this paper, the authors study economic equilibrium theory with a uniformity principle, which constrains the magnitudes (prices, quantities, etc.) and the operations (to perceive, evaluate, choose, communicate, etc.).
Abstract: This paper studies economic equilibrium theory with a 'uniformity principle' constraining the magnitudes (prices, quantities, etc.) and the operations (to perceive, evaluate, choose, communicate, etc.) that agents can use. We look at the special case of computability constraints, where all prices, quantities, preference relations, utility functions, demand functions, etc. are required to be computable by finite algorithms.

01 Jan 1996
TL;DR: This study considers the complexity of the wait-free approximate agreement problem in an asynchronous shared memory comprised of only single-bit multi-writer multi-reader registers, and shows matching upper and lower bounds of $\Theta(\log(s/\varepsilon))$ steps and shared registers.
Abstract: Agreement problems are central to the study of wait-free protocols for shared memory distributed systems. We examine two specific issues arising out of this study. We consider the complexity of the wait-free approximate agreement problem in an asynchronous shared memory comprised of only single-bit multi-writer multi-reader registers. For real-valued inputs of magnitude at most s and a real-valued accuracy requirement $\varepsilon>0$ we show matching upper and lower bounds of $\Theta(\log(s/\varepsilon))$ steps and shared registers. For inputs drawn from any fixed finite range this is significantly better than the best possible algorithm for single-writer multi-reader registers, which, for n processes, requires $\Omega(\log n)$ steps. These results are used to show a separation between the wait-free single-writer multi-reader and wait-free multi-writer multi-reader models of computation. The consensus hierarchy characterizes the strength of a shared object by its ability to solve the consensus problem in a wait-free manner. One important application of a hierarchy classifying the power of objects is to compare the power of systems offering different collections of objects. Ideally, a hierarchy should reduce the task of determining the strength of an architecture supporting shared memory distributed systems to the problem of determining the strength of each type of shared object supported by the architecture. Informally, a hierarchy that allows this is robust. Several variations of the consensus hierarchy have appeared in the literature, and it has been shown that all but one of them are not robust. The remaining hierarchy, named $h^r_m$, has been the subject of considerable research. We show that, in a natural setting, the consensus hierarchy $h^r_m$ is not robust.

Book ChapterDOI
24 Apr 1996
TL;DR: A new fixpoint theorem is presented which guarantees the existence and the finite computability of the least common solution of a countable system of recursive equations over a wellfounded domain and covers all the known versions of fixpoint theorems.
Abstract: We present a new fixpoint theorem which guarantees the existence and the finite computability of the least common solution of a countable system of recursive equations over a wellfounded domain. The functions are only required to be increasing and delay-monotone, the latter being a property much weaker than monotonicity. We hold that wellfoundedness is a natural condition as it guarantees termination of every fixpoint computation algorithm. Our fixpoint theorem covers, under the wellfoundedness condition, all the known versions of fixpoint theorems. To demonstrate its power and versatility we contrast an application in data flow analysis, where known versions are applicable as well, to a practically relevant application in program optimization, which due to its second order effects, requires the full strength of our new theorem. In fact, the new theorem is central for establishing the optimality of the partial dead code elimination algorithm considered, which is implemented in the new release of the Sun SPARCompiler language systems.
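For comparison, the classical special case covered by the theorem is plain Kleene-style iteration: start every variable at bottom and re-apply the right-hand sides until nothing changes, which terminates whenever the domain has no infinite ascending chains. A hypothetical sketch of that baseline (the paper's theorem is strictly stronger, requiring only increasing, delay-monotone functions over a wellfounded domain):

```python
def least_fixpoint(equations, bottom):
    """Kleene-style iteration to the least solution of x_i = f_i(x).
    Each equation is a function of the whole current value vector; the
    values must come from a domain of finite height so that the
    iteration terminates."""
    vals = [bottom] * len(equations)
    changed = True
    while changed:
        changed = False
        for i, f in enumerate(equations):
            new = f(vals)
            if new != vals[i]:
                vals[i], changed = new, True
    return vals
```

For instance, the two-variable set-constraint system x0 = {'a'} ∪ x1, x1 = x0 ∪ {'b'}, typical of data flow analysis, converges to x0 = x1 = {'a', 'b'} in two sweeps.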

Journal ArticleDOI
TL;DR: It is shown that a physically motivated Lyapunov-like function based on the concept of total information can be derived for large classes of non-linear physical systems, and it is studied how this function may be used for designing estimation, adaptation and learning mechanisms for such systems.
Abstract: Lyapunov design has never been systematic. In the adaptive control of complex multi-input non-linear systems, physical considerations, such as conservation of energy or entropy increase, represent one of the major tools in building Lyapunov-like functions and providing stability and performance guarantees. In this paper we show that a physically motivated Lyapunov-like function based on the concept of total information can be derived for large classes of non-linear physical systems. We study how this function may be used for designing estimation, adaptation and learning mechanisms for such systems. In the process we revisit familiar notions such as controllability and observability from an information perspective, which in turn allows us to define 'natural' space-time scales at which to observe and control a given complex system. By formulating control problems in algorithmic form, we emphasize the importance of computability and computational complexity for issues of control. Generic control problems are shown to be NP-hard: each additional complication, such as the presence of noise or the absence of complete system identification, moves the control problem further up the polynomial hierarchy of computational complexity. In some cases, requirements of 'optimality' may be unrealistic or irrelevant, since the solution to the problem of finding the optimal algorithm for control is uncomputable.


Book
01 Jan 1996
TL;DR: This book provides an elementary introduction to formal languages and machine computation, and contains a chapter on number-theoretic computation.
Abstract: This book provides an elementary introduction to formal languages and machine computation. The materials covered include computation-oriented mathematics, finite automata and regular languages, push-down automata and context-free languages, Turing machines and recursively enumerable languages, and computability and complexity. As integers are important in mathematics and computer science, the book also contains a chapter on number-theoretic computation. The book is intended for university computing and mathematics students and computing professionals.

Journal ArticleDOI
TL;DR: It is shown that any Berry-computable function can be computed in a sequential way by a relative algorithm and if the codomain of the considered stable function is an atomic dI-domain, the reverse holds as well.
Abstract: In this paper we investigate a subset of the class of Scott-computable stable functions called Berry-computable. Furthermore we introduce a corresponding notion of an effectively given dI-domain. We obtain the result that the category with these domains as objects and Berry-computable functions as morphisms has all the important attributes of recursive domains and Scott-computable continuous functions, e.g., Cartesian closedness. Moreover, we investigate the relationship between stable functions and sequential algorithms. We show that any Berry-computable function can be computed in a sequential way by a relative algorithm. Additionally, if the codomain of the considered stable function is an atomic dI-domain, we obtain the result that the reverse holds as well.

Book ChapterDOI
30 Sep 1996
TL;DR: Notions and results are presented from three directions of research which use formal language theory tools for modelling operations specific to DNA (and RNA) recombination; in all cases one obtains computability models which are universal (language generating devices equivalent in power to Turing machines).
Abstract: We briefly present notions and results from three directions of research which use formal language theory tools for modelling operations specific to DNA (and RNA) recombinations; in all cases one obtains computability models which are universal (language generating devices are obtained which are equivalent in power with Turing machines). The basic operations are those of sticking (a model of the Watson-Crick complementarity), of splicing (a model of the recombinant behaviour of DNA sequences under the influence of restriction enzymes), and of insertion/deletion (known to hold both for DNA and for RNA sequences). Because this paper is an overview, we present here only definitions and results without proofs; further details can be found in the papers quoted below.
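Of the three operations, splicing is the easiest to state concretely: a rule (u1, u2; u3, u4) cuts one word between u1 and u2, cuts another between u3 and u4, and recombines the left part of the first with the right part of the second. A toy Python rendering of a simplified one-step form of the operation (the representation is mine, not from the paper):

```python
def splice(x, y, rule):
    """One-step splicing: for x = p u1 u2 q and y = r u3 u4 s,
    produce p u1 u4 s (modelling cut-and-recombine of DNA strands
    by restriction enzymes and ligase)."""
    u1, u2, u3, u4 = rule
    out = set()
    for i in range(len(x) + 1):
        if x.startswith(u1 + u2, i):          # cut site in x
            for j in range(len(y) + 1):
                if y.startswith(u3 + u4, j):  # cut site in y
                    out.add(x[:i + len(u1)] + y[j + len(u3):])
    return out
```

For example, splice("abcd", "efgh", ("b", "c", "f", "g")) produces {"abgh"}. The universality results cited above show that iterating such rules over a language generates all recursively enumerable languages.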

01 Jan 1996
TL;DR: This work characterizes the Type 2 computable functions on computable real numbers as exactly those Type 1 computable functions which satisfy a certain additional condition concerning their domain of definition.
Abstract: Based on the Turing machine model there are essentially two different notions of computable functions over the real numbers. The effective functions are defined only on computable real numbers and are Type 1 computable with respect to a numbering of the computable real numbers. The effectively continuous functions may be defined on arbitrary real numbers. They are exactly those functions which are Type 2 computable with respect to an appropriate representation of the real numbers. We characterize the Type 2 computable functions on computable real numbers as exactly those Type 1 computable functions which satisfy a certain additional condition concerning their domain of definition. Our result is a sharp strengthening of the well-known continuity result of Tseitin and Moschovakis for effective functions. The result is presented for arbitrary computable metric spaces.


Book ChapterDOI
26 Aug 1996
TL;DR: This paper describes a mechanisation of computability theory in HOL using the Unlimited Register Machine (URM) model of computation, first specified as a rudimentary machine language and then the notion of a computable function is derived.
Abstract: This paper describes a mechanisation of computability theory in HOL using the Unlimited Register Machine (URM) model of computation. The URM model is first specified as a rudimentary machine language and then the notion of a computable function is derived. This is followed by an illustration of the proof of a number of basic results of computability which include various closure properties of computable functions. These are used in the implementation of a mechanism which partly automates the proof of the computability of functions and a number of functions are then proved to be computable. This work forms part of a comparative study of different theorem proving approaches and a brief discussion regarding theorem proving in HOL follows the description of the mechanisation.
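The URM model itself is small enough to sketch. A hypothetical Python interpreter for the four standard instructions (zero, successor, transfer, conditional jump), with the instruction encoding as tuples being my own choice:

```python
def run_urm(program, registers, max_steps=10_000):
    """Interpreter for Unlimited Register Machine programs.
    Instructions (jumps are 1-indexed; jumping past the program halts):
      ('Z', n)       -- zero register n
      ('S', n)       -- increment register n
      ('T', m, n)    -- copy register m into register n
      ('J', m, n, q) -- if registers m and n are equal, jump to instruction q
    registers: {index: value}; unmentioned registers read as 0.
    By convention the result is left in register 1."""
    regs, pc, steps = dict(registers), 1, 0
    while 1 <= pc <= len(program) and steps < max_steps:
        op, *args = program[pc - 1]
        steps += 1
        if op == 'Z':
            regs[args[0]] = 0
        elif op == 'S':
            regs[args[0]] = regs.get(args[0], 0) + 1
        elif op == 'T':
            regs[args[1]] = regs.get(args[0], 0)
        elif op == 'J' and regs.get(args[0], 0) == regs.get(args[1], 0):
            pc = args[2]
            continue
        pc += 1
    return regs.get(1, 0)
```

For example, the four-instruction program [('J', 3, 2, 5), ('S', 1), ('S', 3), ('J', 1, 1, 1)] adds R1 and R2 by incrementing R1 and a counter R3 until R3 equals R2, so running it on {1: 2, 2: 3} returns 5. A function is then URM-computable if some such program computes it, which is the notion the HOL mechanisation formalises.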

07 Oct 1996
TL;DR: Several types of matching systems are shown to have the same power as finite automata, one variant is proven to be equivalent to Turing machines, and another one is found to have a strictly intermediate power.
Abstract: We introduce the matching systems, a computability model which is an abstraction of the way of using the Watson-Crick complementarity when computing with DNA in the Adleman style. Several types of matching systems are shown to have the same power as finite automata, one variant is proven to be equivalent to Turing machines, and another one is found to have a strictly intermediate power.

Proceedings Article
10 Jun 1996
TL;DR: The main theorem allows us to treat in a general and natural way the search problems that correspond to the many important graph decision problems now known to be solvable in linear time for graphs of bounded treewidth and pathwidth.
Abstract: An annotation of a string X over an alphabet Σ is a string Y having the same length as X, over an alphabet Γ. The pair (X, Y) can be viewed as a string over the alphabet Σ×Γ. The search problem for a language L ⊆ (Σ × Γ)* is the problem of computing, given a string X ∈ Σ*, a string Y ∈ Γ*, having the same length as X, such that (X, Y) ∈ L, or determining that no such Y exists. We define a notion of finite-state searchability and prove the following (main) theorem: If L is finite-state recognizable, then it is finite-state searchable. The notions of annotation and finite-state searchability can be generalized to trees of symbols. Annotation search problems have a variety of applications. For example, the tree or string of symbols may represent the structural parse of a graph of bounded treewidth or pathwidth, and the annotation may represent a “solution” to a search problem (e.g., finding a subgraph homeomorphic to a fixed graph H). Our main theorem allows us to treat in a general and natural way the search problems that correspond to the many important graph decision problems now known to be solvable in linear time for graphs of bounded treewidth and pathwidth. As a corollary, we show finite-state searchability for graph properties whose solutions can be expressed by leading existential quantification in monadic second-order logic. This can be viewed as a “search problem” form of Courcelle's Theorem on the decidability of monadic second-order graph properties. We describe several possible applications to computing annotations of biological sequences, and discuss how the resulting annotations may be useful in sequence comparison and alignment.

Posted Content
TL;DR: This paper studies consumer theory from the bounded rationality approach of Richter and Wong (1996a), where commodity quantities, prices, consumer preferences, utility functions, and demand functions are computable by finite algorithms.
Abstract: This paper studies consumer theory from the bounded rationality approach proposed in Richter and Wong (1996a), with a "uniformity principle" constraining the magnitudes (prices, quantities, etc.) and the operations (to perceive, evaluate, choose, communicate, etc.) that agents can use. In particular, we operate in a computability framework, where commodity quantities, prices, consumer preferences, utility functions, and demand functions are computable by finite algorithms (Richter and Wong (1996a)). We obtain a computable utility representation theorem. We prove an existence theorem for computable maximizers of quasiconcave computable utility functions (preferences), and prove the computability of the demand functions generated by such functions (preferences). We also provide a revealed preference characterization of computable rationality for the finite case. Beyond consumer theory, the results have applications in general equilibrium theory (Richter and Wong (1996a)).