
Showing papers on "Computability published in 1997"


Journal ArticleDOI
TL;DR: Although there remain many unresolved problems, multigrid or multilevel schemes in the classical framework of finite difference and finite element discretizations exhibit by now a comparatively clear profile.
Abstract: More than anything else, the increase of computing power seems to stimulate the greed for tackling ever larger problems involving large-scale numerical simulation. As a consequence, the need for understanding something like the intrinsic complexity of a problem occupies a more and more pivotal position. Moreover, computability often only becomes feasible if an algorithm can be found that is asymptotically optimal. This means that storage and the number of floating point operations needed to resolve the problem with desired accuracy remain proportional to the problem size when the resolution of the discretization is refined. A significant reduction of complexity is indeed often possible, when the underlying problem admits a continuous model in terms of differential or integral equations. The physical phenomena behind such a model usually exhibit characteristic features over a wide range of scales. Accordingly, the most successful numerical schemes exploit in one way or another the interaction of different scales of discretization. A very prominent representative is the multigrid methodology; see, for instance, Hackbusch (1985) and Bramble (1993). In a way it has caused a breakthrough in numerical analysis since, in an important range of cases, it does indeed provide asymptotically optimal schemes. For closely related multilevel techniques and a unified treatment of several variants, such as multiplicative or additive subspace correction methods, see Bramble, Pasciak and Xu (1990), Oswald (1994), Xu (1992), and Yserentant (1993). Although there remain many unresolved problems, multigrid or multilevel schemes in the classical framework of finite difference and finite element discretizations exhibit by now a comparatively clear profile. They are particularly powerful for elliptic and parabolic problems.
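As an illustration of the multilevel idea the abstract refers to, here is a minimal Python sketch (ours, not the paper's) of a multigrid V-cycle for the 1D Poisson problem: a damped-Jacobi smoother on the fine grid combined with a recursively solved coarse-grid correction. The grid sizes, smoother, and injection/interpolation transfer operators are illustrative choices only.

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3, omega=2.0/3.0):
    """One multigrid V-cycle for -u'' = f on (0, 1) with u(0) = u(1) = 0.

    u, f hold values at the n interior grid points (n of the form 2**k - 1),
    mesh width h.  Damped-Jacobi smoothing, restriction by injection,
    linear-interpolation prolongation.
    """
    n = len(u)

    def smooth(v, sweeps):
        for _ in range(sweeps):
            vp = np.concatenate(([0.0], v, [0.0]))   # pad with boundary zeros
            v = (1 - omega) * v + omega * (vp[:-2] + vp[2:] + h * h * f) / 2.0
        return v

    u = smooth(u, n_smooth)                           # pre-smoothing
    if n >= 3:
        up = np.concatenate(([0.0], u, [0.0]))
        r = f - (2 * up[1:-1] - up[:-2] - up[2:]) / (h * h)   # residual
        e_c = v_cycle(np.zeros((n - 1) // 2), r[1::2], 2 * h, # coarse correction
                      n_smooth, omega)
        e = np.zeros(n)
        e[1::2] = e_c                                 # coarse-grid points
        e_pad = np.concatenate(([0.0], e_c, [0.0]))
        e[0::2] = (e_pad[:-1] + e_pad[1:]) / 2.0      # interpolate in-between points
        u = u + e
    return smooth(u, n_smooth)                        # post-smoothing

# usage: a few V-cycles for -u'' = 1 on 63 interior points
n = 63
h = 1.0 / (n + 1)
f = np.ones(n)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
```

Because each V-cycle costs work proportional to the number of unknowns and reduces the error by a grid-independent factor, the overall cost stays proportional to the problem size — the asymptotic optimality the abstract describes.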

489 citations


Book ChapterDOI
14 Mar 1997
TL;DR: These notes contain some high points from the recent book, emphasising what is different or novel with respect to more traditional treatments of computability and complexity theory, and some new results as well.
Abstract: A programming approach to computability and complexity theory yields proofs of central results that are sometimes more natural than the classical ones; and some new results as well. These notes contain some high points from the recent book [14], emphasising what is different or novel with respect to more traditional treatments. Topics include: Kleene’s s-m-n theorem applied to compiling and compiler generation. Proof that constant time factors do matter: for a natural computation model, problems solvable in linear time have a proper hierarchy, ordered by coefficient values. (In contrast to the “linear speedup” property of Turing machines.) Results on which problems possess optimal algorithms, including Levin’s Search theorem (for the first time in book form). Characterisations in programming terms of a wide range of complexity classes. These are intrinsic: without externally imposed space or time computation bounds. Boolean program problems complete for PTIME, NPTIME, PSPACE.
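To make the s-m-n-theorem-as-compilation reading concrete, here is a toy Python sketch (ours, not the book's construction): specializing the text of a two-argument program to a fixed first argument effectively yields the text of a one-argument program, which is exactly what the theorem guarantees.

```python
def s_1_1(expr: str, x) -> str:
    """Kleene's s-m-n operation read as a tiny 'compiler': given the text of a
    two-input program 'lambda x, y: <expr>' and a fixed value for x, return
    the text of a one-input program computing the same function of y."""
    return f"lambda y: (lambda x, y: {expr})({x!r}, y)"

# usage: specialize a two-argument program to x = 3
two_arg_src = "x * x + y"
one_arg_src = s_1_1(two_arg_src, 3)   # "lambda y: (lambda x, y: x * x + y)(3, y)"
specialized = eval(one_arg_src)
assert specialized(5) == 14
```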

242 citations


Book
01 May 1997
TL;DR: This authoritative account of the classical theory of computable functions and relations takes the form of a tour through basic recursive function theory, starting with an axiomatic foundation and developing the essential methods in order to survey the most memorable results of the field.
Abstract: Broad in coverage, mathematically sophisticated, and up to date, this book provides an introduction to theories of computability. It treats not only "the" theory of computability (the theory created by Alan Turing and others in the 1930s), but also a variety of other theories (of Boolean functions, automata and formal languages) as theories of computability. These are addressed from the classical perspective of their generation by grammars and from the more modern perspective as rational cones. The treatment of the classical theory of computable functions and relations takes the form of a tour through basic recursive function theory, starting with an axiomatic foundation and developing the essential methods in order to survey the most memorable results of the field. This authoritative account, written by one of the leading lights of the subject, will be required reading for graduate students and researchers in theoretical computer science and mathematics.

129 citations


Journal ArticleDOI
08 Jan 1997
TL;DR: Surprisingly, stratified and well-founded semantics for negation turn out to have basic shortcomings in this context, while inflationary semantics emerges as an appealing alternative.
Abstract: The paper introduces a model of the Web as an infinite, semi-structured set of objects. We reconsider the classical notions of genericity and computability of queries in this new context and relate them to styles of computation prevalent on the Web, based on browsing and searching. We revisit several well-known declarative query languages (first-order logic, Datalog, and Datalog with negation) and consider their computational characteristics in terms of the notions introduced in this paper. In particular, we are interested in languages or fragments thereof which can be implemented by browsing, or by browsing and searching combined. Surprisingly, stratified and well-founded semantics for negation turn out to have basic shortcomings in this context, while inflationary semantics emerges as an appealing alternative.
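As a concrete, entirely hypothetical illustration (not the paper's formalism) of a query computable by browsing alone: starting from known objects and following links is the only access, so a query can be answered by browsing whenever its witnesses lie in the reachable part of the Web.

```python
from collections import deque

def browse(start, links, predicate):
    """All objects reachable from 'start' by link-following (BFS) that satisfy
    'predicate' — a toy model of a query implementable by browsing alone."""
    seen, queue, answers = {start}, deque([start]), []
    while queue:
        obj = queue.popleft()
        if predicate(obj):
            answers.append(obj)
        for nxt in links.get(obj, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return answers

# usage: objects not reachable from the start page can never appear in the answer
web = {"home": ["a", "b"], "a": ["c"], "b": [], "c": []}
print(browse("home", web, lambda o: o in {"b", "c", "unreachable"}))  # ['b', 'c']
```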

128 citations


Proceedings ArticleDOI
01 May 1997
TL;DR: A new model of query and computation on the Web is presented, focusing on two important aspects that distinguish the access to Web data from the access to a standard database system: the navigational nature of the access and the lack of concurrency control.
Abstract: We present a new formal model of query and computation on the Web. We focus on two important aspects that distinguish the access to Web data from the access to a standard database system: the navigational nature of the access and the lack of concurrency control. We show that these two issues have significant effects on the computability of queries. To illustrate the ideas and how they can be used in practice for designing appropriate Web query languages, we consider a particular query language, the Web calculus, an abstraction and extension of the practical Web query language WebSQL.

108 citations


Book
02 Oct 1997
TL;DR: A textbook presenting models of computation and the bounds of computability (Part I), followed by a hierarchy of automata and formal languages up to the Chomsky hierarchy (Part II).
Abstract: Preface Chapter 0 - Mathematical Preliminaries Part I: Models of Computation Chapter 1 - Turing Machines Chapter 2 - Additional Varieties of Turing Machines Chapter 3 - An Introduction to Recursion Theory Chapter 4 - Markov Algorithms Chapter 5 - Register Machines Chapter 6 - Post Systems (Optional) Chapter 7 - The Vector Machine Model of Parallel Computation (Optional) Chapter 8 - The Bounds of Computability Part II: A Hierarchy of Automata and Formal Languages Chapter 9 - Regular Languages and Finite-State Automata Chapter 10 - Context-Free Languages and Pushdown-Stack Automata Chapter 11 - Context-Free Languages and Compiler Design Theory (Optional) Chapter 12 - Context-Sensitive Languages and Linear Bounded Automata Chapter 13 - Generative Grammars and the Chomsky Hierarchy Epilogue

71 citations


Proceedings ArticleDOI
03 Nov 1997
TL;DR: Given the organization of the proposed SAT algorithm, the resulting ILP procedures implement powerful search pruning techniques, including a non-chronological backtracking search strategy, clause recording procedures and identification of necessary assignments.
Abstract: The computation of prime implicants has several significant applications in different areas, including automated reasoning, non-monotonic reasoning, and electronic design automation, among others. The authors describe a new model and algorithm for computing minimum-size prime implicants of propositional formulas. The proposed approach is based on creating an integer linear program (ILP) formulation for computing the minimum-size prime implicant, which simplifies existing formulations. In addition, they introduce two new algorithms for solving ILPs, both of which are built on top of an algorithm for propositional satisfiability (SAT). Given the organization of the proposed SAT algorithm, the resulting ILP procedures implement powerful search pruning techniques, including a non-chronological backtracking search strategy, clause recording procedures and identification of necessary assignments. Experimental results, obtained on several benchmark examples, indicate that the proposed model and algorithms are significantly more efficient than other existing solutions.
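For reference, here is a brute-force sketch of the problem being solved (ours, not the paper's ILP/SAT algorithm): a minimum-size implicant is a smallest conjunction of literals that entails the formula, and any minimum-size implicant is automatically prime, since dropping a literal would yield a smaller one.

```python
from itertools import combinations, product

def min_size_implicant(formula, n_vars):
    """Smallest implicant of a Boolean function given as a callable
    formula(assignment) over variables 0..n_vars-1.

    Brute-force reference only: try terms of increasing size; a term implies
    the formula iff every total assignment consistent with it satisfies the
    formula.  Feasible only for tiny formulas."""
    literals = [(v, b) for v in range(n_vars) for b in (False, True)]

    def implies(term):
        fixed = dict(term)
        free = [v for v in range(n_vars) if v not in fixed]
        for bits in product((False, True), repeat=len(free)):
            assignment = dict(fixed)
            assignment.update(zip(free, bits))
            if not formula(assignment):
                return False
        return True

    for k in range(n_vars + 1):
        for term in combinations(literals, k):
            if len({v for v, _ in term}) < k:   # skip terms containing x and not-x
                continue
            if implies(term):
                return term
    return None                                  # formula is unsatisfiable

# usage: F = (x0 or x1) and (x0 or not x2); x0 alone is a minimum-size prime implicant
F = lambda a: (a[0] or a[1]) and (a[0] or not a[2])
print(min_size_implicant(F, 3))                  # ((0, True),)
```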

71 citations


Book ChapterDOI
01 Jan 1997
TL;DR: This paper surveys the existing models and results in analog, continuous-time computation, and points to some of the open research questions.
Abstract: Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions.

66 citations


Book ChapterDOI
18 Dec 1997
TL;DR: The main result is a direct proof, via a small-step unloading machine, of the correctness of compilation to a closure-based abstract machine and of CIU equivalence for an object-oriented language.
Abstract: We adopt the untyped imperative object calculus of Abadi and Cardelli as a minimal setting in which to study problems of compilation and program equivalence that arise when compiling object-oriented languages. Our main result is a direct proof, via a small-step unloading machine, of the correctness of compilation to a closure-based abstract machine. Our second result is that contextual equivalence of objects coincides with a form of Mason and Talcott's CIU equivalence; the latter provides a tractable means of establishing operational equivalences. Finally, we prove correct an algorithm, used in our prototype compiler, for statically resolving method offsets. This is the first study of correctness of an object-oriented abstract machine, and of CIU equivalence for an object-oriented language.
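The flavour of the last result, static resolution of method offsets, can be conveyed with a small hypothetical sketch (ours, not the paper's object calculus or abstract machine): once the layout of an object's method table is fixed at compile time, a method name becomes an integer index, and dynamic dispatch becomes an indexed call on a table of closures.

```python
# Toy illustration of static method-offset resolution with closure-based objects.

def compile_layout(method_names):
    """Fix a layout once, 'at compile time': method name -> integer offset."""
    return {name: i for i, name in enumerate(method_names)}

layout = compile_layout(["get", "set", "bump"])

def make_counter(start):
    # an object as a method table of closures sharing private state
    state = {"v": start}
    table = [None] * len(layout)
    table[layout["get"]]  = lambda: state["v"]
    table[layout["set"]]  = lambda x: state.update(v=x)
    table[layout["bump"]] = lambda: state.update(v=state["v"] + 1)
    return table

OFF_BUMP = layout["bump"]      # offsets resolved statically, not looked up by name
OFF_GET  = layout["get"]

c = make_counter(41)
c[OFF_BUMP]()                  # dispatch is just an indexed call
print(c[OFF_GET]())            # 42
```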

50 citations


Book ChapterDOI
22 Jun 1997
TL;DR: A linear-time algorithm translates any formula of computation tree logic (CTL or CTL*) into an equivalent expression in a variable-confined fragment of transitive-closure logic FO(TC), whose formulas can be evaluated in NSPACE[log n].
Abstract: We give a linear-time algorithm to translate any formula from computation tree logic (CTL or CTL*) into an equivalent expression in a variable-confined fragment of transitive-closure logic FO(TC). Traditionally, CTL and CTL* have been used to express queries for model checking and then translated into μ-calculus for symbolic evaluation. Evaluation of μ-calculus formulas is, however, complete for time polynomial in the (typically huge) number of states in the Kripke structure. Thus, this is often not feasible, not parallelizable, and efficient incremental strategies are unlikely to exist. By contrast, evaluation of any formula in FO(TC) requires only NSPACE[log n]. This means that the space requirements are manageable, the entire computation is parallelizable, and efficient dynamic evaluation is possible.
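A small toy example (ours, not the paper's translation) of why CTL connectives fit transitive-closure logic: the formula EF p holds in a state exactly when some state satisfying p is reachable from it, i.e. when the pair lies in the reflexive-transitive closure of the transition relation.

```python
def ef(states, edges, p_states):
    """States of a Kripke structure satisfying the CTL formula EF p, computed
    as a transitive-closure (reachability) query.

    states: iterable of states; edges: set of (s, t) transitions;
    p_states: set of states labelled with p."""
    # reflexive-transitive closure of the transition relation, by iteration
    closure = {(s, s) for s in states} | set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in edges:
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return {s for s in states
            if any((s, t) in closure and t in p_states for t in states)}

# usage: a 4-state structure where p holds only in state 3
states = {0, 1, 2, 3}
edges = {(0, 1), (1, 2), (2, 3), (3, 3)}
print(ef(states, edges, {3}))      # {0, 1, 2, 3}
```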

47 citations


Book ChapterDOI
07 Jul 1997
TL;DR: This contribution introduces computability on the set M of probability measures on the Borel subsets of the unit interval to demonstrate that this concept of computability is not merely an ad hoc definition but has very natural properties.
Abstract: While computability theory on many countable sets is well established and for computability on the real numbers several (mutually non-equivalent) definitions are applied, for most other uncountable sets, in particular for measures, no generally accepted computability concepts at all have been available until now. In this contribution we introduce computability on the set M of probability measures on the Borel subsets of the unit interval [0, 1]. Its main purpose is to demonstrate that this concept of computability is not merely an ad hoc definition but has very natural properties. Although the definitions and many results can of course be transferred to more general spaces of measures, we restrict our attention to M in order to keep the technical details simple and concentrate on the central ideas. In particular, we show that simple obvious requirements exclude a number of similar definitions, that the definition leads to the expected computability results, that there are other natural definitions inducing the same computability theory and that the theory is embedded smoothly into classical measure theory. As background we consider TTE, Type 2 Theory of Effectivity [KW84, KW85], which provides a frame for very realistic computability definitions. In this approach, computability is defined on finite and infinite sequences of symbols explicitly by Turing machines and on other sets by means of notations and representations. Canonical representations are derived from information structures [Wei97]. We introduce a standard representation \(\delta_m : \subseteq \Sigma^\omega \to M\) via some natural information structure defined by a subbase σ (the atomic properties) of some topology τ on M and a standard notation of σ. While several modifications of δ_m suggesting themselves at first glance violate simple and obvious requirements, δ_m has several very natural properties and hence should induce an important computability theory. Many interesting functions on measures turn out to be computable, in particular linear combination, integration of continuous functions and any transformation defined by a computable iterated function system with probabilities. Some other natural representations of M are introduced, among them a Cauchy representation associated with the Hutchinson metric, and proved to be equivalent to δ_m. As a corollary, the final topology τ of δ_m is the well known weak topology on M.
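A toy sketch of how a Cauchy-style name with respect to the Hutchinson (Kantorovich) metric makes integration computable (our illustration, with invented helper names, not the paper's construction): if μ is named by finitely supported measures within 2^-n of μ in that metric, then for an L-Lipschitz integrand the integral against the n-th approximation is within L·2^-n of the true value, so a sufficiently deep approximation gives the integral to any requested precision.

```python
def integrate(f, lipschitz, mu_name, precision):
    """Integrate an L-Lipschitz function f against a measure on [0, 1] given by
    a Cauchy-style name: mu_name(n) returns a finitely supported measure, as a
    list of (weight, point) pairs, within 2**-n of the measure in the
    Hutchinson metric.  Purely illustrative."""
    n = 0
    while lipschitz * 2.0 ** -n > precision / 2:   # approximation error bound
        n += 1
    atoms = mu_name(n)
    return sum(w * f(x) for w, x in atoms)          # exact for a finite measure

# usage: a name approximating Lebesgue measure by 2**n equally weighted midpoints
mu = lambda n: [(1.0 / 2**n, (i + 0.5) / 2**n) for i in range(2**n)]
print(integrate(lambda x: x, 1.0, mu, 1e-3))        # ~0.5
```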

Journal ArticleDOI
TL;DR: In this paper, it was shown that if a social welfare function satisfying Unanimity and Independence also satisfies Pairwise Computability, then it is dictatorial, and this result severely limits on practical grounds Fishburn's resolution (1970) of Arrow's impossibility.
Abstract: A social welfare function for a denumerable society satisfies Pairwise Computability if for each pair (x,y) of alternatives, there exists an algorithm that can decide from any description of each profile on {x,y} whether the society prefers x to y. I prove that if a social welfare function satisfying Unanimity and Independence also satisfies Pairwise Computability, then it is dictatorial. This result severely limits on practical grounds Fishburn's resolution (1970) of Arrow's impossibility. I also give an interpretation of a denumerable “society.”

Book ChapterDOI
Gheorghe Paun
01 Jan 1997
TL;DR: First, a result about the so-called communicating distributed H systems is improved (systems with seven components are able to characterize the recursively enumerable languages), then two new types of distributed H systems are introduced: the separated two-level H systems and the periodically time-varying H systems, and it is proved that in all these cases one can design universal “DNA computers based on splicing”.
Abstract: Because splicing systems with a finite set of rules generate only regular languages, it is necessary to supplement such a system with a control mechanism on the use of rules. One fruitful idea is to use distributed architectures suggested by the grammar systems area. Three distributed computability (language generating) devices based on splicing are discussed here. First, we improve a result about the so-called communicating distributed H systems (systems with seven components are able to characterize the recursively enumerable languages — the best result known up to now used ten components), then we introduce two new types of distributed H systems: the separated two-level H systems and the periodically time-varying H systems. In both cases we prove characterizations of recursively enumerable languages — which means that in all these cases we can design universal “DNA computers based on splicing”.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: The first asynchronous complexity theorem is presented, applied to decision tasks in the iterated immediate snapshot (IIS) model of Borowsky and Gafni, and it states that the time complexity of any asynchronous algorithm is directly proportional to the level of nonuniform chromatic subdivisions necessary to allow a simplicial map from a task's input complex to its output complex.
Abstract: This paper introduces the use of topological models and methods, formerly used to analyze computability, as tools for the quantification and classification of asynchronous complexity. We present the first asynchronous complexity theorem, applied to decision tasks in the iterated immediate snapshot (IIS) model of Borowsky and Gafni. We do so by introducing a novel form of topological tool called the nonuniform chromatic subdivision. Building on the framework of Herlihy and Shavit's topological computability model, our theorem states that the time complexity of any asynchronous algorithm is directly proportional to the level of nonuniform chromatic subdivisions necessary to allow a simplicial map from a task's input complex to its output complex. To show the power of our theorem, we use it to derive a new tight bound on the time to achieve n-process approximate agreement in the IIS model: log_d((max input − min input)/ε), where d = 3 for two processes and d = 2 for three or more. This closes an intriguing gap between the known upper and lower bounds implied by the work of Aspnes and Herlihy. More than the new bounds themselves, the importance of our asynchronous complexity theorem is that the algorithms and lower bounds it allows us to derive are intuitive and simple, with topological proofs that require no mention of concurrency at all.
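A small numeric illustration of the stated bound (rounding up to a whole number of IIS rounds is our reading, not a claim taken from the paper):

```python
import math

def iis_rounds_bound(inputs, eps, n_processes):
    """Evaluate the approximate-agreement time bound log_d((max - min)/eps),
    with d = 3 for two processes and d = 2 for three or more; rounded up to a
    whole number of rounds for illustration."""
    d = 3 if n_processes == 2 else 2
    spread = max(inputs) - min(inputs)
    return math.ceil(math.log(spread / eps, d)) if spread > eps else 0

# usage: two processes, inputs 0 and 1, agreement within 0.1
print(iis_rounds_bound([0.0, 1.0], 0.1, 2))   # 3, i.e. ceil(log_3 10)
```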

Book
01 Jan 1997

Book ChapterDOI
22 Jun 1997
TL;DR: An overview of the verification tool μcke is presented, which is an implementation of a BDD-based μ-calculus model checker and uses several optimization techniques that are lifted from special purpose model checkers to the μ-Calculus.
Abstract: In this paper we present an overview of the verification tool μcke. It is an implementation of a BDD-based μ-calculus model checker and uses several optimization techniques that are lifted from special purpose model checkers to the μ-calculus. This gives the user more expressibility without losing efficiency.

Book ChapterDOI
11 Jul 1997
TL;DR: It is proved that languages over a one-letter alphabet accepted by randomized one-way 1-tape Monte Carlo pushdown automata are regular.
Abstract: Rather often, difficult (and sometimes even undecidable) problems become easily decidable for tally languages, i.e. for languages over a single-letter alphabet. For instance, the class of languages recognizable by 1-way nondeterministic pushdown automata equals the class of the context-free languages, but the class of the tally languages recognizable by 1-way nondeterministic pushdown automata contains only regular languages [LP81]. We prove that languages over a one-letter alphabet accepted by randomized one-way 1-tape Monte Carlo pushdown automata are regular. However, Monte Carlo pushdown automata can be much more concise than deterministic 1-way finite state automata.

Book ChapterDOI
07 Jul 1997
TL;DR: An account of the basic theory of confluence in the π-calculus is presented, techniques for showing confluence of mobile systems are given, and the utility of some of the theory presented is illustrated via an analysis of a distributed algorithm.
Abstract: An account of the basic theory of confluence in the π-calculus is presented, techniques for showing confluence of mobile systems are given, and the utility of some of the theory presented is illustrated via an analysis of a distributed algorithm.

Journal ArticleDOI
TL;DR: Interval analysis is used to characterize the set of all input sequences with a given length that drive a nonlinear discrete-time state-space system from a given initial state to a given set of terminal states.
Abstract: Interval analysis is used to characterize the set of all input sequences with a given length that drive a nonlinear discrete-time state-space system from a given initial state to a given set of terminal states. No requirement other than computability (i.e. ability to be evaluated by a finite algorithm) is put on the nature of the state equations. The method is based on an algorithm for set inversion and approximates the solution set in a guaranteed way.
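The following is a generic sketch of set inversion by interval (box) bisection, in the spirit of the method described but not its actual implementation; the inclusion function, toy state equation, and tolerances below are invented for illustration.

```python
def sivia(box, F, Y, eps):
    """Set inversion via interval analysis, generic sketch.

    box: list of (lo, hi) input intervals; F: inclusion function returning an
    interval guaranteed to enclose the range of f over a box; Y: target
    interval; eps: width below which undecided boxes are kept as boundary.
    Returns (inner, boundary): boxes proved inside f^-1(Y), and undecided ones.
    """
    lo, hi = F(box)
    if lo >= Y[0] and hi <= Y[1]:
        return [box], []                     # box maps entirely into Y
    if hi < Y[0] or lo > Y[1]:
        return [], []                        # box maps entirely outside Y
    widths = [b - a for a, b in box]
    if max(widths) < eps:
        return [], [box]                     # too small to decide: boundary
    i = widths.index(max(widths))            # bisect along the widest dimension
    a, b = box[i]
    m = (a + b) / 2
    left = box[:i] + [(a, m)] + box[i + 1:]
    right = box[:i] + [(m, b)] + box[i + 1:]
    in1, bd1 = sivia(left, F, Y, eps)
    in2, bd2 = sivia(right, F, Y, eps)
    return in1 + in2, bd1 + bd2

# usage: inputs (u0, u1) for the toy system x_{k+1} = x_k + u_k with x_0 = 0;
# which input pairs drive the final state x_2 into [0.9, 1.1]?
F = lambda box: (box[0][0] + box[1][0], box[0][1] + box[1][1])  # exact for a sum
inner, boundary = sivia([(-1.0, 1.0), (-1.0, 1.0)], F, (0.9, 1.1), 0.05)
```

The guaranteed character of the method comes from the inclusion function: every box classified as inner provably maps into the target set, and the undecided volume shrinks as eps decreases.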

Journal ArticleDOI
TL;DR: A systematic theoretical procedure for the constructive approximation of non-linear operators and how this procedure can be applied to the modelling of dynamical systems is proposed and shown.
Abstract: In this paper we propose a systematic theoretical procedure for the constructive approximation of non-linear operators and show how this procedure can be applied to the modelling of dynamical systems. We extend previous work to show that the model is stable to small disturbances in the input signal and we pay special attention to the role of real number parameters in the modelling process. The implications of computability are also discussed. A number of specific examples are presented for the particular purpose of illustrating the theoretical procedure.

Book ChapterDOI
23 Sep 1997
TL;DR: The join-calculus as discussed by the authors is a model for distributed programming languages with migratory features, which allows standard polymorphic ML-like typing and thus an integration in a realistic programming language.
Abstract: The join-calculus is a model for distributed programming languages with migratory features. It is an asynchronous process calculus based on static scope and an explicit notion of locality and failures. It allows standard polymorphic ML-like typing and thus an integration in a realistic programming language. It has a distributed implementation on top of the Caml language. We review here some of the results recently obtained in the join-calculus.

Book ChapterDOI
19 Aug 1997
TL;DR: The mechanisms for specifying definitions and for theorem proving are discussed separately, building in parallel two pictures of the different approaches of mechanisation given by these systems.
Abstract: This paper illustrates the differences between the style of theory mechanisation of Coq and of HOL. This comparative study is based on the mechanisation of fragments of the theory of computation in these systems. Examples from these implementations are given to support some of the arguments discussed in this paper. The mechanisms for specifying definitions and for theorem proving are discussed separately, building in parallel two pictures of the different approaches of mechanisation given by these systems.

Journal ArticleDOI
TL;DR: The notion of partial deduction known from logic programming is defined in the framework of Structural Synthesis of Programs (SSP) and completeness and correctness are proven.
Abstract: The notion of partial deduction known from logic programming is defined in the framework of Structural Synthesis of Programs (SSP). Partial deduction for computability statements in SSP is defined. Completeness and correctness of partial deduction in the framework of SSP are proven. Several tactics and stopping criteria are suggested.

Book ChapterDOI
01 Sep 1997
TL;DR: This work characterize real number complexity classes by purely logical means and mainly finds parallel classes which have not been studied in [10].
Abstract: We study real number complexity classes from a logical point of view. Following the approaches by Blum, Shub, and Smale [3] for computability and by Grädel and Meer [10] for descriptive complexity theory over the reals, we characterize such complexity classes by purely logical means. Among them we mainly find parallel classes which have not been studied in [10].

Journal ArticleDOI
TL;DR: An attempt is made to resurrect the pioneering work of Michael Rabin on a class of games that are arithmetically defined and recursion theoretically analysed for effective playability and computational and diophantine complexity.

Book ChapterDOI
01 Jan 1997
TL;DR: This paper contains a brief summary of the two parts of a lecture given at the international Congress for Logic, Methodology, and Philosophy of Science in Florence, August, 1995.
Abstract: This paper contains a brief summary of the two parts of a lecture given at the international Congress for Logic, Methodology, and Philosophy of Science in Florence, August, 1995. More detailed versions will appear in [80] and [30], respectively.

Book ChapterDOI
26 Aug 1997
TL;DR: Experimental results show that the compiler can execute sequential programs written in the process calculus only a few times slower than equivalent C programs, indicating that pure process calculi like the authors' and programming languages based on them can be implemented efficiently, without losing their simplicity, purity, and elegance.
Abstract: We propose a framework for compiling programming languages based on concurrent process calculi, in which computation is expressed by a combination of processes and communication channels. Our framework realizes a compile-time process scheduling and unboxed channels. The compile-time scheduling enables us to execute multiple independent processes without a scheduling pool operation. Unboxed channels allow us to create a channel without memory allocations and to communicate values on registers. The framework is given as a set of translation rules from a concurrent calculus to an ML-like sequential program. Experimental results show that our compiler can execute sequential programs written in the process calculus only a few times slower than equivalent C programs. This indicates that pure process calculi like ours and programming languages based on them can be implemented efficiently, without losing their simplicity, purity, and elegance.

Proceedings ArticleDOI
19 Oct 1997
TL;DR: This paper presents a solution to this computability problem by means of geometrical transformation of the nonlinearities and algebraic transformation of the time-dependent equations, leading to stable and accurate simulations even at relatively low sampling rates.
Abstract: Nonlinear acoustic systems are often described by means of nonlinear maps which act as instantaneous constraints on the solutions of a system of linear differential equations. This description leads to discrete-time models exhibiting non-computable loops. This paper presents a solution to this computability problem by means of geometrical transformation of the nonlinearities and algebraic transformation of the time-dependent equations. The proposed method leads to stable and accurate simulations even at relatively low sampling rates.
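The computability problem here is the delay-free loop: the current output feeds back into the nonlinearity with no unit delay, so the sample cannot be obtained by forward evaluation alone. The sketch below (ours, not the paper's transformation technique) makes the loop explicit and resolves it by solving the implicit per-sample equation numerically, assuming a unique root in the bracket.

```python
import math

def resolve_delay_free_loop(f, a, x, lo=-10.0, hi=10.0, tol=1e-9):
    """Solve the instantaneous (delay-free) loop  y = f(x + a*y)  for y by
    bisection.  Assumes f is such that g(y) = y - f(x + a*y) changes sign
    exactly once on [lo, hi]; the paper instead removes the loop by
    geometric/algebraic transformation of the equations."""
    g = lambda y: y - f(x + a * y)
    assert g(lo) * g(hi) <= 0, "no sign change: enlarge the bracket"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# usage: saturating nonlinearity with instantaneous feedback gain a = 0.5
y = resolve_delay_free_loop(math.tanh, 0.5, 1.0)
assert abs(y - math.tanh(1.0 + 0.5 * y)) < 1e-6
```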

Book ChapterDOI
01 Sep 1997
TL;DR: An extension of the Grzegorczyk Hierarchy to the BSS theory of computability which is a generalization of the classical theory is given.
Abstract: In this paper, we give an extension of the Grzegorczyk Hierarchy to the BSS theory of computability which is a generalization of the classical theory. We adapt some classical results ([3, 4]) related to the Grzegorczyk hierarchy in the new setting.

Journal ArticleDOI
TL;DR: In this paper, symbolic computability of a bilinearizability test is proved, underlining the main advantages and some basic weaknesses of symbolic computation when applied to nonlinear control problems.