
Showing papers on "Computability published in 1998"


Book ChapterDOI
Moshe Y. Vardi1
13 Jul 1998
TL;DR: The main result in this paper is an exponential-time upper bound for the satisfiability problem of the μ-calculus with both forward and backward modalities, obtained by developing a theory of two-way alternating automata on infinite trees.
Abstract: The μ-calculus can be viewed as essentially the “ultimate” program logic, as it expressively subsumes all propositional program logics, including dynamic logics, process logics, and temporal logics. It is known that the satisfiability problem for the μ-calculus is EXPTIME-complete. This upper bound, however, is known for a version of the logic that has only forward modalities, which express weakest preconditions, but not backward modalities, which express strongest postconditions. Our main result in this paper is an exponential-time upper bound for the satisfiability problem of the μ-calculus with both forward and backward modalities. To get this result we develop a theory of two-way alternating automata on infinite trees.

420 citations


Book
01 Jan 1998
TL;DR: In Models of Computation, John Savage re-examines theoretical computer science, offering a fresh approach that gives priority to resource tradeoffs and complexity classifications over the structure of machines and their relationships to languages.
Abstract: From the Publisher: "Your book fills the gap which all of us felt existed too long. Congratulations on this excellent contribution to our field." --Jan van Leeuwen, Utrecht University. "This is an impressive book. The subject has been thoroughly researched and carefully presented. All the machine models central to the modern theory of computation are covered in depth, many for the first time in textbook form. Readers will learn a great deal from the wealth of interesting material presented." --Andrew C. Yao, Professor of Computer Science, Princeton University. "Models of Computation is an excellent new book that thoroughly covers the theory of computation, including significant recent material, and presents it all with insightful new approaches. This long-awaited book will serve as a milestone for the theory community." --Akira Maruoka, Professor of Information Sciences, Tohoku University. "This is computer science." --Elliot Winard, Student, Brown University. In Models of Computation: Exploring the Power of Computing, John Savage re-examines theoretical computer science, offering a fresh approach that gives priority to resource tradeoffs and complexity classifications over the structure of machines and their relationships to languages. This viewpoint reflects a pedagogy motivated by the growing importance of computational models that are more realistic than the abstract ones studied in the 1950s, '60s, and early '70s. Assuming only some background in computer organization, Models of Computation uses circuits to simulate machines with memory, thereby making possible an early discussion of P-complete and NP-complete problems. Circuits are also used to demonstrate that tradeoffs between parameters of computation, such as space and time, regulate all computations by machines with memory. Full coverage of formal languages and automata is included, along with a substantive treatment of computability.
Topics such as space-time tradeoffs, memory hierarchies, parallel computation, and circuit complexity are integrated throughout the text with an emphasis on finite problems and concrete computational models. FEATURES: Includes introductory material for a first course on theoretical computer science. Builds on computer organization to provide an early introduction to P-complete and NP-complete problems. Includes a concise, modern presentation of regular, context-free, and phrase-structure grammars, parsing, finite automata, pushdown automata, and computability. Includes extensive, modern coverage of complexity classes. Provides an introduction to the advanced topics of space-time tradeoffs, memory hierarchies, parallel computation, the VLSI model, and circuit complexity, with parallelism integrated throughout. Contains over 200 figures and over 400 exercises along with an extensive bibliography. Instructor's materials are available from your sales rep. If you do not know your local sales representative, please call 1-800-552-2499 for assistance, or use the Addison Wesley Longman rep-locator at ...

311 citations


Posted Content
TL;DR: Infinite time Turing machines extend the operation of ordinary Turing machines into transfinite ordinal time, and provide a natural model of infinitary computability, a theoretical setting for the analysis of the power and limitations of supertask algorithms.
Abstract: We extend in a natural way the operation of Turing machines to infinite ordinal time, and investigate the resulting supertask theory of computability and decidability on the reals. The resulting computability theory leads to a notion of computation on the reals and concepts of decidability and semi-decidability for sets of reals as well as individual reals. Every Pi^1_1 set, for example, is decidable by such machines, and the semi-decidable sets form a portion of the Delta^1_2 sets. Our oracle concept leads to a notion of relative computability for reals and sets of reals and a rich degree structure, stratified by two natural jump operators.

184 citations


Journal ArticleDOI
TL;DR: Several types of sticker systems are shown to characterize (modulo a weak coding) the regular languages, hence the power of finite automata, and one variant is proven to be equivalent to Turing machines.
Abstract: We introduce sticker systems, a computability model which is an abstraction of the computations using the Watson-Crick complementarity as in Adleman's DNA computing experiment [1]. Several types of sticker systems are shown to characterize (modulo a weak coding) the regular languages, hence the power of finite automata. One variant is proven to be equivalent to Turing machines. Another one is found to have a strictly intermediate power.

110 citations


Proceedings ArticleDOI
24 Jul 1998
TL;DR: A general approach to designing investment strategies in which, instead of making statistical or other assumptions about the market, natural assumptions of computability are made about possible investment strategies; this approach leads to natural extensions of the notion of Kolmogorov complexity.
Abstract: A typical problem in portfolio selection in stock markets is that it is not clear which of the many available strategies should be used. We apply a general algorithm of prediction with expert advice (the Aggregating Algorithm) to two different idealizations of the stock market. One is the well-known game introduced by Cover in connection with his “universal portfolio” algorithm; the other is a more realistic modification of Cover’s game introduced in this paper, where the market’s participants are allowed to take “short positions”, so that the algorithm may be applied to currency and futures markets. Besides applying the Aggregating Algorithm to a countable (or finite) family of arbitrary investment strategies, we also apply it, in the case of Cover’s game, to the uncountable family of “constant rebalanced portfolios” considered by Cover. We generalize Cover’s worst-case bounds for his “universal portfolio” algorithm (which can be regarded as a special case of the Aggregating Algorithm corresponding to learning rate 1) to the case of learning rates not exceeding 1. Finally, we discuss a general approach to designing investment strategies in which, instead of making statistical or other assumptions about the market, natural assumptions of computability are made about possible investment strategies; this approach leads to natural extensions of the notion of Kolmogorov complexity.
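The Aggregating Algorithm over a finite pool of portfolios can be sketched in a few lines. This is a minimal illustration under our own assumptions (function name, uniform prior, toy data), not the paper's implementation: with learning rate 1 the mixture's final wealth equals the prior-weighted average of the experts' wealth, which is Cover's universal-portfolio scheme restricted to a finite pool.

```python
def aggregate_strategies(price_relatives, portfolios, eta=1.0):
    """Mix a finite pool of constant rebalanced portfolios with the
    Aggregating Algorithm at learning rate eta <= 1.

    price_relatives: list of per-step price-relative vectors x_t.
    portfolios: list of portfolio weight vectors (one per expert).
    Returns the mixture's final wealth and each expert's final wealth.
    """
    n_experts = len(portfolios)
    weights = [1.0 / n_experts] * n_experts   # uniform prior over experts
    expert_wealth = [1.0] * n_experts
    my_wealth = 1.0
    for x in price_relatives:
        # Master's portfolio: weight-average of the experts' portfolios.
        b = [sum(w * p[j] for w, p in zip(weights, portfolios))
             for j in range(len(x))]
        my_wealth *= sum(bj * xj for bj, xj in zip(b, x))
        # Each expert's one-step return and cumulative wealth.
        returns = [sum(pj * xj for pj, xj in zip(p, x)) for p in portfolios]
        expert_wealth = [ew * r for ew, r in zip(expert_wealth, returns)]
        # AA update: reweight experts by return**eta and renormalize.
        weights = [w * r ** eta for w, r in zip(weights, returns)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return my_wealth, expert_wealth
```

With eta = 1 and a uniform prior, the master's per-step return telescopes, so its final wealth is exactly the average of the experts' final wealths; smaller learning rates trade this guarantee for robustness.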

109 citations


Journal Article
TL;DR: This work compares transition super-cell systems with classic mechanisms in formal language theory (context-free and matrix grammars, E0L and ET0L systems), interpreted as generating mechanisms of number relations: the authors take the Parikh image of the language generated by these mechanisms rather than the language itself.
Abstract: We continue the investigation of the power of the computability models introduced in [Gh. Paun, Computing with membranes, TUCS Report 208, November 1998] under the name of transition super-cell systems. We compare these systems with classic mechanisms in formal language theory, context-free and matrix grammars, E0L and ET0L systems, interpreted as generating mechanisms of number relations (we take the Parikh image of the usual language generated by these mechanisms rather than the language). Several open problems are also formulated.

96 citations


Proceedings ArticleDOI
01 May 1998
TL;DR: This work presents a new HDL-satisfiability checking algorithm that works directly on the HDL model. The primary feature of this algorithm is a seamless integration of linear-programming techniques for feasibility checking of the arithmetic equations that govern the behavior of datapath modules, and 3-SAT checking for the logic equations that govern the behavior of control modules.
Abstract: Our strategy for automatic generation of functional vectors is based on exercising selected paths in the given hardware description language (HDL) model. The HDL model describes interconnections of arithmetic, logic and memory modules. Given a path in the HDL model, the search for input stimuli that exercise the path can be converted into a standard satisfiability checking problem by expanding the arithmetic modules into logic-gates. However, this approach is not very efficient. We present a new HDL-satisfiability checking algorithm that works directly on the HDL model. The primary feature of our algorithm is a seamless integration of linear-programming techniques for feasibility checking of arithmetic equations that govern the behavior of datapath modules, and 3-SAT checking for logic equations that govern the behavior of control modules. This feature is critically important to efficiency, since it avoids module expansion and allows us to work with logic and arithmetic equations whose cardinality tracks the size of the HDL model. We describe the details of the HDL-satisfiability checking algorithm in this paper. Experimental results which show significant speedups over state-of-the-art gate-level satisfiability checking methods are included.

92 citations


Journal ArticleDOI
TL;DR: Peter Wegner’s definition of computability differs markedly from the classical term as established by Church, Kleene, Markov, Post, Turing et al., and it is shown that Church’s thesis still holds.
Abstract: Peter Wegner’s definition of computability differs markedly from the classical term as established by Church, Kleene, Markov, Post, Turing et al. Wegner identifies interaction as the main feature of today’s systems which is lacking in the classical treatment of computability. We compare the different approaches and discuss whether or not Wegner’s criticism is appropriate. Taking into account the major arguments from the literature, we show that Church’s thesis still holds.

42 citations




Proceedings ArticleDOI
10 Nov 1998
TL;DR: A new parallel hybrid method for solving the satisfiability problem that combines cellular genetic algorithms and the random walk (WSAT) strategy of GSAT is presented.
Abstract: A new parallel hybrid method for solving the satisfiability problem that combines cellular genetic algorithms and the random walk (WSAT) strategy of GSAT is presented. The method, called CGWSAT, uses a cellular genetic algorithm to perform a global search on a random initial population of candidate solutions and a local selective generation of new strings. Global search is specialized in local search by adopting the WSAT strategy. CGWSAT has been implemented on a Meiko CS-2 parallel machine using a two-dimensional cellular automaton as a parallel computation model. The algorithm has been tested on randomly generated problems and some classes of problems from the DIMACS test set.
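The WSAT-style random-walk move that CGWSAT uses to specialize its global search can be sketched roughly as follows. This is a hedged illustration: the function name, the DIMACS-style clause encoding, and the noise parameter are our assumptions, not details from the paper.

```python
import random

def wsat_step(assignment, clauses, p_noise=0.5, rng=random):
    """One WSAT-style local move: pick a random unsatisfied clause and
    flip one of its variables -- at random with probability p_noise,
    otherwise the variable whose flip leaves the fewest unsatisfied
    clauses.  Clauses are lists of nonzero ints (DIMACS-style literals);
    assignment maps variable number -> bool."""
    def sat(clause, a):
        return any((lit > 0) == a[abs(lit)] for lit in clause)

    unsat = [c for c in clauses if not sat(c, assignment)]
    if not unsat:
        return assignment                    # already a model
    clause = rng.choice(unsat)
    vars_in_clause = [abs(lit) for lit in clause]
    if rng.random() < p_noise:
        v = rng.choice(vars_in_clause)       # noisy (random-walk) move
    else:
        def cost(v):                         # greedy move: count breaks
            a = dict(assignment)
            a[v] = not a[v]
            return sum(not sat(c, a) for c in clauses)
        v = min(vars_in_clause, key=cost)
    flipped = dict(assignment)
    flipped[v] = not flipped[v]
    return flipped
```

In CGWSAT this move plays the role of local search inside each cell of the cellular genetic algorithm; iterating it alone already solves small random instances.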

33 citations


Proceedings ArticleDOI
15 Apr 1998
TL;DR: This paper provides a system for capturing the complete execution cost of this approach, by accounting for CAD tool execution time, and investigates the applicability of this technology to solving a specific query problem, known as Boolean Satisfiability.
Abstract: Optimization and query problems provide the best clear opportunity for configurable computing systems to achieve a significant performance advantage over ASICs. Programmable hardware can be optimized to solve a specific problem instance that only needs to be solved once, and the circuit can be thrown away after its single execution. This paper investigates the applicability of this technology to solving a specific query problem, known as Boolean Satisfiability. We provide a system for capturing the complete execution cost of this approach, by accounting for CAD tool execution time. The key to this approach is to circumvent the standard CAD tools and directly generate circuits at runtime. A set of example circuits is presented as part of the system evaluation, and a complete implementation on the Xilinx XC6216 FPGA is presented.

Journal ArticleDOI
TL;DR: It is proved that any Turing RE open subset of R^q is a BSS RE set, while a Turing RE closed set may not be a BSS RE set.

Proceedings ArticleDOI
07 Sep 1998
TL;DR: Propositional satisfiability based algorithms for timing analysis are described, which introduce significant performance improvements over existing procedures and address the problems of circuit delay computation and path delay validation.
Abstract: The existence of false paths represents a significant and computationally complex problem in the timing analysis of combinational and sequential circuits. In this paper we describe propositional satisfiability based algorithms for timing analysis, which introduce significant performance improvements over existing procedures. In particular we address the problems of circuit delay computation and path delay validation, describing algorithms and providing experimental results for both problems.

Journal ArticleDOI
TL;DR: In this article, the authors pose the problem of diffractive computability which is equivalent to the factorization of a matrix into diagonal matrices and circulant matrices, and give a fast algorithm to solve the nonlinear equations involved in designing optical setups of the latter type.

Journal ArticleDOI
TL;DR: It is argued that systems compute even if their processes are not described as algorithmic, and the paper concludes with a suggestion for a semantic approach to computation.
Abstract: This paper challenges two orthodox theses: (a) that computational processes must be algorithmic; and (b) that all computed functions must be Turing-computable. Section 2 advances the claim that the works in computability theory, including Turing's analysis of the effective computable functions, do not substantiate the two theses. It is then shown (Section 3) that we can describe a system that computes a number-theoretic function which is not Turing-computable. The argument against the first thesis proceeds in two stages. It is first shown (Section 4) that whether a process is algorithmic depends on the way we describe the process. It is then argued (Section 5) that systems compute even if their processes are not described as algorithmic. The paper concludes with a suggestion for a semantic approach to computation.

Journal ArticleDOI
TL;DR: Proofs of time bounds, both implementation-independent and real-time, and of space requirements, both worst-case and average-case, are given in complete detail.
Abstract: Time and space limitations can be specified, and proven, in exactly the same way as functionality. Proofs of time bounds, both implementation-independent and real-time, and of space requirements, both worst-case and average-case, are given in complete detail.

Journal ArticleDOI
TL;DR: A verification framework for implicit invocation is developed based on Jones' rely/guarantee reasoning for concurrent systems [Jon83, Stø91], and the application of the framework is illustrated with several examples.
Abstract: Implicit invocation [SuN92, GaN91] has become an important architectural style for large-scale system design and evolution. This paper addresses the lack of specification and verification formalisms for such systems. A formal computational model for implicit invocation is presented. We develop a verification framework for implicit invocation that is based on Jones' rely/guarantee reasoning for concurrent systems [Jon83, Stø91]. The application of the framework is illustrated with several examples. The merits and limitations of the rely/guarantee paradigm in the context of implicit invocation systems are also discussed.

Proceedings Article
01 Jan 1998
TL;DR: It is inferred that any recursively enumerable language can be represented as the projection of the intersection of two minimal linear languages.
Abstract: We introduce two-sided sticker systems, the two-sided variant of a computability model introduced as an abstraction of Adleman's style of DNA computing and of the matching of the so-called Watson-Crick complements. Several types of sticker systems are shown to have the same power as regular grammars, one variant is found to represent the linear languages, and another one is proved to be able to represent any recursively enumerable language. From this result we infer that any recursively enumerable language can be represented as the projection of the intersection of two minimal linear languages.

Journal ArticleDOI
TL;DR: In this article, the authors develop diagram techniques for proving confluence in abstract reduction systems and give a systematic and uniform framework in which a number of known results, widely scattered throughout the literature, can be understood.
Abstract: We develop diagram techniques for proving confluence in abstract reduction systems. The underlying theory gives a systematic and uniform framework in which a number of known results, widely scattered throughout the literature, can be understood. These results include Newman's lemma, Lemma 3.1 of Winkler and Buchberger, the Hindley–Rosen lemma, the Request lemmas of Staples, the Strong Confluence lemma of Huet, and the lemma of De Bruijn.
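As a reminder of the best known of these results, Newman's lemma can be stated as follows (our paraphrase in standard notation, not the paper's formulation):

```latex
% Newman's lemma for an abstract reduction system $(A, \to)$,
% where $\to^{*}$ is the reflexive-transitive closure of $\to$.
\textbf{Newman's lemma.}
If $\to$ is terminating (there is no infinite chain
$a_0 \to a_1 \to a_2 \to \cdots$) and locally confluent
(whenever $a \to b$ and $a \to c$, there exists $d$ with
$b \to^{*} d$ and $c \to^{*} d$),
then $\to$ is confluent
(whenever $a \to^{*} b$ and $a \to^{*} c$, there exists $d$ with
$b \to^{*} d$ and $c \to^{*} d$).
```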

Proceedings ArticleDOI
25 Aug 1998
TL;DR: In this paper, an evolutionary method for solving the satisfiability problem is presented based on a parallel cellular genetic algorithm which performs global search on a random initial population of individuals and local selective generation of new strings according to newly defined genetic operators.
Abstract: The paper presents an evolutionary method for solving the satisfiability problem. It is based on a parallel cellular genetic algorithm which performs global search on a random initial population of individuals and local selective generation of new strings according to newly defined genetic operators. The algorithm adopts a diffusion model of information among chromosomes by realizing a two-dimensional cellular automaton. Global search is then specialized into local search by changing the assignment of the variable that leads to the greatest decrease in the total number of unsatisfied clauses. A parallel implementation of the algorithm has been realized on a CS-2 parallel machine.

Journal ArticleDOI
TL;DR: Evidence is given that most trajectories of the Lorenz system of ordinary differential equations are pointwise computable on time intervals of moderate length using an adaptive finite element method, together with a description of the structure of the solutions as a repeated process of cutting-expanding-flipping-interlacing.
Abstract: We give evidence that most trajectories of the Lorenz system of ordinary differential equations are pointwise computable on time intervals of moderate length using an adaptive finite element method. Based on accurate computation, we present a description of the structure of the solutions of the Lorenz system as a repeated process of cutting-expanding-flipping-interlacing. We also give some general remarks on issues of computability and the relation between discrete and continuous dynamical systems.
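The Lorenz system being computed can be written down and integrated in a few lines. The sketch below uses a fixed-step classical Runge-Kutta scheme with the standard parameters (sigma = 10, rho = 28, beta = 8/3) purely for illustration; the paper's computations rely on an adaptive finite element method with error control, which this sketch does not attempt to reproduce.

```python
def lorenz_rk4(state, dt=0.01, steps=1000,
               sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system
        x' = sigma * (y - x)
        y' = x * (rho - z) - y
        z' = x * y - beta * z
    with the classical fixed-step fourth-order Runge-Kutta method."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def step_from(s, k, c):          # s + c * k, componentwise
        return tuple(si + c * ki for si, ki in zip(s, k))

    s = tuple(float(v) for v in state)
    for _ in range(steps):
        k1 = f(s)
        k2 = f(step_from(s, k1, dt / 2))
        k3 = f(step_from(s, k2, dt / 2))
        k4 = f(step_from(s, k3, dt))
        s = tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
    return s
```

Trajectories starting near the attractor stay bounded, while nearby initial states separate exponentially, which is exactly why pointwise computability on long time intervals is delicate.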

Book ChapterDOI
01 Jan 1998
TL;DR: The formation of the current concept of computability (recursivity) is one of the major achievements of twentieth-century logical theory and its robustness is a telling indication that the notion so captured is indeed theoretically important.
Abstract: The formation of the current concept of computability (recursivity) is one of the major achievements of twentieth-century logical theory (see e.g. Davis 1965, Rogers 1967, ch. 1). What is especially impressive about this notion is its robustness. Originally, different logicians proposed different and prima facie completely unrelated concepts of computability, among them computability by a Turing machine, definability à la Kleene through a finite number of recursion equations, and expressibility in Church’s lambda-calculus. As if by miracle, these different characterizations of computability turned out to coincide. This consilience of apparently unrelated definitions is a telling indication that the notion so captured is indeed theoretically important. It lends prima facie support to the thesis of Church (1936) that mechanically determined (finitely computable) functions (in a pretheoretical sense) are precisely the recursive ones.

Journal ArticleDOI
TL;DR: A model of computation for string functions over single-sorted, total algebraic structures and some basic features of a general theory of computability within this framework are presented, and relationships both to classical computability and to Friedman's concept of eds computability are established.
Abstract: We present a model of computation for string functions over single-sorted, total algebraic structures and study some basic features of a general theory of computability within this framework. Our concept generalizes the Blum-Shub-Smale setting of computability over the reals and other rings. By dealing with strings of arbitrary length instead of tuples of fixed length, some suppositions of deeper results within former approaches to generalized recursion theory become superfluous. Moreover, this gives the basis for introducing computational complexity in a BSS-like manner. Relationships both to classical computability and to Friedman's concept of eds computability are established. Two kinds of nondeterminism as well as several variants of recognizability are investigated with respect to interdependencies on each other and on properties of the underlying structures. For structures of finite signatures, there are universal programs with the usual characteristics. For the general case of not necessarily finite signature, this subject will be studied in a separate, forthcoming paper.

Journal ArticleDOI
TL;DR: It is observed that every first-order logic formula over the untyped version of some many-sorted vocabulary is equivalent to a union of many-sorted formulas over that vocabulary.
Abstract: We observe that every first-order logic formula over the untyped version of some many-sorted vocabulary is equivalent to a union of many-sorted formulas over that vocabulary. This result has as direct corollary a theorem by Hull and Su on the expressive power of active-domain quantification in the relational calculus.

Posted Content
Masanao Ozawa1
TL;DR: In this article, the conceptual relation between the measurability of quantum mechanical observables and the computability of numerical functions is re-examined, and a new formulation is given for the notion of measurability with finite precision in order to reconcile the conflict alleged by M. A. Nielsen [Phys. Rev. Lett. 79, 2915 (1997)] that the measurability of a certain observable contradicts the Church-Turing thesis.
Abstract: The conceptual relation between the measurability of quantum mechanical observables and the computability of numerical functions is re-examined. A new formulation is given for the notion of measurability with finite precision in order to reconcile the conflict alleged by M. A. Nielsen [Phys. Rev. Lett. 79, 2915 (1997)] that the measurability of a certain observable contradicts the Church-Turing thesis. It is argued that any function computable by a quantum algorithm is a recursive function obeying the Church-Turing thesis, whereas any observable can be measured in principle.

Journal ArticleDOI
TL;DR: This paper reviews and formalizes algorithms for probabilistic inference on causal probabilistic networks (CPNs), also known as Bayesian networks, and introduces Probanet, a development environment for CPNs.

Book ChapterDOI
01 Jan 1998
TL;DR: The computability and the equational definability of SESs are examined and it is shown that, in the discrete case, there is a natural sense in which an SES is computable if, and only if, it is definable by equations.
Abstract: First, we study the general idea of a spatially extended system (SES) and argue that many mathematical models of systems in computing and natural science are examples of SESs. We examine the computability and the equational definability of SESs and show that, in the discrete case, there is a natural sense in which an SES is computable if, and only if, it is definable by equations. We look at a simple idea of hierarchical structure for SESs and, using respacings and retimings, we define how one SES abstracts, approximates, or is implemented by another SES. Secondly, we study a special kind of SES called a synchronous concurrent algorithm (SCA). We define the simplest kind of SCA with a global clock and unit delay which are computable and equationally definable by primitive recursive equations over time. We focus on two examples of SCAs: a systolic array for convolution and a non-linear model of cardiac tissue. We investigate the hierarchical structure of SCAs by applying the earlier general concepts for the hierarchical structure of SESs. We apply the resulting SCA hierarchy to the formal analysis of both the implementation of a systolic array and the approximation of a biologically detailed model of cardiac tissue.

Journal ArticleDOI
TL;DR: This paper proposes a refinement of the real-number model of computation by adding the condition “every partial input or output information of an algorithm is finite” to the assumptions of the IBC model of computation, and explains computability and computational complexity in TTE for the simple case of real functions.

Proceedings ArticleDOI
15 Jun 1998
TL;DR: In this article, the authors study the complexity and the efficient approximability of graph and satisfiability problems when specified using various kinds of previously studied periodic specifications, characterizing the complexities of several basic generalized CNF satisfiability problems SAT(S).
Abstract: We study the complexity and the efficient approximability of graph and satisfiability problems when specified using various kinds of periodic specifications studied previously. We obtain two general results. First, we characterize the complexities of several basic generalized CNF satisfiability problems SAT(S), when instances are specified using various kinds of 1- and 2-dimensional periodic specifications. We outline how this characterization can be used to prove a number of new hardness results for periodically specified problems for various complexity classes. As one corollary, we show that a number of basic NP-hard problems become EXPSPACE-hard when inputs are represented using 1-dimensional infinite periodic wide specifications, thereby answering an open question. Second, we outline a simple yet general technique to devise approximation algorithms with provable worst-case performance guarantees for a number of combinatorial problems specified periodically. Our efficient approximation algorithms and schemes are based on extensions of the previous ideas. They provide the first nontrivial collection of natural NEXPTIME-hard problems that have an ε-approximation (or PTAS).

Book ChapterDOI
K. Ko1
TL;DR: This chapter presents a survey on one of the polynomial-time complexity theories of numerical computation based on the model of recursive analysis, and a hierarchical classification of the computational complexity of numerical problems is presented in terms of the relations among discrete complexity classes.
Abstract: Publisher Summary The chapter discusses the analysis of Polynomial-Time Computability. Polynomial-time complexity theory identifies the intuitive notion of feasible computability with the formal notion of polynomial-time computability, and allows studying the problems that are feasible and those that are infeasible. This theory closely ties the tools in abstract mathematics with practical computational problems. It has made strong impact on almost every area of discrete computation, including graph theory, combinatorial optimization, computational geometry, computational number theory, and modern cryptography. This chapter presents a survey on one of the polynomial-time complexity theories of numerical computation based on the model of recursive analysis. The advantage of using this model of computation is that it is compatible with the model of discrete complexity theory, and so the complexity of numerical problems may be studied and compared with discrete problems. A hierarchical classification of the computational complexity of numerical problems is presented in terms of the relations among discrete complexity classes. The chapter discusses different approaches to the polynomial-time complexity theory of real functions, including the works of Blum et al.
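The notion surveyed here can be made concrete with a small example: in the recursive-analysis model, a real number is polynomial-time computable when a dyadic approximation to within 2^-n can be produced in time polynomial in n. The sketch below is our illustration only (the function name and the bisection approach are assumptions, not the chapter's material); it produces such approximations for square roots using exact rational arithmetic.

```python
from fractions import Fraction

def sqrt_approx(x, n):
    """Return a rational approximation of sqrt(x) to within 2**-n,
    for a nonnegative integer x, by interval bisection with exact
    rational arithmetic (about n + log2(max(1, x)) iterations)."""
    lo, hi = Fraction(0), Fraction(max(1, x))
    eps = Fraction(1, 2 ** n)
    # Invariant: lo**2 <= x and x <= hi**2, so sqrt(x) lies in [lo, hi].
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= x:
            lo = mid
        else:
            hi = mid
    return lo
```

Because each bisection step halves the interval, the running time grows only linearly in the precision parameter n, which is what places a number like sqrt(2) in the polynomial-time computable reals.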