
Showing papers on "Computability published in 2001"


Book ChapterDOI
13 Dec 2001
TL;DR: In this paper, the authors investigate computability and complexity properties of a subset of the language, which allows statements about the shape of pointer structures (such as "there is a link from x to y") to be made, but not statements about data held in cells.
Abstract: This paper studies a recently developed approach to reasoning about mutable data structures, which uses an assertion language with spatial conjunction and implication connectives. We investigate computability and complexity properties of a subset of the language, which allows statements about the shape of pointer structures (such as "there is a link from x to y") to be made, but not statements about the data held in cells (such as "x is a prime number"). We show that validity, even for this restricted language, is not r.e., but that the quantifier-free sublanguage is decidable. We then consider the complexity of model checking and validity for several fragments.
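For readers unfamiliar with such assertion languages, a typical shape-only assertion of the kind considered here (an illustrative formula written in LaTeX notation, not one taken from the paper) is

    (x \mapsto y) \ast (y \mapsto \mathrm{nil})

which says that the heap splits into two disjoint cells, one at x containing a link to y and one at y containing nil; the restricted sublanguage admits such spatial statements but no arithmetic constraints on the stored data.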

133 citations


Book ChapterDOI
19 Feb 2001
TL;DR: The work of Hartmanis, Stearns, Karp, Cook and others showed that the refinement of the theory to resource-bounded computations gave the means to explain the many intuitions concerning the complexity or ‘hardness’ of algorithmic problems in a precise and rigorous framework.
Abstract: The importance of algorithms is now recognized in all mathematical sciences, thanks to the development of computability and computational complexity theory in the 20th century. The basic understanding of computability theory developed in the nineteen thirties with the pioneering work of mathematicians like Gödel, Church, Turing and Post. Their work provided the mathematical basis for the study of algorithms as a formalized concept. The work of Hartmanis, Stearns, Karp, Cook and others in the nineteen sixties and seventies showed that the refinement of the theory to resource-bounded computations gave the means to explain the many intuitions concerning the complexity or ‘hardness’ of algorithmic problems in a precise and rigorous framework.

112 citations


Book
01 Jan 2001
TL;DR: Concise, focused materials cover the most fundamental concepts and results in the field of modern complexity theory, including the theory of NP-completeness, NP-hardness, the polynomial hierarchy, and complete problems for other complexity classes.
Abstract: This revised and extensively expanded edition of Computability and Complexity Theory comprises essential materials that are core knowledge in the theory of computation. The book is self-contained, with a preliminary chapter describing key mathematical concepts and notations. Subsequent chapters move from the qualitative aspects of classical computability theory to the quantitative aspects of complexity theory. Dedicated chapters on undecidability, NP-completeness, and relative computability focus on the limitations of computability and the distinctions between feasible and intractable. Substantial new content in this edition includes: a chapter on nonuniformity, studying Boolean circuits, advice classes and the important result of Karp and Lipton; a chapter studying properties of the fundamental probabilistic complexity classes; a study of the alternating Turing machine and uniform circuit classes; an introduction to counting classes, proving the famous results of Valiant and Vazirani and of Toda; and a thorough treatment of the proof that IP is identical to PSPACE. With its accessibility and well-devised organization, this text/reference is an excellent resource and guide for those looking to develop a solid grounding in the theory of computing. Beginning graduates, advanced undergraduates, and professionals involved in theoretical computer science, complexity theory, and computability will find the book an essential and practical learning tool. Topics and features: concise, focused materials cover the most fundamental concepts and results in the field of modern complexity theory, including the theory of NP-completeness, NP-hardness, the polynomial hierarchy, and complete problems for other complexity classes; contains information that otherwise exists only in the research literature and presents it in a unified, simplified manner; provides key mathematical background information, including sections on logic and on number theory and algebra; supported by numerous exercises and supplementary problems for reinforcement and self-study purposes.

102 citations


Journal ArticleDOI
Xian Liu1
TL;DR: This paper proposes a new filled function that needs only one parameter and does not include exponential terms, and has better computability than the traditional ones.
Abstract: The Filled Function Method is an approach to finding global minima of multidimensional nonconvex functions. The traditional filled functions have features that may affect their computability when applied to numerical optimization. This paper proposes a new filled function. This function needs only one parameter and does not include exponential terms. Also, the lower bound of the weight factor a is usually smaller than that of a previous formulation. Therefore, the proposed new function has better computability than the traditional ones.
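For background only (this is not the function proposed in the paper), a classical two-parameter filled function of the kind being improved upon is Ge's

    P(x; r, \rho) = \frac{1}{r + F(x)} \, \exp\left( -\frac{\lVert x - x_1 \rVert^2}{\rho^2} \right)

where F is the objective, x_1 is the current local minimizer, and r and \rho are parameters; the exponential term and the need to tune two parameters are precisely the features the new one-parameter function avoids.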

98 citations


Book ChapterDOI
03 Oct 2001
TL;DR: Effective (i.e., recursive) characterizations of the relations that can be computed on networks where all processors use the same algorithm, start from the same state, and know at least a bound on the network size are provided.
Abstract: We provide effective (i.e., recursive) characterizations of the relations that can be computed on networks where all processors use the same algorithm, start from the same state, and know at least a bound on the network size. Three activation models are considered (synchronous, asynchronous, interleaved).

96 citations


Posted Content
TL;DR: A quantum algorithm for Hilbert's tenth problem, which is equivalent to the Turing halting problem and is known to be mathematically noncomputable, is proposed in which quantum continuous variables and quantum adiabatic evolution are employed.
Abstract: We explore, in the framework of Quantum Computation, the notion of Computability, which holds a central position in Mathematics and Theoretical Computer Science. A quantum algorithm for Hilbert's tenth problem, which is equivalent to the Turing halting problem and is known to be mathematically noncomputable, is proposed in which quantum continuous variables and quantum adiabatic evolution are employed. If this algorithm could be physically implemented, as much as it is valid in principle (that is, if a certain Hamiltonian and its ground state can be physically constructed according to the proposal), quantum computability would surpass classical computability as delimited by the Church-Turing thesis. It is thus argued that computability, and with it the limits of Mathematics, ought to be determined not solely by Mathematics itself but also by Physical Principles.
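The general idea behind such a proposal (a hedged sketch reconstructed from the abstract, not the paper's exact construction) is to map a given Diophantine equation D(x_1, ..., x_k) = 0 onto a Hamiltonian acting on the Fock states of k bosonic modes,

    H = \big( D(a_1^\dagger a_1, \ldots, a_k^\dagger a_k) \big)^2

so that the equation has a solution in non-negative integers exactly when the ground-state energy of H is zero; the quantum adiabatic evolution is then meant to prepare and identify that ground state.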

79 citations


Journal ArticleDOI
TL;DR: The computability of non-linear problems in solid and structural mechanics is examined, with particular emphasis on engineering calculations, where many of the factors that diminish computability play a prominent role.

Abstract: The computability of non-linear problems in solid and structural mechanics is examined. Several factors which contribute to the level of difficulty of a simulation are discussed: the smoothness and stability of the response, the required resolution, the uncertainties in the load, boundary conditions and initial conditions, and the inadequacies and uncertainties in the constitutive equation. An abstract measure of the level of difficulty is proposed, and some examples of typical engineering simulations are classified by this measure. We have put particular emphasis on engineering calculations, where many of the factors that diminish computability play a prominent role. Copyright © 2001 John Wiley & Sons, Ltd.

58 citations



Book ChapterDOI
29 Oct 2001
TL;DR: This article combines a semi-decision procedure for recurrence with a semi-decision method for length-boundedness of paths to obtain an automatic verification method for progress properties of linear and polynomial hybrid automata that may fail only on pathological, practically uninteresting cases.
Abstract: Hybrid automata have been introduced in both control engineering and computer science as a formal model for the dynamics of hybrid discrete-continuous systems. While computability issues concerning safety properties have been extensively studied, liveness properties have remained largely uninvestigated. In this article, we investigate decidability of state recurrence and of progress properties. First, we show that state recurrence and progress are in general undecidable for polynomial hybrid automata. Then, we demonstrate that they are closely related for hybrid automata subject to a simple model of noise, even though these automata are infinite-state systems. Based on this, we augment a semi-decision procedure for recurrence with a semi-decision method for length-boundedness of paths in such a way that we obtain an automatic verification method for progress properties of linear and polynomial hybrid automata that may fail only on pathological, practically uninteresting cases. These cases are such that satisfaction of the desired progress property crucially depends on the complete absence of noise, a situation unlikely to occur in real hybrid systems.

51 citations


Journal ArticleDOI
TL;DR: An innovative approach for solving satisfiability problems for propositional formulas in conjunctive normal form (SAT) by creating a logic circuit that is specialized to solve each problem instance on field programmable gate arrays (FPGAs).
Abstract: This paper reports on an innovative approach for solving satisfiability problems for propositional formulas in conjunctive normal form (SAT) by creating a logic circuit that is specialized to solve each problem instance on field programmable gate arrays (FPGAs). This approach has become feasible due to recent advances in reconfigurable computing and has opened up an exciting new research field in algorithm design. SAT is an important subclass of constraint satisfaction problems, which can formalize a wide range of application problems. We have developed a series of algorithms that are suitable for a logic circuit implementation, including an algorithm whose performance is equivalent to the Davis-Putnam procedure with powerful dynamic variable ordering. Simulation results show that this method can solve a hard random 3-SAT problem with 400 variables within 1.6 min at a clock rate of 10 MHz. Faster speeds can be obtained by increasing the clock rate. Furthermore, we have actually implemented a 128-variable 256-clause problem instance on FPGAs.
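As a software baseline for comparison (an illustrative sketch in Python, not the circuit algorithm described in the paper), a Davis-Putnam/DPLL-style procedure with a simple dynamic variable ordering, here choosing the most frequently occurring variable, can be written as follows. Clauses are lists of integer literals, with negative integers denoting negated variables.

    def unit_propagate(clauses, assignment):
        # Repeatedly assign variables forced by unit clauses; detect conflicts.
        changed = True
        while changed:
            changed = False
            remaining = []
            for clause in clauses:
                lits = [l for l in clause if -l not in assignment]
                if any(l in assignment for l in lits):
                    continue                    # clause already satisfied
                if not lits:
                    return None, None           # conflict: clause falsified
                if len(lits) == 1:
                    assignment.add(lits[0])     # unit clause forces this literal
                    changed = True
                else:
                    remaining.append(lits)
            clauses = remaining
        return clauses, assignment

    def choose_var(clauses):
        # Dynamic variable ordering: pick the most frequently occurring variable.
        counts = {}
        for clause in clauses:
            for lit in clause:
                counts[abs(lit)] = counts.get(abs(lit), 0) + 1
        return max(counts, key=counts.get)

    def dpll(clauses, assignment=frozenset()):
        clauses, assignment = unit_propagate([list(c) for c in clauses], set(assignment))
        if clauses is None:
            return None                         # this branch is unsatisfiable
        if not clauses:
            return assignment                   # all clauses satisfied
        v = choose_var(clauses)
        for lit in (v, -v):                     # branch on both polarities
            result = dpll(clauses + [[lit]], assignment)
            if result is not None:
                return result
        return None

    # Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))    # prints a satisfying set of literals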

49 citations


Proceedings ArticleDOI
05 Sep 2001
TL;DR: This study focuses on regular neighborhood structures and shows that arrays of any dimension, cyclic arrays and trees are special kinds of GBF, a new high-level programming abstraction which extends the concept of data collection.
Abstract: We introduce a new high-level programming abstraction which extends the concept of data collection. The new construct, called GBF (for Group-Based Data-Field), is based on an algebra of index sets, called a shape, and a functional extension of the array type, the field type. Shape constructions are based on group theory and put the emphasis on the logical neighborhood of the data structure elements. A field is a function from a shape to some set of values. In this study, we focus on regular neighborhood structures and we show that arrays of any dimension, cyclic arrays and trees are special kinds of GBF. The recursive definitions of a GBF are then studied, and we provide some elements for an implementation and some computability results in the case of recursive definitions.
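A minimal Python sketch (hypothetical names, an illustration of the idea rather than the authors' notation) makes this concrete: the shape is the index set of a group, here the cyclic group Z/nZ, and a field is a function from that shape to values, with the logical neighbourhood given by the group generators +1 and -1.

    class CyclicField:
        # A field over the shape Z/nZ: values indexed by the cyclic group.
        def __init__(self, n, init):
            self.n = n
            self.values = {i: init(i) for i in range(n)}

        def neighbours(self, i):
            # Logical neighbourhood obtained by applying the generators +1 and -1.
            return [(i + 1) % self.n, (i - 1) % self.n]

        def recompute(self, f):
            # Recursive-style definition: each new value depends on the old
            # value and the values at the neighbouring indices.
            return {i: f(self.values[i], [self.values[j] for j in self.neighbours(i)])
                    for i in range(self.n)}

    field = CyclicField(5, init=lambda i: i)
    print(field.recompute(lambda v, nb: v + sum(nb)))   # {0: 5, 1: 3, 2: 6, 3: 9, 4: 7}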

Journal ArticleDOI
TL;DR: The highly involved proof of the classical theorem of Stafford, which says that every left ideal of the ring of partial differential operators with rational or even polynomial coefficients in n variables can be generated by two elements, is reorganized and completed.

Journal ArticleDOI
TL;DR: A new type of computational resource in a distributed setting, namely the willingness of various parties to disclose information necessary for the computation, is identified, and results concerning the communication complexity of rational computation are presented.

Journal ArticleDOI
TL;DR: Several autarky systems for generalized clause-sets are introduced, their relation to polynomial-time computability is examined, and the relation of monoids and autarkies to ordered sets is discussed.

Journal ArticleDOI
TL;DR: The theory and the practice of optimal preconditioning in solving a linear system by iterative processes are founded on some theoretical facts understandable in terms of a class V of spaces of matrices including diagonal algebras and group matrix algebras.

Book ChapterDOI
08 Jul 2001
TL;DR: It is proved that the downward closure of the language generated by any 0L-system, i.e., a context-free parallel rewriting system, is effectively regular, and that for context-free rewriting systems the corresponding language can be computed.
Abstract: Although the set of reachable states of a lossy channel system (LCS) is regular, it is well known that this set cannot be constructed effectively. In this paper, we characterize significant classes of LCS for which the set of reachable states can be computed. Furthermore, we show that, for slight generalizations of these classes, computability can no longer be achieved. To carry out our study, we define rewriting systems which capture the behaviour of LCS, in the sense that (i) they have a FIFO-like semantics and (ii) their languages are downward closed with respect to the substring relation. The main result of the paper shows that, for context-free rewriting systems, the corresponding language can be computed. An interesting consequence of our results is that we get a characterization of classes of meta-transitions whose post-images can be effectively constructed. These meta-transitions consist of sets of nested loops in the control graph of the system, in contrast to previous works on meta-transitions in which only single loops are considered. Essentially the same proof technique we use to show the result mentioned above also allows us to establish a result in the theory of 0L-systems, i.e., context-free parallel rewriting systems. We prove that the downward closure of the language generated by any 0L-system is effectively regular.
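To make the key notion concrete, the sketch below (an illustration of the definition in Python, not the construction used in the paper) computes the downward closure of a finite language under the subword ordering, i.e. the set of all words obtainable by deleting letters, which is how message loss in a lossy FIFO channel is modelled.

    from itertools import combinations

    def downward_closure(words):
        # All scattered subwords of the given words: every word obtained by
        # deleting an arbitrary set of letters is in the downward closure.
        closed = set()
        for w in words:
            for k in range(len(w) + 1):
                for positions in combinations(range(len(w)), k):
                    closed.add(''.join(w[i] for i in positions))
        return closed

    print(sorted(downward_closure({'abc'})))
    # ['', 'a', 'ab', 'abc', 'ac', 'b', 'bc', 'c']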


Journal ArticleDOI
TL;DR: The computability of conventional filled functions is limited because they are defined in terms of either exponential or logarithmic functions. In this paper, the authors propose a new filled function that does not have these disadvantages and show that the new function is superior to the conventional ones.

Journal ArticleDOI
TL;DR: This paper gives an answer to Weihrauch's question whether and, if not always, when an effective map between the computable elements of two represented sets can be extended to a (partial) computable map between the represented sets.

Journal Article
TL;DR: A competition between systems for doing exact real number computations was held in September 2000 and the results obtained are presented and a short evaluation of the different approaches used.
Abstract: A competition between systems for doing exact real number computations was held in September 2000. We present the results obtained and give a short evaluation of the different approaches used.

Journal ArticleDOI
TL;DR: It is shown that all nonsurjective cellular automata destroy randomness and surjective cellular automata preserve randomness, and that all one-dimensional cellular automata preserve nonrandomness.
Abstract: We give various characterizations for algorithmically random configurations on full shift spaces, based on randomness tests. We show that all nonsurjective cellular automata destroy randomness and surjective cellular automata preserve randomness. Furthermore, all one-dimensional cellular automata preserve nonrandomness. The last three assertions are also true if one replaces randomness by richness, a form of pseudorandomness which is compatible with computability. The last assertion is true even for an arbitrary dimension.
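As a concrete illustration of the objects involved (shown on a finite periodic window purely for demonstration; the results above concern infinite configurations on full shift spaces), elementary rule 90 is a surjective one-dimensional cellular automaton, so by the theorem it maps algorithmically random configurations to random ones.

    def rule90_step(config):
        # One step of elementary cellular automaton rule 90, a surjective CA:
        # each cell becomes the XOR of its two neighbours (periodic boundary).
        n = len(config)
        return [config[(i - 1) % n] ^ config[(i + 1) % n] for i in range(n)]

    print(rule90_step([0, 1, 0, 0, 1, 1, 0, 1]))   # [0, 0, 1, 1, 1, 1, 0, 0]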

Proceedings ArticleDOI
14 Jun 2001
TL;DR: A decision procedure for LTL over Mazurkiewicz traces is exhibited which generalises the classical automata-theoretic approach to a linear time temporal logic interpreted no longer over sequences but certain partial orders.
Abstract: Linear time temporal logic (LTL) has become a well-established tool for specifying the dynamic behaviour of reactive systems with an interleaving semantics, and the automata-theoretic approach has proven to be a very useful mechanism for performing automatic verification in this setting. Alternating automata in particular have turned out to be a powerful tool for constructing efficient yet simple-to-understand decision procedures, and they directly yield further on-the-fly model checking procedures. In this paper we exhibit a decision procedure for LTL over Mazurkiewicz traces which generalises the classical automata-theoretic approach to a linear time temporal logic interpreted no longer over sequences but over certain partial orders. Specifically, we construct a (linear) alternating Büchi automaton accepting the set of linearisations of traces satisfying the formula at hand. The salient point of our technique is to apply a notion of independence-rewriting to formulas of the logic. Furthermore, we show that the class of linear and trace-consistent alternating Büchi automata corresponds exactly to LTL formulas over Mazurkiewicz traces, lifting a similar result of Löding and Thomas formulated in the framework of LTL over words.

Posted Content
TL;DR: In this note, the author argues that Kieu's proposed quantum algorithm for computing functions beyond the Church-Turing class does not work, while maintaining that quantum computation leads to new complexity but retains the old computability.
Abstract: Recently T. Kieu (arXiv:quant-ph/0110136) claimed a quantum algorithm computing some functions beyond the Church-Turing class. He notes that "it is in fact widely believed that quantum computation cannot offer anything new about computability" and claims the opposite. However, his quantum algorithm does not work, which is the point of my short note. I still believe that quantum computation leads to new complexity but retains the old computability.

Book ChapterDOI
10 Sep 2001
TL;DR: Semantic characterisations of second-order computability over the reals based on Σ-definability theory are proposed, and notions of computability for operators and real-valued functionals defined on the class of continuous functions are introduced via domain theory.
Abstract: We propose semantic characterisations of second-order computability over the reals based on Σ-definability theory. Notions of computability for operators and real-valued functionals defined on the class of continuous functions are introduced via domain theory. We consider the reals with and without equality and prove theorems which connect computable operators and real-valued functionals with validity of finite Σ-formulas.

Journal Article
TL;DR: It is concluded that computerisation of guidelines is not possible without expertise or the authors' advice, and that to improve computability it is necessary to provide authors with a framework that checks for ambiguities and logical errors.
Abstract: In order to develop an Internet-based decision support system making several prevention guidelines available to French general practitioners, it was necessary to implement paper-based guidelines in electronic form. We propose a framework allowing the transformation of paper-based practice guidelines into their electronic form. Three different problems were identified: computability (e.g. determinism of the eCPG), logic (e.g. ambiguities when combining Boolean operators) and external validity (i.e. stability of decisions for variations around thresholds and the proportion of subjects classified in the various terminal nodes). The last problem concerned the documentation of evidence: the level of evidence was associated only with the terminal decision node and not with the pathway through the decision tree. We concluded that computerisation of guidelines is not possible without expertise or the authors' advice. To improve computability it is necessary to provide authors with a framework that checks for ambiguities and logical errors.

01 Jan 2001
TL;DR: In this article, it is shown that, for any reasonable generalization of computability to uncountable languages, the equational theory of RPAω is non-recursively enumerable in the generalized sense, even though RPAω is axiomatizable by finitely many equation schemas.
Abstract: In [3] Daigneault and Monk proved that the class of (ω-dimensional) representable polyadic algebras (RPAω for short) is axiomatizable by finitely many equation schemas. However, this result does not imply that the equational theory of RPAω would be recursively enumerable; one simple reason is that the language of RPAω contains a continuum of operation symbols. Here we prove the following. Roughly, for any reasonable generalization of computability to uncountable languages, the equational theory of RPAω remains non-recursively enumerable, or non-computable, in the generalized sense. This result has some implications on the non-computational character of Keisler’s completeness theorem for his “infinitary logic” in Keisler [6] as well.

Book
09 Aug 2001
TL;DR: In this paper some asymptotic formulas for the number of h-connected (resp. h-strongly connected) graphs or digraphs of order n and diameter equal to k ≥ 3 are surveyed, and it is deduced that for each fixed h ≥ 1 and k ≥ 2, almost every h-connected graph or digraph with diameter k or k + 1 has diameter exactly k.
Abstract: In this paper some asymptotic formulas for the number of h-connected (resp. h-strongly connected) graphs or digraphs of order n and diameter equal to k ≥ 3 are surveyed. Since for k ≥ 4 all extremal graphs or digraphs G used to prove the lower bound of the estimation have connectivity κ(G) = h, it follows that these formulas are also valid for the number of graphs or digraphs of order n, diameter equal to k ≥ 4 and connectivity κ(G) = h ≥ 1 as n → ∞. As a consequence, it is deduced that for each fixed h ≥ 1 and k ≥ 2, almost every h-connected (resp. h-strongly connected) graph or digraph with diameter k or k + 1 has diameter exactly k. Some open problems and conjectures are proposed.

1 Definitions and Preliminary Results

All graphs or digraphs in this paper are finite, labeled, without loops or parallel edges or arcs. By K_n^* we denote the complete digraph of order n, in which any two distinct vertices x and y are joined by the two directed edges (x, y) and (y, x). For a graph G the degree of a vertex x is denoted by d(x); for a digraph G the outdegree d^+(x) of a vertex x is the number of vertices of G that are adjacent from x, and the indegree d^-(x) is the number of vertices of G adjacent to x. The connectivity κ(G) of a graph G is the minimum number of vertices whose removal results in a disconnected or trivial graph. A graph G is said to be h-connected if κ(G) ≥ h. For digraphs we shall consider two kinds of connectivity: the connectivity (resp. strong connectivity), denoted κ(G) (resp. sκ(G)), of a digraph G is the minimum number of vertices whose removal results in a digraph which is not connected (resp. strongly connected) or trivial. A digraph G is said to be h-connected (resp. h-strongly connected) if κ(G) ≥ h (resp. sκ(G) ≥ h). A connected (resp. strongly connected) digraph is also said to be 1-connected (resp. 1-strongly connected). The distance d(x, y) between vertices x and y of a connected graph G is the length of a shortest path between them. For a strongly connected digraph G the distance d(x, y) from vertex x to vertex y is the length of a shortest path of the form (x, ..., y). The eccentricity of a vertex x is ecc(x) = max_{y ∈ V(G)} d(x, y). The diameter of G, denoted d(G), is equal to max_{x ∈ V(G)} ecc(x) = max_{x, y ∈ V(G)} d(x, y) if G is connected (resp. strongly connected) and ∞ otherwise. Consider V(G) = {1, ..., n} and denote, for every h ≥ 1, by G(n; h, d = k) and G(n; h, d ≥ k); Ds(n; h, d = k) and Ds(n; h, d ≥ k), resp. D(n; h, d = k) and ...

Journal ArticleDOI
TL;DR: An extension of the reflective relational machine of Abiteboul et al. is defined, which is called the untyped reflective relational machine, and it is proved that this model is complete considering the whole class CQ (i.e., both typed and untyped queries); an undecidable fragment of L^c_{ω1ω} is defined which exactly captures the sub-class of the total and typed computable queries.

01 Jan 2001
TL;DR: This work discusses two topics of adaptive computational methods for differential equations: (i) individual time-stepping and (ii) subgrid modeling, and presents some applications including the computability of subgrid modeling.
Abstract: We discuss two topics of adaptive computational methods for differential equations: (i) individual time-stepping and (ii) subgrid modeling, and we present some applications including the computability and predictability of the Solar System and aspects of subgrid modeling in convection-diffusion-reaction systems.
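As a simplified single-rate illustration of adaptive time-stepping (the methods discussed are multi-adaptive, giving each solution component its own sequence of time steps; the Python sketch below is only an illustration and makes no claim about the authors' algorithm), one can compare a full Euler step with two half steps and adapt the step size to a tolerance.

    def adaptive_euler(f, y0, t0, t1, tol):
        # Adaptive explicit Euler for the scalar ODE y' = f(t, y):
        # estimate the local error by comparing one full step with two half steps,
        # accept the step if the estimate is below tol, otherwise retry with h/2.
        t, y = t0, y0
        h = (t1 - t0) / 100.0
        while t1 - t > 1e-12:
            h = min(h, t1 - t)
            full = y + h * f(t, y)
            half = y + (h / 2) * f(t, y)
            two_half = half + (h / 2) * f(t + h / 2, half)
            err = abs(two_half - full)
            if err <= tol:
                t, y = t + h, two_half      # accept the more accurate value
                h *= 1.5                    # try a larger step next time
            else:
                h *= 0.5                    # reject and retry with a smaller step
        return y

    print(adaptive_euler(lambda t, y: -y, 1.0, 0.0, 5.0, 1e-4))   # close to exp(-5) ≈ 0.0067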

Book ChapterDOI
02 Jul 2001
TL;DR: In this chapter, the notion of naturally computable partial multi-valued function is introduced and algebraic representations of complete classes of Naturally computable functions over various data structures are constructed.
Abstract: Partial multi-valued functions represent semantics of nondeterministic programs. The notion of naturally computable partial multi-valued function is introduced and algebraic representations of complete classes of naturally computable functions over various data structures are constructed.