
Showing papers on "Computability published in 2007"


Journal ArticleDOI
TL;DR: In this paper, it was shown that the size of the smallest self-assembly program that builds a shape and the shape's descriptional (Kolmogorov) complexity should be related.
Abstract: The connection between self-assembly and computation suggests that a shape can be considered the output of a self-assembly “program,” a set of tiles that fit together to create a shape. It seems plausible that the size of the smallest self-assembly program that builds a shape and the shape’s descriptional (Kolmogorov) complexity should be related. We show that when using a notion of a shape that is independent of scale, this is indeed so: in the tile assembly model, the minimal number of distinct tile types necessary to self-assemble a shape, at some scale, can be bounded both above and below in terms of the shape’s Kolmogorov complexity. As part of the proof, we develop a universal constructor for this model of self-assembly that can execute an arbitrary Turing machine program specifying how to grow a shape. Our result implies, somewhat counterintuitively, that self-assembly of a scaled-up version of a shape often requires fewer tile types. Furthermore, the independence of scale in self-assembly theory appears to play the same crucial role as the independence of running time in the theory of computability. This leads to an elegant formulation of languages of shapes generated by self-assembly. Considering functions from bit strings to shapes, we show that the running-time complexity, with respect to Turing machines, is polynomially equivalent to the scale complexity of the same function implemented via self-assembly by a finite set of tile types. Our results also hold for shapes defined by Wang tiling—where there is no sense of a self-assembly process—except that here time complexity must be measured with respect to nondeterministic Turing machines.

267 citations
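Kolmogorov complexity itself is uncomputable, but any lossless compressor yields a computable upper bound on a shape's description length. The sketch below (Python, using zlib as the compressor — an illustrative stand-in, not the paper's tile-assembly construction) shows why a scaled-up shape can have a description far smaller than its pixel count suggests:

```python
import zlib

def compressed_size(bitmap: str) -> int:
    """Computable upper-bound proxy for descriptional complexity (K itself is uncomputable)."""
    return len(zlib.compress(bitmap.encode()))

def scale(rows, k):
    """Scale a shape bitmap up by factor k: each cell becomes a k-by-k block."""
    return ["".join(c * k for c in row) for row in rows for _ in range(k)]

shape = ["XX..", ".XX.", "..XX", "X..X"]
small = "\n".join(shape)
big = "\n".join(scale(shape, 8))
# The scaled shape has 64x as many cells, but its compressed description
# grows far more slowly than its area.
print(len(small), compressed_size(small))
print(len(big), compressed_size(big))
```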


Book ChapterDOI
08 Sep 2007
TL;DR: This work shows sequence assembly to be NP-hard under two different models: string graphs and de Bruijn graphs, and gives the first, to our knowledge, optimal polynomial time algorithm for genome assembly that explicitly models the double-strandedness of DNA.
Abstract: Graph-theoretic models have come to the forefront as some of the most powerful and practical methods for sequence assembly. Simultaneously, the computational hardness of the underlying graph algorithms has remained open. Here we present two theoretical results about the complexity of these models for sequence assembly. In the first part, we show sequence assembly to be NP-hard under two different models: string graphs and de Bruijn graphs. Together with an earlier result on the NP-hardness of overlap graphs, this demonstrates that all of the popular graph-theoretic sequence assembly paradigms are NP-hard. In our second result, we give the first, to our knowledge, optimal polynomial time algorithm for genome assembly that explicitly models the double-strandedness of DNA. We solve the Chinese Postman Problem on bidirected graphs using bidirected flow techniques and show how to use it to find the shortest double-stranded DNA sequence which contains a given set of k-long words. This algorithm has applications to sequencing by hybridization and short read assembly.

196 citations
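For readers unfamiliar with the de Bruijn graph model named above, here is a minimal single-stranded sketch in Python (it ignores double-strandedness and the paper's bidirected-flow machinery): each k-mer becomes an edge from its (k-1)-mer prefix to its suffix, and an Eulerian path through the graph spells the assembled sequence.

```python
from collections import defaultdict

def de_bruijn(kmers):
    """Nodes are (k-1)-mers; each k-mer contributes one edge prefix -> suffix."""
    g = defaultdict(list)
    for kmer in kmers:
        g[kmer[:-1]].append(kmer[1:])
    return g

def eulerian_path(g):
    """Hierholzer's algorithm; assumes an Eulerian path exists."""
    in_deg = defaultdict(int)
    for targets in g.values():
        for v in targets:
            in_deg[v] += 1
    # Start where out-degree exceeds in-degree, if such a node exists.
    start = next((u for u in g if len(g[u]) - in_deg[u] == 1), next(iter(g)))
    edges = {u: list(vs) for u, vs in g.items()}
    stack, path = [start], []
    while stack:
        u = stack[-1]
        if edges.get(u):
            stack.append(edges[u].pop())
        else:
            path.append(stack.pop())
    return path[::-1]

def assemble(kmers):
    """Spell the sequence along an Eulerian path of the de Bruijn graph."""
    path = eulerian_path(de_bruijn(kmers))
    return path[0] + "".join(node[-1] for node in path[1:])

print(assemble(["AAC", "ACC", "CCT", "CTT", "TTG", "TGG"]))  # → AACCTTGG
```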


Book
01 Jan 2007
TL;DR: A presentation of the theory of computing, including coverage of the theory of formal languages and automata, computability, computational complexity, and deterministic parsing of context-free languages.
Abstract: A presentation of the theory of computing, including coverage of the theory of formal languages and automata, computability, computational complexity, and deterministic parsing of context-free languages.

122 citations



Book
12 Dec 2007
TL;DR: New developments in the theory and practice of computation from a mathematical perspective are examined, with topics ranging from classical computability to complexity, from biocomputing to quantum computing.
Abstract: In recent years, classical computability has expanded beyond its original scope to address issues related to computability and complexity in algebra, analysis, and physics. The deep interconnection between "computation" and "proof" has originated much of the most significant work in constructive mathematics and mathematical logic of the last 70 years. Moreover, the increasingly compelling necessity to deal with computability in the real world (such as computing on continuous data, biological computing, and physical models) has brought focus to new paradigms of computation that are based on biological and physical models. These models address questions of efficiency in a radically new way and even threaten to move the so-called Turing barrier, i.e. the line between the decidable and the undecidable. This book examines new developments in the theory and practice of computation from a mathematical perspective, with topics ranging from classical computability to complexity, from biocomputing to quantum computing. The book opens with an introduction by Andrew Hodges, the Turing biographer, who analyzes the pioneering work that anticipated recent developments concerning computation's allegedly new paradigms. The remaining material covers traditional topics in computability theory such as relative computability, theory of numberings, and domain theory, in addition to topics on the relationships between proof theory, computability, and complexity theory. New paradigms of computation arising from biology and quantum physics are also discussed, as well as the computability of the real numbers and its related issues. This book is suitable for researchers and graduate students in mathematics, philosophy, and computer science with a special interest in logic and foundational issues. Most useful to graduate students are the survey papers on computable analysis and biological computing. Logicians and theoretical physicists will also benefit from this book.

112 citations


Posted Content
TL;DR: This paper shows that any computable metric space with a computable probability measure is isomorphic to the Cantor space in a computable and measure-theoretic sense and admits a universal uniform randomness test.
Abstract: In this paper we investigate algorithmic randomness on more general spaces than the Cantor space, namely computable metric spaces. To do this, we first develop a unified framework allowing computations with probability measures. We show that any computable metric space with a computable probability measure is isomorphic to the Cantor space in a computable and measure-theoretic sense. We show that any computable metric space admits a universal uniform randomness test (without further assumption).

102 citations
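The standard way to make "computations on a metric space" concrete is the Cauchy representation: a point is named by a function producing approximations to within 2⁻ⁿ, and operations are computable when they act on such names. A minimal Python sketch for the real line — an illustration of the representation idea, not the paper's construction:

```python
from fractions import Fraction

# A point of a computable metric space (here: the real line) is named by a
# function n -> rational q with d(q, x) <= 2**-n (the Cauchy representation).
def sqrt2(n):
    """Rational approximation of sqrt(2) to within 2**-n, by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

def add(x, y):
    """Addition is computable on names: query each argument one level deeper,
    so the two half-errors sum to at most 2**-n."""
    return lambda n: x(n + 1) + y(n + 1)

two_sqrt2 = add(sqrt2, sqrt2)
print(float(two_sqrt2(20)))  # ≈ 2.8284271
```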


Journal ArticleDOI
TL;DR: The third edition, according to the author, adds examples, expands the selection of topics, and provides more flexibility to the instructor in the design of a course.
Abstract: Languages and Machines: An Introduction to the Theory of Computer Science Third Edition by Thomas A. Sudkamp Addison Wesley This is a textbook that introduces computer science theory, using formal abstract mathematics, for Junior and Senior computer science students. The third edition, according to the author, adds examples, expands the selection of topics, and provides more flexibility to the instructor in the design of a course. The book is in five parts: (I) Foundations; (II) Grammars, Automata, and Languages; (III) Computability; (IV) Computational Complexity; and, (V) Deterministic Parsing. Many exercises are provided throughout to assist the student in understanding the material. Chet Langin

90 citations


Journal ArticleDOI
TL;DR: An operational semantics of a basic class of P systems is defined, and two implementations of the operational semantics using rewriting logic are given, including two operational correspondence results.

78 citations


Journal ArticleDOI
TL;DR: It is shown that, in an appropriate framework, the GPAC and computable analysis are actually equivalent from the computability point of view, at least in compact intervals, and that all real computable functions over compact intervals can be defined by GPACs.

74 citations


Book
01 Jan 2007
TL;DR: This textbook covers automata theory, computability, and complexity, progressing from finite state machines and regular languages through context-free languages and pushdown automata to Turing machines and undecidability, with appendices on applications.
Abstract: PART I: INTRODUCTION 1 Why Study Automata Theory? 2 Review of Mathematical Concepts 2.1 Logic 2.2 Sets 2.3 Relations 2.4 Functions 2.5 Closures 2.6 Proof Techniques 2.7 Reasoning about Programs 2.8 References 3 Languages and Strings 3.1 Strings 3.2 Languages 4 The Big Picture: A Language Hierarchy 4.1 Defining the Task: Language Recognition 4.2 The Power of Encoding 4.3 A Hierarchy of Language Classes 5 Computation 5.1 Decision Procedures 5.2 Determinism and Nondeterminism 5.3 Functions on Languages and Programs PART II: FINITE STATE MACHINES AND REGULAR LANGUAGES 6 Finite State Machines 6.2 Deterministic Finite State Machines 6.3 The Regular Languages 6.4 Programming Deterministic Finite State Machines 6.5 Nondeterministic FSMs 6.6 Interpreters for FSMs 6.7 Minimizing FSMs 6.8 Finite State Transducers 6.9 Bidirectional Transducers 6.10 Stochastic Finite Automata 6.11 Finite Automata, Infinite Strings: Buchi Automata 6.12 Exercises 7 Regular Expressions 7.1 What is a Regular Expression? 7.2 Kleene's Theorem 7.3 Applications of Regular Expressions 7.4 Manipulating and Simplifying Regular Expressions 8 Regular Grammars 8.1 Definition of a Regular Grammar 8.2 Regular Grammars and Regular Languages 8.3 Exercises 9 Regular and Nonregular Languages 9.1 How Many Regular Languages Are There? 
9.2 Showing That a Language Is Regular 9.3 Some Important Closure Properties of Regular Languages 9.4 Showing That a Language is Not Regular 9.5 Exploiting Problem-Specific Knowledge 9.6 Functions on Regular Languages 9.7 Exercises 10 Algorithms and Decision Procedures for Regular Languages 10.1 Fundamental Decision Procedures 10.2 Summary of Algorithms and Decision Procedures for Regular Languages 10.3 Exercises 11 Summary and References PART III: CONTEXT-FREE LANGUAGES AND PUSHDOWN AUTOMATA 12 Context-Free Grammars 12.1 Introduction to Grammars 12.2 Context-Free Grammars and Languages 12.3 Designing Context-Free Grammars 12.4 Simplifying Context-Free Grammars 12.5 Proving That a Grammar is Correct 12.6 Derivations and Parse Trees 12.7 Ambiguity 12.8 Normal Forms 12.9 Stochastic Context-Free Grammars 12.10 Exercises 13 Pushdown Automata 13.1 Definition of a (Nondeterministic) PDA 13.2 Deterministic and Nondeterministic PDAs 13.3 Equivalence of Context-Free Grammars and PDAs 13.4 Nondeterminism and Halting 13.5 Alternative Definitions of a PDA 13.6 Exercises 14 Context-Free and Noncontext-Free Languages 14.1 Where Do the Context-Free Languages Fit in the Big Picture? 
14.2 Showing That a Language is Context-Free 14.3 The Pumping Theorem for Context-Free Languages 14.4 Some Important Closure Properties of Context-Free Languages 14.5 Deterministic Context-Free Languages 14.6 Other Techniques for Proving That a Language is Not Context-Free 14.7 Exercises 15 Algorithms and Decision Procedures for Context-Free Languages 15.1 Fundamental Decision Procedures 15.2 Summary of Algorithms and Decision Procedures for Context-Free Languages 16 Context-Free Parsing 16.1 Lexical Analysis 16.2 Top-Down Parsing 16.3 Bottom-Up Parsing 16.4 Parsing Natural Languages 16.5 Stochastic Parsing 16.6 Exercises 17 Summary and References PART IV: TURING MACHINES AND UNDECIDABILITY 18 Turing Machines 18.1 Definition, Notation and Examples 18.2 Computing With Turing Machines 18.3 Turing Machines: Extensions and Alternative Definitions 18.4 Encoding Turing Machines as Strings 18.5 The Universal Turing Machine 18.6 Exercises 19 The Church-Turing Thesis 19.1 The Thesis 19.2 Examples of Equivalent Formalisms 20 The Unsolvability of the Halting Problem 20.1 The Language H is Semidecidable but Not Decidable 20.2 Some Implications of the Undecidability of H 20.3 Back to Turing, Church, and the Entscheidungsproblem 21 Decidable and Semidecidable Languages 21.2 Subset Relationships between D and SD 21.3 The Classes D and SD Under Complement 21.4 Enumerating a Language 21.5 Summary 21.6 Exercises 22 Decidability and Undecidability Proofs 22.1 Reduction 22.2 Using Reduction to Show that a Language is Not Decidable 22.3 Rice's Theorem 22.4 Undecidable Questions About Real Programs 22.5 Showing That a Language is Not Semidecidable 22.6 Summary of D, SD/D and ¬SD Languages that Include Turing Machine Descriptions 22.7 Exercises 23 Undecidable Languages That Do Not Ask Questions about Turing Machines 23.1 Hilbert's 10th Problem 23.2 Post Correspondence Problem 23.3 Tiling Problems 23.4 Logical Theories 23.5 Undecidable Problems about Context-Free Languages APPENDIX C: HISTORY, 
PUZZLES, AND POEMS 43 Part I: Introduction 43.1 The 15-Puzzle Part II: Finite State Machines and Regular Languages 44.1 Finite State Machines Predate Computers 44.2 The Pumping Theorem Inspires Poets REFERENCES INDEX Appendices for Automata, Computability and Complexity: Theory and Applications: * Math Background* Working with Logical Formulas* Finite State Machines and Regular Languages* Context-Free Languages and PDAs* Turing Machines and Undecidability* Complexity* Programming Languages and Compilers* Tools for Programming, Databases and Software Engineering* Networks* Security* Computational Biology* Natural Language Processing* Artificial Intelligence and Computational Reasoning* Art & Entertainment: Music & Games* Using Regular Expressions* Using Finite State Machines and Transducers* Using Grammars

50 citations
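The book's Part II is anchored by deterministic finite state machines; as a quick illustration of the core definition, a DFA is just a transition table driven over the input (a generic sketch, not code from the book):

```python
def run_dfa(delta, start, accepting, s):
    """Simulate a deterministic finite automaton on input string s."""
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

# DFA over {0,1} accepting strings with an even number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_dfa(delta, "even", {"even"}, "1011"))  # → False (three 1s)
print(run_dfa(delta, "even", {"even"}, "1001"))  # → True  (two 1s)
```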


Book ChapterDOI
26 Aug 2007
TL;DR: A natural and intuitive model that subsumes all the formalisms proposed so far is suggested by employing height-deterministic pushdown automata, and decidability and complexity questions are also considered.
Abstract: We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata. Decidability and complexity questions are also considered.
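The defining property above — stack heights during any computation are fixed by the input alone — is easiest to see in the visibly pushdown special case, where each input symbol is a call (push), a return (pop), or internal. A small illustrative sketch of that special case, not the paper's general model:

```python
CALLS, RETURNS = {"(", "["}, {")", "]"}   # call letters push, return letters pop

def height_profile(word):
    """Stack height after each symbol; identical for every run on this input,
    because the input letter alone dictates push vs. pop."""
    h, profile = 0, []
    for ch in word:
        if ch in CALLS:
            h += 1
        elif ch in RETURNS:
            h -= 1
        profile.append(h)
    return profile

print(height_profile("(a[b])"))  # → [1, 1, 2, 2, 1, 0]
```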

Journal ArticleDOI
TL;DR: The present work compares and unifies different relaxed notions of computability to cover also discontinuous functions based on the concept of the jump of a representation: both a TTE-counterpart to the well known recursion-theoretic jump on Kleene's Arithmetical Hierarchy of hypercomputation and a formalization of revising computation in the sense of Shoenfield.

Journal ArticleDOI
TL;DR: The marble run shows that a formal theory for computation by physical systems needs strong conditions on the notion of experimental procedure and, specifically, on methods for the construction of equipment, and proposes to extend the methodology by defining languages to express experimental procedures and the construction of equipment.


Journal ArticleDOI
TL;DR: It is argued that purported conceptual analyses based upon Turing’s work involve a subtle but persistent circularity in their analysis of computability.
Abstract: Church’s thesis asserts that a number-theoretic function is intuitively computable if and only if it is recursive. A related thesis asserts that Turing’s work yields a conceptual analysis of the intuitive notion of numerical computability. I endorse Church’s thesis, but I argue against the related thesis. I argue that purported conceptual analyses based upon Turing’s work involve a subtle but persistent circularity. Turing machines manipulate syntactic entities. To specify which number-theoretic function a Turing machine computes, we must correlate these syntactic entities with numbers. I argue that, in providing this correlation, we must demand that the correlation itself be computable. Otherwise, the Turing machine will compute uncomputable functions. But if we presuppose the intuitive notion of a computable relation between syntactic entities and numbers, then our analysis of computability is circular.
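The correlation at issue is typically given by unary notation: a number is paired with a block of strokes. Under the common convention that n is represented by n + 1 strokes (an assumption; conventions vary), the correlation is trivially computable in both directions:

```python
def to_unary(n: int) -> str:
    """Correlate the number n with a syntactic entity: a block of n + 1 strokes
    (one common convention, assumed here so that 0 has a representation)."""
    return "|" * (n + 1)

def from_unary(s: str) -> int:
    """The inverse correlation, equally computable."""
    return len(s) - 1

print(to_unary(3))              # → ||||
print(from_unary(to_unary(7)))  # → 7
```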

Book ChapterDOI
23 Sep 2007
TL;DR: Issues investigated include abstract definitions of computability of stream queries; the connection between abstract computability, continuity, monotonicity, and non-blocking operators; and bounded memory computability of stream queries using abstract state machines (ASMs).
Abstract: Data streams are modeled as infinite or finite sequences of data elements coming from an arbitrary but fixed universe. The universe can have various built-in functions and predicates. Stream queries are modeled as functions from streams to streams. Both timed and untimed settings are considered. Issues investigated include abstract definitions of computability of stream queries; the connection between abstract computability, continuity, monotonicity, and non-blocking operators; and bounded memory computability of stream queries using abstract state machines (ASMs).
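The computability/non-blocking connection can be illustrated with generators: a monotone query emits output from any finite prefix of the input, while a blocking one (like sorting) must consume the entire stream and therefore defines no output on infinite streams. A hypothetical Python sketch, not tied to the paper's ASM formalism:

```python
import itertools

def evens(stream):
    """Non-blocking (monotone) stream query: output grows with every input prefix."""
    for x in stream:
        if x % 2 == 0:
            yield x

def sorted_stream(stream):
    """Blocking query: nothing can be emitted until the whole input has been read."""
    yield from sorted(stream)

naturals = itertools.count()                        # an infinite input stream
print(list(itertools.islice(evens(naturals), 5)))   # → [0, 2, 4, 6, 8]
# list(sorted_stream(naturals)) would run forever: sorting defines no
# computable function on infinite streams.
```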

Journal ArticleDOI
TL;DR: A correct and complete translation algorithm that converts a class of propositional linear-time temporal-logic formulae to deterministic finite (-trace) automata is presented and a practical implementation of the interface has been developed, providing an enabling technology for writing readable control specifications in PTL that it translates for discrete-event control synthesis in deterministic finite automata.
Abstract: This paper presents and analyzes a correct and complete translation algorithm that converts a class of propositional linear-time temporal-logic (PTL) formulae to deterministic finite (-trace) automata. The translation algorithm is proposed as a specification interface for finitary control design of discrete-event systems (DESs). While there has been a lot of computer science research that connects PTL formulae to omega-automata, there is relatively little prior work that translates state-based PTL formulae in the context of a finite-state DES model, to event-based finite automata-the formalism on which well-established control synthesis methods exist. The proposed translation allows control requirements to be more easily described and understood in temporal logic, widely recognized as a useful specification language for its intuitively appealing operators that provide the natural-language expressiveness and readability needed to express and explain these requirements. Adding such a translation interface could therefore effectively combine specifiability and readability in temporal logic with prescriptiveness and computability in finite automata. The former temporal-logic features support specification while the latter automata features support the prescription of DES dynamics and algorithmic computations. A practical implementation of the interface has been developed, providing an enabling technology for writing readable control specifications in PTL that it translates for discrete-event control synthesis in deterministic finite automata. Two application examples illustrate the use of the proposed temporal-logic interface. Practical implications of the complexity of the translation algorithm are discussed.

Journal ArticleDOI
TL;DR: An equational specification of a network of analog modules connected by channels, processing data from a metric space A, and operating with respect to a global continuous clock T is given and a custom-made concrete computation theory is introduced that shows that if the module functions are concretely computable then so is the network function @F.

Journal ArticleDOI
TL;DR: The dependency pair method is enhanced in order to prove termination using recursive structure analysis in simply-typed term rewriting systems, which is one of the computational models of functional programs.
Abstract: We enhance the dependency pair method in order to prove termination using recursive structure analysis in simply-typed term rewriting systems, which is one of the computational models of functional programs. The primary advantage of our method is that one can exclude higher-order variables which are difficult to analyze theoretically, from recursive structure analysis. The key idea of our method is to analyze recursive structure from the viewpoint of strong computability. This property was introduced for proving termination in typed λ-calculus, and is a stronger condition than the property of termination. The difficulty in incorporating this concept into recursive structure analysis is that because it is defined inductively over type structure, it is not closed under the subterm relation. This breaks the correspondence between strong computability and recursive structure. In order to guarantee the correspondence, we propose plain function-passing as a restriction, which is satisfied by many non-artificial functional programs.

Proceedings ArticleDOI
24 Jun 2007
TL;DR: In this paper, the authors point out a potential weakness in the application of the celebrated minimum description length (MDL) principle for model selection, and show that a model which has a shorter two-part code-length than another is not necessarily better (unless of course it achieves the global minimum).
Abstract: We point out a potential weakness in the application of the celebrated minimum description length (MDL) principle for model selection. Specifically, it is shown that (although the index of the model class which actually minimizes a two-part code has many desirable properties) a model which has a shorter two-part code-length than another is not necessarily better (unless of course it achieves the global minimum). This is illustrated by an application to infer a grammar (DFA) from positive examples. We also analyze computability issues, and robustness under recoding of the data. Generally, the classical approach is inadequate to express the goodness-of-fit of individual models for individual data sets. In practice however, this is precisely what we are interested in: both to express the goodness of a procedure and where and how it can fail. To achieve this practical goal, we paradoxically have to use the, supposedly impractical, vehicle of Kolmogorov complexity.
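A crude version of the two-part code discussed above, for binary data under a Bernoulli model class: the first part names a parameter from a finite grid, the second gives the ideal code length of the data under that parameter. The grid, data, and costs below are illustrative assumptions, not the paper's DFA-inference setting:

```python
import math

GRID = [i / 16 for i in range(1, 16)]   # candidate Bernoulli parameters (an arbitrary grid)

def two_part_length(data, p):
    """Bits to name p from GRID, plus the ideal code length of data under Bernoulli(p)."""
    model_bits = math.log2(len(GRID))
    data_bits = sum(-math.log2(p if x == 1 else 1 - p) for x in data)
    return model_bits + data_bits

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1]   # 13 ones out of 16
lengths = {p: two_part_length(data, p) for p in GRID}
best = min(lengths, key=lengths.get)
print(best)  # → 0.8125, the maximum-likelihood grid point
```

Note the paper's caution: a parameter with a shorter two-part code than another is not thereby the better model of the data — only the global minimizer carries the usual guarantees.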

Proceedings ArticleDOI
01 Jan 2007
TL;DR: New encodings for representing CSPs as equivalent Boolean Satisfiability (SAT) problems are defined and schemes for static ordering of the Boolean variables in a Conjunctive Normal Form (CNF) representation of a CSP are studied.
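The best-known such encoding is the direct encoding: one Boolean variable per (CSP variable, value) pair, with at-least-one and at-most-one clauses per CSP variable and one clause per forbidden pair of assignments. A small sketch of that generic textbook encoding (the paper's specific encodings and static variable orderings are not reproduced here):

```python
from itertools import combinations

def direct_encoding(domains, conflicts):
    """Direct encoding of a CSP into CNF. Boolean variable (x, v) means 'x takes value v'."""
    var = {}
    for x, dom in domains.items():
        for v in dom:
            var[(x, v)] = len(var) + 1          # DIMACS-style positive literals
    cnf = []
    for x, dom in domains.items():
        cnf.append([var[(x, v)] for v in dom])                  # at-least-one value
        cnf.extend([[-var[(x, u)], -var[(x, v)]]
                    for u, v in combinations(dom, 2)])          # at-most-one value
    for (x, u), (y, v) in conflicts:
        cnf.append([-var[(x, u)], -var[(y, v)]])                # forbidden pair
    return var, cnf

# Tiny graph-coloring CSP: x and y range over {red, green} and must get different colors.
domains = {"x": ["red", "green"], "y": ["red", "green"]}
conflicts = [(("x", c), ("y", c)) for c in ["red", "green"]]
var, cnf = direct_encoding(domains, conflicts)
print(len(cnf))  # → 6  (2 at-least-one + 2 at-most-one + 2 conflict clauses)
```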

BookDOI
01 Apr 2007
TL;DR: A collection of papers spanning rewriting foundations, proof and computation, and safety and security, including work on termination orderings, the computability closure, superdeduction, linear recursive functions, and Coq formalizations.
Abstract: Rewriting Foundations.- The Hydra Battle Revisited.- Orderings and Constraints: Theory and Practice of Proving Termination.- Narrowing, Abstraction and Constraints for Proving Properties of Reduction Relations.- Computability Closure: Ten Years Later.- Reduction Strategies and Acyclicity.- Proof and Computation.- Towards Rewriting in Coq.- Superdeduction at Work.- Remarks on Semantic Completeness for Proof-Terms with Laird's Dual Affine/Intuitionistic ?-Calculus.- Linear Recursive Functions.- Towards Safety and Security.- Deducibility Constraints, Equational Theory and Electronic Money.- Applying a Theorem Prover to the Verification of Optimistic Replication Algorithms.- Towards Modular Algebraic Specifications for Pointer Programs: A Case Study.- Modeling Permutations in Coq for Coccinelle.

Posted Content
TL;DR: In this paper, a certain phenomenon of consciousness is demonstrated to be fully representable as a computational process using a quantum computer; based on a computability criterion framed in terms of Turing machines, the resulting model is shown to necessarily involve a non-computable element.
Abstract: With the great success in simulating many intelligent behaviors using computing devices, there has been an ongoing debate whether all conscious activities are computational processes. In this paper, the answer to this question is shown to be no. A certain phenomenon of consciousness is demonstrated to be fully represented as a computational process using a quantum computer. Based on the computability criterion discussed with Turing machines, the model constructed is shown to necessarily involve a non-computable element. The concept that this is solely a quantum effect and does not work for a classical case is also discussed.

Proceedings ArticleDOI
Mangassarian, Veneris, Safarpour, Najm, Abadir 
01 Jan 2007


Journal ArticleDOI
TL;DR: This work discusses and compares two notions of stability for a continuous dynamical system, viz. shadowing and robustness, and relates them to both the practical and theoretical computability of the system.
Abstract: Computers are used extensively to simulate continuous dynamical systems. However, different conceptual and mathematical structures underlie discrete machines and continuous dynamics, so the question arises as to the ability of the computer to simulate or, more generally, to check the properties of a continuous system. We discuss and compare two notions of stability for a continuous dynamical system, viz. shadowing and robustness, and relate them to both the practical and theoretical computability of the system. We first discuss what we can learn from the stability of a system, using a finite-precision machine. We then show, following the work in Collins (2005), that shadowing fails but robustness succeeds in ensuring the checkability of a reachability property.
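A concrete way to see why finite precision matters when checking properties of continuous dynamics: iterate the chaotic logistic map at two working precisions. At low precision the per-step round-off is amplified until it swamps the true orbit, while two sufficiently high precisions still agree. (A generic illustration with the `decimal` module; the precisions and step count are arbitrary choices, not from the paper.)

```python
from decimal import Decimal, getcontext

def logistic_orbit(x0, steps, digits):
    """Iterate the chaotic logistic map x -> 4x(1-x) with `digits` significant digits."""
    getcontext().prec = digits
    x = Decimal(x0)
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return float(x)

lo = logistic_orbit("0.1", 50, 10)   # low precision: round-off swamps the orbit
hi = logistic_orbit("0.1", 50, 50)   # high precision: close to the true orbit
print(abs(lo - hi))                  # typically of order 1, not of order 1e-10
```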

Journal ArticleDOI
TL;DR: The answer to whether all conscious activities are computational processes is shown to be no: a certain phenomenon of consciousness, fully represented as a computational process using a quantum computer, is shown to necessarily involve a non-computable element.
Abstract: With the great success in simulating many intelligent behaviors using computing devices, there has been an ongoing debate whether all conscious activities are computational processes. In this paper, the answer to this question is shown to be no. A certain phenomenon of consciousness is demonstrated to be fully represented as a computational process using a quantum computer. Based on the computability criterion discussed with Turing machines, the model constructed is shown to necessarily involve a non-computable element. The concept that this is solely a quantum effect and does not work for a classical case is also discussed.

Book ChapterDOI
01 Jan 2007
TL;DR: This chapter gives a systematic exposition of automata theory based on quantum logic, and defines the classes of languages accepted—namely, orthomodular lattice-valued regular languages and context-free languages.
Abstract: Publisher Summary It is noted that a theory of computation based on quantum logic is to be established as a logical foundation of quantum computation. Finite automata and pushdown automata are considered the simplest abstract mathematical models of computing machines. Automata theory is an essential part of computation theory. This chapter describes a systematic exposition of automata theory. In context to this theory, quantum logic is treated as an orthomodular lattice-valued logic. The approach employed in developing this theory is essentially the semantical analysis. This chapter introduces notions of orthomodular lattice-valued finite automata and pushdown automata and their various variants. It defines the classes of languages accepted—namely, orthomodular lattice-valued regular languages and context-free languages. This chapter also re-examines various properties of automata in the framework of quantum logic, including the Kleene theorem concerning equivalence between finite automata and regular expressions, equivalence between pushdown automata and context-free grammars, and the pumping lemma both for regular languages and for context-free languages.

Journal ArticleDOI
TL;DR: This work adopts the model-theoretic approach to tree relations and study relations definable over the structure consisting of the set of all trees and the aforementioned predicates, and relates definability of sets and relations of trees to computability by tree automata.
Abstract: We study relations on trees defined by first-order constraints over a vocabulary that includes the tree extension relation T p T′ (holding if and only if every branch of T extends to a branch of T′), unary node-tests, and a binary relation checking whether the domains of two trees are equal. We consider both ranked and unranked trees. These are trees with and without a restriction on the number of children of nodes. We adopt the model-theoretic approach to tree relations and study relations definable over the structure consisting of the set of all trees and the aforementioned predicates. We relate definability of sets and relations of trees to computability by tree automata. We show that some natural restrictions correspond to familiar logics in the more classical setting where every tree is a structure over a fixed vocabulary, and to logics studied in the context of XML pattern languages. We then look at relational calculi over collections of trees, and obtain quantifier-restriction results that give us bounds on the expressive power and complexity. As unrestricted relational calculi can express problems that are complete for each level of the polynomial hierarchy, we look at their restrictions, corresponding to the restricted logics over the family of all unranked trees, and find several calculi with low (NC1) data complexity which still express properties important for database and document applications. We also give normal forms for safe queries in the calculus.
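As background for the definability-to-tree-automata connection, here is a minimal deterministic bottom-up tree automaton in Python: trees are nested tuples, and a transition table maps a node's label plus its children's states to a state. The boolean-evaluation automaton is an illustrative choice, not from the paper:

```python
def run_bottom_up(tree, delta):
    """Assign a state to each node from its label and its children's states;
    the tree is accepted if the root's state is accepting."""
    label, children = tree[0], tree[1:]
    child_states = tuple(run_bottom_up(c, delta) for c in children)
    return delta[(label, child_states)]

# Automaton recognizing boolean trees over {and, or, 0, 1} that evaluate to true:
# the states are the truth values themselves.
delta = {("0", ()): False, ("1", ()): True}
for a in (False, True):
    for b in (False, True):
        delta[("and", (a, b))] = a and b
        delta[("or", (a, b))] = a or b

t = ("or", ("and", ("1",), ("0",)), ("1",))      # (1 ∧ 0) ∨ 1
print(run_bottom_up(t, delta))  # → True
```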

Proceedings ArticleDOI
01 Jan 2007