# Showing papers in "Theoretical Computer Science in 1988"

••

TL;DR: This contribution was made possible only by the miraculous fact that the first members of the Editorial Board shared the same conviction about the necessity of Theoretical Computer Science.

Abstract: The collection of TCS issues is about 1 meter high, 17,000 pages long and it contains 1100 papers. When in 1974 Einar Fredriksson and I started talking about the creation of a journal dedicated to Theoretical Computer Science, we were very far from even dreaming that it could grow to such an extent within twelve years. We were also a bit shy: what use could such a journal, very theoretical indeed and hard to read, serve, and who would read it? Fortunately, some people encouraged us and indeed helped us a lot: Mike Paterson, who was at that time President of EATCS and who agreed to become Associate Editor; Albert Meyer, who was a very active editor at the beginning; and Arto Salomaa, who was to become President of EATCS shortly afterwards. Indeed, I should mention all the first members of the Editorial Board, for TCS would never have come into existence without them. Theoretical Computer Science is not a clearly defined discipline with neat borderlines: it is more a state of mind, the conviction that the observed computation phenomena can be formally described and analysed like any physical phenomenon; the conviction that such a formal description helps to understand these phenomena and to master them in order to design better algorithms, better computers, better systems. Our fundamental activity is not to prove theorems in strange mathematical theories; it is to model a complicated reality, and in this respect it has to be compared with theoretical physics or what we call in French “mécanique rationnelle”. This comparison can be pursued rather far, for we also use all possible mathematical concepts and methods, and when we do not find appropriate ones in traditional mathematics we create them.
The aim is quite clear: using the compact and unambiguous language of mathematics brings to life concepts and methods which will be useful to all designers, builders and users of computer systems, exactly in the same way as matrix calculus or Fourier series and transforms are useful to all engineers and technicians in the electric and electronic industry. And when one thinks about the amount of time it took to build the mathematical theory of matrices and to polish and simplify it up to the state in which it could be taught to all future engineers and become a tool in daily use, one can be extremely satisfied by the development of Theoretical Computer Science. It is true that concepts and methods which were still vague and unclear when TCS was created became essential tools for all industrial designers and manufacturers, in algorithmics, in semantics, in automata theory and control, etc. Certainly, TCS can be proud to have contributed to this development. Coming back to what I was saying a moment ago, this contribution was made possible only by the miraculous fact that the first members of the Editorial Board shared the same conviction about the necessity of Theoretical Computer Science.

1,480 citations

••

TL;DR: A polynomial algorithm for determining if two structures are stuttering equivalent is given and the relevance of the results for temporal logic model checking and synthesis procedures is discussed.

Abstract: We show that if two finite Kripke structures can be distinguished by some CTL* formula that contains both branching-time and linear-time operators, then the structures can be distinguished by a CTL formula that contains only branching-time operators. Our proof involves showing that, for any finite Kripke structure M, it is possible to construct a CTL formula F_M that uniquely characterizes M. Since one Kripke structure may be a trivial unrolling of another, we use a notion of equivalence between Kripke structures that is similar to the notion of bisimulation studied by Milner [15]. Our first construction of F_M requires the use of the nexttime operator. We also consider the case in which the nexttime operator is disallowed in CTL formulas. The proof, in this case, requires another notion of equivalence, equivalence with respect to stuttering, and is much more difficult since it is possible for two inequivalent states to have exactly the same finite behaviors (modulo stuttering), but different infinite behaviors. We also give a polynomial algorithm for determining if two structures are stuttering equivalent and discuss the relevance of our results for temporal logic model checking and synthesis procedures.
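The bisimulation-like equivalence used above can be computed on finite Kripke structures by naive partition refinement. The sketch below is illustrative only: it handles plain bisimulation, not the stuttering variant, and is not the paper's construction.

```python
def bisimulation_classes(states, labels, succ):
    """Coarsest partition of `states` in which equivalent states carry the
    same label and their successors hit the same partition blocks.
    labels: dict state -> label; succ: dict state -> iterable of successors."""
    # Initial partition: group states by their labelling.
    by_label = {}
    for s in states:
        by_label.setdefault(labels[s], []).append(s)
    partition = [frozenset(b) for b in by_label.values()]
    changed = True
    while changed:
        changed = False
        block_of = {s: b for b in partition for s in b}
        refined = []
        for block in partition:
            # Split the block by the set of blocks reachable in one step.
            sig = {}
            for s in block:
                key = frozenset(block_of[t] for t in succ[s])
                sig.setdefault(key, []).append(s)
            if len(sig) > 1:
                changed = True
            refined.extend(frozenset(g) for g in sig.values())
        partition = refined
    return partition

# States 1 and 2 satisfy the same propositions and reach the same block,
# so they end up in one class; state 3 is distinguished by its label.
p = bisimulation_classes([1, 2, 3], {1: "a", 2: "a", 3: "b"},
                         {1: [3], 2: [3], 3: [3]})
assert set(p) == {frozenset({1, 2}), frozenset({3})}
```

Refinement stops as soon as a full pass produces no split, so the result is the coarsest stable partition.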

468 citations

••

TL;DR: The purpose of this paper is to report the development of the first real-time models of CSP to be compatible with the properties and proof systems of the abovementioned untimed models.

Abstract: The parallel language CSP [9], an earlier version of which was described in [7], has become a major tool for the analysis of structuring methods and proof systems involving parallelism. The significance of CSP is in the elegance by which a few simply stated constructs (e.g., sequential and parallel composition, nondeterministic choice, concealment, and recursion) lead to a language capable of expressing the full complexity of distributed computing. The difficulty in achieving satisfactory semantic models containing these constructs has been in providing an adequate treatment of nondeterminism, deadlock, and divergence. Fortunately, as a result of an evolutionary development in [8], [10], [15], [1], [14], [2], and [4] we now have several such models. The purpose of this paper is to report the development of the first real-time models of CSP to be compatible with the properties and proof systems of the abovementioned untimed models. Our objective in this development is the construction of a timed CSP model which satisfies the following: (1) Continuous with respect to time. The time domain should consist of all nonnegative real numbers, and there should be no lower bound on the time difference between consecutive observable events from two processes operating asynchronously in parallel. (2) Realistic. A given process should engage in only finitely many events in a bounded period of time. (3) Continuous and distributive with respect to semantic operators. All semantic operators should be continuous, and all the basic operators as defined in [2], except recursion, should distribute over nondeterministic choice. (4) Verifiable design. The model should provide a basis for the definition, specification, and verification of time-critical processes with an adequate treatment of nondeterminism, which assists in avoidance of deadlock and divergence.

433 citations

••

TL;DR: This work investigates the problem of representing acyclic digraphs in the plane in such a way that all edges flow in the same direction, e.g., from the left to the right or from the bottom to the top.

Abstract: Acyclic digraphs are widely used for representing hierarchical structures. Examples include PERT networks, subroutine-call graphs, family trees, organization charts, Hasse diagrams, and ISA hierarchies in knowledge representation diagrams. We investigate the problem of representing acyclic digraphs in the plane in such a way that all edges flow in the same direction, e.g., from the left to the right or from the bottom to the top. Three plane representations are considered: straight drawings, visibility representations, and grid drawings. We provide efficient algorithms that construct these representations with all edges flowing in the same direction. The time complexity is O(n) for visibility representations and grid drawings, and O(n log n) for straight drawings, where n is the number of vertices of the digraph. For covering digraphs of lattices, the complexity of constructing straight drawings is O(n). We also show that the planar digraphs that admit any one of these representations are exactly the subgraphs of planar st-graphs.

273 citations

••

TL;DR: It is found that the equational theory of sets of pomsets under concatenation, parallel composition and union is finitely axiomatizable, whereas the theory of languages under the analogous operations is not.

Abstract: Pomsets have been introduced as a model of concurrency. Since a pomset is a string in which the total order has been relaxed to be a partial order, in this paper we view them as a generalization of strings, and investigate their algebraic properties. In particular, we investigate the axiomatic properties of pomsets, sets of pomsets and ideals of pomsets, under such operations as concatenation, parallel composition, union and their associated closure operations. We find that the equational theory of sets of pomsets under concatenation, parallel composition and union is finitely axiomatizable, whereas the theory of languages under the analogous operations is not. A similar result is obtained for ideals of pomsets, which incorporate the notion of subsumption, also known as augmentation. Finally, we show that the addition of any closure operation (parallel or serial) leads to nonfinite axiomatizability of the resulting equational theory.

225 citations

••

TL;DR: It is shown that there exist exponentially many square-free and cube-free strings of each length over these alphabets and arguments for the nonexistence of various RT(n)th power-free homomorphisms are provided.

Abstract: A string is called kth power-free if it contains no nonempty substring of the form x^k. For all nonnegative rational numbers k, kth power-free strings and kth power-free homomorphisms are investigated, and the shortest uniformly growing square-free (k=2) and cube-free (k=3) homomorphisms mapping into the least alphabets, of three and two letters respectively, are introduced. It is shown that there exist exponentially many square-free and cube-free strings of each length over these alphabets. Sharpening the kth power-freeness to the repetitive threshold RT(n) of n-letter alphabets, we provide arguments for the nonexistence of various RT(n)th power-free homomorphisms.
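The paper's specific homomorphisms are not reproduced here; as an illustration of the power-freeness notions, the well-known Thue–Morse homomorphism over two letters generates a cube-free (indeed overlap-free) word, which a brute-force check can confirm on any finite prefix:

```python
def iterate_homomorphism(phi, seed, steps):
    """Iterate a string homomorphism `phi` (dict letter -> image) on `seed`."""
    w = seed
    for _ in range(steps):
        w = "".join(phi[c] for c in w)
    return w

def has_kth_power(w, k):
    """Brute-force: does w contain x^k for some nonempty x?"""
    n = len(w)
    for i in range(n):
        for p in range(1, (n - i) // k + 1):       # candidate period length
            if w[i:i + p] * k == w[i:i + k * p]:
                return True
    return False

# Thue-Morse homomorphism: its fixed point is cube-free over two letters.
tm = iterate_homomorphism({"0": "01", "1": "10"}, "0", 8)   # 256 letters
assert not has_kth_power(tm, 3)   # no cube in the prefix
assert has_kth_power(tm, 2)       # squares are unavoidable over two letters
```

This also shows why the square-free case needs a third letter: every sufficiently long binary string contains a square, while cube-freeness is attainable over two letters.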

184 citations

••

[...]

TL;DR: Although traditionally most of the development in the theory of traces follows the string-language-theoretic line, it is demonstrated to the reader that the graph-theoretic point of view may be more appropriate.

Abstract: The theory of traces , originated by A. Mazurkiewicz in 1977, is an attempt to provide a mathematical description of the behavior of concurrent systems. Its aim is to reconcile the sequential nature of observations of the system behavior on the one hand and the nonsequential nature of causality between the actions of the system on the other hand. One can see the theory of traces to be rooted in formal string language theory with the notion of partial commutativity playing the central role. Alternatively one can see the theory of traces to be rooted in the theory of labeled acyclic directed graphs (or even in the theory of labeled partial orders). This paper attempts to present a major portion of the theory of traces in a unified way. However, it is not a survey in the sense that a number of new notions are introduced and a number of new results are proved. Although traditionally most of the development in the theory of traces follows the string-language-theoretic line, we try to demonstrate to the reader that the graph-theoretic point of view may be more appropriate. The paper essentially consists of two parts. The first one (Sections 1 through 4) is concerned with the basic theory of traces. The second one (Section 5) presents applications of the theory of traces to the theory of the behavior of concurrent systems, where the basic system model we have chosen is the condition/event system introduced by C.A. Petri.

173 citations

••

TL;DR: The Min Cut Linear Arrangement Problem is NP-complete for trees with polynomial size edge weights and this is used to show the NP-completeness of Search Number, Vertex Separation, Progressive Black/White Pebble Demand, and Topological Bandwidth for planar graphs with maximum vertex degree 3.

Abstract: We show that the Min Cut Linear Arrangement Problem (Min Cut) is NP-complete for trees with polynomial size edge weights and derive from this the NP-completeness of Min Cut for planar graphs with maximum vertex degree 3. This is used to show the NP-completeness of Search Number, Vertex Separation, Progressive Black/White Pebble Demand, and Topological Bandwidth for planar graphs with maximum vertex degree 3.

159 citations

••

TL;DR: More standard proof systems and semantics are provided and part of Girard's results are extended by investigating the consequence relations associated with Linear Logic and by proving corresponding Strong completeness theorems.

Abstract: Linear logic is a new logic which was recently developed by Girard in order to provide a logical basis for the study of parallelism. It is described and investigated in [9]. Girard's presentation of his logic is not so standard. In this paper we shall provide more standard proof systems and semantics. We shall also extend part of Girard's results by investigating the consequence relations associated with Linear Logic and by proving corresponding Strong completeness theorems. Finally, we shall investigate the relation between Linear Logic and previously known systems, especially Relevance Logics.

152 citations

••

TL;DR: A logical system and a family of models are proposed, a completeness result is proved and a decision procedure described, and one interpretation of belief which obeys a very strong persistence axiom is put forward and used in the analysis of the “wise men” puzzle.

Abstract: In the conclusion of [7] Halpern and Moses expressed their interest in a logical system in which one could talk about knowledge and belief (and belief about knowledge, knowledge about belief and so on). We investigate such systems. In the first part of the paper knowledge and belief, without time, are considered. Common knowledge and common belief are defined and compared. A logical system and a family of models are proposed, a completeness result is proved and a decision procedure described. In the second part of the paper, time is considered. Different notions of beliefs are distinguished, obeying different properties of persistence. One interpretation of belief which obeys a very strong persistence axiom is put forward and used in the analysis of the “wise men” puzzle.

137 citations

••

TL;DR: Linear Logic provides a refinement of functional programming and suggests a new implementation technique, with the following features: a synthesis of strict and lazy evaluation, a clean semantics of side effects, and no garbage collector.

Abstract: Linear Logic [6] provides a refinement of functional programming and suggests a new implementation technique, with the following features: a synthesis of strict and lazy evaluation, a clean semantics of side effects, and no garbage collector.

••

TL;DR: A calculus of concurrent processes is proposed that embodies the ability to regard a finite computation as a single event in dealing with the semantics of concurrency; it is shown that this abstraction mechanism, together with the idea of compound actions, handles a variety of synchronization and communication disciplines.

Abstract: The overall intention of this work is to investigate the ability to regard a finite computation as a single event, in dealing with the semantics of concurrency. We propose a calculus of concurrent processes that embodies this ability in two respects: the first one is that of execution, the second that of operation. As usual, we formalize the execution of a process as a labelled transition relation. But our point is that at each step the performed action is a compound one, namely a labelled poset, not just an atom. The action reflects the causal and concurrent structure of the process, and we claim that the bisimulation relative to such transition systems brings out a clear distinction between concurrency and sequential nondeterminism. Next we introduce a second transition relation, formalizing the operation of a process on data. As in the usual semantics of sequential programs, a process operates on data by means of its terminated sequences of computations. Then we obtain atomic actions by abstracting the whole operation of a process as a single event. We show that this abstraction mechanism, together with the idea of compound actions, allows us to deal with a variety of synchronization and communication disciplines.

••

TL;DR: An improvement on the algorithm for finding the transitive closure of an acyclic digraph is presented, running in worst-case time O(k·e_red) and space O(n·k), where k is the width of a chain decomposition.

Abstract: In [6] Goralcikova and Koubek describe an algorithm for finding the transitive closure of an acyclic digraph G with worst-case runtime O(n·e_red), where n is the number of nodes and e_red is the number of edges in the transitive reduction of G. We present an improvement on their algorithm which runs in worst-case time O(k·e_red) and space O(n·k), where k is the width of a chain decomposition. For the expected values in the G_{n,p} model of a random acyclic digraph with 0 < p < 1 we have:
$$\begin{gathered}
E(k) = O\left(\frac{\ln(p \cdot n)}{p}\right) \\
E(e_{red}) = O(\min(n \cdot |\ln p|,\; p \cdot n^2)) = O(n \cdot \ln n) \\
E(k \cdot e_{red}) = \begin{cases} O(n^2) & \text{for } \frac{\ln^2 n}{n} \leqslant p < 1 \\ O(n^2 \cdot \ln \ln n) & \text{otherwise} \end{cases}
\end{gathered}$$
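For orientation, the transitive closure itself (before any chain-decomposition refinement) can be computed by a memoised depth-first traversal of the DAG. This is a baseline sketch, not the paper's algorithm:

```python
def transitive_closure(succ):
    """Reachability sets of an acyclic digraph via memoised DFS.
    succ: dict node -> iterable of direct successors (graph must be acyclic)."""
    reach = {}
    def visit(u):
        if u not in reach:
            r = set()
            for v in succ.get(u, ()):
                r.add(v)          # the edge itself
                r |= visit(v)     # everything reachable beyond it
            reach[u] = r
        return reach[u]
    for u in list(succ):
        visit(u)
    return reach

# Chain 1 -> 2 -> 3: node 1 reaches both 2 and 3.
r = transitive_closure({1: {2}, 2: {3}, 3: set()})
assert r == {1: {2, 3}, 2: {3}, 3: set()}
```

Each node's reachability set is built once from its successors' memoised sets, so the work is proportional to the sizes of the closure sets touched.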

••

TL;DR: Techniques are developed for studying complexity classes that are not covered by known recursive enumerations of machines; applying them to the probabilistic class BPP shows that there is a relativized world where BPP^A has no complete languages.

Abstract: This paper develops techniques for studying complexity classes that are not covered by known recursive enumerations of machines. Often, counting classes, probabilistic classes, and intersection classes lack such enumerations. Concentrating on the counting class UP, we show that there are relativizations for which UP^A has no complete languages and other relativizations for which P^B ≠ UP^B ≠ NP^B and UP^B has complete languages. Among other results we show that P ≠ UP if and only if there exists a set S in P of Boolean formulas with at most one satisfying assignment such that S ∩ SAT is not in P. P ≠ UP ∩ coUP if and only if there exists a set S in P of uniquely satisfiable Boolean formulas such that no polynomial-time machine can compute the solutions for the formulas in S. If UP has complete languages then there exists a set R in P of Boolean formulas with at most one satisfying assignment so that SAT ∩ R is complete for UP. Finally, we indicate the wide applicability of our techniques to counting and probabilistic classes by using them to examine the probabilistic class BPP. There is a relativized world where BPP^A has no complete languages. If BPP has complete languages then it has a complete language of the form B ∩ MAJORITY, where B ∈ P and MAJORITY = {f | f is true for at least half of all assignments} is the canonical PP-complete set.
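As a toy illustration of the "at most one satisfying assignment" sets used above (not code from the paper), a brute-force counter can classify small CNF formulas by their number of models:

```python
from itertools import product

def count_models(clauses, n_vars, limit=2):
    """Count satisfying assignments of a CNF formula, stopping at `limit`.
    clauses: list of lists of nonzero ints, DIMACS-style (-i means NOT x_i)."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
            if count >= limit:   # enough to distinguish 0 / 1 / many
                break
    return count

# (x1 or x2) and (not x1) has the unique model x1=False, x2=True,
# so it would belong to a "uniquely satisfiable" promise set.
assert count_models([[1, 2], [-1]], 2) == 1
```

Counting up to `limit=2` suffices to separate the unsatisfiable, uniquely satisfiable, and ambiguously satisfiable cases that the UP characterizations distinguish; the exponential enumeration is of course only feasible for tiny instances.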

••

TL;DR: Results of Haken are extended to give an exponential lower bound on the size of resolution proofs for propositional formulas encoding a generalized pigeonhole principle, showing that resolution proof systems do not p-simulate constant-formula-depth Frege proof systems.

Abstract: We extend results of Haken to give an exponential lower bound on the size of resolution proofs for propositional formulas encoding a generalized pigeonhole principle. These propositional formulas express the fact that there is no one-one mapping from c·n objects to n objects when c >1. As a corollary, resolution proof systems do not p-simulate constant-formula-depth Frege proof systems.

••

TL;DR: A syntax-directed generalization of Owicki–Gries's Hoare logic for a parallel while language is presented, based on Hoare asserted programs of the form {Γ, A} p {B, Δ} where Γ, Δ are sets of first-order formulas.

Abstract: A syntax-directed generalization of Owicki–Gries's Hoare logic for a parallel while language is presented. The rules are based on Hoare asserted programs of the form {Γ, A} p {B, Δ} where Γ, Δ are sets of first-order formulas. These triples are interpreted with respect to an operational semantics involving potential computations where Γ, Δ are sets of invariants.

••

TL;DR: A procedure is presented that builds the principal type scheme of a term through the construction of the most general unifier for intersection type schemes.

Abstract: The intersection type discipline for the λ-calculus (ITD) is an extension of the classical functionality theory of Curry. In the ITD a term satisfying a given property has a principal type scheme in an extended meaning, i.e., there is a type scheme deducible for it from which all and only the type schemes deducible for it are reachable, by means of suitable operations. The problem of finding the principal type scheme for a term, if it exists, is semidecidable. In the paper a procedure is shown, building the principal type scheme of a term through the construction of the most general unifier for intersection type schemes.
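Unification over intersection type schemes is involved; for orientation, here is a sketch of the plain first-order syntactic unification, in the classical Robinson style, that such procedures extend. The term encoding (strings as variables, tuples as applications) is ad hoc for the example, not from the paper:

```python
def unify(t1, t2, subst=None):
    """Syntactic first-order unification.
    Terms: str = variable, tuple (functor, arg1, ...) = application.
    Returns a most general substitution dict, or None if unification fails."""
    if subst is None:
        subst = {}

    def walk(t):
        # Follow variable bindings to the representative term.
        while isinstance(t, str) and t in subst:
            t = subst[t]
        return t

    def occurs(v, t):
        # Occurs check: binding v to a term containing v would loop.
        t = walk(t)
        if t == v:
            return True
        return isinstance(t, tuple) and any(occurs(v, a) for a in t[1:])

    t1, t2 = walk(t1), walk(t2)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        return None if occurs(t1, t2) else {**subst, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, subst)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None                      # functor clash
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# arrow(X, int) ~ arrow(bool, Y)  =>  X = bool, Y = int
s = unify(("arrow", "X", ("int",)), ("arrow", ("bool",), "Y"))
assert s == {"X": ("bool",), "Y": ("int",)}
```

In the intersection type discipline, the unifier must additionally handle intersections of schemes and the extra operations the abstract mentions, which is what makes principal typing only semidecidable there.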

••

TL;DR: A general method for parallelisation of the same class of problems on more powerful parallel computers is presented, and it is shown that the dynamic programming problems considered can be computed in log^2 n time using n^6/log n processors on a parallel random access machine without write conflicts.

Abstract: A general method for parallelisation of some dynamic programming algorithms on VLSI was presented in [6]. We present a general method of parallelisation for the same class of problems on more powerful parallel computers. The method is demonstrated on three typical dynamic programming problems: computing the optimal order of matrix multiplications, the optimal binary search tree and the optimal triangulation of polygons (see [1, 2]). For these problems the dynamic programming approach gives algorithms having a similar structure. They can be viewed as straight-line programs of size O(n^3). The general method of parallelisation of such programs described by Valiant et al. [16] then leads directly to algorithms working in log^2 n time with O(n^9) processors. However, we adopt an alternative approach and show that a special feature of dynamic programming problems can be used. They can be thought of as generalized parsing problems: find a tree of the optimal decomposition of the problem into smaller subproblems. A parallel pebble game on trees [10, 11] is used to decrease the number of processors and to simplify the structure of the algorithms. We show that the dynamic programming problems considered can be computed in log^2 n time using n^6/log n processors on a parallel random access machine without write conflicts (CREW P-RAM). The main operation is essentially matrix multiplication, which is easily implementable on parallel computers with a fixed interconnection network of processors (ultracomputers, in the sense of [15]). Hence the problems considered can also be computed in log^2 n time using n^6 processors on a perfect shuffle computer (PSC) or a cube connected computer (CCC). An extension of the algorithm from [14] for the recognition of context-free languages on PSC and CCC can be used.
If the parallel random access machine with concurrent writes (CRCW P-RAM) is used, then the minimum of m numbers can be determined in constant time (see [8]) and consequently the parallel time for the computation of dynamic programming problems can be reduced from log^2 n to log n. We also investigate the parallel computation of trees realising the optimal cost of dynamic programming problems.
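Of the three problems named, optimal matrix-multiplication order has the classic sequential O(n^3) dynamic program; the paper parallelises recurrences of exactly this shape. For reference, a sequential sketch of the standard textbook recurrence (not the parallel algorithm):

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications to compute A1 * ... * An,
    where Ai has shape dims[i-1] x dims[i]."""
    n = len(dims) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Try every split point k between i and j.
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

# Shapes 10x30, 30x5, 5x60: (A1*A2)*A3 costs 1500 + 3000 = 4500,
# which beats A1*(A2*A3) at 9000 + 18000 = 27000.
assert matrix_chain_cost([10, 30, 5, 60]) == 4500
```

The triangular table of subproblems is exactly the "generalized parsing" structure the abstract describes: each entry depends only on shorter chains, so all entries of one length can in principle be evaluated in parallel.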

••

TL;DR: An E-unification algorithm based on flattening and SLD-resolution is developed and proved sound and complete by establishing a correspondence between narrowing sequences and resolution sequences.

Abstract: A comparison is performed between narrowing and SLD-resolution as regards their use in semantic unification (or E-unification). An E-unification algorithm based on flattening and SLD-resolution is developed and proved sound and complete by establishing a correspondence between narrowing sequences and resolution sequences. An E-unification algorithm based on a refined (“selection”) narrowing strategy is derived by adapting the SLD-strategy to narrowing. Finally, possible applications to the domain of logic+functional programming are considered.

••

TL;DR: The minimum dominating set problem in a tournament can be solved in n^{O(log n)} time; it is shown that if it has a polynomial-time algorithm, then for every constant C there is also a polynomial-time algorithm for the satisfiability problem of boolean formulas in conjunctive normal form with m clauses and C log^2 m variables.

Abstract: The problem of finding a minimum dominating set in a tournament can be solved in n^{O(log n)} time. It is shown that if this problem has a polynomial-time algorithm, then for every constant C, there is also a polynomial-time algorithm for the satisfiability problem of boolean formulas in conjunctive normal form with m clauses and C log^2 m variables. On the other hand, the problem can be reduced in polynomial time to a general satisfiability problem of length L with O(log^2 L) variables. Another relation between the satisfiability problem and the minimum dominating set in a tournament says that the former can be solved in 2^{O(√v)} · n^K time (where v is the number of variables, n is the length of the formula, and K is a constant) if and only if the latter has a polynomial-time algorithm.
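The n^{O(log n)} bound rests on the fact that every n-vertex tournament has a dominating set of size at most ⌈log2 n⌉: some vertex beats at least half of the others, so greedily taking a maximum out-degree vertex and discarding everything it dominates halves the instance. The sketch below illustrates that size bound (an illustration, not the paper's reduction):

```python
def greedy_dominating_set(n, beats):
    """Tournament on vertices 0..n-1; beats(u, v) is True iff edge u -> v.
    Returns a dominating set of size at most ceil(log2 n)."""
    remaining = set(range(n))
    dom = []
    while remaining:
        # A maximum out-degree vertex dominates at least half of the rest.
        u = max(remaining,
                key=lambda w: sum(beats(w, v) for v in remaining if v != w))
        dom.append(u)
        remaining = {v for v in remaining if v != u and not beats(u, v)}
    return dom

# Transitive tournament (u beats v iff u < v): vertex 0 dominates everyone.
assert greedy_dominating_set(8, lambda u, v: u < v) == [0]
```

Since minimum dominating sets are this small, exhaustive search over all vertex subsets of size up to ⌈log2 n⌉ already gives the quasi-polynomial algorithm mentioned in the abstract.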

••


TL;DR: An effective criterion for determining whether a given language has dot-depth 2 is conjectured, and the condition is shown to be necessary in general, and sufficient for languages over a two-letter alphabet.

Abstract: This paper is a contribution to the problem of effectively determining the dot-depth of a star-free language, a problem in the theory of automata and formal languages with close connections to algebra and formal logic. We conjecture an effective criterion for determining whether a given language has dot-depth 2. The condition is shown to be necessary in general, and sufficient for languages over a two-letter alphabet. The condition involves a novel use of categories in the study of semigroup-theoretic problems.

••

TL;DR: Several facts about multihead pushdown automata are obtained, indicating that the study of alternating multihead finite automata may lead to useful results about nonalternating automata.

Abstract: We define alternating multihead finite automata, a generalization of nondeterministic multihead finite automata based on the alternating Turing machine model introduced by Chandra, Kozen, and Stockmeyer (1981). We study the relationships between the classes of languages accepted by alternating multihead finite automata and the classes accepted by deterministic and nondeterministic multihead finite automata and pushdown automata. We also examine basic questions about alternating multihead finite automata (for example, are k + 1 heads better than k?). We conclude by placing upper bounds on the deterministic time and space complexity of the classes of languages accepted by alternating multihead finite automata. As corollaries to our results about alternating multihead finite automata, we obtain several facts about multihead pushdown automata, indicating that the study of alternating multihead finite automata may lead to useful results about nonalternating automata.

••

TL;DR: Clause implication is shown to be undecidable using the undecidability of finitely generated stable transitive relations on free terms; the problem is important because clause implication can be used to reduce the search space in automated deduction systems.

Abstract: Clause implication, A ⇒ B of two clauses A and B , is shown to be undecidable using the undecidability of finitely generated stable transitive relations on free terms. Clause implication is undecidable even in the case where A consists of four literals. The decision problem of clause implication is equivalent to the decision problem of clause sets that consist of one clause and some ground units, hence the undecidability results hold also for these clause sets. Clause implication is an important problem in Automated Deduction Systems, as it can be used advantageously to reduce the search space.

••

TL;DR: The results presented in this paper concern the axiomatizability problem of first-order temporal logic with linear and discrete time and show that the logic is incomplete, i.e., it cannot be provided with a finitistic and complete proof system.

Abstract: The results presented in this paper concern the axiomatizability problem of first-order temporal logic with linear and discrete time. We show that the logic is incomplete, i.e., it cannot be provided with a finitistic and complete proof system. We show two incompleteness theorems. Although the first one is weaker (it assumes some first-order signature), we decided to present it, for its proof is much simpler and contains an interesting fact that finite sets are characterizable by means of temporal formulas. The second theorem shows that the logic is incomplete independently of any particular signature.

••

TL;DR: The uniform tag sequences of Cobham are generalized to the case where the homomorphism ϕ is not necessarily uniform, but rather satisfies the analogue of an algebraic equation.

Abstract: We generalize the uniform tag sequences of Cobham, which arise as images of fixed points of k-uniform homomorphisms, to the case where the homomorphism ϕ is not necessarily uniform, but rather satisfies the analogue of an algebraic equation. We show that these sequences coincide with (a) the class of sequences accepted by a finite automaton with “generalized digits” as input, and (b) generalizations of the “locally catenative formula” of Rozenberg and Lindenmayer. Examples include the infinite Fibonacci word, which is generated as the fixed point of the homomorphism ϕ(a) = ab, ϕ(b) = a, and sequences of Rauzy and De Bruijn.

••

TL;DR: The relationships among these problems are investigated and some positive results concerning CA's are proved; it is shown that the language L = {0^n 1^m | m, n > 0, m divides n} is a real-time CA language, disproving a conjecture of Bucher and Culik.

Abstract: There are many fundamental open problems concerning cellular arrays (CA's). For example:
(1) Is the class of real-time CA languages closed under reversal (concatenation)?
(2) Are linear-time CA's more powerful than real-time CA's?
(3) Are nonlinear-time CA's more powerful than linear-time CA's?
(4) Does one-way communication reduce the computing power of a CA?
Although some of these problems appear to be easier to resolve than the others, e.g., problem (1) seems easier than (2), no solution to any of these problems is forthcoming. In this paper, we investigate the relationships among these problems as well as prove some positive results concerning CA's. We show:
(a) the class of real-time CA languages is closed under reversal if and only if linear-time CA's are equivalent to real-time CA's;
(b) if CA's are more powerful than CA's restricted to one-way data communication (i.e., one-way CA's), then nonlinear-time CA's are more powerful than linear-time CA's;
(c) if the class of real-time CA languages is closed under reversal, then it is also closed under concatenation; in the case of unary CA languages, we show that the class is closed under concatenation.
We also show that the language L = {0^n 1^m | m, n > 0, m divides n} is a real-time CA language, disproving a conjecture of Bucher and Culik.

••

TL;DR: In this article, the authors present an algorithm that constructs, in order, the list of Lyndon words of a given length, where a Lyndon word is a primitive word that is the smallest in its conjugacy class.

Abstract: Given an ordered alphabet, a Lyndon word is a primitive word that is the smallest in its conjugacy class. We give here an algorithm that constructs, in order, the list of the Lyndon words of a given length. The algorithm is optimal in the sense that the next Lyndon word in the list is computed in linear time from the previous one, without auxiliary memory. The algorithm finds its application in the construction of a cross-section of the conjugacy classes of a given length; it answers a question posed to the author by M.P. Schutzenberger.
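The generation can be sketched with the well-known successor rule for Lyndon words (a standard formulation consistent with the abstract's description of a linear-time successor with no auxiliary memory; not necessarily the paper's exact presentation): extend the current word periodically to the target length, then increment the last non-maximal letter.

```python
def lyndon_words(k, n):
    """All Lyndon words of length exactly n over the alphabet {0,...,k-1},
    generated in lexicographic order."""
    w = [0]
    while w:
        if len(w) == n:          # intermediate shorter words are skipped
            yield tuple(w)
        # Extend w periodically up to length n ...
        w = (w * (n // len(w) + 1))[:n]
        # ... then drop maximal letters and increment the last remaining one.
        while w and w[-1] == k - 1:
            w.pop()
        if w:
            w[-1] += 1

# Binary Lyndon words of length 3: exactly 001 and 011.
assert ["".join(map(str, w)) for w in lyndon_words(2, 3)] == ["001", "011"]
```

Each successor is obtained from its predecessor by one periodic extension and one increment, in place, which is the optimality property the abstract claims.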

••

TL;DR: The NP-completeness of the ∃∀∀-class is used to prove that for certain formulas there exist no equivalent quantifier-free formulas of polynomial length.

Abstract: We investigate the complexity of subclasses of Presburger arithmetic, i.e., the first-order theory of natural numbers with addition. The subclasses are defined by restricting the quantifier prefix to finite lists Q_1 … Q_s. For all m ⩾ 2 we find formula classes, defined by prefixes with m+1 alternations and m+5 quantifiers, which are Σ^p_m- respectively Π^p_m-complete. For m=1, the class of ∃∀∀-formulas is shown to be NP-complete. For m=0 and for all natural numbers t, the class of ∃^t-formulas is known to be in P. Thus we have a nice characterisation of the polynomial-time hierarchy by classes of Presburger formulas. Finally, the NP-completeness of the ∃∀∀-class is used to prove that for certain formulas there exist no equivalent quantifier-free formulas of polynomial length.

••

TL;DR: An axiomatic theory of sets and rules is formulated, which permits the use of sets as data structures and allows rules to operate on rules, numbers, or sets, and which combines the λ-calculus with traditional set theories.

Abstract: An axiomatic theory of sets and rules is formulated, which permits the use of sets as data structures and allows rules to operate on rules, numbers, or sets. We might call it a “polymorphic set theory”. Our theory combines the λ-calculus with traditional set theories. A natural set-theoretic model of the theory is constructed, establishing the consistency of the theory and bounding its proof-theoretic strength, and giving in a sense its denotational semantics. Another model, a natural recursion-theoretic model, is constructed, in which only recursive operations from integers to integers are represented, even though the logic can be classical. Some related philosophical considerations on the notions of set, type, and data structure are given in an appendix.