
Showing papers in "Bulletin of The European Association for Theoretical Computer Science in 1989"


Journal Article
TL;DR: It is shown that, if thesauri are restricted by requiring their probability distributions to be uniform, then thesauri and parametric conditions are equivalent; the thesaurus point of view also suggests some possible extensions of the theory.
Abstract: The 0-1 law for first-order properties of finite structures and its proof via extension axioms were first obtained in the context of arbitrary finite structures for a fixed finite vocabulary. But it was soon observed that the result and the proof continue to work for structures subject to certain restrictions. Examples include undirected graphs, tournaments, and pure simplicial complexes. We discuss two ways of formalizing these extensions, Oberschelp’s (1982) parametric conditions and our (2003) thesauri. We show that, if we restrict thesauri by requiring their probability distributions to be uniform, then they and parametric conditions are equivalent. Nevertheless, some situations admit more natural descriptions in terms of thesauri, and the thesaurus point of view suggests some possible extensions of the theory.

44 citations





Journal Article
TL;DR: A model for unbounded nondeterministic computation is proposed which provides a very natural basis for the structural analogy between recursive function theory and computational complexity theory and presents an alternative version of the halting problem.
Abstract: In this note we propose a model for unbounded nondeterministic computation which provides a very natural basis for the structural analogy between recursive function theory and computational complexity theory: P : NP = REC : RE. At the same time, this model presents an alternative version of the halting problem which has been known for a decade to be highly intractable.

17 citations









Journal Article
TL;DR: An efficient algorithm is presented that solves the problem of finding an equivalent DFA with the minimum possible number of states, with important applications to the efficiency of procedures that use finite automata.
Abstract: Given a DFA M, can we find an equivalent DFA (i.e., one that recognizes the same language as M) with the minimum possible number of states? This is a very natural question, and has important applications to the efficiency of procedures that use finite automata. In this note, we will see an efficient algorithm that solves this problem. To begin, we need a couple of definitions. Let M be any DFA with alphabet Σ. Then M naturally defines an equivalence relation ∼M over Σ*, given by x ∼M y iff M ends in the same state on inputs x and y. Note that the number of equivalence classes is finite (being equal to the number of states of M). Now let L = L(M) be the language recognized by M. This language also defines a natural equivalence relation ∼L, as follows. Call two strings x, y ∈ Σ* indistinguishable by L if, for all z ∈ Σ*, xz ∈ L ⇔ yz ∈ L. Otherwise we say that x and y are distinguishable. Then the relation ∼L is defined by x ∼L y iff x and y are indistinguishable.
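The refinement of ∼M toward ∼L described above can be sketched as partition refinement (Moore's construction): start from the accepting/non-accepting split and split blocks until no transition distinguishes two states in the same block. A minimal sketch, assuming a concrete example DFA; all names here are illustrative, not from the note:

```python
# Sketch of DFA minimization by partition refinement (Moore's algorithm).
# The example DFA below is an assumption for illustration, not from the note.

def minimize_dfa(states, alphabet, delta, start, accepting):
    """Return the number of classes of the stabilized partition,
    i.e. the minimum number of states of an equivalent DFA."""
    # Initial partition: accepting vs. non-accepting states.
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [p for p in partition if p]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Group states by which block each alphabet symbol sends them to.
            sig = {}
            for q in block:
                key = tuple(
                    next(i for i, b in enumerate(partition) if delta[q][a] in b)
                    for a in alphabet
                )
                sig.setdefault(key, set()).add(q)
            new_partition.extend(sig.values())
            if len(sig) > 1:
                changed = True
        partition = new_partition
    return len(partition)

# Example: a 3-state DFA over {0,1} for "strings ending in 1",
# where state "b" duplicates state "a".
states = {"s", "a", "b"}
delta = {
    "s": {"0": "s", "1": "a"},
    "a": {"0": "s", "1": "b"},
    "b": {"0": "s", "1": "a"},
}
print(minimize_dfa(states, ["0", "1"], delta, "s", {"a", "b"}))  # 2
```

Refinement stops after one pass here, merging the duplicate states "a" and "b" into a single class, so the minimal DFA has 2 states.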



Journal Article
TL;DR: It is shown by diagonalization that there is a computable space bound $S(n)$ with DSPACE[$S(n)$] = DSPACE[$S(n)$ log $n$], while there exists an oracle $A$ such that DSPACE$^{A}$[$S(n)$] $\neq$ DSPACE$^{A}$[$S(n)$ log $n$]; this is a counterexample to the belief that if a theorem has contradictory relativizations, then it cannot be proved using standard techniques.
Abstract: We construct a computable space bound $S(n)$, with $n^{2} < S(n) < n^{3}$, and show by diagonalization that DSPACE[$S(n)$] = DSPACE[$S(n)$ log $n$]. Moreover, we can show that there exists an oracle $A$ such that DSPACE$^{A}$[$S(n)$] $\neq$ DSPACE$^{A}$[$S(n)$ log $n$]. This is a counterexample to the belief that if a theorem has contradictory relativizations, then it cannot be proved using standard techniques like diagonalization [7].



Journal Article
Bulletin of the European Association for Theoretical Computer Science (EATCS), 38:199-210, 1989.

Journal Article
TL;DR: A relevant part of infinite game theory is explained, in which the classical question of the existence of winning strategies turns out to be of importance to practice.
Abstract: Infinite games are widely used in mathematical logic. Recently, infinite games have been used in connection with concurrent computational processes that do not necessarily terminate. For example, an operating system may be seen as playing a game “against” the disruptive forces of users. The classical question of the existence of winning strategies turns out to be of importance to practice. We explain a relevant part of infinite game theory. Reprinted in the 1993 World Scientific book Current Trends in Theoretical Computer Science, pages 235-24.
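One standard way to decide who has a winning strategy in a simple infinite game is the attractor construction for a reachability game on a finite arena: player 0 tries to reach a target set, and player 1 tries to avoid it forever. A minimal sketch under that assumption; the arena and all names below are illustrative, not from the article:

```python
# Sketch: deciding a reachability game by attractor computation.
# Player 0 wins from exactly the nodes in the attractor of `target`;
# from everywhere else, player 1 can force an infinite play avoiding it.
# The example arena is an assumption for illustration.

def attractor(nodes, edges, owner, target):
    """Return the set of nodes from which player 0 can force a visit
    to `target`. owner[v] is 0 or 1; edges[v] lists successors of v."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in attr:
                continue
            succs = edges[v]
            # Player 0 needs one good move; player 1 must have no escape.
            if owner[v] == 0 and any(s in attr for s in succs):
                attr.add(v)
                changed = True
            elif owner[v] == 1 and succs and all(s in attr for s in succs):
                attr.add(v)
                changed = True
    return attr

# Tiny arena: player 0 owns v0 and goal, player 1 owns v1.
nodes = ["v0", "v1", "goal"]
edges = {"v0": ["v1", "goal"], "v1": ["v0"], "goal": ["goal"]}
owner = {"v0": 0, "v1": 1, "goal": 0}
print("v0" in attractor(nodes, edges, owner, {"goal"}))  # True
```

Here v0 is in the attractor (player 0 can move directly to goal), and v1 is too, since its only move leads back into the attractor.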




Journal Article
TL;DR: The question "What is Structural Complexity Theory?" has been the source of some lively discussions at several recent conferences as discussed by the authors, and there is no commonly accepted answer but the intersection of almost all answers is nonempty.
Abstract: At several recent conferences, the question “What is Structural Complexity Theory?” has been the source of some lively discussions. At this time there does not exist one commonly accepted answer but the intersection of almost all answers is nonempty. The purpose of this paper is to describe one answer to this question. We will not describe in detail recent technical results, although some will be mentioned as examples, but rather will provide comments about themes and paradigms which may be useful in organizing much of the material. We assume that the reader is familiar with (or has access to) the book Structural Complexity I, by Balcazar, Diaz, and Gabarro [BDG88]. What is desired in the formulation of a theory of computational complexity is a method for dealing with the quantitative aspects of computing. Such a method would depend upon a general theory that would provide a means for defining and studying the “inherent difficulty” of computing functions (or, more generally, solving problems). Such a theory would explain the relationships among assorted computational models and among the various complexity measures that can be defined in the context of the models and their different modes of operation, and explain why some functions are inherently difficult to compute. While any such theory must necessarily be mathematical in nature, it cannot be mathematics as such; rather, it must reflect aspects of real computing and contribute to the formal development of computer science. From the study of specific problems, it has become a widely accepted notion that a problem is not “feasible” unless it can be solved using at most polynomial space and a problem is not “tractable” unless it can be solved using at most polynomial time. Much of the effort in complexity theory has been placed on determining just what functions are

Journal Article
TL;DR: It is proved that the linear view of memory in the von Neumann computer follows not from the consecutive nature of that memory but from the group structure of the law performed in the address arithmetic unit; changing that law yields a memory with non-commutative access, such as the metacyclic memory.
Abstract: Memory in the von Neumann computer is usually viewed as a linear array. We prove that this view does not follow from the consecutive nature of this memory, but from the group structure of the law performed in the address arithmetic unit. By changing that law, we can get a memory with non-commutative access. As an example we describe the metacyclic memory.
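The idea of swapping the addressing law can be sketched concretely: linear memory corresponds to the cyclic group Z_n under addition, while a metacyclic group makes address composition order-dependent. A hedged illustration using the dihedral group D_4 (a metacyclic group of order 8); the representation and names are my own assumptions, not the paper's construction:

```python
# Sketch: address arithmetic as a group law. Linear memory uses the
# commutative law of Z_n; replacing it with the dihedral group D_4
# (a metacyclic group) makes address composition non-commutative.
# Elements are pairs (r, f): rotation r mod N and flip f in {0, 1}.

N = 4  # rotation order; D_4 has 2*N = 8 addresses

def compose(a, b):
    """Compose addresses (r1,f1)*(r2,f2) in D_4: a flip inverts the
    rotation that follows it, so the order of operands matters."""
    r1, f1 = a
    r2, f2 = b
    r = (r1 + (-r2 if f1 else r2)) % N
    return (r, (f1 + f2) % 2)

x = (1, 0)  # step by one rotation
y = (0, 1)  # a flip
print(compose(x, y))  # (1, 1)
print(compose(y, x))  # (3, 1) -- access order matters
```

Composing a step then a flip lands at a different address than a flip then a step, which is exactly the non-commutative access the abstract describes for the non-linear case.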