
Showing papers in "The Bulletin of Symbolic Logic in 1997"


Journal ArticleDOI
TL;DR: In this paper, the satisfiability problem for FO2 is shown to be NEXPTIME-complete, and every satisfiable FO2-sentence is shown to have a model whose size is at most exponential in the size of the sentence.
Abstract: We identify the computational complexity of the satisfiability problem for FO2, the fragment of first-order logic consisting of all relational first-order sentences with at most two distinct variables. Although this fragment was shown to be decidable a long time ago, the computational complexity of its decision problem has not been pinpointed so far. In 1975 Mortimer proved that FO2 has the finite-model property, which means that if an FO2-sentence is satisfiable, then it has a finite model. Moreover, Mortimer showed that every satisfiable FO2-sentence has a model whose size is at most doubly exponential in the size of the sentence. In this paper, we improve Mortimer's bound by one exponential and show that every satisfiable FO2-sentence has a model whose size is at most exponential in the size of the sentence. As a consequence, we establish that the satisfiability problem for FO2 is NEXPTIME-complete.
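The exponential model property is what makes a simple decision procedure possible: enumerate candidate models up to the size bound and check the sentence in each. Below is a minimal brute-force sketch in Python for one hand-coded FO2 sentence over a single binary relation; the sentence, the signature, and the small search cap are illustrative choices, not taken from the paper.

    from itertools import product

    def smallest_model(max_size=4):
        # Hand-coded FO2 sentence: forall x exists y (E(x,y) and not E(y,x)).
        # The finite model property guarantees that searching models up to a
        # computable size bound decides satisfiability; here the search is
        # capped at a small illustrative bound.
        for n in range(1, max_size + 1):
            dom = range(n)
            for bits in product([False, True], repeat=n * n):  # every relation E
                E = lambda x, y: bits[x * n + y]
                if all(any(E(x, y) and not E(y, x) for y in dom) for x in dom):
                    return n
        return None  # no model of size <= max_size

    print(smallest_model())  # -> 3: a directed 3-cycle satisfies the sentence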

357 citations


Journal ArticleDOI
TL;DR: A survey of the recent applications of continuous domains for providing simple computational models for classical spaces in mathematics including the real line, countably based locally compact spaces, complete separable metric spaces, separable Banach spaces and spaces of probability distributions is presented.
Abstract: We present a survey of the recent applications of continuous domains for providing simple computational models for classical spaces in mathematics including the real line, countably based locally compact spaces, complete separable metric spaces, separable Banach spaces and spaces of probability distributions. It is shown how these models have a logical and effective presentation and how they are used to give a computational framework in several areas of mathematics and physics. These include fractal geometry, where new results on existence and uniqueness of attractors and invariant distributions have been obtained, measure and integration theory, where a generalization of the Riemann theory of integration has been developed, and real arithmetic, where a feasible setting for exact computer arithmetic has been formulated. We give a number of algorithms for computation in the theory of iterated function systems with applications in statistical physics and in the period-doubling route to chaos; we also show how efficient algorithms have been obtained for computing elementary functions in exact real arithmetic.
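As a concrete companion to the iterated-function-system material, the following is the standard "chaos game" iteration for the Sierpinski triangle in Python. This is textbook IFS computation, not the paper's domain-theoretic algorithm; the maps and iteration count are illustrative.

    import random

    # Three contractions of the plane, each halving the distance to a vertex.
    VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    maps = [lambda p, v=v: ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2)
            for v in VERTICES]

    p = (0.0, 0.0)
    points = []
    for _ in range(10000):
        p = random.choice(maps)(p)  # apply a randomly chosen contraction
        points.append(p)
    # The points cluster on the attractor: the unique compact set K with
    # K = f1(K) U f2(K) U f3(K).
    print(points[-3:])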

145 citations


Journal ArticleDOI
TL;DR: The genesis of the lambda calculus and its two major areas of application are presented: the representation of computations and the resulting functional programming languages on the one hand, and the representation of reasoning and the resulting systems of computer mathematics on the other hand.
Abstract: One of the most important contributions of A. Church to logic is his invention of the lambda calculus. We present the genesis of this theory and its two major areas of application: the representation of computations and the resulting functional programming languages on the one hand and the representation of reasoning and the resulting systems of computer mathematics on the other hand.
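To make "representation of computations" concrete, here is the standard Church-numeral encoding, transcribed into Python; this is classical material rather than anything specific to the paper.

    # Church numerals: the number n is the function that iterates f n times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mul  = lambda m: lambda n: lambda f: m(n(f))

    def to_int(n):
        # decode a Church numeral by iterating the integer successor from 0
        return n(lambda k: k + 1)(0)

    two   = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)), to_int(mul(two)(three)))  # 5 6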

142 citations


Journal ArticleDOI
TL;DR: A number of letters were exchanged between Church and Paul Bernays during the period from December 1934 to August 1937; they throw light on critical developments in Princeton during that period and reveal novel aspects of Church's distinctive contribution to the analysis of the informal notion of effective calculability.
Abstract: Alonzo Church's mathematical work on computability and undecidability is well-known indeed, and we seem to have an excellent understanding of the context in which it arose. The approach Church took to the underlying conceptual issues, by contrast, is less well understood. Why, for example, was "Church's Thesis" put forward publicly only in April 1935, when it had been formulated already in February/March 1934? Why did Church choose to formulate it then in terms of Gödel's general recursiveness, not his own λ-definability as he had done in 1934? A number of letters were exchanged between Church and Paul Bernays during the period from December 1934 to August 1937; they throw light on critical developments in Princeton during that period and reveal novel aspects of Church's distinctive contribution to the analysis of the informal notion of effective calculability. In particular, they allow me to give informed, though still tentative, answers to the questions I raised; the character of my answers is reflected by an alternative title for this paper, Why Church needed Gödel's recursiveness for his Thesis. In Section 5, I contrast Church's analysis with that of Alan Turing and explore, in the very last section, an analogy with Dedekind's investigation of continuity.

115 citations


Journal ArticleDOI
TL;DR: Two new dichotomy theorems for Borel equivalence relations are announced and presented in context by giving an overview of related recent developments.
Abstract: We announce two new dichotomy theorems for Borel equivalence relations, and present the results in context by giving an overview of related recent developments.

53 citations


Journal ArticleDOI
TL;DR: The seminal results of set theory are woven together in terms of a unifying mathematical motif, one whose transmutations serve to illuminate the historical development of the subject.
Abstract: Set theory, it has been contended, developed from its beginnings through a progression of mathematical moves, despite being intertwined with pronounced metaphysical attitudes and exaggerated foundational claims that have been held on its behalf. In this paper, the seminal results of set theory are woven together in terms of a unifying mathematical motif, one whose transmutations serve to illuminate the historical development of the subject. The motif is foreshadowed in Cantor's diagonal proof, and emerges in the interstices of the inclusion vs. membership distinction, a distinction only clarified at the turn of this century, remarkable though this may seem. Russell runs with this distinction, but is quickly caught on the horns of his well-known paradox, an early expression of our motif. The motif becomes fully manifest through the study of functions of the power set of a set into the set in the fundamental work of Zermelo on set theory. His first proof in 1904 of his Well-Ordering Theorem is a central articulation containing much of what would become familiar in the subsequent development of set theory. Afterwards, the motif is cast by Kuratowski as a fixed point theorem, one subsequently abstracted to partial orders by Bourbaki in connection with Zorn's Lemma. Migrating beyond set theory, that generalization becomes cited as the strongest of fixed point theorems useful in computer science. Section 1 describes the emergence of our guiding motif as a line of development from Cantor's diagonal proof to Russell's Paradox, fueled by the clarification of the inclusion vs. membership distinction. Section 2 engages the motif as fully participating in Zermelo's work on the Well-Ordering Theorem and as newly informing on Cantor's basic result that there is no bijection between the power set of a set and the set itself. Then Section 3 describes in connection with Zorn's Lemma the transformation of the motif into an abstract fixed point theorem, one accorded significance in computer science.
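The diagonal argument that foreshadows the motif can be stated in a few lines. This is the standard formulation, with the dual reading for functions from the power set into the set:

    \textbf{Theorem (Cantor).} For any set $S$ and any $g\colon S \to \mathcal{P}(S)$,
    the set $D = \{\,x \in S : x \notin g(x)\,\}$ is not in the range of $g$.

    \textit{Proof.} If $D = g(d)$ for some $d \in S$, then
    $d \in D \iff d \notin g(d) = D$, a contradiction. Hence no $g$ is onto
    $\mathcal{P}(S)$, so there is no bijection between $\mathcal{P}(S)$ and $S$;
    dually, every $f\colon \mathcal{P}(S) \to S$ must identify two distinct
    subsets, which is the situation Zermelo's argument exploits. $\square$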

33 citations


Journal ArticleDOI
TL;DR: This survey is intended to introduce to logicians some notions, methods and theorems in set theory which arose, largely through the work of Saharon Shelah, out of (successful) attempts to solve problems in abelian group theory, principally the Whitehead problem and the closely related problem of the existence of almost free abelian groups.
Abstract: Introduction. This survey is intended to introduce to logicians some notions, methods and theorems in set theory which arose, largely through the work of Saharon Shelah, out of (successful) attempts to solve problems in abelian group theory, principally the Whitehead problem and the closely related problem of the existence of almost free abelian groups. While Shelah's first independence result regarding the Whitehead problem used established set-theoretical methods (discussed below), his later work required new ideas; it is on these that we focus. We emphasize the nature of the new ideas and the historical context in which they arose, and we do not attempt to give precise technical definitions in all cases, nor to include a comprehensive survey of the algebraic results. In fact, very little algebraic background is needed beyond the definitions of group and group homomorphism. Unless otherwise specified, we will use the word "group" to refer to an abelian group, that is, the group operation is commutative. The group operation will be denoted by +, the identity element by 0, and the inverse of a by -a. We shall use na as an abbreviation for a + a + ⋯ + a [n times] if n is positive, and na = (-n)(-a) if n is negative. A group is called free if and only if it has a basis, that is, a linearly independent subset which generates the group. (The notions are the same as for vector spaces except that the scalars are from the ring of integers; so not every group is free.) Equivalently, a group is free if and only if it is isomorphic to a direct sum of copies of Z, the group of integers under addition. A crucial fact is that a subgroup of a free group is free [16, Theorem 14.5]. At a couple of points we will have occasion to refer to groups which are not necessarily commutative. In this context-the variety of all groups-the characterization of "free" is different; in fact, with the exceptions of the trivial group (of cardinality 1) and of Z, free groups in this variety are not commutative. However, it is still the case that subgroups of free groups are free [28, p. 95]. The Whitehead problem asks whether a certain necessary condition for a group to be free (defined in Section 1) is also sufficient. The almost
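A standard pair of examples, not from the survey, may help fix the definition of freeness:

    $\mathbb{Z}^2$ is free with basis $\{(1,0), (0,1)\}$: every element is
    uniquely $m(1,0) + n(0,1)$. By contrast, $\mathbb{Q}$ is not free: any two
    nonzero rationals $p/q$ and $r/s$ satisfy the dependence
    $(rq)\cdot(p/q) - (ps)\cdot(r/s) = 0$, so a linearly independent subset has
    at most one element, yet no single rational generates all of $\mathbb{Q}$.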

23 citations


Journal ArticleDOI
TL;DR: Important issues that explicitly or implicitly constitute the themes of Tarski's 1936 paper "On the concept of logical consequence" are reviewed, including Tarski's definition of the concept of logical consequence.
Abstract: Tarski's 1936 paper, "On the concept of logical consequence", is a rather philosophical, non-technical paper that leaves room for conflicting interpretations. My purpose is to review some important issues that explicitly or implicitly constitute its themes. My discussion contains four sections: (1) terminological and conceptual preliminaries, (2) Tarski's definition of the concept of logical consequence, (3) Tarski's discussion of omega-incomplete theories, and (4) concluding remarks concerning the kind of conception that Tarski's definition was intended to explicate. The third section involves subsidiary issues, such as Tarski's discussion concerning the distinction between material and formal consequence and the important question of the criterion for distinguishing between logical and non-logical terms. §1. Preliminaries. In this paper an argument is a two-part system composed of a set of propositions P (the premise-set) and a single proposition c (the conclusion). The expression 'c is a [logical] consequence of P' is used with the same meaning as the expression 'c is [logically] implied by P'. The expressions 'is a logical consequence of' and the converse 'implies' are relational. Often, I shall be talking in the same sense of validity of an argument. Validity is a property of arguments; an argument with premise-set P and conclusion c is valid if and only if P implies c; i.e., c is a logical consequence of P. Notice that this notion of argument is strictly ontic; it does not involve any agent that thinks, determines or establishes that a given proposition is or is not a consequence of a given set of propositions.
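A standard illustration of this ontic notion of validity (an example of mine, not the paper's):

    Let $P = \{\,p,\ p \to q\,\}$ and $c = q$. Every interpretation making all
    of $P$ true makes $c$ true, so $P$ implies $c$: the argument is valid.
    Let $P' = \{\,p \to q,\ q\,\}$ and $c' = p$. The interpretation with $p$
    false and $q$ true makes every member of $P'$ true and $c'$ false, so $c'$
    is not a logical consequence of $P'$ (affirming the consequent).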

22 citations


Journal ArticleDOI
TL;DR: A model-theoretic approach to ordinal analysis via the finite combinatorial notion of an α-large set of natural numbers is presented and applied to obtain ordinal analyses of a number of interesting subsystems of first- and second-order arithmetic.
Abstract: We describe a model-theoretic approach to ordinal analysis via the finite combinatorial notion of an α-large set of natural numbers. In contrast to syntactic approaches that use cut elimination, this approach involves constructing finite sets of numbers with combinatorial properties that, in nonstandard instances, give rise to models of the theory being analyzed. This method is applied to obtain ordinal analyses of a number of interesting subsystems of first- and second-order arithmetic.
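For orientation, here is one common Ketonen-Solovay-style definition of α-largeness, sketched in Python for ordinals below ω². Conventions for fundamental sequences vary across the literature (here ω unfolds to min(X) + 1), so treat this as an assumption rather than the paper's exact definition.

    def is_large(alpha, xs):
        """Is the finite set xs (a sorted list of naturals) alpha-large,
        where alpha = omega*a + b is coded as the pair (a, b)?"""
        a, b = alpha
        if (a, b) == (0, 0):
            return True                         # every finite set is 0-large
        if not xs:
            return False
        if b > 0:                               # successor step: strip the minimum
            return is_large((a, b - 1), xs[1:])
        # limit step: unfold omega*a to omega*(a-1) + (min(xs) + 1)
        return is_large((a - 1, xs[0] + 1), xs)

    # With this convention, xs is omega-large iff |xs| > min(xs):
    print(is_large((1, 0), [2, 5, 9]))          # True  (3 elements, min 2)
    print(is_large((1, 0), [3, 5, 9]))          # False (3 elements, min 3)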

21 citations


Journal ArticleDOI
TL;DR: The category of presheaves over PTIME-functions is used in order to show that Cook and Urquhart's higher-order function algebra PV^ω defines exactly the PTIME-functions, and a syntax-free generalisation of PTIME-computability to higher types is obtained.
Abstract: We use the category of presheaves over PTIME-functions in order to show that Cook and Urquhart's higher-order function algebra PV^ω defines exactly the PTIME-functions. As a byproduct we obtain a syntax-free generalisation of PTIME-computability to higher types. By restricting to sheaves for a suitable topology we obtain a model for intuitionistic predicate logic with Σ^b_1-induction over PV^ω and use this to re-establish that the provably total functions in this system are polynomial time computable. Finally, we apply the category-theoretic approach to a new higher-order extension of Bellantoni-Cook's system BC of safe recursion.
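Since the abstract builds on Bellantoni-Cook's BC, a small sketch of its safe/normal discipline may help. For readability this uses unary numbers rather than BC's recursion on binary notation, and Python does not enforce the discipline, so the normal/safe split below is only a naming convention.

    # Arguments before ';' in BC are "normal", those after it are "safe":
    # recursion may only be driven by normal arguments, and recursive values
    # may only be used in safe positions. That discipline guarantees polytime.

    def add(x_normal, y_safe):
        # add(x; y): recurse on the normal x; the recursive value feeds a
        # safe position (the argument of the successor)
        if x_normal == 0:
            return y_safe
        return 1 + add(x_normal - 1, y_safe)

    def mul(x_normal, y_normal):
        # mul(x, y;): y may legitimately enter add's normal slot because y
        # is normal here; the recursive value mul(x-1, y) stays safe
        if x_normal == 0:
            return 0
        return add(y_normal, mul(x_normal - 1, y_normal))

    print(mul(3, 4))  # 12, computed within the safe-recursion discipline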

17 citations


Journal ArticleDOI
TL;DR: An approach to the fine structure of L based solely on elementary model-theoretic ideas is presented, and its use in a proof of Global Square in L is illustrated; the approach avoids the Lévy hierarchy of formulas and the subtleties of master codes and projecta.
Abstract: We present here an approach to the fine structure of L based solely on elementary model theoretic ideas, and illustrate its use in a proof of Global Square in L. We thereby avoid the Lévy hierarchy of formulas and the subtleties of master codes and projecta, introduced by Jensen [3] in the original form of the theory. Our theory could appropriately be called "Hyperfine Structure Theory", as we make use of a hierarchy of structures and hull operations which refines the traditional L_α- or J_α-sequences with their Σ_n-hull operations. §1. Introduction. In 1938, K. Gödel defined the model L of set theory to show the relative consistency of Cantor's Continuum Hypothesis. L is defined as a union of initial segments which satisfy: L_0 = ∅, L_λ = ∪_{α<λ} L_α for limit ordinals λ, and, crucially, L_{α+1} = the collection of 1st order definable subsets of L_α. Since every transitive model of set theory must be closed under 1st order definability, L turns out to be the smallest inner model of set theory. Thus it occupies the central place in the set theoretic spectrum of models. The proof of the continuum hypothesis in L is based on the very uniform hierarchical definition of the L-hierarchy. The Condensation Lemma states that if π : M → L_α is an elementary embedding, M transitive, then M = L_ᾱ for some ᾱ ≤ α; the lemma can be proved by induction on α. If a real, i.e., a subset of ω, is definable over some L_α, then by a Löwenheim-Skolem argument it is definable over some countable M as above, and hence over some L_β with β < ω_1. This allows one to list the reals in L in a list of length ω_1 and therefore proves the Continuum Hypothesis in L.

Journal ArticleDOI
TL;DR: A definability result is proved, showing that every finite element of the model is the interpretation of some term of the language FPC, a type theory with products, sums, function spaces and recursive types.
Abstract: A new games model of the language FPC, a type theory with products, sums, function spaces and recursive types, is described. A definability result is proved, showing that every finite element of the model is the interpretation of some term of the language.

Journal ArticleDOI
Joachim Lambek
TL;DR: Some of the more recent developments of the classical notions of computability (recursiveness, abstract machines, λ-definability) and their relevance to linguistics and logic are discussed.
Abstract: As an undergraduate I was taught to multiply two numbers with the help of log tables, using the formula log(xy) = log x + log y. Having graduated to teach calculus to Engineers, I learned that log tables were to be replaced by slide rules. It was then that I made the fateful decision that there was no need for me to learn how to use this tedious device, as I could always rely on the students to perform the necessary computations. In the course of time, slide rules were replaced by pocket calculators and personal computers, but I stuck to my original decision. My computer phobia did not prevent me from taking an interest in the theoretical question: what is a computation? This question goes back to Hilbert's 10th problem (see Browder [1976]), which asks for an algorithm, or computational procedure, to decide whether any given polynomial equation is solvable in integers. It quickly leads to the related question: which numerical functions f : ℕⁿ → ℕ are computable? While Hilbert's 10th problem was only resolved in 1970, this related question had some earlier answers, of which I shall single out the following three: (1) f is recursive (Gödel, Kleene), (2) f is computable on an abstract machine (Turing, Post), (3) f is definable in the untyped λ-calculus (Church, Kleene). These tentative answers were shown to be equivalent by Church [1936] and Turing [1936–7]. I shall discuss here some of the more recent developments of these notions of computability and their relevance to linguistics and logic. I hope to be forgiven for dwelling on some of the work I have been involved with personally, with greater emphasis than is justified by its historical significance.
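A one-line numerical check of the log-table identity restored above; purely illustrative.

    import math

    x, y = 123.0, 456.0
    # multiply by adding logarithms, as with log tables or a slide rule
    print(math.exp(math.log(x) + math.log(y)))  # ~56088.0
    print(x * y)                                # 56088.0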

Journal ArticleDOI
TL;DR: Among the invited lectures is a survey of the model theory of finite fields, starting with the work of Ax, up to recent results using local stability theory: pseudo-finite fields have a good behaviour (they have neither the finite cover property nor the strict order property), and the dimension of a definable set (which can be defined in purely algebraic terms) is a good notion of dimension.
Abstract: Abstracts of invited plenary lectures and tutorials.

HOWARD BECKER, Path-connectedness, simple connectedness and descriptive set theory.
Department of Mathematics, University of S. Carolina, Columbia, SC 29208, USA. E-mail: becker@math.sc.edu.
This talk is concerned with two classes of questions. (1) Assume ¬CH. Does there exist a compact subset of R^n which has exactly ℵ1 path-components? For R^3 the answer is yes. For R^2 the answer is no (assuming weak large cardinal axioms, which may or may not be necessary). The proof of both of these facts is descriptive set theoretic, and closely related to the proofs in (2), below. (2) Let PCn and SCn denote the pointsets of path-connected and simply connected sets, respectively, in the space of compact subsets of R^n. What is the complexity of PCn and SCn in terms of the projective hierarchy? For n ≥ 3, PCn is true Π^1_2; SC2 is true Π^1_1; and for n ≥ 4, SCn is true Π^1_2. For PC2 and SC3, the problem is open, although some upper and lower bounds are known.

ZOÉ CHATZIDAKIS, Model theory of finite and pseudo-finite fields, a survey.
Equipe de Logique Mathématique, Université Paris VII - CNRS, Paris, France. E-mail: zoe@logique.jussieu.fr.
In this talk we give a survey of the model theory of finite fields, starting with the work of Ax, up to recent results using local stability theory. In 1968, J. Ax gave an axiomatisation of the theory Tf of finite fields, and showed it is decidable. He also described the infinite models of the theory Tf, the so-called pseudo-finite fields, and gave simple invariants for their elementary theories. He isolated an important property shared by these fields: they are pseudo-algebraically closed (PAC). The study of PAC fields was later pursued by algebraists and model-theorists, and produced many interesting examples of well-behaved theories of fields. Further developments in the model-theoretic study of finite fields arose in the 90's. Estimates of Lang-Weil type were obtained for definable subsets of finite fields: a definable subset of Fq has asymptotically size μq^d, for some positive rational μ and integer d; d can be interpreted as the dimension, and μ as a measure. These estimates have many applications, and show in particular that pseudo-finite fields, even though unstable, have a good behaviour: they don't have the finite cover property, nor the strict order property; the dimension of a definable set (which can be defined in purely algebraic terms) is a good notion of dimension. These properties allow us to use local stability theory to study pseudo-finite fields, as well as related PAC fields; this yields various applications to algebraic groups defined over a finite or pseudo-finite field F: if G is such a group, then there are various bounds on the index of definable subgroups of G(F), and definability of subgroups generated by Zariski irreducible definable subsets of G(F). Moreover, going from pseudo-finite to finite gives strong approximation results for dense subgroups of algebraic groups defined over Z.
[1] J. Ax, The elementary theory of finite fields, Annals of Mathematics, vol. 88 (1968), pp. 239–271.
[2] M. Fried and M. Jarden, Field Arithmetic, Ergebnisse 11, Springer-Verlag, Berlin, Heidelberg, 1986.
[3] Z. Chatzidakis, L. van den Dries, and A. Macintyre, Definable sets over finite fields, Journal für die Reine und Angewandte Mathematik, vol. 427 (1992), pp. 107–135.
[4] E. Hrushovski and A. Pillay, Definable subgroups of algebraic groups over finite fields, Journal für die Reine und Angewandte Mathematik, vol. 462 (1995), pp. 69–91.

KEVIN COMPTON, 0-1 laws and finite model theory.
Department of EECS, University of Michigan, Ann Arbor, MI 48109-1003, USA. E-mail: kjc@eecs.umich.edu.
A class C of structures obeys a 0-1 law with respect to a logic L if, for every sentence S in L, the probability that S holds in a random structure of size n in C approaches a limit of either 0 or 1 as n goes to infinity. The most famous result of this type is due to Glebskii et al. and Fagin. It says that the class of all structures over a relational vocabulary obeys a 0-1 law with respect to first-order logic. There have been many extensions and modifications of this theorem. Variations include the following.
• Change the class C. Consider structures with functions, permutations, equivalence relations, etc. Allow underlying linear orders or other relations.
• Change the probability measure. Count structures with a fixed universe. Count isomorphism types. Assign edge probabilities in graphs. Generate partial orders in an n-cube.
• Extend or change the logic L. Consider monadic second-order logic. Consider fixpoint logic. Consider infinitary logic with a bounded number of variables. Consider Sigma-1-1 sentences with syntactic restrictions. Consider modal logic.
• If a 0-1 law does not hold, prove a weaker result. A limit law holds if the probability of every sentence approaches some limit. A slow variation law holds if the difference between consecutive probabilities approaches 0.
We will provide a framework to classify the many results now known, and give a general overview of the techniques used to prove them. (A small empirical illustration of the basic 0-1 law appears after these abstracts.)

S. BARRY COOPER, Beyond Gödel's theorem—the failure to capture information content.
School of Mathematics, University of Leeds, Leeds LS2 9JT, England, United Kingdom. E-mail: pmt6sbc@gps.leeds.ac.uk.
The work of Gödel, Church, Turing and others from the thirties onwards presents us with by now familiar restrictions on mathematics. On the other hand, it has been possible to believe that science to some extent avoids such basic theoretical limitations. Recent work concerning Turing definability and automorphisms of Turing-related structures suggests that scientific observation may be inadequate for analysing physical reality. In science, as in mathematics, a 'theory of everything' may prove to be a theoretical impossibility.

ANUJ DAWAR, Model theoretic methods for complexity theory.
Department of Computer Science, University of Wales Swansea, Swansea SA2 8PP, Wales, United Kingdom. E-mail: a.dawar@swansea.ac.uk.
Research in the model theory of finite structures received a large impetus from the discovery that important computational complexity classes have natural characterizations as definability classes on finite models. Such results raised the hope that model theoretic methods could be brought to bear on notoriously outstanding problems in complexity theory. This hope has been partly frustrated by the fact that the standard methods of model theory fail when only finite structures are considered. This failure is tied to the central role that first order logic has played in classical model theory. First order logic does not occupy such a central position in finite model theory for two important reasons:
• First order logic is too strong, in the sense that any two elementarily equivalent finite models are isomorphic.
• First order logic is too weak, in the sense that any class of finite structures that is finitely axiomatizable is of very low complexity.
Both these weaknesses can be addressed by considering restrictions of first order logic to finitely many variables. This yields an interesting (from the model theoretic point of view) notion of elementary equivalence, that is also useful in studying extensions of first order logic. Moreover, many central problems of complexity theory can still be recast in this framework. We discuss questions about the model theory of finite variable logics arising from such considerations.

ROD DOWNEY, On low2 recursively enumerable degrees.
Department of Mathematics, Victoria University, Wellington, New Zealand. E-mail: Rod.Downey@vuw.ac.nz.
A set A is called low_n if its n-th Turing jump has the same degree as ∅^(n). The low_n sets form a natural collection of degrees. The low_2 sets form a particularly interesting class since they seem to lie at the boundary of definability in, for instance, the recursively enumerable degrees. In this talk, I plan to describe some recent progress towards understanding and defining the low_2 (recursively enumerable) sets. I plan to speak on joint work with Richard Shore where we have proven that the low_2 (r.e.) (Δ_2) degrees are definable, as are the low_2 r.e. sets under the ordering with the two reducibilities {T, m}. I also plan to discuss a new technique developed with Shore which enables us to embed lattices below a given nonlow_2 degree. This is related to work with Shore and Cholak concerning the nonembedding of 1-3-1 into the r.e. degrees. Finally, I plan to briefly discuss recent work jointly with Steffen Lempp where we have proven that the low_2 class of contiguous degrees is definable in the upper semilattice of recursively enumerable degrees, solving a 20 year old question of Ladner.

IVO HERZOG, The model theory of modules.
Department of Mathematics, Notre Dame University, Notre Dame, IN 46556, USA. E-mail: iherzog@artin.helios.nd.edu.
Let R be an associative ring with unit. The language L(R) for (left) R-modules consists of the (nonlogical) symbols L(R) = (+, 0, r)_{r∈R} necessary to express a linear equation r_1 x_1 + ⋯ + r_n x_n = 0 with (left) coefficients in R. A positive-primitive (pp-) formula of L(R) is an existentially quantified system of linear equations. A brief history of the model theory of modules will clarify the central rôle pp-formulae play: Baur proved the elimination of quantifiers (modulo a complete theory of R-modules) up to pp-formulae, and Baur, Garavaglia and Monk independently gave a very nice classification of complete theories of R-modules in terms of pp-formulae. If the ring R is a commutative field, then every pp-formula is equivalent to a system of linear equations. The model theoretic analysis of R-modules by use of the pp-formulae is thus a generalization of Linear Algebra. As with other such generalization
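To make the 0-1 law in Compton's abstract concrete, here is a quick Monte Carlo check (an illustration of mine, not from the talk): the first-order sentence "some vertex is isolated" holds in a random graph G(n, 1/2) with probability tending to 0.

    import random

    def has_isolated_vertex(n):
        # sample an undirected random graph G(n, 1/2)
        adj = [[False] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                adj[i][j] = adj[j][i] = random.random() < 0.5
        return any(not any(row) for row in adj)

    for n in (5, 10, 20, 40):
        trials = 2000
        freq = sum(has_isolated_vertex(n) for _ in range(trials)) / trials
        print(n, freq)  # the frequency falls toward the law's limit of 0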

Journal ArticleDOI
TL;DR: As discussed by the authors, Erdős and Rado invented the partition calculus, a generalization of Ramsey theory to uncountable cardinals and infinite order types.
Abstract: With the death of Paul Erdős on September 20, 1996, not only did the twentieth century lose one of its finest mathematicians, but the world lost one of the strongest ever to have appeared. Erdős was born on March 26, 1913 to two mathematics teachers in Hungary, and almost immediately began a career in mathematics. At the age of four he discovered the negative numbers, and amused himself by multiplying four-digit numbers in his head. By the age of 21 Erdős had received his Ph.D. from Pázmány University, where he was already exposed to logic, or at least to logicians. For example, in 1930 he found a simple proof of Chebyshev's theorem that for any integer n there is always a prime between n and 2n, and László Kalmár helped him to prepare his paper for publication. Also, in 1932 George Szekeres, one of Erdős's friends, rediscovered Ramsey's Theorem and applied it to a problem posed by Eszter Klein. Ramsey's Theorem greatly impressed Erdős, and he spent much of his life pursuing applications of it and similar propositions. Already by 1935 he and Szekeres had a joint paper with applications of Ramsey's Theorem to graph theory. From 1934 to 1938 Erdős was in Manchester, England and at approximately the same time Richard Rado received a scholarship to Cambridge University. Rado had finished his Ph.D. with Issai Schur in 1931 in Germany on a problem related to Ramsey theory. By 1933 it was already difficult for a Jew to find employment in Germany. Erdős and Rado met in Cambridge in 1934 and began a collaboration that changed the face of set theory. What they invented together was the partition calculus, a generalization of Ramsey theory to uncountable cardinals and infinite order types. The consequences of this theory are still being investigated. In 1938 Erdős came to the United States, and gradually began the habit of traveling between institutions that eventually gained him his reputation as a sort of mathematical circuit rider. Erdős's breadth of knowledge and achievement in mathematics was enormous. He studied large parts of number theory and combinatorics as well as set theory. It is impossible to find any bounds to his interests. Through his collaborations with hundreds of other people, he created new areas of study and new mathematical methods. For example, in number theory and combinatorics he developed the use of random methods (also used in parts

Journal ArticleDOI
TL;DR: Abstracts of contributed talks given in person or by title by members of the Association for Symbolic Logic (ASL) are presented.
Abstract: Abstracts of contributed talks given in person or by title by members of the Association for Symbolic Logic follow. For the Program Committee, Sy D. Friedman.

KLAUS AMBOS-SPIES, DENIS R. HIRSCHFELDT, and RICHARD A. SHORE, Undecidability and 1-types in an interval of the computably enumerable degrees.
Mathematisches Institut, Universität Heidelberg, D-69120 Heidelberg, Germany. E-mail: ambos@math.uni-heidelberg.de.
Department of Mathematics, Cornell University, Ithaca, NY 14853, USA. E-mail: drh@math.cornell.edu.
Department of Mathematics, Cornell University, Ithaca, NY 14853, USA.

Journal ArticleDOI
TL;DR: In this article, Bonet recalls Cook and Reckhow's definition of a propositional proof system as a polynomial-time function from the set of strings in some alphabet onto the set of tautologies.
Abstract: Abstracts of invited lectures, tutorials, and contributed talks given by members of the Association for Symbolic Logic follow. For the Program Committee, Daniel Lascar.

Abstracts of invited talks

M. LUISA BONET, Complexity of propositional proofs.
Depto de Lenguajes y Sistemas Informáticos, Universidad Politécnica de Cataluña, Pau Gargallo 5, 08028 Barcelona, Spain. E-mail: bonet@goliat.upc.es.
We are going to talk about propositional proof systems and their relationship with the field of complexity theory in Computer Science. Proof theorists have defined a number of propositional proof systems, among them resolution, sequent calculus, Frege systems (Hilbert style), etc. In 1974, Cook and Reckhow gave a new definition of propositional proof system that allowed them to relate the lengths of proofs of tautologies with major questions in complexity theory. They defined a propositional proof system as a polynomial-time function from the set of strings in some alphabet onto the set of tautologies. This means that a propositional proof system is one in which we can check in polynomial time that some sequence of symbols is a proof of a tautology. (A toy instance of this definition is sketched after these abstracts.) From this definition, they proved that the existence of a propositional proof system that had "short" proofs for all tautologies is equivalent to the assertion NP = Co-NP. A corollary of this is that if all propositional proof systems have classes of tautologies that require "long" proofs, then P is not equal to NP. This corollary gave rise to Cook's program. The idea is to take different proof systems and try to prove superpolynomial lower bounds for them, i.e., for particular proof systems, to find families of tautologies that require superpolynomial-length proofs. After about 20 years of this program, some successes have been obtained. In 1985 Armin Haken proved that the propositional version of the pigeonhole principle requires exponential-size resolution proofs. After that, superpolynomial lower bounds were proven for the following systems: bounded-depth Frege (Ajtai '88, Pitassi-Impagliazzo-Beame '92, Krajicek-Pudlak-Woods '92), Cutting Planes (Bonet-Pitassi-Raz '94, Pudlak '95, Haken-Cook '95), and Nullstellensatz proof systems of bounded degree (Beame-Impagliazzo-Krajicek-Pitassi-Pudlak '95). The previous lower bounds required difficult techniques, and knowledge of fields like algebraic geometry, circuit complexity theory, etc. But still at this stage, nobody knows how to prove lower bounds for sophisticated proof systems like Frege or sequent calculus, and much less lower bounds for systems more powerful than those. We will talk about different possibilities on how to continue Cook's program. One direction is to try to work on techniques to prove polynomial lower bounds for Frege systems; even that seems a very difficult task. Another direction is to explore, and maybe define, proof systems somewhat less powerful than Frege or sequent calculus, but more sophisticated than the ones we already know how to prove lower bounds for.

L. GORDEEV, Finite proof theory.
Mathematics Institute, University of Tübingen, Auf der Morgenstelle 10, D-72076 Tübingen, Germany. E-mail: mife001@mailserv.zdv.uni-tuebingen.de.
The familiar Gentzen-style proof theory deals differently with propositional connectives and quantifiers. Although propositional rules of inference admit direct proof-search algorithms which successively reduce formulas to their principal subformulas, this is not the case for the rules of quantification. This is because the treatment of "for all x" and "there exists x" in both modus-ponens and sequent calculi actually imitates infinite model theoretical interpretations by viewing 'x' as a variable element in an arbitrary infinite structure. It is assumed that bound variables can every time be separated from the arbitrarily many given free variables, which implies that the set of all available (names of) variables should be infinite. That this assumption is not quite harmless is shown by standard completeness proofs, either for cut-free sequent calculi or for resolution-type systems, which always provide infinite models consisting of (the names of) eigenvariables or suitable terms—although very often finite models would suffice. Note that strictly speaking Gentzen's rule of universal quantification does not preserve the subformula property, since the correlated premise-eigenvariable can (in fact, almost everywhere must) differ from all variables occurring in the conclusion. By contrast, algebraic formalisms of predicate logic usually do not require any assumption of infinity—in fact, just finite models (algebras) are often more important there than infinite ones. These formalisms in turn are closely related to the modus-ponens predicate calculi with finitely many distinct variables. For example, the canonical formalism of relation algebras is equivalent to the modus-ponens logic (with equality) of four distinct variables. So what about proof theory of predicate logic with n distinct variables (abbreviated as n-VAR logic)? It turns out that sequent calculi proper are not suitable for n-VAR logic because they really do need infinitely many distinct eigenvariables. A possible improvement is provided by the idea of a rewriting system imitating Gentzen's rules (in the restricted variable domain) on arbitrary subformula levels simultaneously. To avoid obvious variable collisions one can strengthen "premise subformulas" either by "deleting" some "old" variables, or by adding the corresponding "new" universal quantifiers. Loosely speaking, these operations (reductions) produce new combinations of old subformulas. This idea was implemented in [1] for n-VAR logic without function symbols and equality (engere Prädikatenlogik). The resulting rewriting systems (reduction calculi) RPCn prove all theorems of n-VAR logic, and they admit almost direct proof-search algorithms, which is important for automated theorem proving. Moreover, RPC4 and RPC5 share certain semi-completeness properties which make them very comprehensive. In the present lecture it will be shown how to extend this approach to n-VAR logic with equality. There are strong connections between the resulting reduction calculi RPCEn and both relation and cylindric algebras. In particular, it will be shown that the RPCEn enable us to solve some problems posed in [2] and [3].
[1] L. Gordeev, Cut free formalization of logic with finitely many variables, Part I, Lecture Notes in Computer Science, no. 933, Springer-Verlag, 1995, pp. 136–150.
[2] L. Henkin, J. D. Monk, and A. Tarski, Cylindric Algebras, Part I, North-Holland, 1971.
[3] A. Tarski and S. Givant, A formalization of set theory without variables, American Mathematical Society Colloquium Publications, vol. 41 (1987).

MARTIN HYLAND, Proofs as process.
DPMMS, University of Cambridge, 16 Mill Lane, Cambridge CB2 1SB, England. E-mail: M.Hyland@dpmms.cam.ac.uk.
In this talk I aim to explain two related ideas. (1) A game-theoretic approach to the representation of proofs in minimal logic. This gives a fully compositional extension of the dialogues first proposed by P. Lorenzen. The game-theoretic semantics can be derived from one for intuitionistic linear logic, and has an interpretation in terms of essentially sequential processes. (2) A new semantics for classical linear logic based on another notion of process derived from recent work of A. Joyal on free bicompletions. When combined with the ideas of Girard's LC and LU, this suggests a programme for analysing the 'constructive content' of classical proofs.

STEFFEN LEMPP, Decidability and undecidability in the enumerable Turing degrees.
Mathematics Department, University of Wisconsin-Madison, Madison, WI 53706, USA. E-mail: lempp@math.wisc.edu.
Much of the work in classical computability theory over the last few decades has focused on the Turing degrees of the (recursively) enumerable sets. These degrees form one of the fundamental structures of mathematics and can be characterized in various ways, e.g., as the degrees of word problems of finitely presented groups, or as the degrees of solution sets of Diophantine equations. I will summarize some of the research over the last decade, with particular attention given to questions of decidability and undecidability in its first-order theory. The initial algebraic investigations of this degree structure in the 1960s and 1970s led to the first proof of the undecidability of its first-order theory by Harrington and Shelah in 1982. This theory was then shown to be as complicated as first-order arithmetic by Harrington and Slaman, and to be non-ℵ0-categorical by Lerman, Shore and Soare. More recently, a sharper undecidability result, for the Π3 fragment of the theory, was obtained by Lempp, Nies and Slaman, and a nontrivial automorphism of the degree structure was exhibited by Cooper. I will also talk about recent results on definability and on recent work in progress toward showing its Π2-theory to be decidable, in particular about the characterization of the finite lattices embeddable into this structure, which is still open.

ALI NESIN, Omega-stable groups of finite Morley rank.
Mathematics Department, University of California, Irvine, Irvine, CA 92717, USA. E-mail: anesin@math.uci.edu.
Groups of finite Morley rank are groups on which a concept of dimension can be defined. These groups look like algebraic groups but are not necessarily so. However, simple groups of finite Morley rank are conjectured to be algebraic groups. This conjecture is known under the name of the Cherlin-Zil'ber Conjecture. I will give a survey of the present state of the classification of groups of finite Morley rank and talk about the latest developments.

GRAHAM PRIEST, Inconsistent arithmetics and non-Euclidean geometries.
Department of Philosophy, University of Queensland, Brisbane 4072, Australia. E-mail: ggp@lingua.cl
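To illustrate the Cook-Reckhow definition recalled in Bonet's abstract (see the forward note there), here is a toy "truth-table proof system" in Python. This is the standard trivial example and a sketch of mine, not the talk's: a proof of a formula is the formula padded to length 2^(number of variables), so exhaustive evaluation runs in time polynomial in the length of the proof, which is exactly why this system has no short proofs.

    from itertools import product

    # formulas: a variable name, ('not', f), or ('or', f, g)
    def eval_f(f, env):
        if isinstance(f, str):
            return env[f]
        if f[0] == 'not':
            return not eval_f(f[1], env)
        return eval_f(f[1], env) or eval_f(f[2], env)

    def variables(f, out):
        if isinstance(f, str):
            out.add(f)
        else:
            for g in f[1:]:
                variables(g, out)
        return out

    def check(proof):
        # The proof system: maps a purported proof to the tautology it
        # proves, or to None. (String encodings are elided for brevity.)
        formula, padding = proof
        vs = sorted(variables(formula, set()))
        if len(padding) < 2 ** len(vs):    # padding makes checking polytime
            return None
        for bits in product([False, True], repeat=len(vs)):
            if not eval_f(formula, dict(zip(vs, bits))):
                return None                # falsifying assignment found
        return formula

    lem = ('or', 'p', ('not', 'p'))        # excluded middle: p or not p
    print(check((lem, 'xx')))              # accepted: it is a tautology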

Journal ArticleDOI
TL;DR: In this article, the authors use the logic DJdQ, a weak relevant logic capturing the notion of meaning containment, to rid the theory of paradoxes and enable simple consistency to be proved.
Abstract: Abstracts of contributed talks follow.

STEPHEN BARKER, Assertion, sentence meaning and truth.
Philosophy Department, University of Melbourne, Vic 3052, Australia.
A key tenet of orthodox linguistic theorising is that a fundamental distinction needs to be made between the theory of sense—semantics in the narrow sense—and the theory of force or speech-act structure and interpretation. I argue against this tenet, sketching an alternative view according to which pragmatics and speech-act theory make up the whole of meaning theory.

ROSS BRADY, Simple consistency of higher-order predicate theory.
Philosophy Department, La Trobe University, Vic 3083, Australia.
We extend the range of quantification to include predicates and sentences, adding axioms and rules for the expanded logic, to yield a higher-order predicate theory. Instead of using Russell's theory of orders, we use the logic DJdQ, a weak relevant logic capturing the notion of meaning containment, to rid the theory of paradoxes and enable simple consistency to be proved. We introduce three forms of Comprehension Axiom, one for sentences, ∃p (p = A), one for predicates and one for individuals, ∃y ∀x (Dyx = A), where 'D' is a 2-place relation for denotation. We also have an extensionality rule for predicates and for individuals. We prove simple consistency using a transfinite sequence of transfinite sequences of 3-valued matrix models. We finish with a brief discussion of the solutions to the Liar Paradox, the Paradox of Heterologicality, and Berry's, König's and Richard's Paradoxes.

Journal ArticleDOI
TL;DR: Abstracts of invited and contributed talks given (in person or by title) by members of the Association for Symbolic Logic are presented, including work on the normalization of proofs in classical logic via the Curry-Howard correspondence.
Abstract: Abstracts of invited and contributed talks given (in person or by title) by members of the Association for Symbolic Logic follow. For the Program Committee, Alexander S. Kechris.

Abstracts of contributed talks

KENSUKE BABA and SACHIO HIROKAWA, Normalization of proofs in classical logic.
Department of Informatics, Kyushu University, Kasuga-Kohen 6-1, Kasuga, Fukuoka 816, Japan. E-mail: baba@i.kyushu-u.ac.jp. E-mail: hirokawa@i.kyushu-u.ac.jp.
According to the Curry-Howard isomorphism, formulas correspond to types and proofs correspond to λ-terms. There has been much research on extending this correspondence to classical logic. One formulation is to use a new combinator C : ((A→⊥)→⊥) → A whose
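Read through the Curry-Howard correspondence, the type of the new combinator C is the law of double-negation elimination; schematically (the talk's reduction rules are not shown in this truncated abstract):

    \[
    \frac{\Gamma \vdash M : (A \to \bot) \to \bot}
         {\Gamma \vdash \mathsf{C}\,M : A}
    \qquad\text{i.e., } \mathsf{C} \text{ inhabits } \lnot\lnot A \to A,
    \]
    which is exactly what separates classical from intuitionistic provability.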