
# Felix Joachimski

Bio: Felix Joachimski is an academic researcher from Ludwig Maximilian University of Munich. He has contributed to research on lambda calculus and natural deduction, has an h-index of 7, and has co-authored 10 publications receiving 227 citations.

##### Papers



TL;DR: The analogous treatment of a new system with generalized applications inspired by generalized elimination rules in natural deduction, advocated by von Plato, shows the flexibility of the approach which does not use the strong computability/candidate style à la Tait and Girard.

Abstract: Inductive characterizations of the sets of terms, the subset of strongly normalizing terms and normal forms are studied in order to reprove weak and strong normalization for the simply-typed λ-calculus and for an extension by sum types with permutative conversions. The analogous treatment of a new system with generalized applications inspired by generalized elimination rules in natural deduction, advocated by von Plato, shows the flexibility of the approach, which does not use the strong computability/candidate style à la Tait and Girard. It is also shown that the extension of the system with permutative conversions by η-rules is still strongly normalizing, and likewise for an extension of the system of generalized applications by a rule of "immediate simplification". By introducing an infinitely branching inductive rule the method even extends to Gödel's T.
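The inductive characterization the abstract refers to can be illustrated with the standard textbook definition (a minimal Python sketch of my own, not the paper's exact system): β-normal forms are defined mutually with neutral terms, with no mention of reduction at all.

```python
from dataclasses import dataclass

# De Bruijn-style lambda-terms.
@dataclass(frozen=True)
class Var:
    index: int

@dataclass(frozen=True)
class Lam:
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def is_neutral(t):
    """Neutral terms: a variable applied to a spine of normal arguments."""
    if isinstance(t, Var):
        return True
    if isinstance(t, App):
        return is_neutral(t.fun) and is_normal(t.arg)
    return False

def is_normal(t):
    """Normal terms: abstractions over normal bodies, or neutral terms."""
    if isinstance(t, Lam):
        return is_normal(t.body)
    return is_neutral(t)

# \x. x (\y. y) is beta-normal; (\x. x) (\y. y) contains a redex.
assert is_normal(Lam(App(Var(0), Lam(Var(0)))))
assert not is_normal(App(Lam(Var(0)), Lam(Var(0))))
```

The point of such definitions is that proofs about normal forms can proceed by induction on these predicates rather than on reduction sequences.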

112 citations


TL;DR: A purely syntactic and untyped variant of Normalisation by Evaluation for the $\lambda$-calculus is presented in the framework of a two-level $\lambda$-calculus with rewrite rules to model the inverse of the evaluation functional.

Abstract: A purely syntactic and untyped variant of Normalisation by Evaluation for the $\lambda$-calculus is presented in the framework of a two-level $\lambda$-calculus with rewrite rules to model the inverse of the evaluation functional. Among its operational properties there is a standardisation theorem that formally establishes the adequacy of implementation in functional programming languages. An example implementation in Haskell is provided. The relation to the usual type-directed Normalisation by Evaluation is highlighted, using a short analysis of $\eta$-expansion that leads to a perspicuous strong normalisation and confluence proof for $\beta\eta{\uparrow}$-reduction as a byproduct.
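The general shape of untyped Normalisation by Evaluation can be sketched as follows (an illustrative Python toy of my own, not the paper's two-level calculus or its Haskell implementation): terms are evaluated into meta-level closures, and a reification function maps values back into β-normal syntax, using de Bruijn levels to generate fresh variables.

```python
# Term constructors (de Bruijn indices).
def TVar(i): return ("var", i)
def TLam(b): return ("lam", b)
def TApp(f, a): return ("app", f, a)

def eval_(t, env):
    """Evaluate syntax into semantic values: closures or stuck neutrals."""
    tag = t[0]
    if tag == "var":
        return env[t[1]]
    if tag == "lam":
        return ("fun", lambda v: eval_(t[1], [v] + env))
    f, a = eval_(t[1], env), eval_(t[2], env)
    if f[0] == "fun":
        return f[1](a)           # beta step performed in the meta-language
    return ("napp", f, a)        # stuck: neutral application

def reify(v, depth):
    """Map a value back to beta-normal syntax; depth counts binders."""
    if v[0] == "fun":
        fresh = ("nvar", depth)            # fresh variable as a de Bruijn level
        return TLam(reify(v[1](fresh), depth + 1))
    if v[0] == "nvar":
        return TVar(depth - v[1] - 1)      # convert level back to index
    return TApp(reify(v[1], depth), reify(v[2], depth))

def nbe(t):
    return reify(eval_(t, []), 0)

# (\x. x) (\y. \z. y) normalizes to \y. \z. y.
ident = TLam(TVar(0))
k = TLam(TLam(TVar(1)))
assert nbe(TApp(ident, k)) == k
```

Note that this toy, unlike the paper's system, performs no η-expansion and assumes closed input terms.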

36 citations


10 Jul 2000

TL;DR: Standardization is shown by means of an inductive definition of standard reduction that closely follows the inductive term structure and captures the intuitive notion of standardness even for permutative reductions.

Abstract: As a minimal environment for the study of permutative reductions, an extension ΛJ of the untyped λ-calculus is considered. In this non-terminating system with non-trivial critical pairs, confluence is established by studying triangle properties that allow one to treat permutative reductions modularly and could be extended to more complex term systems with permutations. Standardization is shown by means of an inductive definition of standard reduction that closely follows the inductive term structure and captures the intuitive notion of standardness even for permutative reductions.
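The permutative reduction at issue can be sketched on a toy encoding of ΛJ-style generalized applications r(s, x.t) (my own minimal Python illustration, assuming all bound variable names are distinct so no capture can arise): the π-conversion permutes a generalized application whose head is itself a generalized application, r(s, x.t)(u, y.v) ⟶ r(s, x. t(u, y.v)).

```python
def GApp(r, s, x, t):
    """Generalized application r(s, x.t): apply r to s, binding x in t."""
    return ("gapp", r, s, x, t)

def pi_step(term):
    """One pi-permutation at the root, if the head is a generalized app."""
    if term[0] == "gapp" and term[1][0] == "gapp":
        _, r, s, x, t = term[1]      # inner application r(s, x.t)
        _, _, u, y, v = term         # outer argument and binding (u, y.v)
        return GApp(r, s, x, GApp(t, u, y, v))
    return term                      # no pi-redex at the root

r, s, u = ("var", "r"), ("var", "s"), ("var", "u")
inner = GApp(r, s, "x", ("var", "x"))
outer = GApp(inner, u, "y", ("var", "y"))

# r(s, x.x)(u, y.y)  permutes to  r(s, x. x(u, y.y)).
assert pi_step(outer) == GApp(r, s, "x", GApp(("var", "x"), u, "y", ("var", "y")))
```

A full treatment would need capture-avoiding renaming when moving (u, y.v) under the binder x; the paper's modular triangle properties address the interaction of such steps with β-reduction.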

24 citations


TL;DR: An appropriate notion of reduction is analyzed and proven to be confluent by means of a detailed analysis of the usual Tait/Martin-Löf style development argument, which yields bounds for the lengths of those joining reduction sequences that are guaranteed to exist by confluence.

Abstract: The coinductive λ-calculus Λco arises by a coinductive interpretation of the grammar of the standard λ-calculus Λ and contains non-well-founded λ-terms. An appropriate notion of reduction is analyzed and proven to be confluent by means of a detailed analysis of the usual Tait/Martin-Löf style development argument. This yields bounds for the lengths of those joining reduction sequences that are guaranteed to exist by confluence. These bounds also apply to the well-founded λ-calculus, thus adding quantitative information to the classic result.
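The Tait/Martin-Löf development argument rests on a "complete development" function that contracts all redexes present in a term in one pass; both parallel reducts of a term then join at its complete development. Here is a sketch for ordinary well-founded de Bruijn terms (an illustration of the classical technique, not the paper's coinductive analysis):

```python
def shift(t, d, cutoff=0):
    """Add d to every free variable index >= cutoff."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Capture-avoiding t[j := s], decrementing indices above j."""
    if t[0] == "var":
        if t[1] == j:
            return shift(s, j)               # adjust s for j binders passed
        return ("var", t[1] - 1) if t[1] > j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], j + 1, s))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def cd(t):
    """Complete development: develop all subterms, contracting every redex."""
    if t[0] == "var":
        return t
    if t[0] == "lam":
        return ("lam", cd(t[1]))
    f, a = t[1], t[2]
    if f[0] == "lam":
        return subst(cd(f[1]), 0, cd(a))     # contract the developed redex
    return ("app", cd(f), cd(a))

# (\ . 0) ((\ . 0) x) develops to x in a single pass.
x = ("var", 3)
ident = ("lam", ("var", 0))
assert cd(("app", ident, ("app", ident, x))) == x
```

Counting the redexes contracted by `cd` is what yields the kind of quantitative bounds on joining sequences the abstract mentions.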

19 citations

01 Jan 2002

TL;DR: In this article, the authors study strongly normalizing terms and normal forms in the λ-calculus and show that the extension of the system with permutative conversions by η-rules is still strongly normalizing.

Abstract: Inductive characterizations of the sets of terms, the subset of strongly normalizing terms and normal forms are studied in order to reprove weak and strong normalization for the simply-typed λ-calculus and for an extension by sum types with permutative conversions. The analogous treatment of a new system with generalized applications inspired by generalized elimination rules in natural deduction, advocated by von Plato, shows the flexibility of the approach, which does not use the strong computability/candidate style à la Tait and Girard. It is also shown that the extension of the system with permutative conversions by η-rules is still strongly normalizing, and likewise for an extension of the system of generalized applications by a rule of "immediate simplification". By introducing an infinitely branching inductive rule the method even extends to Gödel's T.

17 citations

##### Cited by


01 Jan 2006

TL;DR: This section gives a semantics to kinds and constructors and shows soundness of kinding, constructor equality and subtyping.

Abstract: Abstraction. LEQ-λ: from ∆, X : pκ ⊢ F ≤ F′ : κ′ conclude ∆ ⊢ λX F ≤ λX F′ : pκ → κ′.

Application. There are two kinds of congruence rules for application. One kind states that if functions F and F′ are in the subtyping relation, so are their values F G and F′ G at a given argument G. LEQ-APP: from ∆ ⊢ F ≤ F′ : pκ → κ′ and p⁻¹∆ ⊢ G : κ conclude ∆ ⊢ F G ≤ F′ G : κ′. The other kind of rules concerns the opposite case: if F is a function and two arguments G and G′ are in a subtyping relation, so are the values F G and F G′ of the function at these arguments. However, such a relation can only exist if F is either covariant or contravariant. LEQ-APP+: from ∆ ⊢ F : +κ → κ′ and ∆ ⊢ G ≤ G′ : κ conclude ∆ ⊢ F G ≤ F G′ : κ′. LEQ-APP−: from ∆ ⊢ F : −κ → κ′ and −∆ ⊢ G′ ≤ G : κ conclude ∆ ⊢ F G ≤ F G′ : κ′. A comparable rule for non-variant constructors is derivable: from ∆ ⊢ F : ◦κ → κ′, ◦⁻¹∆ ⊢ G ≤ G′ : κ and ◦⁻¹∆ ⊢ G′ ≤ G : κ one obtains ◦⁻¹∆ ⊢ G = G′ : κ, hence ∆ ⊢ F G = F G′ : κ′ and therefore ∆ ⊢ F G ≤ F G′ : κ′.

Successor and infinity. LEQ-S-R: from ∆ ⊢ a : ord conclude ∆ ⊢ a ≤ s a : ord. LEQ-∞: from ∆ ⊢ a : ord conclude ∆ ⊢ a ≤ ∞ : ord.

Lemma 2.14 (Validity II). If D :: ∆ ⊢ F ≤ F′ : κ, then ∆ ⊢ F : κ and ∆ ⊢ F′ : κ. Proof: by induction on D, using validity of equality (Lemma 2.12) in the case of LEQ-REFL.

2.3 Semantics and Soundness. In this section we give a semantics to kinds and constructors and show soundness of kinding, constructor equality and subtyping.

2.3.1 Interpretation of kinds. Constructors F of kind κ will be interpreted as operators 𝓕 which live in the denotation [[κ]] of their kind. Each kind will be interpreted as a poset ([[κ]], ⊑), which is in each case even a complete lattice.

Interpretation of the base kind ∗. For the moment, we assume a complete lattice [[∗]] of countable sets A ∈ [[∗]] ordered by inclusion, with a maximal set ⊤∗ ∈ [[∗]] such that A ⊆ ⊤∗ for all A ∈ [[∗]]. Later, we will let [[∗]] be the collection of all saturated subsets of SN, where ⊤∗ = SN is the set of strongly normalizing terms. So A ⊑∗ A′ :⟺ A ⊆ A′, and ⨅∗ 𝒜 := ⋂𝒜 for 𝒜 ⊆ [[∗]]. By assumption, the poset ([[∗]], ⊑∗) is closed under infima, i.e., for a non-empty subset 𝒜 ⊆ [[∗]] the infimum ⨅∗ 𝒜 ∈ [[∗]] exists and is equal to the intersection ⋂𝒜. Intersection is extended to empty collections by letting ⨅∗ ∅ = ⊤∗. Once empty intersections are defined, arbitrary suprema can be defined by ⨆∗ 𝒜 := ⨅∗ {B ∈ [[∗]] | B ⊒∗ A for all A ∈ 𝒜}. Note that we do not require that the supremum is the union of sets; it might actually be something bigger. On the set [[∗]] we assume a binary operation "→" (function-space construction) such that A → B ⊑∗ A′ → B′ if A′ ⊑∗ A and B ⊑∗ B′.

Interpretation of the base kind ord. Constructors a of kind ord denote set-theoretic ordinals in our semantics. We choose an initial segment [0; ⊤ord] =: [[ord]] of the ordinals for the interpretation of ord; which ordinal ⊤ord denotes is left open for the moment and filled in later. Thus [[ord]] := ⊤ord + 1 and α ⊑ord α′ :⟺ α ≤ α′.

Notation. We write 𝓕 ⊑pκ 𝓕′ for polarized inclusion, and 𝓕 ⊑p 𝓕′ ∈ [[κ]] for polarized inclusion of two operators 𝓕, 𝓕′ together with the fact that both lie in [[κ]]: 𝓕 ⊑+κ 𝓕′ :⟺ 𝓕 ⊑ 𝓕′; 𝓕 ⊑−κ 𝓕′ :⟺ 𝓕′ ⊑ 𝓕; 𝓕 ⊑◦κ 𝓕′ :⟺ 𝓕 ⊑ 𝓕′ and 𝓕′ ⊑ 𝓕; and 𝓕 ⊑ 𝓕′ ∈ [[κ]] abbreviates 𝓕 ⊑+ 𝓕′ ∈ [[κ]].

Interpretation of function kinds. Semantically, a constructor F of kind pκ → κ′ is a covariant (p = +), contravariant (p = −) or non-variant (p = ◦) operator. We define the posets ([[κ]], ⊑) for higher kinds by induction on κ: [[pκ → κ′]] := {𝓕 ∈ [[κ]] → [[κ′]] | 𝓕(𝓖) ⊑ 𝓕(𝓖′) ∈ [[κ′]] for all 𝓖 ⊑p 𝓖′ ∈ [[κ]]}, with 𝓕 ⊑pκ→κ′ 𝓕′ :⟺ 𝓕(𝓖) ⊑κ′ 𝓕′(𝓖) for all 𝓖 ∈ [[κ]].

Lemma 2.15 (Partial order). For each kind κ, the relation ⊑ is a partial order on [[κ]]. Proof: by induction on κ. For base kinds κ₀ ∈ {∗, ord}, reflexivity, transitivity and antisymmetry hold by definition. To prove transitivity for a higher kind κ = pκ₁ → κ₂, assume 𝓕₁ ⊑ 𝓕₂ ∈ [[κ]], 𝓕₂ ⊑ 𝓕₃ ∈ [[κ]] and an arbitrary 𝓖 ∈ [[κ₁]]. Since 𝓖 ⊑ 𝓖 by induction hypothesis, we have 𝓕₁(𝓖) ⊑ 𝓕₂(𝓖) ∈ [[κ₂]] and 𝓕₂(𝓖) ⊑ 𝓕₃(𝓖) ∈ [[κ₂]] by definition. By induction hypothesis 𝓕₁(𝓖) ⊑ 𝓕₃(𝓖), and since 𝓖 was arbitrary, 𝓕₁ ⊑pκ₁→κ₂ 𝓕₃. Reflexivity and antisymmetry are proven analogously.

Pointwise infima, upper bounds and suprema. For higher kinds, we define the pointwise infimum and the maximal element by (⨅pκ→κ′ 𝔉)(𝓖) := ⨅κ′ {𝓕(𝓖) | 𝓕 ∈ 𝔉} for 𝔉 ⊆ [[pκ → κ′]], and ⊤pκ→κ′(𝓖) := ⊤κ′. A simple proof by induction on κ shows that ⊤ really is the maximal element of [[κ]] for any kind κ. Extending the observations for kind ∗, we can now define empty infima and arbitrary suprema for all kinds: ⨅κ ∅ := ⊤ and ⨆κ 𝔉 := ⨅κ {𝓗 ∈ [[κ]] | 𝓗 ⊒ 𝓕 for all 𝓕 ∈ 𝔉}.

Lemma 2.16 (Supremum is pointwise). (⨆pκ→κ′ 𝔉)(𝓖) = ⨆κ′ {𝓕(𝓖) | 𝓕 ∈ 𝔉}.

The posets [[κ]] are now equipped with everything required for complete lattices.

Lemma 2.17 (Complete lattice). For all kinds κ, the triple ([[κ]], ⨅κ, ⨆κ) forms a complete lattice. Proof: we only need to show, by induction on κ, that ⨅κ 𝔉 ∈ [[κ]] is the well-defined greatest lower bound for 𝔉 ⊆ [[κ]]. For base kinds, there is nothing to prove. For a function kind pκ → κ′: (1) Well-definedness: assume 𝓖 ⊑p 𝓖′ ∈ [[κ]]; then 𝓕(𝓖) ⊑ 𝓕(𝓖′) ∈ [[κ′]] for all 𝓕 ∈ 𝔉, and since the infimum is well-defined at kind κ′ by induction hypothesis, (⨅pκ→κ′ 𝔉)(𝓖) = ⨅κ′ {𝓕(𝓖) | 𝓕 ∈ 𝔉} ⊑κ′ ⨅κ′ {𝓕(𝓖′) | 𝓕 ∈ 𝔉} = (⨅pκ→κ′ 𝔉)(𝓖′). (2) Lower bound: for arbitrary 𝓖 ∈ [[κ]], since ⨅κ′ is a lower bound by induction hypothesis, (⨅pκ→κ′ 𝔉)(𝓖) = ⨅κ′ {𝓕(𝓖) | 𝓕 ∈ 𝔉} ⊑κ′ 𝓕(𝓖) for any 𝓕 ∈ 𝔉. (3) Greatest lower bound: let 𝓗 ⊑ 𝓕 ∈ [[pκ → κ′]] for all 𝓕 ∈ 𝔉, and show 𝓗 ⊑pκ→κ′ ⨅pκ→κ′ 𝔉. For arbitrary 𝓖 ∈ [[κ]], 𝓗(𝓖) ⊑κ′ 𝓕(𝓖) for any 𝓕 ∈ 𝔉 by assumption, and since ⨅κ′ is a greatest lower bound by induction hypothesis, 𝓗(𝓖) ⊑κ′ ⨅κ′ {𝓕(𝓖) | 𝓕 ∈ 𝔉} = (⨅pκ→κ′ 𝔉)(𝓖).

2.3.2 Semantics of constructors. In the following we develop a semantics of constructors through their derivations of well-kindedness. This indirect path is necessary since the constructors are domain-free: e.g., it is not determined which function is denoted by the constructor λX X; it could be the identity function on [[κ]] for any kind κ. In joint work with Ralph Matthes I have investigated polarized kinding and semantics of Church-style constructors [AM04]; there, λX:+κ. X denotes exactly one set-theoretic function, the identity on [[κ]]. The following development closely resembles the cited work; however, we take the detour via derivations here.

Sound valuations. Let θ be a mapping from constructor variables to sets. We say θ ∈ [[∆]] if θ(X) ∈ [[κ]] for all (X : pκ) ∈ ∆. A partial order on valuations is established as follows: θ ⊑ θ′ ∈ [[∆]] :⟺ θ(X) ⊑p θ′(X) ∈ [[κ]] for all (X : pκ) ∈ ∆, where ⊑− stands for ⊒, ⊑◦ for =, and ⊑+ is a synonym for ⊑. It is clear that θ ⊑q θ′ ∈ [[∆]] iff θ(X) ⊑pq θ′(X) ∈ [[κ]] for all (X : pκ) ∈ ∆.

Lemma 2.18. If θ ⊑ θ′ ∈ [[∆]], then θ ⊑p θ′ ∈ [[p⁻¹∆]]. Proof: by cases on p; only the case p = ◦ is interesting. Assume (X : qκ) ∈ ◦⁻¹∆, which is only possible if q = ◦ and (X : ◦κ) ∈ ∆. We have to show θ(X) ⊑◦ θ′(X) ∈ [[κ]], which follows from the premise of the lemma.

Remark 2.19. The opposite implication does not hold in the case p = ◦.

Denotation of constructors. If D :: ∆ ⊢ F : κ and θ is a function from type variables to sets, we define the set [[D]]θ by recursion on D. If D concludes ∆ ⊢ X : κ from (X : pκ) ∈ ∆ with p ≤ +, we define [[D]]θ = θ(X). If D concludes ∆ ⊢ C : κ from (C : κ) ∈ Σ, we simply return the semantics of C, which is defined elsewhere: [[D]]θ = Sem(C). If D concludes ∆ ⊢ λX F : pκ → κ′ from a subderivation D′ :: ∆, X : pκ ⊢ F : κ′, the semantics of D is a function over [[κ]], defined by [[D]]θ(𝓖 ∈ [[κ]]) := [[D′]]θ[X ↦ 𝓖]. Note that this is only possible because we know the domain of the function (κ, in this case). This is the reason why we define the semantics of derivations instead of constructors, where we would not have the domain available.
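The lattice semantics of kinds can be toy-modelled in Python (a finite illustration of my own, not the thesis's construction): interpret the base kind ∗ as the powerset lattice of a two-element universe ordered by inclusion, and a covariant function kind +∗ → ∗ as the monotone operators on it; the pointwise infimum of a family of operators then behaves exactly as described, remaining monotone and being a lower bound.

```python
from itertools import chain, combinations

# [[*]]: the powerset lattice of a tiny universe, ordered by inclusion.
UNIV = frozenset({0, 1})
STAR = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(UNIV), r) for r in range(len(UNIV) + 1))]

def leq(a, b):
    """The order on [[*]] is set inclusion."""
    return a <= b

def monotone(f):
    """Membership in [[+* -> *]]: f preserves the order."""
    return all(leq(f(a), f(b)) for a in STAR for b in STAR if leq(a, b))

def inf_op(fs):
    """Pointwise infimum of a family of operators, as in the text."""
    return lambda g: frozenset.intersection(*[f(g) for f in fs])

def f1(a): return a | {0}    # a covariant operator
def f2(a): return a | {1}    # another covariant operator

h = inf_op([f1, f2])
assert monotone(f1) and monotone(f2)
assert monotone(h)                                # infimum stays in the kind
assert all(leq(h(a), f1(a)) and leq(h(a), f2(a)) for a in STAR)  # lower bound
```

In this tiny model the infimum of f1 and f2 happens to be the identity operator, since (a ∪ {0}) ∩ (a ∪ {1}) = a.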

93 citations


TL;DR: Various notions of proof-theoretic validity are investigated in detail and particular emphasis is placed on the relationship between semantic validity concepts and validity concepts used in normalization theory.

Abstract: The standard approach to what I call “proof-theoretic semantics”, which is mainly due to Dummett and Prawitz, attempts to give a semantics of proofs by defining what counts as a valid proof. After a discussion of the general aims of proof-theoretic semantics, this paper investigates in detail various notions of proof-theoretic validity and offers certain improvements of the definitions given by Prawitz. Particular emphasis is placed on the relationship between semantic validity concepts and validity concepts used in normalization theory. It is argued that these two sorts of concepts must be kept strictly apart.

85 citations


01 Dec 2011

TL;DR: This book provides a unique self-contained text for advanced students and researchers in mathematical logic and computer science and develops the theoretical underpinnings of the first author's proof assistant MINLOG.

Abstract: Driven by the question, 'What is the computational content of a (formal) proof?', this book studies fundamental interactions between proof theory and computability. It provides a unique self-contained text for advanced students and researchers in mathematical logic and computer science. Part I covers basic proof theory, computability and Gödel's theorems. Part II studies and classifies provable recursion in classical systems, from fragments of Peano arithmetic up to Π¹₁-CA₀. Ordinal analysis and the (Schwichtenberg-Wainer) subrecursive hierarchies play a central role and are used in proving the 'modified finite Ramsey' and 'extended Kruskal' independence results for PA and Π¹₁-CA₀. Part III develops the theoretical underpinnings of the first author's proof assistant MINLOG. Three chapters cover higher-type computability via information systems, a constructive theory TCF of computable functionals, realizability, Dialectica interpretation, computationally significant quantifiers and connectives, and polytime complexity in a two-sorted, higher-type arithmetic with linear logic.

76 citations


TL;DR: It is shown how to derive a compiler and a virtual machine from a compositional interpreter and the derivation provides a non-trivial illustration of Reynolds's warning about the evaluation order of a meta-language.

Abstract: We show how to derive a compiler and a virtual machine from a compositional interpreter. We first illustrate the derivation with two evaluation functions and two normalization functions. We obtain Krivine's machine, Felleisen et al.'s CEK machine, and a generalization of these machines performing strong normalization, which is new. We observe that several existing compilers and virtual machines, e.g., the Categorical Abstract Machine (CAM), Schmidt's VEC machine, and Leroy's Zinc abstract machine, are already in derived form, and we present the corresponding interpreter for the CAM and the VEC machine. We also consider Hannan and Miller's CLS machine and Landin's SECD machine. We derived Krivine's machine via a call-by-name CPS transformation and the CEK machine via a call-by-value CPS transformation. These two derivations hold both for an evaluation function and for a normalization function. They provide a non-trivial illustration of Reynolds's warning about the evaluation order of a meta-language.
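Krivine's machine, the first machine mentioned, can be sketched directly (a minimal Python rendition of the standard machine, not the authors' CPS-based derivation): a state is a closed de Bruijn term, an environment of closures, and a stack of argument closures, and evaluation is call-by-name weak head reduction.

```python
def run(term, env=(), stack=()):
    """Krivine machine: call-by-name weak head evaluation of closed terms.

    Terms: ("var", i) | ("lam", body) | ("app", fun, arg), de Bruijn indices.
    Environments and stack entries are closures (term, environment).
    """
    while True:
        tag = term[0]
        if tag == "app":                       # push the argument closure
            stack = ((term[2], env),) + stack
            term = term[1]
        elif tag == "lam" and stack:           # pop it into the environment
            env = (stack[0],) + env
            term, stack = term[1], stack[1:]
        elif tag == "var":                     # enter the bound closure
            term, env = env[term[1]]
        else:                                  # abstraction, empty stack: done
            return term, env

# (\x. \y. x) I Omega evaluates to I without ever touching Omega,
# because call-by-name never forces the unused argument.
I = ("lam", ("var", 0))
omega = ("app", ("lam", ("app", ("var", 0), ("var", 0))),
                ("lam", ("app", ("var", 0), ("var", 0))))
K = ("lam", ("lam", ("var", 1)))
result_term, _ = run(("app", ("app", K, I), omega))
assert result_term == I
```

The example with Omega shows the machine's call-by-name character, which is exactly what the call-by-name CPS transformation in the paper's derivation makes explicit.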

60 citations


12 Jun 2014

TL;DR: In this book, the structural proof theory of pure logic is extended to axiomatic systems and to what is known as philosophical logic, with examples drawn from theories of order, lattice theory and elementary geometry.

Abstract: This book continues from where the authors' previous book, Structural Proof Theory, ended. It presents an extension of the methods of analysis of proofs in pure logic to elementary axiomatic systems and to what is known as philosophical logic. A self-contained brief introduction to the proof theory of pure logic is included that serves both the mathematically and philosophically oriented reader. The method is built up gradually, with examples drawn from theories of order, lattice theory and elementary geometry. The aim is, in each of the examples, to help the reader grasp the combinatorial behaviour of an axiom system, which typically leads to decidability results. The last part presents, as an application and extension of all that precedes it, a proof-theoretical approach to the Kripke semantics of modal and related logics, with a great number of new results, providing essential reading for mathematical and philosophical logicians.

57 citations