Book Chapter DOI

The theory of calculi with explicit substitutions revisited

Delia Kesner
11 Sep 2007 · Vol. 4646, pp. 238-252
TL;DR: Very simple technology is used to establish a general theory of explicit substitutions for the lambda-calculus which enjoys fundamental properties such as simulation of one-step beta-reduction, confluence on metaterms, preservation of beta-strong normalisation, strong normalisation of typed terms and full composition.
Abstract: Calculi with explicit substitutions (ES) are widely used in different areas of computer science. Complex systems with ES have been developed over the last 15 years to capture the good computational behaviour of the original systems (with meta-level substitutions) they were implementing. In this paper we first survey previous work in the domain, pointing out the motivations and challenges that guided the development of such calculi. Then we use very simple technology to establish a general theory of explicit substitutions for the lambda-calculus which enjoys fundamental properties such as simulation of one-step beta-reduction, confluence on metaterms, preservation of beta-strong normalisation, strong normalisation of typed terms and full composition. The calculus also admits a natural translation into Linear Logic's proof-nets.

Summary (2 min read)

1 Introduction

  • This paper is about explicit substitutions (ES), an intermediate formalism that, by decomposing the higher-order substitution operation into more atomic steps, allows a better understanding of the execution models of complex languages.
  • The λx-calculus corresponds to the minimal behaviour that can be found in most of the calculi with ES appearing in the literature (see the rule set sketched after this list).
  • More sophisticated treatments of substitutions also consider a composition operator allowing much more interaction between them.
  • Section 2 introduces the syntax for λes-terms and appropriate notions of equivalence and reduction.
  • Finally, strong normalisation of typed terms, based on PSN, is proved in the same section.
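
As a concrete illustration of the decomposition, the following rule set is the classical λx-calculus of Bloo and Rose, the "minimal behaviour" referred to above; it is quoted from the standard literature rather than from the paper itself:

```latex
(\lambda x.t)\,u \to t[x/u]                                      % Beta: create an explicit substitution
x[x/u] \to u                                                     % Var
y[x/u] \to y                 \quad (x \neq y)                    % GcVar: the substitution is garbage
(t\,s)[x/u] \to t[x/u]\,s[x/u]                                   % App: propagate to both subterms
(\lambda y.t)[x/u] \to \lambda y.\,t[x/u] \quad (y \neq x,\; y \notin fv(u))  % Lamb
```

β-reduction is then recovered by one Beta step followed by propagation steps pushing the closure down to the variables.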

2 Syntax

  • The syntactic object [x/u], which is not a term itself, is called an explicit substitution (a concrete datatype for this syntax is sketched after this list).
  • Indeed, when using different symbols x and y to talk about two nested bound variables, as for example in the terms (λy.t)[x/u] and t[x/u][y/v], the authors implicitly mean x ≠ y.
  • The authors leave to the reader the verification that composition of simultaneous substitutions can be expressed within the λes-reduction relation.
  • The authors now establish basic connections between λ- and λes-reduction.
  • One first proves that t =Es u implies L(t) = L(u), using the well-known substitution lemma [4] of the λ-calculus.
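
To make the syntax tangible, here is a minimal Haskell sketch of λes-style terms; the constructor names and the use of strings for variables are illustrative choices, not the paper's notation:

```haskell
type Var = String

-- Lambda-terms extended with explicit substitutions.
data Term
  = V Var              -- variable                 x
  | App Term Term      -- application              t u
  | Lam Var Term       -- abstraction              \x.t
  | Sub Term Var Term  -- closure                  t[x/u]
  deriving (Show, Eq)

-- Free variables; in t[x/u], x is bound in t but not in u.
fv :: Term -> [Var]
fv (V x)       = [x]
fv (App t u)   = fv t ++ fv u
fv (Lam x t)   = filter (/= x) (fv t)
fv (Sub t x u) = filter (/= x) (fv t) ++ fv u

-- The B rule: a beta-redex creates an explicit substitution
-- instead of performing meta-level substitution all at once.
beta :: Term -> Maybe Term
beta (App (Lam x t) u) = Just (Sub t x u)
beta _                 = Nothing
```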

3 Confluence on metaterms

  • Metaterms are terms containing metavariables denoting incomplete programs/proofs in a higher-order unification framework [25].
  • Thus for example, a CRS metaterm like M(x, y) specifies that x and y may occur in the instantiation of M, but M can also be further instantiated by any other term not containing x and y at all.
  • This decoration says nothing about the structure of the incomplete proof itself, but is sufficient to guarantee that different occurrences of the same metavariable inside a metaterm are never instantiated by different metaterms (a schematic reading is given after this list).
  • Reduction on metaterms must be understood in the same way as reduction on terms: the λes-relation is generated by the Bs-relation on Es-equivalence classes of metaterms.
  • This can fortunately be recovered in the case of the λes-calculus.
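
A plausible schematic reading of decorated metaterms, approximating the decoration as a set Δ of variables allowed to occur in instantiations (the precise notation in the paper may differ):

```latex
t ::= x \;\mid\; t\,t \;\mid\; \lambda x.t \;\mid\; t[x/t] \;\mid\; \mathsf{X}_{\Delta}
\qquad \text{with } fv(\mathsf{X}_{\Delta}) = \Delta
```

Since every occurrence of X_Δ carries the same decoration, all occurrences are instantiated consistently.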

3.1 The confluence proof

  • This section develops a confluence proof for reduction on λes-metaterms based on Tait and Martin-Löf's technique: define a simultaneous reduction relation denoted ⇛es; prove that ⇛*es and →*es are the same relation; show that ⇛*es is confluent; and finally conclude (a small sketch of the technique follows this list).
  • While many steps in this proof are similar to those appearing in other proofs of confluence for the λ-calculus, some special considerations are needed here in order to accommodate correctly the substitution calculus as well as the equational part of the notion of reduction (see in particular Lemma 6).
  • The es-normal forms of metaterms are unique modulo Es, so that t =Es u implies es(t) =Es es(u).
  • The simultaneous reduction relation is stable in the usual sense of the technique: it is preserved under substitution.
  • The relation ⇛*es enjoys the diamond property (Lemma 6), so that it turns out to be confluent [9].
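
The following Haskell sketch illustrates the technique on the pure fragment only; it computes a complete development, one canonical instance of simultaneous reduction that contracts all visible β-redexes at once, and deliberately ignores closures, Es-equivalence and α-conversion, which the actual proof must treat. It reuses Var and Term from the earlier sketch:

```haskell
-- Naive substitution: assumes bound names never clash with free ones.
subst :: Var -> Term -> Term -> Term
subst x u (V y)
  | x == y          = u
  | otherwise       = V y
subst x u (App t s) = App (subst x u t) (subst x u s)
subst x u (Lam y t)
  | x == y          = Lam y t
  | otherwise       = Lam y (subst x u t)
subst x u (Sub t y s)
  | x == y          = Sub t y (subst x u s)
  | otherwise       = Sub (subst x u t) y (subst x u s)

-- Complete development: contract every visible beta-redex in parallel.
para :: Term -> Term
para (App (Lam x t) u) = subst x (para u) (para t)  -- contract the redex
para (App t u)         = App (para t) (para u)
para (Lam x t)         = Lam x (para t)
para (V x)             = V x
para (Sub t x u)       = Sub (para t) x (para u)    -- closures left untouched here
```

The diamond property then follows by Takahashi's observation: any single parallel step from t can be completed to the complete development of t.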

4 Preservation ofβ-strong normalisation

  • Preservation of β-strong normalisation (PSN) in calculi with ES received a lot of attention (see for example [2, 6, 10, 32]), starting from an unexpected result by Melliès [40], who showed that there are β-strongly normalisable terms in the λ-calculus that are not strongly normalisable when evaluated by the reduction rules of an explicit version of the λ-calculus.
  • For that, the authors use a simulation proof technique based on two intermediate calculi, λesw and ΛI, presented below.

4.1 The λesw-calculus

  • The notion of strict term will be essential: every subterm λx.t and t[x/u] is such that x ∈ fv(t), and every subterm Wx(t) is such that x ∉ fv(t) (restated compactly after this list). Besides the equations and rules in λes, those in the following table are also considered.
  • This is not the case, for example, for λx.Wy(t) or Wy(t)[x/u], where the variables x and y may be equal or different; that is the reason to explicitly add the side-condition x ≠ y in some of the previous equations and rules.
  • The relation generated by the reduction rules sw (resp. Bsw) modulo the equivalence relation =Esw is denoted by →esw (resp. →λesw).
  • From now on, the authors only work with strict terms, a choice justified by the fact that the λesw-reduction relation preserves strict terms.
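
The strictness invariant, required of every subterm, can be restated compactly (writing fv(t) for the free variables of t):

```latex
\lambda x.t,\;\; t[x/u] \text{ strict} \;\Longrightarrow\; x \in fv(t)
\qquad\qquad
W_x(t) \text{ strict} \;\Longrightarrow\; x \notin fv(t)
```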

4.2 The ΛI-calculus

  • The ΛI-calculus is another intermediate language used as a technical tool to prove PSN (a schematic grammar follows this list).
  • The authors consider the extended notions of free variables and (meta-level) substitution on ΛI-terms.
  • A binary relation (and not a function) I is used to relate λesw- and ΛI-terms, because λesw-terms are translated into ΛI-syntax by adding some garbage information which is not uniquely determined.
  • Reduction in λesw can be related to reduction in ΛI by means of the following simulation property (proved by induction on the reduction/equivalence step).
  • An infinite λesw-reduction would thus be simulated by an infinite ΛI-reduction; this leads to a contradiction with the hypothesis.
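
For orientation, Klop's ΛI-calculus is usually presented along the following lines (the paper's exact formulation may differ): every abstraction must use its bound variable, and a memory constructor [·,·] keeps otherwise-erased arguments:

```latex
M ::= x \;\mid\; M\,M \;\mid\; \lambda x.M \ \ (x \in fv(M)) \;\mid\; [M, M]
```

Because nothing is ever erased, the translation can record erased material in memory pairs, which matches the non-unique garbage information mentioned above.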

5 The typed λes-calculus

  • Simple types are built over a countable set of atomic symbols (base types) and the type constructor → (functional types).
  • The axiom rule types a variable in a minimal environment, but variables not appearing free may be introduced by binder symbols by means of the rules abs and subs (a reconstruction is sketched after this list).
  • The typing rules for λes ensure that every environment Γ contains exactly the set of free variables of the term t.
  • The connection between typed derivations in the λ-calculus (written ⊢λ) and typed derivations in the λes-calculus is stated as Lemma 11, where Γ|S denotes the environment Γ restricted to the set of variables S; strong normalisation of typed λes-terms is then Theorem 3.
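
A plausible reconstruction of the key typing rules under the exact-environment discipline (the variants of abs and subs that introduce a variable not occurring free are omitted; this is a sketch, not the paper's exact presentation):

```latex
\frac{}{x:A \vdash x:A}\;(\mathsf{ax})
\qquad
\frac{\Gamma, x:A \vdash t:B}{\Gamma \vdash \lambda x.t : A \to B}\;(\mathsf{abs})
\qquad
\frac{\Gamma \vdash u:A \qquad \Delta, x:A \vdash t:B}{\Gamma \cup \Delta \vdash t[x/u] : B}\;(\mathsf{subs})
```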

6 Conclusion

  • The authors propose simple syntax and simple equations and rewriting rules to model a formalism enjoying good properties, especially confluence on metaterms, preservation of β-strong normalisation, strong normalisation of typed terms and implementation of full composition.
  • Note however that λes-reduction can be translated into the corresponding notion of reduction in this calculus: thus, for example, App1 can be obtained by App followed by Gc (illustrated after this list).
  • In other words, it would be more efficient to work with a pure rewriting system (without equations) enjoying the same properties as λes.
  • The authors believe that simultaneous substitutions will be needed to avoid axiom C, while some technology like de Bruijn notation will be needed to avoid axiom α (as in the λσ⇑-calculus).
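
To illustrate the App1 remark, assuming the usual formulations of these rules in such calculi:

```latex
(t\,u)[x/v] \;\to_{\mathsf{App}}\; t[x/v]\;u[x/v] \;\to_{\mathsf{Gc}}\; t[x/v]\;u \qquad (x \notin fv(u))
```

so a single App1 step, which propagates the closure only to the function position, is simulated by App followed by garbage collection.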


Citations
Book Chapter DOI
Olivier Danvy
19 May 2008
TL;DR: The overall method builds on previous work by the author and his students on a syntactic correspondence between reduction semantics and abstract machines and on a functional correspondence between evaluators and abstract machines.
Abstract: We document an operational method to construct reduction-free normalization functions. Starting from a reduction-based normalization function from a reduction semantics, i.e., the iteration of a one-step reduction function, we successively subject it to refocusing (i.e., deforestation of the intermediate successive terms in the reduction sequence), equational simplification, refunctionalization (i.e., the converse of defunctionalization), and direct-style transformation (i.e., the converse of the CPS transformation), ending with a reduction-free normalization function of the kind usually crafted by hand. We treat in detail four simple examples: calculating arithmetic expressions, recognizing Dyck words, normalizing lambda-terms with explicit substitutions and call/cc, and flattening binary trees. The overall method builds on previous work by the author and his students on a syntactic correspondence between reduction semantics and abstract machines and on a functional correspondence between evaluators and abstract machines. The measure of success of these two correspondences is that each of the inter-derived semantic artifacts (i.e., man-made constructs) could plausibly have been written by hand, as is the actual case for several ones derived here.

56 citations

Proceedings Article DOI
28 May 2012
TL;DR: A simple form of standardization, here called factorization, is studied for explicit substitution calculi, i.e. lambda-calculi where beta-reduction is decomposed into various rules, and an abstract theorem deducing factorization from some axioms on local diagrams is developed.
Abstract: We study a simple form of standardization, here called factorization, for explicit substitutions calculi, i.e. lambda-calculi where beta-reduction is decomposed in various rules. These calculi, despite being non-terminating and non-orthogonal, have a key feature: each rule terminates when considered separately. It is well-known that the study of rewriting properties simplifies in presence of termination (e.g. confluence reduces to local confluence). This remark is exploited to develop an abstract theorem deducing factorization from some axioms on local diagrams. The axioms are simple and easy to check, in particular they do not mention residuals. The abstract theorem is then applied to some explicit substitution calculi related to Proof-Nets. We show how to recover standardization by levels, we model both call-by-name and call-by-value calculi and we characterize linear head reduction via a factorization theorem for a linear calculus of substitutions.

54 citations

Journal Article DOI
TL;DR: A typing system with non-idempotent intersection types, typing a term syntax covering three different calculi, and the instance based on filters is shown to be better at proving strong normalisation results for λS and λlxr.
Abstract: We present a typing system with non-idempotent intersection types, typing a term syntax covering three different calculi: the pure λ-calculus, the calculus with explicit substitutions λS, and the calculus with explicit substitutions, contractions and weakenings λlxr. In each of the three calculi, a term is typable if and only if it is strongly normalising, as is the case in (many) systems with idempotent intersections. Non-idempotency brings extra information into typing trees, such as simple bounds on the longest reduction sequence reducing a term to its normal form. Strong normalisation follows, without requiring reducibility techniques. Using this, we revisit models of the λ-calculus based on filters of intersection types, and extend them to λS and λlxr. Non-idempotency simplifies a methodology, based on such filter models, that produces modular proofs of strong normalisation for well-known typing systems (e.g. System F). We also present a filter model by means of orthogonality techniques, i.e. as an instance of an abstract notion of orthogonality model formalised in this paper and inspired by classical realisability. Compared to other instances based on terms (one of which rephrases a now standard proof of strong normalisation for the λ-calculus), the instance based on filters is shown to be better at proving strong normalisation results for λS and λlxr. Finally, the bounds on the longest reduction sequence, read off our typing trees, are refined into an exact measure, read off a specific typing tree (called principal); in each of the three calculi, a specific reduction sequence of such length is identified. In the case of the λ-calculus, this complexity result is, for longest reduction sequences, the counterpart of de Carvalho's result for linear head-reduction sequences.

52 citations

Book Chapter DOI
23 Aug 2010
TL;DR: An untyped structural λ-calculus, called λj, is introduced, which combines action at a distance with exponential rules decomposing the substitution by means of weakening, contraction and dereliction, and fundamental properties such as confluence and preservation of β-strong normalisation are proved.
Abstract: Inspired by a recent graphical formalism for λ-calculus based on Linear Logic technology, we introduce an untyped structural λ-calculus, called λj, which combines action at a distance with exponential rules decomposing the substitution by means of weakening, contraction and dereliction. Firstly, we prove fundamental properties such as confluence and preservation of β-strong normalisation. Secondly, we use λj to describe known notions of developments and superdevelopments, and introduce a more general one called XL-development. Then we show how to reformulate Regnier's s-equivalence in λj so that it becomes a strong bisimulation. Finally, we prove that explicit composition or de-composition of substitutions can be added to λj while still preserving β-strong normalisation.

46 citations


Cites background, methods, or results from "The theory of calculi with explicit substitutions revisited"

  • ...In contrast to known PSN proofs for calculi with ES and composition of substitutions [3, 13, 15], we get a very concise and simple proof of the IE property, and thus of PSN, due to the fact that λj has no propagation rule....

    [...]

  • ...Therefore we shall show the IE property by adapting the technique in [13]....

    [...]

  • ...Different cut elimination systems [6, 15, 13], called explicit substitution (ES) calculi, were explained in terms of, or inspired by, the notion of reduction of MELL Proof-Nets....

    [...]

  • ...However, naïve rules may break the PSN property, so that safe composition rules are needed to recover both PSN and confluence on terms with metavariables [13]....

    [...]

  • ...Confluence of calculi with ES can be easily proved by using Tait and Martin-Löf's technique (see for example the case of λes [13])....

    [...]


References
Journal Article DOI
30 Jan 1987

3,947 citations

Book
30 Apr 2012
TL;DR: In this book, the lambda-calculus is developed as a theory of conversion and reduction, together with lambda theories and the construction of models.
Abstract: Towards the Theory. Introduction. Conversion. Reduction. Theories. Models. Conversion. Classical Lambda Calculus. The Theory of Combinators. Classical Lambda Calculus (Continued). The Lambda-Calculus. Böhm Trees. Reduction. Fundamental Theorems. Strongly Equivalent Reductions. Reduction Strategies. Labelled Reduction. Other Notions of Reduction. Theories. Sensible Theories. Other Lambda Theories. Models. Construction of Models. Local Structure of Models. Global Structure of Models. Combinatory Groups. Appendices: Typed Lambda Calculus. Illative Combinatory Logic. Variables. References.

2,632 citations

Book
01 Jan 1998
TL;DR: This book discusses abstract reduction systems, universal algebra, and Gröbner bases and Buchberger's algorithm, and includes a bluffer's guide to ML.
Abstract: Preface 1. Motivating examples 2. Abstract reduction systems 3. Universal algebra 4. Equational problems 5. Termination 6. Confluence 7. Completion 8. Gröbner bases and Buchberger's algorithm 9. Combination problems 10. Equational unification 11. Extensions Appendix 1. Ordered sets Appendix 2. A bluffer's guide to ML Bibliography Index.

2,515 citations


"The theory of calculi with explicit..." refers background or methods in this paper

  • ...The relation ⇛*es enjoys the diamond property (Lemma 6) so that it turns out to be confluent [9]....

    [...]

  • ...We refer the reader to [28] for detailed proofs and to [9, 47] for standard notions from rewriting that we will use throughout the paper....

    [...]

Book
01 Jan 1990
TL;DR: This book provides a formal definition of Standard ML for the benefit of all concerned with the language, including users and implementers, and the authors have defined their semantic objects in mathematical notation that is completely independent of Standard ML.
Abstract: From the Publisher: Standard ML is general-purpose programming language designed for large projects. This book provides a formal definition of Standard ML for the benefit of all concerned with the language, including users and implementers. Because computer programs are increasingly required to withstand rigorous analysis, it is all the more important that the language in which they are written be defined with full rigor. The authors have defined their semantic objects in mathematical notation that is completely independent of Standard ML.

2,389 citations

Journal Article DOI
TL;DR: This column presents an intuitive overview of linear logic, some recent theoretical results, and summarizes several applications oflinear logic to computer science.
Abstract: Linear logic was introduced by Girard in 1987 [11]. Since then many results have supported Girard's statement, "Linear logic is a resource conscious logic," and related slogans. Increasingly, computer scientists have come to recognize linear logic as an expressive and powerful logic with connections to a variety of topics in computer science. This column presents an intuitive overview of linear logic, some recent theoretical results, and summarizes several applications of linear logic to computer science. Other introductions to linear logic may be found in [12, 36].

2,304 citations

Frequently Asked Questions (2)
Q1. What are the contributions mentioned in the paper "The theory of calculi with explicit substitutions revisited" ?

In this paper the authors first survey previous work in the domain by pointing out the motivations and challenges that guided the development of such calculi. 

The authors leave this for future work. Note however that λes-reduction can be translated into the corresponding notion of reduction in this calculus: thus, for example, App1 can be obtained by App followed by Gc. The authors believe that simultaneous substitutions will be needed to avoid axiom C, while some technology like de Bruijn notation will be needed to avoid axiom α (as in the λσ⇑-calculus).