Proceedings ArticleDOI

Distilling abstract machines

TL;DR: The distillation process unveils that abstract machines in fact implement weak linear head reduction, a notion of evaluation having a central role in the theory of linear logic, and shows that the LSC is a complexity-preserving abstraction of abstract machines.
Abstract: It is well-known that many environment-based abstract machines can be seen as strategies in lambda calculi with explicit substitutions (ES). Recently, graphical syntaxes and linear logic led to the linear substitution calculus (LSC), a new approach to ES that is halfway between small-step calculi and traditional calculi with ES. This paper studies the relationship between the LSC and environment-based abstract machines. While traditional calculi with ES simulate abstract machines, the LSC rather distills them: some transitions are simulated while others vanish, as they map to a notion of structural congruence. The distillation process unveils that abstract machines in fact implement weak linear head reduction, a notion of evaluation having a central role in the theory of linear logic. We show that such a pattern applies uniformly in call-by-name, call-by-value, and call-by-need, catching many machines in the literature. We start by distilling the KAM, the CEK, and a sketch of the ZINC, and then provide simplified versions of the SECD, the lazy KAM, and Sestoft's machine. Along the way we also introduce some new machines with global environments. Moreover, we show that distillation preserves the time complexity of the executions, i.e. the LSC is a complexity-preserving abstraction of abstract machines.
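To make the distilled machines concrete, here is a minimal sketch of the KAM in OCaml. The representation choices (de Bruijn indices, environments and stacks as lists of closures, the names term, closure, step) are ours, for illustration only, and not the paper's formal definitions.

    type term =
      | Var of int            (* de Bruijn index *)
      | Lam of term
      | App of term * term

    type closure = Clo of term * env
    and env = closure list

    (* A KAM state: the code, its environment, and the argument stack. *)
    type state = term * env * closure list

    (* One machine transition; None means the state is final. *)
    let step : state -> state option = function
      | App (t, u), e, s -> Some (t, e, Clo (u, e) :: s)  (* push the argument as a closure *)
      | Lam t, e, c :: s -> Some (t, c :: e, s)           (* pop an argument and bind it *)
      | Var n, e, s ->
          (match List.nth_opt e n with
           | Some (Clo (t, e')) -> Some (t, e', s)        (* look the variable up *)
           | None -> None)
      | _ -> None

    let rec run st = match step st with Some st' -> run st' | None -> st

    (* Example: run (App (Lam (Var 0), Lam (Var 0)), [], []) *)

In the reading sketched in the abstract, roughly, the Lam and Var transitions are the ones that are simulated in the LSC (as multiplicative and exponential steps), while the App transition is mere search and vanishes into the structural congruence.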


Citations
Proceedings ArticleDOI
08 Jan 2014
TL;DR: This paper focuses on standardization for the linear substitution calculus, a calculus with ES capable of mimicking reduction in lambda-calculus and linear logic proof-nets, and relies on Gonthier, Lévy, and Melliès' axiomatic theory for standardization.
Abstract: Standardization is a fundamental notion for connecting programming languages and rewriting calculi. Since both programming languages and calculi rely on substitution for defining their dynamics, explicit substitutions (ES) help further close the gap between theory and practice. This paper focuses on standardization for the linear substitution calculus, a calculus with ES capable of mimicking reduction in lambda-calculus and linear logic proof-nets. For the latter, proof-nets can be formalized by means of a simple equational theory over the linear substitution calculus. Contrary to other extant calculi with ES, our system can be equipped with a residual theory in the sense of Lévy, which is used to prove a left-to-right standardization theorem for the calculus with ES but without the equational theory. Such a theorem, however, does not lift from the calculus with ES to proof-nets, because the notion of left-to-right derivation is not preserved by the equational theory. We then relax the notion of left-to-right standard derivation, based on a total order on redexes, to a more liberal notion of standard derivation based on partial orders. Our proofs rely on Gonthier, Lévy, and Melliès' axiomatic theory for standardization. However, we go beyond merely applying their framework, revisiting some of its key concepts: we obtain uniqueness (modulo) of standard derivations in an abstract way and we provide a coinductive characterization of their key abstract notion of external redex. This last point is then used to give a simple proof that linear head reduction, a nondeterministic strategy having a central role in the theory of linear logic, is standard.
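To fix intuitions, here is linear head reduction in the style of the linear substitution calculus, in a minimal notation of ours (→m for the multiplicative step creating an explicit substitution, →e for the exponential step acting on the head occurrence only):

    $(\lambda x.\, x\, x)\, t \;\to_{\mathsf{m}}\; (x\, x)[x{\leftarrow}t] \;\to_{\mathsf{e}}\; (t\, x)[x{\leftarrow}t]$

The multiplicative step delays the substitution as an explicit one, and the exponential step replaces only the head occurrence of x, leaving the other occurrence shared.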

61 citations


Cites background or methods from "Distilling abstract machines"

  • ...This notion of reduction is deterministic and tightly related to the π-calculus [3] and Krivine Abstract Machine [7, 14]....

  • ...In parallel, Accattoli, Barenbaum and Mazza [7] studied the relationship between abstract machines and calculi at a distance, obtaining that weak LHR is the strategy implemented by the Krivine Abstract Machine, up to a certain notion of structural equivalence....

  • ...Moreover, [7] contains similar correspondences between machines for call-by-value (the CEK and Leroy’s ZINC) and call-by-need (Sestoft’s machine) and some variations over weak LHR....

Proceedings ArticleDOI
14 Jul 2014
TL;DR: The main technical contribution of the paper is the definition of useful reductions and the thorough analysis of their properties, which underpin the first complete positive answer to this long-standing problem of the λ-calculus.
Abstract: Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is λ-calculus a reasonable machine? Is there a way to measure the computational complexity of a λ-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of λ-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating λ-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modelled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the λ-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the λ-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to β-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
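The size explosion problem can be made concrete with a standard family of terms (this rendering of the family is ours, not necessarily the paper's):

    $t_1 := \lambda x.\, x\, x \qquad\qquad t_{n+1} := \lambda x.\, t_n\, (x\, x)$

    $t_n\, u \;\to_\beta\; t_{n-1}\, (u\, u) \;\to_\beta\; t_{n-2}\, ((u\, u)\, (u\, u)) \;\to_\beta\; \cdots$

After n β-steps the result is the complete binary application tree with $2^n$ leaves $u$: a linear number of steps produces an exponentially large output.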

48 citations


Additional excerpts

  • ...In particular, building on the already established relationships between the LSC and abstract machines [5], we expect to be able to design an abstract machine implementing LOU evaluation and testing for usefulness in time linear in the size of the starting term....


Proceedings ArticleDOI
05 Sep 2016
TL;DR: The Bang Calculus is introduced, an untyped functional calculus in which the promotion operation of Linear Logic is made explicit and where application is a bilinear operation, and an adequacy theorem is proved by means of a resource Bang Calculus whose design is based on Differential Linear Logic.
Abstract: We introduce and study the Bang Calculus, an untyped functional calculus in which the promotion operation of Linear Logic is made explicit and where application is a bilinear operation. This calculus, which can be understood as an untyped version of Call-By-Push-Value, subsumes both Call-By-Name and Call-By-Value lambda-calculi, factorizing Girard's translations of these calculi into Linear Logic. We build a denotational model of the Bang Calculus based on the relational interpretation of Linear Logic and prove an adequacy theorem by means of a resource Bang Calculus whose design is based on Differential Linear Logic.
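For reference, the two Girard translations of the intuitionistic arrow that the Bang Calculus factorizes read as follows on types (a standard formulation; the cbn/cbv superscripts are ours):

    $(A \Rightarrow B)^{\mathrm{cbn}} = \,!A^{\mathrm{cbn}} \multimap B^{\mathrm{cbn}} \qquad\qquad (A \Rightarrow B)^{\mathrm{cbv}} = \,!(A^{\mathrm{cbv}} \multimap B^{\mathrm{cbv}})$

In the Bang Calculus the promotion ! is part of the untyped syntax itself, so the two translations become two different ways of inserting boxes into λ-terms rather than two different target calculi.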

43 citations

Book ChapterDOI
21 Nov 2016
TL;DR: A detailed comparative study of the operational semantics of four calculi, coming from different areas such as the study of abstract machines, denotational semantics, linear logic proof nets, and sequent calculus, showing that these calculi are all equivalent from a termination point of view.
Abstract: The elegant theory of the call-by-value lambda-calculus relies on weak evaluation and closed terms, which are natural hypotheses in the study of programming languages. To model proof assistants, however, strong evaluation and open terms are required, and it is well known that the operational semantics of call-by-value becomes problematic in this case. Here we study the intermediate setting, which we call Open Call-by-Value, of weak evaluation with open terms, on top of which Grégoire and Leroy designed the abstract machine of Coq. Various calculi for Open Call-by-Value already exist, each one with its pros and cons. This paper presents a detailed comparative study of the operational semantics of four of them, coming from different areas such as the study of abstract machines, denotational semantics, linear logic proof nets, and sequent calculus. We show that these calculi are all equivalent from a termination point of view, justifying the slogan Open Call-by-Value.
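The problem with open terms can be seen on a standard example (variable names ours). Let

    $\delta := \lambda x.\, x\, x \qquad\qquad t := (\lambda z.\, \delta)\, (y\, y)\, \delta$

In Plotkin's call-by-value calculus t is a normal form, because the open argument y y is neither a value nor reducible, so the βv-rule never fires; semantically, however, t should behave like the diverging δ δ. Open Call-by-Value calculi are designed to evaluate such premature normal forms further.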

42 citations


Cites background or methods from "Distilling abstract machines"

  • ...On the other hand, it forces the use of explicit α-renamings (in e), but this does not affect the overall complexity, as it speeds up other operations, see [8]....

  • ...In [7] the study of the GLAMOUr was done according to the distillation approach of [8], i....

  • ...optimized to be reasonable) Open (reducing open terms) Global (using a single global environment) LAM, and LAM stands for Leroy Abstract Machine, an ordinary machine implementing left-to-right call-by-value, defined in [8]....

  • ...According to the distillation approach we distinguish different kinds of transitions, whose names reflect a proof-theoretical view, as machine transitions can be seen as cut-elimination steps [8, 9]: • Multiplicative m: it morally fires a →βf-redex, except that its action puts a new ES in the environment instead of substituting the argument, as →m in λvsub; • Exponential e: performs a clash-avoiding substitution from the environment on the single occurrence represented by the current code....

Journal ArticleDOI
TL;DR: The linear substitution calculus (LSC) is a calculus of explicit substitutions, modeled after linear logic proof nets, that admits a decomposition of leftmost-outermost derivations in which the length of the derivation and the size of the output are linearly related.
Abstract: Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.

38 citations

References
Journal ArticleDOI
TL;DR: This paper examines the old question of the relationship between ISWIM and the λ-calculus, using the distinction between call-by-value and call-by-name, and finds that operational equality is not preserved by either of the simulations.
Abstract: This paper examines the old question of the relationship between ISWIM and the λ-calculus, using the distinction between call-by-value and call-by-name. It is held that the relationship should be mediated by a standardisation theorem. Since this leads to difficulties, a new λ-calculus is introduced whose standardisation theorem gives a good correspondence with ISWIM as given by the SECD machine, but without the letrec feature. Next a call-by-name variant of ISWIM is introduced which is in an analogous correspondence with the usual λ-calculus. The relation between call-by-value and call-by-name is then studied by giving simulations of each language by the other and interpretations of each calculus in the other. These are obtained as another application of the continuation technique. Some emphasis is placed throughout on the notion of operational equality (or contextual equality). If terms can be proved equal in a calculus they are operationally equal in the corresponding language. Unfortunately, operational equality is not preserved by either of the simulations.
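The difference between the two disciplines is visible on a one-line example (a standard one, not taken verbatim from the paper). With

    $\Omega := (\lambda x.\, x\, x)\, (\lambda x.\, x\, x)$

the term $(\lambda x.\, y)\, \Omega$ reduces to y under call-by-name, since the argument is discarded unevaluated, but diverges under call-by-value, since Ω never reduces to a value.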

1,240 citations

Journal ArticleDOI
TL;DR: It is shown how some forms of expression in current programming languages can be modelled in Church's λ-notation, and a way of "interpreting" such expressions is described, which suggests a method of analyzing the things computer users write.
Abstract: This paper is a contribution to the "theory" of the activity of using computers. It shows how some forms of expression used in current programming languages can be modelled in Church's λ-notation, and then describes a way of "interpreting" such expressions. This suggests a method of analyzing the things computer users write that applies to many different problem orientations and to different phases of the activity of using a computer. Also a technique is introduced by which the various composite information structures involved can be formally characterized in their essentials, without commitment to specific written or other representations.

979 citations


"Distilling abstract machines" refers background or methods in this paper

  • ...For the acquainted reader, the new stack morally is the dump of Landin’s SECD machine [35] (but beware that the original definition of the SECD is quite more technical)....

  • ...Some are standard (KAM [34], CEK [28], a sketch of the ZINC [37]), some are new (MAM, MAD), and of others we provide simpler versions (SECD [35], Lazy KAM [19, 24], Sestoft’s [44])....

  • ...The split CEK can be seen as a simplification of Landin’s SECD machine [35]....

Proceedings ArticleDOI
25 Jan 1995
TL;DR: This paper derives an equational characterization of call-by-need and proves it correct with respect to the original lambda calculus; the resulting theory is strictly smaller than the lambda calculus.
Abstract: The mismatch between the operational semantics of the lambda calculus and the actual behavior of implementations is a major obstacle for compiler writers. They cannot explain the behavior of their evaluator in terms of source level syntax, and they cannot easily compare distinct implementations of different lazy strategies. In this paper we derive an equational characterization of call-by-need and prove it correct with respect to the original lambda calculus. The theory is a strictly smaller theory than the lambda calculus. Immediate applications of the theory concern the correctness proofs of a number of implementation strategies, e.g., the call-by-need continuation passing transformation and the realization of sharing via assignments.
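For a flavor of the sharing that the equational characterization captures, here is a let-based rendering of ours (the paper's concrete syntax may differ), with $I := \lambda z.\, z$:

    $(\lambda x.\, x\, x)\, (I\, I) \;\to\; \mathsf{let}\; x = I\, I \;\mathsf{in}\; x\, x \;\to\; \mathsf{let}\; x = I \;\mathsf{in}\; x\, x \;\to\; \cdots$

The argument I I is evaluated only when x is first needed, and its result is then shared by both occurrences: call-by-name would copy and evaluate it twice, while call-by-value would evaluate it even if x were never used.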

299 citations


"Distilling abstract machines" refers background or methods in this paper

  • ...The call-by-need calculus we use—that is a contextual reformulation of Maraist, Odersky, and Wadler’s calculus [38]—is a novelty of this paper....

  • ...1) is a novelty of this paper, and can be seen either as a version at a distance of the calculi of [13, 38] or as a version with explicit substitution of the one in [17]....

Journal ArticleDOI
TL;DR: The machine derived is a lazy version of Krivine's abstract machine, which was originally designed for call-by-name evaluation, and is extended with datatype constructors and base values, so the final machine implements all dynamic aspects of a lazy functional language.
Abstract: We derive a simple abstract machine for lazy evaluation of the lambda calculus, starting from Launchbury's natural semantics. Lazy evaluation here means non-strict evaluation with sharing of argument evaluation, i.e. call-by-need. The machine we derive is a lazy version of Krivine's abstract machine, which was originally designed for call-by-name evaluation. We extend it with datatype constructors and base values, so the final machine implements all dynamic aspects of a lazy functional language.
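In the same spirit, here is a small heap-based call-by-need evaluator in OCaml. It is a big-step sketch of the thunk-and-update mechanism underlying lazy machines such as the one derived in this paper, not a reconstruction of that machine; all names and representation choices are ours.

    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    type value = VClo of string * term * env    (* a closure: λx.body plus its environment *)
    and env = (string * int) list               (* variable -> heap address *)

    type cell = Thunk of term * env | Forced of value

    let heap : (int, cell) Hashtbl.t = Hashtbl.create 16
    let fresh = let r = ref 0 in fun () -> incr r; !r

    (* Free variables are not handled: List.assoc raises Not_found on them. *)
    let rec eval (t : term) (e : env) : value =
      match t with
      | Lam (x, b) -> VClo (x, b, e)
      | App (f, u) ->
          let (VClo (x, b, e')) = eval f e in
          let a = fresh () in
          Hashtbl.replace heap a (Thunk (u, e));      (* share the argument, do not copy it *)
          eval b ((x, a) :: e')
      | Var x ->
          let a = List.assoc x e in
          (match Hashtbl.find heap a with
           | Forced v -> v                            (* already evaluated: reuse *)
           | Thunk (u, e') ->
               let v = eval u e' in
               Hashtbl.replace heap a (Forced v);     (* update: evaluate at most once *)
               v)

Evaluating (λx. x x) ((λz. z) (λz. z)) forces the shared argument once and reuses the stored value for the second occurrence of x, which is the call-by-need behavior (non-strict evaluation with sharing) described in the abstract.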

210 citations


"Distilling abstract machines" refers background or methods in this paper

  • ...10 we will present the Pointing MAD, a variant of the MAD (akin to Sestoft’s machine for CBNeed [44]) that avoids saving E1 in a dump entry, and restoring the store view of the global environment....

  • ...Some are standard (KAM [34], CEK [28], a sketch of the ZINC [37]), some are new (MAM, MAD), and of others we provide simpler versions (SECD [35], Lazy KAM [19, 24], Sestoft’s [44])....

  • ...It can be seen as a simpler version of Sestoft’s Abstract Machine [44], here called SAM....

01 Feb 1990
TL;DR: This is an implementation of the ML language intended to serve as a test field for various extensions of the language, and for new implementation techniques as well, including an efficient implementation of records with inclusion (subtyping).
Abstract: This report details the design and implementation of the ZINC system. This is an implementation of the ML language, intended to serve as a test field for various extensions of the language, and for new implementation techniques as well. This system is strongly oriented toward separate compilation and the production of small, standalone programs; type safety is ensured by a Modula-2-like module system. ZINC uses simple, portable techniques, such as bytecode interpretation; a sophisticated execution model helps counterbalance the interpretation overhead. Other highlights include an efficient implementation of records with inclusion (subtyping).

204 citations


"Distilling abstract machines" refers background or methods in this paper

  • ...The LAM owes its name to Leroy’s ZINC machine [37], which implements right-to-left CBV evaluation....

  • ...Some are standard (KAM [34], CEK [28], a sketch of the ZINC [37]), some are new (MAM, MAD), and of others we provide simpler versions (SECD [35], Lazy KAM [19, 24], Sestoft’s [44])....

  • ...On the other hand, [16] provides a deeper analysis of Leroy’s ZINC machine, as ours does not account for the avoidance of needless closure creations that is a distinct feature of the ZINC, and [24] focuses on the distinction between store-based and storeless call-by-need, a distinction that we address only implicitly (the calculus is storeless, but—as it will be discussed along the paper—it is meant to be implemented with a store)....

  • ...We introduce a new name because the ZINC is a much more sophisticated machine than the LAM: it has a separate set of instructions to which terms are compiled, it handles arithmetic expressions, and it avoids needless closure creations in a way that is not captured by the LAM....