Book Chapter

A Self-Interpreter of Lambda Calculus Having a Normal Form

28 Sep 1992-pp 85-99
TL;DR: In this paper, the notion of a canonical algebraic term rewriting system is introduced and interpreted in the lambda calculus by the Böhm-Piperno technique in such a way that strong normalization is preserved.
Abstract: We formalize a technique introduced by Böhm and Piperno to solve systems of recursive equations in lambda calculus without the use of the fixed-point combinator and using only normal forms. To this aim we introduce the notion of a canonical algebraic term rewriting system, and we show that any such system can be interpreted in the lambda calculus by the Böhm-Piperno technique in such a way that strong normalization is preserved. This allows us to improve some recent results of Mogensen concerning efficient gödelizations ⌈·⌉ : Λ → Λ of the lambda calculus. In particular we prove that under a suitable gödelization there exist two lambda terms E (self-interpreter) and R (reductor), both having a normal form, such that for every (closed or open) lambda term M, E⌈M⌉ → M, and, if M has a normal form N, then R⌈M⌉ → ⌈N⌉.
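For orientation, the kind of gödelization the chapter improves upon can be sketched as follows; this is the standard Mogensen-style higher-order encoding in its usual textbook form, not the exact gödelization constructed in the chapter, which is set up so that both the self-interpreter and the reductor have normal forms.

\[
\lceil M \rceil = \lambda a\,b.\,\langle M \rangle, \qquad
\langle x \rangle = x, \quad
\langle P\,Q \rangle = a\,\langle P \rangle\,\langle Q \rangle, \quad
\langle \lambda x.P \rangle = b\,(\lambda x.\langle P \rangle).
\]

With this representation the term \(E = \lambda m.\,m\,(\lambda x.x)\,(\lambda x.x)\), itself a normal form, satisfies \(E\,\lceil M \rceil \to_\beta \langle M \rangle[a := \lambda x.x,\ b := \lambda x.x] \to_\beta^{*} M\): substituting the identity for the two tags erases them and leaves M.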
Citations
Journal Article
TL;DR: The genesis of the lambda calculus and its two major areas of application are presented: the representation of computations and the resulting functional programming languages on the one hand, and the representation of reasoning and the resulting systems of computer mathematics on the other.
Abstract: One of the most important contributions of A. Church to logic is his invention of the lambda calculus. We present the genesis of this theory and its two major areas of application: the representation of computations and the resulting functional programming languages on the one hand and the representation of reasoning and the resulting systems of computer mathematics on the other hand.

142 citations

01 Sep 1994

130 citations

Proceedings Article
01 Jan 2006
TL;DR: The stepwise construction of an efficient interpreter for lazy functional programming languages like Haskell and Clean is presented; the interpreter turns out to be very competitive in a comparison with other interpreters such as Hugs, Helium, GHCi and Amanda on a number of benchmarks.
Abstract: In this paper we present the stepwise construction of an efficient interpreter for lazy functional programming languages like Haskell and Clean. The interpreter is realized by first transforming the source language to the intermediate language SAPL (Simple Application Programming Language), consisting of pure functions only. During this transformation algebraic data types and pattern-based function definitions are mapped to functions. This eliminates the need for constructs for algebraic data types and pattern matching in SAPL. For SAPL a simple and elegant interpreter is constructed using straightforward graph reduction techniques. This interpreter can be considered a prototype implementation of lazy functional programming languages. Using abstract interpretation techniques the interpreter is optimised. The performance of the resulting interpreter turns out to be very competitive in a comparison with other interpreters like Hugs, Helium, GHCi and Amanda for a number of benchmarks. For some benchmarks the interpreter even rivals the speed of the GHC compiler. Due to its simplicity and its stepwise construction, this implementation is an ideal subject for introductory courses on implementation aspects of lazy functional programming languages.
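As a rough illustration of the translation step described above, algebraic data types and pattern-based definitions can be mapped to plain functions by making each value select one of the case continuations. The Haskell sketch below shows the idea for lists; the names and the encoding are illustrative and not the actual SAPL translation.

{-# LANGUAGE RankNTypes #-}

-- A list is represented by its own case analysis: given a result for the
-- empty case and a continuation for the cons case, it picks the right one.
newtype List a = List { match :: forall r. r -> (a -> List a -> r) -> r }

nil :: List a
nil = List (\n _ -> n)

cons :: a -> List a -> List a
cons x xs = List (\_ c -> c x xs)

-- A pattern-based definition such as
--   len []     = 0
--   len (x:xs) = 1 + len xs
-- then becomes an ordinary function that uses no pattern matching on lists:
len :: List a -> Int
len xs = match xs 0 (\_ rest -> 1 + len rest)

-- e.g. len (cons 'a' (cons 'b' nil)) evaluates to 2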

34 citations

Journal Article
19 Sep 2011
TL;DR: This paper presents the first statically-typed language that not only allows representations of different terms to have different types and supports a self-recogniser, but also supports a self-enactor; the approach has been implemented and experiments support the theory.
Abstract: Self-interpreters can be roughly divided into two sorts: self-recognisers that recover the input program from a canonical representation, and self-enactors that execute the input program. Major progress for statically-typed languages was achieved in 2009 by Rendel, Ostermann, and Hofer, who presented the first typed self-recogniser that allows representations of different terms to have different types. A key feature of their type system is a type:type rule that renders the kind system of their language inconsistent. In this paper we present the first statically-typed language that not only allows representations of different terms to have different types and supports a self-recogniser, but also supports a self-enactor. Our language is a factorisation calculus in the style of Jay and Given-Wilson, a combinatory calculus with a factorisation operator that is powerful enough to support the pattern-matching functions necessary for a self-interpreter. This allows us to avoid a type:type rule. Indeed, the types of System F are sufficient. We have implemented our approach and our experiments support the theory.
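The idea that representations of different terms can have different types, and that a typed recogniser can then recover a value of exactly the represented type, can be illustrated with an ordinary Haskell GADT for an embedded language. This is only a sketch of that one idea; the paper's language is a combinatory factorisation calculus with genuine self-interpretation, which this toy embedding does not attempt.

{-# LANGUAGE GADTs #-}

-- A typed representation: the type index of Rep records the type of the
-- represented term, so different terms get different types.
data Rep a where
  Val :: a -> Rep a                       -- an embedded value
  Abs :: (Rep a -> Rep b) -> Rep (a -> b) -- an abstraction
  App :: Rep (a -> b) -> Rep a -> Rep b   -- an application

-- A typed "recogniser": it recovers a value of the represented type.
unquote :: Rep a -> a
unquote (Val x)   = x
unquote (Abs f)   = \x -> unquote (f (Val x))
unquote (App f a) = unquote f (unquote a)

-- e.g. unquote (App (Abs (\x -> x)) (Val (42 :: Int))) evaluates to 42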

23 citations

Book Chapter
11 Apr 1994
TL;DR: It is proved, under very weak assumptions on the structure of the equations, that there always exist solutions in normal form (Interpretation theorem).
Abstract: Lambda-calculus is extended in order to represent a rather large class of recursive equation systems, implicitly characterizing function(al)s or mappings of some algebraic domain into arbitrary sets. Algebraic equality will then be represented by λβδ-convertibility (or even reducibility). It is then proved, under very weak assumptions on the structure of the equations, that there always exist solutions in normal form (Interpretation theorem). Some features of the solutions, like the use of parametric representations of the algebraic constructors, higher-order solutions by currification, definability of functions on unions of algebras, etc., have been easily checked by a first implementation of the mentioned theorem, the CuCh machine.
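To give a flavour of how parametric representations of the constructors can yield normal-form solutions without a fixed-point combinator, here is one small instance; the particular representation is an illustrative choice and not necessarily the one used in the paper or by the CuCh machine. Represent the numerals by

\[
\overline{0} = \lambda z\,s.\,z\,z\,s, \qquad \mathsf{succ} = \lambda n\,z\,s.\,s\,n\,z\,s,
\]

so that a numeral, when applied to the two equation bodies, hands them back to the body it selects. The system \(\mathit{double}\;\overline{0} = \overline{0}\), \(\mathit{double}\;(\mathsf{succ}\;n) = \mathsf{succ}\,(\mathsf{succ}\,(\mathit{double}\;n))\) is then solved by

\[
\mathit{double} = \lambda t.\,t\,Z\,S, \qquad
Z = \lambda z\,s.\,\overline{0}, \qquad
S = \lambda n\,z\,s.\,\mathsf{succ}\,(\mathsf{succ}\,(n\,z\,s)),
\]

a term that has a normal form and contains no fixed-point combinator: \(\mathit{double}\;(\mathsf{succ}\;\overline{n}) \to^{*} S\,\overline{n}\,Z\,S \to^{*} \mathsf{succ}\,(\mathsf{succ}\,(\overline{n}\,Z\,S))\), and \(\overline{n}\,Z\,S\) is exactly the unfolding of \(\mathit{double}\;\overline{n}\), so the recursion is driven entirely by the data.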

18 citations

References
Journal Article

695 citations

Book
01 Jul 1988
TL;DR: In contrast to procedural / imperative programming, functional programming emphasizes the evaluation of functional expressions, rather than execution of commands.
Abstract: Functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions. In contrast to procedural / imperative programming, functional programming emphasizes the evaluation of functional expressions, rather than execution of commands. The expressions in these languages are formed by using functions to combine basic values.
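A one-line Haskell example of this expression style, added only as an illustration: the computation is a nested expression built from functions rather than a sequence of commands that update state.

-- Sum of squares, written as a single expression composed from functions.
sumOfSquares :: [Int] -> Int
sumOfSquares xs = sum (map (^ 2) xs)

-- e.g. sumOfSquares [1, 2, 3] evaluates to 14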

370 citations

Journal Article
TL;DR: Barendregt (1991) gives a fine structure of the theory of constructions in the form of a canonical cube of eight type systems ordered by inclusion, and Berardi showed that the generalized type systems obtained in this way are flexible enough to describe many logical systems.
Abstract: Programming languages often come with type systems. Some of these are simple, others are sophisticated. As a stylistic representation of types in programming languages several versions of typed lambda calculus are studied. During the last 20 years many of these systems have appeared, so there is some need of classification. Working towards a taxonomy, Barendregt (1991) gives a fine-structure of the theory of constructions (Coquand and Huet 1988) in the form of a canonical cube of eight type systems ordered by inclusion. Berardi (1988) and Terlouw (1988) have independently generalized the method of constructing systems in the λ-cube. Moreover, Berardi (1988, 1990) showed that the generalized type systems are flexible enough to describe many logical systems. In that way the well-known propositions-as-types interpretation obtains a nice canonical form.
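For reference, the eight systems of the cube can be summarised by which product-formation rules they admit on top of the base rule (*, *); the table below, written as Haskell data for concreteness, follows the standard presentation of the cube and is a summary added here, not part of the cited abstract.

-- Every system of the lambda-cube has the rule (*, *); the optional rules add
-- polymorphism (Box, Star), type operators (Box, Box) and dependent types (Star, Box).
data Sort = Star | Box deriving (Eq, Show)

cube :: [(String, [(Sort, Sort)])]
cube =
  [ ("lambda->"      , [])                                     -- simply typed
  , ("lambda2"       , [(Box, Star)])                          -- System F
  , ("lambda-omega_" , [(Box, Box)])                           -- type operators only
  , ("lambdaP"       , [(Star, Box)])                          -- dependent types (LF)
  , ("lambda-omega"  , [(Box, Star), (Box, Box)])              -- System F-omega
  , ("lambdaP2"      , [(Box, Star), (Star, Box)])
  , ("lambdaP-omega_", [(Box, Box), (Star, Box)])
  , ("lambdaC"       , [(Box, Star), (Box, Box), (Star, Box)]) -- calculus of constructions
  ]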

251 citations

Journal Article
TL;DR: The notion of iteratively defined functions from and to heterogeneous term algebras is introduced as the solution of a finite set of equations of a special shape, and an extension of the paradigm to the synthesis of functions of higher complexity is considered and exemplified.
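A minimal instance of such a finite set of equations and its iteratively defined solution, given here only as an illustration in the familiar Church-numeral setting (this specific example is not taken from the cited paper): addition on the term algebra of naturals is specified by add 0 n = n and add (succ m) n = succ (add m n), and is solved by a closed term in normal form that iterates succ.

\[
\overline{n} = \lambda s\,z.\,s^{n}(z), \qquad
\mathsf{succ} = \lambda m\,s\,z.\,s\,(m\,s\,z), \qquad
\mathit{add} = \lambda m\,n\,s\,z.\,m\,s\,(n\,s\,z),
\]

and indeed \(\mathit{add}\;\overline{m}\;\overline{n} \to^{*} \overline{m+n}\), with no fixed-point combinator involved.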

248 citations

Journal Article
TL;DR: This work gives a compact representation schema for λ-terms, shows how this leads to an exceedingly small and elegant self-interpreter and self-reducer, and gives a constructive proof of the second fixed point theorem for the representation schema.
Abstract: We start by giving a compact representation schema for λ-terms, and show how this leads to an exceedingly small and elegant self-interpreter. We then define the notion of a self-reducer, and show how this too can be written as a small λ-term. Both the self-interpreter and the self-reducer are proved correct. We finally give a constructive proof for the second fixed point theorem for the representation schema. All the constructions have been implemented on a computer, and experiments verify their correctness. Timings show that the self-interpreter and self-reducer are quite efficient, being about 35 and 50 times slower than direct execution using a call-by-need reduction strategy.
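A small Haskell rendering of such a representation schema, under the assumption of a first-order Term type with string names and of two tag names "a" and "b" that do not occur in the quoted term; the paper itself uses a higher-order representation, so this is only an approximation of its schema.

-- Plain first-order lambda terms.
data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- Mogensen-style quotation: the whole term is wrapped in two abstractions,
-- applications are tagged with "a", abstractions with "b", variables are kept
-- as themselves. Assumes "a" and "b" do not occur in the quoted term.
quote :: Term -> Term
quote m = Lam "a" (Lam "b" (walk m))
  where
    walk (Var x)   = Var x
    walk (App p q) = App (App (Var "a") (walk p)) (walk q)
    walk (Lam x p) = App (Var "b") (Lam x (walk p))

-- The tiny self-interpreter E = \m. m (\x.x) (\x.x): substituting the identity
-- for both tags erases them, so beta-reducing (App selfInterp (quote m))
-- yields m back.
selfInterp :: Term
selfInterp = Lam "m" (App (App (Var "m") idT) idT)
  where idT = Lam "x" (Var "x")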

93 citations