
Showing papers in "Journal of Functional Programming in 2004"


Journal ArticleDOI
TL;DR: This paper contributes a new programming notation for type theory which elaborates the notion of pattern matching in various ways, and develops enough syntax and semantics to account for this high-level style of programming in dependent type theory.
Abstract: Pattern matching has proved an extremely powerful and durable notion in functional programming. This paper contributes a new programming notation for type theory which elaborates the notion in various ways. First, as is by now quite well-known in the type theory community, definition by pattern matching becomes a more discriminating tool in the presence of dependent types, since it refines the explanation of types as well as values. This becomes all the more true in the presence of the rich class of datatypes known as inductive families (Dybjer, 1991). Secondly, as proposed by Peyton Jones (1997) for Haskell, and independently rediscovered by us, subsidiary case analyses on the results of intermediate computations, which commonly take place on the right-hand side of definitions by pattern matching, should rather be handled on the left. In simply-typed languages, this subsumes the trivial case of Boolean guards; in our setting it becomes yet more powerful. Thirdly, elementary pattern matching decompositions have a well-defined interface given by a dependent type; they correspond to the statement of an induction principle for the datatype. More general, user-definable decompositions may be defined which also have types of the same general form. Elementary pattern matching may therefore be recast in abstract form, with a semantics given by translation. Such abstract decompositions of data generalize Wadler's (1987) notion of ‘view’. The programmer wishing to introduce a new view of a type $\mathit{T}$, and exploit it directly in pattern matching, may do so via a standard programming idiom. The type theorist, looking through the Curry–Howard lens, may see this as proving a theorem, one which establishes the validity of a new induction principle for $\mathit{T}$. We develop enough syntax and semantics to account for this high-level style of programming in dependent type theory. We close with the development of a typechecker for the simply-typed lambda calculus, which furnishes a view of raw terms as either being well-typed, or containing an error. The implementation of this view is ipso facto a proof that typechecking is decidable.
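The view construction has a direct, if much weaker, analogue in plain Haskell. The sketch below is our own illustration, not code from the paper: it renders a Wadler-style view in a simply typed setting as a view datatype plus a function that computes it, which programs can then match against instead of the raw representation.

```haskell
-- A simply typed shadow of the paper's views: a "snoc" view of lists.
data SnocView a = Nil | Snoc [a] a

-- Compute the view of an ordinary list.
view :: [a] -> SnocView a
view []       = Nil
view (x : xs) = case view xs of
  Nil       -> Snoc [] x
  Snoc ys y -> Snoc (x : ys) y

-- Pattern matching "through" the view: the last element of a list, if any.
lastMay :: [a] -> Maybe a
lastMay xs = case view xs of
  Nil      -> Nothing
  Snoc _ y -> Just y
```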

311 citations


Journal ArticleDOI
TL;DR: This paper starts with a gradual introduction to GF, going through a sequence of simpler formalisms till the full power is reached, followed by a systematic presentation of the GF formalism and outlines of the main algorithms: partial evaluation and parser generation.
Abstract: Grammatical Framework (GF) is a special-purpose functional language for defining grammars. It uses a Logical Framework (LF) for a description of abstract syntax, and adds to this a notation for defining concrete syntax. GF grammars themselves are purely declarative, but can be used both for linearizing syntax trees and parsing strings. GF can describe both formal and natural languages. The key notion of this description is a grammatical object, which is not just a string, but a record that contains all information on inflection and inherent grammatical features such as number and gender in natural languages, or precedence in formal languages. Grammatical objects have a type system, which helps to eliminate run-time errors in language processing. In the same way as a LF, GF uses dependent types in abstract syntax to express semantic conditions, such as well-typedness and proof obligations. Multilingual grammars, where one abstract syntax has many parallel concrete syntaxes, can be used for reliable and meaning-preserving translation. They can also be used in authoring systems, where syntax trees are constructed in an interactive editor similar to proof editors based on LF. While being edited, the trees can simultaneously be viewed in different languages. This paper starts with a gradual introduction to GF, going through a sequence of simpler formalisms till the full power is reached. The introduction is followed by a systematic presentation of the GF formalism and outlines of the main algorithms: partial evaluation and parser generation. The paper concludes by brief discussions of the Haskell implementation of GF, existing applications, and related work.

260 citations


Journal ArticleDOI
TL;DR: This paper analyzes the existing transformation techniques for proving termination of context-sensitive rewriting and suggests two new transformations that are simple, sound, and more powerful than the previously proposed transformations.
Abstract: Context-sensitive rewriting is a computational restriction of term rewriting used to model non-strict (lazy) evaluation in functional programming. The goal of this paper is the study and development of techniques to analyze the termination behavior of context-sensitive rewrite systems. For that purpose, several methods have been proposed in the literature which transform context-sensitive rewrite systems into ordinary rewrite systems such that termination of the transformed ordinary system implies termination of the original context-sensitive system. In this way, the huge variety of existing techniques for termination analysis of ordinary rewriting can be used for context-sensitive rewriting, too. We analyze the existing transformation techniques for proving termination of context-sensitive rewriting and we suggest two new transformations. Our first method is simple, sound, and more powerful than the previously proposed transformations. However, it is not complete, i.e., there are terminating context-sensitive rewrite systems that are transformed into non-terminating term rewrite systems. The second method that we present in this paper is both sound and complete. All these observations also hold for rewriting modulo associativity and commutativity.

90 citations


Journal ArticleDOI
TL;DR: This pearl presents a framework for discussing the first-year curriculum and, based on it, the design rationale for the book and course, dubbed How to Design Programs, which emphasizes the systematic design of programs.
Abstract: Twenty years ago Abelson and Sussman's Structure and Interpretation of Computer Programs radically changed the intellectual landscape of introductory computing courses. Instead of teaching some currently fashionable programming language, it employed Scheme and functional programming to teach important ideas. Introductory courses based on the book showed up around the world and made Scheme and functional programming popular. Unfortunately, these courses quickly disappeared again due to shortcomings of the book and the whimsies of Scheme. Worse, the experiment left people with a bad impression of Scheme and functional programming in general. In this pearl, we propose an alternative role for functional programming in the first-year curriculum. Specifically, we present a framework for discussing the first-year curriculum and, based on it, the design rationale for our book and course, dubbed How to Design Programs. The approach emphasizes the systematic design of programs. Experience shows that it works extremely well as a preparation for a course on object-oriented programming.

55 citations



Journal ArticleDOI
Andrew Kennedy1
TL;DR: The tedium of writing pickling and unpickling functions by hand is relieved using a combinator library similar in spirit to the well-known parser combinators.
Abstract: The tedium of writing pickling and unpickling functions by hand is relieved using a combinator library similar in spirit to the well-known parser combinators. Picklers for primitive types are combined to support tupling, alternation, recursion, and structure sharing. Code is presented in Haskell; an alternative implementation in ML is discussed.
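To give a flavour of the approach, here is a minimal sketch under our own simplified representation (not Kennedy's actual library API): a pickler for a type pairs a function that prepends an encoding onto a string with one that parses a value off the front, and combinators build picklers for compound types from picklers for their parts.

```haskell
-- A minimal pickler/unpickler pair over Strings (simplified; the paper's
-- combinators also cover alternation, recursion and structure sharing).
data PU a = PU { pickle   :: a -> String -> String
               , unpickle :: String -> (a, String) }

-- A primitive pickler.
char :: PU Char
char = PU (:) (\(c : s) -> (c, s))

-- Combine two picklers into a pickler for pairs.
pair :: PU a -> PU b -> PU (a, b)
pair pa pb = PU
  (\(a, b) -> pickle pa a . pickle pb b)
  (\s -> let (a, s')  = unpickle pa s
             (b, s'') = unpickle pb s'
         in ((a, b), s''))

-- Adapt a pickler along an isomorphism, e.g. to pickle user-defined types.
wrap :: (a -> b) -> (b -> a) -> PU a -> PU b
wrap to from pa = PU
  (pickle pa . from)
  (\s -> let (a, s') = unpickle pa s in (to a, s'))
```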

49 citations


Journal ArticleDOI
TL;DR: This article contributes an approach to the use of modern functional languages in first-year courses and argues that teaching purely functional programming as such in freshman courses is detrimental both to the curriculum and to promoting the paradigm.
Abstract: We argue that teaching purely functional programming as such in freshman courses is detrimental both to the curriculum and to promoting the paradigm. Instead, we need to focus on the more general aims of teaching elementary techniques of programming and essential concepts of computing. We support this viewpoint with experience gained during several semesters of teaching large first-year classes (up to 600 students) in Haskell. These classes consisted of computer science students as well as students from other disciplines. We have systematically gathered student feedback by conducting surveys after each semester. This article contributes an approach to the use of modern functional languages in first year courses and, based on this, advocates the use of functional languages in this setting.

46 citations


Journal ArticleDOI
TL;DR: Two techniques for the efficient, modularized implementation of a large class of algorithms are described, including the use of rank-2 polymorphism inside Haskell's record types to implement a kind of type-parameterized modules.
Abstract: In this paper, we describe two techniques for the efficient, modularized implementation of a large class of algorithms. We illustrate these techniques using several examples, including efficient generic unification algorithms that use reference cells to encode substitutions, and highly modular language implementations. We chose these examples to illustrate the following important techniques that we believe many functional programmers would find useful. First, defining recursive data types by splitting them into two levels: a structure defining level, and a recursive knot-tying level. Second, the use of rank-2 polymorphism inside Haskell's record types to implement a kind of type-parameterized modules. Finally, we explore techniques that allow us to combine already existing recursive Haskell data-types with the highly modular style of programming proposed here.
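The first technique, splitting a recursive data type into a structure-defining level and a knot-tying level, can be sketched in a few lines (our illustration of the general recipe, not code from the paper):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Structure-defining level: one layer of syntax, parametric in the
-- type of its recursive occurrences.
data ExprF r = Lit Int | Add r r deriving Functor

-- Recursive knot-tying level.
newtype Fix f = In { out :: f (Fix f) }
type Expr = Fix ExprF

-- Recursion schemes now come for free for any structure functor.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

eval :: Expr -> Int
eval = cata $ \e -> case e of
  Lit n   -> n
  Add x y -> x + y
```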

39 citations


Journal ArticleDOI
TL;DR: The first is a reproof of the theorem that type inference for the simply-typed λ-calculus is PTIME-complete, and the logical interpretation is that the problem of normalizing proofnets for multiplicative linear logic (MLL) is also PTIME-complete.
Abstract: We give transparent proofs of the PTIME-completeness of two decision problems for terms in the λ-calculus. The first is a reproof of the theorem that type inference for the simply-typed λ-calculus is PTIME-complete. Our proof is interesting because it uses no more than the standard combinators Church knew of some 70 years ago, in which the terms are linear affine – each bound variable occurs at most once. We then derive a modification of Church's coding of Booleans that is linear, where each bound variable occurs exactly once. A consequence of this construction is that any interpreter for linear λ-calculus requires polynomial time. The logical interpretation of this consequence is that the problem of normalizing proofnets for multiplicative linear logic (MLL) is also PTIME-complete.
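For reference, the affine encoding the abstract alludes to is just Church's Booleans, in which each bound variable is used at most once; here it is in Haskell (the paper's strictly linear variant, where every variable is used exactly once, is not reproduced here):

```haskell
-- Church Booleans: \x y -> x and \x y -> y. Each bound variable occurs
-- at most once, i.e. the terms are affine.
type CBool a = a -> a -> a

true, false :: CBool a
true  x _ = x
false _ y = y

-- Conditionals and negation stay affine as well.
cond :: CBool a -> a -> a -> a
cond b t e = b t e

notC :: CBool a -> CBool a
notC b t e = b e t
```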

38 citations


Journal ArticleDOI
TL;DR: A parser combinator library is extended with support for parsing free-order constructs known as permutation phrases; applications include the generation of parsers for attributes of XML tags and Haskell's record syntax.
Abstract: A permutation phrase is a sequence of elements (possibly of different types) in which each element occurs exactly once and the order is irrelevant. Some of the permutable elements may be optional. We show how to extend a parser combinator library with support for parsing such free-order constructs. A user of the library can easily write parsers for permutation phrases and does not need to care about checking and reordering the recognized elements. Applications include the generation of parsers for attributes of XML tags and Haskell's record syntax.

34 citations


Journal ArticleDOI
TL;DR: A program transformation technique is presented that can be used to solve the efficiency problems due to creation and consumption of intermediate data structures in compositions of macro tree transducers, where classical deforestation techniques fail.
Abstract: Many functional programs with accumulating parameters are contained in the class of macro tree transducers. We present a program transformation technique that can be used to solve the efficiency problems due to creation and consumption of intermediate data structures in compositions of such functions, where classical deforestation techniques fail. To do so, given two macro tree transducers under appropriate restrictions, we construct a single macro tree transducer that implements the composition of the two original ones. The imposed restrictions are more liberal than those in the literature on macro tree transducer composition, thus generalising previous results.

Journal ArticleDOI
TL;DR: A combinator library for non-deterministic parsers with a monadic interface is derived by means of successive refinements starting from a specification, simple and efficient for “almost deterministic” grammars.
Abstract: We derive a combinator library for non-deterministic parsers with a monadic interface, by means of successive refinements starting from a specification. The choice operator of the parser implements a breadth-first search rather than the more common depth-first search, and can be seen as a parallel composition between two parsing processes. The resulting library is simple and efficient for “almost deterministic” grammars, which are typical for programming languages and other computing science applications.

Journal ArticleDOI
TL;DR: By using higher-order polymorphism, a casting function can treat its argument parametrically and this solution is demonstrated in two frameworks for ad-hoc polymorphism: intensional type analysis and Haskell type classes.
Abstract: Comparing two types for equality is an essential ingredient for an implementation of dynamic types. Once equality has been established, it is safe to cast a value from one type to another. In a language with run-time type analysis, implementing such a procedure is fairly straightforward. Unfortunately, this naive implementation destructs and rebuilds the argument while iterating over its type structure. However, by using higher-order polymorphism, a casting function can treat its argument parametrically. We demonstrate this solution in two frameworks for ad-hoc polymorphism: intensional type analysis and Haskell type classes.
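The parametric casting idea can be sketched with a Leibniz-style equality type (a standard construction in the spirit of the paper; the details of the paper's two implementations differ):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A proof that a equals b: a function converting f a to f b for every f.
newtype Equal a b = Equal (forall f. f a -> f b)

refl :: Equal a a
refl = Equal id

newtype Id a = Id { unId :: a }

-- The cast is parametric in its argument: the value is never taken apart
-- and rebuilt, only its type is converted.
cast :: Equal a b -> a -> b
cast (Equal f) = unId . f . Id
```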

Journal ArticleDOI
TL;DR: This work designs, for a core language including extensible records, a type system which rules out unsafe recursion and still supports the construction of a principal type for each typable term.
Abstract: In a call-by-value language, representing objects as recursive records requires using an unsafe fixpoint. We design, for a core language including extensible records, a type system which rules out unsafe recursion and still supports the construction of a principal type for each typable term. We illustrate the expressive power of this language with respect to object-oriented programming by introducing a sub-language for “mixin-based” programming.

Journal ArticleDOI
TL;DR: This article shows how the NUPRL proof development system and its rich type theory have contributed to the design of reliable, high-performance networks by synthesizing optimized code for application configurations of the ENSEMBLE group communication toolkit.
Abstract: Proof systems for expressive type theories provide a foundation for the verification and synthesis of programs. But despite their successful application to numerous programming problems there remains an issue with scalability. Are proof environments capable of reasoning about large software systems? Can the support they offer be useful in practice? In this article we answer this question by showing how the NUPRL proof development system and its rich type theory have contributed to the design of reliable, high-performance networks by synthesizing optimized code for application configurations of the ENSEMBLE group communication toolkit. We present a type-theoretical semantics of OCAML, the implementation language of ENSEMBLE, and tools for automatically importing system code into the NUPRL system. We describe reasoning strategies for generating verifiably correct fast-path optimizations of application configurations that substantially reduce end-to-end latency in ENSEMBLE. We also discuss briefly how to use NUPRL for checking configurations against specifications and for the design of reliable adaptive network protocols.

Journal ArticleDOI
TL;DR: The FC++ library is described, a rich library supporting functional programming in C++ that offers full and concise support for higher-order polymorphic functions through a novel use of C++ type inference.
Abstract: We describe the FC++ library, a rich library supporting functional programming in C++. Prior approaches to encoding higher order functions in C++ have suffered with respect to polymorphic functions from either lack of expressiveness or high complexity. In contrast, FC++ offers full and concise support for higher-order polymorphic functions through a novel use of C++ type inference. The FC++ library has a number of useful features, including a generalized mechanism to implement currying in C++, a “lazy list” class which enables the creation of “infinite data structures”, a subtype polymorphism facility, and an extensive library of useful functions, including a large part of the Haskell Standard Prelude. The FC++ library has an efficient implementation. We show the results of a number of experiments which demonstrate the value of optimizations we have implemented. These optimizations have improved the run-time performance by about an order of magnitude for some benchmark programs that make heavy use of FC++ lazy lists. We also make an informal performance comparison with similar programs written in Haskell.

Journal ArticleDOI
TL;DR: This paper shows that dependent types in the logic programming setting can be used to ensure partial correctness of programs which implement theorem provers, and thus avoid runtime errors in proof search and proof construction using the Twelf system.
Abstract: Static type systems in programming languages allow many errors to be detected at compile time that wouldn't be detected until runtime otherwise. Dependent types are more expressive than the type systems in most programming languages, so languages that have them should allow programmers to detect more errors earlier. In this paper, using the Twelf system, we show that dependent types in the logic programming setting can be used to ensure partial correctness of programs which implement theorem provers, and thus avoid runtime errors in proof search and proof construction. We present two examples: a tactic-style interactive theorem prover and a union-find decision procedure.

Journal ArticleDOI
TL;DR: The derivation given here constitutes a semi-formal correctness proof, and it also brings out explicitly each of the ideas underlying the algorithm.
Abstract: Using Haskell as a digital circuit description language, we transform a ripple carry adder that requires $O(n)$ time to add two $n$-bit words into a parallel carry lookahead adder that requires $O(\log n)$ time. The ripple carry adder uses a scan function to calculate carry bits, but this scan cannot be parallelized directly since it is applied to a non-associative function. Several techniques are applied in order to introduce parallelism, including partial evaluation and symbolic function representation. The derivation given here constitutes a semi-formal correctness proof, and it also brings out explicitly each of the ideas underlying the algorithm.
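The O(n) starting point of the derivation looks roughly as follows (a sketch using mapAccumL to thread the carry; the paper phrases it with a scan and then transforms it):

```haskell
import Data.List (mapAccumL)

-- One-bit full adder: carry-in and the two input bits give
-- the carry-out and the sum bit.
fullAdd :: Bool -> (Bool, Bool) -> (Bool, Bool)
fullAdd cin (a, b) = (carryOut, sumBit)
  where
    sumBit   = (a /= b) /= cin
    carryOut = (a && b) || (cin && (a /= b))

-- Ripple-carry addition of two words given as pairs of bits, least
-- significant bit first. The carry is threaded sequentially, hence the
-- O(n) critical path the paper sets out to parallelise.
rippleAdd :: [(Bool, Bool)] -> (Bool, [Bool])
rippleAdd = mapAccumL fullAdd False
```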

Journal ArticleDOI
TL;DR: It is proved that the class of terms which can be expanded is exactly the class of terms typable in an Intersection Type System, i.e. the strongly normalizable terms, and that expansion is preserved by weak-head reduction, the reduction considered by functional programming languages.
Abstract: In this paper we present a notion of expansion of a term in the lambda-calculus which transforms terms into linear terms. This transformation replaces each occurrence of a variable in the original term by a fresh variable, taking into account non-trivial implications in the structure of the term caused by these simple replacements. We prove that the class of terms which can be expanded is exactly the class of terms typable in an Intersection Type System, i.e. the strongly normalizable terms. We then show that expansion is preserved by weak-head reduction, the reduction considered by functional programming languages.

Journal ArticleDOI
TL;DR: This volume is well suited to the aim of the series in Foundations of Computing which is to provide a forum in which important research topics can be presented and placed in perspective for researchers, students and practitioners alike.
Abstract: This Festschrift in honour of the scientific life and achievements of Robin Milner consists of 24 papers by various authors plus a biography of Robin Milner by the editors. The volume is well suited to the aim of the series in Foundations of Computing which is to provide a forum in which important research topics can be presented and placed in perspective for researchers, students and practitioners alike.

Journal ArticleDOI
TL;DR: An analysis is described, Constructed Product Result (CPR) analysis, that determines when a function can profitably return multiple results in registers and the benefits are modest in general, but the costs in both complexity and compile time are low.
Abstract: Compilers for ML and Haskell typically go to a good deal of trouble to arrange that multiple arguments can be passed efficiently to a procedure. For some reason, less effort seems to be invested in ensuring that multiple results can also be returned efficiently. In the context of the lazy functional language Haskell, we describe an analysis, Constructed Product Result (CPR) analysis, that determines when a function can profitably return multiple results in registers. The analysis is based only on a function's definition, and not on its uses (so separate compilation is easily supported) and the results of the analysis can be expressed by a transformation of the function definition alone. We discuss a variety of design issues that were addressed in our implementation, and give measurements of the effectiveness of our approach across a substantial benchmark set. Overall, the price/performance ratio is good: the benefits are modest in general (though occasionally dramatic), but the costs, in both complexity and compile time, are low.
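The effect of a CPR-justified worker/wrapper split can be illustrated by hand in GHC Haskell (our illustration of the idea, not the compiler's actual output):

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts

-- Wrapper: keeps the original, boxed interface.
addMul :: Int -> Int -> (Int, Int)
addMul (I# x) (I# y) = case addMul# x y of
  (# s, p #) -> (I# s, I# p)

-- Worker: the constructed product result is returned as an unboxed tuple,
-- i.e. "in registers", so the worker allocates no pair constructor.
addMul# :: Int# -> Int# -> (# Int#, Int# #)
addMul# x y = (# x +# y, x *# y #)
```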

Journal ArticleDOI
TL;DR: Haskell code is developed for two ways to list the strings of the language defined by a regular expression: directly by set operations and indirectly by converting to and simulating an equivalent automaton.
Abstract: Haskell code is developed for two ways to list the strings of the language defined by a regular expression: directly by set operations and indirectly by converting to and simulating an equivalent automaton. The exercise illustrates techniques for dealing with infinite ordered domains and leads to an effective standard form for nondeterministic finite automata.
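A naive cut of the "directly by set operations" route, enumerating the strings of each length, can be written as follows (our simplification: the paper enumerates the whole, possibly infinite, language in a canonical order and also covers the automaton route):

```haskell
-- Regular expressions over Char.
data RE = Empty | Eps | Sym Char | Alt RE RE | Cat RE RE | Star RE

-- All strings of exactly length n in the language (duplicates not removed).
lang :: RE -> Int -> [String]
lang Empty     _ = []
lang Eps       n = [ "" | n == 0 ]
lang (Sym c)   n = [ [c] | n == 1 ]
lang (Alt r s) n = lang r n ++ lang s n
lang (Cat r s) n = [ u ++ v | k <- [0 .. n], u <- lang r k, v <- lang s (n - k) ]
lang (Star r)  n
  | n == 0    = [""]
  | otherwise = [ u ++ v | k <- [1 .. n], u <- lang r k, v <- lang (Star r) (n - k) ]
```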

Journal ArticleDOI
TL;DR: Simple equational reasoning is exploited to derive the inverse of the Burrows–Wheeler transform from its specification, and two more general versions of the transform are outlined.
Abstract: The Burrows–Wheeler Transform is a string-to-string transform which, when used as a preprocessing phase in compression, significantly enhances the compression rate. However, it often puzzles people how the inverse transform is carried out. In this pearl we exploit simple equational reasoning to derive the inverse of the Burrows–Wheeler transform from its specification. We also outline how to derive the inverse of two more general versions of the transform, one proposed by Schindler and the other by Chapin and Tate.
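For concreteness, here are the transform and the standard quadratic reconstruction of its inverse in Haskell (our sketch; the pearl's point is to derive the inverse by equational reasoning rather than merely to state it):

```haskell
import Data.List (sort)

-- Forward transform: the last column of the sorted rotations, plus the
-- position of the original string among them.
bwt :: Ord a => [a] -> ([a], Int)
bwt xs = (map last sorted, length (takeWhile (/= xs) sorted))
  where
    sorted = sort (rotations xs)
    rotations ys = take (length ys) (iterate rotate ys)
    rotate (y : ys) = ys ++ [y]
    rotate []       = []

-- Inverse transform: rebuild the sorted rotations column by column by
-- repeatedly consing the transformed string onto the sorted partial rows.
unbwt :: Ord a => ([a], Int) -> [a]
unbwt (ys, p) = recreate (length ys) !! p
  where
    recreate 0 = map (const []) ys
    recreate k = sort (zipWith (:) ys (recreate (k - 1)))
```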


Journal ArticleDOI
TL;DR: This paper implements a simple and elegant version of bottom-up Kilbury chart parsing (Kilbury, 1985; Wiren, 1992), which makes the code clean, elegant and declarative, while still having the same space and time complexity as the standard imperative implementations.
Abstract: This paper implements a simple and elegant version of bottom-up Kilbury chart parsing (Kilbury, 1985; Wiren, 1992). This is one of the many chart parsing variants, which are all based on the data structure of charts. The chart parsing process uses inference rules to add new edges to the chart, and parsing is complete when no further edges can be added. One novel aspect of this implementation is that it doesn't have to rely on a global state for the implementation of the chart. This makes the code clean, elegant and declarative, while still having the same space and time complexity as the standard imperative implementations.

Journal ArticleDOI
TL;DR: This work describes the design of an injective finite mapping and its implementation in Curry, a functional logic language that supports the concurrent asynchronous execution of distinct portions of a program.
Abstract: An injective finite mapping is an abstraction common to many programs. We describe the design of an injective finite mapping and its implementation in Curry, a functional logic language. Curry supports the concurrent asynchronous execution of distinct portions of a program. This condition prevents passing from one portion to another a structure containing a partially constructed mapping to ensure that a new choice does not violate the injectivity condition. We present some motivating problems and we show fragments of programs that solve these problems using our design and implementation.

Journal ArticleDOI
TL;DR: A simple but flexible family of Haskell programs for drawing pictures of fractals such as Mandelbrot and Julia sets is described to showcase the elegance of a compositional approach to program construction, and the benefits of a clean separation between different aspects of program behavior.
Abstract: This paper describes a simple but flexible family of Haskell programs for drawing pictures of fractals such as Mandelbrot and Julia sets. Its main goal is to showcase the elegance of a compositional approach to program construction, and the benefits of a clean separation between different aspects of program behavior. Aimed at readers with relatively little experience of functional programming, the paper can be used as a tutorial on functional programming, as an overview of the Mandelbrot set, or as a motivating example for studies in computability.
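The compositional flavour can be conveyed in a few lines (our sketch; the paper builds a full picture-drawing pipeline around this core):

```haskell
import Data.Complex

-- One step of the Mandelbrot iteration for parameter c.
next :: Complex Double -> Complex Double -> Complex Double
next c z = z * z + c

-- The orbit of 0 under that iteration.
orbit :: Complex Double -> [Complex Double]
orbit c = iterate (next c) 0

-- Approximate membership test: c is kept if a bounded prefix of its
-- orbit stays within the disc of radius 2.
inMandelbrot :: Int -> Complex Double -> Bool
inMandelbrot steps c = all ((<= 2) . magnitude) (take steps (orbit c))
```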

Journal ArticleDOI
TL;DR: This work discusses several existing methods of imperative programming, none of which are really satisfactory, and proposes a new approach based on implicit parameters that is simple, safe, and efficient, although it does reveal weaknesses in Haskell's present type system.
Abstract: Haskell today provides good support not only for a functional programming style, but also for an imperative one. Elements of imperative programming are needed in applications such as web servers, or to provide efficient implementations of well-known algorithms, such as many graph algorithms. However, one element of imperative programming, the global variable, is surprisingly hard to emulate in Haskell. We discuss several existing methods, none of which is really satisfactory, and finally propose a new approach based on implicit parameters. This approach is simple, safe, and efficient, although it does reveal weaknesses in Haskell's present type system.
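In rough outline (our sketch of the idiom, not the paper's full proposal), the "global" is an IORef passed invisibly as an implicit parameter:

```haskell
{-# LANGUAGE ImplicitParams #-}
import Data.IORef

-- A counter that behaves like a global variable, but is really an
-- implicit parameter of every function that touches it.
tick :: (?counter :: IORef Int) => IO Int
tick = do
  n <- readIORef ?counter
  writeIORef ?counter (n + 1)
  return n

main :: IO ()
main = do
  r <- newIORef 0
  let ?counter = r
  a <- tick
  b <- tick
  print (a, b)   -- prints (0,1)
```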

Journal ArticleDOI
TL;DR: It is illustrated with an example that modern functional programming languages like Haskell can be used effectively for programming search problems, in contrast to the widespread belief that Prolog is much better suited for tasks like these.
Abstract: In this Pearl we illustrate with an example that modern functional programming languages like Haskell can be used effectively for programming search problems, in contrast to the widespread belief that Prolog is much better suited for tasks like these.
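The flavour of the argument, Haskell's list monad giving Prolog-style backtracking search, can be seen in the classic n-queens program (our example; the pearl works through its own puzzle):

```haskell
import Control.Monad (guard)

-- Place n queens on an n-by-n board, one per column, using the list monad
-- for nondeterministic choice and backtracking.
queens :: Int -> [[Int]]
queens n = go n
  where
    go 0 = [[]]
    go k = do
      rest <- go (k - 1)
      row  <- [1 .. n]
      guard (safe row rest)
      return (row : rest)
    -- A new queen is safe if it shares no row and no diagonal
    -- with the queens already placed.
    safe row rest = and [ row /= r && abs (row - r) /= d
                        | (d, r) <- zip [1 ..] rest ]
```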

Journal ArticleDOI
TL;DR: This paper summarises an approach, based on program transformation, to teaching C to students who can already write simple functional programs, and reports experience teaching functional C at the Universities of Southampton and Bristol.
Abstract: A functional programming language can be taught successfully as a first language, but if there is no follow up the students do not appreciate the functional approach. Following discussions concerning this issue at the 1995 FPLE conference (Hartel & Plasmeijer, 1995), we decided to develop such a follow up by writing a book that teaches C to students who can write simple functional programs. This paper summarises the essence of our approach, which is based on program transformation, and presents our experience teaching functional C at the Universities of Southampton and Bristol.