
Showing papers in "Journal of Functional Programming in 1995"


Journal ArticleDOI
TL;DR: This work investigates the interplay of dynamic types with other advanced type constructions, discussing their integration into languages with explicit polymorphism (in the style of system F), implicit polymorphism, abstract data types, and subtyping.
Abstract: There are situations in programming where some dynamic typing is needed, even in the presence of advanced static type systems. We investigate the interplay of dynamic types with other advanced type constructions, discussing their integration into languages with explicit polymorphism (in the style of system F ), implicit polymorphism (in the style of ML), abstract data types, and subtyping.

146 citations
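For a present-day point of comparison (not the paper's own calculus), GHC Haskell's Data.Dynamic library realizes this kind of dynamic type: values of any Typeable type are injected into a universal type Dynamic and projected out with a run-time type check.

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous list: each element carries its type at run time.
mixed :: [Dynamic]
mixed = [toDyn (1 :: Int), toDyn "two", toDyn (3 :: Int)]

-- Projection succeeds only when the run-time type matches the
-- statically expected one; mismatches yield Nothing, not a crash.
ints :: [Int]
ints = [ n | d <- mixed, Just n <- [fromDynamic d] ]
```

Here `ints` recovers only the Int elements, illustrating the typecase-style elimination that the paper studies in combination with polymorphism and abstract types.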


Journal ArticleDOI
Mark P. Jones1
TL;DR: The authors describe a type system that combines overloading and higher-order polymorphism in an implicitly typed language using a system of constructor classes, a natural generalization of type classes in Haskell.
Abstract: This paper describes a flexible type system that combines overloading and higher-order polymorphism in an implicitly typed language using a system of constructor classes—a natural generalization of type classes in Haskell. We present a range of examples to demonstrate the usefulness of such a system. In particular, we show how constructor classes can be used to support the use of monads in a functional language. The underlying type system permits higher-order polymorphism but retains many of the attractive features that have made Hindley/Milner type systems so popular. In particular, there is an effective algorithm that can be used to calculate principal types without the need for explicit type or kind annotations. A prototype implementation has been developed providing, amongst other things, the first concrete implementation of monad comprehensions known to us at the time of writing.

93 citations
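The idea fits in a few lines of Haskell. A constructor class is a class whose parameter ranges over type constructors (kind `* -> *`) rather than over types; Haskell's own Functor and Monad classes are the canonical examples. The Container class below is an illustrative stand-in:

```haskell
-- A constructor class: the parameter 'f' ranges over type
-- constructors of kind * -> *, not over types.
class Container f where
  cmap :: (a -> b) -> f a -> f b

-- One overloaded name, many container shapes:
instance Container [] where
  cmap = map

instance Container Maybe where
  cmap _ Nothing  = Nothing
  cmap g (Just x) = Just (g x)
```

`cmap (+1) [1,2]` yields `[2,3]` while `cmap (+1) (Just 2)` yields `Just 3`: the same name is resolved at two different constructors, and kind inference determines that `f` must have kind `* -> *` without any annotations.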


Journal ArticleDOI
TL;DR: This paper shows that I/O can be incorporated in a functional programming language without loss of any of the generally accepted advantages of functional programming languages.
Abstract: Functional programming languages have banned assignment because of its undesirable properties. The reward of this rigorous decision is that functional programming languages are side-effect free. There is another side to the coin: because assignment plays a crucial role in Input/Output (I/O), functional languages have a hard time dealing with I/O. Functional programming languages have therefore often been stigmatised as inferior to imperative programming languages because they cannot deal with I/O very well. In this paper we show that I/O can be incorporated in a functional programming language without loss of any of the generally accepted advantages of functional programming languages. This discussion is supported by an extensive account of the I/O system offered by the lazy, purely functional programming language Clean. Two aspects that are paramount in its I/O system make the approach novel with respect to other approaches. These aspects are the technique of explicit multiple environment passing, and the Event I/O framework to program Graphical User I/O in a highly structured and high-level way. Clean file I/O is as powerful and flexible as it is in common imperative languages (one can read, write, and seek directly in a file). Clean Event I/O provides programmers with a high-level framework to specify complex Graphical User I/O. It has been used to write applications such as a window-based text editor, an object-based drawing program, a relational database, and a spreadsheet program. These graphical interactive programs are completely machine independent, but still obey the look-and-feel of the concrete window environment being used. The specifications are completely functional and make extensive use of uniqueness typing, higher-order functions, and algebraic data types. Efficient implementations are present on the Macintosh, Sun (X Windows under Open Look), and PC (OS/2).

86 citations
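Clean's actual I/O types rely on uniqueness typing, which plain Haskell lacks; the toy model below (the names World, writeLine and program are ours, not Clean's API) shows only the explicit environment-passing style: each operation consumes one environment and returns the next, so the environment is threaded through the program in a single line.

```haskell
-- Toy model of explicit environment passing: the 'world' records the
-- output produced so far, and every I/O operation maps a world to a
-- new world.
newtype World = World [String] deriving (Eq, Show)

writeLine :: String -> World -> World
writeLine s (World out) = World (out ++ [s])

-- A program is a world transformer; sequencing is function composition.
program :: World -> World
program w = writeLine "second" (writeLine "first" w)
```

In Clean itself, the uniqueness attribute on the world type statically guarantees that no old version of the environment is ever reused, so the implementation may perform each update destructively.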


Journal ArticleDOI
TL;DR: Interpreting η-conversion as an expansion rule in the simply-typed λ-calculus maintains the confluence of reduction in a richer type structure.
Abstract: Interpreting η-conversion as an expansion rule in the simply-typed λ-calculus maintains the confluence of reduction in a richer type structure. This use of expansions is supported by categorical models of reduction, where β-contraction, as the local counit, and η-expansion, as the local unit, are linked by local triangle laws. The latter form reduction loops, but strong normalization (to the long βη-normal forms) can be recovered by ‘cutting’ the loops.

79 citations
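The reduction loop the abstract refers to can be seen in ordinary notation (this example is ours, stated in the standard way, not quoted from the paper):

```latex
% Unrestricted eta-expansion loops with beta-contraction:
%
%   M\,N \;\longrightarrow_{\bar{\eta}}\; (\lambda x.\,M\,x)\,N
%        \;\longrightarrow_{\beta}\; M\,N
%
% The usual way to 'cut' such loops is to restrict expansion: never
% eta-expand a term that is already a lambda-abstraction, nor one in
% function position of an application.  With that restriction the
% system strongly normalises to the long beta-eta-normal forms.
```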


Journal ArticleDOI
TL;DR: In this article, a portable, instrumentation-based, replay debugger for the Standard ML of New Jersey compiler is presented, which is independent from the underlying hardware and runtime system, and from the optimization strategies used by the compiler.
Abstract: We have built a portable, instrumentation-based, replay debugger for the Standard ML of New Jersey compiler. Traditional ‘source-level’ debuggers for compiled languages actually operate at machine level, which makes them complex, difficult to port, and intolerant of compiler optimization. For secure languages like ML, however, debugging support can be provided without reference to the underlying machine, by adding instrumentation to program source code before compilation. Because instrumented code is (almost) ordinary source, it can be processed by the ordinary compiler. Our debugger is thus independent from the underlying hardware and runtime system, and from the optimization strategies used by the compiler. The debugger also provides reverse execution, both as a user feature and an internal mechanism. Reverse execution is implemented using a checkpoint and replay system; checkpoints are represented primarily by first-class continuations.

73 citations


Journal ArticleDOI
TL;DR: This work presents purely functional implementations of queues and double-ended queues (deques) requiring only O(1) worst-case time per operation; a notable feature of the algorithms is that they require some laziness – but not too much.
Abstract: We present purely functional implementations of queues and double-ended queues (deques) requiring only O(1) time per operation in the worst case. Our algorithms are considerably simpler than previous designs with the same bounds. The inspiration for our approach is the incremental behavior of certain functions on lazy lists.

Capsule Review: This paper presents another example of the ability to write programs in functional languages that satisfy our desire for clarity while satisfying our need for efficiency. In this case, the subject (often studied) is the implementation of queues and deques that are functional and exhibit constant-time worst-case insertion and deletion operations. Although the problem has been solved previously, this paper presents the simplest algorithm so far. As the author notes, it has a strange feature of requiring some laziness – but not too much!

71 citations
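The queue itself fits in a dozen lines of Haskell, whose lists are lazy by default and so supply exactly the "some laziness, but not too much" the paper needs. The following is a rendering of the real-time queue in the paper's style; the helper names are ours:

```haskell
-- Real-time queue: lazy front list, rear list, and a 'schedule' that
-- forces one suspension of the front per operation.
-- Invariant: |schedule| = |front| - |rear|.
data Queue a = Queue [a] [a] [a]

empty :: Queue a
empty = Queue [] [] []

-- When the schedule runs out, the rear is one longer than the front:
-- start an incremental rotation and use the new front as the schedule.
exec :: [a] -> [a] -> [a] -> Queue a
exec f r (_ : s) = Queue f r s
exec f r []      = let f' = rotate f r [] in Queue f' [] f'

-- rotate f r a = f ++ reverse r ++ a, computed one step at a time so
-- that laziness spreads the cost of the reversal over later operations.
rotate :: [a] -> [a] -> [a] -> [a]
rotate []       (y : _)  a = y : a
rotate (x : xs) (y : ys) a = x : rotate xs ys (y : a)
rotate _        []       _ = error "rotate: invariant violated"

snoc :: Queue a -> a -> Queue a
snoc (Queue f r s) x = exec f (x : r) s

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue []      _ _) = Nothing
uncons (Queue (x : f) r s) = Just (x, exec f r s)
```

Each snoc and uncons forces exactly one suspension of the schedule, so every step of the incremental rotation has been paid for by the time its result is demanded, which is what yields the O(1) worst-case bound.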


Journal ArticleDOI
TL;DR: A new unification algorithm is presented which is an extension of syntactic unification with constraint solving; the existence of principal types follows from an analysis of this unification algorithm.
Abstract: We study the type inference problem for a system with type classes as in the functional programming language Haskell. Type classes are an extension of ML-style polymorphism with overloading. We generalize Milner's work on polymorphism by introducing a separate context constraining the type variables in a typing judgement. This leads to simple type inference systems and algorithms which closely resemble those for ML. In particular, we present a new unification algorithm which is an extension of syntactic unification with constraint solving. The existence of principal types follows from an analysis of this unification algorithm.

52 citations
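The constrained typing judgements the paper describes surface directly in Haskell: the context to the left of `=>` is the constraint set on the type variables, and instantiating a constrained variable discharges the constraint by instance lookup. A minimal example:

```haskell
-- Milner-style inference extended with a constraint context: the
-- principal type of 'double' carries the constraint set {Num a}.
double :: Num a => a -> a
double x = x + x

-- Instantiating 'a' at Int or Double discharges the Num constraint
-- by looking up the corresponding instance.
four :: Int
four = double 2

five :: Double
five = double 2.5
```

The point of the paper is that inference for such types needs only a modest extension of syntactic unification, so principal types like `Num a => a -> a` can still be computed by an ML-style algorithm.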


Journal ArticleDOI
TL;DR: The Bologna Optimal Higher-order Machine’s general architecture is described, and a large set of benchmarks and experimental results are given.
Abstract: The Bologna Optimal Higher-order Machine (BOHM) is a prototype implementation of the core of a functional language based on (a variant of) Lamping's optimal graph reduction technique. The source language is a sugared lambda-calculus enriched with booleans, integers, lists and basic operations on these data types (following the guidelines of Interaction Systems). In this paper, we shall describe BOHM's general architecture (comprising the garbage collector), and we shall give a large set of benchmarks and experimental results.

50 citations


Journal ArticleDOI
TL;DR: In this paper, the basic mechanisms of object-oriented programming, objects, methods, message passing, and subtyping are characterized by an explicit Object type constructor and suitable introduction, elimination, and equality rules.
Abstract: We give a direct type-theoretic characterization of the basic mechanisms of object-oriented programming, objects, methods, message passing, and subtyping, by introducing an explicit Object type constructor and suitable introduction, elimination, and equality rules. The resulting abstract framework provides a common basis for justifying and comparing previous encodings of objects based on recursive record types [7, 9], F-bounded quantification [4, 13, 19], and existential types [23].

48 citations
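Of the encodings the paper compares, the existential one is the easiest to write down in GHC Haskell today (the names below are ours, for illustration): an object packs a hidden internal state together with the methods that act on it, and message passing unpacks the object, applies a method, and repacks.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Existential encoding of a counter object: the state type 's' is
-- hidden from clients; only the methods can touch it.
data Counter = forall s. Counter s (s -> s) (s -> Int)

-- A concrete object whose hidden state happens to be an Int.
new :: Counter
new = Counter (0 :: Int) (+ 1) id

-- 'Message passing': unpack, apply the method, repack.
sendTick :: Counter -> Counter
sendTick (Counter s tick value) = Counter (tick s) tick value

sendValue :: Counter -> Int
sendValue (Counter s _ value) = value s
```

Subtyping in the paper's sense needs further machinery; this sketch shows only objects, methods and message passing, the part that the existential quantifier captures directly.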


Journal ArticleDOI
Amir Kishon1, Paul Hudak1
TL;DR: This article explores the use of monitoring semantics in the specification and implementation of a variety of monitors: profilers, tracers, collecting interpreters, and, most importantly, interactive source-level debuggers.
Abstract: Monitoring semantics is a formal model of program execution which captures "monitoring activity" as found in profilers, tracers, debuggers, etc. Beyond its theoretical interest, this formalism provides a new methodology for implementing a large family of source-level monitoring activities for sequential deterministic programming languages. In this article we explore the use of monitoring semantics in the specification and implementation of a variety of monitors: profilers, tracers, collecting interpreters, and, most importantly, interactive source-level debuggers. Although we consider such monitors only for (both strict and non-strict) functional languages, the methodology extends easily to imperative languages, since it begins with a continuation semantics specification. In addition, using standard partial evaluation techniques as an optimization strategy, we show that the methodology forms a practical basis for building real monitors. Our system can be optimized at two levels of specialization: specializing the interpreter with respect to a monitor specification automatically yields an instrumented interpreter; further specializing this instrumented interpreter with respect to a source program yields an instrumented program, i.e., one in which the extra code to perform monitoring has been automatically embedded into the program.

25 citations


Journal ArticleDOI
TL;DR: This paper describes a first attempt to implement a spreadsheet in a lazy, purely functional language and introduces the possibility of asking the system to prove equality of symbolic cell expressions: a property which can greatly enhance the reliability of a particular user-defined spreadsheet.
Abstract: It has been claimed that recent developments in the research on the efficiency of code generation and on graphical input/output interfacing have made it possible to use a functional language to write efficient programs that can compete with industrial applications written in a traditional imperative language. As one of the early steps in verifying this claim, this paper describes a first attempt to implement a spreadsheet in a lazy, purely functional language. An interesting aspect of the design is that the language with which the user specifies the relations between the cells of the spreadsheet is itself a lazy, purely functional and higher order language as well, and not some special dedicated spreadsheet language. Another interesting aspect of the design is that the spreadsheet incorporates symbolic reduction and normalisation of symbolic expressions (including equations). This introduces the possibility of asking the system to prove equality of symbolic cell expressions: a property which can greatly enhance the reliability of a particular user-defined spreadsheet. The resulting application is by no means a fully mature product. It is not intended as a competitor to commercially available spreadsheets. However, with its higher order lazy functional language and its symbolic capabilities it may serve as an interesting candidate to fill the gap between calculators with purely functional expressions and full-featured spreadsheets with dedicated non-functional spreadsheet languages. This paper describes the global design and important implementation issues in the development of the application. The experience gained and lessons learnt during this project are discussed. Performance and use of the resulting application are compared with related work.

Journal ArticleDOI
TL;DR: The construction of a parallel vision system from Standard ML prototypes is presented and the system recognises 3D objects from 2D scenes through edge detection, grouping of edges into straight lines and line junction based model matching.
Abstract: The construction of a parallel vision system from Standard ML prototypes is presented. The system recognises 3D objects from 2D scenes through edge detection, grouping of edges into straight lines and line junction based model matching. Functional prototyping for parallelism is illustrated through the development of the straight line detection component. The assemblage of the whole system from prototyped components is then considered and its performance discussed.

Journal ArticleDOI
TL;DR: The difference between the way that functional programmers and functional language implementors view program behaviour is examined and a new technique is proposed which produces results that are straightforward for programmers to assimilate.
Abstract: This paper addresses the issue of analysing the run-time behaviour of lazy, higher-order functional programs. We examine the difference between the way that functional programmers and functional language implementors view program behaviour. Existing profiling techniques are discussed and a new technique is proposed which produces results that are straightforward for programmers to assimilate. The new technique, which we call lexical profiling, collects information about the run-time behaviour of functional programs, and reports the results with respect to the original source code rather than simply listing the actions performed at run-time. Lexical profiling complements implementation-specific profiling and is important because it provides a view of program activity which is largely independent of the underlying evaluation mechanism. Using the lexical profiler, programmers may easily relate results back to the source program. We give a full implementation of the lexical profiling technique for a sequential, interpretive graph reduction engine, and extensions for compiled and parallel graph reduction are discussed.

Journal ArticleDOI
TL;DR: A translation of linear terms into terms in the second-order polymorphic lambda calculus (λ2) is given which allows the result to be proved by appealing to the well-known strong normalisation property of λ2.
Abstract: We prove a strong normalisation result for the linear term calculus of Benton, Bierman, Hyland and de Paiva. Rather than prove the result from first principles, we give a translation of linear terms into terms in the second-order polymorphic lambda calculus (λ2) which allows the result to be proved by appealing to the well-known strong normalisation property of λ2. An interesting feature of the translation is that it makes use of the λ2 coding of a coinductive datatype as the translation of the !-types (exponentials) of the linear calculus.

Journal ArticleDOI
TL;DR: This paper presents semantic specifications and correctness proofs for both on-line and off-line partial evaluation of strict first-order functional programs by defining a core semantics as a basis for the specification of three non-standard evaluations: instrumented evaluation, on-line partial evaluation, and off-line partial evaluation.
Abstract: This paper presents semantic specifications and correctness proofs for both on-line and off-line partial evaluation of strict first-order functional programs. To do so, our strategy consists of defining a core semantics as a basis for the specification of three non-standard evaluations: instrumented evaluation, on-line partial evaluation, and off-line partial evaluation. We then use the technique of logical relations to prove the correctness of both on-line and off-line partial evaluation semantics. The contributions of this work are as follows:
1. We provide a uniform framework for defining and proving correct both on-line and off-line partial evaluation.
2. This work required a formal specification of on-line partial evaluation with polyvariant specialization. We define criteria for its correctness with respect to an instrumented standard semantics. As a by-product, on-line partial evaluation appears to be based on a fixpoint iteration process, just like binding-time analysis.
3. We show that binding-time analysis, the preprocessing phase of off-line partial evaluation, is an abstraction of on-line partial evaluation. Therefore, its correctness can be proved with respect to on-line partial evaluation, instead of with respect to the standard semantics, as is customarily done.
4. Based on the binding-time analysis, we formally derive the specialization semantics for off-line partial evaluation. This strategy ensures the correctness of the resulting semantics.
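To make the on-line/off-line distinction concrete, here is a tiny on-line partial evaluator for an expression language (the language and all names are ours, and it is far simpler than the paper's setting: no functions, hence no polyvariant specialization). "On-line" means the decision to compute now or to residualize is taken by inspecting the values as they arrive:

```haskell
-- A minimal expression language.
data Exp = Lit Int | Var String | Add Exp Exp
  deriving (Eq, Show)

-- The static environment maps the *known* variables to values;
-- variables absent from it are dynamic and stay in the residual program.
type StaticEnv = [(String, Int)]

pe :: StaticEnv -> Exp -> Exp
pe _   (Lit n)   = Lit n
pe env (Var x)   = maybe (Var x) Lit (lookup x env)
pe env (Add a b) =
  case (pe env a, pe env b) of
    (Lit m, Lit n) -> Lit (m + n)   -- both operands static: compute now
    (a',    b')    -> Add a' b'     -- otherwise: emit residual code
```

For example, `pe [("x",3)] (Add (Var "x") (Lit 4))` computes to `Lit 7`, while `pe [] (Add (Var "y") (Lit 1))` is left residual. An off-line evaluator would instead take the compute/residualize decisions in a prior binding-time analysis pass, before any values are seen, which is the relationship the paper formalizes as an abstraction.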

Journal ArticleDOI
TL;DR: This paper presents functional Id and Haskell versions of a large Monte Carlo radiation transport code, and compares the two languages with respect to their expressiveness, and discusses the effect of laziness on programming style.
Abstract: In this paper we present functional Id and Haskell versions of a large Monte Carlo radiation transport code, and compare the two languages with respect to their expressiveness. Monte Carlo transport simulation exercises such abilities as parsing, input/output, recursive data structures and traditional number crunching, which makes it a good test problem for languages and compilers. Using some code examples, we compare the programming styles encouraged by the two languages. In particular, we discuss the effect of laziness on programming style. We point out that resource management problems currently prevent running realistically large problem sizes in the functional versions of the code. The Monte Carlo technique has a long history. Its importance has grown in tandem with the availability of cheap computing power. The authors outline the functionality of a large Monte Carlo simulation program, and demonstrate that a simplified kernel version can be cleanly coded in a functional style. They illustrate some effects of functional language implementation on programming style. It is characteristic of the Monte Carlo method that code validation and debugging depend on high-statistics results. The authors frankly describe the problems encountered in obtaining such results from the functional codes. Their experiences highlight the need for future research to address specific implementation problems. Chief among these needs are effective debugging tools for inspecting partial results and efficient yet unobtrusive methods of memory management. Reports of this kind provide important empirical data on the practice of functional programming that can help guide both application development and language support research.

Journal ArticleDOI
TL;DR: A λ-calculus notation is introduced which enables the detection of more β-redexes in a term than the usual notation allows.
Abstract: We introduce a λ-calculus notation which enables us to detect in a term more β-redexes than in the usual notation. On this basis, we define an extended β-reduction, which is still a subrelation of conversion. The Church–Rosser property holds for this extended reduction. Moreover, we show that we can transform generalised redexes into usual ones by a process called 'term reshuffling'.
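In standard notation (the paper works in its own notation) the phenomenon looks like this; the example and side condition are ours:

```latex
% In  (\lambda x.\,\lambda y.\,M)\,N\,P  the pair  (\lambda y,\; P)  is a
% *generalised* redex: the usual notation hides it behind the outer redex,
% even though \lambda y will eventually meet the argument P.
% Term reshuffling moves matching abstractions and arguments together,
% e.g. (provided $x \notin FV(P)$):
%
%   (\lambda x.\,\lambda y.\,M)\,N\,P
%     \;\leadsto\; (\lambda x.\,(\lambda y.\,M)\,P)\,N
%
% after which  (\lambda y.\,M)\,P  is an ordinary beta-redex that can be
% contracted early.
```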

Journal ArticleDOI
TL;DR: Communication lifting is a worthwhile optimisation to be included in a compiler for a lazy functional language and demonstrates performance gains in practice both with sequential and parallel evaluation.
Abstract: Communication lifting is a program transformation that can be applied to a synchronous process network to restructure the network. This restructuring in theory improves sequential and parallel performance. The transformation has been formally specified and proved correct and it has been implemented as an automatic program transformation tool. This tool has been applied to a small set of programs consisting of synchronous process networks. For these networks communication lifting generates parallel programs that do not require locking. Measurements indicate performance gains in practice both with sequential and parallel evaluation. Communication lifting is a worthwhile optimization to be included in a compiler for a lazy functional language.

Journal ArticleDOI
TL;DR: This paper describes a data-intensive application written in a lazy functional language: a server for textual information retrieval that illustrates the importance of interoperability, the capability of interacting with code written in other programming languages.
Abstract: This paper describes a data-intensive application written in a lazy functional language: a server for textual information retrieval. The design illustrates the importance of interoperability, the capability of interacting with code written in other programming languages. Lazy functional programming is shown to be a powerful and elegant means of accomplishing several desirable concrete goals: delivering initial results promptly, using space economically, and avoiding unnecessary I/O. Performance results, however, are mixed.

Journal ArticleDOI
TL;DR: A Geometric Evaluation Library (GEL) is described which becomes the basis of developing CSG applications quickly and concisely and the benefits of the functional paradigm in this domain and the merits of programming with a set of higher-order functions.
Abstract: Solid modelling using constructive solid geometry (CSG) includes many examples of stylised divide-and-conquer algorithms. We identify the sources of these recurrent patterns and describe a Geometric Evaluation Library (GEL) which captures them as higher-order functions. This library then becomes the basis of developing CSG applications quickly and concisely. GEL is currently implemented as a set of separately compiled modules in the pure functional language Hope+. We evaluate our work in terms of performance and general applicability. We also assess the benefits of the functional paradigm in this domain and the merits of programming with a set of higher-order functions.


Journal ArticleDOI
TL;DR: The Nucleic Acid three-dimensional structure determination problem (NA3D) and a constraint satisfaction algorithm are formally described and Prototyping and experimental development using the Miranda functional programming language, over the last four years, are discussed.
Abstract: This paper presents an application of functional programming in the field of molecular biology: exploring the conformations of nucleic acids. The Nucleic Acid three-dimensional structure determination problem (NA3D) and a constraint satisfaction algorithm are formally described. Prototyping and experimental development using the Miranda functional programming language, over the last four years, are discussed. A Prolog implementation has been developed to evaluate software engineering and performance criteria between functional and logic programming. A C++ implementation has been developed for distribution purpose and to solve large practical problems. This system, called MC-SYM for ‘Macromolecular Conformation by SYMbolic generation’, is used in more than 50 laboratories, including academic and government research centres and pharmaceutical companies.




Journal Article
TL;DR: Thunk-lifting as mentioned in this paper is a program transformation for lazy functional programs that aims at reducing the amount of heap space allocated to the program when it executes by transforming a function application that contains nested functions into a new function application without nesting.
Abstract: Thunk-lifting is a program transformation for lazy functional programs. The transformation aims at reducing the amount of heap space allocated to the program when it executes. Thunk-lifting transforms a function application that contains as arguments further, nested, function applications into a new function application without nesting. The transformation thus essentially folds some function applications. The applications to be folded are selected on the basis of a set of conditions, which have been chosen such that thunk-lifting never increases the amount of heap space required by a transformed program. Thunk-lifting has been implemented and applied to a number of medium-size benchmark programs. The results show that the number of cell claims in the heap decreases on average by 5%, with a maximum of 16%.
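A schematic before/after in Haskell (our illustrative example, not taken from the paper, which additionally imposes conditions guaranteeing heap use never grows):

```haskell
f :: Int -> Int
f y = y + y

g :: Int -> Int
g x = x * 2

-- Before: the argument (g x) is a nested application, so a lazy
-- implementation allocates a thunk for it in the heap at the call site.
caller :: Int -> Int
caller x = f (g x)

-- After thunk-lifting: the nested application is folded into a fresh
-- function, so the call site passes x directly and builds no thunk
-- for (g x).
f_g :: Int -> Int
f_g x = f (g x)

caller' :: Int -> Int
caller' x = f_g x
```

Both versions compute the same results; the transformation only relocates where the nested application is built, which is why its profit shows up as fewer heap cell claims rather than a change in behaviour.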


Journal ArticleDOI
TL;DR: It is shown that any recursively enumerable subset of a data structure can be regarded as the solution set to a Böhm-out problem.
Abstract: We show that any recursively enumerable subset of a data structure can be regarded as the solution set to a Böhm-out problem.