Topic

# MLton

About: MLton is a research topic. Over its lifetime, 32 publications have appeared within this topic, receiving 433 citations.

##### Papers



TL;DR: This talk will describe MLton's approach to whole-program compilation, covering the optimizations and the intermediate languages, as well as some of the engineering challenges that were overcome to make it feasible to use MLton on programs with over one hundred thousand lines.

Abstract: MLton is a stable, robust, widely ported, Standard ML (SML) compiler that generates efficient executables. Whole-program compilation is the key to MLton's success, significantly improving both correctness and efficiency. Whole-program compilation makes possible a number of optimizations that reduce or eliminate the cost of SML's powerful abstraction mechanisms, such as parametric modules, polymorphism, and higher-order functions. It also allows MLton to use a simply-typed, first-order, intermediate language. By structuring the bulk of MLton's optimizer as small passes on whole programs in this simple intermediate language, it is easy to implement and debug new optimizations. This intermediate language uses a variant of standard control-flow graphs and static single assignment form, which makes it easy to implement traditional local optimizations as well. Having the whole program also enables standard data representations such as unboxed integers and arrays, as well as efficient representations for user-defined data structures. This talk will describe MLton's approach to whole-program compilation, covering the optimizations and the intermediate languages, as well as some of the engineering challenges that were overcome to make it feasible to use MLton on programs with over one hundred thousand lines. It will also cover the history of the MLton project from its inception in 1997 until now, and give some lessons learned and thoughts on the future of MLton.
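The abstract mentions that whole-program compilation lets MLton eliminate polymorphism, producing a simply-typed program. A rough analogue of this "eliminate polymorphism by code duplication" step (monomorphization) can be sketched in Python; the function names and types here are illustrative, not MLton's actual SML transformation:

```python
# Monomorphization sketch: a polymorphic function is duplicated once per
# type at which the whole program uses it, so later passes only ever see
# simply-typed, first-order code. Illustrative only; MLton operates on SML.

# The "polymorphic" source function: 'a * 'b -> 'b * 'a
def swap(pair):
    x, y = pair
    return (y, x)

# After monomorphization, each use type gets its own simply-typed copy:
def swap_int_int(pair):      # int * int -> int * int
    x, y = pair
    return (y, x)

def swap_str_int(pair):      # string * int -> int * string
    x, y = pair
    return (y, x)

print(swap_int_int((1, 2)))      # (2, 1)
print(swap_str_int(("a", 3)))    # (3, 'a')
```

Because every copy is simply typed, representation decisions (such as unboxing integers) can then be made per copy rather than for one generic version.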

73 citations


02 Apr 2000

TL;DR: Experimental results over a range of benchmarks including large real-world programs such as the compiler itself and the ML-Kit indicate that the compile-time cost of flow analysis and closure conversion is extremely small, and that the dispatches and coercions inserted by the algorithm are dynamically infrequent.

Abstract: This paper presents a new closure conversion algorithm for simply-typed languages. We have implemented the algorithm as part of MLton, a whole-program compiler for Standard ML (SML). MLton first applies all functors and eliminates polymorphism by code duplication to produce a simply-typed program. MLton then performs closure conversion to produce a first-order, simply-typed program. In contrast to typical functional language implementations, MLton performs most optimizations on the first-order language, after closure conversion. There are two notable contributions of our work:
1. The translation uses a general flow-analysis framework which includes 0CFA. The types in the target language fully capture the results of the analysis. MLton uses the analysis to insert coercions to translate between different representations of a closure to preserve type correctness of the target language program.
2. The translation is practical. Experimental results over a range of benchmarks including large real-world programs such as the compiler itself and the ML-Kit [25] indicate that the compile-time cost of flow analysis and closure conversion is extremely small, and that the dispatches and coercions inserted by the algorithm are dynamically infrequent.
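The core idea of closure conversion described above can be sketched as follows: each lambda becomes a first-order environment record (a tag plus its captured variables), and indirect calls become dispatch on the tag. This is an illustrative Python analogue with made-up names, not MLton's SML implementation:

```python
# Closure-conversion sketch: higher-order functions are replaced by plain
# data (tag + captured environment), and application becomes a first-order
# dispatch. Flow analysis determines which tags can reach each call site.

# Source program (higher-order):
#   val add = fn n => fn x => x + n
#   val mul = fn n => fn x => x * n

# After closure conversion, closures are records:
def make_add(n):
    return ("ADD", n)    # environment record for `fn x => x + n`

def make_mul(n):
    return ("MUL", n)    # environment record for `fn x => x * n`

def apply_closure(clo, x):
    tag, n = clo
    if tag == "ADD":     # dispatch replaces the indirect call
        return x + n
    elif tag == "MUL":
        return x * n
    raise ValueError(tag)

fns = [make_add(10), make_mul(3)]
print([apply_closure(f, 5) for f in fns])   # [15, 15]
```

The paper's experimental claim is that such dispatches are dynamically infrequent in practice, because flow analysis often proves that only one tag can reach a given call site, letting the branch be eliminated.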

56 citations


20 Jan 2013

TL;DR: This work encodes higher-order features into first-order logic formulas whose solution can be derived using a lightweight counterexample-guided refinement loop, extracting initial verification conditions from dependent typing rules derived by a syntactic scan of the program.

Abstract: We consider the problem of inferring expressive safety properties of higher-order functional programs using first-order decision procedures. Our approach encodes higher-order features into first-order logic formulas whose solution can be derived using a lightweight counterexample-guided refinement loop. To do so, we extract initial verification conditions from dependent typing rules derived by a syntactic scan of the program. Subsequent type-checking and type-refinement phases infer and propagate specifications of higher-order functions, which are treated as uninterpreted first-order constructs, via subtyping chains. Our technique provides several benefits not found in existing systems: (1) it enables compositional verification and inference of useful safety properties for functional programs; (2) it additionally provides counterexamples that serve as witnesses of unsound assertions; (3) it does not entail a complex translation or encoding of the original source program into a first-order representation; and (4) most importantly, it profitably employs the large body of existing work on verification of first-order imperative programs to enable efficient analysis of higher-order ones. We have implemented the technique as part of the MLton SML compiler toolchain, where it has been shown to be effective in discovering useful invariants with low annotation burden.
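The counterexample-guided refinement loop mentioned above can be illustrated in miniature: propose a candidate specification, search for an input that violates it, and generalize the candidate from the counterexample until checking succeeds. This toy Python loop is only a sketch of the general CEGAR pattern, not the paper's solver-backed, type-directed algorithm:

```python
# Toy counterexample-guided refinement: find the tightest bound `c` such
# that f(x) <= c holds for all x in a finite domain. Each counterexample
# relaxes the candidate spec until no violation remains.

def f(x):
    return 2 * x + 3

DOMAIN = range(101)

bound = 0                          # initial (too strong) candidate spec
while True:
    # look for a counterexample to the candidate spec `f(x) <= bound`
    cex = next((x for x in DOMAIN if not f(x) <= bound), None)
    if cex is None:
        break                      # candidate verified on the domain
    bound = f(cex)                 # generalize from the counterexample

print(bound)   # 203, i.e. f(100), the tightest bound on this domain
```

Real implementations replace the exhaustive search with an SMT solver query and refine richer predicates than a single numeric bound, but the check/refine alternation is the same.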

36 citations


01 Oct 2001

TL;DR: This paper gives a formal presentation of contification in MLton, a whole-program optimizing Standard ML compiler, as well as a new algorithm based on the dominator tree of a program's call graph that is optimal.

Abstract: Contification is a compiler optimization that turns a function that always returns to the same place into a continuation. Compilers for functional languages use contification to expose the control-flow information that is required by many optimizations, including traditional loop optimizations. This paper gives a formal presentation of contification in MLton, a whole-program optimizing Standard ML compiler. We present two existing algorithms for contification in our framework, as well as a new algorithm based on the dominator tree of a program's call graph. We prove that the dominator algorithm is optimal. We present benchmark results on realistic SML programs demonstrating that contification has minimal overhead on compile time and significantly improves run time.
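The transformation the abstract describes, turning a function that always returns to the same place into a continuation, amounts to replacing call/return with a local jump. A hedged Python analogue (the SML original would contify a tail-recursive helper into a loop in the intermediate language):

```python
# Contification sketch: `loop` always returns to the same caller, so its
# recursive call can become a jump (here, a while loop) and its single
# return point becomes the continuation. Illustrative, not MLton's IL.

# Before: a separate function with its own call/return discipline.
def sum_before(xs):
    def loop(i, acc):
        if i == len(xs):
            return acc
        return loop(i + 1, acc + xs[i])
    return loop(0, 0)

# After contification: the former call is a jump, exposing an ordinary
# loop that traditional loop optimizations can then see.
def sum_after(xs):
    i, acc = 0, 0
    while i != len(xs):
        acc += xs[i]
        i += 1
    return acc               # single return point = the continuation

print(sum_before([1, 2, 3]), sum_after([1, 2, 3]))   # 6 6
```

This also shows why contification "exposes control-flow information required by many optimizations": once the helper is a loop rather than a function, loop-invariant code motion and similar passes apply directly.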

33 citations


12 Aug 2008

Abstract: CLF (Concurrent LF) [CPWW02a] is a logical framework for specifying and implementing deductive and concurrent systems from areas such as programming language theory, security protocol analysis, process algebras, and logics. Celf is an implementation of the CLF type theory that extends the LF type theory by linear types to support representation of state and a monad to support representation of concurrency. It relies on the judgments-as-types methodology for specification and the interpretation of CLF signatures as concurrent logic programs [LPPW05] for experimentation. Celf is written in Standard ML and compiles with MLton, MLKit, and SML/NJ. The source code and a collection of examples are available from http://www.twelf.org/~celf.

33 citations