Author

Delia Kesner

Bio: Delia Kesner is an academic researcher from Paris Diderot University. The author has contributed to research in the topics of lambda calculus and rewriting. The author has an h-index of 19 and has co-authored 53 publications receiving 924 citations. Previous affiliations of Delia Kesner include Institut Universitaire de France and University of Paris.


Papers
Journal ArticleDOI
TL;DR: A general framework for pattern calculi is developed, which supports a proof of confluence parameterised by the nature of the matching algorithm and suitable for the pure pattern calculus and all other known pattern calculi.
Abstract: Pure pattern calculus supports pattern-matching functions in which patterns are first-class citizens that can be passed as parameters, evaluated and returned as results. This new expressive power supports two new forms of polymorphism. Path polymorphism allows recursive functions to traverse arbitrary data structures. Pattern polymorphism allows patterns to be treated as parameters which may be collected from various sources or generated from training data. A general framework for pattern calculi is developed. It supports a proof of confluence that is parameterised by the nature of the matching algorithm, suitable for the pure pattern calculus and all other known pattern calculi.
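To make path polymorphism concrete, here is a minimal Haskell sketch; Haskell patterns are not first-class, so this is only an analogy to the pattern calculus, and the Term type and function names are invented for illustration. Data structures are atoms (constructors) applied to arguments, and a single generic function queries every path of an arbitrary structure:

-- Minimal illustration of path polymorphism (not the pattern calculus itself).
data Term = Atom String        -- a constructor, e.g. "Cons", "Nil", "Leaf"
          | App Term Term      -- application: compound data
  deriving Show

-- Collect every atom reachable along any path of the structure.
-- The same function works for lists, trees, XML-like terms, and so on.
atomsAlongPaths :: Term -> [String]
atomsAlongPaths (Atom c)  = [c]
atomsAlongPaths (App f a) = atomsAlongPaths f ++ atomsAlongPaths a

-- Example: the list [1,2] encoded as Cons 1 (Cons 2 Nil).
exampleList :: Term
exampleList = App (App (Atom "Cons") (Atom "1"))
                  (App (App (Atom "Cons") (Atom "2")) (Atom "Nil"))

-- atomsAlongPaths exampleList  ==  ["Cons","1","Cons","2","Nil"]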

78 citations

Book ChapterDOI
27 Mar 2006
TL;DR: The pure pattern calculus generalises the pure lambda-calculus by basing computation on pattern-matching instead of beta-reduction, and supports a uniform approach to data structures which underpins two new forms of polymorphism.
Abstract: The pure pattern calculus generalises the pure lambda-calculus by basing computation on pattern-matching instead of beta-reduction. The simplicity and power of the calculus derive from allowing any term to be a pattern. As well as supporting a uniform approach to functions, it supports a uniform approach to data structures which underpins two new forms of polymorphism. Path polymorphism supports searches or queries along all paths through an arbitrary data structure. Pattern polymorphism supports the dynamic creation and evaluation of patterns, so that queries can be customised in reaction to new information about the structures to be encountered. In combination, these features provide a natural account of tasks such as programming with XML paths. As the variables used in matching can now be eliminated by reduction it is necessary to separate them from the binding variables used to control scope. Then standard techniques suffice to ensure that reduction progresses and to establish confluence of reduction.
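The shift from beta-reduction to matching can be sketched with a first-order matcher: applying a case (p -> s) to an argument u is driven by computing a substitution from the pattern p to u. The Haskell sketch below is illustrative only; it assumes linear patterns and omits the calculus's distinction between binding and matchable variables as well as the handling of match failure.

import qualified Data.Map as M

data Pat = PVar String | PAtom String | PApp Pat Pat deriving Show
data Trm = TAtom String | TApp Trm Trm               deriving Show

type Subst = M.Map String Trm

-- Match a pattern against a term, producing a substitution if they agree
-- (patterns are assumed linear, so the left-biased union is harmless).
match :: Pat -> Trm -> Maybe Subst
match (PVar x)   t          = Just (M.singleton x t)
match (PAtom c)  (TAtom c') | c == c' = Just M.empty
match (PApp p q) (TApp t u) = M.union <$> match p t <*> match q u
match _          _          = Nothing

-- match (PApp (PAtom "Cons") (PVar "x")) (TApp (TAtom "Cons") (TAtom "1"))
--   ==  Just (fromList [("x", TAtom "1")])

Reduction then replaces the matched variables in the body s by the terms found by match, rather than substituting a single bound variable as beta-reduction does.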

63 citations

Proceedings ArticleDOI
08 Jan 2014
TL;DR: This paper focuses on standardization for the linear substitution calculus, a calculus with ES capable of mimicking reduction in lambda-calculus and linear logic proof-nets, and relies on Gonthier, Lévy, and Melliès' axiomatic theory for standardization.
Abstract: Standardization is a fundamental notion for connecting programming languages and rewriting calculi. Since both programming languages and calculi rely on substitution for defining their dynamics, explicit substitutions (ES) help further close the gap between theory and practice. This paper focuses on standardization for the linear substitution calculus, a calculus with ES capable of mimicking reduction in lambda-calculus and linear logic proof-nets. For the latter, proof-nets can be formalized by means of a simple equational theory over the linear substitution calculus. Contrary to other extant calculi with ES, our system can be equipped with a residual theory in the sense of Lévy, which is used to prove a left-to-right standardization theorem for the calculus with ES but without the equational theory. Such a theorem, however, does not lift from the calculus with ES to proof-nets, because the notion of left-to-right derivation is not preserved by the equational theory. We then relax the notion of left-to-right standard derivation, based on a total order on redexes, to a more liberal notion of standard derivation based on partial orders. Our proofs rely on Gonthier, Lévy, and Melliès' axiomatic theory for standardization. However, we go beyond merely applying their framework, revisiting some of its key concepts: we obtain uniqueness (modulo) of standard derivations in an abstract way and we provide a coinductive characterization of their key abstract notion of external redex. This last point is then used to give a simple proof that linear head reduction, a nondeterministic strategy having a central role in the theory of linear logic, is standard.
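For orientation, the following Haskell sketch shows the syntax of the linear substitution calculus and simplified, root-level versions of its three rewrite rules (dB, ls, gc). It ignores the at-a-distance and contextual formulation, the equational theory, and alpha-conversion issues treated in the paper; all names are illustrative.

data Term = Var String
          | Lam String Term
          | App Term Term
          | Sub Term String Term     -- t[x/u]: an explicit substitution
  deriving Show

freeIn :: String -> Term -> Bool
freeIn x (Var y)     = x == y
freeIn x (Lam y t)   = x /= y && freeIn x t
freeIn x (App t u)   = freeIn x t || freeIn x u
freeIn x (Sub t y u) = (x /= y && freeIn x t) || freeIn x u

-- dB: a beta-redex does not substitute immediately; it creates an ES.
stepDB :: Term -> Maybe Term
stepDB (App (Lam x t) u) = Just (Sub t x u)
stepDB _                 = Nothing

-- ls: replace ONE free occurrence of x in t by u, keeping the ES around.
stepLS :: Term -> Maybe Term
stepLS (Sub t x u) = (\t' -> Sub t' x u) <$> replaceOne x u t
stepLS _           = Nothing

-- gc: drop an ES whose variable is no longer used.
stepGC :: Term -> Maybe Term
stepGC (Sub t x _) | not (freeIn x t) = Just t
stepGC _                              = Nothing

-- Replace the leftmost free occurrence of x (capture is ignored in this sketch).
replaceOne :: String -> Term -> Term -> Maybe Term
replaceOne x u (Var y) | x == y   = Just u
replaceOne _ _ (Var _)            = Nothing
replaceOne x u (Lam y t) | x /= y = Lam y <$> replaceOne x u t
replaceOne _ _ (Lam _ _)          = Nothing
replaceOne x u (App t s) = case replaceOne x u t of
                             Just t' -> Just (App t' s)
                             Nothing -> App t <$> replaceOne x u s
replaceOne x u (Sub t y s) =
  case (if x /= y then replaceOne x u t else Nothing) of
    Just t' -> Just (Sub t' y s)
    Nothing -> Sub t y <$> replaceOne x u s

Because ls substitutes one occurrence at a time, reduction proceeds in fine-grained steps resembling cut elimination in linear logic proof-nets, which is the sense in which the abstract speaks of mimicking proof-net reduction.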

61 citations

Book ChapterDOI
Delia Kesner
11 Sep 2007
TL;DR: Very simple technology is used to establish a general theory of explicit substitutions for the lambda-calculus which enjoys fundamental properties such as simulation of one-step beta-reduction, confluence on metaterms, preservation of beta-strong normalisation, strong normalisation of typed terms and full composition.
Abstract: Calculi with explicit substitutions (ES) are widely used in different areas of computer science. Complex systems with ES have been developed over the last 15 years to capture the good computational behaviour of the original systems (with meta-level substitutions) they were implementing. In this paper we first survey previous work in the domain by pointing out the motivations and challenges that guided the development of such calculi. Then we use very simple technology to establish a general theory of explicit substitutions for the lambda-calculus which enjoys fundamental properties such as simulation of one-step beta-reduction, confluence on metaterms, preservation of beta-strong normalisation, strong normalisation of typed terms and full composition. The calculus also admits a natural translation into Linear Logic's proof-nets.
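A generic way to read "simulation of one-step beta-reduction" is that a beta-redex only creates an explicit substitution, and separate rules then push that substitution through the term. The Haskell sketch below shows a textbook-style rule set, not the exact system of the paper, and assumes alpha-renaming wherever capture could occur.

data Term = Var String | Lam String Term | App Term Term
          | Sub Term String Term                  -- t[x/u]
  deriving Show

-- B: fire the redex by creating an explicit substitution.
beta :: Term -> Maybe Term
beta (App (Lam x t) u) = Just (Sub t x u)
beta _                 = Nothing

-- Propagation rules, one small step each (alpha-renaming assumed for the Lam case).
prop :: Term -> Maybe Term
prop (Sub (Var y)   x u) | y == x    = Just u               -- the variable is hit
                         | otherwise = Just (Var y)         -- garbage collection
prop (Sub (App t s) x u) = Just (App (Sub t x u) (Sub s x u))
prop (Sub (Lam y t) x u) | y /= x    = Just (Lam y (Sub t x u))
prop _                   = Nothing

Iterating prop until no explicit substitution remains recovers ordinary capture-avoiding substitution, which is the simulation property referred to above.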

53 citations

Journal ArticleDOI
TL;DR: This article explores the use of non-idempotent intersection types in the framework of the λ-calculus, replacing the reducibility technique with trivial combinatorial arguments.
Abstract: This article explores the use of non-idempotent intersection types in the framework of the λ-calculus. Different topics are presented in a uniform framework: head normalization, weak normalization, weak head normalization, strong normalization, inhabitation, exact bounds and principal typings. The reducibility technique, traditionally used when working with idempotent types, is replaced in this framework by trivial combinatorial arguments.
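For orientation, here is a standard presentation of non-idempotent (multiset) intersection types in the style of Gardner and de Carvalho, written in LaTeX; the article's systems may differ in detail. Types are \sigma, \tau ::= \alpha \mid \mathcal{M} \to \tau, where \mathcal{M} ::= [\sigma_1, \ldots, \sigma_n] is a finite, possibly empty multiset. The characteristic rules are:

\frac{}{x : [\sigma] \vdash x : \sigma}\;(\mathrm{ax})
\qquad
\frac{\Gamma, x : \mathcal{M} \vdash t : \tau}{\Gamma \vdash \lambda x.\, t : \mathcal{M} \to \tau}\;(\mathrm{abs})
\qquad
\frac{\Gamma \vdash t : [\sigma_1, \ldots, \sigma_n] \to \tau \quad (\Delta_i \vdash u : \sigma_i)_{i=1}^{n}}{\Gamma + \Delta_1 + \cdots + \Delta_n \vdash t\,u : \tau}\;(\mathrm{app})

Here + is multiset union of contexts, so repeated hypotheses are counted rather than collapsed (non-idempotence). Since every use of an argument must be typed separately, the size of a typing derivation does not grow along reduction and, roughly speaking, bounds the number of evaluation steps; this measure is the kind of combinatorial argument that replaces reducibility.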

53 citations


Cited by
Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known especially about mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Book ChapterDOI
01 Jan 2002
TL;DR: This chapter presents the basic concepts of term rewriting that are needed in this book and suggests several survey articles that can be consulted.
Abstract: In this chapter we will present the basic concepts of term rewriting that are needed in this book. More details on term rewriting, its applications, and related subjects can be found in the textbook of Baader and Nipkow [BN98]. Readers versed in German are also referred to the textbooks of Avenhaus [Ave95], Bündgen [Bun98], and Drosten [Dro89]. Moreover, there are several survey articles [HO80, DJ90, Klo92, Pla93] that can also be consulted.
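As a concrete illustration of those basic concepts, a term rewriting system can be as small as Peano addition; the Haskell sketch below, which is not taken from the chapter, encodes its two rules and an innermost normalizer.

data T = Zero | S T | Add T T deriving Show

-- Rewrite rules:
--   add(0, y)    -> y
--   add(s(x), y) -> s(add(x, y))
-- normalize applies them innermost-first until no rule matches.
normalize :: T -> T
normalize Zero      = Zero
normalize (S x)     = S (normalize x)
normalize (Add x y) = plus (normalize x) (normalize y)
  where
    plus Zero  m = m
    plus (S n) m = S (plus n m)
    plus n     m = Add n m          -- no rule applies: the term is stuck

-- normalize (Add (S Zero) (S Zero))  ==  S (S Zero)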

501 citations

Book ChapterDOI
11 Nov 1992
TL;DR: In this paper, the authors present quantitative performance measurements for the computing power of Programmable Active Memories (PAM), a universal hardware co-processor based on Field Programmable Gate Array (FPGA) technology and closely coupled to a standard host computer.
Abstract: We present some quantitative performance measurements for the computing power of Programmable Active Memories (PAM), as introduced by [BRV 89]. Based on Field Programmable Gate Array (FPGA) technology, the PAM is a universal hardware co-processor closely coupled to a standard host computer. The PAM can speed up many critical software applications running on the host by executing part of the computations through a specific hardware design. The performance measurements presented are based on two PAM architectures and ten specific applications, drawn from arithmetic, algebra, geometry, physics, biology, audio and video. Each of these PAM designs proves as fast as any reported hardware or supercomputer for the corresponding application. In cases where we could bring some genuine algorithmic innovation into the design process, the PAM has proved an order of magnitude faster than any previously existing system (see [SBV 91] and [S 92]).

194 citations

Journal ArticleDOI
TL;DR: In this article, a forward shooting grid (FSG) method is proposed to deal with degenerate diffusion partial differential equations (PDEs) and with the early exercise condition of American options.
Abstract: We consider the problem of pricing path-dependent contingent claims. Classically, this problem can be cast into the Black-Scholes valuation framework through inclusion of the path-dependent variables into the state space. This leads to solving a degenerate advection-diffusion partial differential equation (PDE). We first establish necessary and sufficient conditions under which degenerate diffusions can be reduced to lower-dimensional nondegenerate diffusions. We apply these results to path-dependent options. Then, we describe a new numerical technique, called the forward shooting grid (FSG) method, that efficiently copes with degenerate diffusion PDEs. Finally, we show that the FSG method is unconditionally stable and convergent. The FSG method is the first capable of dealing with the early exercise condition of American options. Several numerical examples are presented and discussed.
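The early-exercise point is easiest to see on the lattice backbone that the FSG method augments with an auxiliary grid for the path-dependent variable. The Haskell sketch below is only that backbone, a plain Cox-Ross-Rubinstein binomial pricer for an American put; it is not the FSG method itself, and all parameter names are illustrative.

-- CRR binomial valuation of an American put, checking early exercise at every node.
americanPut
  :: Double  -- s0:    spot price
  -> Double  -- k:     strike
  -> Double  -- r:     risk-free rate (continuous compounding)
  -> Double  -- sigma: volatility
  -> Double  -- tMat:  time to maturity in years
  -> Int     -- n:     number of time steps
  -> Double
americanPut s0 k r sigma tMat n = head (foldl rollBack terminal [n-1, n-2 .. 0])
  where
    dt   = tMat / fromIntegral n
    u    = exp (sigma * sqrt dt)
    d    = 1 / u
    p    = (exp (r * dt) - d) / (u - d)
    disc = exp (-(r * dt))

    price i j    = s0 * u ^^ j * d ^^ (i - j)  -- asset price after i steps, j up-moves
    exercise i j = max (k - price i j) 0       -- immediate exercise value of the put

    terminal = [exercise n j | j <- [0 .. n]]

    -- One step of backward induction: the option value is the better of exercising
    -- now and holding (the discounted risk-neutral expectation of the next step).
    rollBack vs i =
      [ max (exercise i j) (disc * (p * (vs !! (j + 1)) + (1 - p) * (vs !! j)))
      | j <- [0 .. i] ]

The FSG method keeps, at every such node, a vector of values indexed by a discretised path-dependent quantity (for example a running average or maximum) that is shot forward along the tree, which is how it copes with the degenerate PDE while retaining the early-exercise comparison above.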

186 citations