Author

Damiano Mazza

Bio: Damiano Mazza is an academic researcher from Centre national de la recherche scientifique. The author has contributed to research in topics: Linear logic & Interaction nets. The author has an h-index of 13 and has co-authored 38 publications receiving 451 citations. Previous affiliations of Damiano Mazza include University of Paris & Nord University.

Papers
Book ChapterDOI
05 Apr 2014
TL;DR: This work presents a language $\ell\mathcal{R}$PCF, inspired by a generalized interpretation of the exponential modality, in which the modality carries a label providing additional information on how a program uses its context.
Abstract: Linear logic is well known for its resource-awareness, which has inspired the design of several resource management mechanisms in programming language design. Its resource-awareness arises from the distinction between linear, single-use data and non-linear, reusable data. The latter is marked by the so-called exponential modality, which, from the categorical viewpoint, is a monoidal comonad. Monadic notions of computation are well-established mechanisms used to express effects in pure functional languages. Less well-established is the notion of comonadic computation. However, recent works have shown the usefulness of comonads to structure context-dependent computations. In this work, we present a language $\ell\mathcal{R}$PCF inspired by a generalized interpretation of the exponential modality. In $\ell\mathcal{R}$PCF the exponential modality carries a label, an element of a semiring $\mathcal{R}$, that provides additional information on how a program uses its context. This additional structure is used to express comonadic type analysis.
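To make the graded discipline concrete, here is a minimal, hypothetical Haskell sketch, not the paper's $\ell\mathcal{R}$PCF: the semiring $\mathcal{R}$ is instantiated to type-level naturals, and the grade on a box counts how many times the boxed value may be used. The names Box, derelict, contract, and dig are illustrative, not taken from the paper.

{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
import GHC.TypeLits

-- The graded modality !_r A rendered as a box indexed by a grade r
-- drawn from a semiring (here: the naturals under + and *).
newtype Box (r :: Nat) a = Box a

-- Dereliction: a value graded 1 may be used exactly once.
derelict :: Box 1 a -> a
derelict (Box x) = x

-- Contraction: duplicating a box splits its grade via the semiring sum.
contract :: Box (r + s) a -> (Box r a, Box s a)
contract (Box x) = (Box x, Box x)

-- Digging: nesting boxes composes grades via the semiring product.
dig :: Box (r * s) a -> Box r (Box s a)
dig (Box x) = Box (Box x)

The type-level arithmetic is doing the bookkeeping that the semiring annotations perform in the paper's type system: a context used r + s times can serve one consumer r times and another s times.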

79 citations

Proceedings ArticleDOI
19 Aug 2014
TL;DR: The distillation process reveals that abstract machines in fact implement weak linear head reduction, a notion of evaluation with a central role in the theory of linear logic; the paper also shows that the LSC is a complexity-preserving abstraction of abstract machines.
Abstract: It is well-known that many environment-based abstract machines can be seen as strategies in lambda calculi with explicit substitutions (ES). Recently, graphical syntaxes and linear logic led to the linear substitution calculus (LSC), a new approach to ES that is halfway between small-step calculi and traditional calculi with ES. This paper studies the relationship between the LSC and environment-based abstract machines. While traditional calculi with ES simulate abstract machines, the LSC rather distills them: some transitions are simulated while others vanish, as they map to a notion of structural congruence. The distillation process unveils that abstract machines in fact implement weak linear head reduction, a notion of evaluation having a central role in the theory of linear logic. We show that such a pattern applies uniformly in call-by-name, call-by-value, and call-by-need, catching many machines in the literature. We start by distilling the KAM, the CEK, and a sketch of the ZINC, and then provide simplified versions of the SECD, the lazy KAM, and Sestoft's machine. Along the way we also introduce some new machines with global environments. Moreover, we show that distillation preserves the time complexity of the executions, i.e. the LSC is a complexity-preserving abstraction of abstract machines.
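For a concrete picture of the simplest machine being distilled, here is a minimal, textbook call-by-name Krivine Abstract Machine (KAM) in Haskell, using de Bruijn indices; this is a standard presentation, not the paper's formulation, and it assumes closed terms.

data Term = Var Int | Lam Term | App Term Term deriving Show

-- A closure pairs a term with the environment it was built in.
data Closure = Closure Term Env deriving Show
type Env     = [Closure]   -- de Bruijn index n looks up position n
type Stack   = [Closure]   -- arguments awaiting a lambda

-- One machine transition; Nothing signals a final state.
step :: (Closure, Stack) -> Maybe (Closure, Stack)
step (Closure (Var n) env, s)     = Just (env !! n, s)                    -- lookup
step (Closure (App t u) env, s)   = Just (Closure t env, Closure u env : s) -- push arg
step (Closure (Lam t) env, c : s) = Just (Closure t (c : env), s)         -- pop arg
step (Closure (Lam _) _, [])      = Nothing                               -- done

-- Iterate transitions to a final state.
run :: Term -> (Closure, Stack)
run t = go (Closure t [], [])
  where go st = maybe st go (step st)

For example, run (App (Lam (Var 0)) (Lam (Var 0))) applies the identity to itself and halts with the identity closure and an empty stack. The paper's point is that transitions like the variable lookup correspond to LSC substitution steps, while the purely administrative ones map to structural congruence.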

77 citations

Journal ArticleDOI
TL;DR: Girard's systems can be recovered by forcing depth and level to coincide; the extra flexibility of levels removes the need for boxes around the paragraph modality, which motivates a variant of the polytime system in which the paragraph modality is only allowed on atoms and which may serve as a basis for lambda-calculus type assignment systems with more efficient typing algorithms than existing ones.
Abstract: We give a new characterization of elementary and deterministic polynomial time computation in linear logic through the proofs-as-programs correspondence. Girard's seminal results, concerning elementary and light linear logic, achieve this characterization by enforcing a stratification principle on proofs, using the notion of depth in proof nets. Here, we propose a more general form of stratification, based on inducing levels in proof nets by means of indices, which allows us to extend Girard's systems while keeping the same complexity properties. In particular, it turns out that Girard's systems can be recovered by forcing depth and level to coincide. A consequence of the higher flexibility of levels with respect to depth is the absence of boxes for handling the paragraph modality. We use this fact to propose a variant of our polytime system in which the paragraph modality is only allowed on atoms, and which may thus serve as a basis for developing lambda-calculus type assignment systems with more efficient typing algorithms than existing ones.
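For orientation, the stratification enforced in Girard's elementary linear logic can be stated in the usual textbook way (this is the standard presentation, not the paper's level-based rules): promotion is functorial,

\[ \frac{A_1, \ldots, A_n \vdash B}{!A_1, \ldots, !A_n \vdash\; !B} \]

while dereliction ($!A \vdash A$) and digging ($!A \vdash\; !!A$) are dropped, so crossing a box changes the depth by exactly one and the depth of a formula occurrence is stable under cut elimination. The levels of the paper generalize this: instead of counting enclosing boxes, each node of a proof net carries an index, and only compatibility constraints between indices are enforced, with Girard's systems recovered when index and depth are forced to coincide.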

42 citations

Journal ArticleDOI
27 Dec 2017
TL;DR: This work develops a general framework for building systems of intersection types characterizing normalization properties and shows how the construction makes it possible to recover equivalent versions of every well-known intersection type system.
Abstract: Starting from an exact correspondence between linear approximations and non-idempotent intersection types, we develop a general framework for building systems of intersection types characterizing normalization properties. We show how this construction, which uses in a fundamental way Melliès and Zeilberger's "type systems as functors" viewpoint, allows us to recover equivalent versions of every well known intersection type system (including Coppo and Dezani's original system, as well as its non-idempotent variants independently introduced by Gardner and de Carvalho). We also show how new systems of intersection types may be built almost automatically in this way.
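As a concrete instance of the non-idempotent discipline (written here in the style of de Carvalho's system, with multisets in brackets; the notation is assumed, not quoted from the paper), the two occurrences of x in the self-application λx.xx are used at different types, and the multiset in the context records each use separately:

\[
\frac{x : [\,[a] \to a\,] \vdash x : [a] \to a \qquad x : [\,a\,] \vdash x : a}
     {x : [\,[a] \to a,\ a\,] \vdash x\,x : a}
\qquad\text{hence}\qquad
\vdash \lambda x.\, x\,x : [\,[a] \to a,\ a\,] \to a
\]

Because multisets are not idempotent, $[\sigma, \sigma] \neq [\sigma]$, so a derivation counts uses rather than merely recording them, which is what ties these types to the linear approximations of the paper.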

33 citations

Journal ArticleDOI
TL;DR: A compositional program transformation is defined from the simply-typed lambda-calculus to itself augmented with a notion of linear negation, and this transformation is proved to compute the gradient of the source program with the same efficiency as first-order backpropagation.
Abstract: Backpropagation is a classic automatic differentiation algorithm computing the gradient of functions specified by a certain class of simple, first-order programs, called computational graphs. It is a fundamental tool in several fields, most notably machine learning, where it is the key for efficiently training (deep) neural networks. Recent years have witnessed the quick growth of a research field called differentiable programming, the aim of which is to express computational graphs more synthetically and modularly by resorting to actual programming languages endowed with control flow operators and higher-order combinators, such as map and fold. In this paper, we extend the backpropagation algorithm to a paradigmatic example of such a programming language: we define a compositional program transformation from the simply-typed lambda-calculus to itself augmented with a notion of linear negation, and prove that this computes the gradient of the source program with the same efficiency as first-order backpropagation. The transformation is completely effect-free and thus provides a purely logical understanding of the dynamics of backpropagation.
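The flavor of pairing every value with a "linear negation" can be conveyed by a toy reverse-mode sketch in Haskell, where each real carries a backpropagator pushing output sensitivities back to the inputs. This is a naive illustration of the general idea, not the paper's transformation: in particular, it may recompute shared subterms, whereas the paper's contribution is precisely to match the efficiency of first-order backpropagation.

type Grad = [Double]            -- one gradient entry per input variable
data D = D { primal :: Double, backprop :: Double -> Grad }

-- Pointwise sum of gradient contributions.
addG :: Grad -> Grad -> Grad
addG = zipWith (+)

-- The i-th of n input variables: its backpropagator places the
-- incoming sensitivity at position i.
var :: Int -> Int -> Double -> D
var n i x = D x (\d -> [if j == i then d else 0 | j <- [0 .. n - 1]])

-- Lifted arithmetic: the chain rule lives in each backpropagator.
dAdd, dMul :: D -> D -> D
dAdd (D x bx) (D y by) = D (x + y) (\d -> addG (bx d) (by d))
dMul (D x bx) (D y by) = D (x * y) (\d -> addG (bx (d * y)) (by (d * x)))

-- Gradient of a function built from these combinators, at a point.
gradAt :: Int -> ([D] -> D) -> [Double] -> Grad
gradAt n f xs = backprop (f [var n i x | (i, x) <- zip [0 ..] xs]) 1

-- Example: f(x, y) = x * y + x has gradient (y + 1, x), so
-- gradAt 2 (\[x, y] -> dAdd (dMul x y) x) [3, 4] == [5, 3].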

32 citations


Cited by
Book ChapterDOI
01 Jan 2002
TL;DR: This chapter presents the basic concepts of term rewriting that are needed in this book and suggests several survey articles that can be consulted.
Abstract: In this chapter we will present the basic concepts of term rewriting that are needed in this book. More details on term rewriting, its applications, and related subjects can be found in the textbook of Baader and Nipkow [BN98]. Readers versed in German are also referred to the textbooks of Avenhaus [Ave95], Bündgen [Bün98], and Drosten [Dro89]. Moreover, there are several survey articles [HO80, DJ90, Klo92, Pla93] that can also be consulted.

501 citations

Journal ArticleDOI
TL;DR: The calculus's contribution to analyzing mobile processes is a major topic, dealt with extensively from part three onward; the book also shows how the π-calculus can be employed to study practical, modern software engineering concepts such as object-oriented programming.
Abstract: The π-Calculus: A Theory of Mobile Processes by Davide Sangiorgi and David Walker. Formal methods have formed the foundation of Computer Science since its inception. Although these formal methods initially dealt with processes and systems on an individual basis, the paradigm has shifted with the dawn of the age of computer networks. When dealing with systems of interconnected, communicating, dependent, cooperative, and competitive components, the older outlook of analyzing and developing singular systems, and the tools that went with it, was hardly suitable. This led to the development of theories and tools that would support the new paradigm. On the tools end, the development has been widespread and satisfactory: programming languages, development frameworks, databases, and even end-user software products such as word processors have gained network-awareness. On the theoretical end, however, the development has been far less satisfactory. The major work was done by Robin Milner, Joachim Parrow, and David Walker, who developed the formalism known as the π-calculus in 1989. The π-calculus is a process calculus that treats communication between components as the basic form of computation. It has been quite successful as a foundation for several other calculi in the field and, as Milner puts it, it has become common to express ideas about interaction and mobility in variants of the calculus.
Introduction
The book serves as a comprehensive reference on the π-calculus. Besides Milner's own book on the subject, this is the only other book-length publication on the topic. In many ways it is much more comprehensive than Milner's: a much wider range of topics is covered, and in more detail as well.
Contents
The book is split into seven parts. The first part presents the basic theory of the π-calculus. However, basic does not mean concise: every topic is discussed in great detail. The section on bisimulation is particularly intensive and provides several insights into the motivation for the theory. Part two discusses several variants of the original calculus, obtained by varying characteristics of the calculus, such as whether a process can communicate with more than one process at a time. A number of interesting properties of the language are studied by the authors when discussing these variants. As the title suggests, the calculus's contribution to analyzing mobile processes is a major topic, and it is dealt with extensively starting from part three. The basics are introduced by way of a sophisticated typing system, whose application in specifying complex processes is presented in part four. Part five looks at the higher-order π-calculus, in which composed systems are considered first-class citizens. Part six is one of the best in the book and discusses the relation between the π-calculus and the lambda-calculus, an older and more basic calculus. Finally, part seven shows how the π-calculus can be employed in studying practical, modern software engineering concepts such as object-oriented programming.
Impressions
One of my disappointments with this book is how often the reader is left perplexed by some of the theoretical developments, especially in part three. The π-calculus is a complicated topic, even for the graduate student audience to which this book is directed, and the authors would have done much better to reduce the number of topics and instead focus on more lucid and detailed explanations.
There are several experimental digressions throughout the book which, although interesting, take away some of the momentum of sequential study. For example, topics such as comparisons and encodings of one language into another could easily be moved to a separate section in order to make the content more suitable for self-study. Another issue is the little effort made to connect the theoretical to the practical. The main reason formal methods have not been adopted in mainstream software development practices is that it is often unclear to developers how formalisms can contribute to the software engineering process. The book would have served its purpose much better if it had spent part of each chapter discussing the practical applications of that chapter's content. For example, congruence checking and bisimulation can be incredibly exciting topics for programmers to learn if they can see practical applications of such powerful techniques. Beyond the above criticism, the book is absolutely indispensable to students and researchers in the field of formal methods. Currently it serves as the primary reference for anyone who wishes to learn the various aspects of the π-calculus in detail. Raheel Ahmad
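For readers meeting the calculus for the first time, its single interaction rule already conveys the idea of mobility (standard notation, not specific to this book): a process sending the name z on channel x synchronizes with a process receiving on x,

\[ \bar{x}\langle z \rangle.P \;\mid\; x(y).Q \;\longrightarrow\; P \;\mid\; Q\{z/y\} \]

and because the transmitted datum z is itself a channel name, the communication topology of a system can change as it computes.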

484 citations

Book ChapterDOI
01 Jan 2009

137 citations

Journal ArticleDOI
TL;DR: Term Rewriting and All That is a self-contained introduction to the field of term rewriting and covers all the basic material including abstract reduction systems, termination, confluence, completion, and combination problems.
Abstract: Term Rewriting and All That is a self-contained introduction to the field of term rewriting. The book starts with a simple motivating example and covers all the basic material, including abstract reduction systems, termination, confluence, completion, and combination problems. Some closely connected subjects, such as universal algebra, unification theory, Gröbner bases, and Buchberger's algorithm, are also covered.
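To ground the vocabulary (terms, rewrite step, normal form), here is a minimal illustrative Haskell sketch, not code from the book; match assumes left-linear patterns (no repeated variables), and normalization is not guaranteed to terminate in general, which is exactly the kind of question the book studies.

data Term = Var String | Fun String [Term] deriving (Eq, Show)
type Subst = [(String, Term)]
type Rule  = (Term, Term)          -- lhs -> rhs

-- Syntactic matching of a (left-linear) pattern against a term.
match :: Term -> Term -> Maybe Subst
match (Var v) t = Just [(v, t)]
match (Fun f ps) (Fun g ts)
  | f == g && length ps == length ts =
      concat <$> sequence (zipWith match ps ts)
match _ _ = Nothing

-- Apply a substitution to a term.
apply :: Subst -> Term -> Term
apply s (Var v)    = maybe (Var v) id (lookup v s)
apply s (Fun f ts) = Fun f (map (apply s) ts)

-- One rewrite step: try the rule at the root, else descend leftmost.
step :: Rule -> Term -> Maybe Term
step (l, r) t = case match l t of
  Just s  -> Just (apply s r)
  Nothing -> case t of
    Fun f ts -> Fun f <$> stepList ts
    _        -> Nothing
  where
    stepList []       = Nothing
    stepList (u : us) = case step (l, r) u of
      Just u' -> Just (u' : us)
      Nothing -> (u :) <$> stepList us

-- Rewrite until a normal form (no rule applies) is reached.
normalize :: Rule -> Term -> Term
normalize rule t = maybe t (normalize rule) (step rule t)

-- Example rule plus(zero, X) -> X:
-- normalize (Fun "plus" [Fun "zero" [], Var "X"], Var "X")
--           (Fun "plus" [Fun "zero" [], Fun "s" [Fun "zero" []]])
--   == Fun "s" [Fun "zero" []]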

99 citations