scispace - formally typeset
Author

Michele Pagani

Other affiliations: University of Paris, University of Turin, Roma Tre University
Bio: Michele Pagani is an academic researcher from Paris Diderot University. He has contributed to research on linear logic and probabilistic logic, has an h-index of 15, and has co-authored 41 publications receiving 788 citations. Previous affiliations of Michele Pagani include the University of Paris and the University of Turin.

Papers
Proceedings ArticleDOI
08 Jan 2014
TL;DR: In this paper, a denotational semantics for a quantum lambda calculus with recursion and an infinite data type is proposed, using constructions from the quantitative semantics of linear logic.
Abstract: Finding a denotational semantics for higher order quantum computation is a long-standing problem in the semantics of quantum programming languages. Most past approaches to this problem fell short in one way or another, either limiting the language to an unusably small finitary fragment, or giving up important features of quantum physics such as entanglement. In this paper, we propose a denotational semantics for a quantum lambda calculus with recursion and an infinite data type, using constructions from quantitative semantics of linear logic.

91 citations

Proceedings ArticleDOI
08 Jan 2014
TL;DR: It is proved that the equality of interpretations in PCoh characterizes the operational indistinguishability of programs in PCF with a random primitive, the first result of full abstraction for a semantics of probabilistic PCF.
Abstract: Probabilistic coherence spaces (PCoh) yield a semantics of higher-order probabilistic computation, interpreting types as convex sets and programs as power series. We prove that the equality of interpretations in PCoh characterizes the operational indistinguishability of programs in PCF with a random primitive. This is the first result of full abstraction for a semantics of probabilistic PCF. The key ingredient relies on the regularity of power series. Along the way to the theorem, we design a weighted intersection type assignment system giving a logical presentation of PCoh.
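To make the "programs as power series" reading concrete, here is a hypothetical sketch, not taken from the paper: under call-by-name, a term such as λb. b && b re-samples its coin argument at each use, so its output distribution is a polynomial with non-negative coefficients in the input distribution (pt, pf). All names here are invented for the illustration, and the simulation assumes pt + pf = 1.

```python
import random

# Hypothetical example (not from the paper): under call-by-name the term
# M = \b. b && b re-samples its argument at each use, so its denotation is a
# polynomial with non-negative coefficients in the input distribution
# (pt, pf) = (P(true), P(false)).

def denotation(pt, pf):
    """Output distribution of M applied to a (pt, pf)-coin:
    'true' needs two true samples; 'false' needs a false on the first
    or (after a true) on the second sample."""
    return {'true': pt ** 2, 'false': pf + pt * pf}

def run_once(pt):
    """Operational side (assuming pt + pf = 1): sample the coin at each use of b."""
    if random.random() >= pt:          # first use of b came up false
        return 'false'
    return 'true' if random.random() < pt else 'false'

def estimate(pt, trials=200_000):
    counts = {'true': 0, 'false': 0}
    for _ in range(trials):
        counts[run_once(pt)] += 1
    return {k: v / trials for k, v in counts.items()}

# denotation(0.6, 0.4) gives {'true': 0.36, 'false': 0.64},
# and estimate(0.6) stays close to those values.
```

The polynomial coefficients stay non-negative because each monomial counts a way the run can happen, which is the intuition behind the regularity argument in the abstract.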

86 citations

Proceedings ArticleDOI
25 Jun 2013
TL;DR: The generalization of the category Rel of sets and relations to an arbitrary continuous semiring R is considered, producing a cpo-enriched category which is a semantics of LL, and its (co)Kleisli category is an adequate model of an extension of PCF, parametrized by R.
Abstract: The category Rel of sets and relations yields one of the simplest denotational semantics of Linear Logic (LL). It is known that Rel is the biproduct completion of the Boolean ring. We consider the generalization of this construction to an arbitrary continuous semiring R, producing a cpo-enriched category which is a semantics of LL, and its (co)Kleisli category is an adequate model of an extension of PCF, parametrized by R. Specific instances of R allow us to compare programs not only with respect to "what they can do", but also "in how many steps" or "in how many different ways" (for non-deterministic PCF) or even "with what probability" (for probabilistic PCF).
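A hypothetical sketch of the idea (dictionaries as weighted relations, not the paper's categorical construction): composing the same two relations in three different semirings answers the three questions listed in the abstract.

```python
# Hypothetical sketch: weighted relations as dictionaries (a, b) -> weight,
# composed with a semiring's "add" and "mul". Changing the semiring changes
# the question the semantics answers.

def compose(r, s, add, mul, zero):
    """(r ; s)(a, c) = sum over b of r(a, b) * s(b, c), in the semiring."""
    out = {}
    for (a, b1), w1 in r.items():
        for (b2, c), w2 in s.items():
            if b1 == b2:
                out[(a, c)] = add(out.get((a, c), zero), mul(w1, w2))
    return out

r = {('x', 'u'): 1, ('x', 'v'): 1}   # two ways out of x
s = {('u', 'y'): 1, ('v', 'y'): 1}   # each way reaches y

# Booleans (or, and): ordinary Rel -- "what can the program do?"
can = compose({k: True for k in r}, {k: True for k in s},
              lambda a, b: a or b, lambda a, b: a and b, False)
# Natural numbers (+, *): "in how many different ways?"
ways = compose(r, s, lambda a, b: a + b, lambda a, b: a * b, 0)
# Probabilities (+, *): "with what probability?"
prob = compose({('x', 'u'): 0.5, ('x', 'v'): 0.5}, {k: 1.0 for k in s},
               lambda a, b: a + b, lambda a, b: a * b, 0.0)
# can == {('x','y'): True}, ways == {('x','y'): 2}, prob == {('x','y'): 1.0}
```

The Boolean instance recovers plain reachability, while N and [0, 1] refine it quantitatively, mirroring the comparisons described in the abstract.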

83 citations

Journal ArticleDOI
TL;DR: A semantic account of the execution time (i.e., the number of cut-elimination steps leading to the normal form) of an untyped MELL net is given, and it is proved that a net is head-normalizable if and only if its exhaustive interpretation (a suitable restriction of its interpretation) is not empty.

63 citations

Proceedings ArticleDOI
21 Jun 2011
TL;DR: This paper proves that a denotational semantics interpreting programs by power series with non-negative real coefficients is adequate for a probabilistic extension of the untyped $\lambda$-calculus: the probability that a term reduces to a head normal form is equal to its denotation computed on a suitable set of values.
Abstract: We study the probabilistic coherent spaces -- a denotational semantics interpreting programs by power series with non-negative real coefficients. We prove that this semantics is adequate for a probabilistic extension of the untyped $\lambda$-calculus: the probability that a term reduces to a head normal form is equal to its denotation computed on a suitable set of values. The result gives, in a probabilistic setting, a quantitative refinement of the adequacy of Scott's model for the untyped $\lambda$-calculus.
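A toy version of the adequacy statement (an invented term, not the paper's calculus): consider a term that flips a p-biased coin at each step and reaches a head normal form at the first heads. Its denotation truncated at n steps is the partial sum of a geometric power series, and the operational frequency of head normal forms matches it.

```python
import random

# Toy illustration (not the paper's calculus): a hypothetical term M that
# flips a p-biased coin at each reduction step and reaches a head normal
# form on the first heads. Truncated at n steps, its denotation is the
# partial sum p + (1-p)p + ... + (1-p)**(n-1) * p = 1 - (1-p)**n.

def p_hnf(p, n):
    """Denotational side: probability of a head normal form within n steps."""
    return 1 - (1 - p) ** n

def simulate(p, n, trials=100_000):
    """Operational side: run the reduction and count head normal forms."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < p for _ in range(n)):
            hits += 1
    return hits / trials

# Adequacy in miniature: p_hnf(0.3, 5) = 1 - 0.7**5, and simulate(0.3, 5)
# stays close to that value.
```
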

44 citations


Cited by
Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Book ChapterDOI
01 Jan 2002
TL;DR: This chapter presents the basic concepts of term rewriting that are needed in this book and suggests several survey articles that can be consulted.
Abstract: In this chapter we will present the basic concepts of term rewriting that are needed in this book. More details on term rewriting, its applications, and related subjects can be found in the textbook of Baader and Nipkow [BN98]. Readers versed in German are also referred to the textbooks of Avenhaus [Ave95], Bündgen [Bun98], and Drosten [Dro89]. Moreover, there are several survey articles [HO80, DJ90, Klo92, Pla93] that can also be consulted.
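The chapter's basic concepts can be made concrete with a toy example: a hypothetical two-rule system for addition on Peano numerals, rewritten to normal form with an outermost strategy. This sketch is not taken from the chapter; all names are invented.

```python
# Toy term rewriting system (invented for illustration), with terms as
# tuples and two rules:  0 + x -> x   and   S(x) + y -> S(x + y).

Zero = ('0',)
def S(t): return ('S', t)
def Add(l, r): return ('+', l, r)

def step(t):
    """Apply one rewrite step (outermost first); return None if t is a normal form."""
    if t[0] == '+':
        l, r = t[1], t[2]
        if l == Zero:                  # rule: 0 + x -> x
            return r
        if l[0] == 'S':                # rule: S(x) + y -> S(x + y)
            return S(Add(l[1], r))
        nl = step(l)
        if nl is not None:
            return Add(nl, r)
        nr = step(r)
        return Add(l, nr) if nr is not None else None
    if t[0] == 'S':                    # no rule at the root: rewrite inside
        ni = step(t[1])
        return S(ni) if ni is not None else None
    return None                        # constants are normal forms

def normalize(t):
    """Rewrite until no rule applies (terminates for this system)."""
    while (nxt := step(t)) is not None:
        t = nxt
    return t

# S(S(0)) + S(0) rewrites in three steps to S(S(S(0))).
```

Termination here is easy to see because each rule strictly shrinks the left summand, which is the kind of argument the surveyed literature treats in general.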

501 citations

01 Jan 1999

166 citations

Journal ArticleDOI
TL;DR: By understanding the GPU architecture and its massive parallelism programming model, one can overcome many of the technical limitations found along the way, design better GPU-based algorithms for computational physics problems and achieve speedups that can reach up to two orders of magnitude when compared to sequential implementations.
Abstract: Parallel computing has become an important subject in the field of computer science and has proven to be critical when researching high performance solutions. The evolution of computer architectures (multi-core and many-core) towards a higher number of cores can only confirm that parallelism is the method of choice for speeding up an algorithm. In the last decade, the graphics processing unit, or GPU, has gained an important place in the field of high performance computing (HPC) because of its low cost and massive parallel processing power. Super-computing has become, for the first time, available to anyone at the price of a desktop computer. In this paper, we survey the concept of parallel computing and especially GPU computing. Achieving efficient parallel algorithms for the GPU is not a trivial task: there are several technical restrictions that must be satisfied in order to achieve the expected performance. Some of these limitations are consequences of the underlying architecture of the GPU and the theoretical models behind it. Our goal is to present a set of theoretical and technical concepts that are often required to understand the GPU and its massive parallelism model. In particular, we show how this new technology can help the field of computational physics, especially when the problem is data-parallel. We present four examples of computational physics problems: n-body, collision detection, Potts model and cellular automata simulations. These examples are representative of the kinds of problems that are suitable for GPU computing. By understanding the GPU architecture and its massive parallelism programming model, one can overcome many of the technical limitations found along the way, design better GPU-based algorithms for computational physics problems, and achieve speedups that can reach up to two orders of magnitude when compared to sequential implementations.
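As a minimal illustration of the data-parallel shape the survey highlights, here is one step of a cellular automaton in plain Python (not GPU code; the rule table and grid are invented for the example). Each cell's update reads only its local neighbourhood, so every cell can be computed independently, which is exactly what maps well onto GPU threads.

```python
# Illustrative sketch (CPU Python, not GPU code): one synchronous step of
# elementary cellular automaton rule 110 on a circular grid. The update of
# each cell depends only on its three-cell neighbourhood, so the loop body
# is an embarrassingly parallel map: on a GPU, one thread per output cell.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Compute the next generation; all outputs are independent of each other."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0]
state = step(state)   # -> [0, 0, 1, 1, 0, 0, 0]
```

The same independence property is what makes the paper's n-body, Potts model, and cellular automata examples suitable for GPU computing.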

158 citations

Book ChapterDOI
01 Jan 2009

137 citations