
Showing papers on "Denotational semantics published in 2014"


Proceedings ArticleDOI
08 Jan 2014
TL;DR: It is shown that NetKAT is an instance of a canonical and well-studied mathematical structure called a Kleene algebra with tests (KAT), and its equational theory is proved sound and complete with respect to its denotational semantics.
Abstract: Recent years have seen growing interest in high-level languages for programming networks. But the design of these languages has been largely ad hoc, driven more by the needs of applications and the capabilities of network hardware than by foundational principles. The lack of a semantic foundation has left language designers with little guidance in determining how to incorporate new features, and programmers without a means to reason precisely about their code. This paper presents NetKAT, a new network programming language that is based on a solid mathematical foundation and comes equipped with a sound and complete equational theory. We describe the design of NetKAT, including primitives for filtering, modifying, and transmitting packets; union and sequential composition operators; and a Kleene star operator that iterates programs. We show that NetKAT is an instance of a canonical and well-studied mathematical structure called a Kleene algebra with tests (KAT) and prove that its equational theory is sound and complete with respect to its denotational semantics. Finally, we present practical applications of the equational theory including syntactic techniques for checking reachability, proving non-interference properties that ensure isolation between programs, and establishing the correctness of compilation algorithms.
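To make the KAT structure concrete, the following is a minimal Haskell sketch of a NetKAT-like syntax with a naive packet-set denotation. It is our illustration, not the paper's definition: it ignores packet histories and the dup primitive, all names are invented, and packets are assumed to be kept sorted by field so that set equality coincides with semantic equality.

```haskell
import           Data.List (sortOn)
import           Data.Set  (Set)
import qualified Data.Set  as Set

type Field  = String
type Packet = [(Field, Int)]          -- a packet as a sorted field assignment

data Pred   = PTrue | PFalse | Test Field Int
            | PAnd Pred Pred | POr Pred Pred | PNot Pred

data Policy = Filter Pred | Assign Field Int
            | Union Policy Policy     -- p + q
            | Seq   Policy Policy     -- p ; q
            | Star  Policy            -- p*

evalP :: Pred -> Packet -> Bool
evalP PTrue      _  = True
evalP PFalse     _  = False
evalP (Test f n) pk = lookup f pk == Just n
evalP (PAnd a b) pk = evalP a pk && evalP b pk
evalP (POr  a b) pk = evalP a pk || evalP b pk
evalP (PNot a)   pk = not (evalP a pk)

-- A policy denotes a function from a packet to the set of packets it
-- may produce; Star is the least fixpoint, reached by iteration
-- (terminating here because the reachable packet set is finite).
eval :: Policy -> Packet -> Set Packet
eval (Filter p)   pk = if evalP p pk then Set.singleton pk else Set.empty
eval (Assign f n) pk = Set.singleton (sortOn fst ((f, n) : filter ((/= f) . fst) pk))
eval (Union p q)  pk = eval p pk `Set.union` eval q pk
eval (Seq p q)    pk = Set.unions [eval q pk' | pk' <- Set.toList (eval p pk)]
eval (Star p)     pk = go (Set.singleton pk)
  where go s = let s' = Set.unions (s : [eval p x | x <- Set.toList s])
               in if s' == s then s else go s'
```

Under this reading, KAT equations such as Seq (Filter PFalse) p ≡ Filter PFalse become checkable set equalities, which is the shape of reasoning the paper's equational theory supports.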

434 citations


Book
23 Sep 2014
TL;DR: This book covers computability (programs, computable functions, Turing machines, unsolvable problems), grammars and automata (regular, context-free, and context-sensitive languages), logic (propositional calculus and quantification theory), complexity, and the denotational and operational semantics of recursion equations.
Abstract: Preliminaries. Computability: Programs and Computable Functions. Primitive Recursive Functions. A Universal Program. Calculations on Strings. Turing Machines. Processes and Grammars. Classifying Unsolvable Problems. Grammars and Automata: Regular Languages. Context-Free Languages. Context-Sensitive Languages. Logic: Propositional Calculus. Quantification Theory. Complexity: Abstract Complexity. Polynomial Time Computability. Semantics: Approximation Orderings. Denotational Semantics of Recursion Equations. Operational Semantics of Recursion Equations. Suggestions for Further Reading. Subject Index.

292 citations


Journal ArticleDOI
TL;DR: This article adopts the idea from statistical semantics that word meaning can be approximated by the patterns of co-occurrence of words in corpora, and the idea from formal semantics that compositionality can be captured in terms of a syntax-driven calculus of function application.
Abstract: The lexicon of any natural language encodes a huge number of distinct word meanings. Just to understand this article, you will need to know what thousands of words mean. The space of possible sentential meanings is infinite: In this article alone, you will encounter many sentences that express ideas you have never heard before, we hope. Statistical semantics has addressed the issue of the vastness of word meaning by proposing methods to harvest meaning automatically from large collections of text (corpora). Formal semantics in the Fregean tradition has developed methods to account for the infinity of sentential meaning based on the crucial insight of compositionality, the idea that the meaning of sentences is built incrementally by combining the meanings of their constituents. This article sketches a new approach to semantics that brings together ideas from statistical and formal semantics to account, in parallel, for the richness of lexical meaning and the combinatorial power of sentential semantics. We adopt, in particular, the idea that word meaning can be approximated by the patterns of co-occurrence of words in corpora from statistical semantics, and the idea that compositionality can be captured in terms of a syntax-driven calculus of function application from formal semantics.

200 citations


Proceedings ArticleDOI
08 Jan 2014
TL;DR: RF*, a relational extension of F* (a general-purpose higher-order stateful programming language with a verification system based on refinement types), is presented; its distinguishing feature is a relational Hoare logic for a higher-order, stateful, probabilistic language.
Abstract: Relational program logics have been used for mechanizing formal proofs of various cryptographic constructions. With an eye towards scaling these successes towards end-to-end security proofs for implementations of distributed systems, we present RF*, a relational extension of F*, a general-purpose higher-order stateful programming language with a verification system based on refinement types. The distinguishing feature of RF* is a relational Hoare logic for a higher-order, stateful, probabilistic language. Through careful language design, we adapt the F* typechecker to generate both classic and relational verification conditions, and to automatically discharge their proofs using an SMT solver. Thus, we are able to benefit from the existing features of F*, including its abstraction facilities for modular reasoning about program fragments. We evaluate RF* experimentally by programming a series of cryptographic constructions and protocols, and by verifying their security properties, ranging from information flow to unlinkability, integrity, and privacy. Moreover, we validate the design of RF* by formalizing in Coq a core probabilistic λ-calculus and a relational refinement type system, and proving the soundness of the latter against a denotational semantics of the probabilistic λ-calculus.

112 citations


Book ChapterDOI
05 Apr 2014
TL;DR: A novel model of concurrent computations with shared memory is presented and a simple, yet powerful, logical framework for uniform Hoare-style reasoning about partial correctness of coarse- and fine-grained concurrent programs is provided.
Abstract: We present a novel model of concurrent computations with shared memory and provide a simple, yet powerful, logical framework for uniform Hoare-style reasoning about partial correctness of coarse- and fine-grained concurrent programs. The key idea is to specify arbitrary resource protocols as communicating state transition systems (STS) that describe valid states of a resource and the transitions the resource is allowed to make, including transfer of heap ownership. We demonstrate how reasoning in terms of communicating STS makes it easy to crystallize behavioral invariants of a resource. We also provide entanglement operators to build large systems from an arbitrary number of STS components, by interconnecting their lines of communication. Furthermore, we show how the classical rules from Concurrent Separation Logic (CSL), such as scoped resource allocation, can be generalized to fine-grained resource management. This allows us to give specifications as powerful as Rely-Guarantee, in a concise, scoped way, and yet regain the compositionality of CSL-style resource management. We proved the soundness of our logic with respect to the denotational semantics of action trees (a variation on Brookes' action traces). We formalized the logic as a shallow embedding in Coq and implemented a number of examples, including a construction of coarse-grained CSL resources as a modular composition of various logical and semantic components.

108 citations


Proceedings ArticleDOI
08 Jan 2014
TL;DR: Fundamental properties of a generalisation of monads called parametric effect monads are studied and applied to the interpretation of general effect systems whose effects have sequential composition operators.
Abstract: We study fundamental properties of a generalisation of monad called parametric effect monad, and apply it to the interpretation of general effect systems whose effects have sequential composition operators. We show that parametric effect monads admit analogues of the structures and concepts that exist for monads, such as Kleisli triples, the state monad and the continuation monad, Plotkin and Power's algebraic operations, and the categorical ⊤⊤-lifting. We also show a systematic method to generate both effects and a parametric effect monad from a monad morphism. Finally, we introduce two effect systems with explicit and implicit subeffecting, and discuss their denotational semantics and the soundness of effect systems.
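In functional-programming terms, a parametric effect monad is what is now often called a graded monad: return carries a unit effect annotation and bind multiplies annotations in a monoid. A minimal Haskell sketch under that reading (the class and instance names are ours, and the grading-by-ticks example is invented for illustration):

```haskell
{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies #-}

import Data.Kind (Type)

-- A parametric effect (graded) monad: gret carries the unit effect,
-- and gbind combines effect annotations with a type-level monoid.
class GradedMonad (m :: k -> Type -> Type) where
  type Unit m :: k
  type Plus m (e :: k) (f :: k) :: k
  gret  :: a -> m (Unit m) a
  gbind :: m e a -> (a -> m f b) -> m (Plus m e f) b

-- A toy effect monoid: type-level naturals counting abstract "ticks".
data Nat = Z | S Nat

type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Z     n = n
  Add ('S m) n = 'S (Add m n)

newtype Counter (n :: Nat) a = Counter { runCounter :: a }

tick :: a -> Counter ('S 'Z) a    -- one tick, recorded in the type
tick = Counter

instance GradedMonad Counter where
  type Unit Counter     = 'Z
  type Plus Counter e f = Add e f
  gret  = Counter
  gbind (Counter a) k = Counter (runCounter (k a))

-- The effect annotation of a compound computation is computed by the
-- type checker, mirroring sequential composition in an effect system.
two :: Counter ('S ('S 'Z)) Int
two = gbind (tick 1) (\x -> tick (x + 1))
```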

103 citations


Proceedings ArticleDOI
08 Jan 2014
TL;DR: In this paper, a denotational semantics for a quantum lambda calculus with recursion and an infinite data type is proposed, using constructions from the quantitative semantics of linear logic.
Abstract: Finding a denotational semantics for higher order quantum computation is a long-standing problem in the semantics of quantum programming languages. Most past approaches to this problem fell short in one way or another, either limiting the language to an unusably small finitary fragment, or giving up important features of quantum physics such as entanglement. In this paper, we propose a denotational semantics for a quantum lambda calculus with recursion and an infinite data type, using constructions from the quantitative semantics of linear logic.

91 citations


Proceedings ArticleDOI
TL;DR: In this article, a relational refinement type system called $\mathsf{HOARe}^2$ is proposed for verifying mechanism design and differential privacy; it is shown to be sound w.r.t. a denotational semantics and to correctly model differential privacy.
Abstract: Mechanism design is the study of algorithm design in which the inputs to the algorithm are controlled by strategic agents, who must be incentivized to faithfully report them. Unlike typical programmatic properties, it is not sufficient for algorithms to merely satisfy the property---incentive properties are only useful if the strategic agents also believe this fact. Verification is an attractive way to convince agents that the incentive properties actually hold, but mechanism design poses several unique challenges: interesting properties can be sophisticated relational properties of probabilistic computations involving expected values, and mechanisms may rely on other probabilistic properties, like differential privacy, to achieve their goals. We introduce a relational refinement type system, called $\mathsf{HOARe}^2$, for verifying mechanism design and differential privacy. We show that $\mathsf{HOARe}^2$ is sound w.r.t. a denotational semantics, and correctly models $(\epsilon,\delta)$-differential privacy; moreover, we show that it subsumes DFuzz, an existing linear dependent type system for differential privacy. Finally, we develop an SMT-based implementation of $\mathsf{HOARe}^2$ and use it to verify challenging examples of mechanism design, including auctions and aggregative games, and new proposed examples from differential privacy.
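For reference, the $(\epsilon,\delta)$-differential privacy property that the type system is shown to model is the standard one (this definition is not specific to the paper): a randomized mechanism $M$ is $(\epsilon,\delta)$-differentially private if, for all adjacent inputs $D, D'$ and all output sets $S$,

$$\Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S] + \delta.$$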

62 citations


Book ChapterDOI
TL;DR: This paper outlines a formalisation strategy for making precise the core semantics of UML by strengthening the denotational semantics of the existing UML metamodel.
Abstract: The Unified Modelling Language is emerging as a de-facto standard for modelling object-oriented systems. However, the semantics document that forms part of the standard definition primarily provides a description of the language's syntax and well-formedness rules. The meaning of the language, which is mainly described in English, is too informal and unstructured to provide a foundation for developing formal analysis and development techniques. This paper outlines a formalisation strategy for making precise the core semantics of UML. This is achieved by strengthening the denotational semantics of the existing UML metamodel. To illustrate the approach, the semantics of generalization/specialization are made precise.

62 citations


Journal ArticleDOI
TL;DR: Some properties of the knowledge representation language $\mathcal{A}log$ are given, along with an algorithm for computing its answer sets and a comparison with other approaches.
Abstract: The paper presents a knowledge representation language $\mathcal{A}log$ which extends ASP with aggregates. The goal is to have a language based on simple syntax and clear intuitive and mathematical semantics. We give some properties of $\mathcal{A}log$, an algorithm for computing its answer sets, and a comparison with other approaches.

50 citations


Proceedings ArticleDOI
14 Jul 2014
TL;DR: This paper investigates a type-based alternative to the existing syntactic productivity checks of Coq and Agda, using a combination of guarded recursion and quantification over clocks, which was developed by Atkey and McBride in the simply typed setting.
Abstract: To ensure consistency and decidability of type checking, proof assistants impose a requirement of productivity on corecursive definitions. In this paper we investigate a type-based alternative to the existing syntactic productivity checks of Coq and Agda, using a combination of guarded recursion and quantification over clocks. This approach was developed by Atkey and McBride in the simply typed setting; here we extend it to a calculus with dependent types. Building on previous work on the topos-of-trees model we construct a model of the calculus using a family of presheaf toposes, each of which can be seen as a multi-dimensional version of the topos-of-trees. As part of the model construction we must solve the coherence problem for modelling dependent types in locally cartesian closed categories simultaneously in a whole family of such categories. We do this by embedding all the categories in a large one and applying a recent approach to the coherence problem due to Streicher and Voevodsky.
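For context, productivity means that every finite prefix of a corecursive definition can be computed in finite time. A tiny Haskell illustration of ours (not from the paper) shows what the checks must separate: the first definition is productive, the second is not, and type-based guardedness is one way to tell them apart statically.

```haskell
-- Productive: each element is emitted under a constructor before the
-- recursive occurrence, so any finite prefix is computable.
nats :: [Integer]
nats = 0 : map (+ 1) nats

-- Not productive: no constructor is ever emitted, so even
-- 'head loopy' diverges. Syntactic and type-based checks reject this.
loopy :: [Integer]
loopy = tail loopy
```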

Journal ArticleDOI
TL;DR: This paper captures how the considered TGG implementation realizes the transformation by means of operational rules, defines required criteria, and shows conformance to the formal semantics if these criteria are fulfilled.
Abstract: The correctness of model transformations is a crucial element for model-driven engineering of high-quality software. A prerequisite to verify model transformations at the level of the model transformation specification is that an unambiguous formal semantics exists and that the implementation of the model transformation language adheres to this semantics. However, for existing relational model transformation approaches, it is usually not clear under which constraints particular implementations actually conform to the formal semantics. In this paper, we will bridge this gap for the formal semantics of triple graph grammars (TGG) and an existing efficient implementation. While the formal semantics assumes backtracking and ignores non-determinism, practical implementations do not support backtracking, require rule sets that ensure determinism, and include further optimizations. Therefore, we capture how the considered TGG implementation realizes the transformation by means of operational rules, define required criteria, and show conformance to the formal semantics if these criteria are fulfilled. We further outline how static and runtime checks can be employed to guarantee these criteria.

Journal ArticleDOI
TL;DR: An overview of the tool-supported K framework for semantics-based programming language design and formal analysis is given, illustrated by the K definition of the dynamic and static semantics of SIMPLE, a non-trivial imperative programming language.

Posted Content
06 Jun 2014
TL;DR: This work evaluates whether each of two classes of neural model can correctly learn relationships such as entailment and contradiction between pairs of sentences, and finds that the plain RNN achieves only mixed results on all three experiments, whereas the stronger RNTN model generalizes well in every setting and appears capable of learning suitable representations for natural language logical inference.
Abstract: Supervised recursive neural network models (RNNs) for sentence meaning have been successful in an array of sophisticated language tasks, but it remains an open question whether they can learn compositional semantic grammars that support logical deduction. We address this question directly by for the first time evaluating whether each of two classes of neural model — plain RNNs and recursive neural tensor networks (RNTNs) — can correctly learn relationships such as entailment and contradiction between pairs of sentences, where we have generated controlled data sets of sentences from a logical grammar. Our first experiment evaluates whether these models can learn the basic algebra of logical relations involved. Our second and third experiments extend this evaluation to complex recursive structures and sentences involving quantification. We find that the plain RNN achieves only mixed results on all three experiments, whereas the stronger RNTN model generalizes well in every setting and appears capable of learning suitable representations for natural language logical inference.

Posted Content
TL;DR: The approach to define the semantics for UML is described and the semantics definition is detailed for UML/P class diagrams, a variant of class diagrams which restricts the use of a few methodologically and semantically involved concepts.
Abstract: Defining semantics for UML is a difficult task. Disagreements in the meaning of UML constructs as well as the size of UML are major obstacles. In this report, we describe our approach to define the semantics for UML. Semantics is defined denotationally as a mapping into our semantics domain called the system model [4, 5, 6]. We demonstrate our approach by defining the semantics for a comprehensive version of class diagrams. The semantics definition is detailed for UML/P class diagrams, a variant of class diagrams which restricts the use of a few methodologically and semantically involved concepts. Class diagrams are well-known and rather easy to understand and thus perfect to examine the usability of the system model for precise semantic mappings.

Proceedings ArticleDOI
08 Jan 2014
TL;DR: In this paper, a denotational semantics for a region-based effect system is proposed, where only externally visible effects need to be tracked: non-observable internal modifications, such as the reorganisation of a search tree or lazy initialisation, can count as 'pure' or'read only'.
Abstract: We give a denotational semantics for a region-based effect system that supports type abstraction in the sense that only externally visible effects need to be tracked: non-observable internal modifications, such as the reorganisation of a search tree or lazy initialisation, can count as 'pure' or 'read only'. This 'fictional purity' allows clients of a module to validate soundly more effect-based program equivalences than would be possible with previous semantics. Our semantics uses a novel variant of logical relations that maps types not merely to partial equivalence relations on values, as is commonly done, but rather to a proof-relevant generalisation thereof, namely setoids. The objects of a setoid establish that values inhabit semantic types, whilst its morphisms are understood as proofs of semantic equivalence. The transition to proof-relevance solves two awkward problems caused by naive use of existential quantification in Kripke logical relations, namely failure of admissibility and spurious functional dependencies.
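For reference, the setoids meant here are the standard proof-relevant ones (our schematic rendering): a carrier $A$ together with, for each pair of elements $x, y \in A$, a set $(x \sim y)$ of proofs of their equivalence, closed under reflexivity, symmetry, and transitivity operations:

$$r_x \in (x \sim x), \qquad s : (x \sim y) \to (y \sim x), \qquad t : (x \sim y) \times (y \sim z) \to (x \sim z).$$

A partial equivalence relation is recovered by forgetting which proof was used; keeping the proofs is exactly what the abstract calls the transition to proof-relevance.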

Journal ArticleDOI
TL;DR: In this article, an effect system for core Eff, an ML-style programming language with first-class algebraic effects and handlers, is presented, and the safety theorem in Twelf is proved.
Abstract: We present an effect system for core Eff, a simplified variant of Eff, which is an ML-style programming language with first-class algebraic effects and handlers. We define an expressive effect system and prove safety of operational semantics with respect to it. Then we give a domain-theoretic denotational semantics of core Eff, using Pitts's theory of minimal invariant relations, and prove it adequate. We use this fact to develop tools for finding useful contextual equivalences, including an induction principle. To demonstrate their usefulness, we use these tools to derive the usual equations for mutable state, including a general commutativity law for computations using non-interfering references. We have formalized the effect system, the operational semantics, and the safety theorem in Twelf.
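The flavour of first-class algebraic effects and handlers can be approximated with a free monad; the following Haskell sketch is our encoding for illustration, not core Eff itself. An effect signature declares the operations, computations are trees of operations, and a handler interprets them.

```haskell
-- Computations over an effect signature f: either a pure value or an
-- operation whose continuation is a further computation.
data Free f a = Pure a | Op (f (Free f a))

-- The signature of mutable state over s: two algebraic operations.
data StateOp s k = Get (s -> k) | Put s k

get :: Free (StateOp s) s
get = Op (Get Pure)

put :: s -> Free (StateOp s) ()
put s = Op (Put s (Pure ()))

-- A handler: interprets Get/Put into a state-passing function.
runState :: Free (StateOp s) a -> s -> (a, s)
runState (Pure a)        s = (a, s)
runState (Op (Get k))    s = runState (k s) s
runState (Op (Put s' k)) _ = runState k s'

-- runState (Op (Get (\n -> Op (Put (n + 1) (Pure n))))) 41 == (41, 42)
```

An effect system in the paper's sense then tracks, in a computation's type, which operations (here Get, Put) it may perform.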

Journal ArticleDOI
TL;DR: The construction of NOOP is summarized; it is in full agreement with the intuitions of OO developers using these languages, and contrary to the belief that 'inheritance is not subtyping', which came from assuming non-nominal structural models of OO type systems.

Journal ArticleDOI
TL;DR: This study proposes an interpretation of the inherent order-based semantics of terms through their qualitative semantics modeled by hedge algebra structures, and proposes two concepts of assessment scales to address decision problems: linguistic scales used for representing expert linguistic assessments, and semantic linguistic scales based on a 4-tuple linguistic representation model, which forms a formalized structure useful for computing with words.

Book ChapterDOI
05 Apr 2014
TL;DR: The polarised evaluation order is characterised through a categorical structure where the hypothesis that composition is associative is relaxed, and the result suggests that the various biases in denotational semantics: indirect, call-by-value, call-by-name... are ways of hiding the fact that composition is not always associative.
Abstract: We characterise the polarised evaluation order through a categorical structure where the hypothesis that composition is associative is relaxed. Duploid is the name of the structure, as a reference to Jean-Louis Loday’s duplicial algebras. The main result is a reflection between a category of duploids and duploid functors and the category of adjunctions and pseudo maps of adjunctions. The result suggests that the various biases in denotational semantics: indirect, call-by-value, call-by-name... are ways of hiding the fact that composition is not always associative.

Proceedings ArticleDOI
08 Jan 2014
TL;DR: An operational and an axiomatic semantics (based on separation logic) for non-determinism and sequence points in C are presented, and soundness of the axiomatic semantics is proved with respect to the operational semantics.
Abstract: The C11 standard of the C programming language does not specify the execution order of expressions. Besides, to make more effective optimizations possible (e.g. delaying of side-effects and interleaving), it gives compilers in certain cases the freedom to use even more behaviors than just those of all execution orders. Widely used C compilers actually exploit this freedom given by the C standard for optimizations, so it should be taken seriously in formal verification. This paper presents an operational and axiomatic semantics (based on separation logic) for non-determinism and sequence points in C. We prove soundness of our axiomatic semantics with respect to our operational semantics. This proof has been fully formalized using the Coq proof assistant.

Journal ArticleDOI
TL;DR: This work proposes new semantics for IS based on timed event structures (TES), presents an operational semantics based on the non-deterministic timed concurrent constraint calculus, and relates this semantics to the TES semantics.
Abstract: Most interactive scenarios are based on informal specifications, so that it is not possible to formally verify properties of such systems. We advocate the need for a general and formal model aiming at ensuring safe executions of interactive multimedia scenarios. Interactive scores (IS) is a formalism based on temporal constraints to describe interactive scenarios. We propose new semantics for IS based on timed event structures (TES). With such a semantics, we can specify more properties of the system, in particular, properties about execution traces, which are difficult to specify as constraints. We also present an operational semantics of IS based on the non-deterministic timed concurrent constraint calculus and we relate such a semantics to the TES semantics. With the operational semantics, we can describe the behaviour of scores whose timed object durations can be arbitrary integer intervals.

Book ChapterDOI
18 Jun 2014
TL;DR: It is argued that, for NLs, the divide between model-theoretic semantics and proof-theoretic semantics has not been well understood, and that MTTs arguably have unique advantages when employed for formal semantics.
Abstract: In this talk, we contend that, for NLs, the divide between model-theoretic semantics and proof-theoretic semantics has not been well-understood. In particular, the formal semantics based on modern type theories (MTTs) may be seen as both model-theoretic and proof-theoretic. To be more precise, it may be seen both ways in the sense that the NL semantics can first be represented in an MTT in a model-theoretic way and then the semantic representations can be understood inferentially in a proof-theoretic way. Considered in this way, MTTs arguably have unique advantages when employed for formal semantics.

Posted Content
TL;DR: In this report, the SysLab system model is complemented in different ways: state-box models are provided through timed port automata, for which an operational and a corresponding denotational semantics are given.
Abstract: In this report, the SysLab system model is complemented in different ways: State-box models are provided through timed port automata, for which an operational and a corresponding denotational semantics are given. Composition is defined for components modeled in the state-box view as well as for components modeled in the black-box view. This composition is well-defined for networks of infinitely many components. To show the applicability of the model, several examples are given.

01 Jan 2014
TL;DR: This paper embeds rely-guarantee thinking into a refinement calculus for concurrent programs, in which programs are developed in (small) steps from an abstract specification, and extends the implementation language with specification constructs.
Abstract: Interference is the essence of concurrency and it is what makes reasoning about concurrent programs difficult. The fundamental insight of rely-guarantee thinking is that stepwise design of concurrent programs can only be compositional in development methods that offer ways to record and reason about interference. In this way of thinking, a rely relation records assumptions about the behaviour of the environment, and a guarantee relation records commitments about the behaviour of the process. The standard application of these ideas is in the extension of Hoare-like inference rules to quintuples that accommodate rely and guarantee conditions. In contrast, in this paper, we embed rely-guarantee thinking into a refinement calculus for concurrent programs, in which programs are developed in (small) steps from an abstract specification. As is usual, we extend the implementation language with specification constructs (the extended language is sometimes called a wide-spectrum language), in this case adding two new commands: a guarantee command (guar g.c) whose valid behaviours are in accord with the command c but all of whose atomic steps also satisfy the relation g, and a rely command (rely r.c) whose behaviours are like c provided any interference steps from the environment satisfy the relation r. The theory of concurrent program refinement is based on a theory of atomic program steps, and more powerful refinement laws, most notably a law for decomposing a specification into a parallel composition, are developed from a small set of more primitive lemmas, which are proved sound with respect to an operational semantics.
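For orientation, the classical Jones-style quintuple $\{p, r\}\ c\ \{g, q\}$ reads: under precondition $p$, and with every environment step satisfying the rely $r$, every atomic step of $c$ satisfies the guarantee $g$ and termination establishes the postcondition $q$. The standard parallel rule (a textbook formulation, not the specific laws of this report) is then:

$$\frac{\{p,\ r \lor g_2\}\ c_1\ \{g_1,\ q_1\} \qquad \{p,\ r \lor g_1\}\ c_2\ \{g_2,\ q_2\}}{\{p,\ r\}\ c_1 \parallel c_2\ \{g_1 \lor g_2,\ q_1 \land q_2\}}$$

The guar and rely commands of the paper package the $g$ and $r$ components as first-class specification constructs, so such laws become refinement steps.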

Journal ArticleDOI
TL;DR: It is shown that well-founded semantics, a widely accepted approach in the field of non-monotonic reasoning, corresponds to weak completion semantics for a specific class of modified programs.
Abstract: Formal approaches that aim at representing human reasoning should be evaluated based on how humans actually reason. One way of doing so is to investigate whether psychological findings of human reasoning patterns are represented in the theoretical model. The computational logic approach discussed here is the so-called weak completion semantics, which is based on the three-valued Łukasiewicz logic. We explain how this approach adequately models Byrne’s suppression task, a psychological study where the experimental results show that participants’ conclusions systematically deviate from the classical logically correct answers. As weak completion semantics is a novel technique in the field of computational logic, it is important to examine how it corresponds to other already established non-monotonic approaches. For this purpose we investigate the relation of weak completion with respect to completion and three-valued stable model semantics. In particular, we show that well-founded semantics, a widely accepted approach in the field of non-monotonic reasoning, corresponds to weak completion semantics for a specific class of modified programs.
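For reference, three-valued Łukasiewicz logic interprets formulas over the truth values $\{0, \frac{1}{2}, 1\}$, with negation and implication standardly given by (a textbook definition, not specific to the paper):

$$v(\neg A) = 1 - v(A), \qquad v(A \rightarrow B) = \min\left(1,\ 1 - v(A) + v(B)\right).$$

In particular $v(A \rightarrow B) = 1$ when $v(A) = v(B) = \frac{1}{2}$, which is one point where Łukasiewicz logic diverges from Kleene's strong three-valued logic.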

Proceedings ArticleDOI
22 Apr 2014
TL;DR: The PlanCompS project has developed a component-based approach to semantics that provides good modularity, facilitates reuse, and supports co-evolution of languages and their formal semantics.
Abstract: Semantic specifications of programming languages typically have poor modularity. This hinders reuse of parts of the semantics of one language when specifying a different language -- even when the two languages have many constructs in common -- and evolution of a language may require major reformulation of its semantics. Such drawbacks have discouraged language developers from using formal semantics to document their designs. In the PlanCompS project, we have developed a component-based approach to semantics. Here, we explain its modularity aspects, and present an illustrative case study. Our approach provides good modularity, facilitates reuse, and supports co-evolution of languages and their formal semantics. It could be particularly useful in connection with domain-specific languages and language-driven software development.

Book ChapterDOI
01 Jan 2014
TL;DR: This chapter draws together material from several papers to deliver a coherent account of a journey from the foundations of a mathematics with names, via logical systems based on those foundations, to concrete applications in axiomatising systems with binding.
Abstract: Nominal techniques concern the study of names using mathematical semantics. Whereas in much previous work names in abstract syntax were studied, here we will study them in meta-mathematics. More specifically, we survey the application of nominal techniques to languages for unification, rewriting, algebra, and first-order logic. What characterises the languages of this chapter is that they are first-order in character, and yet they can specify and reason on names. In the languages we develop, it will be fairly straightforward to give first-order ‘nominal’ axiomatisations of name-related things like alpha-equivalence, capture-avoiding substitution, beta- and eta-equivalence, first-order logic with its quantifiers—and as we shall see, also arithmetic. The formal axiomatisations we arrive at will closely resemble ‘natural behaviour’; the specifications we see typically written out in normal mathematical usage. This is possible because of a novel name-carrying semantics in nominal sets, through which our languages will have name-permutations and term-formers that can bind as primitive built-in features. This chapter draws together material from several papers to deliver a coherent account of a journey from the foundations of a mathematics with names, via logical systems based on those foundations, to concrete applications in axiomatising systems with binding. Definitions and proofs have been improved, generalised, and shortened, and placed into an overall narrative. On the way we touch on a variety of definitions and results. These include: the nominal unification algorithm; nominal rewriting and its confluence proofs; nominal algebra, its soundness, completeness, and an HSP theorem; permissive-nominal logic and its soundness and completeness; various axiomatisations with pointers to proofs of their correctness; and we conclude with a case study stating and proving correct a finite first-order axiomatisation of arithmetic in permissive-nominal logic.
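The primitive of the nominal approach, the name permutation, is easy to render concretely; this Haskell fragment is our toy illustration, not from the chapter:

```haskell
type Name = String

data Term = Var Name | App Term Term | Lam Name Term

-- The transposition (a b) on names.
swapName :: Name -> Name -> Name -> Name
swapName a b x
  | x == a    = b
  | x == b    = a
  | otherwise = x

-- Unlike capture-avoiding substitution, a permutation acts uniformly
-- on a term, including on binding occurrences, which is why it is a
-- well-behaved primitive for axiomatising alpha-equivalence.
swap :: Name -> Name -> Term -> Term
swap a b (Var x)   = Var (swapName a b x)
swap a b (App t u) = App (swap a b t) (swap a b u)
swap a b (Lam x t) = Lam (swapName a b x) (swap a b t)
```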

Posted Content
TL;DR: This work redefines the semantics of Moggi’s computational λ-calculus in terms of (strong) indexed monads, giving a one-to-one correspondence between the indices of denotations and the effect annotations of traditional effect systems.
Abstract: Wadler and Thiemann unified type-and-effect systems with monadic semantics via a syntactic correspondence and soundness results with respect to an operational semantics. They conjecture that a general, “coherent” denotational semantics can be given to unify effect systems with a monadic-style semantics. We provide such a semantics based on the novel structure of an indexed monad, which we introduce. We redefine the semantics of Moggi’s computational λ-calculus in terms of (strong) indexed monads, which gives a one-to-one correspondence between indices of the denotations and the effect annotations of traditional effect systems. Dually, this approach yields indexed comonads, which give a unified semantics and effect system for contextual notions of effect (called coeffects), which we have previously described.

Book ChapterDOI
05 Apr 2014
TL;DR: This work shows how to automatically derive pretty-big-step rules directly from small-step rules by 'refocusing', which gives the best of both worlds: one only needs to write the relatively concise small-step specifications, but the reasoning can be big-step as well as small-step.
Abstract: Big-step semantics for languages with abrupt termination and/or divergence suffer from a serious duplication problem, addressed by the novel 'pretty-big-step' style presented by Charguéraud at ESOP'13. Such rules are less concise than corresponding small-step rules, but they have the same advantages as big-step rules for program correctness proofs. Here, we show how to automatically derive pretty-big-step rules directly from small-step rules by 'refocusing'. This gives the best of both worlds: we only need to write the relatively concise small-step specifications, but our reasoning can be big-step as well as small-step. The use of strictness annotations to derive small-step congruence rules gives further conciseness.
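The driving intuition, that iterating a small-step relation recovers big-step behaviour, can be glimpsed in a toy Haskell sketch of ours (refocusing proper derives pretty-big-step rules, not merely an iterator):

```haskell
data Exp = Lit Int | Add Exp Exp

-- One small step of left-to-right reduction; Nothing on values.
step :: Exp -> Maybe Exp
step (Lit _)               = Nothing
step (Add (Lit m) (Lit n)) = Just (Lit (m + n))
step (Add v@(Lit _) e2)    = Add v <$> step e2
step (Add e1 e2)           = (`Add` e2) <$> step e1

-- Iterating small steps yields a big-step-style evaluator, so
-- correctness arguments can switch between the two views.
evalBig :: Exp -> Exp
evalBig e = maybe e evalBig (step e)
```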