
Showing papers presented at "International Symposium on Theoretical Aspects of Computer Software in 1997"


Book ChapterDOI
23 Sep 1997
TL;DR: This paper explains how computational reflection can help build efficient certified decision procedures in reduction systems and discusses the concept of total reflection, which is not yet implemented in Coq but can be tested as the extraction process is effective.
Abstract: In this paper we explain how computational reflection can help build efficient certified decision procedures in reduction systems. We have developed a decision procedure on abelian rings in the Coq system, but the approach we describe applies to all reduction systems that allow the definition of concrete types (or datatypes). We show that computational reflection is more efficient than an LCF-like approach to implementing decision procedures in a reduction system. We discuss the concept of total reflection, which we have investigated in Coq using two facts: the extraction process available in Coq and the fact that the implementation language of the Coq system can be considered as a sublanguage of Coq. Total reflection is not yet implemented in Coq, but we can test its performance as the extraction process is effective. Both reflection and total reflection are conservative extensions of the reduction system in which they are used. We also discuss performance and related approaches. In the paper, we assume basic knowledge of ML and proof-checkers.

134 citations
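
The reflection technique the abstract describes can be illustrated with a small sketch in OCaml (all names hypothetical; in Coq the interpretation lemma would be proved once and for all): terms of the theory are reified as a datatype, normalised syntactically, and an equation is decided by comparing normal forms.

```ocaml
(* Proof by reflection, sketched in OCaml rather than Coq. In Coq one
   would additionally prove [interp e = interp (norm e)] once, so that
   each instance is certified by a single reduction. *)
type expr =
  | Var of int                  (* reified variable *)
  | Unit                        (* neutral element *)
  | Op of expr * expr           (* the commutative, associative operation *)

(* Interpretation back into a model: here, integers under addition. *)
let rec interp env = function
  | Var i -> env.(i)
  | Unit -> 0
  | Op (a, b) -> interp env a + interp env b

(* Normal form: the sorted multiset of variable occurrences. *)
let norm e =
  let rec vars = function
    | Var i -> [i]
    | Unit -> []
    | Op (a, b) -> vars a @ vars b
  in
  List.sort compare (vars e)

(* Decision procedure: two terms are equal in every model of the theory
   iff their normal forms coincide. *)
let decide e1 e2 = norm e1 = norm e2
```

The point of the paper is that `decide` runs inside the prover's reduction machinery, rather than building a proof object step by step as an LCF-style tactic would.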


Book ChapterDOI
23 Sep 1997
TL;DR: Using the typed lambda-calculus Fω<: as a common basis, a detailed comparison of four foundational models of statically typed object-oriented programming is offered, ranging from a recursive-record encoding to a calculus of primitive objects.
Abstract: Recent years have seen the development of several foundational models for statically typed object-oriented programming. But despite their intuitive similarity, differences in the technical machinery used to formulate the various proposals have made them difficult to compare. Using the typed lambda-calculus Fω<: as a common basis, we now offer a detailed comparison of four models: (1) a recursive-record encoding similar to the ones used by Cardelli [Car84], Reddy [Red88, KR94], Cook [Coo89, CHC90], and others; (2) Hofmann, Pierce, and Turner's existential encoding [PT94, HP95]; (3) Bruce's model based on existential and recursive types [Bru94]; and (4) Abadi, Cardelli, and Viswanathan's type-theoretic encoding [ACV96] of a calculus of primitive objects.

101 citations
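
The recursive-record encoding mentioned as model (1) can be sketched in OCaml (hypothetical names): an object is a record of methods whose type recursively mentions itself, and the knot is tied with a recursive generator function.

```ocaml
(* Recursive-record encoding of objects: a method that "returns self"
   produces a new record of the same recursive type. *)
type point = { pos : int; move : int -> point }

(* The generator ties the recursive knot: each method closes over it. *)
let rec make_point x = { pos = x; move = (fun d -> make_point (x + d)) }
```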


Book ChapterDOI
23 Sep 1997
TL;DR: These rules have the form of typing rules for a basic concurrent language with cryptographic primitives, the spi calculus, and guarantee that, if a protocol typechecks, then it does not leak its secret inputs.
Abstract: We develop principles and rules for achieving secrecy properties in security protocols. Our approach is based on traditional classification techniques, and extends those techniques to handle concurrent processes that use shared-key cryptography. The rules have the form of typing rules for a basic concurrent language with cryptographic primitives, the spi calculus. They guarantee that, if a protocol typechecks, then it does not leak its secret inputs.

98 citations


Book ChapterDOI
23 Sep 1997
TL;DR: Two mutual encodings between the Calculus of Inductive Constructions and Zermelo-Fraenkel set theory are presented, relating the number of universes in the type theory with the number of inaccessible cardinals in the set theory.
Abstract: We present two mutual encodings, respectively of the Calculus of Inductive Constructions in Zermelo-Fraenkel set theory and the opposite way. More precisely, we actually construct two families of encodings, relating the number of universes in the type theory with the number of inaccessible cardinals in the set theory. The main result is that both hierarchies of logical formalisms interleave w.r.t. expressive power and thus are essentially equivalent. Both encodings are quite elementary: type theory is interpreted in set theory through a generalization of Coquand's simple proof-irrelevance interpretation. Set theory is encoded in type theory using a variant of Aczel's encoding; we have formally checked this last part using the Coq proof assistant.

89 citations


Book ChapterDOI
23 Sep 1997
TL;DR: This work precisely characterizes a class of cyclic lambda-graphs, then gives a sound and complete axiomatization of the terms that represent a given graph, and defines the infinite normal form or Levy-Longo tree of a cyclic term.
Abstract: We precisely characterize a class of cyclic lambda-graphs, and then give a sound and complete axiomatization of the terms that represent a given graph. The equational axiom system is an extension of lambda calculus with the letrec construct. In contrast to current theories, which impose restrictions on where the rewriting can take place, our theory is very liberal, e.g., it allows rewriting under lambda-abstractions and on cycles. As shown previously, the reduction theory is non-confluent. We thus introduce an approximate notion of confluence. Using this notion we define the infinite normal form or Levy-Longo tree of a cyclic term. We show that the infinite normal form defines a congruence on the set of terms. We relate our cyclic lambda calculus to the traditional lambda calculus and to the infinitary lambda calculus. Since most implementations of non-strict functional languages rely on sharing to avoid repeating computations, we develop a variant of our calculus that enforces the sharing of computations and show that the two calculi are observationally equivalent. For reasoning about strict languages we develop a call-by-value variant of the sharing calculus. We state the difference between strict and non-strict computations in terms of different garbage collection rules. We relate the call-by-value calculus to Moggi's computational lambda calculus and to Hasegawa's calculus.

72 citations
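
OCaml's let rec gives a concrete feel for cyclic terms and sharing (a minimal sketch, not the paper's calculus): a cyclic cons cell is a one-node term graph whose tail is shared with itself rather than unfolded.

```ocaml
(* OCaml's let rec can build genuinely cyclic data: a one-node cycle
   representing the infinite list 1, 1, 1, ... *)
let rec ones = 1 :: ones

(* Sharing: the tail of ones is ones itself, not an unfolded copy.
   (==) is physical equality, so it observes the cycle directly. *)
let shared = List.tl ones == ones
```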


Book ChapterDOI
23 Sep 1997
TL;DR: A two-level version λc of the computational lambda calculus is defined and it is demonstrated that it is an inevitable description for sound specialization and developed a sound specializer similar to continuation-based specializers.
Abstract: Moggi's computational lambda calculus λc is a well-established model of computation. We define a two-level version λc of the computational lambda calculus and demonstrate that it is an inevitable description for sound specialization. We implement the calculus in terms of a standard two-level lambda calculus via a continuation-passing style transformation. This transformation is sound and complete with respect to λc; it forms a reflection in the two-level lambda calculus of λc. As a practical ramification of this work we show that several published specialization algorithms are unsound and develop a sound specializer similar to continuation-based specializers.

32 citations
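
The continuation-passing-style idea underlying the transformation can be sketched for a toy expression language in OCaml (hypothetical mini-language, not the two-level calculus itself): evaluation order is made explicit by threading a continuation.

```ocaml
(* A toy expression language. *)
type exp = Num of int | Add of exp * exp

(* Direct-style evaluator, for comparison. *)
let rec eval = function
  | Num n -> n
  | Add (a, b) -> eval a + eval b

(* CPS evaluator: all control flow threads through the continuation k,
   fixing the order in which subexpressions are evaluated. *)
let rec eval_cps e k = match e with
  | Num n -> k n
  | Add (a, b) -> eval_cps a (fun va -> eval_cps b (fun vb -> k (va + vb)))
```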


Book ChapterDOI
23 Sep 1997
TL;DR: An axiomatic approach to logical relations and data refinement is introduced and it is proved that any relation between models of such a sketch in a semantic category satisfies a soundness condition and such relations compose.
Abstract: We introduce an axiomatic approach to logical relations and data refinement. We consider a programming language and the monad on the category of small categories generated by it. We identify abstract data types for the language with sketches for the associated monad, and define an axiomatic notion of “relation” between models of such a sketch in a semantic category. We then prove three results: (i) such models lift to the whole language together with the sketch; (ii) any such relation satisfies a soundness condition, and (iii) such relations compose. We do this for both equality of data representations and for an ordered version. Finally, we compare our formulation of data refinement with that of Hoare.

32 citations


Book ChapterDOI
23 Sep 1997
TL;DR: A modest conservative extension to ML is proposed that allows semi-explicit higher-order polymorphism while preserving the essential properties of ML; it is particularly useful in Objective ML, where polymorphism replaces subtyping.
Abstract: We propose a modest conservative extension to ML that allows semi-explicit higher-order polymorphism while preserving the essential properties of ML. In our proposal, the introduction of polymorphic types remains fully explicit, that is, both the introduction and the exact polymorphic type must be specified. However, the elimination of polymorphic types is now semi-implicit: only the elimination itself must be specified as the polymorphic type is inferred. This extension is particularly useful in Objective ML where polymorphism replaces subtyping.

30 citations
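
A mechanism in this spirit exists in present-day OCaml as polymorphic record fields: the polymorphic type is introduced fully explicitly (the annotated field below), while its elimination is implicit at each use site. A minimal illustration (hypothetical names):

```ocaml
(* Explicit introduction: the field's polymorphic type must be written. *)
type id_holder = { id : 'a. 'a -> 'a }

let f = { id = (fun x -> x) }

(* Semi-implicit elimination: f.id is instantiated at int and at string
   without any further annotation. *)
let uses = (f.id 3, f.id "hello")
```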


Book ChapterDOI
23 Sep 1997
TL;DR: This paper gives three category theoretic semantics for modelling continuations and shows the relationships between them, and extends the result about environments to show that the second and third semantics are essentially equivalent, and that they include the first.
Abstract: There have traditionally been two approaches to modelling environments, one by use of finite products in Cartesian closed categories, the other by use of the base categories of indexed categories with structure. Recently, there have been more general definitions along both of these lines: the first generalising from Cartesian to symmetric premonoidal categories, the second generalising from indexed categories with specified structure to κ-categories. The added generality is not of the purely mathematical kind; in fact it is necessary to extend semantics from the logical calculi studied in, say, Type Theory to more realistic programming language fragments. In this paper, we establish an equivalence between these two recent notions. We then use that equivalence to study semantics for continuations. We give three category theoretic semantics for modelling continuations and show the relationships between them. The first is given by a continuations monad. The second is based on a symmetric premonoidal category with a self-adjoint structure. The third is based on a κ-category with indexed self-adjoint structure. We extend our result about environments to show that the second and third semantics are essentially equivalent, and that they include the first.

28 citations
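
The first of the three semantics, the continuations monad, can be sketched in OCaml (standard construction; names hypothetical):

```ocaml
(* A computation returning 'a is a function expecting an 'a-consumer;
   'r is the type of the final answer. *)
type ('a, 'r) cont = ('a -> 'r) -> 'r

let return (x : 'a) : ('a, 'r) cont = fun k -> k x

(* Sequencing: run m, feed its result to f, continue with k. *)
let bind (m : ('a, 'r) cont) (f : 'a -> ('b, 'r) cont) : ('b, 'r) cont =
  fun k -> m (fun a -> f a k)

(* Run a computation with the identity continuation. *)
let run m = m (fun x -> x)
```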


Book ChapterDOI
23 Sep 1997
TL;DR: A new model PAN (= PA + PN) is defined that subsumes both Petri nets and PA-processes and is thus strictly more expressive than Petri nets, and it is shown that the reachability problem is still decidable for PAN.
Abstract: Petri nets and PA-processes are incomparable models of infinite state concurrent systems. We define a new model PAN (= PA + PN) that subsumes both of them and is thus strictly more expressive than Petri nets. It extends Petri nets with the possibility to call subroutines. We show that the reachability problem is still decidable for PAN. It is even decidable, if there is a reachable state that satisfies certain properties that can be encoded in a simple logic.

27 citations
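
The Petri-net half of PAN can be sketched in OCaml (hypothetical encoding, not the paper's formalism): a marking assigns token counts to places, and a transition fires when its input places carry enough tokens.

```ocaml
(* A place/transition net step. pre and post list (place, tokens) pairs;
   a marking is an array of token counts indexed by place. *)
type transition = { pre : (int * int) list; post : (int * int) list }

(* A transition is enabled when each input place holds enough tokens. *)
let enabled marking t =
  List.for_all (fun (p, n) -> marking.(p) >= n) t.pre

(* Firing consumes the pre-tokens and produces the post-tokens. *)
let fire marking t =
  let m = Array.copy marking in
  List.iter (fun (p, n) -> m.(p) <- m.(p) - n) t.pre;
  List.iter (fun (p, n) -> m.(p) <- m.(p) + n) t.post;
  m
```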


Book ChapterDOI
23 Sep 1997
TL;DR: Three semantic models for actor computation are defined to provide semantics for descriptions of actor components based on actor theories and it is shown that the semantics is a component algebra homomorphism.
Abstract: We define three semantic models for actor computation starting with a generalization to open systems of Clinger's event diagram model, and forming two abstractions: interaction diagrams and interaction paths. An algebra is defined on each semantic domain with operations for parallel composition, hiding of internal actors, and renaming. We use these models to provide semantics for descriptions of actor components based on actor theories and show that the semantics is a component algebra homomorphism.

Book ChapterDOI
23 Sep 1997
TL;DR: The construction of relational interpretations for an ML-like language with recursive functions and recursive types in a purely operational setting is studied, an adaptation of results of Pitts on relational properties of domains to an operational setting, making use of techniques introduced by Mason, Smith, and Talcott for proving operational equivalence of expressions.
Abstract: Relational interpretations of type systems are a useful tool for establishing properties of programming languages. For languages with recursive types the existence of a relational interpretation is often difficult to establish. The most well-known approach is to pass to a domain theoretic model of the language, using the structure of the domain to define a suitable system of relations. Here we study the construction of relational interpretations for an ML-like language with recursive functions and recursive types in a purely operational setting. The construction is an adaptation of results of Pitts on relational properties of domains to an operational setting, making use of techniques introduced by Mason, Smith, and Talcott for proving operational equivalence of expressions. To illustrate the method we give a relational proof of correctness of the continuation-passing transformation used in some compilers for functional languages.

Book ChapterDOI
23 Sep 1997
TL;DR: A type-based technique for the verification of deadlock-freedom in asynchronous concurrent systems that incorporates an elegant treatment of both divergence and successful termination is presented.
Abstract: We present a type-based technique for the verification of deadlock-freedom in asynchronous concurrent systems. Our general approach is to start with a simple interaction category, in which objects are types containing safety specifications and morphisms are processes. We then use a specification structure to add information to the types so that they specify stronger properties. In this paper the starting point is the category ASProc and the extra type information concerns deadlock-freedom. In the resulting category ASProc_D, combining well-typed processes preserves deadlock-freedom. It is also possible to accommodate non-compositional methods within the same framework. The systems we consider are asynchronous, hence issues of divergence become significant; our approach incorporates an elegant treatment of both divergence and successful termination. As an example, we use our methods to verify the deadlock-freedom of an implementation of the alternating-bit protocol.

Book ChapterDOI
23 Sep 1997
TL;DR: The join-calculus as discussed by the authors is a model for distributed programming languages with migratory features, which allows standard polymorphic ML-like typing and thus an integration in a realistic programming language.
Abstract: The join-calculus is a model for distributed programming languages with migratory features. It is an asynchronous process calculus based on static scope and an explicit notion of locality and failures. It allows standard polymorphic ML-like typing and thus an integration in a realistic programming language. It has a distributed implementation on top of the Caml language. We review here some of the results recently obtained in the join-calculus.

Book ChapterDOI
23 Sep 1997
TL;DR: A first-order modal μ-calculus is presented which uses parameterised maximal fix-points to describe safety and liveness properties of processes and a local model checking proof system is given for deciding if a process satisfies such a formula.
Abstract: We present a first-order modal μ-calculus which uses parameterised maximal fix-points to describe safety and liveness properties of processes. Then we give a local model checking proof system for deciding if a process satisfies such a formula. The processes we consider are those definable in regular value-passing CCS with parameterised recursive definitions. Certain rules in the proof system carry side conditions which leave auxiliary proof obligations of checking properties of the data language.
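
The fixed-point machinery behind such formulas can be sketched for a finite-state system in OCaml (hypothetical toy example; the paper's value-passing processes need the proof system instead): a maximal fixed point is computed by iterating a monotone functional downwards from the full state set.

```ocaml
(* Maximal fixed point over a finite state space, by downward iteration
   starting from "all states". *)
let gfp states step =
  let rec iterate approx =
    let approx' s = step approx s in
    if List.for_all (fun s -> approx s = approx' s) states
    then approx
    else iterate approx'
  in
  iterate (fun _ -> true)

(* A four-state system: 0 -> 1 -> 2 -> 0, and a stuck bad state 3. *)
let states = [0; 1; 2; 3]
let next = function 0 -> [1] | 1 -> [2] | 2 -> [0] | _ -> [3]
let safe s = s <> 3

(* [[nu X. safe /\ [next]X]]: states from which every run stays safe. *)
let inv = gfp states (fun x s -> safe s && List.for_all x (next s))
```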

Book ChapterDOI
23 Sep 1997
TL;DR: A theory and proof rules for the refinement of action systems that communicate via remote procedures based on the data refinement approach are developed and the atomicity refinement of actions is studied.
Abstract: Recently the action systems formalism for parallel and distributed systems has been extended with the procedure mechanism. This gives us a very general framework for describing different communication paradigms for action systems, e.g. remote procedure calls. Action systems come with a design methodology based on the refinement calculus. Data refinement is a powerful technique for refining action systems. In this paper we will develop a theory and proof rules for the refinement of action systems that communicate via remote procedures based on the data refinement approach. The proof rules we develop are compositional so that modular refinement of action systems is supported. As an example we will especially study the atomicity refinement of actions. This is an important refinement strategy, as it potentially increases the degree of parallelism in an action system.

Book ChapterDOI
23 Sep 1997
TL;DR: In this article, a formal proof for a formula that can be seen as a simple but meaningful program specification is presented, and the computational behaviour of the corresponding term is analyzed by interpreting it as a (higher-order communicating) process formed by distinct subprocesses which co-operate in different ways, producing different results, according to the reduction strategy used.
Abstract: λ^Sym_PA is a natural deduction system for Peano Arithmetic that was developed in order to provide a basis for the programming-with-proofs paradigm in a classical logic setting. In the paper we analyze one of its main features: non-confluence. After looking at which rules can cause non-confluence, we develop in the system a formal proof for a formula that can be seen as a simple but meaningful program specification. The computational behaviour of the corresponding term will be analysed by interpreting it as a (higher-order communicating) process formed by distinct subprocesses which co-operate in different ways, producing different results, according to the reduction strategy used. We also show how to restrict the system in order to get confluence without losing its computational features. The restricted system enables us to argue for the expressive power of symmetric and non-deterministic calculi like λ^Sym_PA.

Book ChapterDOI
23 Sep 1997
TL;DR: A semantic proof of the conservativity of higher-order action calculi over action calculi is given, and a precise connection with Moggi's computational lambda calculus and notions of computation is made.
Abstract: Milner introduced action calculi as a framework for representing models of interactive behaviour. He also introduced the higher-order action calculi, which add higher-order features to the basic setting. We present type theories for action calculi and higher-order action calculi, and give the categorical models of the higher-order calculi. As applications, we give a semantic proof of the conservativity of higher-order action calculi over action calculi, and a precise connection with Moggi's computational lambda calculus and notions of computation.

Book ChapterDOI
23 Sep 1997
TL;DR: An algorithm for simplifying quantified types in the presence of subtyping is presented and it is proved it is sound and complete for non-recursive and recursive types.
Abstract: Many type inference and program analysis systems include notions of subtyping and parametric polymorphism. When used together, these two features induce equivalences that allow types to be simplified by eliminating quantified variables. Eliminating variables both improves the readability of types and the performance of algorithms whose complexity depends on the number of type variables. We present an algorithm for simplifying quantified types in the presence of subtyping and prove it is sound and complete for non-recursive and recursive types.

Book ChapterDOI
23 Sep 1997
TL;DR: The main result is that (w.r.t. the possibility of replacing safely a lazy application by a strict one) the strictness and totality information given by this system is equivalent to the information given by two separate systems: one for strictness, and one for totality.
Abstract: In this paper we present a revised and extended version of the strictness and totality type assignment system introduced by Solberg, Nielson and Nielson in the Static Analysis Symposium '94. Our main result is that (w.r.t. the possibility of replacing safely a lazy application by a strict one) the strictness and totality information given by this system is equivalent to the information given by two separate systems: one for strictness, and one for totality. This result is interesting from both a theoretical (understanding of the relations between strictness and totality) and a practical (more efficient checking and inference algorithms) point of view. Moreover we prove that both the system for strictness and the system for totality have a sound and complete inclusion relation between types w.r.t. the semantics induced by the term model of a language including a convergence to weak head normal form test at higher types.

Book ChapterDOI
23 Sep 1997
TL;DR: A model-checking method based on approximations and symbolic representations to compute the set of states that satisfy a temporal formula and a verification tool is developed based on this method.
Abstract: Real-time systems can be described using the timed automata of Alur and Dill. Although there exist model-checking algorithms for timed automata, the problem is intractable (PSPACE-complete). In this paper, we propose a model-checking method based on approximations and symbolic representations. We recursively refine over- and underapproximations to compute the set of states that satisfy a temporal formula. The approximate sets are represented using a combination of BDDs (Binary Decision Diagrams) and DBMs (Difference Bound Matrices). We have developed a verification tool based on this method. As a case study, we check safety and liveness properties of an Ethernet protocol.

Book ChapterDOI
23 Sep 1997
TL;DR: This work suggests the use of recursive Böhm tree combinators as a machine-language for reactive programming by considering variations on this basic idea, and generalisations of finite-state transducers suggested by the general formalism of regular Böhm trees.
Abstract: We present a uniform translation from finite-state transducers to regular Böhm tree presentations. The corresponding Böhm tree represents directly the trace semantics of all finite and infinite behaviours of the given transducer. We consider variations on this basic idea, and generalisations of finite-state transducers suggested by the general formalism of regular Böhm trees. This work suggests the use of recursive Böhm tree combinators as a machine-language for reactive programming.

Book ChapterDOI
Erik Poll1
23 Sep 1997
TL;DR: This work presents an extension F_width of system F with a restricted form of subtyping — width-subtyping — on record types, that does provide so-called polymorphic updates, and shows it is still possible to give a PER model for this system.
Abstract: It is a well-known problem that F≤ — the polymorphic lambda calculus F extended with subtyping — does not provide so-called polymorphic updates, and that the standard PER model for F≤ does not provide interpretations for these operations. The polymorphic updates are interesting because they play an important role in some type-theoretic models of object-oriented languages. We present an extension F_width of system F with a restricted form of subtyping — width-subtyping — on record types, that does provide these operations. The main result is that we show it is still possible to give a PER model for this system.
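
Width-subtyping on records can be approximated in present-day OCaml with structural object types and explicit coercion (an illustration only, not the calculus of the paper):

```ocaml
(* A function that only needs the x method of its argument. *)
let get_x (o : < x : int >) = o#x

(* An object with more methods than the interface above requires. *)
let wide = object
  method x = 1
  method y = 2
end

(* The coercion forgets the extra method y, as width-subtyping allows. *)
let v = get_x (wide :> < x : int >)
```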

Book ChapterDOI
23 Sep 1997
TL;DR: This paper demonstrates that under an assumption on the arities of a higher- order calculus (analogous to the assumption of simple types in the λ-calculus), β-reduction in higher-order action calculi is strongly normalising.
Abstract: The framework of action calculi accommodates a variety of disciplines of interaction and computation. A general theory of action calculi is under development; each particular action calculus — such as the π-calculus — will possess also a specific theory. It has previously been shown that any action calculus can be extended in a conservative manner to higher-order, thus allowing its actions to be encapsulated and treated as data. The dynamics of each higher-order calculus includes β-reduction, analogous to the λ-calculus. This paper demonstrates that under an assumption on the arities of a higher-order calculus (analogous to the assumption of simple types in the λ-calculus), β-reduction in higher-order action calculi is strongly normalising.

Book ChapterDOI
23 Sep 1997
TL;DR: In this article, a modal connective is proposed to formalise validity in a logical framework for programming consequence relation-based proof systems, which is suitable for natural deduction style presentations based on truth consequence.
Abstract: Logical frameworks, formal systems for programming consequence-relation-based proof systems, are well known. The notations that have been proposed are best suited to natural deduction style presentations based on truth consequence. We develop a conservative extension of a typical logical framework providing a modal connective which we can use to formalise validity. We argue that this extension is sensible, and provide example encodings of non-standard logics in its terms.

Book ChapterDOI
23 Sep 1997
TL;DR: Abramsky's result is extended by proving that the Lindenbaum algebra generated by the infinitary logic is a completely distributive lattice dual to the same SFP-domain.
Abstract: The Lindenbaum algebra generated by the Abramsky finitary logic is a distributive lattice dual to an SFP-domain obtained as a solution of a recursive domain equation. We extend Abramsky's result by proving that the Lindenbaum algebra generated by the infinitary logic is a completely distributive lattice dual to the same SFP-domain. As a consequence, soundness and completeness of the infinitary logic are obtained for the class of finitary transition systems. A corollary of this result is that the same holds for the infinitary Hennessy-Milner logic.

Book ChapterDOI
Atsushi Ohori1
23 Sep 1997
TL;DR: This work presents a method for coherent transformation of an ML style polymorphic language into an explicitly typed calculus, analyzes the existing methods for compiling record calculus and unboxed calculus, and develops a framework for type based specialization of polymorphism.
Abstract: Flexibility of programming and efficiency of program execution are two important features of a programming language. Unfortunately, however, there is an inherent conflict between these features in the design and implementation of a modern statically typed programming language. Flexibility is achieved by a high degree of polymorphism, which is based on generic primitives in an abstract model of computation, while efficiency requires optimal use of low-level primitives specialized to individual data structures. The motivation of this work is to reconcile these two features by developing a mechanism for specializing polymorphic primitives based on static type information. We first present a method for coherent transformation of an ML style polymorphic language into an explicitly typed calculus. We then analyze the existing methods for compiling record calculus and unboxed calculus, extract their common structure, and develop a framework for type-based specialization of polymorphism.