
Showing papers presented at "International Symposium on Theoretical Aspects of Computer Software in 2001"


Journal ArticleDOI
29 Oct 2001
TL;DR: Nominal Logic is introduced, a version of first-order many-sorted logic with equality containing primitives for renaming via name-swapping, for freshness of names, and for name-binding, and its axioms express properties of these constructs satisfied by the FM-sets model of syntax involving binding.
Abstract: This paper formalises within first-order logic some common practices in computer science to do with representing and reasoning about syntactical structures involving lexically scoped binding constructs. It introduces Nominal Logic, a version of first-order many-sorted logic with equality containing primitives for renaming via name-swapping, for freshness of names, and for name-binding. Its axioms express properties of these constructs satisfied by the FM-sets model of syntax involving binding, which was recently introduced by the author and M.J. Gabbay and makes use of the Fraenkel-Mostowski permutation model of set theory. Nominal Logic serves as a vehicle for making two general points. First, name-swapping has much nicer logical properties than more general, non-bijective forms of renaming while at the same time providing a sufficient foundation for a theory of structural induction/recursion for syntax modulo α-equivalence. Secondly, it is useful for the practice of operational semantics to make explicit the equivariance property of assertions about syntax - namely that their validity is invariant under name-swapping.
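To make the paper's central point concrete, here is a minimal Python sketch (not from the paper) of name-swapping on raw λ-terms: a transposition (a b) is applied uniformly to every name occurrence, binding or bound, so it is bijective and respects α-equivalence classes in a way that non-bijective renaming does not.

```python
# Minimal illustration (not the paper's code): applying the transposition
# (a b) to a term. Terms are ('var', x), ('app', t, u), or ('lam', x, t).
def swap(a, b, t):
    """Apply the transposition (a b) to every name in term t, binders included."""
    sw = lambda x: b if x == a else (a if x == b else x)
    if t[0] == 'var':
        return ('var', sw(t[1]))
    if t[0] == 'app':
        return ('app', swap(a, b, t[1]), swap(a, b, t[2]))
    if t[0] == 'lam':
        return ('lam', sw(t[1]), swap(a, b, t[2]))
    raise ValueError(t)

# lam a. a b  --(a c)-->  lam c. c b : binder and bound occurrences move together
t = ('lam', 'a', ('app', ('var', 'a'), ('var', 'b')))
print(swap('a', 'c', t))  # ('lam', 'c', ('app', ('var', 'c'), ('var', 'b')))
```

Note that a swap is its own inverse, one of the "nicer logical properties" the abstract alludes to; capture-avoiding substitution has no such symmetry.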

418 citations


Journal ArticleDOI
29 Oct 2001
TL;DR: This paper considers LTL with regular valuations: the set of configurations satisfying an atomic proposition can be an arbitrary regular language and claims that the model-checking algorithms provide a general, unifying and efficient framework for solving them.
Abstract: Recent works have proposed pushdown systems as a tool for analyzing programs with (recursive) procedures, and the model-checking problem for LTL has received special attention. However, all these works impose a strong restriction on the possible valuations of atomic propositions: whether a configuration of the pushdown system satisfies an atomic proposition or not can only depend on the current control state of the pushdown automaton and on its topmost stack symbol. In this paper we consider LTL with regular valuations: the set of configurations satisfying an atomic proposition can be an arbitrary regular language. The model-checking problem is solved via two different techniques, with an eye on efficiency. The resulting algorithms are polynomial in certain measures of the problem which are usually small, but can be exponential in the size of the problem instance. However, we show that this exponential blowup is inevitable. The extension to regular valuations allows us to model problems in different areas; for instance, we show an application to the analysis of systems with checkpoints. We claim that our model-checking algorithms provide a general, unifying and efficient framework for solving them.

174 citations


Book ChapterDOI
29 Oct 2001
TL;DR: This work compares two views of symmetric cryptographic primitives in the context of the systems that use them and establishes the soundness of the formal definition of equivalence of systems with respect to eavesdroppers.
Abstract: We compare two views of symmetric cryptographic primitives in the context of the systems that use them. We express those systems in a simple programming language; each of the views yields a semantics for the language. One of the semantics treats cryptographic operations formally (that is, symbolically). The other semantics is more detailed and computational; it treats cryptographic operations as functions on bitstrings. Each semantics leads to a definition of equivalence of systems with respect to eavesdroppers. We establish the soundness of the formal definition with respect to the computational one. This result provides a precise computational justification for formal reasoning about security against eavesdroppers.

161 citations


Proceedings Article
29 Oct 2001
TL;DR: Boxed Ambients are a variant of Mobile Ambients that result from dropping the open capability and providing new primitives for ambient communication while retaining the constructs in and out for mobility.
Abstract: Boxed Ambients are a variant of Mobile Ambients that result from (i) dropping the open capability and (ii) providing new primitives for ambient communication while retaining the constructs in and out for mobility. The new model of communication is faithful to the principles of distribution and location-awareness of Mobile Ambients, and complements the constructs for Mobile Ambient mobility with finer-grained mechanisms for ambient interaction.

136 citations


Proceedings Article
29 Oct 2001
TL;DR: A central theme of this paper is the combination of a logical notion of freshness with inductive and coinductive definitions of properties.
Abstract: We present a logic that can express properties of freshness, secrecy, structure, and behavior of concurrent systems. In addition to standard logical and temporal operators, our logic includes spatial operations corresponding to composition, local name restriction, and a primitive fresh-name quantifier. Properties can also be defined by recursion; a central theme of this paper is then the combination of a logical notion of freshness with inductive and coinductive definitions of properties.

131 citations


Book ChapterDOI
29 Oct 2001
TL;DR: This article combines a semi-decision procedure for recurrence with a semi-decision method for length-boundedness of paths, obtaining an automatic verification method for progress properties of linear and polynomial hybrid automata that may only fail on pathological, practically uninteresting cases.
Abstract: Hybrid automata have been introduced in both control engineering and computer science as a formal model for the dynamics of hybrid discrete-continuous systems. While computability issues concerning safety properties have been extensively studied, liveness properties have remained largely uninvestigated. In this article, we investigate decidability of state recurrence and of progress properties. First, we show that state recurrence and progress are in general undecidable for polynomial hybrid automata. Then, we demonstrate that they are closely related for hybrid automata subject to a simple model of noise, even though these automata are infinite-state systems. Based on this, we augment a semi-decision procedure for recurrence with a semi-decision method for length-boundedness of paths in such a way that we obtain an automatic verification method for progress properties of linear and polynomial hybrid automata that may only fail on pathological, practically uninteresting cases. These cases are such that satisfaction of the desired progress property crucially depends on the complete absence of noise, a situation unlikely to occur in real hybrid systems.

51 citations


Book ChapterDOI
29 Oct 2001
TL;DR: It is shown how control-flow-based program transformations in functional languages can be proven correct: building on a correctness proof of defunctionalization, two transformations - flow-based inlining and lightweight defunctionalization - are proven correct.
Abstract: We show how control-flow-based program transformations in functional languages can be proven correct. The method relies upon "defunctionalization," a mapping from a higher-order language to a first-order language. We first show that defunctionalization is correct; using this proof and common semantic techniques, we then show how two program transformations - flow-based inlining and lightweight defunctionalization - can be proven correct.
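As a concrete point of reference, here is a hypothetical Python sketch of defunctionalization itself (none of these names come from the paper): each function value is replaced by a first-order tagged record carrying its free variables, and every call site goes through a single first-order `apply` dispatcher.

```python
# Hypothetical sketch of defunctionalization (not the paper's code).
# Higher-order original: `filter_ho` receives a function value.
def filter_ho(p, xs):
    return [x for x in xs if p(x)]

# Defunctionalized: one constructor per lambda; captured variables
# become fields of the tagged record, and calls go through `apply`.
def apply(fun, x):
    if fun[0] == 'Above':          # represents  lambda x: x > n
        return x > fun[1]
    if fun[0] == 'DivisibleBy':    # represents  lambda x: x % n == 0
        return x % fun[1] == 0
    raise ValueError(fun)

def filter_fo(fun, xs):
    return [x for x in xs if apply(fun, x)]

xs = [1, 2, 3, 4, 5, 6]
print(filter_fo(('Above', 3), xs))        # [4, 5, 6]
print(filter_fo(('DivisibleBy', 2), xs))  # [2, 4, 6]
```

The correctness claim the paper proves semantically amounts to: for every such program, the first-order version computes the same results as the higher-order one.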

49 citations


Proceedings Article
29 Oct 2001
TL;DR: This work addresses the problem of implementing the replication operator efficiently in the solos calculus - a calculus of mobile processes without prefix - and shows that nested occurrences of replication can be avoided, that the size of replicated terms can be limited to three particles, and that the usual unfolding semantics of replication can be replaced by three simple reduction rules.

Abstract: We address the problem of implementing the replication operator efficiently in the solos calculus - a calculus of mobile processes without prefix. This calculus is expressive enough to admit an encoding of the whole fusion calculus and thus the π-calculus. We show that nested occurrences of replication can be avoided, that the size of replicated terms can be limited to three particles, and that the usual unfolding semantics of replication can be replaced by three simple reduction rules. To illustrate the results and show how the calculus can be efficiently implemented we present a graphic representation of agents in the solos calculus, adapting ideas from interaction diagrams and pi-nets.

30 citations


Book ChapterDOI
29 Oct 2001
TL;DR: The authors propose a relational model of heap structure capable of expressing sharing and mutual influence between objects; a declarative specification style that works in the presence of collaboration; and a tool-supported constraint analysis to expose problems in a diagram that captures, at a design level, a pattern of interaction.
Abstract: The state of the practice in object-oriented software development has moved beyond reuse of code to reuse of conceptual structures such as design patterns. This paper draws attention to some difficulties that need to be solved if this style of development is to be supported by formal methods. In particular, the centrality of object interactions in many designs makes traditional reasoning less useful, since classes cannot be treated fruitfully in isolation from one another. We propose some ideas towards dealing with these issues: a relational model of heap structure capable of expressing sharing and mutual influence between objects; a declarative specification style that works in the presence of collaboration; and a tool-supported constraint analysis to expose problems in a diagram that captures, at a design level, a pattern of interaction. We illustrate these ideas with an example taken from a program used in the formatting of this paper.

29 citations


Journal ArticleDOI
Philip Wadler1
29 Oct 2001
TL;DR: It is shown that, in the presence of Reynolds's parametricity property, this is indeed the case for propositions corresponding to inductive definitions of naturals, products, sums, and fixpoint types.

Abstract: The second-order polymorphic lambda calculus, F2, was independently discovered by Girard and Reynolds. Girard additionally proved a representation theorem: every function on natural numbers that can be proved total in second-order intuitionistic propositional logic, P2, can be represented in F2. Reynolds additionally proved an abstraction theorem: for a suitable notion of logical relation, every term in F2 takes related arguments into related results. We observe that the essence of Girard's result is a projection from P2 into F2, that the essence of Reynolds's result is an embedding of F2 into P2, and that the Reynolds embedding followed by the Girard projection is the identity. The Girard projection discards all first-order quantifiers, so it seems unreasonable to expect that the Girard projection followed by the Reynolds embedding should also be the identity. However, we show that in the presence of Reynolds's parametricity property this is indeed the case, for propositions corresponding to inductive definitions of naturals, products, sums, and fixpoint types.

28 citations


Book ChapterDOI
29 Oct 2001
TL;DR: It is shown that bisimulation, simulation, and in fact all relations between bisimulation and trace inclusion are undecidable for lossy channel systems (and for lossy vector addition systems).
Abstract: Lossy channel systems are systems of finite state automata that communicate via unreliable unbounded fifo channels. Today the main open question in the theory of lossy channel systems is whether bisimulation is decidable. We show that bisimulation, simulation, and in fact all relations between bisimulation and trace inclusion are undecidable for lossy channel systems (and for lossy vector addition systems).

Book ChapterDOI
29 Oct 2001
TL;DR: A notion of bisimulation for graph rewriting systems is introduced, allowing one to prove observational equivalence for dynamically evolving graphs and networks, and an up-to technique simplifying bisimilarity proofs is introduced.
Abstract: We introduce a notion of bisimulation for graph rewriting systems, allowing us to prove observational equivalence for dynamically evolving graphs and networks. We use the framework of synchronized graph rewriting with mobility which we describe in two different, but operationally equivalent ways: on graphs defined as syntactic judgements and by using tile logic. One of the main results of the paper says that bisimilarity for synchronized graph rewriting is a congruence whenever the rewriting rules satisfy the basic source property. Furthermore we introduce an up-to technique simplifying bisimilarity proofs and use it in an example to show the equivalence of a communication network and its specification.

Book ChapterDOI
29 Oct 2001
TL;DR: An operational model for socket programming with a substantial fraction of UDP and ICMP, including loss and failure, is given; it is not tied to a particular programming language but can be used with any language equipped with an operational semantics for system calls.
Abstract: Network programming is notoriously hard to understand: one has to deal with a variety of protocols (IP, ICMP, UDP, TCP etc.), concurrency, packet loss, host failure, timeouts, the complex sockets interface to the protocols, and subtle portability issues. Moreover, the behavioural properties of operating systems and the network are not well documented. A few of these issues have been addressed in the process calculus and distributed algorithm communities, but there remains a wide gulf between what has been captured in semantic models and what is required for a precise understanding of the behaviour of practical distributed programs that use these protocols. In this paper we demonstrate (in a preliminary way) that the gulf can be bridged. We give an operational model for socket programming with a substantial fraction of UDP and ICMP, including loss and failure. The model has been validated by experiment against actual systems. It is not tied to a particular programming language, but can be used with any language equipped with an operational semantics for system calls - here we give such a language binding for an OCaml fragment. We illustrate the model with a few small network programs.
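As a concrete reference point for the sockets interface the paper models, here is a minimal loopback UDP exchange in Python (illustrative only; the paper's semantics and its OCaml binding are not reproduced here).

```python
# A minimal loopback UDP exchange, purely to ground the discussion of
# the sockets interface; over a real network the datagram may be lost,
# which is exactly the behaviour the paper's model makes precise.
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
addr = recv_sock.getsockname()
recv_sock.settimeout(1.0)          # UDP gives no delivery guarantee

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b'ping', addr)

data, src = recv_sock.recvfrom(2048)
print(data)                        # b'ping' on loopback

send_sock.close()
recv_sock.close()
```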

Book ChapterDOI
29 Oct 2001
TL;DR: Two modal typing systems with the approximation modality are presented, which has been proposed by the author to capture self-references involved in computer programs and their specifications; decidability of the corresponding modal logic implies the decidability of type inhabitance in the typing systems.

Abstract: We present two modal typing systems with the approximation modality, which has been proposed by the author to capture self-references involved in computer programs and their specifications. The systems are based on the simple and the F-semantics of types, respectively, and correspond to the same modal logic, which is considered the intuitionistic version of the logic of provability. We also show Kripke completeness of the modal logic and its decidability, which implies the decidability of type inhabitance in the typing systems.

Book ChapterDOI
29 Oct 2001
TL;DR: The weakest congruence that preserves the predicate "any-lock" which distinguishes those systems that can stop executing visible actions from those that cannot is introduced and it is shown that there is no minimum (least) characterisation for the CSP failures-divergences equivalence.
Abstract: In process algebras the weakest congruences that preserve interesting properties of systems are of theoretical and practical importance. A system can stop executing visible actions in two ways: by deadlocking or livelocking. The weakest deadlock-preserving congruence was published in [20]. The weakest livelock-preserving congruence and the weakest congruence that preserves all traces of visible actions leading to a livelock were published in [17]. In this paper we will equate deadlock and livelock. We introduce the weakest congruence that preserves the predicate "any-lock" which distinguishes those systems that can stop executing visible actions from those that cannot. We also present the weakest congruence that preserves all traces after which the system can stop executing visible actions. Finally, we give two simple weakest-congruence characterisations for the CSP failures-divergences equivalence, one of which is a minimal characterisation in a well-defined sense. However, we also show that there is no minimum (least) characterisation.

Book ChapterDOI
29 Oct 2001
TL;DR: The modelling of a special class of timed automata, named p-automata, in the proof assistant Coq is presented; the paper emphasizes the specific features of Coq which have been used, in particular dependent types and tactics based on computational reflection.

Abstract: This paper presents the modelling of a special class of timed automata, named p-automata, in the proof assistant Coq. This work was performed in the framework of the CALIFE project, which aims to build a general platform for specification, validation and test of critical algorithms involved in telecommunications. This paper does not contain new theoretical results but explains how to combine and adapt known techniques in order to build an environment dedicated to a class of problems. It emphasizes the specific features of Coq which have been used, in particular dependent types and tactics based on computational reflection.

Book ChapterDOI
Kazunori Ueda1
29 Oct 2001
TL;DR: In this paper, the author proposes a capability type system for recyclic concurrent programs, which allows concurrent reading of data structures via controlled aliasing.
Abstract: The use of types to deal with access capabilities of program entities is becoming increasingly popular. In concurrent logic programming, the first attempt was made in Moded Flat GHC in 1990, which gave polarity structures (modes) to every variable occurrence and every predicate argument. Strong moding turned out to play fundamental roles in programming, implementation and the in-depth understanding of constraint-based concurrent computation. The moding principle guarantees that each variable is written only once and encourages capability-conscious programming. Furthermore, it gives less generic modes to programs that discard or duplicate data, thus providing the view of "data as resources." A simple linearity system built upon the mode system distinguishes variables read only once from those read possibly many times, enabling compile-time garbage collection. Compared to linear types studied in other programming paradigms, the primary issue in constraint-based concurrency has been to deal with logical variables and the highly non-strict data structures they induce. In this paper, we put our resource-consciousness one step forward and consider a class of 'ecological' programs which recycle or return all the resources given to them while allowing concurrent reading of data structures via controlled aliasing. This completely recyclic subset forces us to think more about resources, but the resulting programs enjoy a high symmetry which we believe has more than aesthetic implications for our programming practice in general. The type system supporting recyclic concurrent programming gives a [-1, +1] capability to each occurrence of variable and function symbols (constructors), where positive/negative values mean read/write capabilities, respectively, and fractions mean non-exclusive read/write paths. The capabilities are intended to be statically checked or reconstructed so that one can tell the polarity and exclusiveness of each piece of information handled by concurrent processes. The capability type system refines and integrates the mode system and the linearity system for Moded Flat GHC. Its arithmetic formulation contributes to its simplicity. The execution of a recyclic program proceeds so that every variable has zero-sum capability and the resources (i.e., constructors weighted by their capabilities) a process absorbs match the resources it emits. Constructors accessed by a process with an exclusive read capability can be reused for other purposes. The first half of this paper is devoted to a tutorial introduction to constraint-based concurrency in the hope that it will encourage cross-fertilization of different concurrency formalisms.

Book ChapterDOI
29 Oct 2001
TL;DR: It is shown that simple structural conditions on proofs of convergence of equational programs, in the intrinsic-theories verification framework of [16], correspond to resource bounds on program execution, providing a user-transparent method for certifying the computational complexity of functional programs.
Abstract: We show that simple structural conditions on proofs of convergence of equational programs, in the intrinsic-theories verification framework of [16], correspond to resource bounds on program execution. These conditions may be construed as reflecting finitistic-predicative reasoning. The results provide a user-transparent method for certifying the computational complexity of functional programs. In particular, we define natural notions of data-positive formulas and of data-predicative derivations, and show that restricting induction to data-positive formulas captures precisely the primitive recursive functions, data-predicative derivations characterize the Kalmar-elementary functions, and the combination of both yields the poly-time functions.

Book ChapterDOI
Makoto Hamana1
29 Oct 2001
TL;DR: A logic programming language based on Fiore, Plotkin and Turi's binding algebras is given, with a type theory reflecting the categorical semantics, an operational semantics by SLD-resolution, and a unification algorithm for binding terms.

Abstract: We give a logic programming language based on Fiore, Plotkin and Turi's binding algebras. In this language, we can use not only first-order terms but also terms involving variable binding. The aim of this language is similar to Nadathur and Miller's λProlog, which can also deal with binding structure by introducing λ-terms in higher-order logic. But the notion of binding used here is finer in a sense than the usual λ-binding. We explicitly manage names used for binding and treat α-conversion with respect to them. Also an important difference is the form of application related to β-conversion, i.e. we only allow the form (M x), where x is an (object) variable, instead of the usual application (M N). This notion of binding comes from the semantics of binding by the category of presheaves. We first give a type theory which reflects this categorical semantics. Then we proceed along the line of first-order logic programming languages, namely, we give a logic of this language, an operational semantics by SLD-resolution and a unification algorithm for binding terms.

Book ChapterDOI
29 Oct 2001
TL;DR: The syntax, the operational semantics, and the type system of the πD-calculus are presented, with examples to demonstrate its expressiveness.

Abstract: We propose the πD-calculus, a process calculus that can flexibly model fine-grained control of resource access in distributed computation, with a type system that statically prevents access violations. Access control of resources is important in distributed computation, where resources themselves or their contents may be transmitted from one domain to another and thereby vital resources may be exposed to unauthorized processes. In πD, a notion of hierarchical domains is introduced as an abstraction of protection domains, and considered as the unit of access control. Domains are treated as first-class values and can be created dynamically. In addition, the hierarchical structure of domains can be extended dynamically as well. These features are the source of the expressiveness of πD. This paper presents the syntax, the operational semantics, and the type system of πD, with examples to demonstrate its expressiveness.

Book ChapterDOI
29 Oct 2001
TL;DR: In this paper, a labelled tableau calculus for propositional BI logic is proposed, which can be viewed as a merging of intuitionistic logic and multiplicative intuitionistic linear logic.
Abstract: In this paper, we study proof-search in the propositional BI logic that can be viewed as a merging of intuitionistic logic and multiplicative intuitionistic linear logic. With its underlying sharing interpretation, BI has been recently used for logic programming or reasoning about mutable data structures. We propose a labelled tableau calculus for BI, the use of labels making it possible to generate countermodels. We show that, from a given formula A, a non-redundant tableau construction procedure terminates and yields either a tableau proof of A or a countermodel of A in terms of the Kripke resource monoid semantics. Moreover, we show the finite model property for BI with respect to this semantics.

Book ChapterDOI
29 Oct 2001
TL;DR: This work proves strong normalization of the second-order λµ-calculus with Parigot's symmetric structural reduction rules.

Abstract: Parigot suggested symmetric structural reduction rules for application to µ-abstraction in [9] to ensure unique representation of data types. We prove strong normalization of the second-order λµ-calculus with these rules.

Book ChapterDOI
29 Oct 2001
TL;DR: This work describes the specification and implementation of Unison - a file synchronizer engineered for portability, speed, and robustness, with thousands of daily users, and presents a precise high-level specification of the behavior, an idealized implementation, and the outline of a proof that the implementation satisfies the specification.
Abstract: File synchronizers are tools that reconcile disconnected modifications to replicated directory structures. Like other replication and reconciliation facilities provided by modern operating systems and middleware layers, trustworthy synchronizers are notoriously difficult to build: they must deal correctly with both the semantic complexities of file systems and the unpredictable failure modes arising from distributed operation. On the other hand, synchronizers are simpler than most of their relatives in that they operate as stand-alone, user-level utilities, whose intended behavior is relatively easy to isolate from the other functions of the system. This combination of subtlety and isolation makes synchronizers attractive candidates for precise mathematical specification. We describe the specification and implementation of Unison - a file synchronizer engineered for portability, speed, and robustness, with thousands of daily users. Unison's code base and its specification have evolved in parallel, over several years, and each has strongly influenced the other. We present a precise high-level specification of Unison's behavior, an idealized implementation, and the outline of a proof (which we have formalized using Coq) that the implementation satisfies the specification.
We begin with a straightforward definition of the system's core behavior - propagation of changes and detection of conflicting changes - then refine it to take into account the possibility of failures during reconciliation, then refine it again to cover synchronization of "metadata" such as permissions and modification times. In each part, we address two critical issues: first, the relation between the informal expectations of users and our mathematical specification, and, second, the relation between our idealized implementation and the actual code base (i.e., the abstractions needed to obtain a tractable mathematical object from a real-world systems program, and the extent to which studying this idealized implementation sheds useful light on the real one).
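The core behavior the specification starts from - propagate a change made on one side, flag a conflict when both sides changed - can be sketched in a few lines of Python. This is a hypothetical illustration, not Unison's code: replicas are modelled as plain dicts from path to content, with the state at the last sync as the base.

```python
# Hypothetical sketch (not Unison's code) of the reconciliation core:
# per path, if only one replica changed since the last sync, propagate
# its version; if both changed to different contents, report a conflict.
def reconcile(base, a, b):
    """base: contents at last sync; a, b: current replicas (dicts path -> content)."""
    merged, conflicts = {}, []
    for path in set(base) | set(a) | set(b):
        va, vb, v0 = a.get(path), b.get(path), base.get(path)
        if va == vb:           # agreement (includes both-deleted)
            if va is not None:
                merged[path] = va
        elif va == v0:         # only b changed: propagate b (possibly a deletion)
            if vb is not None:
                merged[path] = vb
        elif vb == v0:         # only a changed: propagate a
            if va is not None:
                merged[path] = va
        else:                  # both changed differently: conflict
            conflicts.append(path)
    return merged, conflicts

base = {'f': 1, 'g': 1}
a    = {'f': 2, 'g': 1}       # a edited f
b    = {'f': 1}               # b deleted g
print(reconcile(base, a, b))  # ({'f': 2}, [])
```

The paper's refinements - failures during reconciliation, metadata - are exactly what this naive sketch leaves out.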

Book ChapterDOI
Keye Martin1
29 Oct 2001
TL;DR: A notion of complexity for the renee equation is introduced and used to develop a method for analyzing search algorithms which enables a uniform treatment of techniques that manipulate discrete data as well as those which manipulate continuous data.
Abstract: We introduce a notion of complexity for the renee equation and use it to develop a method for analyzing search algorithms which enables a uniform treatment of techniques that manipulate discrete data, like linear and binary search of lists, as well as those which manipulate continuous data, like methods for zero finding in numerical analysis.
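The flavour of this uniformity can be illustrated in Python (this sketch is not the paper's formalism): binary search on discrete data and bisection zero-finding on continuous data are instances of one interval-halving scheme.

```python
# Illustration only: one halving scheme covering both a discrete and a
# continuous search, the kind of uniformity the complexity notion targets.
def halve(lo, hi, go_left, stop):
    while not stop(lo, hi):
        if isinstance(lo, int):          # discrete interval [lo, hi)
            mid = (lo + hi) // 2
            lo, hi = (lo, mid) if go_left(mid) else (mid + 1, hi)
        else:                            # continuous interval [lo, hi]
            mid = (lo + hi) / 2
            lo, hi = (lo, mid) if go_left(mid) else (mid, hi)
    return lo

# Discrete: index of the first element >= x in a sorted list.
def lower_bound(xs, x):
    return halve(0, len(xs), lambda m: xs[m] >= x, lambda lo, hi: lo >= hi)

# Continuous: zero of an increasing function, to within tol.
def bisect_zero(f, lo, hi, tol=1e-9):
    return halve(lo, hi, lambda m: f(m) > 0, lambda lo, hi: hi - lo < tol)

print(lower_bound([1, 3, 5, 7], 4))  # 2
print(bisect_zero(lambda x: x * x - 2.0, 0.0, 2.0))  # ~ sqrt(2)
```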

Book ChapterDOI
29 Oct 2001
TL;DR: This paper demonstrates the generation of a linear time query processing algorithm from the constructive proof of Higman's lemma described by Murthy and Russell (IEEE LICS 1990); the actual construction of such an algorithm has, until now, not been published elsewhere.
Abstract: This paper demonstrates the generation of a linear time query processing algorithm based on the constructive proof of Higman's lemma described by Murthy-Russell (IEEE LICS 1990). A linear time evaluation of a fixed disjunctive monadic query in an indefinite database on a linearly ordered domain, first posed by Van der Meyden (ACM PODS 1992), is used as an example. Van der Meyden showed the existence of a linear time algorithm, but an actual construction has, until now, not been published elsewhere.
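For context (this example is mine, not the paper's): Higman's lemma concerns the subword - scattered subsequence - ordering, and the basic primitive underlying such query processing, testing whether one word embeds in another, is itself linear time.

```python
# Illustration only: the subword (scattered-subsequence) ordering that
# Higman's lemma is about admits a linear-time membership test.
def is_subword(u, w):
    """True iff u embeds in w as a scattered subsequence, in O(len(w)) time."""
    it = iter(w)
    # `c in it` advances the shared iterator past each match, so every
    # character of w is examined at most once across the whole loop.
    return all(c in it for c in u)

print(is_subword("ace", "abcde"))  # True
print(is_subword("aec", "abcde"))  # False
```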

Book ChapterDOI
29 Oct 2001
TL;DR: This work introduces type systems which ensure that programs use the forwarding mechanism in a coordinated way, ruling out various run time hazards.
Abstract: We consider processor architectures where communication of values is achieved through operand queues instead of registers. Explicit forwarding tags in an instruction's code denote the source of its operands and the destination of its result. We give operational models for sequential and distributed execution, where no assumptions are made about the relative speed of functional units. We introduce type systems which ensure that programs use the forwarding mechanism in a coordinated way, ruling out various run-time hazards. Deadlocks due to operand starvation, operand queue mismatches, non-determinism due to race conditions and deadlock due to the finite length of operand queues are eliminated. Types are based on the shape of operand queue configurations, abstracting from the value of an operand and from the order of items in operand queues. Extending ideas from the literature relating program fragments adjacent in the control flow graph, the type system is generalised to forwarding across basic blocks.

Book ChapterDOI
29 Oct 2001
TL;DR: A type theory with infinitary intersection and union types for the lazy λ-calculus is introduced, with types viewed as upper closed subsets of a Scott domain.

Abstract: A type theory with infinitary intersection and union types for the lazy λ-calculus is introduced. Types are viewed as upper closed subsets of a Scott domain. Intersection and union type constructors are interpreted as the set-theoretic intersection and union, respectively, even when they are not finite. The assignment of types to λ-terms extends naturally the basic type assignment system. We prove soundness and completeness using a generalization of Abramsky's finitary domain logic for applicative transition systems.