
Showing papers in "Information & Computation in 2009"


Journal ArticleDOI
TL;DR: The proof-theoretic presentation of coinductive definitions and proofs offered by Coq is explained, and it is shown that it facilitates the discovery and the presentation of the results.
Abstract: Using a call-by-value functional language as an example, this article illustrates the use of coinductive definitions and proofs in big-step operational semantics, enabling it to describe diverging evaluations in addition to terminating evaluations. We formalize the connections between the coinductive big-step semantics and the standard small-step semantics, proving that both semantics are equivalent. We then study the use of coinductive big-step semantics in proofs of type soundness and proofs of semantic preservation for compilers. A methodological originality of this paper is that all results have been proved using the Coq proof assistant. We explain the proof-theoretic presentation of coinductive definitions and proofs offered by Coq, and show that it facilitates the discovery and the presentation of the results.
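The divergence-as-coinduction idea can be approximated in an executable sketch. The following Haskell fragment is our illustration, not the paper's Coq development (all identifiers are ours): laziness stands in for coinduction, so a diverging call-by-value term yields an infinite trace of small steps.

```haskell
-- A minimal sketch, assuming nothing from the paper's formalization:
-- call-by-value lambda terms with one-step reduction. Laziness plays
-- the role of coinduction: a diverging term produces an infinite trace.
data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- Naive (capture-permitting) substitution; adequate for closed examples.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

isValue :: Term -> Bool
isValue (Lam _ _) = True
isValue _         = False

-- One call-by-value reduction step, if one exists.
step :: Term -> Maybe Term
step (App (Lam x b) a) | isValue a = Just (subst x a b)
step (App f a)
  | not (isValue f) = (\f' -> App f' a) <$> step f
  | otherwise       = App f <$> step a
step _ = Nothing

-- The possibly infinite sequence of reduction steps from a term.
trace :: Term -> [Term]
trace t = t : maybe [] trace (step t)

-- The classic diverging term: 'trace omega' is an infinite list.
omega :: Term
omega = App d d where d = Lam "x" (App (Var "x") (Var "x"))
```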

191 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate algorithmic randomness on more general spaces than the Cantor space, namely computable metric spaces, and develop a unified framework allowing computations with probability measures.
Abstract: In this paper, we investigate algorithmic randomness on more general spaces than the Cantor space, namely computable metric spaces. To do this, we first develop a unified framework allowing computations with probability measures. We show that any computable metric space with a computable probability measure is isomorphic to the Cantor space in a computable and measure-theoretic sense. We show that any computable metric space admits a universal uniform randomness test (without further assumption).

129 citations


Journal ArticleDOI
TL;DR: A major goal of this paper is to show that RLS does not force or pre-impose any given language definitional style, and that its flexibility and ease of use make RLS an appealing framework for exploring new definitional styles.
Abstract: This paper shows how rewriting logic semantics (RLS) can be used as a computational logic framework for operational semantic definitions of programming languages. Several operational semantics styles are addressed: big-step and small-step structural operational semantics (SOS), modular SOS, reduction semantics with evaluation contexts, continuation-based semantics, and the chemical abstract machine. Each of these language definitional styles can be faithfully captured as an RLS theory, in the sense that there is a one-to-one correspondence between computational steps in the original language definition and computational steps in the corresponding RLS theory. A major goal of this paper is to show that RLS does not force or pre-impose any given language definitional style, and that its flexibility and ease of use make RLS an appealing framework for exploring new definitional styles.

103 citations


Journal ArticleDOI
TL;DR: Algorithms are described that directly infer very simple forms of 1-unambiguous regular expressions from positive data; the regular language classes learnable this way are characterized both in terms of regular expressions and in terms of (not necessarily minimal) deterministic finite automata.
Abstract: We describe algorithms that directly infer very simple forms of 1-unambiguous regular expressions from positive data. Thus, we characterize the regular language classes that can be learned this way, both in terms of regular expressions and in terms of (not necessarily minimal) deterministic finite automata.

87 citations


Journal ArticleDOI
TL;DR: This work proposes a process calculus to study the behavioural theory of Mobile Ad Hoc Networks, gives its operational semantics both as a reduction semantics and as a labelled transition semantics, proves that the two semantics coincide, and uses the (bi)simulation proof method to formally prove a number of non-trivial properties of ad hoc networks.
Abstract: We propose a process calculus to study the behavioural theory of Mobile Ad Hoc Networks. The operational semantics of our calculus is given both in terms of a Reduction Semantics and in terms of a Labelled Transition Semantics. We prove that the two semantics coincide. The labelled transition system is then used to derive the notions of (weak) simulation and bisimulation for ad hoc networks. The labelled bisimilarity completely characterises reduction barbed congruence, a standard branching-time and contextually-defined program equivalence. We then use our (bi)simulation proof method to formally prove a number of non-trivial properties of ad hoc networks.

77 citations


Journal ArticleDOI
TL;DR: The quantified constraint satisfaction framework is used to study how the complexity of deciding such a game depends on the parameter set of allowed predicates, and it is shown that the complexity is determined by the surjective polymorphisms of the constraint predicates.
Abstract: We study the complexity of two-person constraint satisfaction games. An instance of such a game is given by a collection of constraints on overlapping sets of variables, and the two players alternately make moves assigning values from a finite domain to the variables, in a specified order. The first player tries to satisfy all constraints, while the other tries to break at least one constraint; the goal is to decide whether the first player has a winning strategy. We show that such games can be conveniently represented by a logical form of quantified constraint satisfaction, where an instance is given by a first-order sentence in which quantifiers alternate and the quantifier-free part is a conjunction of (positive) atomic formulas; the goal is to decide whether the sentence is true. While the problem of deciding such a game is PSPACE-complete in general, by restricting the set of allowed constraint predicates, one can obtain infinite classes of constraint satisfaction games of lower complexity. We use the quantified constraint satisfaction framework to study how the complexity of deciding such a game depends on the parameter set of allowed predicates. With every predicate, one can associate certain predicate-preserving operations, called polymorphisms. We show that the complexity of our games is determined by the surjective polymorphisms of the constraint predicates. We illustrate how this result can be used by identifying the complexity of a wide variety of constraint satisfaction games.
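As a rough illustration of the decision problem (our toy sketch, with all identifiers hypothetical; it is exponential in general, and the paper's contribution is pinning down when restricted predicate sets make the game easier), the game can be decided by recursion over the quantifier prefix:

```haskell
-- Brute-force decision of a two-player constraint game: player 1
-- (Exists) tries to make all constraints true, player 2 (Forall)
-- tries to falsify one. Exponential in the number of variables.
type Assign = [(String, Int)]

data Quant = Exists String | Forall String

wins :: [Int]            -- finite domain of values
     -> [Quant]          -- quantifier prefix, in move order
     -> (Assign -> Bool) -- the conjunction of constraints
     -> Assign -> Bool
wins _   []              sat env = sat env
wins dom (Exists x : qs) sat env = any (\v -> wins dom qs sat ((x, v) : env)) dom
wins dom (Forall x : qs) sat env = all (\v -> wins dom qs sat ((x, v) : env)) dom

-- e.g. wins [0,1] [Forall "x", Exists "y"]
--        (\env -> lookup "x" env == lookup "y" env) []  ==  True
```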

73 citations


Journal ArticleDOI
TL;DR: It is shown that contrary to LAL, DLAL ensures good properties on lambda-terms: subject reduction is satisfied and a well-typed term admits a polynomial bound on the length of any of its beta reduction sequences.
Abstract: We present a polymorphic type system for lambda calculus ensuring that well-typed programs can be executed in polynomial time: dual light affine logic (DLAL). DLAL has a simple type language with a linear and an intuitionistic type arrow, and one modality. It corresponds to a fragment of light affine logic (LAL). We show that contrary to LAL, DLAL ensures good properties on lambda-terms (and not only on proof-nets): subject reduction is satisfied and a well-typed term admits a polynomial bound on the length of any of its beta reduction sequences. We also give a translation of LAL into DLAL and deduce from it that all polynomial time functions can be represented in DLAL.
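For orientation, the type language sketched above can be written out as follows; this matches the description (a linear arrow, an intuitionistic arrow, one modality, plus second-order quantification), though the paper's exact grammar may differ in details:

$$A, B ::= \alpha \;\mid\; A \multimap B \;\mid\; A \Rightarrow B \;\mid\; \S A \;\mid\; \forall\alpha.\,A$$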

59 citations


Journal ArticleDOI
TL;DR: The maximum amount of information that can be extracted by repeated experiments coincides with the absolute leakage A of the process, and the overall extraction cost is at least A/R, where R is the rate of the process.
Abstract: Building on simple information-theoretic concepts, we study two quantitative models of information leakage in the pi-calculus. The first model presupposes an attacker with an essentially unlimited computational power. The resulting notion of absolute leakage, measured in bits, is in agreement with secrecy as defined by Abadi and Gordon: a process has an absolute leakage of zero precisely when it satisfies secrecy. The second model assumes a restricted observation scenario, inspired by the testing equivalence framework, where the attacker can only conduct repeated success-or-failure experiments on processes. Moreover, each experiment has a cost in terms of communication effort. The resulting notion of leakage rate, measured in bits per action, is in agreement with the first model: the maximum amount of information that can be extracted by repeated experiments coincides with the absolute leakage A of the process. Moreover, the overall extraction cost is at least A/R, where R is the rate of the process. The compositionality properties of the two models are also investigated.
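The cost bound has a simple worked reading (our numbers, purely illustrative): with absolute leakage A measured in bits and rate R in bits per action, extracting everything the process leaks requires at least A/R actions,

$$\text{cost} \;\geq\; \frac{A}{R}, \qquad \text{e.g. } A = 8 \text{ bits},\; R = 0.5 \text{ bits/action} \;\Rightarrow\; \text{at least } \tfrac{8}{0.5} = 16 \text{ actions}.$$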

51 citations


Journal ArticleDOI
TL;DR: This work describes the design of MOOSE, its syntax, operational semantics, and type system, and develops a type inference system that establishes the progress property: once a communication has been established, well-typed programs will never starve at communication points.
Abstract: A session takes place between two parties; after establishing a connection, each party interleaves local computations and communications (sending or receiving) with the other. Session types characterise such sessions in terms of the types of values communicated and the shape of protocols, and have been developed for the π-calculus, CORBA interfaces, and functional languages. We study the incorporation of session types into object-oriented languages through MOOSE, a multi-threaded language with session types, thread spawning, iterative, and higher-order sessions. Our design aims to consistently integrate the object-oriented programming style and sessions, and to be able to treat various case studies from the literature. We describe the design of MOOSE, its syntax, operational semantics, and type system, and develop a type inference system. After proving subject reduction, we establish the progress property: once a communication has been established, well-typed programs will never starve at communication points.
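To make the "shape of protocols" concrete, here is a toy rendering of session shapes in Haskell (our sketch, not MOOSE syntax; all identifiers are ours), including the duality that relates the two endpoints of a session:

```haskell
-- A session type records the direction and payload type of each
-- communication, followed by the continuation protocol.
data Ty      = TInt | TBool | TSession Session
data Session = Send Ty Session   -- send a value of type Ty, then continue
             | Recv Ty Session   -- receive a value of type Ty, then continue
             | End               -- close the session

-- A protocol that sends an Int, receives a Bool, and terminates:
proto :: Session
proto = Send TInt (Recv TBool End)

-- The same protocol as seen by the other party.
dual :: Session -> Session
dual (Send t s) = Recv t (dual s)
dual (Recv t s) = Send t (dual s)
dual End        = End
```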

45 citations


Journal ArticleDOI
TL;DR: This work focuses here on the case of perfect VSS where the number of corrupted parties t satisfies t and revisits the following question: what is the optimal round complexity of verifiable secret sharing?
Abstract: We revisit the following question: what is the optimal round complexity of verifiable secret sharing (VSS)? We focus here on the case of perfect VSS where the number of corrupted parties t satisfies t

41 citations


Journal ArticleDOI
TL;DR: In this article, an object calculus, Asynchronous Sequential Processes (ASP), is defined together with its semantics, and confluence properties are proved for the ASP calculus, including a very general and dynamic property ensuring confluence.
Abstract: Deterministic behavior for parallel and distributed computation is rather difficult to ensure. To reach that goal, many formal calculi, languages, and techniques with well-defined semantics have been proposed in the past. But none of them focused on an imperative object calculus with asynchronous communications and futures. In this article, an object calculus, Asynchronous Sequential Processes (ASP), is defined, with its semantics. We also prove confluence properties for the ASP calculus. ASP's main characteristics are asynchronous communications with futures, and sequential execution within each process. This paper provides a very general and dynamic property ensuring confluence. Further, more specific and static properties are derived. Additionally, we present a formalization of distributed components based on ASP, and show how such components are used to statically ensure determinacy. This paper can also be seen as a formalization of the concept of futures in a distributed object setting.
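A rough executable analogue of futures with asynchronous communication (our Haskell sketch, not ASP's formal semantics; all identifiers are ours):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)

-- An asynchronous request returns at once with a placeholder (the
-- future) that the callee fills in when its computation finishes.
future :: IO a -> IO (MVar a)
future act = do
  f <- newEmptyMVar
  _ <- forkIO (act >>= putMVar f)
  pure f

-- Accessing the future blocks only if the result is not yet there,
-- in the spirit of ASP's wait-by-necessity.
touch :: MVar a -> IO a
touch = readMVar
```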

Journal ArticleDOI
TL;DR: It is demonstrated that every Boolean grammar can be transformed into an equivalent (under the new semantics) grammar in normal form, and an O(n^3) algorithm for parsing that applies to any such normalized Boolean grammar is proposed.
Abstract: Boolean grammars [A. Okhotin, Boolean grammars, Information and Computation 194 (1) (2004) 19-48] are a promising extension of context-free grammars that supports conjunction and negation in rule bodies. In this paper, we give a novel semantics for Boolean grammars which applies to all such grammars, independently of their syntax. The key idea of our proposal comes from the area of negation in logic programming, and in particular from the so-called well-founded semantics which is widely accepted in this area to be the "correct" approach to negation. We show that for every Boolean grammar there exists a distinguished (three-valued) interpretation of the non-terminal symbols, which satisfies all the rules of the grammar and at the same time is the least fixed-point of an operator associated with the grammar. Then, we demonstrate that every Boolean grammar can be transformed into an equivalent (under the new semantics) grammar in normal form. Based on this normal form, we propose an O(n^3) algorithm for parsing that applies to any such normalized Boolean grammar. In summary, the main contribution of this paper is to provide a semantics which applies to all Boolean grammars while at the same time retaining the complexity of parsing associated with this type of grammars.
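Conjunction alone already goes beyond context-free power; the standard example from the conjunctive-grammar literature (reproduced here for illustration, not taken from this paper) intersects two context-free languages:

$$S \to AB \,\&\, DC, \qquad A \to aA \mid \varepsilon, \quad B \to bBc \mid \varepsilon, \quad D \to aDb \mid \varepsilon, \quad C \to cC \mid \varepsilon,$$

where $AB$ derives $a^*b^nc^n$ and $DC$ derives $a^nb^nc^*$, so the conjunction for $S$ yields exactly $\{a^nb^nc^n : n \geq 0\}$, a non-context-free language.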

Journal ArticleDOI
TL;DR: Bialgebraic semantics is combined with a coalgebraic approach to modal logic in a novel, general approach to proving the compositionality of process equivalences for languages defined by structural operational semantics.
Abstract: Bialgebraic semantics, invented a decade ago by Turi and Plotkin, is an approach to formal reasoning about well-behaved structural operational semantics (SOS). An extension of algebraic and coalgebraic methods, it abstracts from concrete notions of syntax and system behaviour, thus treating various kinds of operational descriptions in a uniform fashion. In this paper, bialgebraic semantics is combined with a coalgebraic approach to modal logic in a novel, general approach to proving the compositionality of process equivalences for languages defined by structural operational semantics. To prove compositionality, one provides a notion of behaviour for logical formulas, and defines an SOS-like specification of modal operators which reflects the original SOS specification of the language. This approach can be used to define SOS congruence formats as well as to prove compositionality for specific languages and equivalences.

Journal ArticleDOI
TL;DR: The data complexity of both satisfiability and finite satisfiability for the two-variable fragment with counting quantifiers is NP-complete, and the data complexity of both query answering and finite query answering for the two-variable guarded fragment with counting quantifiers is co-NP-complete.
Abstract: The data-complexity of both satisfiability and finite satisfiability for the two-variable fragment with counting quantifiers is NP-complete; the data-complexity of both query answering and finite query answering for the two-variable guarded fragment with counting quantifiers is co-NP-complete.

Journal ArticleDOI
TL;DR: An extension to the infinitary lambda calculus, where Böhm trees can be directly manipulated as infinite terms, yields a simpler and more intuitive explanation of the correctness of these Church-Rosser counterexamples.
Abstract: We present an introduction to infinitary lambda calculus, highlighting its main properties. Subsequently we give three applications of infinitary lambda calculus. The first addresses the non-definability of Surjective Pairing, which was shown by the first author not to be definable in lambda calculus. We show how this result follows easily as an application of Berry's Sequentiality Theorem, which itself can be proved in the setting of infinitary lambda calculus. The second pertains to the notion of relative recursiveness of number-theoretic functions. The third application concerns an explanation of counterexamples to confluence of lambda calculus extended with non-left-linear reduction rules: adding non-left-linear reduction rules such as δxx → x or the reduction rules for Surjective Pairing to the lambda calculus yields non-confluence, as proved by the second author. We discuss how an extension to the infinitary lambda calculus, where Böhm trees can be directly manipulated as infinite terms, yields a simpler and more intuitive explanation of the correctness of these Church-Rosser counterexamples.

Journal Article
Gao Feng
TL;DR: Aiming at the singularity and slow-convergence problems of the existing terminal sliding mode (TSM) control, the authors propose a nonsingular and fast terminal sliding function and prove its finite-time convergence property with the Lyapunov method.
Abstract: Aiming at the singularity and slow-convergence problems of the existing terminal sliding mode (TSM) control, this paper proposes a nonsingular and fast terminal sliding function and proves its finite-time convergence property with the Lyapunov method. On this basis, the control law is synthesized by employing an attractor with a negative exponential factor to guarantee a time-continuous control input and the global existence of the sliding mode. Theoretical analysis indicates that, by selecting the control parameters properly, convergence stagnation of the closed-loop system can be avoided and preferable robustness can be obtained when the model error and the external disturbance are bounded.
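For orientation, a representative fast terminal sliding surface from the TSM literature (not necessarily this paper's exact construction) for a second-order system with tracking error x:

$$s = \dot{x} + \alpha x + \beta\,|x|^{q/p}\operatorname{sgn}(x), \qquad \alpha,\beta>0,\ 0<q/p<1,$$

on which the error reaches zero in finite time: the linear term dominates far from the origin (speed), while the fractional-power term dominates near it (finite-time convergence).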

Journal ArticleDOI
TL;DR: The definition of ML^F is revisited, following a more progressive approach and focusing on the design-space and expressiveness, and an interpretation of iML^F types as instantiation-closed sets of System-F types is provided.
Abstract: The language ML^F is a proposal for a new type system that supersedes both ML and System F and allows for efficient, predictable, and complete type inference for partially annotated terms. In this work, we revisit the definition of ML^F, following a more progressive approach and focusing on the design-space and expressiveness. We introduce a Curry-style version iML^F of ML^F and provide an interpretation of iML^F types as instantiation-closed sets of System-F types, from which we derive the definition of type-instance in iML^F. We give an equivalent syntactic definition of the type-instance, presented as a set of inference rules. We also show an encoding of iML^F into the closure of Curry-style System F by let-expansion. We derive the Church-style version eML^F by refining types of iML^F so as to distinguish between given and inferred polymorphism. We show an embedding of ML in eML^F and a straightforward encoding of System F into eML^F.
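ML^F's hallmark is instance-bounded quantification. The standard illustration from the ML^F literature (quoted here from memory, so treat the exact syntax as approximate) is typing choose id without committing to a monomorphic instance:

$$\mathtt{choose}: \forall\alpha.\,\alpha\to\alpha\to\alpha, \qquad \mathtt{choose\;id}: \forall(\alpha \ge \forall\beta.\,\beta\to\beta)\;\alpha\to\alpha,$$

which keeps both the polymorphic reading and every instantiated reading available through the type-instance relation.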

Journal ArticleDOI
TL;DR: It is obtained that the semirings N_∞^rat⟨⟨Σ*⟩⟩, equipped with the sum order, are free in the class of symmetric inductive *-semirings, a characterization that corresponds to Kozen's axiomatization of regular languages.
Abstract: Iteration semirings are Conway semirings satisfying Conway's group identities. We show that the semirings N^rat⟨⟨Σ*⟩⟩ of rational power series with coefficients in the semiring N of natural numbers are the free partial iteration semirings. Moreover, we characterize the semirings N_∞^rat⟨⟨Σ*⟩⟩ as the free semirings in the variety of iteration semirings defined by three additional simple identities, where N_∞ is the completion of N obtained by adding a point of infinity. We also show that this latter variety coincides with the variety generated by the complete, or continuous semirings. As a consequence of these results, we obtain that the semirings N_∞^rat⟨⟨Σ*⟩⟩, equipped with the sum order, are free in the class of symmetric inductive *-semirings. This characterization corresponds to Kozen's axiomatization of regular languages.
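For reference, the two characteristic Conway semiring identities (standard in this literature; the group identities are further equations, one for each finite group):

$$(a+b)^* = (a^*b)^*a^*, \qquad (ab)^* = 1 + a(ba)^*b.$$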

Journal Article
TL;DR: To improve the convergence speed of sliding mode variable structure control, a new nonlinear sliding mode surface is proposed that converges to the equilibrium point faster than both the linear and the terminal sliding mode surfaces, together with a new two-power reaching law that drives the system toward the sliding mode at high speed.
Abstract: To improve the convergence speed of sliding mode variable structure control, a new nonlinear sliding mode surface is proposed. When the system reaches any point of the new sliding mode surface, it converges to the equilibrium point faster than with either the linear or the terminal sliding mode surface. At the same time, a new two-power reaching law is proposed that drives the system toward the sliding mode at high speed while eliminating the chattering inherent in traditional sliding mode variable structure control. Finally, the new method is applied in simulation to the trajectory tracking of a robot. The results show that the system has strong robustness and a high convergence speed, which proves the validity of the method.
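A two-power (double-power) reaching law of the kind described typically combines two power terms; a representative form from the literature (not necessarily the paper's exact law) is

$$\dot{s} = -k_1|s|^{a}\operatorname{sgn}(s) - k_2|s|^{b}\operatorname{sgn}(s), \qquad k_1,k_2>0,\ a>1,\ 0<b<1,$$

where the first term dominates far from the sliding surface (fast approach) and the second dominates near it (finite-time arrival without the constant-rate term that causes chattering).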

Journal ArticleDOI
TL;DR: This work investigates the idea of learning in the limit in the general case, where both guess retraction and resumption are allowed, and characterization of the limits of non-monotonic learning sequences in terms of the extension relation between guesses.
Abstract: We study an abstract representation of the learning process, which we call learning sequence, aiming at a constructive interpretation of classical logical proofs, that we see as learning strategies, coming from Coquand's game theoretic interpretation of classical logic. Inspired by Gold's notion of limiting recursion and by the Limit-Computable Mathematics by Hayashi, we investigate the idea of learning in the limit in the general case, where both guess retraction and resumption are allowed. The main contribution is the characterization of the limits of non-monotonic learning sequences in terms of the extension relation between guesses.
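Gold's limiting recursion, which the paper generalizes, fits in one line: a function f is limit-computable when some total computable g eventually stabilizes on the right answer,

$$f(x) = \lim_{t\to\infty} g(x,t), \quad\text{i.e.}\quad \exists t_0.\;\forall t \ge t_0.\; g(x,t) = f(x).$$

Retracting a guess corresponds to g changing its output at some stage; the paper studies the general setting in which abandoned guesses may also be resumed later.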

Journal ArticleDOI
TL;DR: The paper presents a case study on the synthesis of labelled transition systems (ltss) for process calculi, choosing Milner's Calculus of Communicating Systems (ccs) as a testbed for a proposal based on a graphical encoding.
Abstract: The paper presents a case study on the synthesis of labelled transition systems (ltss) for process calculi, choosing as testbed Milner's Calculus of Communicating Systems (ccs). The proposal is based on a graphical encoding: each ccs process is mapped into a graph equipped with suitable interfaces, such that the denotation is fully abstract with respect to the usual structural congruence. Graphs with interfaces are amenable to the synthesis mechanism proposed by Ehrig and König and based on borrowed contexts (bcs), an instance of relative pushouts originally introduced by Milner and Leifer. The bc mechanism allows the effective construction of an lts that has graphs with interfaces as both states and labels, and such that the associated bisimilarity is automatically a congruence. Our paper focuses on the analysis of the lts distilled by exploiting the encoding of ccs processes: besides offering major technical contributions towards the simplification of the bc mechanism, a key result of our work is the proof that the bisimilarity on processes obtained via bcs coincides with the standard strong bisimilarity for ccs.

Journal ArticleDOI
TL;DR: A recursive measurable set is defined, which extends the corresponding notion due to Šanin for the Lebesgue measure on the real line, for an effectively given second countable locally compact Hausdorff space.
Abstract: We introduce a computable framework for Lebesgue's measure and integration theory in the spirit of domain theory. For an effectively given second countable locally compact Hausdorff space and an effectively given finite Borel measure on the space, we define a recursive measurable set, which extends the corresponding notion due to Šanin for the Lebesgue measure on the real line. We also introduce the stronger notion of a computable measurable set, where a measurable set is approximated from inside and outside by sequences of closed and open subsets, respectively. The more refined property of computable measurable sets gives rise to the idea of partial measurable subsets, which naturally form a domain for measurable subsets. We then introduce interval-valued measurable functions and develop the notion of recursive and computable measurable functions using interval-valued simple functions. This leads us to the interval versions of the main results in classical measure theory. The Lebesgue integral is shown to be a continuous operator on the domain of interval-valued measurable functions and the interval-valued Lebesgue integral provides a computable framework for integration.
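The inside/outside approximation in the stronger notion can be stated explicitly (our paraphrase of the description above, with the 2^{-n} modulus as one common normalization): a measurable set A is computable when there are computable sequences of closed sets C_n and open sets U_n with

$$C_n \subseteq A \subseteq U_n \quad\text{and}\quad \mu(U_n \setminus C_n) < 2^{-n}.$$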

Journal ArticleDOI
TL;DR: A simple order-theoretic generalization, possibly non-monotone, of set-theoretic inductive definitions is proposed; it covers inductive, co-inductive and bi-inductive definitions, is preserved by abstraction, and allows structural operational semantics to describe simultaneously the finite terminating and infinite diverging behaviors of programs.
Abstract: We propose a simple order-theoretic generalization, possibly non-monotone, of set-theoretic inductive definitions. This generalization covers inductive, co-inductive and bi-inductive definitions and is preserved by abstraction. This allows structural operational semantics to describe simultaneously the finite terminating and infinite diverging behaviors of programs. This is illustrated on grammars and the structural bifinitary small/big-step trace/relational operational semantics of the call-by-value λ-calculus (for which co-induction is shown to be inadequate).

Journal ArticleDOI
TL;DR: A GSOS-like rule format for name-passing process calculi is introduced that corresponds to theories in nominal logic and a natural behavioural equivalence—a form of open bisimilarity—is a congruence.
Abstract: We introduce a GSOS-like rule format for name-passing process calculi. Specifications in this format correspond to theories in nominal logic. The intended models of such specifications arise by initiality from a general categorical model theory. For operational semantics given in this rule format, a natural behavioural equivalence—a form of open bisimilarity—is a congruence.

Journal ArticleDOI
TL;DR: Turing's work in Computing and in Morphogenesis will be seen as part of a scientific path which goes from Laplace's understanding of deterministic predictability to the developments of Poincaré's analysis of unpredictability in non-linear systems, at the core of Turing's 1952 work.
Abstract: This text presents a survey and a conceptual analysis of a path which goes from Programming to Physics and Biology. Schrödinger's early reflections on coding and the genome will be a starting point: by his (and Turing's) remarks, a link is explicitly made between the notion of program and the analysis of causality and determination in Physics. In particular, Turing's work in Computing and in Morphogenesis (his 1952 paper on continuous dynamics) will be seen as part of a scientific path which goes from Laplace's understanding of deterministic predictability to the developments of Poincaré's analysis of unpredictability in non-linear systems, at the core of Turing's 1952 work. The relevance of planetary "resonance", in Poincaré's Three Body Theorem, and its analogies and differences with logical circularities will then be discussed. On these grounds, some recent technical results will be mentioned relating algorithmic randomness, a strong form of logical undecidability, and physical (deterministic) unpredictability. This will be a way to approach the issue of resonances and circularities in System Biology, where these notions have a deeply different nature, in spite of some confusion which is often made. Finally, three aspects of the author's (and his collaborators') recent work in System Biology will be surveyed. They concern an approach to biological structural stability, as "extended criticality", the structure of time and of biological rhythms and the role of a proper biological observable, "organization". This is described in terms of "anti-entropy", a new notion inspired by a remark by Schrödinger.

Journal ArticleDOI
TL;DR: This work works with an operational notion of compact set and shows that total programs with values on certain types are uniformly continuous on compact sets of total elements and applies this to prove the correctness of non-trivial programs that manipulate infinite data.
Abstract: A number of authors have exported domain-theoretic techniques from denotational semantics to the operational study of contextual equivalence and order. We further develop this, and, moreover, we additionally export topological techniques. In particular, we work with an operational notion of compact set and show that total programs with values on certain types are uniformly continuous on compact sets of total elements. We apply this and other conclusions to prove the correctness of non-trivial programs that manipulate infinite data. What is interesting is that the development applies to sequential programming languages, in addition to languages with parallel features.

Journal ArticleDOI
TL;DR: The Lambda Context Calculus features variables arranged in a hierarchy of strengths such that substitution of a strong variable does not avoid capture with respect to abstraction by a weaker variable, which allows the calculus to express both capture-avoiding and capturing substitution (instantiation).
Abstract: We present the Lambda Context Calculus. This simple lambda-calculus features variables arranged in a hierarchy of strengths such that substitution of a strong variable does not avoid capture with respect to abstraction by a weaker variable. This allows the calculus to express both capture-avoiding and capturing substitution (instantiation). The reduction rules extend the 'vanilla' lambda-calculus in a simple and modular way and preserve the look and feel of a standard lambda-calculus with explicit substitutions. Good properties of the lambda-calculus are preserved. The LamCC is confluent, and there is a natural injection of the untyped lambda-calculus into the LamCC that preserves strong normalisation. We discuss the calculus and its design with full proofs. In the presence of the hierarchy of variables, functional binding splits into a functional abstraction λ (lambda) and a name-binder ν (new). We investigate how the components of this calculus interact with each other and with the reduction rules, with examples. In two more extended case studies we demonstrate how global state can be expressed, and how contexts and contextual equivalence can be naturally internalised using function application.

Journal ArticleDOI
TL;DR: A framework for comparing a cryptographic implementation and its idealization with respect to various security notions is defined, focusing on the computational soundness of static equivalence, a standard tool in cryptographic pi calculi.
Abstract: In this paper we study the link between formal and cryptographic models for security protocols in the presence of passive adversaries. In contrast to other works, we do not consider a fixed set of primitives but aim at results for arbitrary equational theories. We define a framework for comparing a cryptographic implementation and its idealization with respect to various security notions. In particular, we concentrate on the computational soundness of static equivalence, a standard tool in cryptographic pi calculi. We present a soundness criterion, which for many theories is not only sufficient but also necessary. Finally, to illustrate our framework, we establish the soundness of static equivalence for the exclusive OR and a theory of ciphers and lists.
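Static equivalence compares the knowledge two frames expose to a passive adversary. A textbook instance (illustrative, not one of this paper's case studies): encryptions of two different plaintexts under a fresh key are statically equivalent, since no test built from the equational theory distinguishes them,

$$\nu k.\,\{x \mapsto \mathrm{enc}(0,k)\} \;\approx_s\; \nu k.\,\{x \mapsto \mathrm{enc}(1,k)\}.$$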

Journal ArticleDOI
TL;DR: In this article, the transition between quantum and classical mechanics is discussed in the situation where the underlying dynamical system has a hyperbolic behavior, and the special role of invariant manifolds is emphasized.
Abstract: We present several recent results concerning the transition between quantum and classical mechanics, in the situation where the underlying dynamical system has a hyperbolic behaviour. The special role of invariant manifolds will be emphasized, and the long time evolution will show how the quantum non-determinism and the classical chaotic sensitivity to initial conditions can be compared, and in a certain sense overlap.

Journal ArticleDOI
TL;DR: The paper defines (bi)simulations up-to a preorder and shows how they can be used to provide a coinductive, (bi)simulation-like, characterisation of semantic preorders (and equivalences) for processes, and provides an alternative axiomatization for any axiomatizable preorder in the linear time-branching time spectrum, whose correctness and completeness can be proved once and for all.
Abstract: We define (bi)simulations up-to a preorder and show how we can use them to provide a coinductive, (bi)simulation-like, characterisation of semantic preorders (and equivalences) for processes. In particular, we can apply our results to all the semantics in the linear time-branching time spectrum that are defined by preorders coarser than the ready simulation preorder. The relation between bisimulations up-to and simulations up-to allows us to find some new relations between the equivalences that define the semantics and the corresponding preorders. In particular, we have shown that the simulation up-to an equivalence relation is a canonical preorder whose kernel is the given equivalence relation. Since all of these canonical preorders are defined in a homogeneous way, we can prove properties for them in a generic way. As an illustrative example of this technique, we generate an axiomatic characterisation of each of these canonical preorders, obtained simply by adding a single axiom to the axiomatization of the original equivalence relation. Thus we provide an alternative axiomatization for any axiomatizable preorder in the linear time-branching time spectrum, whose correctness and completeness can be proved once and for all. We first prove our results for finite processes by induction; then, using continuity arguments, we show that they are also valid for infinite (finitary) processes.