
Showing papers in "Mathematical Structures in Computer Science in 2013"


Journal ArticleDOI
TL;DR: In this article, it was shown that an orthogonal basis for a finite-dimensional Hilbert space can be equivalently characterised as a commutative †-Frobenius monoid in the category FdHilb.
Abstract: We show that an orthogonal basis for a finite-dimensional Hilbert space can be equivalently characterised as a commutative †-Frobenius monoid in the category FdHilb, which has finite-dimensional Hilbert spaces as objects and continuous linear maps as morphisms, and tensor product for the monoidal structure. The basis is normalised exactly when the corresponding commutative †-Frobenius monoid is special. Hence, both orthogonal and orthonormal bases are characterised without mentioning vectors, but just in terms of the categorical structure: composition of operations, tensor product and the †-functor. Moreover, this characterisation can be interpreted operationally, since the †-Frobenius structure allows the cloning and deletion of basis vectors. That is, we capture the basis vectors by relying on their ability to be cloned and deleted. Since this ability distinguishes classical data from quantum data, our result has important implications for categorical quantum mechanics.
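To make the correspondence concrete, here is a minimal numpy sketch (my illustration, not code from the paper): the comultiplication that clones basis vectors, the counit that deletes them, and a check that the monoid is special precisely because the basis is orthonormal.

```python
# Minimal numpy sketch (not from the paper): the copy/delete maps induced by
# an orthonormal basis of C^n, and a check of the "special" condition.
import numpy as np

n = 3
basis = np.eye(n)                     # the standard orthonormal basis

# Comultiplication delta : C^n -> C^n (x) C^n,  delta|i> = |i> (x) |i>  (cloning)
delta = np.zeros((n * n, n))
for i in range(n):
    delta[:, i] = np.kron(basis[:, i], basis[:, i])

# Counit eps : C^n -> C,  eps|i> = 1  (deletion)
eps = np.ones((1, n))

# Speciality delta^dagger . delta = id holds exactly when the basis is orthonormal.
print(np.allclose(delta.conj().T @ delta, np.eye(n)))           # True

# Counit law (eps (x) id) . delta = id.
print(np.allclose(np.kron(eps, np.eye(n)) @ delta, np.eye(n)))  # True
```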

168 citations


Journal ArticleDOI
TL;DR: This work investigates the computing power of a restricted class of DNA strand displacement structures: those that are made of double strands with nicks (interruptions) in the top strand, and imposes restrictions on the single strands they interact with.
Abstract: We investigate the computing power of a restricted class of DNA strand displacement structures: those that are made of double strands with nicks (interruptions) in the top strand. To preserve this structural invariant, we impose restrictions on the single strands they interact with: we consider only two-domain single strands consisting of one toehold domain and one recognition domain. We study fork and join signal processing gates based on these structures, and show that these systems are amenable to formalisation and mechanical verification.
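As a toy illustration of the signal discipline (my own sketch, not the paper's formal model), a two-domain signal can be written as a (toehold, recognition) pair and a join gate as a multiset rewrite that must consume both of its input signals before releasing its output:

```python
# Toy sketch of two-domain signals and a join gate; the paper's gates are
# nicked double strands with a verified reaction semantics, which this omits.
from collections import Counter

def step(soup, gates):
    """Fire the first enabled gate: remove its input signals, add its outputs."""
    for inputs, outputs in gates:
        if all(soup[s] >= c for s, c in Counter(inputs).items()):
            soup.subtract(inputs)
            soup.update(outputs)
            return True
    return False

# Each signal is one toehold domain 't' plus one recognition domain.
x, y, z = ("t", "x"), ("t", "y"), ("t", "z")
join_xy_z = ([x, y], [z])            # join: consumes x and y, emits z

soup = Counter([x, x, y])            # strands initially in solution
while step(soup, [join_xy_z]):
    pass
print(+soup)                         # one x and one z remain; y is used up
```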

126 citations


Journal ArticleDOI
TL;DR: This paper introduces the notion of a selective adhesive functor, and shows that such a functor embeds the category of open-graphs into the ambient adhesive category of typed graphs, and inherits ‘enough adhesivity’ from the categoryof typed graphs to perform double-pushout (DPO) graph rewriting.
Abstract: String diagrams are a powerful tool for reasoning about physical processes, logic circuits, tensor networks and many other compositional structures. The distinguishing feature of these diagrams is that edges need not be connected to vertices at both ends, and these unconnected ends can be interpreted as the inputs and outputs of a diagram. In this paper, we give a concrete construction for string diagrams using a special kind of typed graph called an open-graph. While the category of open-graphs is not itself adhesive, we introduce the notion of a selective adhesive functor, and show that such a functor embeds the category of open-graphs into the ambient adhesive category of typed graphs. Using this functor, the category of open-graphs inherits ‘enough adhesivity’ from the category of typed graphs to perform double-pushout (DPO) graph rewriting. A salient feature of our theory is that it ensures rewrite systems are ‘type safe’ in the sense that rewriting respects the inputs and outputs. This formalism lets us safely encode the interesting structure of a computational model, such as evaluation dynamics, with succinct, explicit rewrite rules, while the graphical representation absorbs many of the tedious details. Although topological formalisms exist for string diagrams, our construction is discrete and finitary, and enjoys decidable algorithms for composition and rewriting. We also show how open-graphs can be parameterised by graphical signatures, which are similar to the monoidal signatures of Joyal and Street, and define types for vertices in the diagrammatic language and constraints on how they can be connected. Using typed open-graphs, we can construct free symmetric monoidal categories, PROPs and more general monoidal theories. Thus, open-graphs give us a tool for mechanised reasoning in monoidal categories.
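A minimal sketch of the underlying data structure, under my own simplifying assumptions (the paper's open-graphs carry full typed-graph structure): edges may leave an endpoint unconnected, and the dangling ends are exactly the diagram's inputs and outputs.

```python
# Hypothetical rendering of an open-graph; names and representation are mine.
from dataclasses import dataclass, field

@dataclass
class OpenGraph:
    vertices: dict = field(default_factory=dict)   # vertex name -> type
    edges: list = field(default_factory=list)      # (src or None, tgt or None)

    def inputs(self):
        return [e for e in self.edges if e[0] is None]   # unconnected sources

    def outputs(self):
        return [e for e in self.edges if e[1] is None]   # unconnected targets

# A single 2-input, 1-output vertex 'f', viewed as a string diagram.
g = OpenGraph({"f": "gate"}, [(None, "f"), (None, "f"), ("f", None)])
print(len(g.inputs()), len(g.outputs()))   # 2 1
```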

59 citations


Journal ArticleDOI
TL;DR: This work defines an information flow type system for a sequential JVM-like language that includes all these programming features, and proves, in the Coq proof assistant, that it guarantees non-interference.
Abstract: Non-interference guarantees the absence of illicit information flow throughout program execution. It can be enforced by appropriate information flow type systems. Much of the previous work on type systems for non-interference has focused on calculi or high-level programming languages, and existing type systems for low-level languages typically omit objects, exceptions and method calls. We define an information flow type system for a sequential JVM-like language that includes all these programming features, and we prove, in the Coq proof assistant, that it guarantees non-interference. An additional benefit of the formalisation is that we have extracted from our proof a certified lightweight bytecode verifier for information flow. Our work provides, to the best of our knowledge, the first sound and certified information flow type system for such an expressive fragment of the JVM.
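The core discipline such a type system enforces can be shown with a toy two-level lattice check (my sketch; the paper's system handles a full JVM-like language with objects, exceptions and method calls, with a machine-checked soundness proof):

```python
# Toy information-flow check over the two-point lattice low <= high.
LEVELS = {"low": 0, "high": 1}

def flows(src, dst):
    """Information may flow from src to dst only if src <= dst."""
    return LEVELS[src] <= LEVELS[dst]

def check_assign(var_level, expr_level, pc_level):
    # Both the assigned expression and the program counter (implicit flows
    # through branching) must be at or below the variable's level.
    return flows(expr_level, var_level) and flows(pc_level, var_level)

print(check_assign("low", "high", "low"))   # False: direct flow high -> low
print(check_assign("low", "low", "high"))   # False: implicit flow via a branch
print(check_assign("high", "low", "low"))   # True: flows upwards are fine
```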

53 citations


Journal ArticleDOI
TL;DR: A formal semantics for Ptolemy is proposed that is modular in the sense that atomic actors and their compositions are treated in a unified way.
Abstract: Ptolemy is an open-source and extensible modelling and simulation framework. It offers heterogeneous modelling capabilities by allowing different models of computation, both untimed and timed, to be composed hierarchically in an arbitrary fashion. This paper proposes a formal semantics for Ptolemy that is modular in the sense that atomic actors and their compositions are treated in a unified way. In particular, all actors conform to an executable interface that contains four functions: fire (produce outputs given current state and inputs); postfire (update state instantaneously); deadline (how much time the actor is willing to let elapse); and time-update (update the state with the passage of time). Composite actors are obtained using composition operators that in Ptolemy are called directors. Different directors realise different models of computation. In this paper, we formally define the directors for the following models of computation: synchronous-reactive, discrete event, continuous time, process networks and modal models.
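The four-function interface translates naturally into code. The following Python rendering is my own sketch (the function names follow the paper; the example actor is hypothetical):

```python
# A sketch of the executable actor interface; the Integrator below is a toy
# continuous-time actor, not one of Ptolemy's library actors.
from abc import ABC, abstractmethod

class Actor(ABC):
    @abstractmethod
    def fire(self, state, inputs):
        """Produce outputs given the current state and inputs."""

    @abstractmethod
    def postfire(self, state, inputs):
        """Update the state instantaneously; return the new state."""

    @abstractmethod
    def deadline(self, state, inputs):
        """Return how much time the actor is willing to let elapse."""

    @abstractmethod
    def time_update(self, state, dt):
        """Update the state with the passage of dt time units."""

class Integrator(Actor):
    """Toy actor integrating a constant slope of 1.0."""
    def fire(self, state, inputs):     return {"out": state}
    def postfire(self, state, inputs): return state
    def deadline(self, state, inputs): return 0.1        # maximum step size
    def time_update(self, state, dt):  return state + dt * 1.0
```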

44 citations


Journal ArticleDOI
TL;DR: This paper provides a formalism that is local in the following very specific sense: calculations pertaining to any region of space–time employ only mathematical objects associated with that region, and shows how to put classical probability theory and quantum theory into this framework.
Abstract: In this paper we consider general probabilistic theories pertaining to circuits that satisfy two very natural assumptions. We provide a formalism that is local in the following very specific sense: calculations pertaining to any region of space–time employ only mathematical objects associated with that region. We call this formalism locality. It incorporates the idea that space and time should be treated on an equal footing. Formulations that use a foliation of space–time to evolve a state do not have this property, nor do histories-based approaches. An operation has inputs and outputs (through which systems travel). A circuit is built by wiring together operations such that we have no open inputs or outputs left over. A fragment is a part of a circuit and may have open inputs and outputs. We show how each operation is associated with a certain mathematical object, which we call a duotensor (this is like a tensor but with a bit more structure), and which is represented graphically in the paper. We can link duotensors together, such that black and white dots match up, to get the duotensor corresponding to any fragment. Links represent summation over the corresponding indices. We can use such duotensors to make probabilistic statements pertaining to fragments. Since fragments are the circuit equivalent of arbitrary space–time regions, we have formalism locality. The probability for a circuit is given by the corresponding duotensorial calculation (which is a scalar since there are no indices left over). We show how to put classical probability theory and quantum theory into this framework.
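Stripped of the black-and-white-dot bookkeeping that distinguishes duotensors from plain tensors, linking is ordinary index contraction. A hedged numpy illustration (mine, not the paper's):

```python
# Linking two fragments by summing over the shared wire index; real duotensors
# carry extra dot structure that this plain-tensor sketch ignores.
import numpy as np

d = 2                        # dimension carried by each wire
A = np.random.rand(d)        # fragment with one open output
B = np.random.rand(d, d)     # fragment with one input and one output

AB = np.einsum("i,ij->j", A, B)   # link A's output to B's input
print(AB.shape)              # (2,): one open output remains
```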

36 citations


Journal ArticleDOI
TL;DR: A divide and conquer approach for formally verifying timed probabilistic requirements on successful completion of the driving task and collision freedom based on formal specifications of a set of given manoeuvring and communication capabilities of the car is proposed.
Abstract: We propose a design and verification methodology supporting the early phases of system design for cooperative driver assistance systems, focusing on the realisability of new automotive functions. Specifically, we focus on applications where drivers are supported in complex driving tasks by safe strategies involving the coordinated movements of multiple vehicles to complete the driving task successfully. We propose a divide and conquer approach for formally verifying timed probabilistic requirements on successful completion of the driving task and collision freedom based on formal specifications of a set of given manoeuvring and communication capabilities of the car. In particular, this allows an assessment of whether they are sufficient to implement strategies for successful completion of the driving task.

30 citations


Journal ArticleDOI
TL;DR: This framework takes a balanced approach between model theory and proof theory, and permits the representation of logics in a way that comprises all major ingredients of a logic: syntax, models, satisfaction, judgments and proofs.
Abstract: Mathematical logic and computer science have driven the design of a growing number of logics and related formalisms such as set theories and type theories. In response to this population explosion, logical frameworks have been developed as formal meta-languages in which to represent, structure, relate and reason about logics. Research on logical frameworks has diverged into separate communities, often with conflicting backgrounds and philosophies. In particular, two of the most important logical frameworks are the framework of institutions, from the area of model theory based on category theory, and the Edinburgh Logical Framework LF, from the area of proof theory based on dependent type theory. Even though their ultimate motivations overlap – for example in applications to software verification – they have fundamentally different perspectives on logic. In the current paper, we design a logical framework that integrates the frameworks of institutions and LF in a way that combines their complementary advantages while retaining the elegance of each of them. In particular, our framework takes a balanced approach between model theory and proof theory, and permits the representation of logics in a way that comprises all major ingredients of a logic: syntax, models, satisfaction, judgments and proofs. This provides a theoretical basis for the systematic study of logics in a comprehensive logical framework. Our framework has been applied to obtain a large library of structured and machine-verified encodings of logics and logic translations.

28 citations


Journal ArticleDOI
TL;DR: It is shown convincingly that the smart combination of performability evaluation with stochastic model-checking techniques, developed over the last decade, provides a powerful and unified method of performability evaluation, thereby combining the advantages of earlier approaches.
Abstract: This paper gives a bird's-eye view of the various ingredients that make up a modern, model-checking-based approach to performability evaluation: Markov reward models, temporal logics and continuous stochastic logic, model-checking algorithms, bisimulation and the handling of non-determinism. A short historical account as well as a large case study complete this picture. In this way, we show convincingly that the smart combination of performability evaluation with stochastic model-checking techniques, developed over the last decade, provides a powerful and unified method of performability evaluation, thereby combining the advantages of earlier approaches.
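As a small worked example of the Markov-reward ingredient (my own toy model, not the paper's case study), steady-state performability is the expected reward under the stationary distribution of a continuous-time Markov chain:

```python
# A 3-state Markov reward model: up, degraded, down.
import numpy as np

Q = np.array([[-0.2,  0.2,  0.0],     # generator matrix; rows sum to 0
              [ 0.5, -1.0,  0.5],
              [ 1.0,  0.0, -1.0]])
reward = np.array([1.0, 0.5, 0.0])    # performance level in each state

# Stationary distribution: pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi, pi @ reward)                # long-run expected performability
```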

27 citations


Journal ArticleDOI
TL;DR: The aim of the current paper is to show how Krivine's classical realisability can be understood as an instance of the categorical approach to realisability as started by Martin Hyland in Hyland (1982) and described in detail in van Oosten (2008).
Abstract: In a sequence of papers (Krivine 2001; Krivine 2003; Krivine 2009), J.-L. Krivine introduced his notion of classical realisability for classical second-order logic and Zermelo–Fraenkel set theory. Moreover, in more recent work (Krivine 2008), he has considered forcing constructions on top of it with the ultimate aim of providing a realisability interpretation for the axiom of choice. The aim of the current paper is to show how Krivine's classical realisability can be understood as an instance of the categorical approach to realisability as started by Martin Hyland in Hyland (1982) and described in detail in van Oosten (2008). Moreover, we will give an intuitive explanation of the iteration of realisability as described in Krivine (2008).

24 citations


Journal ArticleDOI
TL;DR: A new approach is proposed to the development of combined models (co-models), in which an initial discrete-event model may include approximations of continuous-time behaviour that can subsequently be replaced by couplings to continuous-time models.
Abstract: The effective use of model-based formal methods in the development of complex embedded systems requires the integration of discrete-event models of controllers with continuous-time models of their environments. This paper proposes a new approach to the development of such combined models (co-models), in which an initial discrete-event model may include approximations of continuous-time behaviour that can subsequently be replaced by couplings to continuous-time models. An operational semantics of co-simulation allows the discrete and continuous models to run on their respective simulators, managed by a coordinating co-simulation engine. This permits the exploration of the composite co-model's behaviour in a range of operational scenarios. The approach has been realised using the Vienna Development Method (VDM) as the discrete-event formalism and 20-sim as the continuous-time framework, and has been applied successfully to a case study based on the distributed controller for a personal transporter device.
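A bare-bones version of such a coordinating engine might look as follows (a sketch under my own assumptions; the paper's operational semantics and its VDM/20-sim coupling are far richer):

```python
# Minimal co-simulation master: alternate a continuous-time plant step and a
# discrete-event controller step, advancing time in bounded increments.
def cosimulate(de_step, ct_step, t_end, dt_max=0.1):
    t, de_state, ct_state = 0.0, {"setpoint": 1.0}, {"x": 0.0}
    while t < t_end:
        h = min(dt_max, t_end - t)                  # agreed step size
        ct_state = ct_step(ct_state, de_state, h)   # plant integrates for h
        de_state = de_step(de_state, ct_state)      # controller reacts
        t += h
    return de_state, ct_state

def plant(ct, de, h):        # first-order lag toward the current setpoint
    ct["x"] += h * (de["setpoint"] - ct["x"])
    return ct

def controller(de, ct):      # bang-bang switching of the setpoint
    de["setpoint"] = 0.0 if ct["x"] > 0.9 else 1.0
    return de

print(cosimulate(controller, plant, t_end=5.0))
```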

Journal ArticleDOI
TL;DR: The relationship of this approach with two others is discussed: Krivine et al.'s stochastic bigraphs, restricted to Milner's place graphs, and Coppo et al.'s Stochastic Calculus of Wrapped Compartments. These relationships provide evidence for the fundamental nature of the approach.
Abstract: We present a simple stochastic rule-based approach to multi-level modelling for computational systems biology. Populations are modelled using multi-level multisets; these contain both species and agents, with the latter possibly containing further such multisets. Rules are pairs of such multisets, but they may now also include variables (as well as species and agents), together with an associated stochastic rate. We give two illustrative examples. The first is an extracellular model of virus infection, coupled with an intracellular model of viral reproduction; this model can demonstrate successive waves of infection. The second is a model of cell division in which a repressor protein is diluted in successive generations, so eventually repression no longer occurs. The multi-level multiset approach can also be seen in terms of stochastic term rewriting for the theory of a commutative monoid equipped with extra constants (for the species) and unary operations (for the agents). We further discuss the relationship of this approach with two others: Krivine et al.'s stochastic bigraphs, restricted to Milner's place graphs, and Coppo et al.'s Stochastic Calculus of Wrapped Compartments. These various relationships provide evidence for the fundamental nature of the approach.
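A flat-multiset Gillespie simulation conveys the stochastic rule-based flavour (my sketch; it omits the paper's nesting of agent contents and its rule variables):

```python
# Stochastic simulation of multiset rewrite rules with mass-action rates.
import random
from collections import Counter

def propensity(rule, pop):
    lhs, _, rate = rule
    a = rate
    for s, c in Counter(lhs).items():     # product of falling factorials
        for k in range(c):
            a *= max(pop[s] - k, 0)
    return a

def gillespie(pop, rules, t_end):
    t = 0.0
    while t < t_end:
        props = [propensity(r, pop) for r in rules]
        total = sum(props)
        if total == 0:
            break
        t += random.expovariate(total)                     # time to next event
        lhs, rhs, _ = random.choices(rules, weights=props)[0]
        pop.subtract(lhs)
        pop.update(rhs)
    return pop

# Toy virus model: infection V + C -> I, then burst I -> V + V.
rules = [(["V", "C"], ["I"], 0.01), (["I"], ["V", "V"], 0.5)]
print(+gillespie(Counter(V=10, C=100), rules, t_end=20.0))
```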

Journal ArticleDOI
TL;DR: Higher-order psi-calculi are considered through a technically surprisingly simple extension of the framework, and it is shown how an arbitrary psi-calculus can be lifted to its higher-order counterpart in a canonical way.
Abstract: Psi-calculi is a parametric framework for extensions of the pi-calculus; in earlier work we have explored their expressiveness and algebraic theory. In this paper we consider higher-order psi-calcu ...

Journal ArticleDOI
TL;DR: A particular contribution of this work is a new approach to developing and analysing RBAC policies using a UML-based domain-specific language (DSL), which allows the hiding of the mathematical structures of the underlying authorisation constraints implemented in OCL.
Abstract: Organisations with stringent security requirements, such as banks or hospitals, frequently adopt role-based access control (RBAC) principles to represent and simplify their internal permission management. While authorisation constraints represent a fundamental advanced RBAC concept enabling precise restrictions on access rights, they increase the complexity of the resulting security policies, so tool support for convenient creation and adequate validation is required. A particular contribution of our work is a new approach to developing and analysing RBAC policies using a UML-based domain-specific language (DSL), which allows the hiding of the mathematical structures of the underlying authorisation constraints implemented in OCL. The DSL we present is highly configurable and extensible with respect to new concepts and classes of authorisation constraints, and allows the developer to validate RBAC policies in an effective way. The handling of dynamic (that is, time-dependent) constraints, their visual representation through the RBAC DSL and their analysis all form another part of our contribution. The approach is supported by a UML and OCL validation tool.
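A miniature RBAC policy with one static authorisation constraint illustrates the kind of property the DSL captures (my Python sketch; the paper expresses such constraints in OCL over a UML model):

```python
# Hypothetical banking policy: role-permission and user-role assignments plus
# a static separation-of-duty constraint.
ROLE_PERMS = {"teller": {"create_payment"}, "manager": {"approve_payment"}}
USER_ROLES = {"alice": {"teller"}, "bob": {"teller", "manager"}}

def permitted(user, perm):
    return any(perm in ROLE_PERMS[r] for r in USER_ROLES[user])

def separation_of_duty(user, exclusive=("teller", "manager")):
    """No user may hold both mutually exclusive roles."""
    return not set(exclusive) <= USER_ROLES[user]

print(permitted("alice", "create_payment"))   # True
print(separation_of_duty("alice"))            # True: constraint satisfied
print(separation_of_duty("bob"))              # False: policy violation
```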

Journal ArticleDOI
TL;DR: An overview of Alloy in the context of its three largest application domains, lightweight modelling, bounded code verification and test-case generation, and three recent application-driven extensions, an imperative extension to the language, a compiler to executable code and a proof-capable analyser based on SMT are provided.
Abstract: Alloy is a declarative language for lightweight modelling and analysis of software. The core of the language is based on first-order relational logic, which offers an attractive balance between analysability and expressiveness. The logic is expressive enough to capture the intricacies of real systems, but is also simple enough to support fully automated analysis with the Alloy Analyzer. The Analyzer is built on a SAT-based constraint solver and provides automated simulation, checking and debugging of Alloy specifications. Because of its automated analysis and expressive logic, Alloy has been applied in a wide variety of domains. These applications have motivated a number of extensions both to the Alloy language and to its SAT-based analysis. This paper provides an overview of Alloy in the context of its three largest application domains, lightweight modelling, bounded code verification and test-case generation, and three recent application-driven extensions, an imperative extension to the language, a compiler to executable code and a proof-capable analyser based on SMT.

Journal ArticleDOI
TL;DR: A generalised framework of site graphs is introduced in order to provide the first fully semantic definition of the side-effect-free core of the rule-based language Kappa.
Abstract: A generalised framework of site graphs is introduced in order to provide the first fully semantic definition of the side-effect-free core of the rule-based language Kappa. This formalisation allows for the use of types either to confirm that a rule respects a certain invariant or to guide a restricted refinement process that allows us to constrain its run-time applicability.

Journal ArticleDOI
TL;DR: It is shown that, as a rule, physical models are not time-robust, and that time-determinism is a sufficient condition for time-robustness, and an execution engine is defined that coordinates the execution of the application software so that it meets its timing constraints.
Abstract: The correct and efficient implementation of general real-time applications remains very much an open problem. A key issue is meeting timing constraints whose satisfaction depends on features of the execution platform, in particular its speed. Existing rigorous implementation techniques are applicable to specific classes of systems, for example, with periodic tasks or time-deterministic systems. We present a general model-based implementation method for real-time systems based on the use of two models: 1) An abstract model representing the behaviour of real-time software as a timed automaton, which describes user-defined platform-independent timing constraints. Its transitions are timeless and correspond to the execution of statements of the real-time software. 2) A physical model representing the behaviour of the real-time software running on a given platform. It is obtained by assigning execution times to the transitions of the abstract model. A necessary condition for implementability is time-safety, that is, any (timed) execution sequence of the physical model is also an execution sequence of the abstract model. Time-safety simply means that the platform is fast enough to meet the timing requirements. As execution times of actions are not known exactly, time-safety is checked for the worst-case execution times of actions by making an assumption of time-robustness: time-safety is preserved when the speed of the execution platform increases. We show that, as a rule, physical models are not time-robust, and that time-determinism is a sufficient condition for time-robustness. For a given piece of real-time software and an execution platform corresponding to a time-robust model, we define an execution engine that coordinates the execution of the application software so that it meets its timing constraints. Furthermore, in the case of non-robustness, the execution engine can detect violations of time-safety and stop execution. We have implemented the execution engine for BIP programs with real-time constraints and validated the implementation method for two case studies. The experimental results for a module of a robotic application show that the CPU utilisation and the size of the model are reduced compared with existing implementations. The experimental results for an adaptive video encoder also show that a lack of time-robustness may seriously degrade the performance for increasing platform execution speed.
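The role of the execution engine can be sketched as follows (my illustration of the idea, not the BIP engine itself): execute actions against the abstract model's timing constraints and stop as soon as time-safety is violated.

```python
# Skeletal execution engine: compare measured execution times against the
# deadlines of the abstract model; the action names and numbers are made up.
def run(actions, deadlines, exec_time):
    clock = 0.0
    for a in actions:
        clock += exec_time[a]              # physical model: time actually spent
        if clock > deadlines[a]:           # abstract model: latest allowed time
            raise RuntimeError(f"time-safety violated at {a!r} (t={clock})")
    return clock

deadlines = {"sample": 1.0, "compute": 5.0, "actuate": 6.0}
fast = {"sample": 0.2, "compute": 3.0, "actuate": 0.5}     # fast platform
slow = {"sample": 0.2, "compute": 5.5, "actuate": 0.5}     # slow platform

print(run(["sample", "compute", "actuate"], deadlines, fast))   # 3.7: time-safe
# run(["sample", "compute", "actuate"], deadlines, slow)        # raises at 'compute'
```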

Journal ArticleDOI
TL;DR: Event Identifier Logic is introduced, which extends Hennessy–Milner logic by the addition of reverse as well as forward modalities and identifiers to keep track of events, and it is shown that this logic corresponds to hereditary history-preserving (HH) bisimulation equivalence within a particular true-concurrency model, namely, stable configuration structures.
Abstract: In this paper we introduce Event Identifier Logic (EIL), which extends Hennessy–Milner logic by the addition of: (1) reverse as well as forward modalities; and (2) identifiers to keep track of events. We show that this logic corresponds to hereditary history-preserving (HH) bisimulation equivalence within a particular true-concurrency model, namely, stable configuration structures. We also show how natural sublogics of EIL correspond to coarser equivalences. In particular, we provide logical characterisations of weak-history-preserving (WH) and history-preserving (H) bisimulation. Logics corresponding to HH and H bisimulation have been given previously, but none, as far as we are aware, corresponding to WH bisimulation (when autoconcurrency is allowed). We also present characteristic formulas that characterise individual structures with respect to history-preserving equivalences.

Journal ArticleDOI
TL;DR: In this paper, the decidability of the logic T→ of Ticket Entailment was proved; this is equivalent to type inhabitation in simply typed combinatory logic with the partial basis BB′IW, and is solved via the restriction of simply typed lambda calculus to hereditarily right-maximal terms.
Abstract: We prove the decidability of the logic T→ of Ticket Entailment. This issue was first raised by Anderson and Belnap within the framework of relevance logic, and is equivalent to the question of the decidability of type inhabitation in simply typed combinatory logic with the partial basis BB′IW. We solve the equivalent problem of type inhabitation for the restriction of simply typed lambda calculus to hereditarily right-maximal terms.
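For reference, the principal types of the basis combinators are the standard ones:

$$B : (b \to c) \to (a \to b) \to a \to c \qquad B' : (a \to b) \to (b \to c) \to a \to c$$

$$I : a \to a \qquad W : (a \to a \to b) \to a \to b$$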

Journal ArticleDOI
TL;DR: A simple algebraic criterion is established for the existence of an equilibrium that satisfies the detailed balance condition familiar from the thermodynamics of reaction networks and it is found that when such a probability exists, it can be described by a free energy function that combines an internal energy term and an entropy term.
Abstract: This paper is concerned with the asymptotic properties of a restricted class of Petri nets equipped with stochastic mass action semantics. We establish a simple algebraic criterion for the existence of an equilibrium, that is to say an invariant probability that satisfies the detailed balance condition familiar from the thermodynamics of reaction networks. We also find that when such a probability exists, it can be described by a free energy function which combines an internal energy term and an entropy term. Under strong additional conditions, we show how the entropy term can be deconstructed using the finer-grained individual-token semantics of Petri nets.
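The detailed balance condition in question is the usual one: an invariant probability $\pi$ over markings satisfies, for all markings $x, y$ with transition rates $q$,

$$\pi(x)\, q(x, y) = \pi(y)\, q(y, x),$$

and, schematically, such an equilibrium takes the Gibbs form $\pi(x) \propto e^{-F(x)}$, with the free energy $F$ splitting into the internal-energy and entropy contributions described above.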

Journal ArticleDOI
TL;DR: This work discusses quantum algorithms based on the Bernstein–Vazirani algorithm for finding which input variables a Boolean function depends on, and describes quantum algorithms for learning the exact form of certain quadratic and cubic Boolean functions.
Abstract: We discuss quantum algorithms based on the Bernstein–Vazirani algorithm for finding which input variables a Boolean function depends on. There are 2^n possible linear Boolean functions of n input variables; given a linear Boolean function, the Bernstein–Vazirani quantum algorithm can deterministically identify which one of these Boolean functions we are given using just a single function query. We show how the same quantum algorithm can also be used to learn which input variables any other type of Boolean function depends on. The success probability of learning that the function depends on a particular input variable depends on the form of the Boolean function that is tested, but does not depend on the total number of input variables. We also outline a procedure based on another quantum algorithm, the Grover search, to further amplify the success probability. Finally, we discuss quantum algorithms for learning the exact form of certain quadratic and cubic Boolean functions.
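A state-vector simulation of the Bernstein–Vazirani algorithm makes the one-query behaviour concrete (the algorithm is standard; this numpy rendering is my own): for a linear function f(x) = a·x mod 2, a single oracle call recovers a exactly.

```python
# Simulate Bernstein-Vazirani: H^n, phase oracle (-1)^{a.x}, H^n, measure.
import numpy as np

def bernstein_vazirani(a_bits):
    n = len(a_bits)
    N = 2 ** n
    state = np.zeros(N)
    state[0] = 1.0                                # start in |0...0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)                       # n-fold Hadamard
    state = Hn @ state                            # uniform superposition
    a = int("".join(map(str, a_bits)), 2)
    for x in range(N):                            # one oracle query, as phases
        if bin(x & a).count("1") % 2:
            state[x] *= -1
    state = Hn @ state                            # interference reveals a
    outcome = int(np.argmax(np.abs(state)))       # deterministic measurement
    return [int(b) for b in format(outcome, f"0{n}b")]

print(bernstein_vazirani([1, 0, 1, 1]))           # [1, 0, 1, 1]
```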

Journal ArticleDOI
TL;DR: Multirelational semantics are well suited to reasoning about programs involving two kinds of non-determinism; this paper lays the categorical foundations for an algebraic calculus of such multirelations.
Abstract: Multirelational semantics are well suited to reasoning about programs involving two kinds of non-determinism. This paper lays the categorical foundations for an algebraic calculus of multirelations.

Journal ArticleDOI
TL;DR: Indexed monoidal algebras are introduced as an equivalent structure for self-dual compact closed categories, and a coherence theorem is proved for the category of such algebras.
Abstract: Indexed monoidal algebras are introduced as an equivalent structure for self-dual compact closed categories, and a coherence theorem is proved for the category of such algebras. Turing automata and Turing graph machines are defined by generalising the classical Turing machine concept so that the collection of such machines becomes an indexed monoidal algebra. On the analogy of the von Neumann data-flow computer architecture, Turing graph machines are proposed as potentially reversible low-level universal computational devices, and a truly reversible molecular size hardware model is presented as an example.

Journal ArticleDOI
TL;DR: The construction of the T-powerlocale VT out of a frame is based on ideas from coalgebraic logic and makes explicit the connection between the Vietoris construction and Moss's coalgebraic cover modality, and it is proved that the operation VT preserves some properties of frames.
Abstract: This paper introduces an endofunctor VT on the category of frames that is parametrised by an endofunctor T on the category Set that satisfies certain constraints. This generalises Johnstone's construction of the Vietoris powerlocale in the sense that his construction is obtained by taking for T the finite covariant power set functor. Our construction of the T-powerlocale VT out of a frame is based on ideas from coalgebraic logic and makes explicit the connection between the Vietoris construction and Moss's coalgebraic cover modality. We show how to extend certain natural transformations between set functors to natural transformations between T-powerlocale functors. Finally, we prove that the operation VT preserves some properties of frames, such as regularity, zero-dimensionality and the combination of zero-dimensionality and compactness.

Journal ArticleDOI
TL;DR: A construction due to Fouché in which a Brownian motion is constructed from an algorithmically random infinite binary sequence is examined, showing that although the construction is provably not computable in the sense of computable analysis, a lower bound for the rate of convergence is computable, making the construction layerwise computable.
Abstract: We examine a construction due to Fouché in which a Brownian motion is constructed from an algorithmically random infinite binary sequence. We show that although the construction is provably not computable in the sense of computable analysis, a lower bound for the rate of convergence is computable in any upper bound for the compressibility of the sequence, making the construction layerwise computable.

Journal ArticleDOI
TL;DR: For any locally cartesian closed category ℰ, the authors prove that a local fibred right adjoint between slices of ℰ is given by a polynomial.
Abstract: For any locally cartesian closed category ℰ, we prove that a local fibred right adjoint between slices of ℰ is given by a polynomial. The slices in question are taken in a well-known fibred sense.
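Concretely, following the standard definition (as in Gambino and Kock's account of polynomial functors), a polynomial from $I$ to $J$ is a diagram

$$I \xleftarrow{\;s\;} B \xrightarrow{\;f\;} A \xrightarrow{\;t\;} J,$$

and the polynomial functor it generates is the composite

$$\mathcal{E}/I \xrightarrow{\;\Delta_s\;} \mathcal{E}/B \xrightarrow{\;\Pi_f\;} \mathcal{E}/A \xrightarrow{\;\Sigma_t\;} \mathcal{E}/J.$$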

Journal ArticleDOI
TL;DR: A polynomial time quantum attack on the Blum–Micali generator, which is considered secure against threats from classical computers, is presented, and is able to recover previous and future outputs of the generator under attack, thereby completely compromising its unpredictability.
Abstract: There are advantages in the use of quantum computing in the elaboration of attacks on certain pseudorandom generators when compared with analogous attacks using classical computing. This paper presents a polynomial time quantum attack on the Blum–Micali generator, which is considered secure against threats from classical computers. The proposed attack uses a Grover inspired procedure together with the quantum discrete logarithm, and is able to recover previous and future outputs of the generator under attack, thereby completely compromising its unpredictability. The attack can also be adapted to other generators, such as Blum–Micali generators with multiple hard-core predicates and generators from the Blum–Micali construction, and also to scenarios where the requirements on the bits are relaxed. Such attacks represent a threat to the security of the pseudorandom generators adopted in many real-world cryptosystems.
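For orientation, here is the classical generator under attack (the construction is standard; the toy parameters below are my own and far too small to be secure):

```python
# Blum-Micali generator: iterate modular exponentiation and emit a hard-core
# bit of each state. One common choice of predicate is shown.
def blum_micali(p, g, seed, nbits):
    """x_{i+1} = g^{x_i} mod p, for prime p and primitive root g."""
    x, out = seed, []
    for _ in range(nbits):
        x = pow(g, x, p)
        out.append(1 if x < (p - 1) // 2 else 0)   # hard-core predicate
    return out

# A quantum adversary combining Grover search with the quantum discrete
# logarithm can recover the internal state x from these output bits in
# polynomial time; classically that is believed to require discrete log.
print(blum_micali(p=2_147_483_647, g=16807, seed=42, nbits=16))
```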

Journal ArticleDOI
TL;DR: The semantic foundations of frame and anti-frame rules are discussed, the first sound model for Charguéraud and Pottier's type and capability system including both of these rules is presented, and invariants are generalised to families of invariants indexed over preorders.
Abstract: Frame and anti-frame rules have been proposed as proof rules for modular reasoning about programs. Frame rules allow the hiding of irrelevant parts of the state during verification, whereas the anti-frame rule allows the hiding of local state from the context. We discuss the semantic foundations of frame and anti-frame rules, and present the first sound model for Charguéraud and Pottier's type and capability system including both of these rules. The model is a possible worlds model based on the operational semantics and step-indexed heap relations, and the worlds are given by a recursively defined metric space. We also extend the model to account for Pottier's generalised frame and anti-frame rules, where invariants are generalised to families of invariants indexed over preorders. This generalisation enables reasoning about some well-bracketed as well as locally monotone uses of local state.
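In its simplest separation-logic form (the paper works with Charguéraud and Pottier's richer type-and-capability presentation), the frame rule reads

$$\frac{\{P\}\; C\; \{Q\}}{\{P * R\}\; C\; \{Q * R\}} \qquad \text{provided } \mathrm{mod}(C) \cap \mathrm{fv}(R) = \emptyset,$$

so the untouched resource $R$ can be hidden while $C$ is verified.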

Journal ArticleDOI
TL;DR: In this article, the dynamics of entanglement in some hybrid qubit-qutrit systems under the influence of global, collective, local and multilocal depolarising noise are studied.
Abstract: In this paper we study the dynamics of entanglement in some hybrid qubit–qutrit systems under the influence of global, collective, local and multilocal depolarising noise. We show that depolarising noise can be used to induce entanglement. A critical point exists under every coupling of the system with the environment at which all the states are equally entangled. Furthermore, we show that no entanglement sudden death (ESD) occurs when either only the qubit is coupled to its local environment or the system is coupled to a multilocal or global environment. This is an important result for various quantum information processing tasks and thus merits further investigation.
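The noise model in question is the standard depolarising channel, which with probability $p$ replaces the state by the maximally mixed one ($d = 2$ for the qubit, $d = 3$ for the qutrit):

$$\mathcal{D}_p(\rho) = (1 - p)\,\rho + p\,\frac{I_d}{d}.$$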

Journal ArticleDOI
TL;DR: A case study is used to outline a method of modelling, performance evaluation and behaviour-preserving reduction of concurrent computing systems, and to apply it to the dining philosophers system.
Abstract: We define a number of stochastic equivalences in the dtsPBC framework, which is a discrete time stochastic extension of finite Petri box calculus (PBC) enriched with iteration. These equivalences allow the identification of stochastic processes that have similar behaviour but are differentiated by the semantics of the calculus. We explain how the equivalences we propose can be used to reduce transition systems of expressions, and demonstrate how to apply the equivalences to compare the stationary behaviour. The equivalences guarantee a coincidence of performance indices for stochastic systems, and can be used for performance analysis simplification. We use a case study to outline a method of modelling, performance evaluation and behaviour preserving reduction of concurrent computing systems, and apply it to the dining philosophers system.