
Showing papers in "Information & Computation in 2006"


Journal ArticleDOI
TL;DR: New systems that extend the epistemic base language with a new notion of 'relativized common knowledge' are proposed, in such a way that the resulting full dynamic logic of information flow allows for a compositional analysis of all epistemic postconditions via perspicuous 'reduction axioms'.
Abstract: Current dynamic epistemic logics for analyzing effects of informational events often become cumbersome and opaque when common knowledge is added for groups of agents. Still, postconditions involving common knowledge are essential to successful multi-agent communication. We propose new systems that extend the epistemic base language with a new notion of 'relativized common knowledge', in such a way that the resulting full dynamic logic of information flow allows for a compositional analysis of all epistemic postconditions via perspicuous 'reduction axioms'. We also show how such systems can deal with factual alteration, rather than just information change, making them cover a much wider range of realistic events. After a warmup stage of analyzing logics for public announcements, our main technical results are expressivity and completeness theorems for a much richer logic that we call LCC. This is a dynamic epistemic logic whose static base is propositional dynamic logic (PDL), interpreted epistemically. This system is capable of expressing all model-shifting operations with finite action models, while providing a compositional analysis for a wide range of informational events. This makes LCC a serious candidate for a standard in dynamic epistemic logic, as we illustrate by analyzing some complex communication scenarios, including sending successive emails with both 'cc' and 'bcc' lines, and other private announcements to subgroups. Our proofs involve standard modal techniques, combined with a new application of Kleene's theorem on finite automata, as well as new Ehrenfeucht games of model comparison.

361 citations
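The 'warmup stage' of public announcements is easy to make concrete: announcing a fact restricts a Kripke model to the worlds where the fact holds, and knowledge is then re-evaluated on the restricted model. The following Python sketch illustrates only that update step (the two-world model and the helpers `update` and `knows` are our illustrative stand-ins, not the paper's LCC system):

```python
# Toy public-announcement update on a finite Kripke model.

def update(worlds, rel, fact):
    """Public announcement of `fact`: keep only worlds where it holds."""
    keep = {w for w in worlds if fact(w)}
    new_rel = {a: {(u, v) for (u, v) in pairs if u in keep and v in keep}
               for a, pairs in rel.items()}
    return keep, new_rel

def knows(agent, rel, worlds, w, fact):
    """Agent knows `fact` at w iff it holds in every accessible world."""
    return all(fact(v) for (u, v) in rel[agent] if u == w and v in worlds)

# Two worlds: p true in w1, false in w2; agent 'a' cannot tell them apart.
worlds = {"w1", "w2"}
rel = {"a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}}
p = lambda w: w == "w1"

before = knows("a", rel, worlds, "w1", p)   # False: w2 is still accessible
worlds2, rel2 = update(worlds, rel, p)      # publicly announce p
after = knows("a", rel2, worlds2, "w1", p)  # True after the update
```

Before the announcement the agent cannot rule out the p-false world; after the public announcement of p only p-worlds survive, so the agent knows p. This is the pattern that reduction axioms capture compositionally.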


Journal ArticleDOI
TL;DR: This work presents a statistical approach to probabilistic model checking, employing hypothesis testing and discrete-event simulation, which cannot guarantee a correct verification result but can at least bound the probability of generating an incorrect answer.
Abstract: Probabilistic verification of continuous-time stochastic processes has received increasing attention in the model-checking community in the past 5 years, with a clear focus on developing numerical solution methods for model checking of continuous-time Markov chains. Numerical techniques tend to scale poorly with an increase in the size of the model (the "state space explosion problem"), however, and are feasible only for restricted classes of stochastic discrete-event systems. We present a statistical approach to probabilistic model checking, employing hypothesis testing and discrete-event simulation. Since we rely on statistical hypothesis testing, we cannot guarantee that the verification result is correct, but we can at least bound the probability of generating an incorrect answer to a verification problem.

226 citations
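The hypothesis-testing idea can be sketched with Wald's sequential probability ratio test: draw simulation runs, accumulate a log-likelihood ratio, and stop once a boundary derived from the error bounds α and β is crossed. A minimal sketch, assuming an indifference region between `theta1` and `theta0` and a Bernoulli sampler meaning "this run satisfies the property" (all names are illustrative, not those of the paper's tool):

```python
import math

def sprt(sample, theta0, theta1, alpha=0.05, beta=0.05, max_runs=100_000):
    """Wald's SPRT for H0: p >= theta0 against H1: p <= theta1
    (theta1 < theta0), given a 0/1 sampler driven by simulation.
    alpha and beta bound the probabilities of a wrong answer."""
    accept_h0 = math.log(beta / (1 - alpha))   # lower decision boundary
    accept_h1 = math.log((1 - beta) / alpha)   # upper decision boundary
    llr = 0.0                                  # log-likelihood ratio H1 vs H0
    for _ in range(max_runs):
        if sample():
            llr += math.log(theta1 / theta0)
        else:
            llr += math.log((1 - theta1) / (1 - theta0))
        if llr <= accept_h0:
            return "H0"                        # property likely satisfied
        if llr >= accept_h1:
            return "H1"                        # property likely violated
    return "undecided"

verdict = sprt(lambda: 1, theta0=0.8, theta1=0.6)   # every sampled run succeeds
```

With an always-succeeding sampler the ratio drifts to the H0 boundary after a handful of samples; the error bounds, not the sample count, are fixed in advance, which is exactly the trade-off described above.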


Journal ArticleDOI
TL;DR: This paper is devoted to pure bigraphs, which in turn underlie various more refined forms, and it is shown that behavioural analysis for Petri nets, π-calculus and mobile ambients can all be recovered in the uniform framework of bigraphs.
Abstract: Bigraphs are graphs whose nodes may be nested, representing locality, independently of the edges connecting them. They may be equipped with reaction rules, forming a bigraphical reactive system (Brs) in which bigraphs can reconfigure themselves. Following an earlier paper describing link graphs, a constituent of bigraphs, this paper is a devoted to pure bigraphs, which in turn underlie various more refined forms. Elsewhere it is shown that behavioural analysis for Petri nets, π-calculus and mobile ambients can all be recovered in the uniform framework of bigraphs. The paper first develops the dynamic theory of an abstract structure, a wide reactive system (Wrs), of which a Brs is an instance. In this context, labelled transitions are defined in such a way that the induced bisimilarity is a congruence. This work is then specialised to Brss, whose graphical structure allows many refinements of the theory. The latter part of the paper emphasizes bigraphical theory that is relevant to the treatment of dynamics via labelled transitions. As a running example, the theory is applied to finite pure CCS, whose resulting transition system and bisimilarity are analysed in detail. The paper also mentions briefly the use of bigraphs to model pervasive computing and biological systems.

172 citations


Journal ArticleDOI
TL;DR: In this paper, a man-in-the-middle attack on PKINIT, the public key extension of the widely deployed Kerberos 5 authentication protocol, is reported.
Abstract: We report on a man-in-the-middle attack on PKINIT, the public key extension of the widely deployed Kerberos 5 authentication protocol. This flaw allows an attacker to impersonate Kerberos administrative principals (KDC) and end-servers to a client, hence breaching the authentication guarantees of Kerberos. It also gives the attacker the keys that the KDC would normally generate to encrypt the service requests of this client, hence defeating confidentiality as well. The discovery of this attack caused the IETF to change the specification of PKINIT and Microsoft to release a security update for some Windows operating systems. We discovered this attack as part of an ongoing formal analysis of the Kerberos protocol suite, and we have formally verified several possible fixes to PKINIT--including the one adopted by the IETF--that prevent our attack.

91 citations


Journal ArticleDOI
TL;DR: It is shown that the concept of event bisimulation arises naturally from taking the cocongruence point of view for probabilistic systems and can be given a pleasing categorical treatment in line with general coalgebraic principles.
Abstract: We introduce a new notion of bisimulation, called event bisimulation on labelled Markov processes and compare it with the, now standard, notion of probabilistic bisimulation, originally due to Larsen and Skou. Event bisimulation uses a sub σ-algebra as the basic carrier of information rather than an equivalence relation. The resulting notion is thus based on measurable subsets rather than on points: hence the name. Event bisimulation applies smoothly for general measure spaces; bisimulation, on the other hand, is known only to work satisfactorily for analytic spaces. We prove the logical characterization theorem for event bisimulation without having to invoke any of the subtle aspects of analytic spaces that feature prominently in the corresponding proof for ordinary bisimulation. These complexities only arise when we show that on analytic spaces the two concepts coincide. We show that the concept of event bisimulation arises naturally from taking the cocongruence point of view for probabilistic systems. We show that the theory can be given a pleasing categorical treatment in line with general coalgebraic principles. As an easy application of these ideas we develop a notion of "almost sure" bisimulation; the theory comes almost "for free" once we modify Giry's monad appropriately.

90 citations


Journal Article
TL;DR: In this article, the formation of hypervelocity stars (HVS) in the centre of the Milky Way due to inspiralling intermediate-mass black holes (IMBHs) was studied.
Abstract: We have performed N-body simulations of the formation of hypervelocity stars (HVS) in the centre of the Milky Way due to inspiralling intermediate-mass black holes (IMBHs). We considered IMBHs of different masses, all starting from circular orbits at an initial distance of 0.1 pc. We find that the IMBHs sink to the centre of the Galaxy due to dynamical friction, where they deplete the central cusp of stars. Some of these stars become HVS and are ejected with velocities sufficiently high to escape the Galaxy. Since the HVS carry with them information about their origin, in particular in the moment of ejection, the velocity distribution and the direction in which they escape the Galaxy, detecting a population of HVS will provide insight into the ejection processes and could therefore provide indirect evidence for the existence of IMBHs.

88 citations


Journal ArticleDOI
TL;DR: A Kleene theorem is shown, namely the equivalence between communicating automata, globally cooperative compositional message sequence graphs, and monadic second order logic, which extends results for universally bounded models.
Abstract: The behavior of a network of communicating automata is called existentially bounded if communication events can be scheduled in such a way that the number of messages in transit is always bounded by a value that depends only on the machine, not the run itself. We show a Kleene theorem for existentially bounded communicating automata, namely the equivalence between communicating automata, globally cooperative compositional message sequence graphs, and monadic second order logic. Our characterization extends results for universally bounded models, where for each and every possible scheduling of communication events, the number of messages in transit is uniformly bounded. As a consequence, we give solutions in the spirit of Madhusudan (2001) for various model checking problems on networks of communicating automata that satisfy our optimistic restriction.

80 citations


Journal ArticleDOI
TL;DR: This paper proposes a new approach to SMT(T1 ∪ T2), where the enumerator of truth assignments is integrated with two decision procedures, one for T1 and one for T2, acting independently from each other.
Abstract: Many approaches to deciding the satisfiability of quantifier-free formulae with respect to a background theory T-also known as Satisfiability Modulo Theory, or SMT(T)-rely on the integration between an enumerator of truth assignments and a decision procedure for conjunction of literals in T. When the background theory T is the combination T1 ∪ T2 of two simpler theories, the approach is typically instantiated by means of a theory combination schema (e.g. Nelson-Oppen, Shostak). In this paper we propose a new approach to SMT(T1 ∪ T2), where the enumerator of truth assignments is integrated with two decision procedures, one for T1 and one for T2, acting independently from each other. The key idea is to search for a truth assignment not only to the atoms occurring in the formula, but also to all the equalities between variables which are shared between the theories. This approach is simple and expressive: for instance, no modification is required to handle non-convex theories (as opposed to traditional Nelson-Oppen combinations which require a mechanism for splitting). Furthermore, it can be made practical by leveraging on state-of-the-art boolean and SMT search techniques, and on theory layering (i.e., cheaper reasoning first, and more often). We provide thorough experimental evidence to support our claims: we instantiate the framework with two decision procedures for the combinations of Equality and Uninterpreted Functions (EUF) and Linear Arithmetic (LA), both for (the convex case of) reals and for (the non-convex case of) integers; we analyze the impact of the different optimizations on a variety of test cases; and we compare the approach with state-of-the-art competitor tools, showing that our implemented tool compares positively with them, sometimes with dramatic gains in performance.

74 citations
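The key idea, assigning truth values to the interface equalities shared between the theories, can be sketched with toy stand-ins for the two decision procedures. In this illustrative example the EUF fragment asserts f(x) != f(y) and the LA fragment forces x = y, so the conjunction is unsatisfiable; all function names here are ours, not the paper's:

```python
from itertools import product

def euf_consistent(arrangement):
    """Toy EUF solver for the constraint f(x) != f(y):
    inconsistent as soon as x = y is asserted (by congruence)."""
    return not arrangement["x=y"]

def la_consistent(arrangement):
    """Toy LA solver for constraints x <= y and y <= x:
    consistent only when x = y is asserted."""
    return arrangement["x=y"]

def dtc_sat(interface_eqs, solvers):
    """Delayed theory combination: enumerate truth values of the shared
    interface equalities and ask each solver independently."""
    for values in product([True, False], repeat=len(interface_eqs)):
        arrangement = dict(zip(interface_eqs, values))
        if all(sat(arrangement) for sat in solvers):
            return True
    return False

result = dtc_sat(["x=y"], [euf_consistent, la_consistent])  # False: unsat
```

Each enumerated arrangement is rejected by one of the two independent solvers, so the combined problem is reported unsatisfiable without any Nelson-Oppen style propagation between them.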


Journal ArticleDOI
TL;DR: This paper relates the complexity of testing the monotonicity of a function over the d-dimensional cube to the Shannon entropy of the underlying distribution, and provides an improved upper bound on the query complexity of the property tester.
Abstract: In property testing, we are given oracle access to a function f, and we wish to test if the function satisfies a given property P, or it is e-far from having that property. In a more general setting, the domain on which the function is defined is equipped with a probability distribution, which assigns different weight to different elements in the domain. This paper relates the complexity of testing the monotonicity of a function over the d-dimensional cube to the Shannon entropy of the underlying distribution. We provide an improved upper bound on the query complexity of the property tester.

72 citations
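Under the uniform distribution, the simplest monotonicity tester samples edges of the cube and checks them directly. The sketch below is that baseline edge tester only (illustrative, not the distribution-sensitive tester whose query complexity the paper relates to Shannon entropy):

```python
import random

def edge_tester(f, d, trials=200, rng=random):
    """Sample a random raising edge (x, y) of {0,1}^d, with y obtained by
    flipping one 0-bit of x to 1, and reject if f decreases along it."""
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(d)]
        i = rng.randrange(d)
        if x[i] == 1:
            continue                 # not a raising edge, resample
        y = list(x)
        y[i] = 1
        if f(tuple(x)) > f(tuple(y)):
            return False             # monotonicity violated on this edge
    return True                      # no violation observed: accept

monotone = edge_tester(lambda x: sum(x), 3, rng=random.Random(1))
```

A monotone function such as the bit-sum always passes; a decreasing function is rejected as soon as any raising edge is sampled.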


Journal ArticleDOI
TL;DR: In this article, the authors generalize existing connections between automata and logic to a coalgebraic abstraction level, and introduce various notions of F-automata, devices that operate on pointed F-coalgebras.
Abstract: This paper generalizes existing connections between automata and logic to a coalgebraic abstraction level. Let F: Set to Set be a standard functor that preserves weak pullbacks. We introduce various notions of F-automata, devices that operate on pointed F-coalgebras. The criterion under which such an automaton accepts or rejects a pointed coalgebra is formulated in terms of an infinite two-player graph game. We also introduce a language of coalgebraic fixed point logic for F-coalgebras, and we provide a game semantics for this language. Finally, we show that the two approaches are equivalent in expressive power. We prove that any coalgebraic fixed point formula can be transformed into an F-automaton that accepts precisely those pointed F-coalgebras in which the formula holds. And conversely, we prove that any F-automaton can be converted into an equivalent fixed point formula that characterizes the pointed F-coalgebras accepted by the automaton.

71 citations


Journal ArticleDOI
TL;DR: A system is described in which Isabelle users obtain automatic support from automatic theorem provers such as Vampire and SPASS; a working prototype that uses background processes already provides much of the desired functionality.
Abstract: Interactive theorem provers require too much effort from their users. We have been developing a system in which Isabelle users obtain automatic support from automatic theorem provers (ATPs) such as Vampire and SPASS. An ATP is invoked at suitable points in the interactive session, and any proof found is given to the user in a window displaying an Isar proof script. There are numerous differences between Isabelle (polymorphic higher-order logic with type classes, natural deduction rule format) and classical ATPs (first-order, untyped, and clause form). Many of these differences have been bridged, and a working prototype that uses background processes already provides much of the desired functionality.

Journal ArticleDOI
TL;DR: All algorithms for sorting linear permutations by transpositions can be used to sort circular permutations, and a new O(n^(3/2) √log n) 1.5-approximation algorithm is presented, which is considerably simpler than those previously reported.
Abstract: An important problem in genome rearrangements is sorting permutations by transpositions. The complexity of the problem is still open, and two rather complicated 1.5-approximation algorithms for sorting linear permutations are known (Bafna and Pevzner, 98 and Christie, 99). The fastest known algorithm is the quadratic algorithm of Bafna and Pevzner. In this paper, we observe that the problem of sorting circular permutations by transpositions is equivalent to the problem of sorting linear permutations by transpositions. Hence, all algorithms for sorting linear permutations by transpositions can be used to sort circular permutations. Our main result is a new O(n^(3/2) √log n) 1.5-approximation algorithm, which is considerably simpler than the previous ones, and whose analysis is significantly less involved.
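The basic operation, exchanging two adjacent blocks of a permutation, and the standard breakpoint lower bound are easy to state in code; a small sketch (the helper names are ours, not the paper's):

```python
def transposition(p, i, j, k):
    """Exchange the adjacent blocks p[i:j] and p[j:k] (0 <= i < j < k <= n),
    the basic rearrangement operation discussed above."""
    return p[:i] + p[j:k] + p[i:j] + p[k:]

def breakpoints(p):
    """Count positions where consecutive entries are not consecutive values,
    a standard lower-bound measure for sorting by transpositions."""
    q = [-1] + list(p) + [len(p)]
    return sum(1 for a, b in zip(q, q[1:]) if b - a != 1)

p = [1, 2, 0]
sorted_p = transposition(p, 0, 2, 3)   # moves the block [1, 2] past [0]
```

A single transposition sorts this example; `breakpoints` drops to zero exactly on the identity permutation.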

Journal ArticleDOI
TL;DR: The use of recursive coalgebras as a paradigm of structured recursion in programming semantics is motivated, and new conditions for the recursiveness of a coalgebra based on comonads, comonad-coalgebras and distributive laws of functors over comonads are given.
Abstract: The concept of recursive coalgebra of a functor was introduced in the 1970s by Osius in his work on categorical set theory to discuss the relationship between wellfounded induction and recursively specified functions. In this paper, we motivate the use of recursive coalgebras as a paradigm of structured recursion in programming semantics, list some basic facts about recursive coalgebras and, centrally, give new conditions for the recursiveness of a coalgebra based on comonads, comonad-coalgebras and distributive laws of functors over comonads. We also present an alternative construction using countable products instead of cofree comonads.

Journal ArticleDOI
TL;DR: This work investigates the category of Eilenberg-Moore algebras for the Giry monad associated with stochastic relations over Polish spaces with continuous maps as morphisms; the algebras are identified as the positive convex structures on the base space.
Abstract: We investigate the category of Eilenberg-Moore algebras for the Giry monad associated with stochastic relations over Polish spaces with continuous maps as morphisms. The algebras are identified as the positive convex structures on the base space. The forgetful functor assigning to a positive convex structure its underlying Polish space has the stochastic powerdomain as its left adjoint.

Journal ArticleDOI
TL;DR: The axiomatization is shown to be weakly complete relative to an oracle for analytical reasoning; the proof is carried out using a non-trivial extension of the Fagin-Halpern-Megiddo technique together with three Henkin style completions.
Abstract: A finitary axiomatization for EQPL (exogenous quantum propositional logic) is presented. The axiomatization is shown to be weakly complete relative to an oracle for analytical reasoning. The proof is carried out using a non-trivial extension of the Fagin-Halpern-Megiddo technique together with three Henkin style completions.

Journal ArticleDOI
TL;DR: The authors' SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases, and Householder reflections are shown to provide better accuracy than Gram-Schmidt for orthogonalizing LLL-bases in floating point arithmetic.
Abstract: We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász (Factoring polynomials with rational coefficients, Math. Ann. 261 (1982) 515-534) towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^O(n), SLLL-reduction runs in O(n^(5+ε)) bit operations for every ε > 0, compared to O(n^(7+ε)) for the original LLL and to O(n^(6+ε)) for the LLL-algorithms of Schnorr (A more efficient algorithm for lattice reduction, Journal of Algorithms 9 (1988) 47-62) and Storjohann (Faster Algorithms for Integer Lattice Basis Reduction, TR 249, ETH Zürich, Department of Computer Science, July 1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments running in O(n^3 log n) arithmetic steps. Householder reflections are shown to provide better accuracy than Gram-Schmidt for orthogonalizing LLL-bases in floating point arithmetic.
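For orientation, the classical LLL procedure that SLLL reorganises into segments looks as follows. This is a deliberately naive textbook version in exact rational arithmetic that recomputes the Gram-Schmidt data at every step, so it is slow, but it shows the two ingredients: size reduction and the Lovász exchange condition.

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    """Gram-Schmidt orthogonalisation with coefficients mu, exactly."""
    n = len(B)
    Bs = [list(map(Fraction, b)) for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL reduction (not the segment variant above)."""
    B = [list(map(Fraction, b)) for b in B]
    n, k = len(B), 1
    while k < n:
        for j in range(k - 1, -1, -1):           # size-reduce b_k
            _, mu = gram_schmidt(B)
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
        Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                                # Lovász condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]      # otherwise swap and step back
            k = max(k - 1, 1)
    return B

reduced = lll([[1, 1], [1, 0]])   # a short, nearly orthogonal basis
```

Recomputing the orthogonalisation from scratch each round is exactly the kind of cost that segment reduction and Householder-based floating point orthogonalisation attack.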

Journal Article
TL;DR: Simulation results demonstrate that the improved algorithm is more reliable and robust to irregular network topology than the traditional DV-Hop localization algorithm, especially when the ratio of beacon nodes is relatively low and the network topology is sparse.

Abstract: Based on the characteristics of DV-Hop, an improved scheme for this typical range-free localization algorithm in wireless sensor networks is proposed. The main principle of the improved scheme is to introduce the concept of collinearity degree into the selection phase of beacon nodes. Furthermore, a method for determining the threshold value of the adaptive collinearity degree based on the localized network topology is proposed. Not only the topology relation of beacon nodes but also the relation between unknown nodes and beacon nodes is considered in the improved algorithm. Simulation results demonstrate that the improved algorithm is more reliable and robust to irregular network topology than the traditional DV-Hop localization algorithm, especially when the ratio of beacon nodes is relatively low and the network topology is sparse.
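The baseline DV-Hop scheme that the improvement builds on fits in a few lines: flood hop counts from each beacon, convert beacon-to-beacon ground truth into an average hop length, and place the unknown node where the distance estimates fit best. The sketch below uses a brute-force grid search for the last step; all names and the tiny line network are illustrative only:

```python
from collections import deque
from itertools import combinations, product
from math import dist

def hop_counts(adj, src):
    """BFS hop counts from one node over the connectivity graph."""
    hops = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

def dv_hop(adj, beacons, node, grid):
    """Plain DV-Hop (without the collinearity refinement above):
    estimate distances as hops * average hop length, then pick the grid
    point minimising the squared distance error."""
    tables = {b: hop_counts(adj, b) for b in beacons}
    pairs = list(combinations(beacons, 2))
    avg = (sum(dist(beacons[a], beacons[b]) for a, b in pairs)
           / sum(tables[a][b] for a, b in pairs))
    est = {b: avg * tables[b][node] for b in beacons}
    return min(grid, key=lambda p: sum((dist(p, beacons[b]) - est[b]) ** 2
                                       for b in beacons))

# line network 0-1-2-3-4 with beacons at both ends, unknown node 2
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
beacons = {0: (0.0, 0.0), 4: (4.0, 0.0)}
grid = [(x / 2, y / 2) for x, y in product(range(9), range(3))]
pos = dv_hop(adj, beacons, 2, grid)
```

On this regular line topology the estimate lands midway between the beacons; the paper's point is precisely that irregular topologies and nearly collinear beacons break this idealisation.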

Journal ArticleDOI
TL;DR: This work considers internal labelled transition systems within the sheaf topos, and axiomatises a class that is in precise correspondence with the coalgebraic and the indexed labelled transition system models.
Abstract: We study three operational models of name-passing process calculi: coalgebras on (pre)sheaves, indexed labelled transition systems, and history dependent automata. The coalgebraic model is considered both for presheaves over the category of finite sets and injections, and for its subcategory of atomic sheaves known as the Schanuel topos. Each coalgebra induces an indexed labelled transition system. Such transition systems are characterised, relating the coalgebraic approach to an existing model of name-passing. Further, we consider internal labelled transition systems within the sheaf topos, and axiomatise a class that is in precise correspondence with the coalgebraic and the indexed labelled transition system models. By establishing and exploiting the equivalence of the Schanuel topos with a category of named-sets, these internal labelled transition systems are also related to the theory of history dependent automata.

Journal ArticleDOI
TL;DR: The paper settles a long standing problem for Mazurkiewicz traces: the pure future local temporal logic defined with the basic modalities exists-next and until is expressively complete, which means every first-order definable language of Mazurkiewicz traces can be defined in a pure future local temporal logic.
Abstract: The paper settles a long standing problem for Mazurkiewicz traces: the pure future local temporal logic defined with the basic modalities exists-next and until is expressively complete. This means every first-order definable language of Mazurkiewicz traces can be defined in a pure future local temporal logic. The analogous result with a global interpretation has been known, but the treatment of a local interpretation turned out to be much more involved. Local logics are interesting because both the satisfiability problem and the model checking problem are solvable in PSPACE for these logics whereas they are non-elementary for global logics. Both the (previously known) global and the (new) local results generalize Kamp's Theorem for words, because for sequences local and global viewpoints coincide.
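For words, where Kamp's theorem originally lives, the two basic modalities have a direct operational reading. A small finite-word sketch (illustrative only; the trace case handled in the paper is substantially harder):

```python
def next_(phi, word, i=0):
    """Exists-next: phi holds at the successor position of i."""
    return i + 1 < len(word) and phi(word[i + 1])

def until(phi, psi, word, i=0):
    """phi until psi: some position j >= i satisfies psi, and phi holds
    at every position from i up to (but excluding) j."""
    for j in range(i, len(word)):
        if psi(word[j]):
            return True
        if not phi(word[j]):
            return False
    return False

holds = until(lambda c: c == "a", lambda c: c == "b", "aaab")
```

Here "a until b" holds on "aaab" because every position before the first b carries an a.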

Journal ArticleDOI
TL;DR: In this paper, four type systems are shown that ensure termination of well-typed π-calculus processes, where a term terminates if all its reduction sequences are of finite length.
Abstract: A term terminates if all its reduction sequences are of finite length. We show four type systems that ensure termination of well-typed π-calculus processes. The systems are obtained by successive refinements of the types of the simply typed π-calculus. For all (but one of) the type systems we also present upper bounds to the number of steps well-typed processes take to terminate. The termination proofs use techniques from term rewriting systems. We show the usefulness of the type systems on some non-trivial examples: the encodings of primitive recursive functions, the protocol for encoding separate choice in terms of parallel composition, a symbol table implemented as a dynamic chain of cells.

Journal ArticleDOI
TL;DR: In this paper, it is shown that Graph Isomorphism is in the complexity class SPP, and hence in ⊕P (in fact, in ModkP for each k ≥ 2).
Abstract: We show that Graph Isomorphism is in the complexity class SPP, and hence it is in ⊕P (in fact, in ModkP for each k ≥ 2). These inclusions for Graph Isomorphism were not known prior to membership in SPP. We derive this result as a corollary of a more general result: we show that a generic problem FIND-GROUP has an FPSPP algorithm. This general result has other consequences: for example, it follows that the hidden subgroup problem for permutation groups, studied in the context of quantum algorithms, has an FPSPP algorithm. Also, some other algorithmic problems over permutation groups known to be at least as hard as Graph Isomorphism (e.g., coset intersection) are in SPP, and thus in ModkP for each k ≥ 2.
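For contrast with the permutation-group machinery behind the SPP upper bound, the bare decision problem admits a brute-force formulation on small graphs given as adjacency dicts (node mapped to its set of neighbours); this search is of course exponential in the number of vertices:

```python
from itertools import permutations

def isomorphic(g, h):
    """Brute-force graph isomorphism on tiny graphs: try every bijection
    and check that it maps edges to edges and non-edges to non-edges."""
    a, b = sorted(g), sorted(h)
    if len(a) != len(b):
        return False
    edges_h = {(u, v) for u in h for v in h[u]}
    for perm in permutations(b):
        pi = dict(zip(a, perm))
        if all(((pi[u], pi[v]) in edges_h) == (v in g[u])
               for u in a for v in a):
            return True
    return False

path = {0: {1}, 1: {0, 2}, 2: {1}}
relabeled = {"b": {"a", "c"}, "a": {"b"}, "c": {"b"}}
same = isomorphic(path, relabeled)   # both are three-node paths
```

The SPP and FPSPP results above rest on group-theoretic structure (FIND-GROUP, coset intersection), not on search of this kind.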

Journal ArticleDOI
TL;DR: A new class of event structures, called locally finite, is considered that extends confusion-free event structures and has the property that ''concurrent processes are independent in the probabilistic sense.''

Abstract: This paper is devoted to probabilistic models for concurrent systems under their true-concurrency semantics. Here we address probabilistic event structures. We consider a new class of event structures, called locally finite, that extend confusion-free event structures. In locally finite event structures, maximal configurations can be tiled with branching cells: branching cells are minimal and finite sub-structures capturing the choices performed while scanning a maximal configuration. The probabilistic event structures that we introduce have the property that ''concurrent processes are independent in the probabilistic sense.''

Journal ArticleDOI
TL;DR: The implication problem and the finite implication problem for implications H ⇒ F, where F is an emvd and H a conjunction of emvds and fds, are unsolvable.

Abstract: By an implication for database dependencies we mean an expression H ⇒ F, where H is a conjunction of dependencies and F a single dependency. Fixing a class of such implications, a solution of the (finite) implication problem consists in an algorithmic procedure deciding for every implication in the class whether or not it holds in all (finite) databases (in which it is to be interpreted). In [3] this problem was studied for dependencies which are functional (fd) or embedded multivalued (emvd). As pointed out by Luc Segoufin, what was really shown is the following. Theorem 1. The implication problem and the finite implication problem for implications H ⇒ F, where F is an emvd and H a conjunction of emvds and fds, are unsolvable.
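The unsolvability is specific to the presence of emvds; for the fd-only fragment, implication is decidable by the classical attribute-closure procedure, sketched here (attributes are single characters and an fd is a (lhs, rhs) pair; the helper names are ours):

```python
def closure(attrs, fds):
    """Attribute-set closure under functional dependencies: repeatedly
    fire any fd whose left-hand side is already covered."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def implies(fds, fd):
    """H implies X -> Y iff Y lies in the closure of X under H."""
    lhs, rhs = fd
    return set(rhs) <= closure(lhs, fds)

fds = [("A", "B"), ("B", "C")]
ok = implies(fds, ("A", "C"))   # True by transitivity
```

This decision procedure is polynomial; the theorem above says no analogous procedure can exist once emvds enter the picture.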

Journal ArticleDOI
TL;DR: The calculus is shown to be complete provided that functions not in the intersection of the component signatures are declared as partial; if the unsatisfiability of a goal modulo the combined theory does not depend on the totality of the functions in the extensions, the inconsistency will be effectively found.
Abstract: The paper presents a modular superposition calculus for the combination of first-order theories involving both total and partial functions. The modularity of the calculus is a consequence of the fact that all the inferences are pure-only involving clauses over the alphabet of either one, but not both, of the theories-when refuting goals represented by sets of pure formulae. The calculus is shown to be complete provided that functions that are not in the intersection of the component signatures are declared as partial. This result also means that if the unsatisfiability of a goal modulo the combined theory does not depend on the totality of the functions in the extensions, the inconsistency will be effectively found. Moreover, we consider a constraint superposition calculus for the case of hierarchical theories and show that it has a related modularity property. Finally, we identify cases where the partial models can always be made total so that modular superposition is also complete with respect to the standard (total function) semantics of the theories.

Journal ArticleDOI
TL;DR: This work builds on the construction of the universal Harsanyi type spaces by Heifetz and Samet and papers by Rößiger and Jacobs on coalgebraic modal logic, and constructs logical languages (probabilistic logics of transition systems) and interprets them on coalgebras.
Abstract: We prove that every functor on the category Meas of measurable spaces built from the identity and constant functors using products, coproducts, and the probability measure functor Δ has a final coalgebra. Our work builds on the construction of the universal Harsanyi type spaces by Heifetz and Samet and papers by Rößiger and Jacobs on coalgebraic modal logic. We construct logical languages, probabilistic logics of transition systems, and interpret them on coalgebras. The final coalgebra is carried by the set of descriptions of all points in all coalgebras. For the category Set, we work with the functor D of discrete probability measures. We prove that every functor on Set built from D and the expected functors has a final coalgebra. The work for Set differs from the work for Meas: negation is needed for final coalgebras on Set but not for Meas.

Journal ArticleDOI
TL;DR: This paper presents a new approach for combining decision procedures for the word problem in the non-disjoint case that applies to equational theories induced by modal logics, but is not restricted to them.
Abstract: Previous results for combining decision procedures for the word problem in the non-disjoint case do not apply to equational theories induced by modal logics-which are not disjoint for sharing the theory of Boolean algebras. Conversely, decidability results for the fusion of modal logics are strongly tailored towards the special theories at hand, and thus do not generalize to other types of equational theories. In this paper, we present a new approach for combining decision procedures for the word problem in the non-disjoint case that applies to equational theories induced by modal logics, but is not restricted to them. The known fusion decidability results for modal logics are instances of our approach. However, even for equational theories induced by modal logics our results are more general since they are not restricted to so-called normal modal logics.

Journal ArticleDOI
TL;DR: It is shown that in contrast to classical (algorithmic) randomness--which cannot be naturally characterised in terms of plain complexity--asymptotic randomness admits such a characterisation.
Abstract: We introduce the zeta number, natural halting probability, and natural complexity of a Turing machine and we relate them to Chaitin's Omega number, halting probability, and program-size complexity. A classification of Turing machines according to their zeta numbers is proposed: divergent, convergent, and tuatara. We prove the existence of universal convergent and tuatara machines. Various results on (algorithmic) randomness and partial randomness are proved. For example, we show that the zeta number of a universal tuatara machine is c.e. and random. A new type of partial randomness, asymptotic randomness, is introduced. Finally we show that in contrast to classical (algorithmic) randomness--which cannot be naturally characterised in terms of plain complexity--asymptotic randomness admits such a characterisation.

Journal ArticleDOI
TL;DR: This work presents the pattern-matching spi-calculus, which is an obvious extension of the spi-calculus to include pattern-matching as primitive, and shows that any appropriately typed process is guaranteed to satisfy robust authenticity, secrecy and integrity properties.
Abstract: Cryptographic protocols often make use of nested cryptographic primitives, for example signed message digests, or encrypted signed messages. Gordon and Jeffrey's prior work on types for authenticity did not allow for such nested cryptography. In this work, we present the pattern-matching spi-calculus, which is an obvious extension of the spi-calculus to include pattern-matching as primitive. The novelty of the language is in the accompanying type system, which uses the same language of patterns to describe complex data dependencies which cannot be described using prior type systems. We show that any appropriately typed process is guaranteed to satisfy robust authenticity, secrecy and integrity properties.

Journal ArticleDOI
TL;DR: This paper illustrates the relevance of distributive laws for the solution of recursive equations, and shows that one approach for obtaining coinductive solutions of equations via infinite terms is in fact a special case of a more general approach using an extended form of coinduction via distributive Laws.
Abstract: This paper illustrates the relevance of distributive laws for the solution of recursive equations, and shows that one approach for obtaining coinductive solutions of equations via infinite terms is in fact a special case of a more general approach using an extended form of coinduction via distributive laws.

Journal ArticleDOI
TL;DR: It is shown that no algorithm can have absolute competitive ratio greater than 0, and an algorithm with asymptotic competitive ratio 1/2 is presented, which is the best possible.
Abstract: In this paper we consider the following problems: we are given a set of n items {u1, ..., un} and a number of unit-capacity bins. Each item ui has a size wi ∈ (0, 1] and a penalty pi ≥ 0. An item can be either rejected, in which case we pay its penalty, or put into one bin under the constraint that the total size of the items in the bin is no greater than 1. No item can be spread into more than one bin. The objective is to minimize the total number of used bins plus the total penalty paid for the rejected items. We call the problem bin packing with rejection penalties, and denote it as BPR. For the on-line BPR problem, we present an algorithm with an absolute competitive ratio of 2.618 while the lower bound is 2.343, and an algorithm with an asymptotic competitive ratio arbitrarily close to 1.75 while the lower bound is 1.540. For the off-line BPR problem, we present an algorithm with an absolute worst-case ratio of 2 while the lower bound is 1.5, and an algorithm with an asymptotic worst-case ratio of 1.5. We also study a closely related bin covering version of the problem. In this case pi means some amount of profit. If an item is rejected, we get its profit, or it can be put into a bin in such a way that the total size of the items in the bin is no smaller than 1. The objective is to maximize the number of covered bins plus the total profit of all rejected items. We call this problem bin covering with rejection (BCR). For the on-line BCR problem, we show that no algorithm can have absolute competitive ratio greater than 0, and present an algorithm with asymptotic competitive ratio 1/2, which is the best possible. For the off-line BCR problem, we also present an algorithm with an absolute worst-case ratio of 1/2 which matches the lower bound.
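A simple heuristic conveys the flavour of the BPR objective: pay the penalty whenever it is at most the item's size, otherwise pack First Fit. This is an illustrative sketch only, not one of the paper's algorithms, and its competitive ratio is not the one analysed above:

```python
def first_fit_with_rejection(items):
    """Toy BPR heuristic. Items are (size, penalty) pairs, size in (0, 1]:
    reject an item when its penalty is at most its size, otherwise place
    it in the first bin with room, opening a new bin if none fits."""
    bins, penalty = [], 0.0
    for size, pen in items:
        if pen <= size:
            penalty += pen          # cheaper to pay the penalty
            continue
        for b in range(len(bins)):
            if bins[b] + size <= 1:
                bins[b] += size
                break
        else:
            bins.append(size)       # open a new bin
    return len(bins) + penalty      # objective: bins used + penalties paid

cost = first_fit_with_rejection([(0.6, 2.0), (0.5, 0.1), (0.4, 1.0)])
```

On this instance the two expensive items share one bin and the cheap item is rejected, so the objective is one bin plus a 0.1 penalty.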