
Showing papers in "Information & Computation in 2008"


Journal ArticleDOI
TL;DR: A framework in which anonymity protocols are interpreted as noisy channels in the information-theoretic sense is considered, the idea of using the notion of capacity as a measure of the loss of anonymity is explored, and it is shown how various notions of anonymity can be expressed in this framework.
Abstract: We consider a framework in which anonymity protocols are interpreted as noisy channels in the information-theoretic sense, and we explore the idea of using the notion of capacity as a measure of the loss of anonymity. Such an idea was already suggested by Moskowitz, Newman and Syverson, in their analysis of the covert channel that can be created as a result of non-perfect anonymity. We consider the case in which some leak of information is intended by design, and we introduce the notion of conditional capacity to rule out this factor, thus retrieving a natural correspondence with the notion of anonymity. Furthermore, we show how to compute the capacity and the conditional capacity when the anonymity protocol satisfies certain symmetries. We also investigate how the adversary can test the system to try to infer the user's identity, and we study how his probability of success depends on the characteristics of the channel. We then illustrate how various notions of anonymity can be expressed in this framework, and show the relation with some definitions of probabilistic anonymity in the literature. Finally, we show how to compute the matrix of the channel (and hence the capacity and conditional capacity) using model checking.
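The capacity computation the abstract refers to can be illustrated with the classical Blahut-Arimoto iteration. The sketch below is our own illustration, not the paper's procedure (the paper obtains the channel matrix via model checking); function names and the example channel are assumptions.

```python
import math

def blahut_arimoto_capacity(W, iters=500):
    """Estimate the capacity (in bits) of a discrete memoryless channel.

    W[x][y] is the probability of observing output y given input x.
    Classical Blahut-Arimoto fixed-point iteration.
    """
    n = len(W)
    p = [1.0 / n] * n  # start from the uniform input distribution
    for _ in range(iters):
        # Output distribution induced by the current input distribution.
        r = [sum(p[x] * W[x][y] for x in range(n)) for y in range(len(W[0]))]
        # Per-input exponentiated divergence terms.
        c = [math.exp(sum(W[x][y] * math.log(W[x][y] / r[y])
                          for y in range(len(W[0])) if W[x][y] > 0))
             for x in range(n)]
        z = sum(p[x] * c[x] for x in range(n))
        p = [p[x] * c[x] / z for x in range(n)]
    return math.log(z, 2)

# Binary symmetric channel with crossover probability 0.1:
# capacity should be 1 - H(0.1), about 0.531 bits.
bsc = [[0.9, 0.1], [0.1, 0.9]]
print(round(blahut_arimoto_capacity(bsc), 3))
```

For a symmetric channel like this one the uniform input is already optimal, so the iteration converges immediately; for the asymmetric matrices an anonymity protocol induces, the loop does real work.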

223 citations


Journal ArticleDOI
TL;DR: This work proves the first explicit approximation lower bounds for various kinds of domination problems (connected, total, independent) in bounded degree graphs, for the Minimum Dominating Set problem and its variants.
Abstract: We study approximation hardness of the Minimum Dominating Set problem and its variants in undirected and directed graphs. Using a similar result obtained by Trevisan for Minimum Set Cover, we prove the first explicit approximation lower bounds for various kinds of domination problems (connected, total, independent) in bounded degree graphs. Asymptotically, for degree bound approaching infinity, these bounds almost match the known upper bounds. The results are applied to improve the lower bounds for other related problems such as Maximum Induced Matching and Maximum Leaf Spanning Tree.
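The problem whose approximability is bounded here is easy to state concretely. The brute-force sketch below (the graph and all names are our illustration, not from the paper) finds an optimum dominating set on a toy instance; its exponential cost is exactly why approximation, and its limits, matter.

```python
from itertools import combinations

def min_dominating_set(adj):
    """Brute-force minimum dominating set of a small graph.

    adj maps each vertex to its set of neighbours.  A set S dominates
    the graph if every vertex is in S or adjacent to a member of S.
    Exponential time: fine for toy instances, hopeless in general.
    """
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            covered = set(S).union(*(adj[v] for v in S))
            if covered == set(vertices):
                return set(S)

# Path 0-1-2-3-4-5: two vertices, e.g. {1, 4}, dominate everything.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(min_dominating_set(path))
```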

190 citations


Journal ArticleDOI
TL;DR: The minimum number of bits of advice that has to be given to an exploration algorithm in order to achieve competitive ratio strictly smaller than 2 is investigated, and an exact threshold number is established that turns out to be roughly log log D, where D is the diameter of the tree.
Abstract: We study the amount of knowledge about the network that is required in order to efficiently solve a task concerning this network. The impact of available information on the efficiency of solving network problems, such as communication or exploration, has been investigated before, but assumptions concerned availability of particular items of information about the network, such as the size, the diameter, or a map of the network. In contrast, our approach is quantitative: we investigate the minimum number of bits of information (bits of advice) that has to be given to an algorithm in order to perform a task with given efficiency. We illustrate this quantitative approach to available knowledge by the task of tree exploration. A mobile entity (robot) has to traverse all edges of an unknown tree, using as few edge traversals as possible. The quality of an exploration algorithm A is measured by its competitive ratio, i.e., by comparing its cost (number of edge traversals) to the length of the shortest path containing all edges of the tree. Depth-First-Search has competitive ratio 2 and, in the absence of any information about the tree, no algorithm can beat this value. We determine the minimum number of bits of advice that has to be given to an exploration algorithm in order to achieve competitive ratio strictly smaller than 2. Our main result establishes an exact threshold number of bits of advice that turns out to be roughly log log D, where D is the diameter of the tree. More precisely, for any constant c, we construct an exploration algorithm with competitive ratio smaller than 2, using at most log log D - c bits of advice, and we show that every algorithm using log log D - g(D) bits of advice, for any function g unbounded from above, has competitive ratio at least 2.
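The benchmark in this abstract, the shortest walk containing all edges of the tree, can be computed by brute force on toy instances. The sketch below (tree and names are our own) does a BFS over (position, covered edges) states and checks the result against the closed form 2(n-1) - ecc(r): every edge is traversed twice except those on a longest root-to-leaf path, which the optimal walk traverses once and ends on.

```python
from collections import deque

def shortest_covering_walk(adj, root):
    """Length of the shortest walk from `root` traversing every edge of a
    tree at least once (the walk may end anywhere).  BFS over
    (current vertex, set of covered edges) states; exponential, toy-sized only.
    """
    all_edges = frozenset(frozenset((u, v)) for u in adj for v in adj[u])
    seen = {(root, frozenset())}
    queue = deque([(root, frozenset(), 0)])
    while queue:
        v, covered, cost = queue.popleft()
        if covered == all_edges:
            return cost
        for w in adj[v]:
            state = (w, covered | {frozenset((v, w))})
            if state not in seen:
                seen.add(state)
                queue.append((*state, cost + 1))

# Root 0 with children 1 and 2; vertex 1 has children 3 and 4.
tree = {0: {1, 2}, 1: {0, 3, 4}, 2: {0}, 3: {1}, 4: {1}}
n, ecc = 5, 2  # eccentricity of the root: distance to leaf 3 (or 4)
best = shortest_covering_walk(tree, 0)
print(best, best == 2 * (n - 1) - ecc)
```

A DFS that unluckily saves the shallow leaf 2 for last pays 7 traversals here against the optimum of 6, which is the gap the advice bits are buying.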

77 citations


Journal ArticleDOI
TL;DR: A region of points at which even approximating T(G;x,y) is as hard as #P is identified; it also follows that there is no FPRAS for counting nowhere-zero λ-flows for λ>2, an interesting consequence since the corresponding decision problem is in P, for example for λ=6.
Abstract: The Tutte polynomial of a graph G is a two-variable polynomial T(G;x,y) that encodes many interesting properties of the graph. We study the complexity of the following problem, for rationals x and y: take as input a graph G, and output a value which is a good approximation to T(G;x,y). Jaeger et al. have completely mapped the complexity of exactly computing the Tutte polynomial. They have shown that this is #P-hard, except along the hyperbola (x-1)(y-1)=1 and at four special points. We are interested in determining for which points (x,y) there is a fully polynomial randomised approximation scheme (FPRAS) for T(G;x,y). Under the assumption RP ≠ NP, we prove that there is no FPRAS at (x,y) if (x,y) is in one of the half-planes x < -1 or y < -1 (excluding the easy hyperbola). Also, under the assumption RP ≠ NP, there is no FPRAS at the point (x,y)=(0, 1-λ) when λ>2 is a positive integer. Thus, there is no FPRAS for counting nowhere-zero λ-flows for λ>2. This is an interesting consequence of our work since the corresponding decision problem is in P for example for λ=6. Although our main concern is to distinguish regions of the Tutte plane that admit an FPRAS from those that do not, we also note that the latter regions exhibit different levels of intractability. At certain points (x,y), for example the integer points on the x-axis, or any point in the positive quadrant, there is a randomised approximation scheme for T(G;x,y) that runs in polynomial time using an oracle for an NP predicate. On the other hand, we identify a region of points (x,y) at which even approximating T(G;x,y) is as hard as #P.
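Exact computation of T(G;x,y), which Jaeger et al. showed to be #P-hard in general, is easy on tiny graphs via the standard deletion-contraction recurrence. The sketch below is our own illustration (representation and names are assumptions); it returns the polynomial as a dict mapping (i, j) to the coefficient of x^i y^j.

```python
def tutte(edges):
    """Tutte polynomial by deletion-contraction, as {(i, j): coeff of x^i y^j}.

    `edges` is a list of (u, v) pairs; parallel edges and loops are allowed,
    since contraction creates them.
    """
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:  # loop: T(G) = y * T(G - e)
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    if is_bridge(edges, u, v):  # bridge: T(G) = x * T(G / e)
        return {(i + 1, j): c for (i, j), c in tutte(contract(rest, u, v)).items()}
    # otherwise T(G) = T(G - e) + T(G / e)
    result = dict(tutte(rest))
    for key, c in tutte(contract(rest, u, v)).items():
        result[key] = result.get(key, 0) + c
    return result

def contract(edges, u, v):
    """Merge v into u (the contracted edge itself was removed by the caller)."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def is_bridge(edges, u, v):
    """Does removing the first edge (u, v) disconnect u from v?"""
    rest = edges[1:]
    reach, frontier = {u}, [u]
    while frontier:
        a = frontier.pop()
        for x, y in rest:
            for b in ((y,) if x == a else (x,) if y == a else ()):
                if b not in reach:
                    reach.add(b)
                    frontier.append(b)
    return v not in reach

triangle = [(0, 1), (1, 2), (0, 2)]
t = tutte(triangle)             # x^2 + x + y
print(t, sum(t.values()))       # evaluating at (1, 1) counts spanning trees
```

The recursion doubles the work per non-trivial edge, a concrete taste of why the approximation question studied above is the interesting one.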

76 citations


Journal ArticleDOI
TL;DR: The main technical result of the present paper is a sound and complete axiomatization of the propositional fragment of computability logic whose vocabulary includes all three sorts of conjunction and disjunction: parallel, choice and sequential.
Abstract: Computability logic (CL) is a semantical platform and research program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which it has more traditionally been. Formulas in CL stand for (interactive) computational problems, understood as games between a machine and its environment; logical operators represent operations on such entities; and "truth" is understood as existence of an effective solution, i.e., of an algorithmic winning strategy. The formalism of CL is open-ended, and may undergo a series of extensions as the study of the subject advances. The main groups of operators on which CL has been focused so far are the parallel, choice, branching, and blind operators, with the logical behaviors of the first three groups resembling those of the multiplicatives, additives and exponentials of linear logic, respectively. The present paper introduces a new important group of operators, called sequential. The latter come in the form of sequential conjunction and disjunction, sequential quantifiers, and sequential recurrences ("exponentials"). As the name may suggest, the algorithmic intuitions associated with this group are those of sequential computations, as opposed to the intuitions of parallel computations associated with the parallel group of operations. Specifically, while playing a parallel combination of games means playing all components of the combination simultaneously, playing a sequential combination means playing the components one after the other. The main technical result of the present paper is a sound and complete axiomatization of the propositional fragment of computability logic whose vocabulary, together with negation, includes all three sorts of conjunction and disjunction: parallel, choice and sequential. An extension of this result to the first-order level is also outlined.

71 citations


Journal ArticleDOI
TL;DR: A framework for compositional analysis of a large class of security protocols is developed, intended to facilitate automatic as well as manual verification of large structured security protocols.
Abstract: Automatic security protocol analysis is currently feasible only for small protocols. Since larger protocols quite often are composed of many small protocols, compositional analysis is an attractive, but non-trivial approach. We have developed a framework for compositional analysis of a large class of security protocols. The framework is intended to facilitate automatic as well as manual verification of large structured security protocols. Our approach is to verify properties of component protocols in a multi-protocol environment, then deduce properties about the composed protocol. To reduce the complexity of multi-protocol verification, we introduce a notion of protocol independence and prove a number of theorems that enable analysis of independent component protocols in isolation. To illustrate the applicability of our framework to real-world protocols, we study a key establishment sequence in WiMAX consisting of three subprotocols. Except for a small amount of trivial reasoning, the analysis is done using automatic tools.

60 citations


Journal ArticleDOI
TL;DR: It is discussed how the finite over- and under-approximations can be used to check properties of systems modelled by graph transformation systems, illustrating this with some small examples.
Abstract: We propose a technique for the analysis of infinite-state graph transformation systems, based on the construction of finite structures approximating their behaviour. Following a classical approach, one can construct a chain of finite under-approximations (k-truncations) of the Winskel style unfolding of a graph grammar. More interestingly, also a chain of finite over-approximations (k-coverings) of the unfolding can be constructed. The fact that k-truncations and k-coverings approximate the unfolding with arbitrary accuracy is formalised by showing that both chains converge (in a categorical sense) to the full unfolding. We discuss how the finite over- and under-approximations can be used to check properties of systems modelled by graph transformation systems, illustrating this with some small examples. We also describe the Augur tool, which provides a partial implementation of the proposed constructions, and has been used for the verification of larger case studies.

52 citations


Journal ArticleDOI
TL;DR: This work shows how to design secure authentication protocols for a non-standard class of scenarios where authentication is not bootstrapped from a PKI, shared secrets or trusted third parties, but rather from a minimum of work by human user(s) implementing the low-bandwidth unspoofable channels between them.
Abstract: We show how to design secure authentication protocols for a non-standard class of scenarios. In these, authentication is not bootstrapped from a PKI, shared secrets or trusted third parties, but rather from a minimum of work by human user(s) implementing the low-bandwidth unspoofable channels between them. We develop both pairwise and group protocols which are essentially optimal in human effort and, given that, in computation. We compare our protocols with recent pairwise protocols proposed by, for example, Hoepman and Vaudenay. We introduce and analyse a new cryptographic primitive, a digest function, that is closely related to short-output universal hash functions.

52 citations


Journal ArticleDOI
TL;DR: To represent execution monitors constrained by memory limitations, a new class of automata, bounded history automata, is introduced; characterizing these limitations leads to a precise taxonomy of security policies that are enforceable under memory-limitation constraints.
Abstract: Recently, attention has been given to formally characterizing security policies that are enforceable by different kinds of security mechanisms. A very important research problem is the characterization of security policies that are enforceable by execution monitors constrained by memory limitations. This paper contributes more precise answers to this research problem. To represent execution monitors constrained by memory limitations, we introduce a new class of automata, bounded history automata. Characterizing memory limitations leads us to define a precise taxonomy of security policies that are enforceable under memory-limitation constraints.

50 citations


Journal ArticleDOI
TL;DR: This paper studies sixteen communication primitives, arising from the combination of four useful programming features: synchronism, arity, communication medium and pattern-matching, and compares every pair of primitives to obtain a hierarchy of languages based on their relative expressive power.
Abstract: In this paper, we study sixteen communication primitives, arising from the combination of four useful programming features: synchronism (synchronous vs asynchronous primitives), arity (monadic vs polyadic data), communication medium (message passing vs shared dataspaces) and pattern-matching. Some of these primitives have already been used in at least one language which has appeared in the literature; however, to reason uniformly on such primitives, we plug them into a common framework based on the π-calculus. By means of possibility/impossibility of 'reasonable' encodings, we compare every pair of primitives to obtain a hierarchy of languages based on their relative expressive power.

43 citations


Journal ArticleDOI
TL;DR: This paper investigates both the precision and the model checking efficiency of abstract models designed to preserve branching time logics w.r.t. a 3-valued semantics and suggests an efficient algorithm in which the abstract model is constructed during model checking, by need.
Abstract: This paper investigates both the precision and the model checking efficiency of abstract models designed to preserve branching time logics w.r.t. a 3-valued semantics. Current abstract models use ordinary transitions to over-approximate the concrete transitions, while they use hyper transitions to under-approximate the concrete transitions. In this work, we refer to precision measured w.r.t. the choice of abstract states, independently of the formalism used to describe abstract models. We show that current abstract models do not allow maximal precision. We suggest a new class of models and a construction of an abstract model which is most precise w.r.t. any choice of abstract states. As before, the construction of such models might involve an exponential blowup, which is inherent by the use of hyper transitions. We therefore suggest an efficient algorithm in which the abstract model is constructed during model checking, by need. Our algorithm achieves maximal precision w.r.t. the given property while remaining quadratic in the number of abstract states. To complete the picture, we incorporate it into an abstraction-refinement framework.

Journal ArticleDOI
TL;DR: It is proved in particular that timed Petri nets and timed automata are incomparable with respect to language equivalence, and this has surprising consequences on timed automata, for instance, on the power of non-deterministic clock resets.
Abstract: Timed Petri nets and timed automata are two standard models for the analysis of real-time systems. We study in this paper their relationship, and prove in particular that they are incomparable with respect to language equivalence. In fact, we study the more general model of timed Petri nets with read-arcs (RA-TdPN), already introduced in [J. Srba, Timed-arc petri nets vs. networks of automata, in: Proceedings of the 26th International Conference Application and Theory of Petri Nets (ICATPN 05), Lecture Notes in Computer Science, vol. 3536, Springer, Berlin, 2005, pp. 385-402], which unifies both models of timed Petri nets and of timed automata, and prove that the coverability problem remains decidable for this model. Then, we establish numerous expressiveness results and prove that Zeno behaviours discriminate between several sub-classes of RA-TdPNs. This has surprising consequences on timed automata, for instance, on the power of non-deterministic clock resets.

Journal ArticleDOI
TL;DR: It is concluded that decisiveness is a real restriction on Gold's model of explanatory (or in the limit) learning of grammars for languages from positive data, and that non U-shaped learning liberalizes the requirement of decisiveness from being a restriction on all hypotheses output to the same restriction but only on correct hypotheses.
Abstract: Overregularization seen in child language learning, for example, verb tense constructs, involves abandoning correct behaviours for incorrect ones and later reverting to correct behaviours. Quite a number of other child development phenomena also follow this U-shaped form of learning, unlearning and relearning. A decisive learner does not do this and, more generally, never abandons an hypothesis H for an inequivalent one where it later conjectures an hypothesis equivalent to H, where equivalence means semantical or behavioural equivalence. The first main result of the present paper entails that decisiveness is a real restriction on Gold's model of explanatory (or in the limit) learning of grammars for languages from positive data. This result also solves an open problem posed in 1986 by Osherson, Stob and Weinstein. Second-time decisive learners semantically conjecture each of their hypotheses for any language at most twice. By contrast, such learners are shown not to restrict Gold's model of learning. Non U-shaped learning liberalizes the requirement of decisiveness from being a restriction on all hypotheses output to the same restriction but only on correct hypotheses. The situation regarding learning power for non U-shaped learning is a little more complex than that for decisiveness. This is explained shortly below. Gold's original model for learning grammars from positive data, called EX-learning, requires, for success, syntactic convergence to a correct grammar. A slight variant, called BC-learning, requires only semantic convergence to a sequence of correct grammars that need not be syntactically identical to one another. The second main result says that non U-shaped learning does not restrict EX-learning. However, from an argument of Fulk, Jain and Osherson, non U-shaped learning does restrict BC-learning. 
The final section discusses the possible meaning of these results for cognitive science and, in this regard, indicates some avenues worthy of future investigation.

Journal ArticleDOI
TL;DR: This work proves that, for any k>=4, there exists a graph with connected visible search number at most k and monotone connected visible search number >k; that is, as opposed to the non-connected variant of visible graph searching, "recontamination helps" for connected visible search.
Abstract: Search games are attractive for their correspondence with classical width parameters. For instance, the invisible search number (a.k.a. node search number) of a graph is equal to its pathwidth plus 1, and the visible search number of a graph is equal to its treewidth plus 1. The connected variants of these games ask for search strategies that are connected, i.e., at every step of the strategy, the searched part of the graph induces a connected subgraph. We focus on monotone search strategies, i.e., strategies for which every node is searched exactly once. The monotone connected visible search number of an n-node graph is at most O(log n) times its visible search number. First, we prove that this logarithmic bound is tight. Precisely, we prove that there is an infinite family of graphs for which the ratio of monotone connected visible search number over visible search number is Ω(log n). Second, we prove that, as opposed to the non-connected variant of visible graph searching, "recontamination helps" for connected visible search. Precisely, we prove that, for any k>=4, there exists a graph with connected visible search number at most k, and monotone connected visible search number >k.

Journal ArticleDOI
TL;DR: The EXPTIME-completeness of the model-checking problem for 1½-player BPA games and qualitative PCTL formulae is derived, and it is shown that the qualitative extended reachability problem is decidable in polynomial time.
Abstract: We consider a class of infinite-state Markov decision processes generated by stateless pushdown automata. This class corresponds to 1½-player games over graphs generated by BPA systems or (equivalently) 1-exit recursive state machines. An extended reachability objective is specified by two sets S and T of safe and terminal stack configurations, where the membership to S and T depends just on the top-of-the-stack symbol. The question is whether there is a suitable strategy such that the probability of hitting a terminal configuration by a path leading only through safe configurations is equal to (or different from) a given x ∈ {0,1}. We show that the qualitative extended reachability problem is decidable in polynomial time, and that the set of all configurations for which there is a winning strategy is effectively regular. More precisely, this set can be represented by a deterministic finite-state automaton with a fixed number of control states. This result is a generalization of a recent theorem by Etessami and Yannakakis which says that qualitative termination for 1-exit RMDPs (which exactly correspond to our 1½-player BPA games) is decidable in polynomial time. Interestingly, the properties of winning strategies for the extended reachability objectives are quite different from the ones for termination, and new observations are needed to obtain the result. As an application, we derive the EXPTIME-completeness of the model-checking problem for 1½-player BPA games and qualitative PCTL formulae.

Journal ArticleDOI
TL;DR: This work presents a novel maximal model construction for the fragment of the modal μ-calculus with boxes and greatest fixed points only, and adapts it to control-flow graphs modelling components described in a sequential procedural language.
Abstract: We present a method for algorithmic, compositional verification of control-flow-based safety properties of sequential programs with procedures. The application of the method involves three steps: (1) decomposing the desired global property into local properties of the components, (2) proving the correctness of the property decomposition by using a maximal model construction, and (3) verifying that the component implementations obey their local specifications. We consider safety properties of both the structure and the behaviour of program control flow. Our compositional verification method builds on a technique proposed by Grumberg and Long that uses maximal models to reduce compositional verification of finite-state parallel processes to standard model checking. We present a novel maximal model construction for the fragment of the modal μ-calculus with boxes and greatest fixed points only, and adapt it to control-flow graphs modelling components described in a sequential procedural language. We extend our verification method to programs with private procedures by defining an abstraction, presented as an inlining transformation. All algorithms have been implemented in a tool set automating all required verification steps. We validate our approach on an electronic purse case study.

Journal ArticleDOI
TL;DR: The basic superposition calculus is extended with a decomposition inference rule, which can be used for general first-order theorem proving with any resolution-based calculus compatible with the standard notion of redundancy.
Abstract: We present a decision procedure for the description logic SHIQ based on the basic superposition calculus, and show that it runs in exponential time for unary coding of numbers. To derive our algorithm, we extend basic superposition with a decomposition inference rule, which transforms conclusions of certain inferences into equivalent, but simpler clauses. This rule can be used for general first-order theorem proving with any resolution-based calculus compatible with the standard notion of redundancy.

Journal ArticleDOI
TL;DR: A generalized Paige-Tarjan algorithm (GPT) for computing the minimal refinement of an abstract interpretation-based model that strongly preserves some given language is designed, and it is shown how GPT handles strong preservation of new languages, via an efficient algorithm that computes the coarsest refinement of a given partition that strongly preserves the language generated by the reachability operator.
Abstract: The Paige and Tarjan algorithm (PT) for computing the coarsest refinement of a state partition which is a bisimulation on some Kripke structure is well known. It is also well known in model checking that bisimulation is equivalent to strong preservation of CTL or, equivalently, of Hennessy-Milner logic. Drawing on these observations, we analyze the basic steps of the PT algorithm from an abstract interpretation perspective, which allows us to reason on strong preservation in the context of arbitrary (temporal) languages and of generic abstract models, possibly different from standard state partitions, specified by abstract interpretation. This leads us to design a generalized Paige-Tarjan algorithm, called GPT, for computing the minimal refinement of an abstract interpretation-based model that strongly preserves some given language. It turns out that PT is a straight instance of GPT on the domain of state partitions for the case of strong preservation of Hennessy-Milner logic. We provide a number of examples showing that GPT is of general use. We first show how a well-known efficient algorithm for computing stuttering equivalence can be viewed as a simple instance of GPT. We then instantiate GPT in order to design a new efficient algorithm for computing simulation equivalence that is competitive with the best available algorithms. Finally, we show how GPT allows us to deal with strong preservation of new languages by providing an efficient algorithm that computes the coarsest refinement of a given partition that strongly preserves a language generated by the reachability operator.
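The coarsest-refinement computation that PT and GPT generalize can be illustrated with the naive signature-refinement algorithm for bisimulation; the clever O(m log n) Paige-Tarjan bookkeeping is omitted. The example structure and all names below are our own illustration.

```python
def coarsest_bisimulation(states, succ, label):
    """Naive coarsest-partition refinement: repeatedly split blocks by the
    set of successor blocks until stable.  Quadratic, unlike Paige-Tarjan's
    O(m log n) version, but it computes the same partition.
    """
    block = {s: label(s) for s in states}  # initial partition by label
    while True:
        # Signature: own block plus the set of blocks reachable in one step.
        sig = {s: (block[s], frozenset(block[t] for t in succ(s))) for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values()), key=repr))}
        new_block = {s: ids[sig[s]] for s in states}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block
        block = new_block

# 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3, 3 has no successors; all labels equal.
edges = {0: [1, 2], 1: [3], 2: [3], 3: []}
part = coarsest_bisimulation(range(4), lambda s: edges[s], lambda s: 0)
print(part[1] == part[2], part[1] == part[3])  # 1 ~ 2, but 1 is not ~ 3
```

Each pass can only split blocks, never merge them, so the loop terminates in at most n rounds; the abstract-interpretation view above generalizes exactly this "refine until stable" scheme to domains other than state partitions.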

Journal ArticleDOI
TL;DR: This work shows an attack on the revised GM protocol for any number (n>4) of signers, and argues that the message exchange structure of GM's main protocol is flawed: whatever the trusted party does will result in unfairness for some signer.
Abstract: A multi-party contract signing protocol allows a set of participants to exchange messages with each other with a view to arriving at a state in which each of them has a pre-agreed contract text signed by all the others. Garay and MacKenzie (GM) proposed such a protocol based on private contract signatures, but it was later shown to be flawed by Chadha, Kremer and Scedrov (CKS); CKS also provided a fix to the GM protocol by revising one of its sub-protocols. We show an attack on the revised GM protocol for any number (n>4) of signers. Furthermore, we argue that our attack shows that the message exchange structure of GM's main protocol is flawed: whatever the trusted party does will result in unfairness for some signer. This means that it is impossible to define a trusted party protocol for Garay and MacKenzie's main protocol; we call this "resolve-impossibility". We propose a new optimistic multi-party contract signing protocol, also based on private contract signatures. We present a proof that our protocol satisfies fairness, as well as its formal analysis in the NuSMV model checker for the case of five signers. The protocol requires n(n-1)(⌊n/2⌋+1) messages to be sent in the optimistic execution, which is about half the number of messages required by the state-of-the-art Baum-Waidner and Waidner protocol, and in contrast with Baum-Waidner and Waidner, it does not use a non-standard notion of a signed contract.
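The stated message complexity is easy to evaluate directly; the one-liner below (the function name is ours) computes n(n-1)(⌊n/2⌋+1) for the five-signer case the paper analyses in NuSMV.

```python
def optimistic_messages(n):
    # Messages in the optimistic execution of the proposed protocol:
    # n(n-1)(floor(n/2) + 1).
    return n * (n - 1) * (n // 2 + 1)

print(optimistic_messages(5))  # five signers -> 60 messages
```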

Journal ArticleDOI
TL;DR: The main goal of this paper is to set up a general approach that works for a whole class of monoidal theories containing many of the specific cases that have been considered so far in an ad-hoc way, and to show that the well-defined symbolic constraints generated by reasonable protocols can be solved provided that unification in the monoidal theory satisfies some additional properties.
Abstract: We are interested in the design of automated procedures for analyzing the (in)security of cryptographic protocols in the Dolev-Yao model for a bounded number of sessions when we take into account some algebraic properties satisfied by the operators involved in the protocol. This leads to a more realistic model in comparison to what we get under the perfect cryptography assumption, but it implies that protocol analysis deals with terms modulo some equational theory instead of terms in a free algebra. The main goal of this paper is to set up a general approach that works for a whole class of monoidal theories which contains many of the specific cases that have been considered so far in an ad-hoc way (e.g. exclusive or, Abelian groups, exclusive or in combination with the homomorphism axiom). We follow a classical schema for cryptographic protocol analysis which first proves a locality result and then reduces the insecurity problem to a symbolic constraint solving problem. This approach strongly relies on the correspondence between a monoidal theory E and a semiring S_E which we use to deal with the symbolic constraints. We show that the well-defined symbolic constraints that are generated by reasonable protocols can be solved provided that unification in the monoidal theory satisfies some additional properties. The resolution process boils down to solving particular quadratic Diophantine equations that are reduced to linear Diophantine equations, thanks to linear algebra results and the well-definedness of the problem. Examples of theories that do not satisfy our additional properties appear to be undecidable, which suggests that our characterization is reasonably tight.
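The last reduction step, solving linear Diophantine equations, rests on the extended Euclidean algorithm. A minimal sketch for a single equation a·x + b·y = c (our own illustration, not the paper's resolution procedure):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear_diophantine(a, b, c):
    """One integer solution of a*x + b*y = c, or None if there is none.

    Solvable iff gcd(a, b) divides c; scale the Bezout coefficients.
    """
    g, x, y = ext_gcd(a, b)
    if c % g:
        return None
    return x * (c // g), y * (c // g)

x, y = solve_linear_diophantine(12, 18, 30)
print(x, y, 12 * x + 18 * y)
```

The systems arising from protocol constraints have many variables, but the divisibility criterion and the Bezout construction generalize via the Smith normal form of the coefficient matrix.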

Journal ArticleDOI
TL;DR: The structure of Eilenberg-Moore algebras for the Giry monad for subprobabilities on Polish spaces is investigated in some detail, and the general results are adapted for the discrete Giry monad.
Abstract: In Information and Computation 204 (2006), 1756-1781, the structure of Eilenberg-Moore algebras for the Giry monad for subprobabilities on Polish spaces is investigated in some detail by the present author. This note corrects a gap in one of the proofs. Additionally, it adapts the general results for the discrete Giry monad.

Journal ArticleDOI
TL;DR: It is shown that the reachability of the rotation problem is undecidable on the 3-sphere and other rotation problems can be formulated as matrix problems over complex and hypercomplex numbers.
Abstract: We examine computational problems on quaternion matrix and rotation semigroups. It is shown that in the ultimate case of quaternion matrices, in which multiplication is still associative, most of the decision problems for matrix semigroups are undecidable in dimension two. The geometric interpretation of matrix problems over quaternions is presented in terms of rotation problems for the 2- and 3-sphere. In particular, we show that the reachability of the rotation problem is undecidable on the 3-sphere and other rotation problems can be formulated as matrix problems over complex and hypercomplex numbers.
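The correspondence between unit quaternions and rotations that underlies these (un)decidability results can be made concrete. The sketch below is our own illustration: it rotates a point on the 2-sphere by conjugation, q·p·q̄.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z): associative but not
    commutative, which is what makes these matrix semigroups interesting."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, p):
    """Rotate point p = (x, y, z) by unit quaternion q via q * p * conj(q)."""
    w, x, y, z = qmul(qmul(q, (0.0, *p)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

# Quarter turn about the z-axis: (1, 0, 0) should map to (0, 1, 0).
half = math.pi / 4  # half the rotation angle
q = (math.cos(half), 0.0, 0.0, math.sin(half))
print(tuple(round(c, 6) for c in rotate(q, (1.0, 0.0, 0.0))))
```

Composing rotations is just multiplying the corresponding quaternions, so questions about reachable rotations become questions about quaternion matrix semigroups, the setting of the undecidability results above.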

Journal Article
TL;DR: A discrete third-order ESO (Extended State Observer) is proposed, based on the framework of the continuous ESO, and an analysis of its stability is presented; simulation results show the effectiveness of the theoretical analysis.
Abstract: A discrete third-order ESO (Extended State Observer) is proposed which is based on the framework of the continuous ESO, and an analysis of its stability is presented. When the disturbances do not vary too much, the proposed ESO keeps the estimation errors of the system states and disturbances confined to a small bounded region. Simulation results show the effectiveness of the theoretical analysis.
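A minimal sketch of one Euler-discretized step of a linear third-order ESO for a second-order plant (the gains, step size, and plant below are illustrative assumptions, not the paper's design):

```python
def eso_step(z, y, u, h, b0, betas):
    """One Euler step of a discrete third-order linear ESO.
    z = (z1, z2, z3) are the estimates of the output, its derivative,
    and the total disturbance; y is the measured output, u the control
    input, h the step size, b0 the input gain, betas the observer gains."""
    z1, z2, z3 = z
    b1, b2, b3 = betas
    e = y - z1                                  # output estimation error
    return (z1 + h * (z2 + b1 * e),
            z2 + h * (z3 + b2 * e + b0 * u),
            z3 + h * (b3 * e))                  # disturbance estimate
```

Driving this observer against a double-integrator plant with a constant unknown disturbance, the third state z3 settles near the disturbance value, matching the abstract's claim that slowly varying disturbances are estimated within a small bounded region.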

Journal ArticleDOI
TL;DR: This work defines random sequences with respect to a conditional probability by using a section of the set of random points of the product space and shows that this definition is consistent with Fubini's theorem and equivalent to the relative notion of randomness under a condition.
Abstract: We study a universal Martin-Lof test with respect to a computable probability on a product space. Then, we define random sequences with respect to a conditional probability by using a section of the set of random points of the product space. We show that (1) our definition is consistent with Fubini's theorem, and (2) it is equivalent to the relative notion of randomness under a condition. This is an extension of van Lambalgen's theorem (1987) to a correlated probability.

Journal ArticleDOI
TL;DR: It is shown that calculating the minimum equivalent DNF for a monotone formula is possible in output-polynomial time if and only if P=NP, and that checking whether two formulas are isomorphic has the same complexity for arbitrary formulas as for monotone formulas.
Abstract: We investigate the complexity of finding prime implicants and minimum equivalent DNFs for Boolean formulas, and of testing equivalence and isomorphism of monotone formulas. For DNF-related problems, the complexity of the monotone case differs strongly from the arbitrary case. We show that it is DP-complete to check whether a monomial is a prime implicant for an arbitrary formula, but the equivalent problem for monotone formulas is in L. We show PP-completeness of checking if the minimum size of a DNF for a monotone formula is at most k, and for k in unary, we show that the complexity of the problem drops to coNP. In Christopher Umans [Christopher Umans, The minimum equivalent DNF problem and shortest implicants, Journal of Computer and System Sciences 63 (4) (2001) 597-611] a similar problem for arbitrary formulas was shown to be Σ₂^p-complete. We show that calculating the minimum equivalent DNF for a monotone formula is possible in output-polynomial time if and only if P=NP. Finally, we disprove a conjecture from Steffen Reith [Steffen Reith, On the complexity of some equivalence problems for propositional calculi, in: Proceedings of the 28th International Symposium on Mathematical Foundations of Computer Science (MFCS), vol. 2747, Lecture Notes in Computer Science, Springer, 2003, pp. 632-641] by showing that checking whether two formulas are isomorphic has the same complexity for arbitrary formulas as for monotone formulas.
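The ease of the monotone case rests on a simple observation: for a monotone formula, a positive monomial is an implicant iff the assignment setting exactly its variables to true satisfies the formula, and by monotonicity primality only needs to check single-variable drops. A brute-force sketch of that check (function names are illustrative, and the formula is taken as a Python predicate rather than a syntax tree):

```python
def is_implicant(formula, monomial, variables):
    """For a monotone formula, a positive monomial is an implicant iff the
    assignment that sets exactly its variables to True satisfies it."""
    assign = {v: (v in monomial) for v in variables}
    return formula(assign)

def is_prime_implicant(formula, monomial, variables):
    """Prime iff dropping any single variable destroys the implicant
    property; for monotone formulas this local check suffices, since any
    smaller implicant would make some one-variable drop an implicant too."""
    if not is_implicant(formula, monomial, variables):
        return False
    return all(not is_implicant(formula, monomial - {v}, variables)
               for v in monomial)
```

For the monotone formula (x ∧ y) ∨ z, the monomial {x, y} is prime, {x, y, z} is an implicant but not prime, and {x} is not an implicant at all.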

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the computational complexity of the type checking problem in the latter setting, where both the input and output schema as well as the transformation are part of the input for the problem.
Abstract: Typechecking consists of statically verifying whether the output of an XML transformation always conforms to an output type for documents satisfying a given input type. In this general setting, both the input and output schema as well as the transformation are part of the input for the problem. However, scenarios where the input or output schema can be considered fixed are quite common in practice. In the present work, we investigate the computational complexity of the typechecking problem in the latter setting.

Journal ArticleDOI
TL;DR: A class of hybrid automata is introduced for which the reachability problem can be decided and the problem of deciding whether a hybrid automaton belongs to this class can be again decided using first-order formulae over the reals, and it is shown that the techniques permit effective model checking for a nontrivial fragment of CTL.
Abstract: Hybrid systems are dynamical systems with the ability to describe mixed discrete-continuous evolution of a wide range of systems. Consequently, at first glance, hybrid systems appear powerful but recalcitrant, neither yielding to analysis and reasoning through a purely continuous-time modeling as with systems of differential equations, nor open to inferential processes commonly used for discrete state-transition systems such as finite state automata. A convenient and popular model, called hybrid automata, was introduced to model them and has spurred much interest on its tractability as a tool for inference and model checking in a general setting. Intuitively, a hybrid automaton is simply a "finite-state" automaton with each state augmented by continuous variables, which evolve according to a set of well-defined continuous laws, each specified separately for each state. This article investigates both the notion of hybrid automaton and the model checking problem over such a structure. In particular, it relates first-order theories and analysis results on multivalued maps and reduces the bounded reachability problem for hybrid automata whose continuous laws are expressed by inclusions (x' ∈ f(x, t)) to a decidability problem for first-order formulae over the reals. Furthermore, the paper introduces a class of hybrid automata for which the reachability problem can be decided and shows that the problem of deciding whether a hybrid automaton belongs to this class can again be decided using first-order formulae over the reals. Despite the fact that the bisimulation quotient for this class of hybrid automata can be infinite, we show that our techniques permit effective model checking for a nontrivial fragment of CTL.

Journal ArticleDOI
TL;DR: It is proved that in the case of a finite alphabet with at least two actions, failure semantics affords a finite basis, while for ready simulation, completed simulation, simulation, possible worlds, ready trace, failure trace and ready semantics, such a finite basis does not exist.
Abstract: Van Glabbeek presented the linear time-branching time spectrum of behavioral semantics. He studied these semantics in the setting of the basic process algebra BCCSP, and gave finite, sound and ground-complete axiomatizations for most of these semantics. Groote proved for some of van Glabbeek's axiomatizations that they are ω-complete, meaning that an equation can be derived if (and only if) all of its closed instantiations can be derived. In this paper, we settle the remaining open questions for all the semantics in the linear time-branching time spectrum, either positively by giving a finite sound and ground-complete axiomatization that is ω-complete, or negatively by proving that such a finite basis for the equational theory does not exist. We prove that in the case of a finite alphabet with at least two actions, failure semantics affords a finite basis, while for ready simulation, completed simulation, simulation, possible worlds, ready trace, failure trace and ready semantics, such a finite basis does not exist. Completed simulation semantics also lacks a finite basis in the case of an infinite alphabet of actions.

Journal ArticleDOI
TL;DR: This work designs optimal offline and online algorithms for two uniformly related machines, both when the machine of higher hierarchy is faster and when it is slower, as well as for the case of three identical machines.
Abstract: We consider preemptive offline and online scheduling on identical machines and uniformly related machines in the hierarchical model, with the goal of minimizing the makespan. In this model, each job can be assigned to a subset of the machines which is a prefix of the machine set. We design optimal offline and online algorithms for two uniformly related machines, both when the machine of higher hierarchy is faster and when it is slower, as well as for the case of three identical machines. Specifically, for each one of the three variants, we give a simple formula to compute the makespan of an optimal schedule, provide a linear time offline algorithm which computes an optimal schedule and design an online algorithm of the best possible competitive ratio.
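For contrast with the hierarchical formulas derived in the paper (which are not reproduced here), the classical non-hierarchical case already admits a simple closed form: on two uniformly related machines, the optimal preemptive makespan is the larger of total work over total speed and largest job over fastest speed. A sketch of that classical formula, with illustrative naming:

```python
def optimal_preemptive_makespan(jobs, s1, s2):
    """Classical optimal preemptive makespan on two uniformly related
    machines with speeds s1 and s2 (non-hierarchical case): the bound
    max(total work / total speed, largest job / fastest speed) is tight."""
    fast, slow = max(s1, s2), min(s1, s2)
    return max(sum(jobs) / (fast + slow), max(jobs) / fast)
```

For jobs of sizes 10, 1, 1 on machines of speeds 2 and 1, the total-work bound gives 12/3 = 4, but the largest job needs 10/2 = 5 even on the fast machine, so the optimal makespan is 5.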

Journal ArticleDOI
TL;DR: This article uses information-theoretic concepts to investigate the refinement of a probabilistic, entropy-based information flow property, and considers the abstract and concrete models as views on the same stochastic process.
Abstract: Information flow properties, which describe confidentiality requirements, are not generally preserved under behavior refinement. This article describes a formal framework for refinement relations between nondeterministic probabilistic processes that capture sufficient conditions to preserve information flow properties. In particular, it uses information-theoretic concepts to investigate the refinement of a probabilistic, entropy-based information flow property. The refinement relation considers the abstract and concrete models as views on the same stochastic process. Probabilistic CSP provides the semantic basis for this investigation.
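A hedged sketch of the kind of entropy-based quantity involved: modelling the observable behaviour as a channel from secret inputs to outputs, the mutual information I(X;Y) measures how much the outputs reveal about the secrets. The representation below (lists for distributions, nested lists for the channel matrix) is an illustrative choice, not the article's formalism:

```python
from math import log2

def mutual_information(prior, channel):
    """I(X;Y) in bits for an input distribution `prior` and a
    row-stochastic `channel` with channel[x][y] = P(Y=y | X=x)."""
    # marginal output distribution P(Y=y)
    py = [sum(prior[x] * channel[x][y] for x in range(len(prior)))
          for y in range(len(channel[0]))]
    mi = 0.0
    for x, px in enumerate(prior):
        for y, pyx in enumerate(channel[x]):
            joint = px * pyx                     # P(X=x, Y=y)
            if joint > 0:
                mi += joint * log2(joint / (px * py[y]))
    return mi
```

A noiseless channel over two equiprobable secrets leaks the full 1 bit, while a channel whose rows are identical leaks nothing; refinement relations of the kind described above aim to guarantee that the concrete process leaks no more than the abstract one.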