Showing papers in "Logical Methods in Computer Science in 2014"
••
TL;DR: A framework is developed for studying languages over infinite alphabets and the ensuing automata theory, in which the key role is played by an automorphism group of the alphabet.
Abstract: We study languages over infinite alphabets equipped with some structure that
can be tested by recognizing automata. We develop a framework for studying such
alphabets and the ensuing automata theory, where the key role is played by an
automorphism group of the alphabet. In the process, we generalize nominal sets
due to Gabbay and Pitts.
120 citations
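As a toy illustration of automata over infinite alphabets: a one-register acceptor recognises words whose first letter reappears later, and membership only compares letters for equality, so it is invariant under every permutation (automorphism) of the alphabet, the symmetry the framework builds on. A minimal hypothetical sketch, not the paper's formal definitions:

```python
# Sketch: a language over an infinite alphabet recognisable with one
# register. Membership depends only on equality of letters, hence it is
# preserved by any bijective renaming of the alphabet (equivariance).
# Toy example, not the paper's formalism.

def first_letter_repeats(word):
    """Accept iff the first letter occurs again later in the word."""
    return len(word) > 1 and word[0] in word[1:]

# Equivariance check: renaming letters by a bijection preserves membership.
rename = {3: 100, 7: 200, 8: 300}
w = [3, 7, 3]
print(first_letter_repeats(w), first_letter_repeats([rename[x] for x in w]))
```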
•
TL;DR: In this paper, the authors consider MDPs with multiple limit-average (or mean-payoff) functions and show that both randomization and memory are necessary for strategies, and that finite-memory randomized strategies are sufficient.
Abstract: We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k reward functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the single-objective case, both randomization and memory are necessary for strategies, and that finite-memory randomized strategies are sufficient. Under the satisfaction objective, in contrast to the single-objective case, infinite memory is necessary for strategies, whereas randomized memoryless strategies are sufficient for epsilon-approximation, for all epsilon>0. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be epsilon-approximated in time polynomial in the size of the MDP and 1/epsilon, and exponential in the number of reward functions, for all epsilon>0. Our results also reveal flaws in previous work for MDPs with multiple mean-payoff functions under the expectation objective, correct the flaws and obtain improved results.
81 citations
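For a single limit-average objective, a memoryless strategy induces a Markov chain, and on an ergodic chain the limit-average reward equals the stationary-weighted reward. A minimal sketch of that baseline on a hypothetical toy chain (not the paper's multi-objective algorithms):

```python
# Sketch: expected mean-payoff of a memoryless strategy in an MDP.
# The strategy induces a Markov chain; for an ergodic chain the
# limit-average reward is the reward weighted by the stationary
# distribution. Toy numbers, assuming ergodicity.

def stationary(P, iters=1000):
    """Power iteration for the stationary distribution of an ergodic chain."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def mean_payoff(P, r):
    """Expected limit-average reward of the induced chain."""
    pi = stationary(P)
    return sum(p * ri for p, ri in zip(pi, r))

# Two-state chain with symmetric switching and rewards 0 and 1.
P = [[0.5, 0.5], [0.5, 0.5]]
print(mean_payoff(P, [0.0, 1.0]))  # stationary distribution (0.5, 0.5) -> 0.5
```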
••
TL;DR: In this article, the authors study the problem of evaluating a Boolean conjunctive query Q against a guarded first-order theory F, which is equivalent to checking whether "F and not Q" is unsatisfiable.
Abstract: Evaluating a Boolean conjunctive query Q against a guarded first-order theory
F is equivalent to checking whether "F and not Q" is unsatisfiable. This
problem is relevant to the areas of database theory and description logic.
Since Q may not be guarded, well known results about the decidability,
complexity, and finite-model property of the guarded fragment do not obviously
carry over to conjunctive query answering over guarded theories, and had been
left open in general. By investigating finite guarded bisimilar covers of
hypergraphs and relational structures, and by substantially generalising
Rosati's finite chase, we prove for guarded theories F and (unions of)
conjunctive queries Q that (i) Q is true in each model of F iff Q is true in
each finite model of F and (ii) determining whether F implies Q is
2EXPTIME-complete. We further show the following results: (iii) the existence
of polynomial-size conformal covers of arbitrary hypergraphs; (iv) a new proof
of the finite model property of the clique-guarded fragment; (v) the small
model property of the guarded fragment with optimal bounds; (vi) a
polynomial-time solution to the canonisation problem modulo guarded
bisimulation, which yields (vii) a capturing result for guarded bisimulation
invariant PTIME.
47 citations
••
TL;DR: In this paper, the authors study the almost-sure model-checking problem for stochastic timed automata, that is, given a timed automaton A and a property ϕ, they want to decide whether A satisfies ϕ with probability 1.
Abstract: A stochastic timed automaton is a purely stochastic process defined on a timed automaton, in which both delays and discrete choices are made randomly. We study the almost-sure model-checking problem for this model, that is, given a stochastic timed automaton A and a property ϕ, we want to decide whether A satisfies ϕ with probability 1. In this paper, we identify several classes of automata and of properties for which this can be decided. The proof relies on the construction of a finite abstraction, called the thick graph, that we interpret as a finite Markov chain, and for which we can decide the almost-sure model-checking problem. Correctness of the abstraction holds when automata are almost-surely fair, which, as we show, is the case for two large classes of systems, single-clock automata and so-called weak-reactive automata. Techniques employed in this article gather tools from real-time verification and probabilistic verification, as well as topological games played on timed automata.
40 citations
••
TL;DR: In this paper, the authors consider two-player games played on finite graphs equipped with costs on edges and introduce two winning conditions, cost-parity and cost-Streett, which require bounds on the cost between requests and their responses.
Abstract: We consider two-player games played on finite graphs equipped with costs on edges and introduce two winning conditions, cost-parity and cost-Streett, which require bounds on the cost between requests and their responses. Both conditions generalize the corresponding classical omega-regular conditions and the corresponding finitary conditions. For parity games with costs we show that the first player has positional winning strategies and that determining the winner lies in NP and coNP. For Streett games with costs we show that the first player has finite-state winning strategies and that determining the winner is EXPTIME-complete. The second player might need infinite memory in both games. Both types of games with costs can be solved by solving linearly many instances of their classical variants.
37 citations
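The basic step in solving such games is the attractor computation: the set of vertices from which one player can force a visit to a target set. A minimal sketch on a hypothetical toy graph (the cost conditions themselves need considerably more machinery):

```python
# Sketch: the player-0 attractor of a target set in a finite game graph,
# the elementary step behind solving (cost-)parity games.
# Hypothetical toy graph, not from the paper.

def attractor(vertices, v0, succ, target):
    """Vertices from which player 0 can force a visit to `target`.
    v0: set of player-0 vertices; succ: vertex -> list of successors."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            s = succ.get(v, [])
            # player 0 needs one successor in attr; player 1 needs all
            if (v in v0 and any(u in attr for u in s)) or \
               (v not in v0 and s and all(u in attr for u in s)):
                attr.add(v)
                changed = True
    return attr

succ = {1: [2], 2: [3, 1], 3: [3]}
print(attractor({1, 2, 3}, {2}, succ, {3}))  # {1, 2, 3}
```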
••
TL;DR: In this article, a trace-by-trace approach is proposed that compares execution probabilities of single traces instead of entire trace distributions; the resulting equivalences turn out to be compositional and fully backward compatible with testing equivalences for restricted classes of processes.
Abstract: Two of the most studied extensions of trace and testing equivalences to
nondeterministic and probabilistic processes induce distinctions that have been
questioned and lack properties that are desirable. Probabilistic
trace-distribution equivalence differentiates systems that can perform the same
set of traces with the same probabilities, and is not a congruence for parallel
composition. Probabilistic testing equivalence, which relies only on extremal
success probabilities, is backward compatible with testing equivalences for
restricted classes of processes, such as fully nondeterministic processes or
generative/reactive probabilistic processes, only if specific sets of tests are
admitted. In this paper, new versions of probabilistic trace and testing
equivalences are presented for the general class of nondeterministic and
probabilistic processes. The new trace equivalence is coarser because it
compares execution probabilities of single traces instead of entire trace
distributions, and turns out to be compositional. The new testing equivalence
requires matching all resolutions of nondeterminism on the basis of their
success probabilities, rather than comparing only extremal success
probabilities, and considers success probabilities in a trace-by-trace fashion,
rather than cumulatively on entire resolutions. It is fully backward compatible
with testing equivalences for restricted classes of processes; as a
consequence, the trace-by-trace approach uniformly captures the standard
probabilistic testing equivalences for generative and reactive probabilistic
processes. The paper discusses in full detail the new equivalences and
provides a simple spectrum that relates them with existing ones in the setting
of nondeterministic and probabilistic processes.
34 citations
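The trace-by-trace idea can be illustrated on fully probabilistic processes: two processes are compared by the execution probability of each single trace, rather than by whole trace distributions. A minimal sketch with a hypothetical encoding (the paper's setting, with nondeterminism and resolutions, is richer):

```python
# Sketch: trace-by-trace comparison of two fully probabilistic processes.
# Each process maps a state to a list of (action, next_state, probability)
# triples; two processes agree iff every single trace has the same
# execution probability in both. Toy processes, hypothetical encoding.

def trace_probs(proc, state, depth, prefix=(), p=1.0, acc=None):
    """Probability of executing each trace of length <= depth."""
    if acc is None:
        acc = {}
    acc[prefix] = acc.get(prefix, 0.0) + p
    if depth:
        for a, t, q in proc.get(state, []):
            trace_probs(proc, t, depth - 1, prefix + (a,), p * q, acc)
    return acc

p1 = {0: [("a", 1, 0.5), ("a", 2, 0.5)], 1: [("b", 3, 1.0)], 2: [("c", 3, 1.0)]}
p2 = {0: [("a", 1, 1.0)], 1: [("b", 3, 0.5), ("c", 3, 0.5)]}
print(trace_probs(p1, 0, 2) == trace_probs(p2, 0, 2))  # True
```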
••
TL;DR: In this article, an effect system for core Eff, an ML-style programming language with first-class algebraic effects and handlers, is presented; the effect system, the operational semantics, and the safety theorem are formalized in Twelf.
Abstract: We present an effect system for core Eff, a simplified variant of Eff, which
is an ML-style programming language with first-class algebraic effects and
handlers. We define an expressive effect system and prove safety of operational
semantics with respect to it. Then we give a domain-theoretic denotational
semantics of core Eff, using Pitts's theory of minimal invariant relations, and
prove it adequate. We use this fact to develop tools for finding useful
contextual equivalences, including an induction principle. To demonstrate their
usefulness, we use these tools to derive the usual equations for mutable state,
including a general commutativity law for computations using non-interfering
references. We have formalized the effect system, the operational semantics,
and the safety theorem in Twelf.
32 citations
••
TL;DR: This work studies the expressiveness of CTL with quantification over atomic propositions, shows in particular that QCTL coincides with Monadic Second-Order Logic for both semantics, and characterises the complexity of its model-checking and satisfiability problems.
Abstract: While it was defined long ago, the extension of CTL with quantification over
atomic propositions has never been studied extensively. Considering two
different semantics (depending whether propositional quantification refers to
the Kripke structure or to its unwinding tree), we study its expressiveness
(showing in particular that QCTL coincides with Monadic Second-Order Logic for
both semantics) and characterise the complexity of its model-checking and
satisfiability problems, depending on the number of nested propositional
quantifiers (showing that the structure semantics populates the polynomial
hierarchy while the tree semantics populates the exponential hierarchy).
31 citations
••
TL;DR: In this article, the authors present a comparative experiment on four representative constructions that are considered the most efficient in each approach and propose several optimization heuristics to improve the Safra-Piterman and slice-based constructions.
Abstract: Complementation of Büchi automata has been studied for over five decades
since the formalism was introduced in 1960. Known complementation constructions
can be classified into Ramsey-based, determinization-based, rank-based, and
slice-based approaches. Regarding the performance of these approaches, there
have been several complexity analyses but very few experimental results. What
especially lacks is a comparative experiment on all of the four approaches to
see how they perform in practice. In this paper, we review the four approaches,
propose several optimization heuristics, and perform comparative
experimentation on four representative constructions that are considered the
most efficient in each approach. The experimental results show that (1) the
determinization-based Safra-Piterman construction outperforms the other three
in producing smaller complements and finishing more tasks in the allocated time
and (2) the proposed heuristics substantially improve the Safra-Piterman and
the slice-based constructions.
28 citations
••
TL;DR: This paper introduces directed containers to capture the common situation where every position in a data-structure determines another data-structure, informally, the sub-data-structure rooted by that position.
Abstract: Abbott, Altenkirch, Ghani and others have taught us that many parameterized
datatypes (set functors) can be usefully analyzed via container representations
in terms of a set of shapes and a set of positions in each shape. This paper
builds on the observation that datatypes often carry additional structure that
containers alone do not account for. We introduce directed containers to
capture the common situation where every position in a data-structure
determines another data-structure, informally, the sub-data-structure rooted by
that position. Some natural examples are non-empty lists and node-labelled
trees, and data-structures with a designated position (zippers). While
containers denote set functors via a fully-faithful functor, directed
containers interpret fully-faithfully into comonads. But more is true: every
comonad whose underlying functor is a container is represented by a directed
container. In fact, directed containers are the same as containers that are
comonads. We also describe some constructions of directed containers. We have
formalized our development in the dependently typed programming language Agda.
••
TL;DR: Polygraphs as discussed by the authors are a higher-dimensional generalization of the notion of presentation, from the setting of monoids to the much more general setting of n-categories, allowing computations on those and their manipulation by a computer.
Abstract: String rewriting systems have proved very useful to study monoids. In good
cases, they give finite presentations of monoids, allowing computations on those and
their manipulation by a computer. Even better, when the presentation is confluent and
terminating, they provide one with a notion of canonical representative of the elements of
the presented monoid. Polygraphs are a higher-dimensional generalization of this notion of
presentation, from the setting of monoids to the much more general setting of n-categories.
One of the main purposes of this article is to give a progressive introduction to the notion
of higher-dimensional rewriting system provided by polygraphs, and describe its links with
classical rewriting theory, string and term rewriting systems in particular. After introducing
the general setting, we will be interested in proving local confluence for polygraphs presenting
2-categories and introduce a framework in which a finite 3-dimensional rewriting system
admits a finite number of critical pairs.
••
TL;DR: In this article, the authors consider three objectives: expected time, long-run average, and timed (interval) reachability, and report on several case studies conducted using a prototypical tool implementation of the algorithms.
Abstract: Markov automata (MAs) extend labelled transition systems with random delays and probabilistic branching. Action-labelled transitions are instantaneous and yield a distribution over states, whereas timed transitions impose a random delay governed by an exponential distribution. MAs are thus a nondeterministic variation of continuous-time Markov chains. MAs are compositional and are used to provide a semantics for engineering frameworks such as (dynamic) fault trees, (generalised) stochastic Petri nets, and the Architecture Analysis & Design Language (AADL). This paper considers the quantitative analysis of MAs. We consider three objectives: expected time, long-run average, and timed (interval) reachability. Expected time objectives focus on determining the minimal (or maximal) expected time to reach a set of states. Long-run objectives determine the fraction of time to be in a set of states when considering an infinite time horizon. Timed reachability objectives are about computing the probability to reach a set of states within a given time interval. This paper presents the foundations and details of the algorithms and their correctness proofs. We report on several case studies conducted using a prototypical tool implementation of the algorithms, driven by the MAPA modelling language for efficiently generating MAs.
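For the Markovian fragment, the expected-time objective satisfies a simple fixed-point equation, E(s) = 1/r(s) + sum over s' of P(s,s')·E(s'), where r(s) is the exit rate. A minimal sketch on a toy chain with hypothetical rates (the paper's algorithms handle nondeterminism and full MAs):

```python
# Sketch: expected time to reach a goal state in a small CTMC
# (the Markovian fragment of a Markov automaton), by fixed-point
# iteration on E(s) = 1/r(s) + sum_s' P(s, s') * E(s').
# Toy chain, hypothetical rates; not the paper's algorithms.

def expected_time(rates, P, goal, iters=200):
    """rates: exit rate per state; P: state -> {successor: probability}."""
    states = list(rates)
    E = {s: 0.0 for s in states}
    for _ in range(iters):
        E = {s: 0.0 if s in goal
             else 1.0 / rates[s] + sum(p * E[t] for t, p in P[s].items())
             for s in states}
    return E

rates = {"a": 2.0, "b": 1.0, "g": 1.0}
P = {"a": {"b": 0.5, "g": 0.5}, "b": {"g": 1.0}, "g": {}}
print(expected_time(rates, P, {"g"})["a"])  # 0.5 + 0.5 * 1.0 = 1.0
```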
••
TL;DR: This work investigates the phenomenon that every monad is a linear state monad by studying a fully-complete state-passing translation from an impure call-by-value language to a new linear type theory: the enriched call- by-value calculus.
Abstract: We investigate the phenomenon that every monad is a linear state monad. We do this by studying a fully-complete state-passing translation from an impure call-by-value language to a new linear type theory: the enriched call-by-value calculus. The results are not specific to store, but can be applied to any computational effect expressible using
••
TL;DR: Priority channel systems as discussed by the authors are a new class of channel systems where messages carry a numeric priority and higher-priority messages can supersede lower-priority messages preceding them in the communication buffers.
Abstract: We introduce Priority Channel Systems, a new class of channel systems where
messages carry a numeric priority and where higher-priority messages can
supersede lower-priority messages preceding them in the fifo communication
buffers. The decidability of safety and inevitability properties is shown via
the introduction of a priority embedding, a well-quasi-ordering that has not
previously been used in well-structured systems. We then show how Priority
Channel Systems can compute Fast-Growing functions and prove that the
aforementioned verification problems are
$\mathbf{F}_{\varepsilon_{0}}$-complete.
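The superseding step can be sketched as a toy channel in which sending a message eagerly discards queued messages of strictly lower priority. The paper's semantics is nondeterministic (superseding may or may not happen); this is a hypothetical deterministic variant for illustration only:

```python
# Sketch: a FIFO channel where a newly sent higher-priority message
# supersedes queued lower-priority messages ahead of it.
# Eager, deterministic toy model of the superseding step; the paper's
# semantics lets superseding happen nondeterministically.

from collections import deque

class PriorityChannel:
    def __init__(self):
        self.buf = deque()

    def send(self, msg, prio):
        # superseding: drop queued messages of strictly lower priority
        self.buf = deque((m, p) for m, p in self.buf if p >= prio)
        self.buf.append((msg, prio))

    def receive(self):
        return self.buf.popleft()[0]

ch = PriorityChannel()
ch.send("log", 0)
ch.send("data", 1)
ch.send("alarm", 2)   # supersedes both lower-priority messages
print(ch.receive())   # 'alarm'
```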
••
TL;DR: In this paper, a simple proof of Kamp's theorem is provided.
Abstract: We provide a simple proof of Kamp's theorem.
••
TL;DR: In this article, a bisimulation theory based on multiparty session types is proposed, where a choreography specification governs the behaviour of session typed processes and their observer, and the observer cooperates with the observed process in order to form complete global session scenarios.
Abstract: This paper proposes a bisimulation theory based on multiparty session types
where a choreography specification governs the behaviour of session typed
processes and their observer. The bisimulation is defined with the observer
cooperating with the observed process in order to form complete global session
scenarios, and is usable for proving correctness of optimisations for globally
coordinating threads and processes. The induced bisimulation is strictly more
fine-grained than the standard session bisimulation. The difference between the
governed and standard bisimulations only appears when more than two interleaved
multiparty sessions exist. This distinct feature enables reasoning about real
scenarios in large-scale distributed systems where multiple choreographic
sessions need to be interleaved. The compositionality of the governed
bisimilarity is proved through the soundness and completeness with respect to
the governed reduction-based congruence. Finally, its usage is demonstrated by
a thread transformation governed under multiple sessions in a real use case in
large-scale cyberinfrastructure.
••
TL;DR: In this article, the computational complexity of the solution of an ordinary differential equation is investigated under various smoothness assumptions on the defining function, showing that the solution can still be PSPACE-hard when the function is of class C^1, and hard for the counting hierarchy when it is of class C^k, for k>=2.
Abstract: The computational complexity of the solutions $h$ to the ordinary
differential equation $h(0)=0$, $h'(t) = g(t, h(t))$ under various assumptions
on the function $g$ has been investigated. Kawamura showed in 2010 that the
solution $h$ can be PSPACE-hard even if $g$ is assumed to be Lipschitz
continuous and polynomial-time computable. We place further requirements on the
smoothness of $g$ and obtain the following results: the solution $h$ can still
be PSPACE-hard if $g$ is assumed to be of class $C^1$; for each $k\ge2$, the
solution $h$ can be hard for the counting hierarchy even if $g$ is of class
$C^k$.
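The object of study is the map from $g$ to the solution $h$ of $h(0)=0$, $h'(t)=g(t,h(t))$; numerically such a solution is approximated, for instance, by Euler's method, sketched below with a hypothetical right-hand side (the paper's results concern exact complexity bounds, not this scheme):

```python
# Sketch: approximating the solution of h' = g(t, h), h(0) = 0 by
# Euler's method. Hypothetical right-hand side; illustration only,
# unrelated to the paper's hardness constructions.

def euler(g, T, n):
    """Euler approximation of h(T) with n steps."""
    h, t, dt = 0.0, 0.0, T / n
    for _ in range(n):
        h += dt * g(t, h)
        t += dt
    return h

# g(t, h) = 1 gives h(t) = t exactly, a sanity check.
print(euler(lambda t, h: 1.0, 1.0, 1000))  # ~1.0
```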
••
TL;DR: In this article, the authors define the dual set of initial configurations from which a non-terminating execution exists, as the greatest fixpoint of the function that maps a set of states into its pre-image with respect to the transition relation.
Abstract: We address the problem of conditional termination, which is that of defining the set of initial configurations from which a given program always terminates. First we define the dual set of initial configurations from which a non-terminating execution exists, as the greatest fixpoint of the function that maps a set of states into its pre-image with respect to the transition relation. This definition allows computing the weakest non-termination precondition if at least one of the following holds: (i) the transition relation is deterministic, (ii) the descending Kleene sequence over-approximating the greatest fixpoint converges in finitely many steps, or (iii) the transition relation is well founded. We show that this is the case for two classes of relations, namely octagonal and finite monoid affine relations. Moreover, since the closed forms of these relations can be defined in Presburger arithmetic, we obtain the decidability of the termination problem for such loops. We show that the weakest non-termination precondition for octagonal relations can be computed in time polynomial in the size of the binary representation of the relation. Furthermore, for every well-founded octagonal relation, we prove the existence of an effectively computable well-founded witness relation for which a linear ranking function exists. For the class of linear affine relations we show that the weakest non-termination precondition can be defined in Presburger arithmetic if the relation has the finite monoid property. Otherwise, for a more general subclass, called polynomially bounded affine relations, we give a method of under-approximating the termination preconditions. Finally, we apply the method of computing weakest non-termination preconditions for conjunctive relations (octagonal or affine) to computing termination preconditions for programs with complex transition relations.
We provide algorithms for computing transition invariants and termination preconditions, and define a class of programs, whose control structure has no nested loops, for which these algorithms provide precise results. Moreover, it is shown that, for programs with no nested control loops, and whose loops are labeled with octagonal constraints, the dual problem, i.e., the existence of infinite runs, is NP-complete.
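On a finite transition relation the greatest fixpoint of the pre-image operator is directly computable: the descending Kleene sequence stabilises after finitely many steps. A minimal sketch with a hypothetical toy relation:

```python
# Sketch: weakest non-termination precondition of a finite transition
# relation, as the greatest fixpoint gfp X. pre(X) computed by the
# descending Kleene sequence. Toy relation, hypothetical example.

def nonterm_states(states, rel):
    """States from which some infinite run exists."""
    X = set(states)
    while True:
        # pre-image: states with at least one successor inside X
        newX = {s for s in X if any((s, t) in rel and t in X for t in states)}
        if newX == X:
            return X
        X = newX

states = {0, 1, 2}
rel = {(0, 1), (1, 0), (1, 2)}   # state 2 has no successor: it terminates
print(nonterm_states(states, rel))  # {0, 1}
```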
••
TL;DR: In this paper, it was shown that for every non-integral rational discount factor λ, there is a nondeterminizable λ-NDA, and the class of NDAs with integral discount factors enjoys closure under the algebraic operations min, max, addition and subtraction.
Abstract: A discounted-sum automaton (NDA) is a nondeterministic finite automaton with edge weights, valuing a run by the discounted sum of visited edge weights. More precisely, the weight in the i-th position of the run is divided by λ^i, where the discount factor λ is a fixed rational number greater than 1. The value of a word is the minimal value of the automaton runs on it. Discounted summation is a common and useful measuring scheme, especially for infinite sequences, reflecting the assumption that earlier weights are more important than later weights. Unfortunately, determinization of NDAs, which is often essential in formal verification, is, in general, not possible. We provide positive news, showing that every NDA with an integral discount factor is determinizable. We complete the picture by proving that the integers characterize exactly the discount factors that guarantee determinizability: for every nonintegral rational discount factor λ, there is a nondeterminizable λ-NDA. We also prove that the class of NDAs with integral discount factors enjoys closure under the algebraic operations min, max, addition, and subtraction, which is not the case for general NDAs nor for deterministic NDAs. For general NDAs, we look into approximate determinization, which is always possible as the influence of a word's suffix decays. We show that the naive approach, of unfolding the automaton computations up to a sufficient level, is doubly exponential in the discount factor. We provide an alternative construction for approximate determinization, which is singly exponential in the discount factor, in the precision, and in the number of states. We also prove matching lower bounds, showing that the exponential dependency on each of these three parameters cannot be avoided. All our results hold equally for automata over finite words and for automata over infinite words.
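The value of a finite word under discounted summation can be computed by dynamic programming over reachable states, taking the minimum over runs. A minimal sketch on a hypothetical toy automaton (here the weight at position i is divided by λ^i, starting the index at 0):

```python
# Sketch: value of a finite word in a discounted-sum automaton,
# minimising over runs; the weight at position i is divided by lam**i.
# Toy automaton, hypothetical encoding; not from the paper.

def nda_value(init, delta, word, lam):
    """delta: dict (state, letter) -> list of (next_state, weight)."""
    INF = float("inf")
    vals = {init: 0.0}          # best discounted sum to reach each state
    for i, a in enumerate(word):
        nxt = {}
        for q, v in vals.items():
            for q2, w in delta.get((q, a), []):
                cand = v + w / lam ** i
                if cand < nxt.get(q2, INF):
                    nxt[q2] = cand
        vals = nxt
    return min(vals.values())

delta = {("q", "a"): [("q", 1.0), ("r", 2.0)], ("r", "a"): [("r", 0.0)]}
print(nda_value("q", delta, "aa", 2.0))  # 1.5: weights 1 and 1, 1 + 1/2
```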
••
TL;DR: Session automata, introduced in this paper, are an automata model to process data words over an infinite alphabet; they are well suited for modeling protocols in which sessions using fresh values are of major interest, like in security protocols or ad-hoc networks.
Abstract: We introduce session automata, an automata model to process data words, i.e., words over an infinite alphabet. Session automata support the notion of fresh data values, which are well suited for modeling protocols in which sessions using fresh values are of major interest, like in security protocols or ad-hoc networks. Session automata have an expressiveness partly extending, partly reducing that of classical register automata. We show that, unlike register automata and their various extensions, session automata are robust: They (i) are closed under intersection, union, and (resource-sensitive) complementation, (ii) admit a symbolic regular representation, (iii) have a decidable inclusion problem (unlike register automata), and (iv) enjoy logical characterizations. Using these results, we establish a learning algorithm to infer session automata through membership and equivalence queries.
••
TL;DR: In this article, a method of improving proof readability based on Behaghel's First Law of sentence structure is proposed, which maximizes the number of local references to the directly preceding statement in a proof linearisation.
Abstract: In formal proof checking environments such as Mizar it is not merely the
validity of mathematical formulas that is evaluated in the process of adoption
to the body of accepted formalizations, but also the readability of the proofs
that witness validity. As in case of computer programs, such proof scripts may
sometimes be more and sometimes be less readable. To better understand the
notion of readability of formal proofs, and to assess and improve their
readability, we propose in this paper a method of improving proof readability
based on Behaghel's First Law of sentence structure. Our method maximizes the
number of local references to the directly preceding statement in a proof
linearisation. It is shown that the underlying optimization problem is NP-complete.
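The optimised quantity can be sketched directly: score a proof linearisation by how many steps refer to the directly preceding statement. Hypothetical labels and encoding, not the Mizar implementation:

```python
# Sketch: scoring a proof linearisation by the number of steps that
# reference the directly preceding statement, the quantity the method
# maximises. Hypothetical proof encoding: (label, referenced labels).

def local_reference_score(steps):
    """steps: list of (label, set_of_referenced_labels)."""
    score = 0
    for i in range(1, len(steps)):
        prev_label = steps[i - 1][0]
        if prev_label in steps[i][1]:
            score += 1
    return score

proof = [("s1", set()), ("s2", {"s1"}), ("s3", {"s1"}), ("s4", {"s3"})]
print(local_reference_score(proof))  # s2->s1 and s4->s3 are local: 2
```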
••
TL;DR: In this article, four canonical languages are proposed, based on each of the possible choices: call-by-name versus call-by-value, algebraic equality versus algebraic rewriting; the required properties for each proof are made general enough to be valid for any sub-language satisfying the corresponding properties.
Abstract: We examine the relationship between the algebraic lambda-calculus, a fragment of the differential lambda-calculus and the linear-algebraic lambda-calculus, a candidate lambda-calculus for quantum computation. Both calculi are algebraic: each one is equipped with an additive and a scalar-multiplicative structure, and their set of terms is closed under linear combinations. However, the two languages were built using different approaches: the former is a call-by-name language whereas the latter is call-by-value; the former considers algebraic equalities whereas the latter approaches them through rewrite rules. In this paper, we analyse how these different approaches relate to one another. To this end, we propose four canonical languages based on each of the possible choices: call-by-name versus call-by-value, algebraic equality versus algebraic rewriting. We show that the various languages simulate one another. Due to subtle interaction between beta-reduction and algebraic rewriting, to make the languages consistent some additional hypotheses such as confluence or normalisation might be required. We carefully devise the required properties for each proof, making them general enough to be valid for any sub-language satisfying the corresponding properties.
••
TL;DR: In this article, a complete polymorphic effect inference algorithm is presented for an ML-style language with handlers of not only exceptions, but of any other algebraic effect such as input & output, mutable references and many others.
Abstract: We present a complete polymorphic effect inference algorithm for an ML-style
language with handlers of not only exceptions, but of any other algebraic
effect such as input & output, mutable references and many others. Our main aim
is to offer the programmer a useful insight into the effectful behaviour of
programs. Handlers help here by cutting down possible effects and the resulting
lengthy output that often plagues precise effect systems. Additionally, we
present a set of methods that further simplify the displayed types, some even
by deliberately hiding inferred information from the programmer.
••
TL;DR: The unification allows some patterns to be more discriminating than others; hence, the behavioural theory must take this aspect into account, so that bisimulation becomes subject to compatibility of patterns.
Abstract: Concurrent pattern calculus (CPC) drives interaction between processes by comparing data structures, just as sequential pattern calculus drives computation. By generalising from pattern matching to pattern unification, interaction becomes symmetrical, with information flowing in both directions. CPC provides a natural language to express trade where information exchange is pivotal to interaction. The unification allows some patterns to be more discriminating than others; hence, the behavioural theory must take this aspect into account, so that bisimulation becomes subject to compatibility of patterns. Many popular process calculi can be encoded in CPC; this allows for a gain in expressiveness, formalised through encodings.
••
TL;DR: In this paper, the authors introduce a logical foundation to reason on tree structures with numerical constraints on the number of node occurrences, and prove that the logic is decidable in single exponential time even if the numerical constraints are in binary form.
Abstract: We introduce a logical foundation to reason about tree structures with
constraints on the number of node occurrences. Related formalisms are limited
to expressing occurrence constraints on particular tree regions, for instance
the children of a given node. By contrast, the logic introduced in the present
work can concisely express numerical bounds on any region, descendants or
ancestors for instance. We prove that the logic is decidable in single
exponential time even if the numerical constraints are in binary form. We also
illustrate the usage of the logic in the description of numerical constraints
on multi-directional path queries on XML documents. Furthermore, numerical
restrictions on regular languages (XML schemas) can also be concisely described
by the logic. This implies a characterization of decidable counting extensions
of XPath queries and XML schemas. Moreover, as the logic is closed under
negation, it can thus be used as an optimal reasoning framework for testing
emptiness, containment and equivalence.
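The kind of constraint the logic expresses can be illustrated concretely (the tree and predicate here are illustrative, not the paper's formal syntax): a numerical bound over an arbitrary region such as all descendants, rather than only over the children of a single node.

```python
# A small sketch: count nodes in the "descendants" region satisfying a
# property, then check a numerical bound on that count.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def count_descendants(node, pred):
    """Count proper descendants of `node` satisfying `pred`."""
    total = 0
    for child in node.children:
        total += (1 if pred(child) else 0) + count_descendants(child, pred)
    return total

# Constraint: "the root has at most 2 descendants labelled 'a'".
tree = Node('r', [Node('a', [Node('a')]), Node('b')])
ok = count_descendants(tree, lambda n: n.label == 'a') <= 2
```

The point of the paper is that such bounds, over descendants, ancestors or other multi-directional regions, can be expressed in the logic itself and decided in single exponential time, not merely evaluated on one given tree as above.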
••
TL;DR: In this paper, a decision procedure based on the occurrence of special patterns in automata accepting the input languages is proposed to decide whether there exists a locally threshold testable separator.
Abstract: A separator for two languages is a third language containing the first one
and disjoint from the second one. We investigate the following decision
problem: given two regular input languages, decide whether there exists a
locally testable (resp. a locally threshold testable) separator. In both cases,
we design a decision procedure based on the occurrence of special patterns in
automata accepting the input languages. We prove that the problem is
computationally harder than deciding membership. The correctness proof of the
algorithm yields a stronger result, namely a description of a possible
separator. Finally, we discuss the same problem for context-free input
languages.
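The local-testability idea behind the separation question can be sketched on finite word samples (this simplification omits the threshold counting and the automata-pattern machinery of the paper): membership in a k-locally testable language depends only on a word's prefix, suffix and set of infixes of length k, so two words with the same k-profile can never be separated by such a language.

```python
# A minimal sketch: compute k-profiles and check the necessary condition
# that no word of one sample shares its profile with a word of the other.

def k_profile(w, k):
    return (w[:k],
            w[-k:] if len(w) >= k else w,
            frozenset(w[i:i + k] for i in range(len(w) - k + 1)))

def lt_separable(sample1, sample2, k):
    """Necessary condition for a k-locally testable separator:
    the k-profiles of the two samples are disjoint."""
    profiles1 = {k_profile(w, k) for w in sample1}
    profiles2 = {k_profile(w, k) for w in sample2}
    return profiles1.isdisjoint(profiles2)

# 'ab' and 'aab' are distinguished from 'ba' already by length-2 infixes.
sep = lt_separable({'ab', 'aab'}, {'ba'}, 2)
```

For regular input languages the samples are infinite, which is why the paper's procedure instead searches for special patterns in the accepting automata rather than enumerating profiles.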
••
TL;DR: A semantics for CCS is defined which may both be viewed as an innocent form of presheaf semantics and as a concurrent form of game semantics, and an analogue of fair testing equivalence is defined, which is proved fully abstract w.r.t. standard fair test equivalence.
Abstract: In previous work with Pous, we defined a semantics for CCS which may both be viewed as an innocent form of presheaf semantics and as a concurrent form of game semantics. We define in this setting an analogue of fair testing equivalence, which we prove fully abstract w.r.t. standard fair testing equivalence. The proof relies on a new algebraic notion called playground, which represents the 'rule of the game'. From any playground, we derive two languages equipped with labelled transition systems, as well as a strong, functional bisimulation between them.
••
TL;DR: In this article, the authors examined the case when S is a 1-manifold with boundary, not necessarily compact, and showed that a similar result holds in this case under the assumption that S has finitely many components.
Abstract: A semi-computable set S in a computable metric space need not be computable. However, in some cases, if S has certain topological properties, we can conclude that S is computable. It is known that if a semi-computable set S is a compact manifold with boundary, then the computability of $\partial S$ implies the computability of S. In this paper we examine the case when S is a 1-manifold with boundary, not necessarily compact. We show that a similar result holds in this case under the assumption that S has finitely many components.
••
TL;DR: In this article, a modular framework is introduced to infer polynomial upper bounds on the complexity of term rewrite systems by combining different criteria, such as matrix interpretations and match-bounds.
Abstract: All current investigations to analyze the derivational complexity of term
rewrite systems are based on a single termination method, possibly preceded by
transformations. However, the exclusive use of direct criteria is problematic
due to their restricted power. To overcome this limitation, the article
introduces a modular framework which makes it possible to infer (polynomial)
upper bounds on the complexity of term rewrite systems by combining different
criteria. Since the fundamental idea is based on relative rewriting, we study
how matrix interpretations and match-bounds can each be used and extended to
measure complexity for relative rewriting. The modular framework is proved strictly
more powerful than the conventional setting. Furthermore, the results have been
implemented and experiments show significant gains in power.
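The matrix-interpretation criterion can be illustrated in a degenerate toy case (a 1x1 interpretation for a string rewrite rule; this is an assumption-laden sketch, not the paper's framework): each letter is interpreted as an affine map over the naturals, a word denotes the composition of its letters' maps, and a rule is compatible if the left-hand side's value strictly exceeds the right-hand side's for every argument, which certifies a linear derivational-complexity bound here.

```python
# A toy 1x1 matrix interpretation: [letter](x) = m*x + v over the naturals,
# words are interpreted by composing their letters' maps left to right.

def interp(word, maps, x):
    """Evaluate the composed interpretation of `word` at x."""
    for letter in reversed(word):
        m, v = maps[letter]
        x = m * x + v
    return x

maps = {'b': (1, 1)}          # [b](x) = x + 1
lhs, rhs = 'bb', 'b'          # toy rule: bb -> b
compatible = all(interp(lhs, maps, x) > interp(rhs, maps, x)
                 for x in range(10))   # spot-check the strict decrease
```

Since every application of the rule strictly decreases the interpreted value, and the value of a word grows only linearly in its length under these maps, the derivation height of any word is linearly bounded; the paper's modular framework combines such certificates over relative rewrite systems.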