
Showing papers in "Journal of Logic and Computation in 2017"


Journal ArticleDOI
TL;DR: The theory of canonical extensions is developed and is applied to obtain a new canonicity proof for those inequalities in the language of Distributive Modal Logic (DML) on which the algorithm ALBA is successful.
Abstract: The theory of canonical extensions typically considers extensions of maps A→B to maps Aδ→Bδ. In the present article, the theory of canonical extensions of maps A→Bδ to maps Aδ→Bδ is developed, and is applied to obtain a new canonicity proof for those inequalities in the language of Distributive Modal Logic (DML) on which the algorithm ALBA [9] is successful.

43 citations


Journal ArticleDOI
TL;DR: The authors wish to thank the anonymous reviewers for their valuable comments and suggestions that have significantly improved the article.
Abstract: The authors wish to thank the anonymous reviewers for their valuable comments and suggestions that have significantly improved the article. They also thank Felix Bou and Tommaso Moraschini for helpful comments on Section 5. The authors acknowledge support of the Spanish projects EdeTRI (TIN2012-39348- C02-01) and 2014 SGR 118. Vidal was supported by a CSIC grant JAE Predoc.

43 citations


Journal ArticleDOI
TL;DR: It is shown that while argument-wise plurality voting satisfies many properties, it fails to guarantee the collective rationality of the outcome, and two graph-theoretical restrictions under which the argument-wise plurality rule does produce collectively rational outcomes are mentioned.
Abstract: Given a set of conflicting arguments, there can exist multiple plausible opinions about which arguments should be accepted, rejected or deemed undecided. We study the problem of how multiple such judgements can be aggregated. We define the problem by adapting various classical social-choice-theoretic properties for the argumentation domain. We show that while argument-wise plurality voting satisfies many properties, it fails to guarantee the collective rationality of the outcome. We then present more general results, proving multiple impossibility results on the existence of any good aggregation operator. After characterizing the sufficient and necessary conditions for satisfying collective rationality, we study whether restricting the domain of argument-wise plurality voting to classical semantics allows us to escape the impossibility result. We close by mentioning a couple of graph-theoretical restrictions under which the argument-wise plurality rule does produce collectively rational outcomes. In addition to identifying fundamental barriers to collective argument evaluation, our results contribute to research at the intersection of the argumentation and computational social choice fields.
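The argument-wise plurality rule studied in the abstract can be sketched concretely: each agent submits a labelling of every argument as accepted, rejected or undecided, and each argument independently receives the label chosen by the most agents. A minimal illustration, assuming labels 'in', 'out' and 'undec'; the tie-breaking convention is an assumption, and the sketch deliberately ignores the collective-rationality issue the abstract discusses.

```python
from collections import Counter

def argumentwise_plurality(labellings):
    """Aggregate individual labellings argument by argument.

    Each labelling maps every argument to 'in', 'out' or 'undec'.
    Per argument, the label chosen by the most agents wins; ties
    are resolved as 'undec' (one possible convention, an assumption).
    """
    arguments = labellings[0].keys()
    result = {}
    for arg in arguments:
        counts = Counter(lab[arg] for lab in labellings)
        top = counts.most_common()
        if len(top) > 1 and top[0][1] == top[1][1]:
            result[arg] = 'undec'  # tie-breaking convention
        else:
            result[arg] = top[0][0]
    return result

# Three agents judging arguments a and b (a attacks b):
agents = [
    {'a': 'in', 'b': 'out'},
    {'a': 'in', 'b': 'out'},
    {'a': 'out', 'b': 'in'},
]
print(argumentwise_plurality(agents))  # {'a': 'in', 'b': 'out'}
```

The impossibility results in the article show that no tie-breaking choice rescues this rule in general: majority labels fixed argument by argument need not form a legal labelling of the attack graph.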

42 citations


Journal ArticleDOI
TL;DR: The key messages are that the syntax and proof systems of logics are theories; that both semantics and translations are theory morphisms; and that combinations are colimits.
Abstract: We give general definitions of logical frameworks and logics. Examples include the logical frameworks LF and Isabelle and the logics represented in them. We apply this to give general definitions for equivalence of logics, translation between logics, and combination of logics. We also establish general criteria for the soundness and completeness of these. Our key messages are that the syntax and proof systems of logics are theories; that both semantics and translations are theory morphisms; and that combinations are colimits. Our approach is based on the Mmt language, which lets us combine formalist declarative representations (and thus the associated tool support) with abstract categorical conceptualizations.

37 citations


Journal ArticleDOI
TL;DR: This paper proposes to combine Input/Output logic, a well-known formalism for normative reasoning, with the reification-based approach of Jerry R. Hobbs to create a new framework that will be called ‘reified Input/Output logic’.
Abstract: In this paper, we propose to combine Input/Output logic, a well-known formalism for normative reasoning, with the reification-based approach of Jerry R. Hobbs. The latter is a wide-coverage logic for Natural Language Semantics able to handle a fairly large set of linguistic phenomena into a simple logical formalism. The result is a new framework that we will call ‘reified Input/Output logic’. This paper represents the first step of a long-term research aiming at filling the gap between Input/Output logic and the richness of Natural Language Semantics. We plan in our future work to use reified Input/Output logic as the underlying formalism for applications in legal informatics to process and reason on existing legal texts, which are available in natural language only.

37 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the canonicity of inequalities of the intuitionistic mu-calculus in the presence of fixed point operators and proposed an algorithm which processes inequalities with the aim of eliminating propositional variables.
Abstract: We investigate the canonicity of inequalities of the intuitionistic mu-calculus. The notion of canonicity in the presence of fixed point operators is not entirely straightforward. In the algebraic setting of canonical extensions we examine both the usual notion of canonicity and what we will call tame canonicity. This latter concept has previously been investigated for the classical mu-calculus by Bezhanishvili and Hodkinson. Our approach is in the spirit of Sahlqvist theory. That is, we identify syntactically-defined classes of inequalities, namely the restricted inductive and tame inductive inequalities, which are, respectively, canonical or tame canonical. Our approach is to use an algorithm which processes inequalities with the aim of eliminating propositional variables. The algorithm we introduce is closely related to the algorithms ALBA and mu-ALBA studied by Conradie, Palmigiano, et al. It is based on a calculus of rewrite rules, the soundness of which rests upon the way in which algebras embed into their canonical extensions and the order-theoretic properties of the latter. We show that the algorithm succeeds on every restricted inductive inequality by means of a so-called proper run, and that this is sufficient to guarantee their canonicity. Likewise, we are able to show that the algorithm succeeds on every tame inductive inequality by means of a so-called tame run. In turn, this guarantees their tame canonicity.

30 citations


Journal ArticleDOI
TL;DR: In this paper, the authors define and axiomatize the least modal logic over the four-element Belnap lattice, which is the logic determined by the class of all Kripke frames where the accessibility relation as well as semantic valuations are four-valued.
Abstract: Combining multi-valued and modal logics into a single system is a long-standing concern in mathematical logic and computer science, see for example [7] and the literature cited there. Recent work in this trend [15, 17, 14] develops modal expansions of many-valued systems that are also inconsistency-tolerant, along the tradition initiated by Belnap with his “useful four-valued logic” [3]. Our contribution continues on this line, and the specific problem we address is that of defining and axiomatizing the least modal logic over the four-element Belnap lattice. The problem was inspired by [5], but our solution is quite different from (and in some respects more satisfactory than) that of [5] in that we make an extensive and profitable use of algebraic and topological techniques. In fact, our algebraic and topological analyses of the logic have, in our opinion, an independent interest and contribute to the appeal of our approach. Kripke frames provide a semantics for modal logics that is both flexible with regard to intended applications and interpretations, and highly intuitive. When the non-modal part is multi-valued, though, one may wonder whether the accessibility relation between worlds should remain two-valued or be allowed to assume the same range of truth values as the logic itself. Starting from the point of view of AI applications, [7] argues forcefully that multiple values are an appropriate and useful modeling device. This is the approach taken in [5] and here, too. Our aim is to study the least modal logic over the Belnap lattice, that is, the logic determined by the class of all Kripke frames where the accessibility relation as well as semantic valuations are four-valued.
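The four-element Belnap lattice underlying the abstract has a compact standard encoding: each value is a pair recording whether there is evidence for truth and evidence for falsity. A minimal sketch of the non-modal operations under that encoding; this illustrates only the propositional lattice, not the four-valued Kripke semantics developed in the paper.

```python
# Belnap's four values as (told-true, told-false) pairs:
# t = true only, f = false only, b = both, n = neither.
T, F, B, N = (1, 0), (0, 1), (1, 1), (0, 0)

def neg(x):
    return (x[1], x[0])          # swap evidence for truth and falsity

def conj(x, y):                  # meet in the truth order
    return (x[0] & y[0], x[1] | y[1])

def disj(x, y):                  # join in the truth order
    return (x[0] | y[0], x[1] & y[1])

print(conj(B, N) == F)  # True: b ∧ n = f (bottom of the truth order)
print(disj(B, N) == T)  # True: b ∨ n = t (top of the truth order)
print(neg(B) == B)      # True: negation fixes 'both' and 'neither'
```

In the truth order, f is the bottom, t is the top, and b and n sit incomparably in between, which is exactly what the meet and join above compute.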

29 citations


Journal ArticleDOI
TL;DR: This paper identifies a small set of properties that are instantiated in those various consequence relations, namely truth-relationality, value-monotonicity, validity-coherence, and a constraint of bivalence-compliance, provably replaceable by a structural requisite of non-triviality.
Abstract: Several definitions of logical consequence have been proposed in many-valued logic, which coincide in the two-valued case, but come apart as soon as three truth values come into play. Those definitions include so-called pure consequence, order-theoretic consequence, and mixed consequence. In this paper, we examine whether those definitions together carve out a natural class of consequence relations. We respond positively by identifying a small set of properties that we see instantiated in those various consequence relations, namely truth-relationality, value-monotonicity, validity-coherence, and a constraint of bivalence-compliance, provably replaceable by a structural requisite of non-triviality. Our main result is that the class of consequence relations satisfying those properties coincides exactly with the class of mixed consequence relations and their intersections, including pure consequence relations and order-theoretic consequence. We provide an enumeration of the set of those relations in finite many-valued logics of two extreme kinds: those in which truth values are well-ordered and those in which values between 0 and 1 are incomparable.
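A mixed consequence relation can be checked by brute force over valuations: premises must meet one designation standard, conclusions another. A minimal sketch over the three Strong Kleene values, encoded as 0, 1, 2 for false, half, true; the strict-tolerant instance shown is one example of a mixed consequence relation, and all names are illustrative.

```python
from itertools import product

def neg(a):
    return 2 - a  # Strong Kleene negation on {0, 1, 2}

def mixed_consequence(premises, conclusion, atoms, d_prem, d_conc):
    """Premises |= conclusion under a mixed standard: whenever every
    premise takes a value in d_prem, the conclusion takes a value in
    d_conc. Formulas are functions from a valuation dict to a value."""
    for vals in product((0, 1, 2), repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) in d_prem for p in premises) and conclusion(v) not in d_conc:
            return False
    return True

p = lambda v: v['p']
notp = lambda v: neg(v['p'])
q = lambda v: v['q']

# Strict-tolerant (st): premises strictly true {2},
# conclusion merely tolerantly true {1, 2}:
print(mixed_consequence([p, notp], q, ['p', 'q'], {2}, {1, 2}))   # True
# Tolerant-strict (ts) blocks the same inference:
print(mixed_consequence([p, notp], q, ['p', 'q'], {1, 2}, {2}))   # False
```

The contrast illustrates how the two designation standards come apart as soon as the third value is in play, exactly the phenomenon the abstract starts from.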

25 citations


Journal ArticleDOI
TL;DR: This article studies the class of strongly perfect MTL-algebras having an involutive co-radical, and the variety they generate, namely SBP0, and establishes categorical equivalences for several of their relevant proper subvarieties by employing a generalized notion of triplets whose main components are a Boolean algebra and a prelinear semihoop.
Abstract: This article studies the class of strongly perfect MTL-algebras, i.e. MTL-algebras having an involutive co-radical, and the variety they generate, namely SBP0. Once these structures have been introduced, we first establish categorical equivalences for several of their relevant proper subvarieties by employing a generalized notion of triplets whose main components are a Boolean algebra and a prelinear semihoop. When triplets are further expanded by a suitable operation between their semihoop reducts, we define a category of quadruples that is equivalent to the whole category of SBP0-algebras. Finally, we provide an explicit representation of SBP0-algebras in terms of (weak) Boolean products.

23 citations



Journal ArticleDOI
TL;DR: The cut-elimination theorem is proved for a version of controlled propositional classical logic, i.e. the sequent calculus for classical propositional logic to which a suitable system of control sets is applied.
Abstract: The goal of this article is to design a uniform proof-theoretical framework encompassing classical, non-monotonic and paraconsistent logic. This framework is obtained by the control sets logical device, a syntactical apparatus for controlling derivations. A basic feature of control sets is that of leaving the underlying syntax of a proof system unchanged, while affecting the very combinatorial structure of sequents and proofs. We prove the cut-elimination theorem for a version of controlled propositional classical logic, i.e. the sequent calculus for classical propositional logic to which a suitable system of control sets is applied. Finally, we outline the skeleton of a new (positive) account of non-monotonicity and paraconsistency in terms of concurrent processes.

Journal ArticleDOI
TL;DR: A uniform logical framework is described, based on a bunched logic that combines classical additives and very weak multiplicatives, for reasoning compositionally about access control policy models that provides a way to identify and reason about how vulnerabilities may arise (and be removed) as a result of the architecture of the system.
Abstract: We describe a uniform logical framework, based on a bunched logic that combines classical additives and very weak multiplicatives, for reasoning compositionally about access control policy models. We show how our approach takes account of the underlying system architecture, and so provides a way to identify and reason about how vulnerabilities may arise (and be removed) as a result of the architecture of the system. We consider, using frame rules, how local properties of access control policies are maintained as the system architecture evolves.


Journal ArticleDOI
TL;DR: This article models collective decision making scenarios by using a priority-based aggregation procedure, the so-called lexicographic method, to represent a form of reliability-based ‘deliberation’, providing a logical framework describing the way in which the public and simultaneous announcement of the individual preferences leads to individual preference upgrade.
Abstract: This article models collective decision making scenarios by using a priority-based aggregation procedure, the so-called lexicographic method, to represent a form of reliability-based ‘deliberation’. More precisely, it considers agents with a preference ordering over a set of objects and a reliability ordering over the agents themselves, providing a logical framework describing the way in which the public and simultaneous announcement of the individual preferences leads to individual preference upgrade. The main results are the definitions of this lexicographic upgrade for diverse types of reliability relations (in particular, the preorder and total preorder cases), a sound and complete axiom system for a language describing the effects of such upgrades, and the definitions for non-public variations.
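In the simplest case the article considers, a strict total reliability order over the agents, the lexicographic method can be sketched directly: the most reliable agent who distinguishes two objects decides between them. The representation of preferences as sets of (better, worse) pairs is an assumption for illustration, not the article's notation.

```python
def lexicographic_prefers(x, y, agents):
    """agents: list of strict preference relations, most reliable first.
    Each relation is a set of (better, worse) pairs. x is collectively
    preferred to y iff the most reliable agent distinguishing x and y
    prefers x (assumes a strict total reliability order)."""
    for pref in agents:
        if (x, y) in pref:
            return True
        if (y, x) in pref:
            return False
    return False  # no agent distinguishes x and y

# Two agents over {a, b, c}; agent a0 is more reliable than a1:
a0 = {('a', 'b')}                               # silent about c
a1 = {('b', 'a'), ('c', 'a'), ('c', 'b')}
print(lexicographic_prefers('a', 'b', [a0, a1]))  # True: a0 overrides a1
print(lexicographic_prefers('c', 'b', [a0, a1]))  # True: a0 is silent, a1 decides
```

The preference upgrade described in the abstract replaces each agent's ordering with the lexicographic aggregate after the public announcement; the preorder and non-total cases require the more careful definitions given in the article.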

Journal ArticleDOI
TL;DR: This article shows how, in the context of a weaker logic called Basic De Morgan Logic, one can coherently start with a fully disquotational truth theory and arrive at a strong compositional truth theory by applying a natural uniform reflection principle a finite number of times.
Abstract: Iterated reflection principles have been employed extensively to unfold epistemic commitments that are incurred by accepting a mathematical theory. Recently this has been applied to theories of truth. The idea is to start with a collection of Tarski-biconditionals and arrive by iterated reflection at strong compositional truth theories. In the context of classical logic, it is incoherent to adopt an initial truth theory in which A and ‘A is true’ are inter-derivable. In this article, we show how in the context of a weaker logic, which we call Basic De Morgan Logic, we can coherently start with such a fully disquotational truth theory and arrive at a strong compositional truth theory by applying a natural uniform reflection principle a finite number of times.

Journal ArticleDOI
TL;DR: An intuitionistic version of modal logic S1+SP is extended and it is shown that L is sound and complete w.r.t. a class of special Heyting algebras with a (non-normal) modal operator.
Abstract: A famous result, conjectured by Gödel in 1932 and proved by McKinsey and Tarski in 1948, says that $\varphi$ is a theorem of intuitionistic propositional logic IPC iff its Gödel-translation $\varphi'$ is a theorem of modal logic S4. In this paper, we extend an intuitionistic version of modal logic S1+SP, introduced in our previous paper (S. Lewitzka, Algebraic semantics for a modal logic close to S1, J. Logic and Comp., doi:10.1093/logcom/exu067), to a classical modal logic L and prove the following: a propositional formula $\varphi$ is a theorem of IPC iff $\square\varphi$ is a theorem of L (actually, we show: $\Phi\vdash_{IPC}\varphi$ iff $\square\Phi\vdash_L\square\varphi$, for propositional $\Phi,\varphi$). Thus, the map $\varphi\mapsto\square\varphi$ is an embedding of IPC into L, i.e. L contains a copy of IPC. Moreover, L is a conservative extension of classical propositional logic CPC. In this sense, L is an amalgam of CPC and IPC. We show that L is sound and complete w.r.t. a class of special Heyting algebras with a (non-normal) modal operator.
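The Gödel translation mentioned at the start of the abstract can be written out as a short recursion. A sketch of one common variant (prefixing a box to variables, implications and negations; other variants box every subformula); the tuple representation of formulas is an assumption for illustration, not the paper's notation.

```python
def godel(phi):
    """Gödel-McKinsey-Tarski translation of an intuitionistic
    propositional formula into the language of S4.
    Formulas are strings (variables) or nested tuples
    ('and'|'or'|'imp'|'not', subformulas...)."""
    if isinstance(phi, str):                 # propositional variable
        return ('box', phi)
    op = phi[0]
    if op in ('and', 'or'):
        return (op, godel(phi[1]), godel(phi[2]))
    if op == 'imp':
        return ('box', ('imp', godel(phi[1]), godel(phi[2])))
    if op == 'not':
        return ('box', ('not', godel(phi[1])))
    raise ValueError(f'unknown connective: {op!r}')

# p -> p translates to box(box p -> box p):
print(godel(('imp', 'p', 'p')))
# ('box', ('imp', ('box', 'p'), ('box', 'p')))
```

Under the McKinsey-Tarski theorem the translated formula is an S4 theorem iff the original is an IPC theorem; the paper's embedding into L instead boxes the whole formula at once.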


Journal ArticleDOI
TL;DR: The present paper describes a method for proving the Downward Löwenheim-Skolem Theorem within an arbitrary institution satisfying certain logic properties, and develops another technique, in the spirit of institution-independent model theory, which consists of borrowing the result from a simpler institution across an institution comorphism.
Abstract: The present paper describes a method for proving the Downward Löwenheim-Skolem Theorem within an arbitrary institution satisfying certain logic properties. In order to demonstrate the applicability of the present approach, the abstract results are instantiated to many-sorted first-order logic and preorder algebra. In addition to the first technique for proving the Downward Löwenheim-Skolem Theorem, another one is developed, in the spirit of institution-independent model theory, which consists of borrowing the result from a simpler institution across an institution comorphism. As a result, the Downward Löwenheim-Skolem Property is exported from first-order logic to partial algebras, and from higher-order logic with intensional Henkin semantics to higher-order logic with extensional Henkin semantics. The second method successfully extends the domain of application of the Downward Löwenheim-Skolem Theorem to other non-conventional logical systems for which the first technique may fail. One major application of the Downward Löwenheim-Skolem Theorem is interpolation in constructor-based logics with universally quantified sentences. The interpolation property is established by borrowing it from a base institution for its constructor-based variant across an institution morphism. This result is important as interpolation for constructor-based first-order logics is still an open problem.

Journal ArticleDOI
TL;DR: A model checking algorithm for a subset of alternating-time temporal logic with imperfect information and imperfect recall that not only verifies the existence of a suitable strategy but also produces one (if it exists).
Abstract: We present a model checking algorithm for a subset of alternating-time temporal logic (ATL) with imperfect information and imperfect recall. This variant of ATL is arguably most appropriate when it comes to modeling and specification of multi-agent systems. The related variant of model checking is known to be theoretically hard (NP- to PSPACE-complete, depending on the assumptions), but very few practical attempts at it have been proposed so far. Our algorithm searches through the set of possible uniform strategies, utilizing a simple technique that reduces the search space. In consequence, it not only verifies the existence of a suitable strategy but also produces one (if it exists). We validate the algorithm experimentally on a simple scalable class of models, with promising results. We also discuss two variants of the model checking problem, related to the objective vs. subjective interpretation of strategic ability. We provide algorithms for reductions between the two semantic variants of model checking. The algorithms are experimentally validated as well.

Journal ArticleDOI
TL;DR: In this article, the authors analyze three previously introduced argument-based aggregation operators with respect to Pareto optimality and strategyproofness under different general classes of agent preferences and highlight trade-offs between strategic manipulability and social optimality on one hand, and classical logical criteria on the other.
Abstract: An inconsistent knowledge base can be abstracted as a set of arguments and a defeat relation among them. There can be more than one consistent way to evaluate such an argumentation graph. Collective argument evaluation is the problem of aggregating the opinions of multiple agents on how a given set of arguments should be evaluated. It is crucial not only to ensure that the outcome is logically consistent, but also that it satisfies measures of social optimality and immunity to strategic manipulation. This is because agents have their individual preferences about what the outcome ought to be. In the current paper, we analyze three previously introduced argument-based aggregation operators with respect to Pareto optimality and strategyproofness under different general classes of agent preferences. We highlight fundamental trade-offs between strategic manipulability and social optimality on one hand, and classical logical criteria on the other. Our results motivate further investigation into the relationship between social choice and argumentation theory. The results are also relevant for choosing an appropriate aggregation operator given the criteria that are considered more important, as well as the nature of agents’ preferences.


Journal ArticleDOI
TL;DR: In this paper, the authors investigated stochastic nondeterminism on continuous state spaces by relating nondeterministic kernels and stochastic effectivity functions to each other, and defined state bisimilarity for the latter, considering its connection to morphisms.
Abstract: This paper investigates stochastic nondeterminism on continuous state spaces by relating nondeterministic kernels and stochastic effectivity functions to each other. Nondeterministic kernels are functions assigning each state a set of subprobability measures, and effectivity functions assign to each state an upper-closed set of subsets of measures. Both concepts are generalizations of Markov kernels used for defining two different models: nondeterministic labelled Markov processes and stochastic game models, respectively. We show that an effectivity function that maps into principal filters is given by an image-countable nondeterministic kernel, and that image-finite kernels give rise to effectivity functions. We define state bisimilarity for the latter, considering its connection to morphisms. We provide a logical characterization of bisimilarity in the finitary case. A generalization of congruences (event bisimulations) to effectivity functions and its relation to the categorical presentation of bisimulation are also studied.


Journal ArticleDOI
TL;DR: By giving an explicit representation, it is shown that the completely representable algebras form a basic elementary class, axiomatisable by a universal-existential-universal sentence.
Abstract: For representation by partial functions in the signature with intersection, composition and antidomain, we show that a representation is meet complete if and only if it is join complete. We show that a representation is complete if and only if it is atomic, but that not all atomic representable algebras are completely representable. We show that the class of completely representable algebras is not axiomatisable by any existential-universal-existential first-order theory. By giving an explicit representation, we show that the completely representable algebras form a basic elementary class, axiomatisable by a universal-existential-universal sentence.

Journal ArticleDOI
TL;DR: In this paper, a generalization of CERES to first-order proof schemata is presented, and a schematic version of the sequent calculus called LKSE, and a notion of proof schema based on primitive recursive definitions.
Abstract: The cut-elimination method CERES (for first- and higher-order classical logic) is based on the notion of a characteristic clause set, which is extracted from an LK-proof and is always unsatisfiable. A resolution refutation of this clause set can be used as a skeleton for a proof with atomic cuts only (atomic cut normal form). This is achieved by replacing clauses from the resolution refutation by the corresponding projections of the original proof. We present a generalization of CERES (called CERESs) to first-order proof schemata and define a schematic version of the sequent calculus called LKSE, and a notion of proof schema based on primitive recursive definitions. A method is developed to extract schematic characteristic clause sets and schematic projections from these proof schemata. We also define a schematic resolution calculus for refutation of schemata of clause sets, which can be applied to refute the schematic characteristic clause sets. Finally, the projection schemata and resolution schemata are plugged together and a schematic representation of the atomic cut normal forms is obtained. A major benefit of CERESs is the extension of cut-elimination to inductively defined proofs: we compare CERESs with standard calculi using induction rules and demonstrate that CERESs is capable of performing cut-elimination where traditional methods fail. The algorithmic handling of CERESs is supported by a recent extension of the CERES system.

Journal ArticleDOI
Egon Börger
TL;DR: It is shown how programming features (read: programming constructs) modularize not only the source programs, but also the program property statements and their proofs.
Abstract: We survey the use of Abstract State Machines in the area of programming languages, namely to define behavioural properties of programs at source, intermediate and machine levels in a way that is amenable to mathematical and experimental analysis by practitioners, like correctness and completeness of compilers, etc. We illustrate how theorems about such properties can be integrated into a modular development of programming languages and programs, using as example a Java/JVM compilation correctness theorem about defining, interpreting, compiling and executing Java/JVM code. We show how programming features (read: programming constructs) modularize not only the source programs, but also the program property statements and their proofs.

Journal ArticleDOI
TL;DR: It is proved that Boolean unification with predicates is decidable for quantifier-free F but already undecidable for F of the form ∀yF′[X,y] with F′ quantifier-free, and the same holds for F of the form ∃yF′[X,y].
Abstract: In this article, we deal with the following problem, which we call Boolean unification with predicates: for a given formula F[X] in first-order logic with equality containing an n-ary predicate variable X, is there a quantifier-free formula G[x1,...,xn] such that the formula F[G] is valid in first-order logic with equality? We obtain the following results. Boolean unification with predicates for quantifier-free F is Σp2-complete. In addition, there exists an EXPTIME algorithm which, for an input formula F[X] given as above, constructs a formula G such that F[G] is valid in first-order logic with equality, if such a formula exists. For F of the form ∀yF′[X,y] with F′ quantifier-free, we prove that Boolean unification with predicates is already undecidable. The same holds for F of the form ∃yF′[X,y] for F′ quantifier-free. Instances of Boolean unification with predicates naturally occur in the context of automated theorem proving. Our results are relevant for cut-introduction and the automated search for induction invariants.

Journal ArticleDOI
TL;DR: This paper deals with tomonoids that are finite and negative, where negativity means that the monoidal identity is the top element, and defines a method of generating all such tomonoids in a stepwise fashion.

Journal ArticleDOI
TL;DR: This work shows that the known bounds on the derivation height are essentially preserved if the rewrite system fulfils some mild conditions, and re-establishes an essentially optimal 2-recursive upper bound on the derivational complexity of finite rewrite systems compatible with a Knuth-Bendix order.
Abstract: We study the complexity of term rewrite systems compatible with the Knuth-Bendix order, if the signature of the rewrite system is potentially infinite. We show that the known bounds on the derivation height are essentially preserved, if the rewrite system fulfils some mild conditions. This allows us to obtain bounds on the derivational height of non-simply terminating rewrite systems. As a corollary, we re-establish an essentially optimal 2-recursive upper bound on the derivational complexity of finite rewrite systems compatible with a Knuth-Bendix order. Furthermore, we link our main result to results on generalised Knuth-Bendix orders and to recent results on transfinite Knuth-Bendix orders.
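The comparison at the heart of a Knuth-Bendix order can be illustrated on ground terms: compare total symbol weights first, then head symbols by precedence, then arguments lexicographically. A simplified sketch that omits variables and the variable-count condition of the full definition; the term representation, weights and precedence are illustrative assumptions.

```python
def weight(term, w):
    """Total weight of a ground term (symbol, [subterms])."""
    f, args = term
    return w[f] + sum(weight(a, w) for a in args)

def kbo_greater(s, t, w, prec):
    """Simplified ground Knuth-Bendix order: s > t iff s has larger
    weight, or equal weight and a larger head symbol in the
    precedence, or equal heads and lexicographically greater
    arguments."""
    ws, wt = weight(s, w), weight(t, w)
    if ws != wt:
        return ws > wt
    fs, sa = s
    ft, ta = t
    if fs != ft:
        return prec[fs] > prec[ft]
    for a, b in zip(sa, ta):
        if a != b:
            return kbo_greater(a, b, w, prec)
    return False

a, b = ('a', []), ('b', [])
w = {'f': 1, 'a': 1, 'b': 1}
prec = {'f': 2, 'a': 1, 'b': 0}
print(kbo_greater(('f', [a]), ('f', [b]), w, prec))  # True: a > b by precedence
print(kbo_greater(('f', [a]), a, w, prec))           # True: larger weight
```

Orienting every rewrite rule left-to-right by such an order guarantees termination; the derivation-height bounds in the article quantify how long such derivations can be.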