
Showing papers in "Annals of Mathematics and Artificial Intelligence in 2019"


Journal ArticleDOI
TL;DR: A novel deep learning architecture based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network is proposed, supported by introducing semantic information into the word representations with the help of knowledge bases such as WordNet and ConceptNet.
Abstract: As the use of the Internet is increasing, people are connected virtually using social media platforms such as text messages, Facebook, Twitter, etc. This has led to an increase in the spread of unsolicited messages, known as spam, which are used for marketing, collecting personal information, or simply to offend people. Therefore, it is crucial to have a strong spam detection architecture that can prevent these types of messages. Spam detection in a noisy platform such as Twitter is still a problem due to short texts and the high variability of the language used in social media. In this paper, we propose a novel deep learning architecture based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. The model is supported by introducing semantic information into the word representations with the help of knowledge bases such as WordNet and ConceptNet. Using these knowledge bases improves performance by providing better semantic vector representations of test words that would otherwise receive random values because they were not seen during training. Experimental results on two benchmark datasets show the effectiveness of the proposed approach with respect to accuracy and F1-score.
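
The architecture described above is easy to sketch in outline. Below is a minimal Keras rendering under assumed hyperparameters (vocabulary size, embedding dimension, filter and unit counts); the knowledge-base enrichment of word vectors is only indicated by a comment, since the paper's exact WordNet/ConceptNet procedure is not reproduced here.

```python
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM = 20000, 300   # assumed hyperparameters

model = tf.keras.Sequential([
    # In the paper's setting, this embedding would be initialised from vectors
    # enriched with WordNet/ConceptNet information for out-of-vocabulary words.
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.Conv1D(128, 3, activation="relu"),   # local n-gram features
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),                             # longer-range dependencies
    tf.keras.layers.Dense(1, activation="sigmoid"),       # spam / not-spam score
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```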

103 citations


Journal ArticleDOI
TL;DR: This research explores the applicability of Artificial Intelligence along with computational logic tools, and in particular the Answer Set Programming (ASP) approach, to the automation of evidence analysis, and presents the formalization of realistic investigative cases via simple ASP programs.
Abstract: In the frame of Digital Forensics (DF) and Digital Investigations (DI), the “Evidence Analysis” phase has the aim to provide objective data, and to perform suitable elaboration of these data so as to help in the formation of possible hypotheses, which could later be presented as elements of proof in court. The aim of our research is to explore the applicability of Artificial Intelligence (AI), along with computational logic tools and in particular the Answer Set Programming (ASP) approach, to the automation of evidence analysis. We will show how significant complex investigations, hardly solvable for human experts, can be expressed as optimization problems belonging in many cases to the $\mathbb{P}$ or $\mathbb{NP}$ complexity classes. All these problems can be expressed in ASP. As a proof of concept, in this paper we present the formalization of realistic investigative cases via simple ASP programs, and show how such a methodology can lead to the formulation of tangible investigative hypotheses. We also sketch a design for a feasible Decision Support System (DSS) especially meant for investigators, based on artificial intelligence tools.
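
To give a flavour of the kind of encoding the abstract refers to (the concrete programs are in the paper and not reproduced here), the following is a hypothetical toy sketch: a few invented facts and one rule relating device ownership and access events are solved with the clingo Python API. All predicate names and facts are illustrative assumptions.

```python
import clingo  # Python API of the clingo ASP solver

# Hypothetical toy encoding of an "evidence analysis" scenario.
PROGRAM = """
device(laptop). device(phone).
accessed(laptop, server, t1).
owns(alice, laptop). owns(bob, phone).

% A person is a candidate suspect if a device they own accessed the server.
suspect(P) :- owns(P, D), accessed(D, server, _).
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Answer set:", m))  # includes suspect(alice)
```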

32 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an extensive analysis of relative deviation bounds, including detailed proofs of two-sided inequalities and their implications, under the assumption that a moment of the loss is bounded.
Abstract: We present an extensive analysis of relative deviation bounds, including detailed proofs of two-sided inequalities and their implications. We also give detailed proofs of two-sided generalization bounds that hold in the general case of unbounded loss functions, under the assumption that a moment of the loss is bounded. We then illustrate how to apply these results in a sample application: the analysis of importance weighting.
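
For readers unfamiliar with the sample application, here is a generic numpy illustration of importance weighting: a quantity defined under a target distribution P is estimated from samples drawn from a source distribution Q, reweighted by the density ratio dP/dQ. The distributions and the function h are arbitrary choices; the possibly unbounded weights are precisely what motivates bounds that assume only a bounded moment of the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source distribution Q = N(0, 1); target distribution P = N(1, 1) (illustrative choice).
n = 100_000
x = rng.normal(0.0, 1.0, n)                     # samples from Q

def density_ratio(x):                           # w(x) = dP/dQ(x) for these two Gaussians
    return np.exp(x - 0.5)                      # exp(-(x-1)^2/2) / exp(-x^2/2) = exp(x - 1/2)

h = lambda x: (x > 0).astype(float)             # quantity of interest under P

estimate = np.mean(density_ratio(x) * h(x))     # importance-weighted estimate of E_P[h(X)]
print(estimate)                                 # close to P(X > 0) ≈ 0.841 for N(1, 1)
```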

20 citations


Journal ArticleDOI
TL;DR: This work used computer proof-checking methods to verify the correctness of the proofs of the propositions in Euclid Book I, using axioms as close as possible to those of Euclid, in a language closely related to that used in Tarski’s formal geometry.
Abstract: We used computer proof-checking methods to verify the correctness of our proofs of the propositions in Euclid Book I. We used axioms as close as possible to those of Euclid, in a language closely related to that used in Tarski’s formal geometry. We used proofs as close as possible to those given by Euclid, but filling Euclid’s gaps and correcting errors. Euclid Book I has 48 propositions; we proved 235 theorems. The extras were partly “Book Zero”, preliminaries of a very fundamental nature, partly propositions that Euclid omitted but were used implicitly, partly advanced theorems that we found necessary to fill Euclid’s gaps, and partly just variants of Euclid’s propositions. We wrote these proofs in a simple fragment of first-order logic corresponding to Euclid’s logic, debugged them using a custom software tool, and then checked them in the well-known and trusted proof checkers HOL Light and Coq.

17 citations


Journal ArticleDOI
TL;DR: This paper addresses an important limitation in defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects, by inducing a modular preference order on objects from each modular preference order on roles, and using these to relativise defeasible subsumption.
Abstract: Description logics have been extended in a number of ways to support defeasible reasoning in the KLM tradition. Such features include preferential or rational defeasible concept inclusion, and defeasible roles in complex concept descriptions. Semantically, defeasible subsumption is obtained by means of a preference order on objects, while defeasible roles are obtained by adding a preference order to role interpretations. In this paper, we address an important limitation in defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects. We do this by inducing a modular preference order on objects from each modular preference order on roles, and using these to relativise defeasible subsumption. This yields a notion of contextualised rational defeasible subsumption, with contexts described by roles. We also provide a semantic construction for rational closure and a method for its computation, and present a correspondence result between the two.

12 citations


Journal ArticleDOI
TL;DR: This is a survey of some recent results relating Dung-style semantics for different types of logical argumentation frameworks and several forms of reasoning with maximally consistent sets (MCS) of premises.
Abstract: This is a survey of some recent results relating Dung-style semantics for different types of logical argumentation frameworks and several forms of reasoning with maximally consistent sets (MCS) of premises. The related formalisms are also examined with respect to some rationality postulates and are carried over to corresponding proof systems for non-monotonic reasoning.

9 citations


Journal ArticleDOI
TL;DR: This paper proposes a set of features which characterize a specific geometric theorem, so that machine learning techniques can be used in geometry, and constructs several portfolios for theorem proving in geometry as well as runtime prediction models for the provers involved.
Abstract: In recent years, portfolio problem solving has found many applications in automated reasoning, primarily in SAT solving and in automated and interactive theorem proving. Portfolio problem solving is an approach in which, for an individual instance of a specific problem, one particular, hopefully most appropriate, solving technique is automatically selected among several available ones and used. The selection usually employs machine learning methods. To our knowledge, this approach has not been used in automated theorem proving in geometry so far, and it poses a number of new challenges. In this paper we propose a set of features which characterize a specific geometric theorem, so that machine learning techniques can be used in geometry. Relying on these features and using different machine learning techniques, we constructed several portfolios for theorem proving in geometry, and also runtime prediction models for the provers involved. The evaluation was performed on two corpora of geometric theorems: one coming from geometric construction problems and one from a benchmark set of the GeoGebra tool. The obtained results show that machine learning techniques can be useful in automated theorem proving in geometry, while there is still room for further progress.
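
The portfolio idea itself can be sketched quickly. Below is a hypothetical scikit-learn rendering: each theorem is described by a numeric feature vector (the paper designs geometry-specific features; the ones here are placeholders) and a classifier predicts which prover to run. This is not the paper's implementation, only the general scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: hypothetical features of one geometric theorem
# (e.g. number of points, lines, circles, degree of the polynomial translation).
X_train = np.array([[5, 4, 0, 2],
                    [9, 7, 1, 4],
                    [3, 2, 0, 1],
                    [12, 9, 2, 6]])
# Label: index of the prover that solved the theorem fastest in training runs.
y_train = np.array([0, 1, 0, 2])   # 0 = area method, 1 = Wu's method, 2 = Groebner bases

portfolio = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_theorem = np.array([[7, 5, 1, 3]])
chosen_prover = portfolio.predict(new_theorem)[0]
print("Run prover", chosen_prover, "on this theorem")
```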

9 citations


Journal ArticleDOI
TL;DR: A formal framework to support the choice of actions of a value-driven agent and arrange them into plans that reflect the agent’s preferences is proposed, based on defeasible argumentation.
Abstract: Values are at the heart of human decision-making. They are used to decide whether something or some state of affairs is good or not, and they are also used to address the moral dilemma of the right thing to do under given circumstances. Both uses are present in several everyday situations, from the design of a public policy to the negotiation of employee benefit packages. Both uses of values are especially relevant when one intends to design or validate artificial intelligent systems so that they behave in a morally correct way. In real life, the choice of policy components or the agreed-upon benefit package are processes that involve argumentation. Likewise, the design and deployment of value-driven artificial entities may be well served by embedding practical reasoning capabilities in these entities or using argumentation for their design and certification processes. In this paper, we propose a formal framework to support the choice of actions of a value-driven agent and arrange them into plans that reflect the agent’s preferences. The framework is based on defeasible argumentation. It presumes that agent values are partially ordered in a hierarchy that is used to resolve conflicts between incommensurable values.

9 citations


Journal ArticleDOI
TL;DR: An algorithm is presented that helps convert expressions involving non-negative quantities in Euclidean geometry theorems into a form usable by a complex algebraic geometry prover; it is proved that the algorithm may take doubly exponential time to produce the output in polynomial form.
Abstract: We present an algorithm to help convert expressions involving non-negative quantities (like distances) in Euclidean geometry theorems into a form usable by a complex algebraic geometry prover. The algorithm helps in refining the output of an existing prover, and therefore supports immediate deployment in high-level prover systems. We prove that the algorithm may take doubly exponential time to produce the output in polynomial form, but in many cases the result is still computable and useful.
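
The core difficulty can be illustrated with a tiny example of our own (not taken from the paper): a statement about distances such as sqrt(a) + sqrt(b) = c must be turned into a polynomial identity before a complex algebraic prover can use it. Repeatedly isolating and squaring radicals, as sketched below with sympy, removes the square roots, and each round can roughly square the size of the expression, which is where a doubly exponential worst case can arise.

```python
import sympy as sp

a, b, c = sp.symbols('a b c', nonnegative=True)

# Statement with radicals: sqrt(a) + sqrt(b) - c = 0
expr = sp.sqrt(a) + sp.sqrt(b) - c

# Round 1: isolate sqrt(a) and square, leaving one radical:  b - 2*c*sqrt(b) + c**2 - a
step1 = sp.expand((c - sp.sqrt(b))**2 - a)
print(step1)

# Round 2: isolate the remaining radical term and square again -> radical-free polynomial
poly = sp.expand((c**2 + b - a)**2 - (2*c*sp.sqrt(b))**2)
print(sp.simplify(poly))   # a**2 - 2*a*b - 2*a*c**2 + b**2 - 2*b*c**2 + c**4
```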

9 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider several natural parameters for the consistency problem of disjunctive ASP and also take the sizes of the answer sets into account, a restriction that is particularly interesting for applications requiring small solutions, since subset minimization problems can be encoded directly in ASP thanks to the inherent minimization in its semantics.
Abstract: Disjunctive answer set programming (ASP) is an important framework for declarative modeling and problem solving, where the computational complexity of basic decision problems like consistency (deciding whether a program has an answer set) is located on the second level of the polynomial hierarchy. During the last decades different approaches have been applied to find tractable fragments of programs, in particular, also using parameterized complexity. However, the full potential of parameterized complexity has not been unlocked since only one or very few parameters have been considered at once. In this paper, we consider several natural parameters for the consistency problem of disjunctive ASP. In addition, we also take the sizes of the answer sets into account; a restriction that is particularly interesting for applications requiring small solutions, since subset minimization problems can be encoded directly in ASP thanks to the inherent minimization in its semantics. Previous work on parameterizing the consistency problem by the size of answer sets yielded mostly negative results. In contrast, we start from recent findings for the problem WMMSAT and show several novel fixed-parameter tractability (fpt) results based on combinations of parameters. Moreover, we establish a variety of hardness results (paraNP-, W[2]-, and W[1]-hardness) to assess the tightness of our parameter combinations.

9 citations


Journal ArticleDOI
TL;DR: It is shown here, using elementary model theoretic tools, that the universal first order consequences of any geometric theory T of Pappian planes which is consistent with the analytic geometry of the reals are decidable.
Abstract: We survey the status of decidability of the first order consequences in various axiomatizations of Hilbert-style Euclidean geometry. We draw attention to a widely overlooked result by Martin Ziegler from 1980, which proves Tarski’s conjecture on the undecidability of finitely axiomatizable theories of fields. We elaborate on how to use Ziegler’s theorem to show that the consequence relations for the first order theory of the Hilbert plane and the Euclidean plane are undecidable. It was already known that the universal theory of Hilbert planes and Wu’s orthogonal geometry is decidable. As a new result, we show here, using elementary model theoretic tools, that the universal first order consequences of any geometric theory T of Pappian planes which is consistent with the analytic geometry of the reals are decidable. The techniques used were all known to experts in mathematical logic and geometry in the past, but no detailed proofs are easily accessible for practitioners of symbolic computation or automated theorem proving.

Journal ArticleDOI
TL;DR: This paper introduces and discusses some formal extensions of MCSs aimed to their practical application in dynamic environments, and provides guidelines for implementations.
Abstract: Multi-Context Systems (MCSs) are able to formally model, in Computational Logic, distributed systems composed of heterogeneous sources, or “contexts”, interacting via special rules called “bridge rules”. In this paper, we consider how to enhance flexibility and generality in bridge-rules definition and use. In particular, we introduce and discuss some formal extensions of MCSs aimed to their practical application in dynamic environments, and we provide guidelines for implementations.

Journal ArticleDOI
TL;DR: This paper characterizes the algebraic independence of a set of polynomials defined based on the sampling pattern, which is closely related to finite completability, and proposes a geometric analysis on the manifold structure of the union of several subspaces to incorporate all given rank constraints simultaneously.
Abstract: This paper is concerned with investigating the fundamental conditions on the locations of the sampled entries, i.e., the sampling pattern, for finite completability of a matrix that represents the union of several subspaces with given ranks. In contrast with the existing analysis on the Grassmannian manifold for conventional matrix completion, we propose a geometric analysis on the manifold structure for the union of several subspaces to incorporate all given rank constraints simultaneously. In order to obtain deterministic conditions on the sampling pattern, we characterize the algebraic independence of a set of polynomials defined based on the sampling pattern, which is closely related to finite completability. We also give a probabilistic condition in terms of the number of samples per column, i.e., the sampling probability, which leads to finite completability with high probability. Furthermore, using the proposed geometric analysis for finite completability, we characterize sufficient conditions on the sampling pattern that ensure there exists only one completion for the sampled data.

Journal ArticleDOI
TL;DR: The definition of the P-log language is refined by eliminating some ambiguities and incidental decisions made in its original version, and the formal semantics is slightly modified to better match the intuitive meaning of the language constructs.
Abstract: This paper focuses on the investigation and improvement of the knowledge representation language P-log, which allows for both logical and probabilistic reasoning. We refine the definition of the language by eliminating some ambiguities and incidental decisions made in its original version and slightly modify the formal semantics to better match the intuitive meaning of the language constructs. We also define a new class of coherent (i.e., logically and probabilistically consistent) P-log programs which facilitates their construction and proofs of correctness. A query answering algorithm, sound for programs from this class, and a prototype implementation have also been developed; due to their size, they are not included in the paper but can be found in the dissertation of the first author.

Journal ArticleDOI
TL;DR: This paper outlines the implementation of Euclidean geometry based on straightedge and compass constructions in the intuitionistic type theory of the Nuprl proof assistant, which enables a concise and intuitive expression of Euclid’s constructions.
Abstract: Constructions are central to the methodology of geometry presented in the Elements. This theory therefore poses a unique challenge to those concerned with the practice of constructive mathematics: can the Elements be faithfully captured in a modern constructive framework? In this paper, we outline our implementation of Euclidean geometry based on straightedge and compass constructions in the intuitionistic type theory of the Nuprl proof assistant. A result of our intuitionistic treatment of Euclidean geometry is a proof of the second proposition from Book I of the Elements in its full generality; a result that differs from other formally constructive accounts of Euclidean geometry. Our formalization of the straightedge and compass utilizes a predicate for orientation, which enables a concise and intuitive expression of Euclid’s constructions.

Journal ArticleDOI
TL;DR: This article develops a 3-phase compilation scheme that maps both knowledge bases and skeptical queries to constraint satisfaction problems, and shows that credulous and weakly skeptical c-inference can also be modelled as constraint satisfaction problems and that the compilation scheme can be extended to such queries.
Abstract: Several different semantics have been proposed for conditional knowledge bases $\mathcal{R}$ containing qualitative conditionals of the form “If A, then usually B”, leading to different nonmonotonic inference relations induced by $\mathcal{R}$. For the notion of c-representations, which form a subclass of all ranking functions accepting $\mathcal{R}$, a skeptical inference relation, called c-inference and taking all c-representations of $\mathcal{R}$ into account, has been suggested. In this article, we develop a 3-phase compilation scheme that maps both knowledge bases and skeptical queries to constraint satisfaction problems. In addition to skeptical c-inference, we show that credulous and weakly skeptical c-inference can also be modelled as constraint satisfaction problems, and that the compilation scheme can be extended to such queries. We further extend the compilation approach to knowledge bases evolving over time. The compiled form of $\mathcal{R}$ is reused for incrementally compiling extensions, contractions, and updates of $\mathcal{R}$. For each compilation step, we prove its soundness and completeness, and demonstrate significant efficiency benefits when querying the compiled version of $\mathcal{R}$. These findings are also supported by experiments with the software system InfOCF that employs the proposed compilation scheme.
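
For readers unfamiliar with c-representations, the constraint system being compiled can be illustrated with a deliberately naive sketch (this is not the paper's 3-phase compilation, just the underlying definitions made executable). Each conditional (B_i | A_i) receives a nonnegative integer impact η_i, a world is ranked by summing the impacts of the conditionals it falsifies, and such a ranking accepts the knowledge base iff κ(A_i ∧ B_i) < κ(A_i ∧ ¬B_i) for every conditional. The tiny "penguin" knowledge base below is a standard textbook example.

```python
from itertools import product

# Worlds assign truth values to (p, b, f) = (penguin, bird, flies).
WORLDS = list(product((True, False), repeat=3))

# Conditionals (B|A) as pairs of functions on a world w = (p, b, f).
CONDITIONALS = [
    (lambda w: w[1], lambda w: w[2]),        # (f | b):  birds usually fly
    (lambda w: w[0], lambda w: not w[2]),    # (-f | p): penguins usually do not fly
    (lambda w: w[0], lambda w: w[1]),        # (b | p):  penguins are usually birds
]

def kappa(world, etas):
    """Rank of a world: sum of the impacts of the conditionals it falsifies."""
    return sum(eta for (A, B), eta in zip(CONDITIONALS, etas)
               if A(world) and not B(world))

def accepts(etas):
    """kappa accepts (B|A) iff the best A-and-B world ranks strictly below the best A-and-not-B world."""
    for A, B in CONDITIONALS:
        verify = min(kappa(w, etas) for w in WORLDS if A(w) and B(w))
        falsify = min(kappa(w, etas) for w in WORLDS if A(w) and not B(w))
        if not verify < falsify:
            return False
    return True

# Naive search over small impact vectors; the paper compiles these constraints to a CSP instead.
solutions = [etas for etas in product(range(4), repeat=3) if accepts(etas)]
print(solutions[:5])      # e.g. (1, 2, 2) is a valid impact vector
```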

Journal ArticleDOI
TL;DR: A TPTP-inspired language is used to write a semi-formal proof of a theorem that fairly accurately depicts a proof that can be found in mathematical textbooks.
Abstract: In this paper, we propose a new approach for automated verification of informal proofs in Euclidean geometry using a fragment of first-order logic called coherent logic and a corresponding proof representation. We use a TPTP-inspired language to write a semi-formal proof of a theorem that fairly accurately depicts a proof that can be found in mathematical textbooks. The semi-formal proof is verified by generating more detailed proof objects expressed in the coherent logic vernacular. Those proof objects can be easily transformed into Isabelle and Coq proof objects, and also into natural language proofs written in English and Serbian. This approach is tested on two sets of theorem proofs, using the classical axiomatic system for Euclidean geometry created by David Hilbert and a modern axiomatic system E created by Jeremy Avigad, Edward Dean, and John Mumma.

Journal ArticleDOI
TL;DR: ACTLOG as discussed by the authors is a rule-based four-valued language designed to specify actions in a paraconsistent and paracomplete manner, which can be executed even when the underlying belief base contents are inconsistent and/or partial.
Abstract: Contemporary systems situated in real-world open environments frequently have to cope with incomplete and inconsistent information, which typically increases the complexity of reasoning and decision processes. Realistic modeling of such informationally complex environments calls for nuanced tools. In particular, incomplete and inconsistent information should neither trivialize nor stop reasoning or planning. The paper introduces ACTLOG, a rule-based four-valued language designed to specify actions in a paraconsistent and paracomplete manner. ACTLOG is an extension of 4QLBel, a language for reasoning with paraconsistent belief bases. Each belief base stores multiple world representations. In this context, an ACTLOG action may be seen as a belief base transformer. In contrast to other approaches, ACTLOG actions can be executed even when the underlying belief base contents are inconsistent and/or partial. ACTLOG provides nuanced action specification tools, allowing for a subtle interplay among various forms of nonmonotonic, paraconsistent, paracomplete and doxastic reasoning methods applicable in informationally complex environments. Despite its rich modeling possibilities, it remains tractable. ACTLOG permits composite actions by using sequential and parallel compositions as well as conditional specifications. The framework is illustrated on a decontamination case study known from the literature.
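
ACTLOG itself is not reproduced here, but the four-valued backbone such languages build on (truth values true, false, inconsistent, unknown, in the style of 4QL) can be sketched in a few lines. The value names and the evidence-combination rule below are an illustrative guess at the flavour of such a logic, not ACTLOG's actual semantics.

```python
from enum import Enum

class V(Enum):
    TRUE = "t"
    FALSE = "f"
    INCONS = "i"     # evidence both for and against (paraconsistent case)
    UNKNOWN = "u"    # no evidence at all (paracomplete case)

def combine(sources):
    """Merge evidence about one fact coming from several belief-base sources."""
    has_pos = any(v in (V.TRUE, V.INCONS) for v in sources)
    has_neg = any(v in (V.FALSE, V.INCONS) for v in sources)
    if has_pos and has_neg:
        return V.INCONS        # conflicting evidence does not stop reasoning
    if has_pos:
        return V.TRUE
    if has_neg:
        return V.FALSE
    return V.UNKNOWN           # missing evidence does not stop reasoning either

print(combine([V.TRUE, V.FALSE, V.UNKNOWN]))   # V.INCONS
```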

Journal ArticleDOI
TL;DR: This paper proves, within the Coq proof assistant, the equivalence in both 2D and 3D between two very different but complementary approaches to formalizing projective incidence geometry, one of them based on the matroid (rank) structure of incidence geometry.
Abstract: Incidence geometry is a well-established theory which captures the very basic properties of all geometries in terms of points belonging to lines, planes, etc. Moreover, projective incidence geometry leads to a simple framework where many properties can be studied. In this article, we consider two very different but complementary mathematical approaches formalizing this theory within the Coq proof assistant. The first one consists of the usual and synthetic geometric axiom system often encountered in the literature. The second one is more original and relies on combinatorial aspects through the notion of rank which is based on the matroid structure of incidence geometry. This paper mainly contributes to the field by proving the equivalence between these two approaches in both 2D and 3D. This result allows us to study the further automation of many proofs of projective geometry theorems. We give an overview of techniques that will be heavily used in the equivalence proof and are generic enough to be reused later in yet-to-be-written proofs. Finally, we discuss the possibilities of future automation that can be envisaged using the rank notion.

Journal ArticleDOI
TL;DR: In this article, the authors propose two AGM-style characterizations of model repair: one based on belief sets and the other based on structural changes, and they show that the proposed set of postulates fully characterizes the expected rationality of modifications in the model repair problem.
Abstract: This work explores formal aspects of model repair, i.e., how to rationally modify Kripke models representing the behavior of a system in order to satisfy a desired property. We investigate the problem in the light of Alchourrón, Gärdenfors, and Makinson’s work on belief revision. We propose two AGM-style characterizations of model repair: one based on belief sets and the other based on structural changes. In the first characterization, we define a set of rationality postulates over formulas with a close correspondence to those in the classical belief revision theory. We show that the proposed set of postulates fully characterizes the expected rationality of modifications in the model repair problem. In the second characterization, we propose a new set of rationality postulates based on structural modifications on models. These postulates have a close correspondence to the classical approach of model repair, while preserving the same rationality of the first characterization. We provide two representation results and the connection between them.

Journal ArticleDOI
TL;DR: It is shown that when learned from examples, PLP-forests have better accuracy than single PLP-trees, and that the choice of a voting rule does not have a major effect on the aggregated order, thus rendering the problem of selecting the “right” rule less critical.
Abstract: We study preference representation models based on partial lexicographic preference trees (PLP-trees). We propose to represent preference relations as forests of small PLP-trees (PLP-forests), and to use voting rules to aggregate orders represented by the individual trees into a single order to be taken as a model of the agent’s preference relation. We show that when learned from examples, PLP-forests have better accuracy than single PLP-trees. We also show that the choice of a voting rule does not have a major effect on the aggregated order, thus rendering the problem of selecting the “right” rule less critical. Next, for the proposed PLP-forest preference models, we develop methods to compute optimal and near-optimal outcomes, the tasks that appear difficult for some other common preference models. Lastly, we compare our models with those based on decision trees, which brings up questions for future research.
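
As a rough illustration of the model class (not of the learning algorithms or the specific voting rules studied in the paper), the sketch below encodes outcomes as binary attribute vectors, lets each "tree" be a simplified unconditional lexicographic order given by an attribute importance list with preferred values, and aggregates the orders of a small forest by Borda-style scoring. Attribute names and data are invented.

```python
from itertools import product

ATTRS = ["price_low", "fast_delivery", "eco_friendly"]      # hypothetical attributes
OUTCOMES = list(product([0, 1], repeat=len(ATTRS)))

def lex_key(outcome, tree):
    """Simplified PLP-tree: an ordered list of (attribute index, preferred value);
    outcomes are compared lexicographically on whether they hit the preferred values."""
    return tuple(1 if outcome[i] == pref else 0 for i, pref in tree)

# A small "forest": three trees with different attribute importance orders.
FOREST = [
    [(0, 1), (1, 1), (2, 1)],
    [(2, 1), (0, 1), (1, 1)],
    [(1, 1), (2, 1), (0, 1)],
]

# Aggregate the tree orders with Borda counts (one of many possible voting rules).
scores = {o: 0 for o in OUTCOMES}
for tree in FOREST:
    ranked = sorted(OUTCOMES, key=lambda o: lex_key(o, tree))   # worst .. best
    for rank, o in enumerate(ranked):
        scores[o] += rank

best = max(OUTCOMES, key=scores.get)
print("Aggregated optimal outcome:", best)    # (1, 1, 1), which every tree ranks best
```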

Journal ArticleDOI
TL;DR: The suggested way of choosing between graph transformation variants may be used to choose between different graph grammars when modeling such systems, and is illustrated in a model of some business processes, resulting in the automated choice of the business process adaptation under the assumption that the process changes are minimal towards the terminal state.
Abstract: Graph transformation theory uses rules to perform a graph transformation. However, there is no way to choose between different transformations in the case where several of them are applicable. A way to make this choice is suggested here, based on comparing the values of the implications which correspond to the different transformation variants. The relationship between the topos of bundles and the set of graphs with the same vertices is introduced to include logic into graph transformation theory. Thus, one can use the special type of implication and the truth-values set of such a topos to estimate different variants of graph transformations. In this approach, the maximal part of the initial graph towards the terminal one is conserved in the chosen variant. Analysis of self-adaptive systems uses some graph grammars. Self-adaptive systems autonomously perform an adaptation to changes both in user needs and in their operational environments, while still maintaining some desired properties. The suggested way to choose such graph transformation variants may be used to make a choice between different graph grammars when modeling such systems. This approach is illustrated in a model of some business processes, resulting in the automated choice of the business process adaptation under the assumption that the process changes are minimal towards the terminal state.

Journal ArticleDOI
TL;DR: This special issue contains extended and revised versions of papers presented at ASPOCP 2016 and ASPOCP 2017, as well as an original contribution, on the relationship between answer set programming and other computing paradigms.
Abstract: Answer set programming (ASP) is a well-established knowledge representation paradigm in which combinatorial problems can be expressed in an expressive knowledge representation language. Ever since its introduction in the late 1980s, answer set programming has benefited from, and in turn influenced, related computing paradigms. The prototypical example of this cross-fertilization is probably the introduction of conflict-driven clause learning in satisfiability solving, which has led to a new generation of answer set solvers. The relationship between ASP and other computing paradigms such as constraint satisfaction (e.g., in the form of constraint answer set programming), satisfiability modulo theories (e.g., in the form of ASP modulo theories), quantified Boolean formulas (e.g., in the form of nested logic programs), and many other computing paradigms is the topic of active research. Motivated by this, in 2008, the first workshop on answer set programming and other computing paradigms (ASPOCP) was organized. This tradition has been continued and the 12th edition in this workshop series is soon to be organized. The current special issue was initiated after the 2017 edition of ASPOCP. It contains extended and revised versions of papers presented at ASPOCP 2016 and ASPOCP 2017, as well as an original contribution. Unsurprisingly, given the large number of “other computing paradigms”, the papers found in this special issue are of a very diverse nature. This special issue contains three papers originally presented at ASPOCP 2016. Cabalar, Pérez, and Pérez introduce an extension of existential graphs to be used as an alternative

Journal ArticleDOI
TL;DR: This paper presents a new way of interaction between modelers and solvers to support the Product Development Process by taking into account procedural constraints.
Abstract: This paper presents a new way of interaction between modelers and solvers to support the Product Development Process (PDP). The proposed approach extends the functionalities and the power of the solvers by taking into account procedural constraints. A procedural constraint requires calling a procedure or a function of the modeler. This procedure performs a series of actions and geometric computations in a certain order. The modeler calls the solver for solving a main problem, the solver calls the modeler’s procedures, and similarly procedures of the modeler can call the solver for solving sub-problems. The features, specificities, advantages and drawbacks of the proposed approach are presented and discussed. Several examples are also provided to illustrate this approach.

Journal ArticleDOI
TL;DR: In this paper, a multi-armed bandit procurement auction is studied, where the objective is to maximize the expected utility of the auctioneer subject to incentive compatibility and individual rationality, while simultaneously learning the unknown qualities of the agents.
Abstract: We study the problem of a buyer who gains stochastic rewards by procuring, through an auction, multiple units of a service or item from a pool of heterogeneous agents who are strategic on two dimensions, namely cost and capacity. The reward obtained for a single unit from an allocated agent depends on the inherent quality of the agent; the agent’s quality is fixed but unknown. Each agent can only supply a limited number of units (the capacity of the agent). The cost incurred per unit and the capacity (maximum number of units that can be supplied) are private information of each agent. The auctioneer is required to elicit from the agents their costs as well as capacities (making the mechanism design bidimensional) and, further, to learn the qualities of the agents as well, with a view to maximizing her utility. Motivated by this, we design a bidimensional multi-armed bandit procurement auction that seeks to maximize the expected utility of the auctioneer subject to incentive compatibility and individual rationality, while simultaneously learning the unknown qualities of the agents. We first work with the assumption that the qualities are known, and propose an optimal, truthful mechanism 2D-OPT for the auctioneer to elicit costs and capacities. Next, in order to learn the qualities of the agents as well, we provide sufficient conditions for a learning algorithm to be Bayesian incentive compatible and individually rational. We finally design a novel learning mechanism, 2D-UCB, that is stochastic Bayesian incentive compatible and individually rational.
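
To give a feel for the learning half of the problem, here is a plain UCB-style loop that estimates agents' unknown qualities from Bernoulli rewards. It deliberately ignores the auction-theoretic side (costs, capacities, incentive compatibility) and the paper's 2D-OPT and 2D-UCB mechanisms; everything in it is a generic bandit illustration.

```python
import math
import random

TRUE_QUALITY = [0.9, 0.6, 0.75]       # unknown to the buyer; used only to simulate rewards
n_agents = len(TRUE_QUALITY)
pulls = [0] * n_agents                # units procured from each agent so far
wins = [0] * n_agents                 # successful units observed from each agent

random.seed(0)
for t in range(1, 2001):
    def ucb(i):
        # UCB index: empirical quality plus an exploration bonus.
        if pulls[i] == 0:
            return float("inf")
        return wins[i] / pulls[i] + math.sqrt(2 * math.log(t) / pulls[i])

    chosen = max(range(n_agents), key=ucb)
    reward = 1 if random.random() < TRUE_QUALITY[chosen] else 0   # stochastic unit reward
    pulls[chosen] += 1
    wins[chosen] += reward

print([round(wins[i] / pulls[i], 2) for i in range(n_agents)])    # estimates near true qualities
```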

Journal ArticleDOI
TL;DR: A comparison of the results leads to the realization that players with equivalent representation might relax the actual complexity of the problem, and enable manipulation of tournaments that can be controlled in reality.
Abstract: Is it possible for the organizers of a sports tournament to influence the identity of the final winner by manipulating the initial seeding of the tournament? Is it possible to ensure a specific good (i.e., king) player will win at least a certain number of rounds in the tournament? This paper investigates these questions both by means of a theoretical method and a practical approach. The theoretical method focuses on the attempt to identify sufficient conditions to ensure a king player will win at least a pre-defined number of rounds in the tournament. It seems that the tournament must adhere to very strict conditions to ensure this outcome, suggesting that this is a hard problem. The practical approach, on the other hand, uses the Monte Carlo method to demonstrate that these problems are solvable in realistic computational time. A comparison of the results leads to the realization that players with equivalent representation might relax the actual complexity of the problem, and enable manipulation of tournaments that can be controlled in reality.
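
The practical approach is simple enough to sketch. Below is a generic Monte Carlo estimate, under an assumed matrix of pairwise win probabilities, of how often a chosen player wins a single-elimination bracket over random seedings; searching over seedings to maximize this frequency would be the manipulation step. The numbers are invented and the king-player conditions from the paper are not modelled.

```python
import random

# Hypothetical pairwise win probabilities: P[i][j] = probability that player i beats player j.
P = [[0.5, 0.7, 0.6, 0.8],
     [0.3, 0.5, 0.65, 0.75],
     [0.4, 0.35, 0.5, 0.6],
     [0.2, 0.25, 0.4, 0.5]]
TARGET = 0          # player the organizer would like to win

def play_bracket(seeding):
    """Single-elimination tournament over a given seeding (list of player indices)."""
    current = seeding[:]
    while len(current) > 1:
        nxt = []
        for a, b in zip(current[::2], current[1::2]):
            nxt.append(a if random.random() < P[a][b] else b)
        current = nxt
    return current[0]

random.seed(1)
trials = 20000
wins = sum(play_bracket(random.sample(range(4), 4)) == TARGET for _ in range(trials))
print("Target wins in", round(100 * wins / trials, 1), "% of random seedings")
```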

Journal ArticleDOI
TL;DR: The SAT+CAS method as discussed by the authors is a variant of the Davis-Putnam-Logemann-Loveland DPLL(T) architecture, where the T solver is replaced by a CAS.
Abstract: In this paper, we provide an overview of the SAT+CAS method that combines satisfiability checkers (SAT solvers) and computer algebra systems (CAS) to resolve combinatorial conjectures, and present new results vis-a-vis best matrices. The SAT+CAS method is a variant of the Davis–Putnam–Logemann–Loveland DPLL(T) architecture, where the T solver is replaced by a CAS. We describe how the SAT+CAS method has been previously used to resolve many open problems from graph theory, combinatorial design theory, and number theory, showing that the method has broad applications across a variety of fields. Additionally, we apply the method to construct the largest best matrices yet known and present new skew Hadamard matrices constructed from best matrices. We show the best matrix conjecture (that best matrices exist in all orders of the form r² + r + 1), which was previously known to hold for r ≤ 6, also holds for r = 7. We also confirmed the results of the exhaustive searches that have been previously completed for r ≤ 6.
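
The SAT+CAS control loop can be sketched abstractly: a SAT solver enumerates candidate combinatorial objects from a clausal encoding, and a CAS-style check filters them, with blocking clauses fed back into the solver. The sketch below uses the python-sat (pysat) package, which is assumed to be available, and a stand-in algebraic check; it shows the loop only, not the paper's encoding of best matrices.

```python
from pysat.solvers import Glucose3   # assumes the python-sat package is installed

def algebraic_check(assignment):
    """Stand-in for the CAS: accept only assignments with exactly two true variables.
    In the SAT+CAS method this would be a computer algebra computation, e.g. testing
    an autocorrelation or matrix property of the candidate object."""
    return sum(1 for lit in assignment if lit > 0) == 2

solver = Glucose3()
solver.add_clause([1, 2, 3, 4])          # some initial clausal constraints (illustrative)

solutions = []
while solver.solve():
    model = solver.get_model()           # candidate object proposed by the SAT solver
    if algebraic_check(model):
        solutions.append(model)
    # Block this exact candidate and continue the search (clause feedback).
    solver.add_clause([-lit for lit in model])

print(len(solutions), "candidates passed the algebraic check")
```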

Journal ArticleDOI
TL;DR: It is interesting to compare the expressive power of a weak query language using a strong modality, against that of a seemingly stronger query language but perhaps using a weaker modality.
Abstract: For any query language $\mathcal {F}$ , we consider three natural families of boolean queries. Nonemptiness queries are expressed as e ≠ ∅ with e an $\mathcal {F}$ expression. Emptiness queries are expressed as e = ∅. Containment queries are expressed as e1 ⊆ e2. We refer to syntactic constructions of boolean queries as modalities. In first order logic, the emptiness, nonemptiness and containment modalities have exactly the same expressive power. For other classes of queries, e.g., expressed in weaker query languages, the modalities may differ in expressiveness. We propose a framework for studying the expressive power of boolean query modalities. Along one dimension, one may work within a fixed query language and compare the three modalities. Here, we identify crucial query features that enable us to go from one modality to another. Furthermore, we identify semantical properties that reflect the lack of these query features to establish separations. Along a second dimension, one may fix a modality and compare different query languages. This second dimension is the one that has already received quite some attention in the literature, whereas in this paper we emphasize the first dimension. Combining both dimensions, it is interesting to compare the expressive power of a weak query language using a strong modality, against that of a seemingly stronger query language but perhaps using a weaker modality. We present some initial results within this theme. The two main query languages to which we apply our framework are the algebra of binary relations, and the language of conjunctive queries.
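
One instance of the "crucial query features" mentioned above can be made concrete with a toy example of our own: if a query language can express set difference, then a containment query e1 ⊆ e2 reduces to an emptiness query, since e1 ⊆ e2 holds exactly when e1 - e2 is empty. The relations below are invented placeholders.

```python
# Binary relations over a toy domain, represented as sets of pairs.
parent = {("ann", "bob"), ("bob", "cal")}
ancestor = {("ann", "bob"), ("bob", "cal"), ("ann", "cal")}

def difference(e1, e2):
    return e1 - e2

# Containment modality: parent ⊆ ancestor ?
containment_holds = parent.issubset(ancestor)

# The same question phrased as an emptiness query, available once difference is expressible.
emptiness_holds = len(difference(parent, ancestor)) == 0

assert containment_holds == emptiness_holds
print(containment_holds)   # True
```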

Journal ArticleDOI
TL;DR: It is shown that Jeffrey conditionalisation fails to satisfy the traditional principles of Inclusion and Preservation for belief revision and the principle of Recovery for belief withdrawals, as well as the Levi and Harper identities.
Abstract: This paper is about the statics and dynamics of belief states that are represented by pairs consisting of an agent’s credences (represented by a subjective probability measure) and her categorical beliefs (represented by a set of possible worlds). Regarding the static side, we argue that the latter proposition should be coherent with respect to the probability measure and that its probability should reach a certain threshold value. On the dynamic side, we advocate Jeffrey conditionalisation as the principal mode of changing one’s belief state. This updating method fits the idea of the Lockean Thesis better than plain Bayesian conditionalisation, and it affords a flexible method for adding and withdrawing categorical beliefs. We show that it fails to satisfy the traditional principles of Inclusion and Preservation for belief revision and the principle of Recovery for belief withdrawals, as well as the Levi and Harper identities. We take this to be a problem for the latter principles rather than for the idea of coherent belief change.
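
A small worked example may help with the two ingredients. Jeffrey conditionalisation reweights a credence function so that a partition cell gets a prescribed new probability while relative odds inside each cell are preserved, and the Lockean thesis then counts a proposition as believed when its probability reaches a threshold. The worlds, numbers, and threshold below are arbitrary illustrations, not the paper's examples.

```python
# Worlds and a prior credence function (illustrative numbers).
prior = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

E = {"w1", "w2"}          # the partition cell whose probability is exogenously revised
new_prob_E = 0.4          # Jeffrey update: P'(E) is set to 0.4, P'(not-E) to 0.6

def jeffrey_update(p, E, q):
    """Reweight within E and within its complement, keeping relative odds fixed."""
    pE = sum(v for w, v in p.items() if w in E)
    return {w: (q * v / pE if w in E else (1 - q) * v / (1 - pE))
            for w, v in p.items()}

posterior = jeffrey_update(prior, E, new_prob_E)
print(posterior)          # {'w1': 0.25, 'w2': 0.15, 'w3': 0.6}

# Lockean reading of categorical belief: believe a proposition iff its probability
# reaches a threshold (here 0.7, an arbitrary choice).
threshold = 0.7
propositions = {"E": E, "not-E": {"w3"}, "w1-or-w3": {"w1", "w3"}}
believed = {name for name, A in propositions.items()
            if sum(posterior[w] for w in A) >= threshold}
print(believed)           # {'w1-or-w3'} in this toy example
```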

Journal ArticleDOI
TL;DR: This work describes a solver that, at the time of its development (mid-2016), was able to solve harder problems better and faster than any other known ELP solver.
Abstract: As the practical use of answer set programming (ASP) has grown with the development of efficient solvers, we expect a growing interest in extensions of ASP as their semantics stabilize and solvers supporting them mature. Epistemic Specifications, which adds modal operators K and M to the language of ASP, is one such extension. We call a program in this language an epistemic logic program (ELP). Solvers have thus far been practical for only the simplest ELPs due to exponential growth of the search space. We describe a solver that, at the time of its development (mid-2016), was able to solve harder problems better (e.g., without exponentially-growing memory needs w.r.t. K and M occurrences) and faster than any other known ELP solver.