
Showing papers in "Journal of Artificial Intelligence Research in 2015"


Journal ArticleDOI
TL;DR: This article surveys the dynamical models that have been derived for various multi-agent reinforcement learning algorithms, making it possible to study and compare them qualitatively, and provides a roadmap on the progress that has been achieved in analysing the evolutionary dynamics of multi-agent learning.
Abstract: The interaction of multiple autonomous agents gives rise to highly dynamic and nondeterministic environments, contributing to the complexity in applications such as automated financial markets, smart grids, or robotics. Due to the sheer number of situations that may arise, it is not possible to foresee and program the optimal behaviour for all agents beforehand. Consequently, it becomes essential for the success of the system that the agents can learn their optimal behaviour and adapt to new situations or circumstances. The past two decades have seen the emergence of reinforcement learning, both in single and multi-agent settings, as a strong, robust and adaptive learning paradigm. Progress has been substantial, and a wide range of algorithms are now available. An important challenge in the domain of multi-agent learning is to gain qualitative insights into the resulting system dynamics. In the past decade, tools and methods from evolutionary game theory have been successfully employed to study multi-agent learning dynamics formally in strategic interactions. This article surveys the dynamical models that have been derived for various multi-agent reinforcement learning algorithms, making it possible to study and compare them qualitatively. Furthermore, new learning algorithms that have been introduced using these evolutionary game theoretic tools are reviewed. The evolutionary models can be used to study complex strategic interactions. Examples of such analysis are given for the domains of automated trading in stock markets and collision avoidance in multi-robot systems. The paper provides a roadmap on the progress that has been achieved in analysing the evolutionary dynamics of multi-agent learning by highlighting the main results and accomplishments.
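
To give a concrete flavour of the dynamical models surveyed, the sketch below simulates the single-population replicator dynamics that arise as the continuous-time limit of simple reinforcement learners (such as Cross learning) in a symmetric 2x2 game. The payoff matrix, step size and iteration count are illustrative choices, not taken from the paper.

    import numpy as np

    # Illustrative symmetric 2x2 game (Prisoner's Dilemma payoffs, row player).
    A = np.array([[3.0, 0.0],
                  [5.0, 1.0]])

    def replicator_step(x, dt=0.01):
        """One Euler step of dx_i/dt = x_i * ((A x)_i - x^T A x)."""
        fitness = A @ x
        average = x @ fitness
        return x + dt * x * (fitness - average)

    x = np.array([0.5, 0.5])          # initial mixed strategy
    for _ in range(2000):
        x = replicator_step(x)
    print(x)                          # probability mass drifts toward the dominant action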

262 citations


Journal ArticleDOI
TL;DR: This paper shows that for voters who follow the most central political-science model of electorates—single-peaked preferences—those protections vanish, and shows that NP-hard bribery problems—including those for Kemeny and Llull elections—fall to polynomial time.
Abstract: For many election systems, bribery (and related) attacks have been shown NP-hard using constructions on combinatorially rich structures such as partitions and covers. This paper shows that for voters who follow the most central political-science model of electorates--single-peaked preferences--those hardness protections vanish. By using single-peaked preferences to simplify combinatorial covering challenges, we for the first time show that NP-hard bribery problems--including those for Kemeny and Llull elections--fall to polynomial time for single-peaked electorates. By using single-peaked preferences to simplify combinatorial partition challenges, we for the first time show that NP-hard partition-of-voters problems fall to polynomial time for single-peaked electorates. We show that for single-peaked electorates, the winner problems for Dodgson and Kemeny elections, though Θ₂ᵖ-complete in the general case, fall to polynomial time. And we completely classify the complexity of weighted coalition manipulation for scoring protocols in single-peaked electorates.

135 citations


Journal ArticleDOI
TL;DR: AUTOFOLIO can significantly improve the performance of CLASPFOLIO 2 on 8 out of the 13 scenarios from the Algorithm Selection Library, leads to new state-of-the-art algorithm selectors for 7 of these scenarios, and matches state-of-the-art performance (statistically) on all other scenarios.
Abstract: Algorithm selection (AS) techniques - which involve choosing from a set of algorithms the one expected to solve a given problem instance most efficiently - have substantially improved the state of the art in solving many prominent AI problems, such as SAT, CSP, ASP, MAXSAT and QBF. Although several AS procedures have been introduced, not too surprisingly, none of them dominates all others across all AS scenarios. Furthermore, these procedures have parameters whose optimal values vary across AS scenarios. This holds specifically for the machine learning techniques that form the core of current AS procedures, and for their hyperparameters. Therefore, to successfully apply AS to new problems, algorithms and benchmark sets, two questions need to be answered: (i) how to select an AS approach and (ii) how to set its parameters effectively. We address both of these problems simultaneously by using automated algorithm configuration. Specifically, we demonstrate that we can automatically configure CLASPFOLIO 2, which implements a large variety of different AS approaches and their respective parameters in a single, highly-parameterized algorithm framework. Our approach, dubbed AUTOFOLIO, allows researchers and practitioners across a broad range of applications to exploit the combined power of many different AS methods. We demonstrate AUTOFOLIO can significantly improve the performance of CLASPFOLIO 2 on 8 out of the 13 scenarios from the Algorithm Selection Library, leads to new state-of-the-art algorithm selectors for 7 of these scenarios, and matches state-of-the-art performance (statistically) on all other scenarios. Compared to the best single algorithm for each AS scenario, AUTOFOLIO achieves average speedup factors between 1.3 and 15.4.
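
As a rough illustration of the machine-learning core that such AS systems (and hence AUTOFOLIO's configuration space) build on, the sketch below trains a classifier that maps instance features to the empirically fastest solver. The features, runtimes and model choice are placeholders, not AUTOFOLIO's actual pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    features = rng.random((200, 10))          # hypothetical instance features
    runtimes = rng.random((200, 3))           # hypothetical runtimes of 3 solvers
    best_solver = runtimes.argmin(axis=1)     # training label: fastest solver per instance

    selector = RandomForestClassifier(n_estimators=100, random_state=0)
    selector.fit(features, best_solver)

    new_instance = rng.random((1, 10))
    print("chosen solver:", selector.predict(new_instance)[0])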

110 citations


Journal ArticleDOI
TL;DR: The empirical results highlight the importance of applying the clause elimination procedures developed in this work, particularly in the context of state-of-the-art QSAT solving.
Abstract: The famous archetypical NP-complete problem of Boolean satisfiability (SAT) and its PSPACE-complete generalization of quantified Boolean satisfiability (QSAT) have become central declarative programming paradigms through which real-world instances of various computationally hard problems can be efficiently solved. This success has been achieved through several breakthroughs in practical implementations of decision procedures for SAT and QSAT, that is, in SAT and QSAT solvers. Here, simplification techniques for conjunctive normal form (CNF) for SAT and for prenex conjunctive normal form (PCNF) for QSAT--the standard input formats of SAT and QSAT solvers--have recently proven very effective in increasing solver efficiency when applied before (i.e., in preprocessing) or during (i.e., in inprocessing) satisfiability search. In this article, we develop and analyze clause elimination procedures for pre- and inprocessing. Clause elimination procedures form a family of (P)CNF formula simplification techniques which remove clauses that have specific (in practice polynomial-time) redundancy properties while maintaining the satisfiability status of the formulas. Extending known procedures such as tautology, subsumption, and blocked clause elimination, we introduce novel elimination procedures based on asymmetric variants of these techniques, and also develop a novel family of so-called covered clause elimination procedures, as well as natural liftings of the CNF-level procedures to PCNF. We analyze the considered clause elimination procedures from various perspectives. Furthermore, for the variants not preserving logical equivalence under clause elimination, we show how to reconstruct solutions to original CNFs from satisfying assignments to simplified CNFs, which is important for practical applications for the procedures. Complementing the more theoretical analysis, we present results on an empirical evaluation on the practical importance of the clause elimination procedures in terms of the effect on solver runtimes on standard real-world application benchmarks. It turns out that the importance of applying the clause elimination procedures developed in this work is empirically emphasized in the context of state-of-the-art QSAT solving.
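
A minimal sketch of two of the classical redundancy properties mentioned above, tautology and subsumption elimination, on clauses encoded as sets of DIMACS-style integer literals; the asymmetric, blocked and covered variants studied in the paper require resolution-based checks that are not shown here.

    def is_tautology(clause):
        """A clause containing both a literal and its negation is always true."""
        return any(-lit in clause for lit in clause)

    def eliminate(cnf):
        """Drop tautologies, then drop every clause subsumed by another clause."""
        clauses = [frozenset(c) for c in cnf]
        clauses = [c for c in clauses if not is_tautology(c)]
        kept = []
        for i, c in enumerate(clauses):
            subsumed = any(j != i and d <= c and (d != c or j < i)
                           for j, d in enumerate(clauses))
            if not subsumed:
                kept.append(c)
        return kept

    # [1, -1, 2] is a tautology; [1, 2] subsumes [1, 2, 3].
    print(eliminate([[1, -1, 2], [1, 2, 3], [1, 2]]))   # -> [frozenset({1, 2})]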

75 citations


Journal ArticleDOI
TL;DR: A novel framework, REMIND (Relational Event Model INDuction), for supervised relational learning of event models from large video datasets using ILP is presented, together with an extension that integrates an abduction step to improve learning performance when the input data are noisy.
Abstract: Event models obtained automatically from video can be used in applications ranging from abnormal event detection to content based video retrieval. When multiple agents are involved in the events, characterizing events naturally suggests encoding interactions as relations. Learning event models from this kind of relational spatio-temporal data using relational learning techniques such as Inductive Logic Programming (ILP) holds promise, but has not been successfully applied to very large datasets which result from video data. In this paper, we present a novel framework REMIND (Relational Event Model INDuction) for supervised relational learning of event models from large video datasets using ILP. Efficiency is achieved through the learning from interpretations setting and using a typing system that exploits the type hierarchy of objects in a domain. The use of types also helps prevent over-generalization. Furthermore, we also present a type-refining operator and prove that it is optimal. The learned models can be used for recognizing events from previously unseen videos. We also present an extension to the framework by integrating an abduction step that improves the learning performance when there is noise in the input data. The experimental results on several hours of video data from two challenging real world domains (an airport domain and a physical action verbs domain) suggest that the techniques are suitable to real world scenarios.

61 citations


Journal ArticleDOI
TL;DR: Although the main goal is to efficiently answer queries over OWL 2 ontologies and data, the technical results are very general and the approach is applicable to first-order knowledge representation languages that can be captured by rules allowing for existential quantification and disjunction in the head.
Abstract: Answering conjunctive queries over ontology-enriched datasets is a core reasoning task for many applications. Query answering is, however, computationally very expensive, which has led to the development of query answering procedures that sacrifice either expressive power of the ontology language, or the completeness of query answers in order to improve scalability. In this paper, we describe a hybrid approach to query answering over OWL 2 ontologies that combines a datalog reasoner with a fully-fledged OWL 2 reasoner in order to provide scalable 'pay-as-you-go' performance. The key feature of our approach is that it delegates the bulk of the computation to the datalog reasoner and resorts to expensive OWL 2 reasoning only as necessary to fully answer the query. Furthermore, although our main goal is to efficiently answer queries over OWL 2 ontologies and data, our technical results are very general and our approach is applicable to first-order knowledge representation languages that can be captured by rules allowing for existential quantification and disjunction in the head; our only assumption is the availability of a datalog reasoner and a fully-fledged reasoner for the language of interest, both of which are used as 'black boxes'. We have implemented our techniques in the PAGOdA system, which combines the datalog reasoner RDFox and the OWL 2 reasoner HermiT. Our extensive evaluation shows that PAGOdA succeeds in providing scalable pay-as-you-go query answering for a wide range of OWL 2 ontologies, datasets and queries.

53 citations


Journal ArticleDOI
TL;DR: In this paper, the complexity of controlling the outcome of weighted elections through adding and deleting voters is studied; polynomial-time algorithms, NP-completeness results, and, for many NP-complete cases, approximation algorithms are obtained.
Abstract: Although manipulation and bribery have been extensively studied under weighted voting, there has been almost no work done on election control under weighted voting. This is unfortunate, since weighted voting appears in many important natural settings. In this paper, we study the complexity of controlling the outcome of weighted elections through adding and deleting voters. We obtain polynomial-time algorithms, NP-completeness results, and for many NP-complete cases, approximation algorithms. In particular, for scoring rules we completely characterize the complexity of weighted voter control. Our work shows that for quite a few important cases, either polynomial-time exact algorithms or polynomial-time approximation algorithms exist.

49 citations


Journal Article
TL;DR: It is shown that it is possible to adapt many existing online learning algorithms to the coactive framework and provide algorithms that achieve O(1/√T) average regret in terms of cardinal utility, even though the learning algorithm never observes cardinal utility values directly.
Abstract: We propose Coactive Learning as a model of interaction between a learning system and a human user, where both have the common goal of providing results of maximum utility to the user. Interactions in the Coactive Learning model take the following form: at each step, the system (e.g. search engine) receives a context (e.g. query) and predicts an object (e.g. ranking); the user responds by correcting the system if necessary, providing a slightly improved - but not necessarily optimal - object as feedback. We argue that such preference feedback can be inferred in large quantity from observable user behavior (e.g., clicks in web search), unlike the optimal feedback required in the expert model or the cardinal valuations required for bandit learning. Despite the relaxed requirements for the feedback, we show that it is possible to adapt many existing online learning algorithms to the coactive framework. In particular, we provide algorithms that achieve O(1/√T) average regret in terms of cardinal utility, even though the learning algorithm never observes cardinal utility values directly. We also provide an algorithm with O(log(T)/T) average regret in the case of λ-strongly convex loss functions. An extensive empirical study demonstrates the applicability of our model and algorithms on a movie recommendation task, as well as ranking for web search.
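
The following sketch shows a preference-perceptron-style interaction round of the kind used in this framework: the system predicts the best object under its current weights and then shifts the weights toward the user's slightly improved feedback object. The feature map, candidate set and feedback function are placeholders to be supplied by the application.

    import numpy as np

    def coactive_round(w, candidates, phi, context, user_feedback):
        """One interaction round; returns the updated weight vector.

        phi(context, y)   -- feature vector of object y in this context
        user_feedback(y)  -- the user's (slightly) improved object y_bar
        """
        y = max(candidates, key=lambda c: w @ phi(context, c))   # system prediction
        y_bar = user_feedback(y)                                 # preference feedback
        return w + phi(context, y_bar) - phi(context, y)         # perceptron-style update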

48 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a theoretical framework and an implementation in the context of the FO(·) knowledge representation language, where a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part are given.
Abstract: Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages like Answer Set Programming (ASP) and FO(·) and a CSP modeling language such as Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blow-up of the size of the theory caused by the grounding phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(·) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split in a part to be grounded and a part that can be justified. Experimental results illustrate the power and generality of this approach.

43 citations


Journal ArticleDOI
TL;DR: It is demonstrated that these models faithfully represent the meanings of sentences and are sensitive to how the roles played by participants, their characteristics, the actions performed, the manner of such actions, and changing spatial relations between participants affect the meaning of a sentence and how it is grounded in video.
Abstract: We present an approach to simultaneously reasoning about a video clip and an entire natural-language sentence. The compositional nature of language is exploited to construct models which represent the meanings of entire sentences composed out of the meanings of the words in those sentences mediated by a grammar that encodes the predicate-argument relations. We demonstrate that these models faithfully represent the meanings of sentences and are sensitive to how the roles played by participants (nouns), their characteristics (adjectives), the actions performed (verbs), the manner of such actions (adverbs), and changing spatial relations between participants (prepositions) affect the meaning of a sentence and how it is grounded in video. We exploit this methodology in three ways. In the first, a video clip along with a sentence are taken as input and the participants in the event described by the sentence are highlighted, even when the clip depicts multiple similar simultaneous events. In the second, a video clip is taken as input without a sentence and a sentence is generated that describes an event in that clip. In the third, a corpus of video clips is paired with sentences which describe some of the events in those clips and the meanings of the words in those sentences are learned. We learn these meanings without needing to specify which attribute of the video clips each word in a given sentence refers to. The learned meaning representations are shown to be intelligible to humans.

41 citations


Journal ArticleDOI
TL;DR: Theoretical and experimental results demonstrate that options can play an important role in AVI by decreasing approximation error and inducing fast convergence; the analysis also suggests a new algorithm, Landmark-based AVI, that represents the value function only at the landmark states.
Abstract: Temporally extended actions have proven useful for reinforcement learning, but their duration also makes them valuable for efficient planning. The options framework provides a concrete way to implement and reason about temporally extended actions. Existing literature has demonstrated the value of planning with options empirically, but there is a lack of theoretical analysis formalizing when planning with options is more efficient than planning with primitive actions. We provide a general analysis of the convergence rate of a popular Approximate Value Iteration (AVI) algorithm called Fitted Value Iteration (FVI) with options. Our analysis reveals that longer duration options and a pessimistic estimate of the value function both lead to faster convergence. Furthermore, options can improve convergence even when they are suboptimal and sparsely distributed throughout the state-space. Next we consider the problem of generating useful options for planning based on a subset of landmark states. This suggests a new algorithm, Landmark-based AVI (LAVI), that represents the value function only at the landmark states. We analyze both FVI and LAVI using the proposed landmark-based options and compare the two algorithms. Our experimental results in three different domains demonstrate the key properties from the analysis. Our theoretical and experimental results demonstrate that options can play an important role in AVI by decreasing approximation error and inducing fast convergence.
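
A minimal sketch of one fitted-value-iteration sweep with option models, assuming each option model maps a state to a (cumulative discounted reward, discounted duration gamma**tau, resulting state) triple; the regressor, feature map and sampled states are placeholders rather than the paper's experimental setup.

    import numpy as np
    from sklearn.linear_model import Ridge

    def fvi_sweep(states, option_models, V, phi=lambda s: np.atleast_1d(float(s))):
        """Fit a new value-function approximation to greedy option backups."""
        X, targets = [], []
        for s in states:
            backups = [r + g * V(s2)                     # multi-step option backup
                       for (r, g, s2) in (o(s) for o in option_models)]
            X.append(phi(s))
            targets.append(max(backups))
        reg = Ridge(alpha=1.0).fit(np.array(X), np.array(targets))
        return lambda s: float(reg.predict(phi(s).reshape(1, -1))[0])

Note that a longer option enters the backup with a smaller discounted duration g = gamma**tau, which matches the abstract's observation that longer-duration options speed up convergence.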

Journal ArticleDOI
TL;DR: The results show that, by focusing on the convex coverage set (CCS), these methods achieve much better scalability in the number of agents than the current state of the art.
Abstract: In this article, we propose new algorithms for multi-objective coordination graphs (MO-CoGs). Key to the efficiency of these algorithms is that they compute a convex coverage set (CCS) instead of a Pareto coverage set (PCS). Not only is a CCS a sufficient solution set for a large class of problems, it also has important characteristics that facilitate more efficient solutions. We propose two main algorithms for computing a CCS in MO-CoGs. Convex multi-objective variable elimination (CMOVE) computes a CCS by performing a series of agent eliminations, which can be seen as solving a series of local multi-objective subproblems. Variable elimination linear support (VELS) iteratively identifies the single weight vector w that can lead to the maximal possible improvement on a partial CCS and calls variable elimination to solve a scalarized instance of the problem for w. VELS is faster than CMOVE for small and medium numbers of objectives and can compute an ε-approximate CCS in a fraction of the runtime. In addition, we propose variants of these methods that employ AND/OR tree search instead of variable elimination to achieve memory efficiency. We analyze the runtime and space complexities of these methods, prove their correctness, and compare them empirically against a naive baseline and an existing PCS method, both in terms of memory-usage and runtime. Our results show that, by focusing on the CCS, these methods achieve much better scalability in the number of agents than the current state of the art.
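
To illustrate the central object these algorithms compute, the sketch below prunes a set of two-objective value vectors to its convex coverage set, i.e. the vectors that are optimal for some linear scalarisation weight (for two objectives this is the upper convex hull). It is only meant to show the CCS/PCS distinction, not CMOVE or VELS themselves.

    def ccs_2d(vectors):
        """Convex coverage set of 2-objective value vectors (their upper convex hull)."""
        points = sorted(set(map(tuple, vectors)))            # sort by first objective
        hull = []
        for p in points:
            while len(hull) >= 2:
                (x1, y1), (x2, y2) = hull[-2], hull[-1]
                # pop hull[-1] if it lies on or below the segment hull[-2] -> p
                if (x2 - x1) * (p[1] - y1) >= (y2 - y1) * (p[0] - x1):
                    hull.pop()
                else:
                    break
            hull.append(p)
        return hull

    # (5, 5) is Pareto-optimal but lies below the mixture of (4, 7) and (10, 0),
    # so it belongs to the PCS but not to the CCS.
    print(ccs_2d([(0, 10), (4, 7), (5, 5), (10, 0)]))   # -> [(0, 10), (4, 7), (10, 0)]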

Journal ArticleDOI
TL;DR: These algorithms extend all previous results for MaxSAT and #SAT achieved by dynamic programming along structural decompositions of the incidence graph of the input formula, and limited experiments against state-of-the-art solvers serve as a proof of concept that warrants further research.
Abstract: We look at dynamic programming algorithms for propositional model counting, also called #SAT, and MaxSAT. Tools from graph structure theory, in particular treewidth, have been used to successfully identify tractable cases in many subfields of AI, including SAT, Constraint Satisfaction Problems (CSP), Bayesian reasoning, and planning. In this paper we attack #SAT and MaxSAT using similar, but more modern, graph structure tools. The tractable cases will include formulas whose class of incidence graphs have not only unbounded treewidth but also unbounded clique-width. We show that our algorithms extend all previous results for MaxSAT and #SAT achieved by dynamic programming along structural decompositions of the incidence graph of the input formula. We present some limited experimental results, comparing implementations of our algorithms to state-of-the-art #SAT and MaxSAT solvers, as a proof of concept that warrants further research.

Journal ArticleDOI
TL;DR: It is shown how the multiagent planning problem can be reformulated as inference in a mixture of dynamic Bayesian networks (DBNs), an approach that paves the way for the application of efficient inference techniques in DBNs to multiagent decision making.
Abstract: Decentralized POMDPs provide an expressive framework for multiagent sequential decision making. However, the complexity of these models--NEXP-Complete even for two agents--has limited their scalability. We present a promising new class of approximation algorithms by developing novel connections between multiagent planning and machine learning. We show how the multiagent planning problem can be reformulated as inference in a mixture of dynamic Bayesian networks (DBNs). This planning-as-inference approach paves the way for the application of efficient inference techniques in DBNs to multiagent decision making. To further improve scalability, we identify certain conditions that are sufficient to extend the approach to multiagent systems with dozens of agents. Specifically, we show that the necessary inference within the expectation-maximization framework can be decomposed into processes that often involve a small subset of agents, thereby facilitating scalability. We further show that a number of existing multiagent planning models satisfy these conditions. Experiments on large planning benchmarks confirm the benefits of our approach in terms of runtime and scalability with respect to existing techniques.

Journal ArticleDOI
TL;DR: In the context of abstractions, the notion of additive abstractions for OSP is defined, the complexity of deriving effective abstractions from a rich space of hypotheses is studied, and some substantial, empirically relevant islands of tractability are revealed.
Abstract: While in classical planning the objective is to achieve one of the equally attractive goal states at as low total action cost as possible, the objective in deterministic oversubscription planning (OSP) is to achieve an as valuable as possible subset of goals within a fixed allowance of the total action cost. Although numerous applications in various fields share the latter objective, no substantial algorithmic advances have been made in deterministic OSP. Tracing the key sources of progress in classical planning, we identify a severe lack of effective domain-independent approximations for OSP. With our focus here on optimal planning, our goal is to bridge this gap. Two classes of approximation techniques have been found especially useful in the context of optimal classical planning: those based on state-space abstractions and those based on logical landmarks for goal reachability. The question we study here is whether some similar-in-spirit, yet possibly mathematically different, approximation techniques can be developed for OSP. In the context of abstractions, we define the notion of additive abstractions for OSP, study the complexity of deriving effective abstractions from a rich space of hypotheses, and reveal some substantial, empirically relevant islands of tractability. In the context of landmarks, we show how standard goal-reachability landmarks of certain classical planning tasks can be compiled into the OSP task of interest, resulting in an equivalent OSP task with a lower cost allowance, and thus with a smaller search space. Our empirical evaluation confirms the effectiveness of the proposed techniques, and opens a wide gate for further developments in oversubscription planning.

Journal ArticleDOI
TL;DR: The results reveal two-way conjunctive regular path queries as a promising language for querying data enriched by ontologies formulated in DLs of the DL-Lite and EL families or the corresponding OWL 2 QL and EL profiles.
Abstract: Conjunctive regular path queries are an expressive extension of the well-known class of conjunctive queries. Such queries have been extensively studied in the (graph) database community, since they support a controlled form of recursion and enable sophisticated path navigation. Somewhat surprisingly, there has been little work aimed at using such queries in the context of description logic (DL) knowledge bases, particularly for the lightweight DLs that are considered best suited for data-intensive applications. This paper aims to bridge this gap by providing algorithms and tight complexity bounds for answering two-way conjunctive regular path queries over DL knowledge bases formulated in lightweight DLs of the DL-Lite and EL families. Our results demonstrate that in data complexity, the cost of moving to this richer query language is as low as one could wish for: the problem is NL-complete for DL-Lite and P-complete for EL. The combined complexity of query answering increases from NP- to PSPACE-complete, but for two-way regular path queries (without conjunction), we show that query answering is tractable even with respect to combined complexity. Our results reveal two-way conjunctive regular path queries as a promising language for querying data enriched by ontologies formulated in DLs of the DL-Lite and EL families or the corresponding OWL 2 QL and EL profiles.

Journal ArticleDOI
TL;DR: In this article, the problem of computing possible and necessary winners for partially specified weighted and unweighted tournaments was studied and a number of well-known solution concepts, including uncovered set, Borda, ranked pairs, and maximin, were considered.
Abstract: We study the problem of computing possible and necessary winners for partially specified weighted and unweighted tournaments. This problem arises naturally in elections with incompletely specified votes, partially completed sports competitions, and more generally in any scenario where the outcome of some pairwise comparisons is not yet fully known. We specifically consider a number of well-known solution concepts--including the uncovered set, Borda, ranked pairs, and maximin--and show that for most of them, possible and necessary winners can be identified in polynomial time. These positive algorithmic results stand in sharp contrast to earlier results concerning possible and necessary winners given partially specified preference profiles.

Journal ArticleDOI
TL;DR: The empirical results show that not only does ITSAT outperform the state-of-the-art temporally expressive planners, it is also competitive with the fast temporal planners that cannot handle required concurrency.
Abstract: Planning as satisfiability is known as an efficient approach to deal with many types of planning problems. However, this approach has not been competitive with the state-space based methods in temporal planning. This paper describes ITSAT as an efficient SAT-based (satisfiability based) temporal planner capable of temporally expressive planning. The novelty of ITSAT lies in the way it handles temporal constraints of given problems without getting involved in the difficulties of introducing continuous variables into the corresponding satisfiability problems. We also show how, as in SAT-based classical planning, carefully devised preprocessing and encoding schemata can considerably improve the efficiency of SAT-based temporal planning. We present two preprocessing methods for mutex relation extraction and action compression. We also show that the separation of causal and temporal reasoning enables us to employ compact encodings that are based on the concept of parallel execution semantics. Although such encodings have been shown to be quite effective in classical planning, ITSAT is the first temporal planner utilizing this type of encoding. Our empirical results show that not only does ITSAT outperform the state-of-the-art temporally expressive planners, it is also competitive with the fast temporal planners that cannot handle required concurrency.

Journal ArticleDOI
TL;DR: It is shown that MAP can sometimes be efficiently expectation-approximated, even in instances with high tree-width, if the most probable explanation has a high probability, and the complexity class FERT, analogous to the class FPT, is introduced to capture this notion of fixed-parameter expectation-approximability.
Abstract: The problem of finding the most probable explanation to a designated set of variables given partial evidence (the MAP problem) is a notoriously intractable problem in Bayesian networks, both to compute exactly and to approximate. It is known, both from theoretical considerations and from practical experience, that low tree-width is typically an essential prerequisite to efficient exact computations in Bayesian networks. In this paper we investigate whether the same holds for approximating MAP. We define four notions of approximating MAP (by value, structure, rank, and expectation) and argue that all of them are intractable in general. We prove that efficient value-approximations, structure-approximations, and rank-approximations of MAP instances with high tree-width will violate the Exponential Time Hypothesis. In contrast, we show that MAP can sometimes be efficiently expectation-approximated, even in instances with high tree-width, if the most probable explanation has a high probability. We introduce the complexity class FERT, analogous to the class FPT, to capture this notion of fixed-parameter expectation-approximability. We suggest a road-map to future research that yields fixed-parameter tractable results for expectation-approximate MAP, even in graphs with high tree-width.

Journal ArticleDOI
TL;DR: It is argued that bisimulations are useful tools to prove (non)expressivity results for the logics studied here, and this claim is substantiated with examples.
Abstract: We investigate model theoretic properties of XPath with data (in)equality tests over the class of data trees, i.e., the class of trees where each node contains a label from a finite alphabet and a data value from an infinite domain. We provide notions of (bi)simulations for XPath logics containing the child, parent, ancestor and descendant axes to navigate the tree. We show that these notions precisely characterize the equivalence relation associated with each logic. We study formula complexity measures consisting of the number of nested axes and nested subformulas in a formula; these notions are akin to the notion of quantifier rank in first-order logic. We show characterization results for fine grained notions of equivalence and (bi)simulation that take into account these complexity measures. We also prove that positive fragments of these logics correspond to the formulas preserved under (non-symmetric) simulations. We show that the logic including the child axis is equivalent to the fragment of first-order logic invariant under the corresponding notion of bisimulation. If upward navigation is allowed the characterization fails but a weaker result can still be established. These results hold both over the class of possibly infinite data trees and over the class of finite data trees. Besides their intrinsic theoretical value, we argue that bisimulations are useful tools to prove (non)expressivity results for the logics studied here, and we substantiate this claim with examples.

Journal ArticleDOI
TL;DR: A heuristic approximation procedure that is based on SVMs is considered, and it is shown empirically that this algorithm consistently outperforms a traditional rejection mechanism based on distance from decision boundary.
Abstract: A pointwise competitive classifier from class F is required to classify identically to the best classifier in hindsight from F. For noisy, agnostic settings we present a strategy for learning pointwise-competitive classifiers from a finite training sample provided that the classifier can abstain from prediction at a certain region of its choice. For some interesting hypothesis classes and families of distributions, the measure of this rejected region is shown to be diminishing at rate β₁ · O((polylog(m) · log(1/δ)/m)^(β₂/2)), with high probability, where m is the sample size, δ is the standard confidence parameter, and β₁, β₂ are smoothness parameters of a Bernstein type condition of the associated excess loss class (related to F and the 0/1 loss). Exact implementation of the proposed learning strategy is dependent on an ERM oracle that is hard to compute in the agnostic case. We thus consider a heuristic approximation procedure that is based on SVMs, and show empirically that this algorithm consistently outperforms a traditional rejection mechanism based on distance from decision boundary.
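
For orientation, the following sketch implements the traditional baseline mentioned above, namely rejecting (abstaining on) test points that fall within a fixed band around the SVM decision boundary; the dataset, kernel and rejection threshold are arbitrary illustrative choices, and this is the mechanism the paper's procedure is compared against, not the proposed procedure itself.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    X_train, y_train, X_test = X[:400], y[:400], X[400:]

    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

    margins = clf.decision_function(X_test)       # signed distance-like score
    threshold = 0.5                               # illustrative rejection band
    predictions = np.where(np.abs(margins) < threshold,
                           -1,                    # -1 marks abstention
                           clf.predict(X_test))
    print("rejection rate:", float(np.mean(predictions == -1)))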

Journal ArticleDOI
TL;DR: A novel approach to Compressed Path Databases, space-efficient oracles used to very quickly identify the first edge on a shortest path, is introduced; it is significantly faster than state-of-the-art first-move oracles from the literature.
Abstract: We introduce a novel approach to Compressed Path Databases, space efficient oracles used to very quickly identify the first edge on a shortest path. Our algorithm achieves query running times on the 100 nanosecond scale, being significantly faster than state-of-the-art first-move oracles from the literature. Space consumption is competitive, due to a compression approach that rearranges rows and columns in a first-move matrix and then performs run length encoding (RLE) on the contents of the matrix. One variant of our implemented system was, by a convincing margin, the fastest entry in the 2014 Grid-Based Path Planning Competition. We give a first tractability analysis for the compression scheme used by our algorithm. We study the complexity of computing a database of minimum size for general directed and undirected graphs. We find that in both cases the problem is NP-complete. We also show that, for graphs which can be decomposed along articulation points, the problem can be decomposed into independent parts, with a corresponding reduction in its level of difficulty. In particular, this leads to simple and tractable algorithms with linear running time which yield optimal compression results for trees.
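
A toy sketch of the compression step described above: store, for one source node, the first move toward every target and run-length encode that row. Real Compressed Path Databases additionally reorder rows and columns so that such runs become long, and answer queries by binary search over the runs; the map data here is invented.

    def run_length_encode(row):
        """Compress a first-move row into (run start index, move) pairs."""
        runs, start = [], 0
        for i in range(1, len(row) + 1):
            if i == len(row) or row[i] != row[start]:
                runs.append((start, row[start]))
                start = i
        return runs

    def first_move(runs, target):
        """Look up the run containing `target` (linear scan for clarity)."""
        move = runs[0][1]
        for start, m in runs:
            if start > target:
                break
            move = m
        return move

    row = ["N", "N", "N", "E", "E", "E", "E", "S"]   # first moves toward targets 0..7
    runs = run_length_encode(row)                    # [(0, 'N'), (3, 'E'), (7, 'S')]
    print(runs, first_move(runs, 5))                 # first move toward target 5 is 'E'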

Journal ArticleDOI
Hannes Strass
TL;DR: It is demonstrated that abstract dialectical frameworks, abstract argumentation frameworks, normal logic programs and propositional logic form a neat hierarchy with respect to their expressiveness under two-valued semantics, and that this hierarchy collapses once a linear number of new vocabulary elements may be introduced.
Abstract: We analyse the expressiveness of Brewka and Woltran's abstract dialectical frameworks for two-valued semantics. By expressiveness we mean the ability to encode a desired set of two-valued interpretations over a given propositional vocabulary A using only atoms from A. We also compare ADFs' expressiveness with that of (the two-valued semantics of) abstract argumentation frameworks, normal logic programs and propositional logic. While the computational complexity of the two-valued model existence problem for all these languages is (almost) the same, we show that the languages form a neat hierarchy with respect to their expressiveness. We then demonstrate that this hierarchy collapses once we allow the introduction of a linear number of new vocabulary elements. We finally also analyse and compare the representational succinctness of ADFs (for two-valued model semantics), that is, their capability to represent two-valued interpretation sets in a space-efficient manner.

Journal ArticleDOI
TL;DR: A classifier for predicting the singleton/coreferent distinction is built and the model's feature representations synthesize linguistic insights about the factors affecting discourse entity lifespans with existing results about the benefits of "surface" features for coreference resolution.
Abstract: A discourse typically involves numerous entities, but few are mentioned more than once. Distinguishing those that die out after just one mention (singleton) from those that lead longer lives (coreferent) would dramatically simplify the hypothesis space for coreference resolution models, leading to increased performance. To realize these gains, we build a classifier for predicting the singleton/coreferent distinction. The model's feature representations synthesize linguistic insights about the factors affecting discourse entity lifespans (especially negation, modality, and attitude predication) with existing results about the benefits of "surface" (part-of-speech and n-gram-based) features for coreference resolution. The model is effective in its own right, and the feature representations help to identify the anchor phrases in bridging anaphora as well. Furthermore, incorporating the model into two very different state-of-the-art coreference resolution systems, one rule-based and the other learning-based, yields significant performance improvements.

Journal ArticleDOI
TL;DR: This article proposes a solution to the problem of obtaining plausibility information, which is necessary to perform belief revision: given a sequence of revisions, together with their results, derive a possible initial order that has generated them; this is different from the usual assumption of starting from an all-equal initial order.
Abstract: This article proposes a solution to the problem of obtaining plausibility information, which is necessary to perform belief revision: given a sequence of revisions, together with their results, derive a possible initial order that has generated them; this is different from the usual assumption of starting from an all-equal initial order and modifying it by a sequence of revisions. Four semantics for iterated revision are considered: natural, restrained, lexicographic and reinforcement. For each, a necessary and sufficient condition for the existence of an order generating a given history of revisions and results is proved. Complexity is proved coNP-complete in all cases but one (reinforcement revision with unbounded sequence length).

Journal ArticleDOI
TL;DR: In this paper, a modal logic, called CP, interpreted on the equivalence classes induced by finite sets of propositional atoms is studied and then used to embed three logics of strategic interaction.
Abstract: The article introduces a ceteris paribus modal logic, called CP, interpreted on the equivalence classes induced by finite sets of propositional atoms. This logic is studied and then used to embed three logics of strategic interaction, namely atemporal STIT, the coalition logic of propositional control (CL-PC) and the starless fragment of the dynamic logic of propositional assignments (DL-PA). The embeddings highlight a common ceteris paribus structure underpinning the key operators of all these apparently very different logics and show, we argue, remarkable similarities behind some of the most influential formalisms for reasoning about strategic interaction.

Journal ArticleDOI
TL;DR: This paper considers the problem of placing as few loading stations as possible such that every shortest path passes enough of them to guarantee sufficient energy supply, so that EV owners no longer have to plan their trips around loading station locations or accept long detours to reach their destinations.
Abstract: Compared to conventional cars, electric vehicles (EVs) still suffer from considerably shorter cruising ranges. Combined with the sparsity of battery loading stations, the complete transition to E-mobility still seems a long way to go. In this paper, we consider the problem of placing as few loading stations as possible so that on any shortest path there are sufficiently many not to run out of energy. We show how to model this problem and introduce heuristics which provide close-to-optimal solutions even in large road networks.
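
A rough sketch, in the spirit of the problem statement above, of a greedy placement heuristic: repeatedly add the candidate node that makes the largest number of shortest paths feasible, where a path is feasible if the distance between consecutive charging opportunities never exceeds the vehicle range. The graph encoding and the greedy rule are illustrative assumptions, not the heuristics developed in the paper.

    def path_ok(graph, path, stations, ev_range):
        """graph: dict node -> dict neighbour -> edge length (in range units)."""
        dist_since_charge = 0.0
        for u, v in zip(path, path[1:]):
            dist_since_charge += graph[u][v]
            if dist_since_charge > ev_range:
                return False
            if v in stations:
                dist_since_charge = 0.0       # recharge at a placed station
        return True

    def greedy_stations(graph, shortest_paths, ev_range):
        stations, candidates = set(), set(graph)
        while candidates and not all(path_ok(graph, p, stations, ev_range)
                                     for p in shortest_paths):
            best = max(candidates,
                       key=lambda n: sum(path_ok(graph, p, stations | {n}, ev_range)
                                         for p in shortest_paths))
            stations.add(best)
            candidates.discard(best)
        return stations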

Journal ArticleDOI
TL;DR: A suite of distributed algorithms for evaluating Multi-Context Systems is presented, which includes a basic algorithm DMCS, an advanced version DMCSOPT that exploits topology-based optimizations, and a streaming algorithm DMCS-STREAMING that computes equilibria in packages of bounded size.
Abstract: Multi-context Systems (MCSs) are a formalism for systems consisting of knowledge bases (possibly heterogeneous and non-monotonic) that are interlinked via bridge rules, where the global system semantics emerges from the local semantics of the knowledge bases (also called "contexts") in an equilibrium. While MCSs and related formalisms are inherently targeted for distributed settings, no truly distributed algorithms for their evaluation were available. We address this shortcoming and present a suite of such algorithms which includes a basic algorithm DMCS, an advanced version DMCSOPT that exploits topology-based optimizations, and a streaming algorithm DMCS-STREAMING that computes equilibria in packages of bounded size. The algorithms behave quite differently in several respects, as experienced in thorough experimental evaluation of a system prototype. From the experimental results, we derive a guideline for choosing the appropriate algorithm and running mode in particular situations, determined by the parameter settings.

Journal ArticleDOI
TL;DR: There is an infinity of inference systems that satisfy representation insensitivity--a strengthened version of Walley's representation invariance--and specificity, and it is proved that the Haldane inference system produces the same posterior inferences as would be obtained using Haldane's improper prior.
Abstract: Coherent reasoning under uncertainty can be represented in a very general manner by coherent sets of desirable gambles. In a context that does not allow for indecision, this leads to an approach that is mathematically equivalent to working with coherent conditional probabilities. If we do allow for indecision, this leads to a more general foundation for coherent (imprecise-)probabilistic inference. In this framework, and for a given finite category set, coherent predictive inference under exchangeability can be represented using Bernstein coherent cones of multivariate polynomials on the simplex generated by this category set. This is a powerful generalisation of de Finetti's Representation Theorem allowing for both imprecision and indecision. We define an inference system as a map that associates a Bernstein coherent cone of polynomials with every finite category set. Many inference principles encountered in the literature can then be interpreted, and represented mathematically, as restrictions on such maps. We discuss, as particular examples, two important inference principles: representation insensitivity--a strengthened version of Walley's representation invariance--and specificity. We show that there is an infinity of inference systems that satisfy these two principles, amongst which we discuss in particular the skeptically cautious inference system, the inference systems corresponding to (a modified version of) Walley and Bernard's Imprecise Dirichlet Multinomial Models (IDMM), the skeptical IDMM inference systems, and the Haldane inference system. We also prove that the latter produces the same posterior inferences as would be obtained using Haldane's improper prior, implying that there is an infinity of proper priors that produce the same coherent posterior inferences as Haldane's improper one. Finally, we impose an additional inference principle that allows us to characterise uniquely the immediate predictions for the IDMM inference systems.
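
As a numerical point of reference for the (modified) IDMM inference systems discussed above, the sketch below computes the standard lower and upper predictive probabilities of Walley's Imprecise Dirichlet Model, n_i/(N+s) and (n_i+s)/(N+s), for observed counts n_i out of N and hyperparameter s; the paper's inference systems refine this set-up, so the formulas are shown only for orientation.

    def idm_predictive_interval(counts, category, s=2.0):
        """Lower/upper predictive probability of `category` under Walley's IDM."""
        n_total = sum(counts.values())
        n_i = counts.get(category, 0)
        return n_i / (n_total + s), (n_i + s) / (n_total + s)

    # After 3 red and 7 black draws, the next draw is red with probability
    # somewhere in [0.25, 0.4166...] according to the IDM with s = 2.
    print(idm_predictive_interval({"red": 3, "black": 7}, "red"))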

Journal ArticleDOI
TL;DR: This work investigates several enhancements to a leading real-time search algorithm, LSS-LRTA*, and shows experimentally that it is better to plan after each action or to use a dynamically sized lookahead, that A*-based lookahead can cause undesirable actions to be selected, and that on-line de-biasing of the heuristic can lead to improved performance.
Abstract: In real-time domains such as video games, planning happens concurrently with execution and the planning algorithm has a strictly bounded amount of time before it must return the next action for the agent to execute. We explore the use of real-time heuristic search in two benchmark domains inspired by video games. Unlike classic benchmarks such as grid pathfinding and the sliding tile puzzle, these new domains feature exogenous change and directed state space graphs. We consider the setting in which planning and acting are concurrent and we use the natural objective of minimizing goal achievement time. Using both the classic benchmarks and the new domains, we investigate several enhancements to a leading real-time search algorithm, LSS-LRTA*. We show experimentally that 1) it is better to plan after each action or to use a dynamically sized lookahead, 2) A*-based lookahead can cause undesirable actions to be selected, and 3) on-line de-biasing of the heuristic can lead to improved performance. We hope this work encourages future research on applying real-time search in dynamic domains.
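
For context, the sketch below shows the core step of plain LRTA*, the one-step-lookahead ancestor of LSS-LRTA*: look ahead from the current state, raise the stored heuristic value toward the best lookahead value, and commit to the corresponding action. The successor generator and heuristic table are placeholders supplied by the domain; LSS-LRTA* replaces the one-step lookahead with a bounded A* search and a Dijkstra-style learning phase.

    def lrta_star_step(state, successors, h):
        """One planning/learning/acting step of LRTA*.

        successors(state) yields (action, cost, next_state) triples;
        h is a mutable dict holding the current heuristic estimates.
        """
        best_action, best_state, best_f = None, None, float("inf")
        for action, cost, next_state in successors(state):
            f = cost + h.get(next_state, 0.0)
            if f < best_f:
                best_action, best_state, best_f = action, next_state, f
        h[state] = max(h.get(state, 0.0), best_f)   # learning: raise h toward the lookahead value
        return best_action, best_state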