
Showing papers in "Annals of Mathematics and Artificial Intelligence in 2011"


Journal ArticleDOI
TL;DR: In this paper, a multi-armed bandit episode consists of n trials, each allowing selection of one of K arms, resulting in payoff from a distribution over [0, 1] associated with that arm.
Abstract: A multi-armed bandit episode consists of n trials, each allowing selection of one of K arms, resulting in payoff from a distribution over [0,1] associated with that arm. We assume contextual side information is available at the start of the episode. This context enables an arm predictor to identify possible favorable arms, but predictions may be imperfect so that they need to be combined with further exploration during the episode. Our setting is an alternative to classical multi-armed bandits which provide no contextual side information, and is also an alternative to contextual bandits which provide new context each individual trial. Multi-armed bandits with episode context can arise naturally, for example in computer Go where context is used to bias move decisions made by a multi-armed bandit algorithm. The UCB1 algorithm for multi-armed bandits achieves worst-case regret bounded by $O\left(\sqrt{Kn\log(n)}\right)$. We seek to improve this using episode context, particularly in the case where K is large. Using a predictor that places weight $M_i > 0$ on arm i with weights summing to 1, we present the PUCB algorithm which achieves regret $O\left(\frac{1}{M_{\ast}}\sqrt{n\log(n)}\right)$ where $M_{\ast}$ is the weight on the optimal arm. We illustrate the behavior of PUCB with small simulation experiments, present extensions that provide additional capabilities for PUCB, and describe methods for obtaining suitable predictors for use with PUCB.

154 citations
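
To make the predictor-weighted selection idea concrete, here is a minimal sketch of one way a UCB-style rule can use per-arm predictor weights, assuming payoffs in [0, 1]. The prior smoothing, the constant c, and the exact form of the bonus are illustrative assumptions, not the PUCB formula from the paper.

```python
import math

def pucb_select(counts, sums, weights, t, c=2.0):
    """Pick an arm using a predictor-weighted UCB rule.

    counts[i]  - pulls of arm i so far
    sums[i]    - total payoff collected from arm i (payoffs in [0, 1])
    weights[i] - predictor weight M_i > 0, with weights summing to 1
    t          - current trial (1-based)
    The exploration bonus is scaled by M_i, so arms the predictor favours are
    tried first even when K is large; the constants and the exact bonus used
    by PUCB in the paper differ from this illustrative form.
    """
    def score(i):
        n, s, m = counts[i], sums[i], weights[i]
        # a one-pull optimistic prior avoids dividing by zero for unplayed arms
        return (s + m) / (n + 1) + c * m * math.sqrt(math.log(t + 1) / (n + 1))
    return max(range(len(counts)), key=score)

# one episode step: 3 arms, the predictor strongly favours arm 0
arm = pucb_select(counts=[0, 0, 0], sums=[0.0, 0.0, 0.0],
                  weights=[0.7, 0.2, 0.1], t=1)
print(arm)   # 0
```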


Journal ArticleDOI
TL;DR: This work uses an evolutionary algorithm to evolve instances that are uniquely easy or hard for each algorithm, thus providing a more direct method for studying the relative strengths and weaknesses of each algorithm.
Abstract: The suitability of an optimisation algorithm selected from within an algorithm portfolio depends upon the features of the particular instance to be solved. Understanding the relative strengths and weaknesses of different algorithms in the portfolio is crucial for effective performance prediction, automated algorithm selection, and for generating knowledge about the ideal conditions for each algorithm, which can inform better algorithm design. Relying on well-studied benchmark instances, or randomly generated instances, limits our ability to truly challenge each of the algorithms in a portfolio and determine these ideal conditions. Instead we use an evolutionary algorithm to evolve instances that are uniquely easy or hard for each algorithm, thus providing a more direct method for studying the relative strengths and weaknesses of each algorithm. The proposed methodology ensures that the meta-data is sufficient to be able to learn the features of the instances that uniquely characterise the ideal conditions for each algorithm. A case study is presented based on a comprehensive study of the performance of two heuristics on the Travelling Salesman Problem. The results show that prediction of search effort as well as the best-performing algorithm for a given instance can be achieved with high accuracy.

97 citations
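
A toy sketch of the evolve-instances idea: a small evolutionary loop that perturbs TSP instances so that a plain nearest-neighbour tour is much worse than the same tour after one 2-opt pass. The two heuristics, the fitness (tour-quality gap rather than the search-effort measures studied in the paper), and all parameters are illustrative stand-ins.

```python
import math
import random

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(cities):
    unvisited, tour = set(range(1, len(cities))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(cities[tour[-1]], cities[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def two_opt_once(cities, tour):
    # a single improvement pass, standing in for a second, stronger heuristic
    best = tour[:]
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(cities, cand) < tour_length(cities, best):
                best = cand
    return best

def fitness(cities):
    # instances "hard for nearest neighbour, easy for NN + 2-opt" score highly
    nn = nearest_neighbour(cities)
    return tour_length(cities, nn) - tour_length(cities, two_opt_once(cities, nn))

def mutate(cities, sigma=0.05):
    child = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma)) for x, y in cities]
    return [(min(max(x, 0.0), 1.0), min(max(y, 0.0), 1.0)) for x, y in child]

random.seed(0)
pop = [[(random.random(), random.random()) for _ in range(20)] for _ in range(10)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)          # keep the 5 "hardest" instances
    pop = pop[:5] + [mutate(random.choice(pop[:5])) for _ in range(5)]
print("best gap:", round(fitness(pop[0]), 3))
```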


Journal ArticleDOI
TL;DR: This paper proposes an approach that guarantees conflict-free extensions of an argumentation framework and presents three dominance relations that generalize, respectively, stable, preferred and grounded semantics with preferences and retrieves the preferred sub-theories which were proposed in the context of handling inconsistency in weighted knowledge bases.
Abstract: Dung's argumentation framework consists of a set of arguments and an attack relation among them. Arguments are evaluated and acceptable sets of them, called extensions, are computed using a given semantics. Each extension represents a coherent position. In the literature, several proposals have extended this framework in order to take into account the strength of arguments. The basic idea is to ignore an attack if the attacked argument is stronger than (or preferred to) its attacker. Semantics are then applied using only the remaining attacks. In this paper, we show that those proposals behave correctly when the attack relation is symmetric. However, when it is asymmetric, conflicting extensions may be computed leading to unintended conclusions. We propose an approach that guarantees conflict-free extensions. This approach presents two novelties: the first one is that it takes into account preferences at the semantics level rather than the attack level. The idea is to extend existing semantics with preferences. In case preferences are not available or do not conflict with the attacks, the extensions of the new semantics coincide with those of the basic ones. The second novelty of our approach is that a semantics is defined as a dominance relation on the powerset of the set of arguments. The extensions (under a semantics) are the maximal elements of the dominance relation. Such an approach makes it possible not only to compute the extensions of a framework but also to compare its non-extensions. We start by proposing three dominance relations that generalize respectively stable, preferred and grounded semantics with preferences. Then, we focus on stable semantics and provide full characterizations of its dominance relations and those of its generalized versions. Complexity results are provided. Finally, we show that an instance of the proposed framework retrieves the preferred sub-theories which were proposed in the context of handling inconsistency in weighted knowledge bases.

81 citations
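
For reference, the classical (preference-free) stable semantics that the paper generalises can be enumerated by brute force on small frameworks. This sketch only illustrates the baseline semantics, not the dominance-relation construction over the powerset proposed in the paper.

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Enumerate stable extensions of a Dung framework by brute force.

    args    - iterable of argument names
    attacks - set of (attacker, attacked) pairs
    A set E is stable iff it is conflict-free and attacks every argument
    outside E.
    """
    args = list(args)
    result = []
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            E = set(subset)
            conflict_free = not any((a, b) in attacks for a in E for b in E)
            attacks_rest = all(any((a, b) in attacks for a in E)
                               for b in args if b not in E)
            if conflict_free and attacks_rest:
                result.append(E)
    return result

# Example: a attacks b, b attacks c; the unique stable extension is {a, c}
print(stable_extensions({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```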


Journal ArticleDOI
TL;DR: This paper develops and evaluates algorithms for solving MOCO problems, defined on Boolean domains, where the optimality criterion is lexicographic, and shows that lexicographic optimization conditions are observed in the majority of the problem instances from the MaxSAT evaluations.
Abstract: Multi-Objective Combinatorial Optimization (MOCO) problems arise in a wide range of practical applications, some of which involve Boolean variables and constraints. This paper develops and evaluates algorithms for solving MOCO problems, defined on Boolean domains, and where the optimality criterion is lexicographic. The proposed algorithms build on existing algorithms for either Maximum Satisfiability (MaxSAT), Pseudo-Boolean Optimization (PBO), or Integer Linear Programming (ILP). Experimental results, obtained on problem instances from haplotyping with pedigrees and software package dependencies, show that the proposed algorithms can provide significant performance gains over state-of-the-art MaxSAT, PBO and ILP algorithms. Finally, the paper also shows that lexicographic optimization conditions are observed in the majority of the problem instances from the MaxSAT evaluations, motivating the development of dedicated algorithms that can exploit lexicographic optimization conditions in general MaxSAT problem instances.

76 citations
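
A minimal illustration of lexicographic optimisation over Boolean variables: cost vectors are compared as tuples in priority order. The brute-force enumeration stands in for the MaxSAT/PBO/ILP engines used in the paper (which instead fix the optimum of each objective as a constraint before optimising the next) and only works for tiny instances.

```python
from itertools import product

def lex_optimise(n_vars, constraints, objectives):
    """Brute-force lexicographic optimisation over Boolean assignments.

    constraints - list of predicates assignment -> bool (all must hold)
    objectives  - cost functions in decreasing order of priority (minimised)
    """
    best, best_costs = None, None
    for bits in product([False, True], repeat=n_vars):
        if all(c(bits) for c in constraints):
            costs = tuple(f(bits) for f in objectives)
            if best_costs is None or costs < best_costs:  # tuple order is lexicographic
                best, best_costs = bits, costs
    return best, best_costs

# Toy instance: x0 or x1 must hold; first minimise the number of true variables,
# then prefer x1 over x0.
sol = lex_optimise(2,
                   [lambda b: b[0] or b[1]],
                   [lambda b: sum(b), lambda b: 0 if b[1] else 1])
print(sol)   # ((False, True), (1, 0))
```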


Journal ArticleDOI
TL;DR: An interface for connecting agent platforms to environments that provides generic functionality for executing actions and for perceiving changes in an agent’s environment and may be used as a standard that enables agents to control entities in environments.
Abstract: We introduce an interface for connecting agent platforms to environments. This interface provides generic functionality for executing actions and for perceiving changes in an agent's environment. It also provides support for managing an environment, e.g., for starting, pausing and terminating it. Among the benefits of such an interface are (1) standard functionality is provided by the interface implementation itself, and (2) agent platforms that support the interface can connect to any environment that implements the interface. This significantly reduces effort required from agent and environment programmers as the environment code needed to implement the interface needs to be written only once. We propose that the interface presented may be used as a standard that enables agents to control entities in environments. Our starting point for designing such a generic interface is based on a careful study of the various interfaces used by different agent programming languages to connect agent programs to environments. We discuss several case studies that use our interface (an elevator simulator, the well-known agent contest, and an implementation of the interface to connect agents to bots in Unreal Tournament 2004).

76 citations


Journal ArticleDOI
TL;DR: A new approach to database preference queries is presented, where preferences are represented in a possibilistic logic manner, using symbolic weights, and refinements of both Pareto ordering and minimum ordering are used.
Abstract: The paper presents a new approach to database preference queries, where preferences are represented in a possibilistic logic manner, using symbolic weights. The symbolic weights may be processed without assessing their precise values, which leaves the user free not to specify any priority among the preferences. The user may also enforce a (partial) ordering between them, if necessary. The approach can be related to the processing of fuzzy queries whose components are conditionally weighted in terms of importance. In this paper, importance levels are symbolically processed, and refinements of both Pareto ordering and minimum ordering are used. The representational power of the proposed setting is stressed, while the approach is compared with database Best operator-like methods and with the CP-net approach developed in artificial intelligence. The paper also provides a structured and rather broad overview of the different lines of research in the literature dealing with the handling of preferences in database queries.

53 citations


Journal ArticleDOI
TL;DR: The approach to integrating formal methods with SysML is illustrated with a typical macro-level aerospace design task and provides evidence that coupling formal methods with SysML can realistically be applied to solve aerospace development problems.
Abstract: Maintaining design consistency is a critical issue for macro-level aerospace development. The inability to maintain design consistency is a major contributor to cost and schedule overruns. By embedding the Systems Modeling Language (SysML) within a formal logic, formal methods can be used to maintain consistency as a design evolves. SysML, provided with a formal semantics, enables engineers to employ reasoning in the course of a typical model-based development process. Engineers can make use of formal methods within the context of current engineering practice and tools without needing to have special formal methods training. As component subsystems are introduced to refine a design, their assumptions are checked against current assumptions. If new assumptions do not introduce inconsistency, they are added to the model assumptions. If the assumptions render the design inconsistent, the inconsistency is detected, which minimizes potential rework. SysML has a demonstrated capability for top-to-bottom design refinement for large-scale aerospace systems. SysML does not have a formal logic-based semantics. The logical formalism within which SysML is embedded matches the informal semantics of SysML closely. The approach to integrating formal methods with SysML is illustrated with a typical macro-level aerospace design task. The design process produces a design solution which provably satisfies the top level requirements. The example provides evidence that coupling formal methods with SysML can realistically be applied to solve aerospace development problems. The approach results from a number of detailed design trades employing a model-based system development process which used SysML as the model integration framework.

39 citations


Journal ArticleDOI
TL;DR: Variants of TPLS that improve its “anytime” behavior by adaptively generating the sequence of weights while solving the problem are examined to fill the “largest gap” in the current approximation to the Pareto front.
Abstract: Algorithms based on the two-phase local search (TPLS) framework are a powerful method to efficiently tackle multi-objective combinatorial optimization problems. TPLS algorithms solve a sequence of scalarizations, that is, weighted sum aggregations, of the multi-objective problem. Each successive scalarization uses a different weight from a predefined sequence of weights. TPLS requires defining the stopping criterion (the number of weights) a priori, and it does not produce satisfactory results if stopped before completion. Therefore, TPLS has poor "anytime" behavior. This article examines variants of TPLS that improve its "anytime" behavior by adaptively generating the sequence of weights while solving the problem. The aim is to fill the "largest gap" in the current approximation to the Pareto front. The results presented here show that the best adaptive TPLS variants are superior to the "classical" TPLS strategies in terms of anytime behavior, matching, and often surpassing, them in terms of final quality, even if the latter run until completion.

39 citations
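
A sketch of the "largest gap" idea for a bi-objective problem, assuming a weighted-sum scalarisation: find the widest gap between neighbouring points of the current front and aim the next weight at it. The orthogonal-direction weight used here is one plausible reading of the strategy; the paper's exact adaptive weighting scheme may differ.

```python
import math

def next_weight(archive):
    """Pick the next scalarisation weight for a bi-objective problem.

    archive - list of non-dominated objective vectors (f1, f2) found so far.
    Locate the pair of neighbouring points with the largest Euclidean gap and
    return a weight vector orthogonal to the segment joining them, so the
    weighted-sum scalarisation pushes new solutions into that gap.
    """
    pts = sorted(archive)                       # sort by first objective
    gaps = [(math.dist(pts[i], pts[i + 1]), i) for i in range(len(pts) - 1)]
    _, i = max(gaps)
    (a1, a2), (b1, b2) = pts[i], pts[i + 1]
    w1, w2 = abs(b2 - a2), abs(b1 - a1)         # direction orthogonal to the segment
    total = w1 + w2
    return (w1 / total, w2 / total), (pts[i], pts[i + 1])

weights, gap = next_weight([(1.0, 9.0), (3.0, 6.0), (8.0, 1.0)])
print(weights, gap)   # (0.5, 0.5) targeting the gap between (3, 6) and (8, 1)
```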


Journal ArticleDOI
TL;DR: In this article, the authors propose three similarity measures based on existing set-based measures in addition to developing the completely novel zeros-induced measure and formally prove that all the measures are indeed similarity measures and investigate the computational complexity of computing them.
Abstract: Formal concept analysis (FCA) has been applied successfully in diverse fields such as data mining, conceptual modeling, social networks, software engineering, and the semantic web. One shortcoming of FCA, however, is the large number of concepts that typically arise in dense datasets hindering typical tasks such as rule generation and visualization. To overcome this shortcoming, it is important to develop formalisms and methods to segment, categorize and cluster formal concepts. The first step in achieving these aims is to define suitable similarity and dissimilarity measures of formal concepts. In this paper we propose three similarity measures based on existing set-based measures in addition to developing the completely novel zeros-induced measure. Moreover, we formally prove that all the measures proposed are indeed similarity measures and investigate the computational complexity of computing them. Finally, an extensive empirical evaluation on real-world data is presented in which the utility and character of each similarity measure is tested and evaluated.

39 citations
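
To make the set-based flavour concrete, here is a simple similarity between two formal concepts built from Jaccard overlaps of extents and intents. The convex combination and the weight alpha are illustrative; the three measures and the zeros-induced measure defined in the paper are different (the latter also exploits shared absent attributes).

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def concept_similarity(c1, c2, alpha=0.5):
    """Set-based similarity between two formal concepts.

    A formal concept is a pair (extent, intent): the objects it covers and the
    attributes they share.  The similarity below is a convex combination of
    the Jaccard overlap of the extents and of the intents.
    """
    (ext1, int1), (ext2, int2) = c1, c2
    return alpha * jaccard(ext1, ext2) + (1 - alpha) * jaccard(int1, int2)

c1 = ({"o1", "o2", "o3"}, {"warm-blooded", "has-fur"})
c2 = ({"o2", "o3"}, {"warm-blooded"})
print(round(concept_similarity(c1, c2), 3))   # 0.583
```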


Journal ArticleDOI
TL;DR: An agent language that combines agent functionality with a state transition theory and model-theoretic semantics, based on abductive logic programming, but employs a simplified state-free syntax, with an operational semantics that uses destructive updates to manipulate a database, which represents the current state of the environment.
Abstract: In this paper we present an agent language that combines agent functionality with a state transition theory and model-theoretic semantics. The language is based on abductive logic programming (ALP), but employs a simplified state-free syntax, with an operational semantics that uses destructive updates to manipulate a database, which represents the current state of the environment. The language builds upon the ALP combination of logic programs, to represent an agent's beliefs, and integrity constraints, to represent the agent's goals. Logic programs are used to define macro-actions, intensional predicates, and plans to reduce goals to sub-goals including actions. Integrity constraints are used to represent reactive rules, which are triggered by the current state of the database and recent agent actions and external events. The execution of actions and the assimilation of observations generate a sequence of database states. In the case of the successful solution of all goals, this sequence, taken as a whole, determines a model that makes the agent's goals and beliefs all true.

36 citations


Journal ArticleDOI
TL;DR: A trial solution of the model is formulated as an artificial feed-forward neural network containing unknown weights which are optimized in an unsupervised way based on the approximate solution of a well known Lane–Emden–Fowler (LEF) equation.
Abstract: The article is based on the approximate solution of a well-known Lane–Emden–Fowler (LEF) equation. A trial solution of the model is formulated as an artificial feed-forward neural network containing unknown weights which are optimized in an unsupervised way. The proposed scheme is tested successfully on various test cases of initial value problems of LEF equations. The reliability and effectiveness are validated through comprehensive statistical analysis.
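
A compact sketch of the trial-solution idea for the Lane–Emden case of index m: the trial solution satisfies the initial conditions by construction, and the network weights are fitted in an unsupervised way by minimising the squared ODE residual at collocation points. The specific equation, network size, optimiser, and finite-difference derivatives below are illustrative choices, not the scheme from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Lane-Emden equation of index m:  y'' + (2/x) y' + y^m = 0,  y(0) = 1, y'(0) = 0.
M, HIDDEN = 1, 6

def net(x, w):
    """Tiny feed-forward net: one tanh hidden layer, linear output."""
    w1, b1, w2, b2 = np.split(w, [HIDDEN, 2 * HIDDEN, 3 * HIDDEN])
    h = np.tanh(np.outer(x, w1) + b1)
    return h @ w2 + b2[0]

def trial(x, w):
    # y_t(0) = 1 and y_t'(0) = 0 hold by construction
    return 1.0 + x**2 * net(x, w)

def residual_loss(w, xs, h=1e-4):
    y = trial(xs, w)
    dy = (trial(xs + h, w) - trial(xs - h, w)) / (2 * h)
    d2y = (trial(xs + h, w) - 2 * y + trial(xs - h, w)) / h**2
    res = d2y + 2.0 / xs * dy + y**M
    return np.mean(res**2)

xs = np.linspace(0.05, 2.0, 40)          # collocation points (avoid x = 0)
w0 = 0.1 * np.random.default_rng(0).standard_normal(3 * HIDDEN + 1)
sol = minimize(residual_loss, w0, args=(xs,), method="BFGS")
# sanity check: for m = 1 the exact solution is sin(x)/x
print(np.max(np.abs(trial(xs, sol.x) - np.sin(xs) / xs)))
```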

Journal ArticleDOI
TL;DR: This paper demonstrates the processing of counterfactual sentences on a classical example due to Ernest Adams and gives a panoramic view of several applications where counterfactual reasoning has benefited problem areas in the empirical sciences.
Abstract: Recent advances in causal reasoning have given rise to a computation model that emulates the process by which humans generate, evaluate and distinguish counterfactual sentences. Though compatible with the "possible worlds" account, this model enjoys the advantages of representational economy, algorithmic simplicity and conceptual clarity. Using this model, the paper demonstrates the processing of counterfactual sentences on a classical example due to Ernest Adams. It then gives a panoramic view of several applications where counterfactual reasoning has benefited problem areas in the empirical sciences.

Journal ArticleDOI
TL;DR: A simpler version of GAMBLETA is proposed, in which the allocators correspond to the arms, such that a single portfolio is selected for each instance, represented as a bandit problem with partial information, and an unknown bound on losses.
Abstract: We propose a method that learns to allocate computation time to a given set of algorithms, of unknown performance, with the aim of solving a given sequence of problem instances in a minimum time. Analogous meta-learning techniques are typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. We adopt instead an online approach, named GAMBLETA, in which algorithm performance models are iteratively updated, and used to guide allocation on a sequence of problem instances. GAMBLETA is a general method for selecting among two or more alternative algorithm portfolios. Each portfolio has its own way of allocating computation time to the available algorithms, possibly based on performance models, in which case its performance is expected to improve over time, as more runtime data becomes available. The resulting exploration-exploitation trade-off is represented as a bandit problem. In our previous work, the algorithms corresponded to the arms of the bandit, and allocations evaluated by the different portfolios were mixed, using a solver for the bandit problem with expert advice, but this required the setting of an arbitrary bound on algorithm runtimes, invalidating the optimal regret of the solver. In this paper, we propose a simpler version of GAMBLETA, in which the allocators correspond to the arms, such that a single portfolio is selected for each instance. The selection is represented as a bandit problem with partial information, and an unknown bound on losses. We devise a solver for this game, proving a bound on its expected regret. We present experiments based on results from several solver competitions, in various domains, comparing GAMBLETA with another online method.
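
As an illustration of treating time allocators as arms with partial information, here is a plain Exp3 loop with importance-weighted loss updates. The solver devised in the paper additionally copes with an unknown bound on losses and comes with a regret bound; this sketch assumes a known bound and a fixed exploration rate, and `losses_for` is an invented stand-in for the normalised solving time of each portfolio.

```python
import math
import random

def exp3(n_arms, losses_for, rounds, gamma=0.1, loss_bound=1.0):
    """Plain Exp3 over a set of allocators (the bandit's arms).

    losses_for(t, arm) returns the time spent solving instance t with the
    portfolio chosen by `arm`; losses are clipped to [0, 1] via loss_bound.
    Only the pulled arm's loss is observed (partial information).
    """
    w = [1.0] * n_arms
    for t in range(rounds):
        total = sum(w)
        p = [(1 - gamma) * wi / total + gamma / n_arms for wi in w]
        arm = random.choices(range(n_arms), weights=p)[0]
        loss = min(losses_for(t, arm) / loss_bound, 1.0)
        # importance-weighted exponential update on the pulled arm only
        w[arm] *= math.exp(-gamma * loss / (n_arms * p[arm]))
    return w

# Toy run: allocator 1 tends to finish instances faster than allocator 0
random.seed(1)
weights = exp3(2, lambda t, a: random.uniform(0.6, 1.0) if a == 0
                               else random.uniform(0.1, 0.5), rounds=200)
print(weights)   # the weight of allocator 1 dominates
```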

Journal ArticleDOI
TL;DR: This paper introduces a Normative Programming Language (NPL)—a language dedicated to the development of normative programs and presents the interpreter for such a language and shows how it can be used within an organisation management infrastructure.
Abstract: The specification of multi-agent organisations is typically based on high-level modelling languages so as to simplify the task of software designers. Interpreting such high-level specifications as part of the organisation management infrastructure (OMI) is a difficult and cumbersome task. Simpler and more efficient tools need to be used for this. Based on primitives such as norms and obligations, we introduce in this paper a Normative Programming Language (NPL), a language dedicated to the development of normative programs. We present the interpreter for such a language and show how it can be used within an organisation management infrastructure. While designers and agents can still use a high-level organisational modelling language to specify and reason about the multi-agent organisation, the OMI interprets a simpler language. This is possible because the high-level specifications can be automatically translated into the simpler (normative) language. Our approach was used to develop an improved OMI for the Moise framework, as described in this paper. We also show how Moise's organisation modelling language (with primitives such as roles, groups, and goals) can be translated into NPL programs. Finally, we briefly describe how this all has been implemented on top of ORA4MAS, the distributed artifact-based organisation management infrastructure for Moise.

Journal ArticleDOI
TL;DR: This article focuses on combinations of (quantified) epistemic and doxastic logics and studies their application for modeling and automating the reasoning of rational agents.
Abstract: Numerous classical and non-classical logics can be elegantly embedded in Church's simple type theory, also known as classical higher-order logic. Examples include propositional and quantified multimodal logics, intuitionistic logics, logics for security, and logics for spatial reasoning. Furthermore, simple type theory is sufficiently expressive to model combinations of embedded logics and it has a well understood semantics. Off-the-shelf reasoning systems for simple type theory exist that can be uniformly employed for reasoning within and about embedded logics and logics combinations. In this article we focus on combinations of (quantified) epistemic and doxastic logics and study their application for modeling and automating the reasoning of rational agents. We present illustrating example problems and report on experiments with off-the-shelf higher-order automated theorem provers.

Journal ArticleDOI
TL;DR: It is argued that it is inevitable that automated provers will be adopted as a practical tool for the working mathematician.
Abstract: In contrast to the widespread use of computer algebra systems in mathematics, automated theorem provers have largely met with indifference. There are signs that this is at last beginning to change. We argue that it is inevitable that automated provers will be adopted as a practical tool for the working mathematician. Mathematical applications of automated provers raise profound challenges for their developers.

Journal ArticleDOI
TL;DR: The main verification algorithm and the structure of NeVer, the tool for checking safety of ANNs, are described and empirical results confirming the effectiveness of NeVer are presented on realistic case studies.
Abstract: The adoption of Artificial Neural Networks (ANNs) in safety-related applications is often avoided because it is difficult to rule out possible misbehaviors with traditional analytical or probabilistic techniques. In this paper we present NeVer, our tool for checking safety of ANNs. NeVer encodes the problem of verifying safety of ANNs into the problem of satisfying corresponding Boolean combinations of linear arithmetic constraints. We describe the main verification algorithm and the structure of NeVer. We also present empirical results confirming the effectiveness of NeVer on realistic case studies.

Journal ArticleDOI
TL;DR: In this article, the authors summarize the evidence for and against computational complexity being a barrier to manipulation of voting rules, and discuss some features that may change the computational complexity of computing a manipulation (for example, if votes are restricted to be single peaked).
Abstract: When agents are acting together, they may need a simple mechanism to decide on joint actions. One possibility is to have the agents express their preferences in the form of a ballot and use a voting rule to decide the winning action(s). Unfortunately, agents may try to manipulate such an election by mis-reporting their preferences. Fortunately, it has been shown that it is NP-hard to compute how to manipulate a number of different voting rules. However, NP-hardness only bounds the worst-case complexity. In this survey article, we summarize the evidence for and against computational complexity being a barrier to manipulation. We look both at techniques identified to increase complexity (for example, hybridizing together two or more voting rules), as well as other features that may change the computational complexity of computing a manipulation (for example, if votes are restricted to be single peaked then some of the complexity barriers fall away). We discuss recent theoretical results that consider the average case, as well as simple greedy and approximate methods. We also describe how computational “phase transitions”, which have been fruitful in identifying hard instances of propositional satisfiability and other NP-hard problems, have provided insight into the hardness of manipulating voting rules in practice. Finally, we consider manipulation of other related problems like stable marriage and tournament problems.
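
One of the simple greedy methods mentioned in the survey can be sketched for the Borda rule: the manipulator ranks the favourite candidate first and then assigns each remaining position to the candidate with the currently lowest total score. Tie-breaking and the final success check below are simplified, and the heuristic is not guaranteed to find a manipulation even when one exists.

```python
def borda_scores(profile, candidates):
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for pos, c in enumerate(ranking):
            scores[c] += m - 1 - pos
    return scores

def greedy_borda_manipulation(profile, candidates, favourite):
    """Greedily build one manipulative Borda ballot for `favourite`."""
    scores = borda_scores(profile, candidates)
    ballot = [favourite]
    remaining = [c for c in candidates if c != favourite]
    while remaining:
        weakest = min(remaining, key=lambda c: scores[c])
        ballot.append(weakest)        # give the next-highest rank to the weakest rival
        remaining.remove(weakest)
    final = borda_scores(profile + [ballot], candidates)
    return ballot, max(final, key=final.get) == favourite

# Two sincere voters leave all candidates tied; the greedy ballot makes "a" win.
profile = [["b", "a", "c"], ["c", "a", "b"]]
print(greedy_borda_manipulation(profile, ["a", "b", "c"], "a"))
```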

Journal ArticleDOI
TL;DR: This paper discusses three different approaches to test-case generation, their application to a separation assurance prototype, and their respective strengths and weaknesses, and presents an approach for statistical analysis of the large numbers of test results obtained from the framework.
Abstract: In order to address the rapidly increasing load of air traffic operations, innovative algorithms and software systems must be developed for the next generation air traffic control. Extensive verification of such novel algorithms is key for their adoption by industry. Separation assurance algorithms aim at predicting if two aircraft will get closer to each other than a minimum safe distance; if loss of separation is predicted, they also propose a change of course for the aircraft to resolve this potential conflict. In this paper, we report on our work towards developing an advanced testing framework for separation assurance. Our framework supports automated test case generation and testing, and defines test oracles that capture algorithm requirements. We discuss three different approaches to test-case generation, their application to a separation assurance prototype, and their respective strengths and weaknesses. We also present an approach for statistical analysis of the large numbers of test results obtained from our framework.
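
A minimal example of the kind of test oracle such a framework needs: the predicted minimum separation between two straight-line trajectories, compared against a 5 NM threshold. The kinematic model, units, and threshold are illustrative assumptions; the paper's oracles encode the actual requirements of the separation assurance prototype under test.

```python
import math

def min_separation(p1, v1, p2, v2, horizon):
    """Minimum horizontal distance between two aircraft on straight-line paths.

    p1, p2  - current positions (x, y), e.g. in nautical miles
    v1, v2  - constant velocities (vx, vy) per minute
    horizon - look-ahead time in minutes
    The closest approach occurs at t* = -(dp . dv) / |dv|^2, clamped to [0, horizon].
    """
    dp = (p2[0] - p1[0], p2[1] - p1[1])
    dv = (v2[0] - v1[0], v2[1] - v1[1])
    dv2 = dv[0] ** 2 + dv[1] ** 2
    t_star = 0.0 if dv2 == 0 else max(0.0, min(horizon,
                                               -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2))
    return math.hypot(dp[0] + t_star * dv[0], dp[1] + t_star * dv[1]), t_star

def loses_separation(p1, v1, p2, v2, horizon, min_sep=5.0):
    """Test oracle: does predicted separation drop below 5 NM within the horizon?"""
    dist, _ = min_separation(p1, v1, p2, v2, horizon)
    return dist < min_sep

# Head-on encounter: separation is lost well within a 20-minute horizon
print(loses_separation((0, 0), (8, 0), (100, 2), (-8, 0), horizon=20))   # True
```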

Journal ArticleDOI
TL;DR: ACOPlan as mentioned in this paper is a planner based on the ant colony optimization framework, in which a colony of planning ants searches for near optimal solution plans with respect to an overall plan cost metric.
Abstract: In this paper the system ACOPlan for planning with non-uniform action cost is introduced and analyzed. ACOPlan is a planner based on the ant colony optimization framework, in which a colony of planning ants searches for near optimal solution plans with respect to an overall plan cost metric. This approach is motivated by the strong similarity between the process used by artificial ants to build solutions and the methods used by state-based planners to search solution plans. Planning ants perform a stochastic and heuristic based search by interacting through a pheromone model. The proposed heuristic and pheromone models are presented and compared through systematic experiments on benchmark planning domains. Experiments are also provided to compare the quality of ACOPlan solution plans with respect to state-of-the-art satisficing planners. The analysis of the results confirms the good performance of the Action-Action pheromone model and points out the promising performance of the novel Fuzzy-Level-Action pheromone model. The analysis also suggests general principles for designing performant pheromone models for planning and further extensions of ACOPlan to other optimization models.

Journal ArticleDOI
TL;DR: An original hybrid approach to solve the Capacitated Vehicle Routing Problem (CVRP) is presented, which combines a Probabilistic Algorithm with Constraint Programming (CP) and Lagrangian Relaxation (LR) and a probabilistic Variable Neighbourhood Search (VNS) algorithm.
Abstract: This paper presents an original hybrid approach to solve the Capacitated Vehicle Routing Problem (CVRP). The approach combines a Probabilistic Algorithm with Constraint Programming (CP) and Lagrangian Relaxation (LR). After introducing the CVRP and reviewing the existing literature on the topic, the paper proposes an approach based on a probabilistic Variable Neighbourhood Search (VNS) algorithm. Given a CVRP instance, this algorithm uses a randomized version of the classical Clarke and Wright Savings constructive heuristic to generate a starting solution. This starting solution is then improved through a local search process which combines: (a) LR to optimise each individual route, and (b) CP to quickly verify the feasibility of new proposed solutions. The efficiency of our approach is analysed after testing some well-known CVRP benchmarks. Benefits of our hybrid approach over already existing approaches are also discussed. In particular, the potential flexibility of our methodology is highlighted.
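
A sketch of a randomised Clarke and Wright savings construction of the kind used to seed such a search: savings are precomputed, and each merge is drawn with a bias towards, but not restricted to, the largest remaining saving. The bias distribution, the endpoint-only merges, and the absence of the CP/LR improvement phase are simplifications relative to the approach described in the paper.

```python
import math
import random

def randomized_savings(depot, customers, demand, capacity, bias=0.3, seed=0):
    """Randomised Clarke-Wright savings construction for the CVRP.

    customers - {id: (x, y)}, demand - {id: load}, depot - (x, y).
    Routes start as one round trip per customer and are merged at their
    endpoints while the vehicle capacity allows; the merge picked at each step
    is drawn with a geometric-like bias instead of always taking the best.
    """
    rng = random.Random(seed)
    routes = {i: [i] for i in customers}            # customer -> its current route
    load = {i: demand[i] for i in customers}        # indexed by the route's first customer
    savings = sorted(((math.dist(depot, customers[i]) + math.dist(depot, customers[j])
                       - math.dist(customers[i], customers[j]), i, j)
                      for i in customers for j in customers if i < j), reverse=True)
    while savings:
        k = min(int(rng.expovariate(bias)), len(savings) - 1)   # biased, not greedy
        _, i, j = savings.pop(k)
        ri, rj = routes[i], routes[j]
        if ri is rj or load[ri[0]] + load[rj[0]] > capacity:
            continue
        if ri[-1] == i and rj[0] == j:              # merge only at route endpoints
            merged = ri + rj
        elif rj[-1] == j and ri[0] == i:
            merged = rj + ri
        else:
            continue
        for c in merged:
            routes[c] = merged
        load[merged[0]] = sum(demand[c] for c in merged)
    return {id(r): r for r in routes.values()}.values()

customers = {1: (2, 3), 2: (5, 1), 3: (6, 4), 4: (1, 6)}
demand = {1: 4, 2: 3, 3: 5, 4: 2}
print(list(randomized_savings((0, 0), customers, demand, capacity=10)))
```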

Journal ArticleDOI
TL;DR: This work defines the language, shows that it can be used to resolve inconsistencies and merge mappings from different matchers based on the level of confidence assigned to different rules, and shows that the well-founded semantics approximates the answer set semantics.
Abstract: Creating mappings between ontologies is a common way of approaching the semantic heterogeneity problem on the Semantic Web. To fit into the landscape of Semantic Web languages, a suitable, logic-based representation formalism for mappings is needed. We argue that such a formalism has to be able to deal with uncertainty and inconsistencies in automatically created mappings. We analyze the requirements for such a formalism, and we propose a novel approach to probabilistic description logic programs as such a formalism, which tightly combines normal logic programs under the well-founded semantics with both tractable ontology languages and Bayesian probabilities. We define the language, and we show that it can be used to resolve inconsistencies and merge mappings from different matchers based on the level of confidence assigned to different rules. Furthermore, we explore the semantic and computational aspects of probabilistic description logic programs under the well-founded semantics. In particular, we show that the well-founded semantics approximates the answer set semantics. We also describe algorithms for consistency checking and tight query processing, and we analyze the data and general complexity of these two central computational problems. As a crucial property, the novel tightly integrated probabilistic description logic programs under the well-founded semantics allow for tractable consistency checking and for tractable tight query processing in the data complexity, and they even have a first-order rewritable (and thus LogSpace data complexity) special case, which is especially interesting for representing ontology mappings.

Journal ArticleDOI
TL;DR: Inference systems for the combined class of functional and full hierarchical dependencies in relational databases are identified and a finite axiomatisation for the original notion of implication is established which clarifies the role of the complementation rule in the combined setting.
Abstract: We study inference systems for the combined class of functional and full hierarchical dependencies in relational databases. Two notions of implication are considered: the original notion in which a dependency is implied by a given set of dependencies and the underlying set of attributes, and the alternative notion in which a dependency is implied by a given set of dependencies alone. The first main result establishes a finite axiomatisation for the original notion of implication which clarifies the role of the complementation rule in the combined setting. In fact, we identify inference systems that are appropriate in the following sense: full hierarchical dependencies can be inferred without use of the complementation rule at all or with a single application of the complementation rule at the final step of the inference only; and functional dependencies can be inferred without any application of the complementation rule. The second main result establishes a finite axiomatisation for the alternative notion of implication. We further show how inferences of full hierarchical dependencies can be simulated by inferences of multivalued dependencies, and vice versa. This enables us to apply both of our main results to the combined class of functional and multivalued dependencies. Furthermore, we establish a novel axiomatisation for the class of non-trivial functional dependencies.

Journal ArticleDOI
TL;DR: The approach used to develop the multi-agent system of herders that competed as the Jason-DTU team at the Multi-Agent Programming Contest 2010 is described, which includes design and analysis of the system as well as the main features of the agent team strategy.
Abstract: We describe the approach used to develop the multi-agent system of herders that competed as the Jason-DTU team at the Multi-Agent Programming Contest 2010. We also participated in 2009 with a system developed in the agent-oriented programming language Jason which is an extension of AgentSpeak. We used the implementation from 2009 as a foundation and therefore much of the work done this year was on improving that implementation. We present a description which includes design and analysis of the system as well as the main features of our agent team strategy. In addition we discuss the technologies used to develop this system as well as our future goals in the area.

Journal ArticleDOI
TL;DR: A complete and decidable logical system that describes interdependencies that may exist on a fixed hypergraph and the axioms and inference rules in this system are shown to be independent in the standard logical sense.
Abstract: The article considers interdependencies between secrets in a multiparty system. Each secret is assumed to be known only to a certain fixed set of parties. These sets can be viewed as edges of a hypergraph whose vertices are the parties of the system. The properties of interdependencies are expressed through a multi-argument relation called independence, which is a generalization of a binary relation also known as nondeducibility. The main result is a complete and decidable logical system that describes interdependencies that may exist on a fixed hypergraph. Additionally, the axioms and inference rules in this system are shown to be independent in the standard logical sense.

Journal ArticleDOI
TL;DR: This paper addresses the existence and uniqueness of the solutions computed using the possibilistic counterparts of the so-called kinematics properties underlying Jeffrey’s rule of conditioning, and provides precise conditions under which a unique revised possibility distribution exists.
Abstract: Conditioning, belief update and revision are important tasks for designing intelligent systems. Possibility theory is among the powerful uncertainty theories particularly suitable for representing and reasoning with uncertain and incomplete information. This paper addresses an important issue related to the possibilistic counterparts of Jeffrey's rule of conditioning. More precisely, it addresses the existence and uniqueness of the solutions computed using the possibilistic counterparts of the so-called kinematics properties underlying Jeffrey's rule of conditioning. We first point out that, as in the probabilistic framework, in the quantitative possibilistic setting there exists a unique solution for revising a possibility distribution given the uncertainty bearing on a set of exhaustive and mutually exclusive events. However, in the qualitative possibilistic framework, the situation is different. In particular, the application of Jeffrey's rule of conditioning does not guarantee the existence of a solution. We provide precise conditions under which a unique revised possibility distribution exists.
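
In the quantitative (product-based) setting the abstract refers to, the revised distribution can be written down directly. A minimal sketch, assuming possibility degrees in [0, 1] and max-based event possibility; the qualitative (min-based) counterpart discussed in the paper does not always admit such a solution.

```python
def jeffrey_revise(pi, partition, new_poss):
    """Product-based possibilistic counterpart of Jeffrey's rule.

    pi        - {world: possibility degree}
    partition - {event label: set of worlds}, exhaustive and mutually exclusive
    new_poss  - {event label: prescribed possibility after revision}
    Each world is rescaled so that the revised possibility of event A_i equals
    new_poss[A_i]:  pi'(w) = new_poss[A_i] * pi(w) / Pi(A_i)  for w in A_i,
    where Pi(A_i) = max of pi over A_i.
    """
    revised = {}
    for label, worlds in partition.items():
        Pi_A = max(pi[w] for w in worlds)
        for w in worlds:
            revised[w] = new_poss[label] * pi[w] / Pi_A if Pi_A > 0 else 0.0
    return revised

pi = {"w1": 1.0, "w2": 0.7, "w3": 0.4, "w4": 0.2}
partition = {"A": {"w1", "w2"}, "B": {"w3", "w4"}}
print(jeffrey_revise(pi, partition, {"A": 0.5, "B": 1.0}))
# the revised distribution satisfies Pi(A) = 0.5 and Pi(B) = 1.0, as prescribed
```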

Journal ArticleDOI
TL;DR: This paper proves that MRE is at least NP-hard and defines a subproblem of MRE called MRE_k that finds the most relevant k-ary explanation and proves that the decision problem of MRE_k is $NP^{PP}$-complete.
Abstract: Most Relevant Explanation (MRE) is the problem of finding a partial instantiation of a set of target variables that maximizes the generalized Bayes factor as the explanation for given evidence in a Bayesian network. MRE has a huge solution space and is extremely difficult to solve in large Bayesian networks. In this paper, we first prove that MRE is at least NP-hard. We then define a subproblem of MRE called MRE_k that finds the most relevant k-ary explanation and prove that the decision problem of MRE_k is $NP^{PP}$-complete. Since MRE needs to find the best solution by MRE_k over all k, and we can also show that MRE is in $NP^{PP}$, we conjecture that the decision problem of MRE is $NP^{PP}$-complete as well. Furthermore, we show that MRE remains in $NP^{PP}$ even if we restrict the number of target variables to be within a log factor of the number of all unobserved variables. These complexity results prompt us to develop a suite of approximation algorithms for solving MRE. One algorithm finds an MRE solution by integrating reversible-jump MCMC and simulated annealing in simulating a non-homogeneous Markov chain that eventually concentrates its mass on the mode of a distribution of the GBF scores of all solutions. The other algorithms are all instances of local search methods, including forward search, backward search, and tabu search. We tested these algorithms on a set of benchmark diagnostic Bayesian networks. Our empirical results show that these methods could find optimal MRE solutions efficiently for most of the test cases in our experiments.
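
The score being maximised can be illustrated on a full joint distribution. This brute-force sketch of the generalised Bayes factor GBF(x; e) = P(e | x) / P(e | not-x) scores a single candidate explanation; real MRE solvers work on the Bayesian network directly rather than on an exponential joint table.

```python
def gbf(joint, target_assignment, evidence):
    """Generalised Bayes factor of a candidate partial explanation.

    joint             - {tuple of (var, value) pairs: probability}, a full joint
    target_assignment - candidate explanation x, e.g. {"X": 1}
    evidence          - observed evidence e, e.g. {"E": 1}
    """
    def marg(condition):
        return sum(p for assignment, p in joint.items()
                   if all(dict(assignment).get(v) == val for v, val in condition.items()))

    x_and_e = marg({**target_assignment, **evidence})
    x_only = marg(target_assignment)
    e_only = marg(evidence)
    p_e_given_x = x_and_e / x_only
    p_e_given_notx = (e_only - x_and_e) / (1 - x_only)
    return p_e_given_x / p_e_given_notx

# Two binary variables: X strongly raises the probability of E
joint = {(("X", 1), ("E", 1)): 0.27, (("X", 1), ("E", 0)): 0.03,
         (("X", 0), ("E", 1)): 0.07, (("X", 0), ("E", 0)): 0.63}
print(gbf(joint, {"X": 1}, {"E": 1}))   # approximately 9.0
```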

Journal ArticleDOI
TL;DR: A verifiable multiple UAV system cooperatively monitoring a road network is presented in this paper, and Kripke modelling is used to formally model the distributed cooperative control strategy, and to verify correctness of the specifications.
Abstract: A verifiable multiple UAV system cooperatively monitoring a road network is presented in this paper. The focus is on formal modelling and verification, which can guarantee correctness of concurrent reactive systems such as multi-UAV systems. Kripke modelling is used to formally model the distributed cooperative control strategy, and to verify correctness of the specifications. Desirable properties of the mission such as liveness are specified in Computation Tree Logic (CTL). Model checking is used to exhaustively explore the state space to verify whether the system behaviour, modelled by the Kripke model, satisfies the specifications. Violation of a specification is analysed by means of the counter-example generated by the SMV model checking tool.
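
As a small illustration of the verification step, here is an explicit-state least-fixpoint computation of the CTL operator EF over a toy Kripke structure. The states and transitions are invented for illustration; SMV performs this kind of check symbolically on the actual UAV mission model.

```python
def ef(states, transitions, goal):
    """States satisfying the CTL formula EF goal (some path reaches a goal state),
    computed as a least fixpoint by backward reachability over a Kripke structure.

    transitions - set of (s, s') pairs, goal - set of states
    """
    sat = set(goal)
    changed = True
    while changed:
        changed = False
        for (s, t) in transitions:
            if t in sat and s not in sat:
                sat.add(s)
                changed = True
    return sat

states = {"patrol", "detect", "track", "handover"}
transitions = {("patrol", "patrol"), ("patrol", "detect"),
               ("detect", "track"), ("track", "handover"), ("handover", "patrol")}
# reachability-style check: from every state, some execution can reach "handover"
print(ef(states, transitions, {"handover"}) == states)   # True
```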

Journal ArticleDOI
TL;DR: This work presents a novel sufficient condition for half positionality, more general than what was previously known, and compares it with several others, proposed in the recent literature, outlining an intricate network of relationships, where only a few combinations are sufficient.
Abstract: Half positionality is the property of a language of infinite words to admit positional winning strategies, when interpreted as the goal of a two-player game on a graph. This problem is relevant to the automatic synthesis of controllers, where positional strategies represent efficient controllers. As our main result, we present a novel sufficient condition for half positionality, more general than what was previously known. Moreover, we compare our proposed condition with several others, proposed in the recent literature, outlining an intricate network of relationships, where only a few combinations are sufficient for half positionality.

Journal ArticleDOI
TL;DR: The efficacy of the overall heuristic algorithm is demonstrated empirically both on a set of previously studied job-shop scheduling benchmark problems with sequence dependent setup times and by introducing a new benchmark with setups and generalized precedence constraints.
Abstract: This paper presents a heuristic algorithm for solving a job-shop scheduling problem with sequence dependent setup times and min/max separation constraints among the activities (SDST-JSSP/max). The algorithm relies on a core constraint-based search procedure, which generates consistent orderings of activities that require the same resource by incrementally imposing precedence constraints on a temporally feasible solution. Key to the effectiveness of the search procedure is a conflict sampling method biased toward selection of most critical conflicts and coupled with a non-deterministic choice heuristic to guide the base conflict resolution process. This constraint-based search is then embedded within a larger iterative-sampling search framework to broaden search space coverage and promote solution optimization. The efficacy of the overall heuristic algorithm is demonstrated empirically both on a set of previously studied job-shop scheduling benchmark problems with sequence dependent setup times and by introducing a new benchmark with setups and generalized precedence constraints.