
Showing papers by "Natasha Alechina published in 2017"


Journal ArticleDOI
TL;DR: This study evaluates MatchMaps with respect to the amount of human effort required for matching, and compares it with a fully manual matching process.
Abstract: A method for matching crowd-sourced and authoritative geospatial data is presented. A level of tolerance is defined as an input parameter as some difference in the geometry representation of a spatial object is to be expected. The method generates matches between spatial objects using location information and lexical information, such as names and types, and verifies consistency of matches using reasoning in qualitative spatial logic and description logic. We test the method by matching geospatial data from OpenStreetMap and the national mapping agencies of Great Britain and France. We also analyze how the level of tolerance affects the precision and recall of matching results for the same geographic area using 12 different levels of tolerance within a range of 1 to 80 meters. The generated matches show potential in helping enrich and update geospatial data.

24 citations


Proceedings ArticleDOI
08 May 2017
TL;DR: It is shown how team plans can be represented in terms of structural equations, and how the definitions of causality and degree of responsibility and blame are applied to determine the agent(s) who caused the failure and what their degree of responsibility/blame is.
Abstract: Many objectives can be achieved (or may be achieved more effectively) only by a group of agents executing a team plan. If a team plan fails, it is often of interest to determine what caused the failure, the degree of responsibility of each agent for the failure, and the degree of blame attached to each agent. We show how team plans can be represented in terms of structural equations, and then apply the definitions of causality introduced by Halpern [2015] and degree of responsibility and blame introduced by Chockler and Halpern [2004] to determine the agent(s) who caused the failure and what their degree of responsibility/blame is. We also prove new results on the complexity of computing causality and degree of responsibility and blame, showing that they can be determined in polynomial time for many team plans of interest.

20 citations


Journal ArticleDOI
TL;DR: It is shown that the model-checking problem for RB-ATL with unbounded production and consumption of resources is decidable but EXPSPACE-hard, and some tractable cases are investigated and a detailed comparison to a variant of the resource logic RAL is provided.

20 citations


Journal ArticleDOI
TL;DR: This work considers Resource Agent Logic (RAL), which extends ATL to allow the verification of properties of systems where agents act under resource constraints, and identifies a significant new fragment of RAL for which model checking is decidable.

16 citations


Journal ArticleDOI
TL;DR: This paper considers a combination of these logics – Coalition and Group Announcement Logic – provides its complete axiomatisation, partially answers the question of how group and coalition announcement operators interact, and settles some other open problems.
Abstract: Dynamic epistemic logics which model abilities of agents to make various announcements and influence each other's knowledge have been studied extensively in recent years. Two notable examples of such logics are Group Announcement Logic and Coalition Announcement Logic. They allow us to reason about what groups of agents can achieve through joint announcements in non-competitive and competitive environments. In this paper, we consider a combination of these logics -- Coalition and Group Announcement Logic and provide its complete axiomatisation. Moreover, we partially answer the question of how group and coalition announcement operators interact, and settle some other open problems.

9 citations


Proceedings Article
01 Jul 2017
TL;DR: In this article, a combination of group announcement logic and coalition announcement logic is considered, and its complete axiomatisation is provided, partially answering the question of how group and coalition operators interact, and settling some open problems.
Abstract: Dynamic epistemic logics which model abilities of agents to make various announcements and influence each other's knowledge have been studied extensively in recent years. Two notable examples of such logics are Group Announcement Logic and Coalition Announcement Logic. They allow us to reason about what groups of agents can achieve through joint announcements in non-competitive and competitive environments. In this paper, we consider a combination of these logics -- Coalition and Group Announcement Logic and provide its complete axiomatisation. Moreover, we partially answer the question of how group and coalition announcement operators interact, and settle some other open problems.

7 citations


Journal ArticleDOI
24 Jul 2017
TL;DR: In this paper, a combination of group announcement logic and coalition announcement logic is considered, and its complete axiomatisation is provided, partially answering the question of how group and coalition operators interact, and settling some other open problems.
Abstract: Dynamic epistemic logics which model abilities of agents to make various announcements and influence each other’s knowledge have been studied extensively in recent years. Two notable examples of such logics are Group Announcement Logic and Coalition Announcement Logic. They allow us to reason about what groups of agents can achieve through joint announcements in non-competitive and competitive environments. In this paper, we consider a combination of these logics – Coalition and Group Announcement Logic and provide its complete axiomatisation. Moreover, we partially answer the question of how group and coalition announcement operators interact, and settle some other open problems.

7 citations


Proceedings Article
04 Feb 2017
TL;DR: Using ideas from scrip systems and peer prediction, it is shown how to design a mechanism that incentivises agents to monitor each other’s behaviour for norm violations, and is robust against collusion by the monitoring agents.
Abstract: We present an approach to incentivising monitoring for norm violations in open multi-agent systems such as Wikipedia. In such systems, there is no crisp definition of a norm violation; rather, it is a matter of judgement whether an agent’s behaviour conforms to generally accepted standards of behaviour. Agents may legitimately disagree about borderline cases. Using ideas from scrip systems and peer prediction, we show how to design a mechanism that incentivises agents to monitor each other’s behaviour for norm violations. The mechanism keeps the probability of undetected violations (submissions that the majority of the community would consider not conforming to standards) low, and is robust against collusion by the monitoring agents.

4 citations


Journal ArticleDOI
TL;DR: This work considers the problem of decomposing a group norm into a set of individual obligations for the agents comprising the group, such that if the individual obligations are fulfilled, the group obligation is fulfilled.
Abstract: We consider the problem of decomposing a group norm into a set of individual obligations for the agents comprising the group, such that if the individual obligations are fulfilled, the group obligation is fulfilled. Such an assignment of tasks to agents is often subject to additional social or organisational norms that specify permissible ways in which tasks can be assigned. An important role of social norms is that they can be used to impose ‘fairness constraints’, which seek to distribute individual responsibility for discharging the group norm in a ‘fair’ or ‘equitable’ way. We propose a simple language for expressing such fairness constraints and analyse the problem of computing a fair decomposition of a group obligation, both for non-repeating and for repeating group obligations.

3 citations