
Showing papers by "Natasha Alechina published in 2022"


Journal ArticleDOI
TL;DR: In this article, the authors study the extension of GAL with distributed knowledge, in particular possible interaction properties between GAL operators and distributed knowledge, and show that, contrary to intuition, there are no interaction properties.
Abstract: Public announcement logic (PAL) is an extension of epistemic logic with dynamic operators that model the effects of all agents simultaneously and publicly acquiring the same piece of information. One of the extensions of PAL, group announcement logic (GAL), allows quantification over (possibly joint) announcements made by agents. In GAL, it is possible to reason about what groups can achieve by making such announcements. It seems intuitive that this notion of coalitional ability should be closely related to the notion of distributed knowledge, the implicit knowledge of a group. Thus, we study the extension of GAL with distributed knowledge, and in particular possible interaction properties between GAL operators and distributed knowledge. The perhaps surprising result is that, in fact, there are no interaction properties, contrary to intuition. We make this claim precise by providing a sound and complete axiomatisation of GAL with distributed knowledge. We also consider several natural variants of GAL with distributed knowledge, as well as some other related logics, and compare their expressive power.
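The update semantics underlying PAL is simple to state: a truthful public announcement of a formula restricts the model to the worlds where that formula holds, so every agent's uncertainty shrinks accordingly. The sketch below illustrates this on a two-world Kripke model; the `announce` helper and world names are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch of PAL's update semantics: publicly announcing phi
# removes every world where phi is false, along with accessibility edges
# touching removed worlds. "announce" is a hypothetical helper.

def announce(worlds, accessibility, phi):
    """Restrict a Kripke model to the worlds satisfying phi."""
    kept = {w for w, atoms in worlds.items() if phi(atoms)}
    new_worlds = {w: worlds[w] for w in kept}
    new_access = {
        agent: {(u, v) for (u, v) in edges if u in kept and v in kept}
        for agent, edges in accessibility.items()
    }
    return new_worlds, new_access

# Two worlds; agent "a" cannot tell them apart until p is announced.
worlds = {"w1": {"p"}, "w2": set()}
access = {"a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}}

worlds2, access2 = announce(worlds, access, lambda atoms: "p" in atoms)
print(sorted(worlds2))   # ['w1']
print(access2["a"])      # {('w1', 'w1')}: agent a now knows p
```

GAL's quantifier then ranges over which such announcements a group can truthfully make, which is what links coalitional ability to group knowledge.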

5 citations


Proceedings ArticleDOI
01 Jul 2022
TL;DR: This paper presents a new approach to multi-agent intention scheduling in which agents predict the actions of other agents based on a high-level specification of the tasks performed by an agent in the form of a reward machine rather than on its (assumed) program.
Abstract: Recent work in multi-agent intention scheduling has shown that enabling agents to predict the actions of other agents when choosing their own actions can be beneficial. However, existing approaches to 'intention-aware' scheduling assume that the programs of other agents are known, or are "similar" to that of the agent making the prediction. While this assumption is reasonable in some circumstances, it is less plausible when the agents are not co-designed. In this paper, we present a new approach to multi-agent intention scheduling in which agents predict the actions of other agents based on a high-level specification of the tasks performed by an agent in the form of a reward machine (RM) rather than on its (assumed) program. We show how a reward machine can be used to generate tree and rollout policies for an MCTS-based scheduler. We evaluate our approach in a range of multi-agent environments, and show that RM-based scheduling out-performs previous intention-aware scheduling approaches in settings where agents are not co-designed.
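A reward machine is, in essence, a finite-state machine whose transitions are labelled with high-level events and rewards, so it specifies a task without revealing the agent's program. The minimal sketch below shows the general shape of such a machine; the class, state names, and events are illustrative assumptions for exposition, not the paper's implementation.

```python
# A minimal sketch of a reward machine (RM): a finite-state machine whose
# transitions fire on observed high-level events and emit rewards.
# Structure and names here are assumptions, not the paper's code.

class RewardMachine:
    def __init__(self, transitions, initial):
        # transitions: {(state, event): (next_state, reward)}
        self.transitions = transitions
        self.state = initial

    def step(self, event):
        """Advance on an observed event; return its reward (0 if no match)."""
        if (self.state, event) in self.transitions:
            self.state, reward = self.transitions[(self.state, event)]
            return reward
        return 0.0

# Task: fetch a key, then open the door (reward only on completion).
rm = RewardMachine(
    transitions={
        ("u0", "got_key"): ("u1", 0.0),
        ("u1", "door_open"): ("u_done", 1.0),
    },
    initial="u0",
)
print(rm.step("door_open"))  # 0.0: the key has not been fetched yet
print(rm.step("got_key"))    # 0.0
print(rm.step("door_open"))  # 1.0: task complete
```

Because the RM's state encodes how far along its task the other agent is, a scheduler can use it to bias MCTS tree and rollout policies toward the actions that agent is likely to take next.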

3 citations


Journal ArticleDOI
TL;DR: In this article, the relative expressivity of group announcement logic and coalition announcement logic is studied and some results involving their more well-known sibling APAL are discussed, as well as how the presence of memory alters the relationship between groups and coalitions.
Abstract: Group announcement logic (GAL) and coalition announcement logic (CAL) allow us to reason about whether it is possible for groups and coalitions of agents to achieve their desired epistemic goals through truthful public communication. The difference between groups and coalitions in such a context is that the latter make their announcements in the presence of possible adversarial counter-announcements. As epistemic goals may involve some agents remaining ignorant, counter-announcements may preclude coalitions from reaching their goals. We study the relative expressivity of GAL and CAL and provide some results involving their more well-known sibling APAL. We also discuss how the presence of memory alters the relationship between groups and coalitions.

2 citations


Proceedings ArticleDOI
01 Jul 2022
TL;DR: This work considers the problem of synthesising and revising the set of norms in a normative MAS to satisfy a design objective expressed in Alternating Time Temporal Logic (ATL*), and shows that synthesising dynamic norms is (k + 1)-EXPTIME, where k is the alternation depth of quantifiers in the ATL* specification.
Abstract: Norms have been widely proposed to coordinate and regulate multi-agent systems (MAS) behaviour. We consider the problem of synthesising and revising the set of norms in a normative MAS to satisfy a design objective expressed in Alternating Time Temporal Logic (ATL*). ATL* is a well-established language for strategic reasoning, which allows the specification of norms that constrain the strategic behaviour of agents. We focus on dynamic norms, that is, norms corresponding to Mealy machines, that allow us to place different constraints on the agents' behaviour depending on the state of the norm and the state of the underlying MAS. We show that synthesising dynamic norms is (k + 1)-EXPTIME, where k is the alternation depth of quantifiers in the ATL* specification. Note that for typical cases of interest, k is either 1 or 2. We also study the problem of removing existing norms to satisfy a new objective, which we show to be 2EXPTIME-complete.
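A dynamic norm corresponding to a Mealy machine outputs constraints as a function of both its own state and what it observes of the MAS, so the same action can be permitted or prohibited at different points in a run. The sketch below illustrates that idea; the class, events, and action names are assumptions for exposition, not the paper's construction.

```python
# Illustrative sketch of a dynamic norm as a Mealy machine: given its
# current state and an observed MAS event, it outputs a set of prohibited
# actions and moves to a new state. Names are illustrative assumptions.

class DynamicNorm:
    def __init__(self, delta, defaults, initial):
        self.delta = delta        # {(state, event): (prohibited, next_state)}
        self.defaults = defaults  # {state: prohibited set when no transition fires}
        self.state = initial

    def constrain(self, mas_event):
        """Return the actions prohibited after observing mas_event."""
        if (self.state, mas_event) in self.delta:
            prohibited, self.state = self.delta[(self.state, mas_event)]
        else:
            prohibited = self.defaults[self.state]
        return prohibited

# A norm that prohibits "enter" once a zone is reported full,
# and lifts the prohibition when the zone clears.
norm = DynamicNorm(
    delta={
        ("open", "zone_full"): ({"enter"}, "closed"),
        ("closed", "zone_clear"): (set(), "open"),
    },
    defaults={"open": set(), "closed": {"enter"}},
    initial="open",
)
print(norm.constrain("zone_full"))   # {'enter'}
print(norm.constrain("tick"))        # {'enter'}: still closed
print(norm.constrain("zone_clear"))  # set(): prohibition lifted
```

Synthesis, in these terms, is the problem of finding such a machine whose composition with the MAS satisfies the ATL* objective, which is where the quantifier alternation depth drives the complexity.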

1 citation


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the use of system execution data to automatically synthesise and revise conditional prohibitions with deadlines, a type of norm aimed at prohibiting agents from exhibiting certain patterns of behavior.
Abstract: In multi-agent systems, norm enforcement is a mechanism for steering the behavior of individual agents in order to achieve desired system-level objectives. Due to the dynamics of multi-agent systems, however, it is hard to design norms that guarantee the achievement of the objectives in every operating context. Also, these objectives may change over time, thereby making previously defined norms ineffective. In this paper, we investigate the use of system execution data to automatically synthesise and revise conditional prohibitions with deadlines, a type of norm aimed at prohibiting agents from exhibiting certain patterns of behavior. We propose DDNR (Data-Driven Norm Revision), a data-driven approach to norm revision that synthesises revised norms with respect to a data set of traces describing the behavior of the agents in the system. We evaluate DDNR using a state-of-the-art, off-the-shelf urban traffic simulator. The results show that DDNR synthesises revised norms that are significantly more accurate than the original norms in distinguishing adequate and inadequate behaviors for the achievement of the system-level objectives.
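The core of evaluating such a norm against execution data can be made concrete: a conditional prohibition with a deadline is violated by a trace if the prohibited action occurs within the deadline after the condition, and a candidate norm is scored by how well its verdicts match the labels on the traces. The helpers and norm format below are illustrative assumptions in the spirit of DDNR, not the paper's algorithm.

```python
# Hedged sketch of scoring a conditional prohibition with a deadline
# against labelled execution traces. The norm representation and the
# "violates"/"accuracy" helpers are illustrative assumptions.

def violates(trace, condition, prohibited, deadline):
    """True if 'prohibited' occurs within 'deadline' steps after 'condition'."""
    for i, event in enumerate(trace):
        if event == condition:
            if prohibited in trace[i + 1 : i + 1 + deadline]:
                return True
    return False

def accuracy(traces, condition, prohibited, deadline):
    """Fraction of traces where the norm's verdict matches the label.

    traces: list of (event_list, is_bad) pairs, where is_bad marks
    behavior inadequate for the system-level objective.
    """
    correct = sum(
        violates(t, condition, prohibited, deadline) == is_bad
        for t, is_bad in traces
    )
    return correct / len(traces)

traces = [
    (["rain", "speed_up"], True),          # speeding right after rain: inadequate
    (["rain", "slow", "speed_up"], False),  # speeding after the deadline: fine
]
print(accuracy(traces, "rain", "speed_up", deadline=1))  # 1.0
```

Revision can then be framed as searching over conditions, prohibited actions, and deadlines for the norm that maximises this kind of agreement with the observed traces.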