Filippos Christianos
Researcher at University of Edinburgh
Publications - 24
Citations - 259
Filippos Christianos is an academic researcher from the University of Edinburgh. The author has contributed to research in the topics of reinforcement learning and computer science. The author has an h-index of 4, and has co-authored 18 publications receiving 102 citations. Previous affiliations of Filippos Christianos include the Technical University of Crete.
Papers
Posted Content
Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning
TL;DR: This paper surveys recent works that address the non-stationarity problem in multi-agent deep reinforcement learning; the surveyed methods range from modifications of the training procedure to learning representations of the opponent's policy, meta-learning, communication, and decentralized learning.
Posted Content
Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
TL;DR: This work proposes a general method for efficient exploration that shares experience amongst agents within an actor-critic framework, and finds that it consistently outperforms two baselines and two state-of-the-art algorithms, learning in fewer steps and converging to higher returns.
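The core idea above, sharing experience between agents while correcting for the fact that other agents' data was collected under a different policy, can be sketched with an importance-weighted policy-gradient loss. This is a minimal illustrative sketch, not the authors' implementation; all names and the `lam` weighting parameter are assumptions for illustration.

```python
import numpy as np

def seac_policy_loss(logp_i_own, adv_own,
                     logp_i_other, logp_j_other, adv_other, lam=1.0):
    """Illustrative policy-gradient loss for agent i under shared experience.

    logp_i_own   : log pi_i(a|o) on agent i's own transitions
    adv_own      : advantage estimates on agent i's own transitions
    logp_i_other : log pi_i(a|o) evaluated on agent j's transitions
    logp_j_other : log pi_j(a|o) on agent j's transitions (behaviour policy)
    adv_other    : advantage estimates on agent j's transitions
    lam          : weight of the shared-experience term (hypothetical name)
    """
    # standard on-policy term from agent i's own data
    own_term = -(logp_i_own * adv_own).mean()
    # importance ratio pi_i / pi_j corrects for data collected under pi_j
    ratio = np.exp(logp_i_other - logp_j_other)
    shared_term = -(ratio * logp_i_other * adv_other).mean()
    return own_term + lam * shared_term

# toy call with random log-probabilities and advantages
rng = np.random.default_rng(0)
loss = seac_policy_loss(
    logp_i_own=np.log(rng.uniform(0.2, 0.9, 5)),
    adv_own=rng.normal(size=5),
    logp_i_other=np.log(rng.uniform(0.2, 0.9, 5)),
    logp_j_other=np.log(rng.uniform(0.2, 0.9, 5)),
    adv_other=rng.normal(size=5),
)
print(float(loss))
```

When both agents' policies agree, the importance ratio is 1 and the shared term reduces to an ordinary policy-gradient update on the other agent's trajectory, which is how the sharing accelerates exploration in sparse-reward settings.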
Posted Content
Open Ad Hoc Teamwork using Graph-based Policy Learning
TL;DR: This work considers open teams, in which agents of varying types may enter and leave the team without prior notification; the resulting agent policies robustly adapt to dynamic team composition and effectively generalize to larger teams than were seen during training.
Posted Content
Comparative Evaluation of Multi-Agent Deep Reinforcement Learning Algorithms
TL;DR: This work evaluates and compares three different classes of MARL algorithms in a diverse range of multi-agent learning tasks, showing that algorithm performance depends strongly on environment properties and that no algorithm learns efficiently across all tasks.
Proceedings Article
Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
TL;DR: In this paper, the authors proposed a shared experience actor-critic (SEAC) algorithm, which applies experience sharing in an actor-critic framework to explore sparse-reward multi-agent environments.