Open Access Proceedings Article
QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, Shimon Whiteson
pp. 4292-4301
TL;DR
QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations, and structurally enforces that the joint action-value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning.
About
This article was published in the International Conference on Machine Learning on 2018-07-03 and is currently open access. It has received 505 citations to date. The article focuses on the topics: Reinforcement learning & Monotonic function.
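The monotonic factorisation described in the TL;DR can be illustrated with a minimal sketch. This is an assumed simplification, not the paper's actual architecture: a hypernetwork would normally produce the mixing weights from the global state, so a random vector stands in for its output, and passing the weights through `abs()` is what guarantees the non-negativity that makes Q_tot monotonic in each per-agent value.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents = 3
q_agents = np.array([1.5, -0.3, 0.7])   # per-agent action-values (illustrative)

# Stand-in for a state-conditioned hypernetwork output.
raw_w = rng.normal(size=n_agents)
w = np.abs(raw_w)                       # non-negative weights => dQ_tot/dQ_a >= 0
b = 0.1                                 # state-dependent bias (stand-in value)

q_tot = float(w @ q_agents + b)         # monotonic mixing of per-agent values

# Monotonicity check: raising any single agent's value never lowers Q_tot,
# so each agent's greedy argmax is also a maximiser of the joint value.
for a in range(n_agents):
    bumped = q_agents.copy()
    bumped[a] += 1.0
    assert float(w @ bumped + b) >= q_tot
```

Because the mixing is monotonic per agent, the joint argmax decomposes into independent per-agent argmaxes, which is what makes off-policy maximisation tractable.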
Citations
Posted Content
Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning.
TL;DR: This paper surveys recent works that address the non-stationarity problem in multi-agent deep reinforcement learning, and methods range from modifications in the training procedure, to learning representations of the opponent's policy, meta-learning, communication, and decentralized learning.
Posted Content
Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
TL;DR: Two scalable versions of QMIX are introduced and it is shown that these versions demonstrate improved performance on both predator-prey and challenging multi-agent StarCraft benchmark tasks.
Posted Content
An Overview of Multi-Agent Reinforcement Learning from Game Theoretical Perspective
Yaodong Yang, Jun Wang
TL;DR: This work provides a self-contained assessment of the current state-of-the-art MARL techniques from a game theoretical perspective and expects this work to serve as a stepping stone for both new researchers who are about to enter this fast-growing domain and existing domain experts who want to obtain a panoramic view and identify new directions based on recent advances.
Posted Content
SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving.
Ming Zhou, Jun Luo, Julian Villela, Yaodong Yang, David Rusu, Jiayu Miao, Weinan Zhang, Montgomery Alban, Iman Fadakar, Zheng Chen, Aurora Chongxi Huang, Ying Wen, Kimia Hassanzadeh, Daniel Graves, Dong Chen, Zhengbang Zhu, Nhat M. Nguyen, Mohamed A. Elsayed, Kun Shao, Sanjeevan Ahilan, Baokuan Zhang, Jiannan Wu, Zhengang Fu, Kasra Rezaee, Peyman Yadmellat, Mohsen Rohani, Nicolas Perez Nieves, Yihan Ni, Seyedershad Banijamali, Alexander Imani Cowen-Rivers, Zheng Tian, Daniel Palenicek, Haitham Bou-Ammar, Hongbo Zhang, Wulong Liu, Jianye Hao, Jun Wang
TL;DR: The design goals of SMARTS (Scalable Multi-Agent RL Training School) are described, its basic architecture and its key features are explained, and its use is illustrated through concrete multi-agent experiments on interactive scenarios.
Journal ArticleDOI
Multi-Agent Reinforcement Learning: A Review of Challenges and Applications
Lorenzo Canese, Gian Carlo Cardarilli, Luca Di Nunzio, Rocco Fazzolari, Daniele Giardino, Marco Re, Sergio Spanò
TL;DR: A detailed taxonomy of the main multi-agent approaches proposed in the literature is presented, focusing on their related mathematical models and on key challenges including non-stationarity, scalability, and observability.
References
Journal ArticleDOI
Long short-term memory
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Journal ArticleDOI
Human-level control through deep reinforcement learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Posted Content
Empirical evaluation of gated recurrent neural networks on sequence modeling
TL;DR: These advanced recurrent units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU), are found to be comparable to LSTM.
Deep reinforcement learning with double Q-learning
TL;DR: In this article, the authors show that the DQN algorithm suffers from substantial overestimation in some games in the Atari 2600 domain, and they propose a specific adaptation to the algorithm and show that this algorithm not only reduces the observed overestimations, but also leads to much better performance on several games.
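The specific adaptation this TL;DR refers to can be sketched in a few lines. This is a hedged illustration of the Double DQN target, not the paper's full algorithm: the online network's values select the next action, while the target network's values evaluate it, which curbs the maximisation bias of the standard single-network target. The `q_online` and `q_target` arrays are assumed stand-ins for network outputs at the next state.

```python
import numpy as np

# Stand-in next-state action-value estimates from two networks.
q_online = np.array([1.0, 2.5, 0.3])   # online network (selects the action)
q_target = np.array([0.9, 1.8, 2.2])   # target network (evaluates the action)
reward, gamma = 1.0, 0.99

a_star = int(np.argmax(q_online))                 # selection by online net
y_double = reward + gamma * q_target[a_star]      # evaluation by target net
y_single = reward + gamma * float(np.max(q_target))  # standard DQN target

# Decoupling selection from evaluation can only lower the bootstrap target,
# since q_target[a_star] <= max(q_target) by definition.
assert y_double <= y_single
```

Here the online network prefers action 1, but the target network rates action 2 highest, so the double target is strictly below the single-network target — exactly the overestimation the decoupling removes.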
Journal ArticleDOI
Learning from delayed rewards
TL;DR: Q-learning is introduced: a model-free reinforcement learning method that learns the values of actions directly from delayed rewards, without requiring a model of the environment.