Open Access · Proceedings Article
QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, Shimon Whiteson
pp. 4292–4301
TLDR
QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations, and structurally enforces that the joint action-value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning.
About:
This article is published in the International Conference on Machine Learning. The article was published on 2018-07-03 and is currently open access. It has received 505 citations to date. The article focuses on the topics: Reinforcement learning & Monotonic function.
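The monotonicity constraint described in the TL;DR can be illustrated with a minimal sketch (assumed shapes and plain numpy weights; the paper itself generates the mixing weights with hypernetworks conditioned on the global state): taking the absolute value of the mixing weights guarantees that the joint value never decreases when any per-agent value increases.

```python
import numpy as np

def monotonic_mix(agent_qs, w1, b1, w2, b2):
    """Two-layer mixer over per-agent Q-values; abs() on the weights
    enforces dQ_tot/dQ_a >= 0 (monotonicity), so per-agent argmax
    actions also maximise Q_tot."""
    hidden = np.maximum(agent_qs @ np.abs(w1) + b1, 0.0)  # ReLU here; the paper uses ELU
    return float(hidden @ np.abs(w2) + b2)

# Hypothetical weights for a 2-agent, 4-unit mixer (illustrative only).
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
w2, b2 = rng.normal(size=4), 0.0

q_low  = monotonic_mix(np.array([0.1, 0.5]), w1, b1, w2, b2)
q_high = monotonic_mix(np.array([0.9, 0.5]), w1, b1, w2, b2)
assert q_high >= q_low  # raising any per-agent value never lowers Q_tot
```

This is what makes greedy decentralised execution consistent with the centralised value: each agent can maximise its own Q independently, and the joint argmax is recovered for free.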
Citations
Proceedings Article
QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning.
TL;DR: In this article, value-based solutions for multi-agent reinforcement learning (MARL) tasks are explored in the recently popularised centralized training with decentralized execution (CTDE) regime.
Proceedings Article
The StarCraft Multi-Agent Challenge
Mikayel Samvelyan, Tabish Rashid, Christian Schroeder de Witt, Gregory Farquhar, Nantas Nardelli, Tim G. J. Rudner, Chia-Man Hung, Philip H. S. Torr, Jakob Foerster, Shimon Whiteson
TL;DR: The StarCraft Multi-Agent Challenge (SMAC), based on the popular real-time strategy game StarCraft II, is proposed as a benchmark problem, and an open-source deep multi-agent RL framework including state-of-the-art algorithms is released.
Journal ArticleDOI
A survey and critique of multiagent deep reinforcement learning
TL;DR: In this paper, the authors provide a clear overview of current multi-agent deep reinforcement learning (MDRL) literature, and provide general guidelines to new practitioners in the area: describing lessons learned from MDRL works, pointing to recent benchmarks, and outlining open avenues of research.
Posted Content
Value-Decomposition Networks For Cooperative Multi-Agent Learning
Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z. Leibo, Karl Tuyls, Thore Graepel
TL;DR: In this paper, a value decomposition network is proposed to decompose the team value function into agent-wise value functions, which leads to superior results when combined with weight sharing, role information and information channels.
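The decomposition in the TL;DR above can be sketched in a few lines (assumed shapes, illustrative only): VDN takes the joint value to be the plain sum of per-agent values, the simplest special case of a monotonic factorisation, so each agent's greedy action also maximises the team value.

```python
import numpy as np

def vdn_q_tot(agent_qs):
    """VDN joint value: the sum of the selected per-agent Q-values."""
    return float(np.sum(agent_qs))

# Hypothetical Q-tables for 2 agents with 2 actions each.
per_agent_q = np.array([[1.0, 3.0],   # agent 0's Q over its actions
                        [2.0, 0.5]])  # agent 1's Q over its actions

# Decentralised greedy execution: each agent argmaxes its own row,
# which, because Q_tot is a sum, also maximises the joint value.
greedy = per_agent_q.argmax(axis=1)                       # -> [1, 0]
q_tot = vdn_q_tot(per_agent_q[np.arange(2), greedy])      # 3.0 + 2.0 = 5.0
```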
Posted Content
Deep reinforcement learning
Felix Wittstock, Yuxi Li
TL;DR: This work discusses deep reinforcement learning in an overview style, focusing on contemporary work, and in historical contexts, with background of artificial intelligence, machine learning, deep learning, and reinforcement learning (RL), with resources.
References
Journal ArticleDOI
Long short-term memory
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Journal ArticleDOI
Human-level control through deep reinforcement learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Posted Content
Empirical evaluation of gated recurrent neural networks on sequence modeling
TL;DR: These advanced recurrent units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU), are found to be comparable to LSTM.
Deep reinforcement learning with double Q-learning
TL;DR: In this article, the authors show that the DQN algorithm suffers from substantial overestimation in some games in the Atari 2600 domain, and they propose a specific adaptation to the algorithm and show that this algorithm not only reduces the observed overestimations, but also leads to much better performance on several games.
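The adaptation described in this TL;DR can be shown in a minimal sketch (assumed array inputs, illustrative only): standard Q-learning both selects and evaluates the next action with the same maximising network, which biases the target upward; double Q-learning selects with the online network but evaluates with the target network.

```python
import numpy as np

def double_q_target(reward, gamma, q_online_next, q_target_next):
    """Double Q-learning bootstrap target: action chosen by the online
    network, value read from the target network, damping overestimation."""
    a_star = int(np.argmax(q_online_next))         # select with online net
    return reward + gamma * q_target_next[a_star]  # evaluate with target net

# Hypothetical next-state value estimates from the two networks.
t = double_q_target(1.0, 0.9,
                    q_online_next=np.array([0.2, 0.8]),
                    q_target_next=np.array([0.5, 0.4]))
# Online net picks action 1; target net evaluates it as 0.4,
# so t = 1.0 + 0.9 * 0.4 = 1.36 (a plain max over either net would use 0.8 or 0.5).
```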
Journal ArticleDOI
Learning from delayed rewards
TL;DR: This thesis introduces Q-learning, a model-free reinforcement learning method that learns optimal action values directly from delayed rewards, without requiring a model of the environment.