Open Access Proceedings Article

QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

TLDR
QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations, and structurally enforces that the joint action-value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning.
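The mechanism described above can be made concrete with a short sketch. The PyTorch-style mixing network below is a minimal illustration of the idea: hypernetworks conditioned on the global state generate the mixing weights, and taking their absolute value keeps them non-negative, which is what enforces monotonicity of the joint value in each per-agent value. Layer sizes, the ELU nonlinearity, and the exact hypernetwork shapes are assumptions made for illustration, not a reproduction of the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMixer(nn.Module):
    """Sketch of a QMIX-style monotonic mixing network (illustrative sizes)."""
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        # Hypernetworks: the global state produces the mixing weights/biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        bs = agent_qs.size(0)
        agent_qs = agent_qs.view(bs, 1, self.n_agents)
        # abs() keeps the mixing weights non-negative, so q_tot is
        # monotonically non-decreasing in every per-agent Q-value.
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(agent_qs, w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(bs, 1)
```

Because each agent's value enters the mixture with a non-negative weight, the argmax of the joint value decomposes into independent per-agent argmaxes, which is what makes greedy action selection tractable during off-policy learning.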
About
This article was published at the International Conference on Machine Learning on 2018-07-03 and is currently open access. It has received 505 citations to date. The article focuses on the topics: Reinforcement learning & Monotonic function.



Citations
Proceedings Article

Distributed Heuristic Multi-Agent Path Finding with Communication

TL;DR: In this paper, the authors combine communication with deep Q-learning to provide a novel learning-based method for multi-agent path finding, where agents achieve cooperation via graph convolution and embed the potential choices of shortest paths from a single source as heuristic guidance, instead of using a specific path as in most existing works.
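As a rough illustration of the cooperation-via-graph-convolution idea only (not the paper's actual architecture), one communication round can be sketched as each agent averaging the feature vectors of the agents it can communicate with; the function and variable names below are mine.

```python
import numpy as np

def communication_round(features, adjacency):
    """One graph-convolution-style message-passing step (illustrative).

    features:  (n_agents, feat_dim) per-agent feature vectors
    adjacency: (n_agents, n_agents) 0/1 communication graph
    """
    adj = adjacency + np.eye(adjacency.shape[0])   # include each agent itself
    degree = adj.sum(axis=1, keepdims=True)
    # Each agent's new features are the mean of its neighbourhood's features.
    return (adj @ features) / degree
```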
Posted Content

Deep reinforcement learning for large-scale epidemic control

TL;DR: This work shows that deep reinforcement learning can be used to learn mitigation policies in complex epidemiological models with a large state space, and demonstrates that there can be an advantage to considering collaboration between districts when designing prevention strategies.
Posted Content

Parameter Sharing is Surprisingly Useful for Multi-Agent Deep Reinforcement Learning.

TL;DR: The MAILP model is used to show that increasing training centralization arbitrarily mitigates the slowing of convergence due to nonstationarity, and a formal proof is offered for a set of methods that allow parameter sharing to serve in environments with heterogeneous agents.
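A common way to let a single shared network serve heterogeneous agents, in the spirit of the methods the summary refers to, is agent indication: pad observations to a common size and append a one-hot agent index so the shared parameters can condition on which agent is acting. The sketch below is a generic illustration with names of my own choosing, not code from the paper.

```python
import numpy as np

def shared_network_input(obs, agent_id, n_agents, obs_pad_dim):
    """Build the input a parameter-shared policy/Q-network would see.

    obs:         this agent's (possibly shorter) observation vector
    agent_id:    integer index of the agent
    n_agents:    total number of agents
    obs_pad_dim: common observation size all agents are padded to
    """
    padded = np.zeros(obs_pad_dim)
    padded[: len(obs)] = obs                 # observation padding
    indicator = np.zeros(n_agents)
    indicator[agent_id] = 1.0                # agent indication (one-hot)
    return np.concatenate([padded, indicator])
```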
Posted Content

Comparative Evaluation of Multi-Agent Deep Reinforcement Learning Algorithms.

TL;DR: This work evaluates and compares three different classes of MARL algorithms in a diverse range of multi-agent learning tasks and shows that algorithm performance depends strongly on environment properties and no algorithm learns efficiently across all learning tasks.
Journal Article

Multiagent Adversarial Collaborative Learning via Mean-Field Theory

TL;DR: This work proposes an adversarial collaborative learning method in a mixed cooperative–competitive environment, exploiting friend-or-foe Q-learning and mean-field theory, and simplifies the interactions as those between a single agent and the mean effects of friends and opponents.
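The simplification the summary describes, replacing pairwise interactions with mean effects, can be sketched as follows: each agent's Q-network sees its own state together with the average (one-hot-encoded) action of its friends and of its opponents. This is a generic mean-field-style input construction, not the paper's implementation; the names are illustrative.

```python
import numpy as np

def mean_field_q_input(state, friend_actions, foe_actions, n_actions):
    """Summarise neighbours by their mean actions (friend-or-foe split)."""
    def mean_one_hot(actions):
        if len(actions) == 0:
            return np.zeros(n_actions)
        return np.eye(n_actions)[np.asarray(actions)].mean(axis=0)

    # The agent effectively interacts with two 'virtual' agents: the average
    # friend and the average opponent, instead of with every neighbour.
    return np.concatenate(
        [state, mean_one_hot(friend_actions), mean_one_hot(foe_actions)])
```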
References
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
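For readers who want the update the summary alludes to in code, here is one LSTM step in the modern formulation (with the forget gate that was added after the original paper). The additive cell-state update marked below is the constant error carousel that preserves error flow across long time lags. The parameter layout and names are my own.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step.

    x: input vector, h: previous hidden state (size H), c: previous cell state
    W: (4H, input_dim), U: (4H, H), b: (4H,) stacking the i, f, o, g transforms
    """
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate cell update
    c_new = f * c + i * g        # constant error carousel: additive update
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```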
Journal Article

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Posted Content

Empirical evaluation of gated recurrent neural networks on sequence modeling

TL;DR: Advanced recurrent units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the recently proposed gated recurrent unit (GRU), are found to outperform traditional recurrent units, with the GRU performing comparably to the LSTM.
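For comparison with the LSTM step above, here is one GRU step: two gates (update and reset) and no separate cell state. Again, the parameter names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: fewer gates than the LSTM and no separate cell state."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate hidden state
    return (1 - z) * h + z * h_tilde          # interpolate old and new state
```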

Deep reinforcement learning with double Q-learning

TL;DR: In this article, the authors show that the DQN algorithm suffers from substantial overestimation in some games in the Atari 2600 domain, and they propose a specific adaptation that not only reduces the observed overestimations but also leads to much better performance on several games.
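The adaptation, as usually implemented, decouples action selection from action evaluation when forming the bootstrap target: the online network picks the next action and the target network scores it. The sketch below assumes networks that map a batch of observations to per-action values; the function name and tensor shapes are my own.

```python
import torch

@torch.no_grad()
def double_dqn_targets(online_net, target_net, rewards, next_obs, dones, gamma=0.99):
    """Double DQN bootstrap targets (illustrative shapes).

    rewards, dones: (batch,) float tensors; next_obs: whatever the nets expect;
    both nets return (batch, n_actions) Q-value tensors.
    """
    best_actions = online_net(next_obs).argmax(dim=1, keepdim=True)   # select
    next_q = target_net(next_obs).gather(1, best_actions).squeeze(1)  # evaluate
    return rewards + gamma * (1.0 - dones) * next_q
```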
Journal Article

Learning from delayed rewards

TL;DR: Q-learning is introduced: a model-free method that learns the value of each state-action pair directly from delayed rewards and converges to the optimal action values under suitable conditions on exploration and learning rates.
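The core update is compact enough to state directly; the tabular sketch below uses my own function and variable names and assumes discrete state and action spaces.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step on an (n_states, n_actions) table Q.

    Moves Q[s, a] toward the bootstrapped target r + gamma * max_a' Q[s_next, a'].
    """
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```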