Open Access Proceedings Article

QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

TLDR
QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations, and structurally enforces that the joint action-value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning.
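The monotonicity constraint described above can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation: fixed random weights stand in for the paper's state-conditioned hypernetworks, and ReLU replaces the ELU used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def monotonic_mix(agent_qs, w1, b1, w2, b2):
    """Mix per-agent Q-values into Q_tot.

    Monotonicity (dQ_tot/dQ_a >= 0) is enforced by taking the absolute
    value of the mixing weights; in QMIX these weights are produced by
    hypernetworks conditioned on the global state, fixed here for brevity.
    """
    hidden = np.maximum(agent_qs @ np.abs(w1) + b1, 0.0)  # ReLU here; the paper uses ELU
    return float(hidden @ np.abs(w2) + b2)

n_agents, embed = 3, 8
w1 = rng.normal(size=(n_agents, embed))
b1 = rng.normal(size=embed)
w2 = rng.normal(size=embed)
b2 = 0.0

q_low = monotonic_mix(np.array([0.0, 0.0, 0.0]), w1, b1, w2, b2)
q_high = monotonic_mix(np.array([1.0, 1.0, 1.0]), w1, b1, w2, b2)
# raising any agent's value can never lower Q_tot, so each agent's
# individual argmax jointly maximises Q_tot -- the tractable
# maximisation the summary refers to
```

Because non-negative weights compose with monotone activations, increasing any single agent's value never decreases the mixed output, whatever the random weights are.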
About
This article was published at the International Conference on Machine Learning on 2018-07-03 and is currently open access. It has received 505 citations to date. The article focuses on the topics: Reinforcement learning and Monotonic function.



Citations
Proceedings Article

Adaptive Advantage Estimation for Actor-Critic Algorithms

TL;DR: In this article, an adaptive advantage estimator based on generalised advantage estimation (GAE) is applied to estimating policy gradients in the actor-critic framework, where an appropriate critic is expected to balance the variance of sample returns against the bias introduced by parameterised value functions.
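Since the summary leaves the estimator implicit, here is a minimal sketch of the generalised advantage estimation family the paper builds on; the function name and the simplified episodic handling are illustrative assumptions, not taken from the paper.

```python
def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalised advantage estimation.

    `values` has length len(rewards) + 1: V(s_0) .. V(s_T), including
    the bootstrap value of the final state. `lam` trades off the
    low-bias, high-variance Monte Carlo estimate (lam = 1) against the
    high-bias, low-variance one-step TD estimate (lam = 0) -- the
    variance/bias balance the summary describes.
    """
    T = len(rewards)
    adv = [0.0] * T
    last = 0.0
    for t in reversed(range(T)):
        # one-step TD error at time t
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        # exponentially weighted sum of future TD errors
        last = delta + gamma * lam * last
        adv[t] = last
    return adv

# with gamma = lam = 1 and a zero critic this reduces to
# Monte Carlo returns-to-go
advs = gae_advantages([1.0, 1.0], [0.0, 0.0, 0.0], gamma=1.0, lam=1.0)
```

With `gamma = lam = 1` and all values zero, the advantages are just the undiscounted returns-to-go, which makes the interpolation easy to sanity-check.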
Posted Content

Divergence-Regularized Multi-Agent Actor-Critic.

TL;DR: In this article, a divergence regularized multi-agent actor-critic (DMAC) is proposed to solve the divergence regularization problem in cooperative MARL, which can be combined with many existing MARL algorithms.
Book Chapter

Learning to Communicate for Mobile Sensing with Multi-agent Reinforcement Learning

TL;DR: In this paper, a multi-agent reinforcement learning (MARL)-based cooperative mobile sensing algorithm is proposed, in which vehicles learn to communicate for cooperation via MARL, and the messages themselves are also learned via reinforcement learning.
Posted Content

Soft Hierarchical Graph Recurrent Networks for Many-Agent Partially Observable Environments.

Abstract: Recent progress in multi-agent deep reinforcement learning (MADRL) has made it more practical for real-world tasks, but its relatively poor scalability and partial observability pose challenges to its performance and deployment. Based on the intuitive observation that human society can be regarded as a large-scale partially observable environment, in which each individual can communicate with neighbours and remember their own experience, we propose a novel network structure called the hierarchical graph recurrent network (HGRN) for multi-agent cooperation under partial observability. Specifically, we construct the multi-agent system as a graph, use a hierarchical graph attention network (HGAT) to achieve communication between neighbouring agents, and exploit a GRU to enable agents to record historical information. To encourage exploration and improve robustness, we design a maximum-entropy learning method that learns stochastic policies with a configurable target action entropy. Based on these techniques, we propose a value-based MADRL algorithm called Soft-HGRN and its actor-critic variant named SAC-HGRN. Experimental results on three homogeneous tasks and one heterogeneous environment show that our approach achieves clear improvements over four baselines, and also demonstrate the interpretability, scalability, and transferability of the proposed model. Ablation studies confirm the function and necessity of each component.
Proceedings Article

Ranked Communication Channel Confidence for Multi-Agent Reinforcement Learning

TL;DR: In this paper, the authors proposed a method called Ranked Communication Channel Confidence Multi-agent Reinforcement Learning (RC3MARL), which exploits a non-uniform information encoding mechanism to discriminate the message in finer granularity.
References
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Journal Article

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Posted Content

Empirical evaluation of gated recurrent neural networks on sequence modeling

TL;DR: Advanced recurrent units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the more recently proposed gated recurrent unit (GRU), are compared on sequence modelling tasks; the GRU is found to be comparable to the LSTM.
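The gating mechanism the summary refers to can be sketched as a single GRU step in numpy; the weight names and the all-zero example weights are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a gated recurrent unit."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate: how much to overwrite
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate: how much history to expose
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde         # convex blend of old and candidate

d_in, d_h = 1, 2
zeros_ih = np.zeros((d_h, d_in))
zeros_hh = np.zeros((d_h, d_h))
h_new = gru_cell(np.array([0.0]), np.array([1.0, 1.0]),
                 zeros_ih, zeros_hh, zeros_ih, zeros_hh, zeros_ih, zeros_hh)
# with all-zero weights the gates sit at 0.5 and the candidate is 0,
# so the hidden state simply decays by half
```

The convex blend in the last line is what lets gradients flow through long sequences: when `z` is near 0, the old state passes through almost unchanged, the GRU's analogue of the LSTM's constant error carousel.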

Deep reinforcement learning with double Q-learning

TL;DR: In this article, the authors show that the DQN algorithm suffers from substantial overestimation in some games in the Atari 2600 domain, and they propose a specific adaptation to the algorithm and show that this algorithm not only reduces the observed overestimations, but also leads to much better performance on several games.
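The adaptation the summary describes, selecting the next action with the online network but evaluating it with the target network, can be sketched as follows; the function name and the example numbers are illustrative.

```python
import numpy as np

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN bootstrap target.

    Decoupling action *selection* (online network) from action
    *evaluation* (target network) curbs the overestimation bias of the
    max operator in vanilla DQN.
    """
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))         # select with the online net
    return reward + gamma * q_target_next[a_star]  # evaluate with the target net

y = double_dqn_target(1.0, 0.9,
                      q_online_next=np.array([1.0, 3.0, 2.0]),
                      q_target_next=np.array([0.5, 0.2, 0.9]),
                      done=False)
# online net picks action 1; target net values it at 0.2, so y = 1 + 0.9 * 0.2
```

Note that vanilla DQN would instead bootstrap from `max(q_target_next)` here, taking the largest (and often most over-estimated) value.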
Journal Article

Learning from delayed rewards

TL;DR: Watkins's doctoral thesis introduces Q-learning, a model-free method that learns optimal action values from delayed rewards by incrementally updating estimates toward bootstrapped targets, without requiring a model of the environment.
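The tabular Q-learning update at the heart of that thesis can be sketched in a few lines; the helper name and the example values are illustrative.

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Q-learning: move Q(s, a) toward the bootstrapped target
    r + gamma * max_a' Q(s', a'), a model-free, off-policy update."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q[(s, a)]

actions = [0, 1]
Q = {(s, a): 0.0 for s in ["s0", "s1"] for a in actions}
Q[("s1", 1)] = 1.0  # pretend we already learned something about s1

# target = 0 + 0.9 * max(0.0, 1.0) = 0.9; step halfway there with alpha = 0.5
new_q = q_learning_update(Q, "s0", 0, 0.0, "s1", actions, alpha=0.5, gamma=0.9)
```

QMIX's tractable maximisation exists precisely so that this `max` over next-state actions remains computable when the action space is the joint space of many agents.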