Open Access Proceedings Article

QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

TLDR
QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations, and structurally enforces that the joint action-value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning.
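The monotonicity constraint described above can be sketched with a toy mixing step: if every mixing weight is non-negative, raising any single agent's value can never lower the joint value, so each agent's greedy action also maximises the joint action-value. In the paper the weights are produced by state-conditioned hypernetworks; the fixed random weights below are an illustrative stand-in, not the actual architecture.

```python
import random

random.seed(0)
N_AGENTS, HIDDEN = 3, 4

# Taking abs() keeps every mixing weight non-negative, which is what makes
# Q_tot monotone (non-decreasing) in every per-agent value.
W1 = [[abs(random.gauss(0, 1)) for _ in range(HIDDEN)] for _ in range(N_AGENTS)]
b1 = [random.gauss(0, 1) for _ in range(HIDDEN)]
W2 = [abs(random.gauss(0, 1)) for _ in range(HIDDEN)]
b2 = random.gauss(0, 1)

def q_tot(agent_qs):
    """Monotonic non-linear combination of per-agent Q-values."""
    # ReLU here for simplicity (the paper uses ELU); both are monotone,
    # so the composition with non-negative weights stays monotone.
    h = [max(sum(q * W1[a][j] for a, q in enumerate(agent_qs)) + b1[j], 0.0)
         for j in range(HIDDEN)]
    return sum(hj * wj for hj, wj in zip(h, W2)) + b2

qs = [1.0, -0.5, 2.0]
# Raising agent 0's value never lowers the joint value.
assert q_tot([1.5, -0.5, 2.0]) >= q_tot(qs)
```

Because of this property, the argmax of the joint value decomposes into per-agent argmaxes, which is what makes off-policy maximisation tractable.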
About
This article was published at the International Conference on Machine Learning on 2018-07-03 and is currently open access. It has received 505 citations to date. The article focuses on the topics: Reinforcement learning & Monotonic function.


Citations
Journal ArticleDOI

Research on the Multiagent Joint Proximal Policy Optimization Algorithm Controlling Cooperative Fixed-Wing UAV Obstacle Avoidance.

TL;DR: The paper presents an improved multiagent reinforcement learning algorithm, the multiagent joint proximal policy optimization (MAJPPO) algorithm, with centralized learning and decentralized execution; it enhances collaboration and increases the sum of reward values obtained by the multiagent system.
Journal ArticleDOI

Resource Management in Wireless Networks via Multi-Agent Deep Reinforcement Learning

TL;DR: Simulation results demonstrate the superiority of the proposed approach compared to decentralized baselines in terms of the tradeoff between average and 5th percentile user rates, while achieving performance close to, and even in certain cases outperforming, that of a centralized information-theoretic baseline.
Posted Content

PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning

TL;DR: In this paper, the authors propose a permutation invariant critic that yields identical output irrespective of the ordering of the agents, enabling the model to scale to 30 times more agents and to achieve improvements in test episode reward of 15% to 50% on the challenging multi-agent particle environment.
Journal ArticleDOI

SMIX(λ): Enhancing Centralized Value Functions for Cooperative Multi-Agent Reinforcement Learning

TL;DR: Experiments on the StarCraft Multi-Agent Challenge (SMAC) benchmark show that the proposed SMIX(λ) algorithm outperforms several state-of-the-art MARL methods by a large margin, and can be used as a general tool to improve the overall performance of a centralized-training-with-decentralized-execution (CTDE) method by enhancing the evaluation quality of its centralized value function (CVF).
Posted Content

Action Semantics Network: Considering the Effects of Actions in Multiagent Systems

TL;DR: A novel network architecture, named Action Semantics Network (ASN), is proposed that characterizes different actions' influence on other agents using neural networks based on the action semantics between them and can be easily combined with existing deep reinforcement learning algorithms to boost their performance.
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Journal ArticleDOI

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Posted Content

Empirical evaluation of gated recurrent neural networks on sequence modeling

TL;DR: Advanced recurrent units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the recently proposed gated recurrent unit (GRU), are evaluated on sequence modeling tasks; the GRU is found to perform comparably to the LSTM.

Deep reinforcement learning with double Q-learning

TL;DR: In this article, the authors show that the DQN algorithm suffers from substantial overestimation in some games in the Atari 2600 domain, and they propose a specific adaptation to the algorithm and show that this algorithm not only reduces the observed overestimations, but also leads to much better performance on several games.
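The adaptation summarised above decouples action selection from action evaluation: the online network picks the greedy next action, while the target network scores it, which curbs the overestimation bias of the single `max` in standard Q-learning. A minimal sketch of that target computation, with illustrative names and toy values not taken from the paper:

```python
def double_q_target(q_online_next, q_target_next, reward, gamma=0.99):
    """Double Q-learning target for one transition.

    q_online_next: next-state action values from the online network (selects).
    q_target_next: next-state action values from the target network (evaluates).
    """
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]

# Toy case: the online net overrates action 1 (5.0), but the target net's
# estimate (0.5) is used for evaluation, damping the overestimation.
target = double_q_target([1.0, 5.0], [1.2, 0.5], reward=0.0)
assert target == 0.99 * 0.5
```

A single network using its own `max` over `[1.2, 0.5]` combined with selection over `[1.0, 5.0]` would instead propagate the inflated estimate; the decoupling is the whole fix.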
Journal ArticleDOI

Learning from delayed rewards

TL;DR: This thesis introduces Q-learning, a model-free reinforcement learning method that learns the values of state-action pairs directly from delayed rewards, without requiring a model of the environment.
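The core of "Learning from delayed rewards" is the one-step tabular Q-learning update, Q(s,a) ← Q(s,a) + α·(r + γ·max_a′ Q(s′,a′) − Q(s,a)). A minimal sketch with hypothetical states and actions (the table layout and values are illustrative only):

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One-step tabular Q-learning update toward the bootstrapped target."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

# Two toy states, two actions each, all values initialised to zero.
Q = {0: {"left": 0.0, "right": 0.0},
     1: {"left": 0.0, "right": 0.0}}

q_update(Q, s=0, a="right", r=1.0, s_next=1)
assert Q[0]["right"] == 0.5  # 0 + 0.5 * (1.0 + 0.9 * 0 - 0)
```

Repeated updates of this form propagate delayed rewards backwards through the state space, which is why the method needs no model of the environment's dynamics.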