Open Access Proceedings Article

QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

TLDR
QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations, and structurally enforces that the joint action-value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning.
About
This article is published in the International Conference on Machine Learning. The article was published on 2018-07-03 and is currently open access. It has received 505 citations to date. The article focuses on the topics: Reinforcement learning & Monotonic function.
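The monotonic factorisation described in the summary above can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's implementation: in QMIX the mixing weights are produced by hypernetworks conditioned on the global state, whereas here they are passed in as plain arrays, and the hidden activation is a ReLU rather than the ELU used in the paper. Monotonicity is enforced structurally by taking absolute values of the mixing weights.

```python
import numpy as np

def monotonic_mix(agent_qs, state_w1, state_b1, state_w2, state_b2):
    """Combine per-agent Q-values into a joint Q-value that is
    monotonic in each agent's Q.

    Because the mixing weights pass through abs() and the activation
    is non-decreasing, every partial derivative of the joint value
    with respect to an agent's Q is non-negative, so the argmax of
    the joint value decomposes into per-agent argmaxes.
    """
    w1 = np.abs(state_w1)                               # (n_agents, hidden)
    hidden = np.maximum(agent_qs @ w1 + state_b1, 0.0)  # monotone activation
    w2 = np.abs(state_w2)                               # (hidden,)
    return float(hidden @ w2 + state_b2)
```

Raising any single agent's Q-value can then never lower the joint value, which is what makes decentralised greedy action selection consistent with the centralised objective.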



Citations
Posted Content

Battlesnake challenge: A multi-agent reinforcement learning playground with human-in-the-loop

TL;DR: The results show that agents trained with the proposed HILL methods consistently outperform agents without HILL, and the reward-manipulation heuristic achieved the best performance in the online competition.
Posted Content

Moving Forward in Formation: A Decentralized Hierarchical Learning Approach to Multi-Agent Moving Together.

TL;DR: This paper proposes a novel decentralized partially observable RL algorithm that uses a hierarchical structure to decompose the multi-objective task into unrelated sub-tasks and calculates a theoretical weight that gives each task's reward equal influence on the final RL value function.
Posted Content

Local Advantage Actor-Critic for Robust Multi-Agent Deep Reinforcement Learning.

TL;DR: In this article, the authors propose a robust local advantage (ROLA) actor-critic, which allows each agent to learn an individual action-value function as a local critic as well as ameliorate environment non-stationarity via a novel centralized training approach based on a centralized critic.
Posted Content

Learning Complex Multi-Agent Policies in Presence of an Adversary.

TL;DR: This work considers the scenario of multi-agent deception in which multiple agents need to learn to cooperate and communicate in order to deceive an adversary, and employs a two-stage learning process to get the cooperating agents to learn such deceptive behaviors.
Posted Content

Inducing Cooperation via Team Regret Minimization based Multi-Agent Deep Reinforcement Learning.

TL;DR: A novel team regret-minimization (RM) based Bayesian MARL approach is proposed with three key contributions: a novel RM method to train cooperative agents as a team and obtain a team regret-based policy for that team; a novel method to decompose the team regret and generate a policy for each agent for decentralized execution; and, to further improve performance, a differential particle filter network to obtain an accurate estimate of each agent's state.
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
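The "constant error carousel" mentioned in this summary can be illustrated with a single LSTM step: the cell state is updated additively through gates, so error can flow across long time lags without repeated squashing. A minimal NumPy sketch follows; the stacked gate ordering (input, forget, output, candidate) and the dimensions are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step.  W: (4H, D), U: (4H, H), b: (4H,).

    The additive cell update c_new = f*c + i*g is the 'constant error
    carousel': gradients propagate through c via elementwise sums and
    products with gates, avoiding the vanishing caused by repeated
    nonlinear squashing of the recurrent state.
    """
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])         # input gate
    f = sigmoid(z[H:2 * H])    # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:])     # candidate cell value
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```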
Journal ArticleDOI

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Posted Content

Empirical evaluation of gated recurrent neural networks on sequence modeling

TL;DR: These advanced recurrent units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the recently proposed gated recurrent unit (GRU), are evaluated on sequence modeling, and the GRU is found to be comparable to the LSTM.

Deep reinforcement learning with double Q-learning

TL;DR: In this article, the authors show that the DQN algorithm suffers from substantial overestimations in some games in the Atari 2600 domain; they propose a specific adaptation to the algorithm and show that it not only reduces the observed overestimations but also leads to much better performance on several games.
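The adaptation summarized here, Double DQN, decouples action selection from action evaluation in the bootstrap target: the online network picks the greedy next action and the target network evaluates it. A minimal sketch of the target computation, with illustrative array inputs:

```python
import numpy as np

def double_dqn_target(reward, gamma, done, q_online_next, q_target_next):
    """Double DQN target for one transition.

    The online network selects the greedy next action; the target
    network evaluates it.  This decoupling reduces the overestimation
    bias of the standard max-based DQN target, which both selects and
    evaluates with the same (noisy) value estimates.
    """
    a_star = int(np.argmax(q_online_next))   # select with the online net
    bootstrap = q_target_next[a_star]        # evaluate with the target net
    return reward + gamma * (1.0 - float(done)) * bootstrap
```

With q_online_next = [1.0, 3.0, 2.0] the online net picks action 1, but the bootstrap value comes from the target net's estimate for that action, not from the target net's own maximum.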
Journal ArticleDOI

Learning from delayed rewards

TL;DR: This thesis introduces Q-learning, a model-free reinforcement learning method that learns optimal action-value functions, and hence optimal policies, directly from delayed rewards without requiring a model of the environment.
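Watkins' thesis "Learning from delayed rewards" introduced Q-learning. The core tabular update can be sketched as follows; the dict-of-dicts table and the transition values are illustrative, not from the thesis.

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step.

    Moves Q(s, a) toward the bootstrapped target
    r + gamma * max_a' Q(s', a').  Taking the max over next actions,
    regardless of which action the behaviour policy actually takes,
    is what makes Q-learning off-policy.
    """
    td_target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (td_target - Q[s][a])
    return Q[s][a]
```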