Open Access Proceedings Article
QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson
- pp 4292-4301
TLDR
QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations, and structurally enforces that the joint action-value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning.
About
This article is published in the International Conference on Machine Learning. The article was published on 2018-07-03 and is currently open access. It has received 505 citations to date. The article focuses on the topics: Reinforcement learning & Monotonic function.
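The monotonicity constraint described in the TLDR can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the single linear layer with absolute-value weights stands in for QMIX's hypernetwork-generated mixing network, and the random Q-values are illustrative. The point it demonstrates is that when Q_tot is monotonic in each agent's Q_i, the joint argmax decomposes into independent per-agent argmaxes.

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions = 2, 3

# Hypothetical per-agent values Q_i(tau_i, a_i), one row per agent.
agent_qs = rng.normal(size=(n_agents, n_actions))

# A one-layer stand-in for the mixing network: weights are forced
# non-negative (QMIX uses absolute values of hypernetwork outputs),
# which makes Q_tot monotonic in every Q_i.
w = np.abs(rng.normal(size=n_agents))
b = rng.normal()

def q_tot(actions):
    """Joint action-value for one joint action (a_1, ..., a_n)."""
    chosen = agent_qs[np.arange(n_agents), list(actions)]
    return float(w @ chosen + b)

# Monotonicity lets each agent maximise its own Q_i independently --
# this decomposition is what makes off-policy maximisation tractable.
greedy = tuple(agent_qs.argmax(axis=1))

# Brute-force check over all n_actions ** n_agents joint actions.
best = max(product(range(n_actions), repeat=n_agents), key=q_tot)
assert greedy == best
```

The brute-force comparison scales exponentially in the number of agents; the decomposed argmax is what avoids that cost during learning.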
Citations
Posted Content
Hierarchical Deep Multiagent Reinforcement Learning
Hongyao Tang, Jianye Hao, Tangjie Lv, Yingfeng Chen, Zongzhang Zhang, Hangtian Jia, Chunxu Ren, Yan Zheng, Changjie Fan, Li Wang
TL;DR: This paper decomposes the original MARL problem into hierarchies and investigates how effective policies can be learned hierarchically in synchronous/asynchronous hierarchical MARL frameworks, and proposes a new experience replay mechanism, named as Augmented Concurrent Experience Replay (ACER).
Posted Content
Modelling the Dynamic Joint Policy of Teammates with Attention Multi-agent DDPG
TL;DR: Attention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) as discussed by the authors extends DDPG with an attention mechanism to model the dynamic joint policy of teammates, ensuring that the collected information can be processed efficiently.
Proceedings Article
Learning Multi-Robot Decentralized Macro-Action-Based Policies via a Centralized Q-Net
TL;DR: A macro-action-based decentralized multi-agent double deep recurrent Q-net (MacDec-MADDRQN), which trains each decentralized Q-net using a centralized Q-net for action selection, is proposed.
Book Chapter
Multi-Agent Hierarchical Reinforcement Learning with Dynamic Termination
TL;DR: In this article, the authors propose a dynamic termination Bellman equation that allows the agents to flexibly terminate their options in order to balance flexibility and predictability in multi-agent systems.
Proceedings Article
Attention-based Reinforcement Learning for Real-Time UAV Semantic Communication
TL;DR: In this paper, a graph attention exchange network (GAXNet) is proposed to improve the reliability of air-to-ground UAV communication in terms of latency and error rate.
References
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Journal Article
Human-level control through deep reinforcement learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Posted Content
Empirical evaluation of gated recurrent neural networks on sequence modeling
TL;DR: These advanced recurrent units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU), are found to be comparable to LSTM.
Deep reinforcement learning with double Q-learning
TL;DR: In this article, the authors show that the DQN algorithm suffers from substantial overestimation in some games in the Atari 2600 domain, and they propose a specific adaptation to the algorithm and show that this algorithm not only reduces the observed overestimations, but also leads to much better performance on several games.
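The overestimation fix summarised above can be shown in a short sketch. This is an illustrative assumption-laden toy, not the paper's Atari setup: the random vectors stand in for the online and target networks' Q-values at one next state, and the comparison shows how decoupling action selection from evaluation caps the bootstrap target.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions, gamma, reward = 4, 0.99, 1.0

# Hypothetical Q-values at the next state from two networks.
q_online = rng.normal(size=n_actions)   # online (selection) network
q_target = rng.normal(size=n_actions)   # target (evaluation) network

# Standard DQN target: the target network both selects and evaluates
# the next action, so estimation noise inflates the max (overestimation).
dqn_target = reward + gamma * q_target.max()

# Double DQN target: the online network selects the action, the target
# network evaluates it, decoupling selection from evaluation.
a_star = int(q_online.argmax())
double_dqn_target = reward + gamma * q_target[a_star]

# Since q_target[a_star] <= q_target.max(), the double-DQN target
# never exceeds the standard DQN target.
assert double_dqn_target <= dqn_target
```

The inequality in the final line holds for any pair of Q-vectors, which is why the adaptation systematically reduces the observed overestimations.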
Journal Article
Learning from delayed rewards
TL;DR: This work introduces Q-learning, a model-free method for learning optimal behaviour from delayed rewards by incrementally updating estimates of the expected return of each state-action pair.