Author

Muning Wen

Bio: Muning Wen is an academic researcher. The author has contributed to research in topics: Reinforcement learning & Task (computing). The author has an h-index of 2 and has co-authored 4 publications receiving 6 citations.

Papers
Posted Content
TL;DR: In this article, the authors extend the theory of trust region learning to multi-agent reinforcement learning (MARL) and develop the Heterogeneous-Agent Trust Region Policy Optimisation (HATRPO) and Heterogeneous-Agent Proximal Policy Optimisation (HAPPO) algorithms.
Abstract: Trust region methods rigorously enabled reinforcement learning (RL) agents to learn monotonically improving policies, leading to superior performance on a variety of tasks. Unfortunately, when it comes to multi-agent reinforcement learning (MARL), the property of monotonic improvement may not simply apply; this is because agents, even in cooperative games, could have conflicting directions of policy updates. As a result, achieving a guaranteed improvement on the joint policy where each agent acts individually remains an open challenge. In this paper, we extend the theory of trust region learning to MARL. Central to our findings are the multi-agent advantage decomposition lemma and the sequential policy update scheme. Based on these, we develop Heterogeneous-Agent Trust Region Policy Optimisation (HATRPO) and Heterogeneous-Agent Proximal Policy Optimisation (HAPPO) algorithms. Unlike many existing MARL algorithms, HATRPO/HAPPO do not need agents to share parameters, nor do they need any restrictive assumptions on decomposability of the joint value function. Most importantly, we justify in theory the monotonic improvement property of HATRPO/HAPPO. We evaluate the proposed methods on a series of Multi-Agent MuJoCo and StarCraft II tasks. Results show that HATRPO and HAPPO significantly outperform strong baselines such as IPPO, MAPPO and MADDPG on all tested tasks, therefore establishing a new state of the art.

2 citations
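
The sequential policy update scheme referred to in the abstract above updates agents one after another, with each later agent optimising the joint advantage re-weighted by the probability ratios of the agents already updated. Below is a minimal Python sketch of that idea, assuming hypothetical per-agent ppo_update and ratio helpers; it is an illustration of the scheme, not the authors' implementation.

    import random
    import numpy as np

    def sequential_happo_update(agents, obs, acts, joint_adv, clip_eps=0.2):
        """Update agents in a random order; each later agent sees the joint advantage
        re-weighted by the probability ratios of the agents updated before it."""
        order = list(range(len(agents)))
        random.shuffle(order)                  # random permutation of agents per iteration
        weight = np.ones_like(joint_adv)       # running product of earlier agents' ratios
        for i in order:
            weighted_adv = weight * joint_adv
            agents[i].ppo_update(obs, acts[i], weighted_adv, clip_eps)   # hypothetical PPO step
            weight = weight * agents[i].ratio(obs, acts[i])              # pi_new / pi_old for agent i
        return agents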

Posted Content
TL;DR: In this article, the authors quantified the contributions of the number of agents and agents' explorations to the variance of MAPG estimators and derived the optimal baseline (OB) that achieves the minimal variance.
Abstract: Policy gradient (PG) methods are popular reinforcement learning (RL) methods where a baseline is often applied to reduce the variance of gradient estimates. In multi-agent RL (MARL), although the PG theorem can be naturally extended, the effectiveness of multi-agent PG (MAPG) methods degrades as the variance of gradient estimates increases rapidly with the number of agents. In this paper, we offer a rigorous analysis of MAPG methods by, firstly, quantifying the contributions of the number of agents and agents' explorations to the variance of MAPG estimators. Based on this analysis, we derive the optimal baseline (OB) that achieves the minimal variance. In comparison to the OB, we measure the excess variance of existing MARL algorithms such as vanilla MAPG and COMA. For settings that use deep neural networks, we also propose a surrogate version of OB, which can be seamlessly plugged into any existing PG methods in MARL. On benchmarks of Multi-Agent MuJoCo and StarCraft challenges, our OB technique effectively stabilises training and improves the performance of multi-agent PPO and COMA algorithms by a significant margin.

2 citations
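
The variance-minimising ("optimal") baseline for a score-function gradient estimator has a well-known closed form, b* = E[||grad log pi||^2 * Q] / E[||grad log pi||^2], and the paper's multi-agent OB is derived in that spirit. The Python sketch below only illustrates the generic single-state, discrete-action version of this formula, not the authors' exact multi-agent derivation.

    import numpy as np

    def optimal_baseline(probs, q_values, score_sq_norms):
        """probs[a] = pi(a|s), q_values[a] = Q(s, a),
        score_sq_norms[a] = ||grad log pi(a|s)||^2."""
        weights = probs * score_sq_norms
        return float(np.sum(weights * q_values) / np.sum(weights))

    # Example: a single state with three actions.
    print(optimal_baseline(np.array([0.5, 0.3, 0.2]),
                           np.array([1.0, 0.0, -1.0]),
                           np.array([2.0, 4.0, 9.0])))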

Posted Content
TL;DR: MALib as mentioned in this paper is a scalable and efficient computing framework for population-based multi-agent reinforcement learning (PB-MARL) which is composed of a centralized task dispatching model and a programming architecture named Actor-Evaluator-Learner, which achieves high parallelism for both training and sampling.
Abstract: Population-based multi-agent reinforcement learning (PB-MARL) refers to the series of methods nested with reinforcement learning (RL) algorithms, which produces a self-generated sequence of tasks arising from the coupled population dynamics. By leveraging auto-curricula to induce a population of distinct emergent strategies, PB-MARL has achieved impressive success in tackling multi-agent tasks. Despite the remarkable prior art on distributed RL frameworks, PB-MARL poses new challenges for parallelizing the training frameworks due to the additional complexity of multiple nested workloads between sampling, training and evaluation involved with heterogeneous policy interactions. To solve these problems, we present MALib, a scalable and efficient computing framework for PB-MARL. Our framework comprises three key components: (1) a centralized task dispatching model, which supports the self-generated tasks and scalable training with heterogeneous policy combinations; (2) a programming architecture named Actor-Evaluator-Learner, which achieves high parallelism for both training and sampling, and meets the evaluation requirement of auto-curriculum learning; (3) a higher-level abstraction of MARL training paradigms, which enables efficient code reuse and flexible deployments on different distributed computing paradigms. Experiments on a series of complex tasks such as multi-agent Atari Games show that MALib achieves a throughput higher than 40K FPS on a single machine with 32 CPU cores, a 5x speedup over RLlib, and at least a 3x speedup over OpenSpiel in multi-agent training tasks. MALib is publicly available at this https URL.

2 citations
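
The Actor-Evaluator-Learner pattern described above separates sampling, training and evaluation into independent roles coordinated by a central dispatcher. The toy single-process loop below, with entirely hypothetical names, only illustrates that division of labour; the real framework parallelises the three roles across processes and machines.

    import random

    def actor(policy):                      # sampling role: produce a toy rollout
        return [random.gauss(0.0, policy["sigma"]) for _ in range(8)]

    def learner(policy, batch):             # training role: update the policy from data
        policy["sigma"] *= 0.95             # stand-in for a gradient step (batch unused in this toy)
        return policy

    def evaluator(policy):                  # evaluation role: score the current policy
        return -policy["sigma"]             # in MALib this step also feeds new auto-curriculum tasks

    policy = {"sigma": 1.0}
    for step in range(5):                   # stand-in for the centralized task dispatcher
        batch = actor(policy)
        policy = learner(policy, batch)
        print(f"step={step} eval={evaluator(policy):.3f}")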

Posted Content
TL;DR: In this article, the authors formulate the safe multi-agent learning problem as a constrained Markov game and solve it with policy optimisation methods, namely Multi-Agent Constrained Policy Optimisation (MACPO) and MAPPO-Lagrangian.
Abstract: Developing reinforcement learning algorithms that satisfy safety constraints is becoming increasingly important in real-world applications. In multi-agent reinforcement learning (MARL) settings, policy optimisation with safety awareness is particularly challenging because each individual agent has to not only meet its own safety constraints, but also consider those of others so that their joint behaviour can be guaranteed safe. Despite its importance, the problem of safe multi-agent learning has not been rigorously studied; very few solutions have been proposed, and no shareable testing environment or benchmarks exist. To fill these gaps, in this work, we formulate the safe MARL problem as a constrained Markov game and solve it with policy optimisation methods. Our solutions -- Multi-Agent Constrained Policy Optimisation (MACPO) and MAPPO-Lagrangian -- leverage the theories from both constrained policy optimisation and multi-agent trust region learning. Crucially, our methods enjoy theoretical guarantees of both monotonic improvement in reward and satisfaction of safety constraints at every iteration. To examine the effectiveness of our methods, we develop the benchmark suite of Safe Multi-Agent MuJoCo that involves a variety of MARL baselines. Experimental results show that MACPO/MAPPO-Lagrangian can consistently satisfy safety constraints while achieving performance comparable to strong baselines.
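
The MAPPO-Lagrangian name suggests the standard primal-dual recipe for constrained policy optimisation: optimise the reward advantage minus a multiplier times the cost advantage, then adjust the multiplier by dual ascent on the constraint violation. The sketch below shows that generic recipe only; the batch fields and the agent.ppo_update method are hypothetical placeholders, not the authors' code.

    def lagrangian_step(agent, batch, lam, cost_limit, lam_lr=0.01):
        """One primal-dual step: the policy maximises reward advantage minus lam * cost
        advantage, then lam is dual-ascended towards enforcing mean episode cost <= cost_limit."""
        mixed_adv = batch["adv_reward"] - lam * batch["adv_cost"]
        agent.ppo_update(batch["obs"], batch["act"], mixed_adv)              # hypothetical PPO step
        lam = max(0.0, lam + lam_lr * (batch["mean_ep_cost"] - cost_limit))  # keep the multiplier >= 0
        return lam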

Cited by
Posted Content
TL;DR: In this paper, the authors quantify the non-transitivity in chess through real-world data from human players and find that high degrees of non-transitivity tend to prevent human players from making progress on their Elo rating, whereas progress is easier to make at rating levels where the degree of non-transitivity is lower.
Abstract: It has long been believed that Chess is the \emph{Drosophila} of Artificial Intelligence (AI). Studying Chess can productively provide valid knowledge about complex systems. Although remarkable progress has been made on solving Chess, the geometrical landscape of Chess in the strategy space is still mysterious. Judging by AI-generated strategies, researchers hypothesised that the strategy space of Chess possesses a spinning top geometry, with the upright axis representing the \emph{transitive} dimension (e.g., A beats B, B beats C, A beats C), and the radial axis representing the \emph{non-transitive} dimension (e.g., A beats B, B beats C, C beats A). However, it is unclear whether such a hypothesis holds for real-world strategies. In this paper, we quantify the non-transitivity in Chess through real-world data from human players. Specifically, we performed two ways of non-transitivity quantifications -- Nash Clustering and counting the number of Rock-Paper-Scissor cycles -- on over one billion match data from Lichess and FICS. Our findings positively indicate that the strategy space occupied by real-world Chess strategies demonstrates a spinning top geometry, and more importantly, there exists a strong connection between the degree of non-transitivity and the progression of a Chess player's rating. In particular, high degrees of non-transitivity tend to prevent human players from making progress on their Elo rating, whereas progressions are easier to make at the level of ratings where the degree of non-transitivity is lower. Additionally, we also investigate the implication of the degree of non-transitivity for population-based training methods. By considering \emph{fixed-memory Fictitious Play} as a proxy, we reach the conclusion that maintaining large-size and diverse populations of strategies is imperative to training effective AI agents in solving Chess types of games.

5 citations
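
One of the two quantification methods mentioned above, counting Rock-Paper-Scissors cycles, amounts to counting directed 3-cycles in the empirical "A beats B" relation among strategies. A minimal Python sketch follows (illustrative only; how the paper aggregates over one billion real games into such a relation is more involved).

    import itertools
    import numpy as np

    def count_rps_cycles(beats):
        """beats[i, j] is True iff strategy i beats strategy j; returns the number
        of unordered strategy triples that form a cycle, in either direction."""
        n = beats.shape[0]
        cycles = 0
        for a, b, c in itertools.combinations(range(n), 3):
            if (beats[a, b] and beats[b, c] and beats[c, a]) or \
               (beats[b, a] and beats[c, b] and beats[a, c]):
                cycles += 1
        return cycles

    # The canonical Rock-Paper-Scissors relation contains exactly one such cycle.
    rps = np.array([[0, 1, 0],
                    [0, 0, 1],
                    [1, 0, 0]], dtype=bool)
    print(count_rps_cycles(rps))   # -> 1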

Posted Content
TL;DR: In this article, the authors extend the theory of trust region learning to multi-agent reinforcement learning (MARL) and develop the Heterogeneous-Agent Trust Region Policy Optimisation (HATRPO) and Heterogeneous-Agent Proximal Policy Optimisation (HAPPO) algorithms.
Abstract: Trust region methods rigorously enabled reinforcement learning (RL) agents to learn monotonically improving policies, leading to superior performance on a variety of tasks. Unfortunately, when it comes to multi-agent reinforcement learning (MARL), the property of monotonic improvement may not simply apply; this is because agents, even in cooperative games, could have conflicting directions of policy updates. As a result, achieving a guaranteed improvement on the joint policy where each agent acts individually remains an open challenge. In this paper, we extend the theory of trust region learning to MARL. Central to our findings are the multi-agent advantage decomposition lemma and the sequential policy update scheme. Based on these, we develop Heterogeneous-Agent Trust Region Policy Optimisation (HATRPO) and Heterogeneous-Agent Proximal Policy Optimisation (HAPPO) algorithms. Unlike many existing MARL algorithms, HATRPO/HAPPO do not need agents to share parameters, nor do they need any restrictive assumptions on decomposability of the joint value function. Most importantly, we justify in theory the monotonic improvement property of HATRPO/HAPPO. We evaluate the proposed methods on a series of Multi-Agent MuJoCo and StarCraft II tasks. Results show that HATRPO and HAPPO significantly outperform strong baselines such as IPPO, MAPPO and MADDPG on all tested tasks, therefore establishing a new state of the art.

2 citations

Posted Content
TL;DR: In this article, the authors formulate the safe multi-agent learning problem as a constrained Markov game and solve it with policy optimisation methods, namely Multi-Agent Constrained Policy Optimisation (MACPO) and MAPPO-Lagrangian.
Abstract: Developing reinforcement learning algorithms that satisfy safety constraints is becoming increasingly important in real-world applications. In multi-agent reinforcement learning (MARL) settings, policy optimisation with safety awareness is particularly challenging because each individual agent has to not only meet its own safety constraints, but also consider those of others so that their joint behaviour can be guaranteed safe. Despite its importance, the problem of safe multi-agent learning has not been rigorously studied; very few solutions have been proposed, and no shareable testing environment or benchmarks exist. To fill these gaps, in this work, we formulate the safe MARL problem as a constrained Markov game and solve it with policy optimisation methods. Our solutions -- Multi-Agent Constrained Policy Optimisation (MACPO) and MAPPO-Lagrangian -- leverage the theories from both constrained policy optimisation and multi-agent trust region learning. Crucially, our methods enjoy theoretical guarantees of both monotonic improvement in reward and satisfaction of safety constraints at every iteration. To examine the effectiveness of our methods, we develop the benchmark suite of Safe Multi-Agent MuJoCo that involves a variety of MARL baselines. Experimental results show that MACPO/MAPPO-Lagrangian can consistently satisfy safety constraints while achieving performance comparable to strong baselines.

Posted Content
TL;DR: In this paper, it was shown that Independent Natural Policy Gradient always converges in the last iterate using constant learning rates in Markov Potential Games (MPGs), which include cooperative games as a special case.
Abstract: Multi-agent reinforcement learning has been successfully applied to fully-cooperative and fully-competitive environments, but little is currently known about mixed cooperative/competitive environments. In this paper, we focus on a particular class of multi-agent mixed cooperative/competitive stochastic games called Markov Potential Games (MPGs), which include cooperative games as a special case. Recent results have shown that independent policy gradient converges in MPGs, but it was not known whether Independent Natural Policy Gradient converges in MPGs as well. We prove that Independent Natural Policy Gradient always converges in the last iterate using constant learning rates. The proof deviates from existing approaches, and the main challenge lies in the fact that Markov Potential Games do not have unique optimal values (unlike single-agent settings), so different initializations can lead to different limit point values. We complement our theoretical results with experiments that indicate that Natural Policy Gradient outperforms Policy Gradient in routing games and congestion games.
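
For tabular softmax policies, a natural policy gradient step is known to reduce to an exponentiated-advantage (multiplicative-weights) update, which makes an independent per-agent implementation very simple. The sketch below shows one such independent step under that assumption (advantages are taken as given; this illustrates the update rule, not the paper's convergence proof).

    import numpy as np

    def independent_npg_step(policy, advantage, eta):
        """policy[s, a]: this agent's pi(a|s); advantage[s, a]: its advantage under the
        current joint policy; eta: a constant learning rate, as studied in the paper."""
        new_policy = policy * np.exp(eta * advantage)
        return new_policy / new_policy.sum(axis=1, keepdims=True)   # renormalise per state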