Title:
Adaptable and Scalable Multi-Agent Graph-Attention Communication
Author(s)
Niu, Yaru
Advisor(s)
Gombolay, Matthew
Abstract
High-performing teams learn effective communication strategies to judiciously share information and reduce communication overhead. Within multi-agent reinforcement learning, synthesizing effective policies requires reasoning about when to communicate, whom to communicate with, and how to process messages. Meanwhile, in real-world problems, training policies and communication strategies that generalize across multiple tasks and adapt to unseen tasks can improve learning efficiency in multi-agent systems. However, many methods in the current literature struggle to efficiently learn a dynamic communication topology, and learning adaptable and scalable multi-agent communication remains a challenge. This thesis develops algorithms to tackle these two problems.
First, I propose a novel multi-agent reinforcement learning algorithm, Multi-Agent Graph-attention Communication (MAGIC), with a graph-attention communication protocol in which we learn 1) a Scheduler to decide when to communicate and whom to address messages to, and 2) a Message Processor using Graph Attention Networks (GATs) with dynamic graphs to process communication signals. The Scheduler consists of a graph attention encoder and a differentiable attention mechanism, and it outputs dynamic, differentiable graphs to the Message Processor, enabling the Scheduler and Message Processor to be trained end-to-end. We evaluate our approach on a variety of cooperative tasks, including Google Research Football. Our method outperforms baselines across all domains, achieving an approximately 10.5% increase in reward in the most challenging domain. We also show that MAGIC communicates 27.4% more efficiently on average than baselines, is robust to stochasticity, and scales to larger state-action spaces. Finally, we demonstrate MAGIC on a physical, multi-robot testbed.
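As a rough, hypothetical illustration of this architecture (a minimal sketch under assumptions, not the thesis's actual implementation), the code below shows a Scheduler that emits a soft, differentiable communication graph and a single GAT-style layer that aggregates messages along that graph; all module names, layer choices, and the gating scheme are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Scheduler(nn.Module):
    # Hypothetical stand-in for MAGIC's Scheduler: encodes agent states and
    # emits a soft (differentiable) adjacency matrix as the communication graph.
    def __init__(self, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(hidden_dim, hidden_dim)  # placeholder for the graph-attention encoder
        self.attn_src = nn.Linear(hidden_dim, 1)
        self.attn_dst = nn.Linear(hidden_dim, 1)

    def forward(self, h):  # h: (n_agents, hidden_dim)
        e = torch.tanh(self.encoder(h))
        logits = self.attn_src(e) + self.attn_dst(e).T  # (n_agents, n_agents)
        return torch.sigmoid(logits)  # soft edge weights in [0, 1]

class MessageProcessor(nn.Module):
    # Hypothetical single GAT-style layer that aggregates messages along
    # the Scheduler's soft graph.
    def __init__(self, hidden_dim):
        super().__init__()
        self.W = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.a = nn.Linear(2 * hidden_dim, 1, bias=False)

    def forward(self, h, adj):  # adj: (n_agents, n_agents) soft graph
        z = self.W(h)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.a(pairs)).squeeze(-1)  # (n, n) attention logits
        alpha = F.softmax(scores, dim=-1) * adj           # gate attention by the soft graph
        alpha = alpha / (alpha.sum(dim=-1, keepdim=True) + 1e-8)
        return F.elu(alpha @ z)                           # one aggregated message per agent

# Usage: gradients flow from the processed messages back through the
# Scheduler's soft graph, which is what permits end-to-end training.
n_agents, dim = 4, 32
h = torch.randn(n_agents, dim)
scheduler, processor = Scheduler(dim), MessageProcessor(dim)
messages = processor(h, scheduler(h))  # (n_agents, dim)
```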
Second, building on MAGIC, I present a multi-agent multi-task reinforcement learning training scheme, MT-MAGIC, and develop a multi-agent meta-reinforcement learning framework, Meta-MAGIC. Both methods can generalize and adapt to unseen tasks with different team sizes. Meta-MAGIC takes an initial step toward using an RNN architecture to perform the adaptation process in multi-agent meta-reinforcement learning. Through experiments, we find that Meta-MAGIC and MT-MAGIC outperform the baseline by a notable margin in multi-task training and generalize well to new tasks. Meta-MAGIC adapts quickly to new tasks and maintains the best performance among all methods as it interacts with unseen scenarios in Predator-Prey. Fine-tuning the models pre-trained by MT-MAGIC achieves better performance on new tasks than training from scratch, using only 11.28% of the training epochs.
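As a rough illustration of RNN-based adaptation (a sketch under assumptions, not Meta-MAGIC's published architecture), the recurrent policy below conditions on the observation, the previous action, and the previous reward, so its hidden state accumulates task-specific experience and the policy adapts within an episode without gradient updates; all names and dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class RNNAdaptationPolicy(nn.Module):
    # Hypothetical recurrent policy: the GRU hidden state accumulates task
    # experience from (observation, previous action, previous reward) tuples,
    # so the policy can adapt to a new task within an episode without any
    # gradient updates. Names and dimensions are illustrative assumptions.
    def __init__(self, obs_dim, act_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim + act_dim + 1, hidden_dim)
        self.policy_head = nn.Linear(hidden_dim, act_dim)

    def forward(self, obs, prev_action_onehot, prev_reward, h):
        # obs: (batch, obs_dim); prev_action_onehot: (batch, act_dim)
        # prev_reward: (batch, 1); h: (batch, hidden_dim)
        x = torch.cat([obs, prev_action_onehot, prev_reward], dim=-1)
        h = self.rnn(x, h)                 # task context is carried in h
        return self.policy_head(h), h      # action logits and updated state
```

In this sketch, adaptation amounts to rolling the hidden state forward on experience from the new task; resetting the hidden state returns the policy to its pre-adaptation behavior.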
Date Issued
2022-05-04
Resource Type
Text
Resource Subtype
Thesis