Leveraging Monte Carlo Tree Search to Improve Teaming Performance in Multi-Agent Adversarial Environments

Author(s)
Connelly, Matthew Ryan
Abstract
This thesis proposes three key advancements for planning in multi-agent adversarial environments. The first contribution is a pair of multi-agent Monte Carlo Tree Search frameworks, Grouped Action Multi-Agent Monte Carlo Tree Search (GAMA-MCTS) and Split Action Multi-Agent Monte Carlo Tree Search (SAMA-MCTS), which incorporate inherent strategy and coordination for planning in multi-agent adversarial environments with continuous state and action spaces. The second contribution is a root parallelization method, Root Parallelization with Varying Agent Ordering (RP-VAO), which overcomes a scalability issue in the SAMA-MCTS framework: as the number of players grows, SAMA-MCTS requires more simulations to achieve consistent results than real-time settings allow. Varying the ordering of the agents across multiple trees mitigates this challenge. The third contribution is the insertion of closed-loop, or feedback, actions into the default action set, which introduces adaptability, responsiveness, and strategy into the decision-making process. We demonstrate the improvements delivered by these frameworks and associated methods through numerous tests in which a team of agents utilizing MCTS engages a team of higher-performing baseline adversaries.
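To make the RP-VAO idea concrete, the following is a minimal sketch (not the thesis's implementation) of root parallelization with varied agent orderings over a split-action search. Each tree assigns one agent's action per tree level in a different permutation of the team, runs standard UCT iterations on a toy joint-reward task, and the trees' greedy recommendations are combined by per-agent majority vote. The toy domain, reward function, and all function names here are illustrative assumptions.

```python
import math
import random

ACTIONS = [0, 1]  # hypothetical per-agent action set for the toy domain

def team_reward(joint):
    # Toy team objective: reward is the fraction of agents choosing action 1,
    # so the coordinated optimum is all agents picking 1.
    return sum(joint) / len(joint)

class Node:
    def __init__(self):
        self.children = {}  # action -> child Node
        self.visits = 0
        self.value = 0.0

def uct_select(node, c=1.4):
    # Standard UCT: exploit mean value, explore rarely visited children.
    return max(node.children.items(),
               key=lambda kv: kv[1].value / (kv[1].visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (kv[1].visits + 1e-9)))

def simulate(root, order, n_agents):
    # One split-action MCTS iteration: descend one tree level per agent in the
    # given ordering, expanding untried actions, then back up the team reward.
    path, chosen, node = [root], {}, root
    for agent in order:
        untried = [a for a in ACTIONS if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node()
        else:
            a, _ = uct_select(node)
        chosen[agent] = a
        node = node.children[a]
        path.append(node)
    r = team_reward([chosen[i] for i in range(n_agents)])
    for n in path:
        n.visits += 1
        n.value += r

def recommend(root, order):
    # Greedy descent by visit count yields this tree's joint action.
    rec, node = {}, root
    for agent in order:
        a = max(node.children, key=lambda k: node.children[k].visits)
        rec[agent] = a
        node = node.children[a]
    return rec

def rp_vao(n_agents=3, n_trees=6, iters=500, seed=0):
    # Build several trees, each with a different agent ordering, and combine
    # their recommendations with a per-agent majority vote.
    random.seed(seed)
    orders = [random.sample(range(n_agents), n_agents) for _ in range(n_trees)]
    votes = [{a: 0 for a in ACTIONS} for _ in range(n_agents)]
    for order in orders:
        root = Node()
        for _ in range(iters):
            simulate(root, order, n_agents)
        for agent, a in recommend(root, order).items():
            votes[agent][a] += 1
    return [max(v, key=v.get) for v in votes]
```

On this toy task, `rp_vao()` recovers the coordinated joint action regardless of which agent happens to sit at a given tree's root, which is the intuition behind averaging out ordering bias across trees.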
Date
2024-05-16
Resource Type
Text
Resource Subtype
Thesis