Title:
Learning Nash equilibria in zero-sum stochastic games via entropy-regularized policy approximation
Authors
Zhang, Qifan
Advisors
Tsiotras, Panagiotis
Howard, Ayanna M.
Abstract
In this thesis, we explore the use of policy approximation to reduce the computational cost of learning Nash equilibria in multi-agent reinforcement learning. Existing multi-agent reinforcement learning methods are either computationally demanding or do not necessarily converge to a Nash equilibrium without additional stringent assumptions. We propose a new algorithm for zero-sum stochastic games in which each agent simultaneously learns a Nash policy and an entropy-regularized policy. The two policies help each other towards convergence: the former guides the latter to the desired Nash equilibrium, and the latter serves as an efficient approximation of the former.
We demonstrate the possibility of transferring previous training experience to a different environment, which enables the agents to adapt quickly. We also provide a dynamic hyper-parameter scheduling scheme to further expedite convergence. Empirical results on a number of stochastic games show that the proposed algorithm converges to the Nash equilibrium while exhibiting an order-of-magnitude speed-up over existing algorithms.
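As a purely illustrative aside (not the algorithm developed in this thesis), the following minimal Python sketch shows the entropy-regularized equilibrium idea for a single-stage zero-sum matrix game: each player's soft (softmax) best response is damped toward a fixed point, and the temperature tau controls the strength of the entropy regularization. The function name and the parameters tau, lr, and iters are assumptions chosen for the example.

    import numpy as np

    def softmax(z):
        # Numerically stable softmax over a payoff vector.
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def entropy_regularized_equilibrium(A, tau=1.0, lr=0.1, iters=5000):
        # Illustrative sketch, not the thesis's algorithm.
        # Row player maximizes x^T A y, column player minimizes it.
        # The entropy-regularized equilibrium satisfies
        #   x ∝ exp(A y / tau),  y ∝ exp(-A^T x / tau),
        # so we iterate damped softmax best responses toward that fixed point.
        m, n = A.shape
        x = np.full(m, 1.0 / m)   # row player's mixed strategy
        y = np.full(n, 1.0 / n)   # column player's mixed strategy
        for _ in range(iters):
            x_br = softmax(A @ y / tau)       # row player's soft best response
            y_br = softmax(-A.T @ x / tau)    # column player's soft best response
            x = (1 - lr) * x + lr * x_br      # damped update for stability
            y = (1 - lr) * y + lr * y_br
        return x, y

    if __name__ == "__main__":
        # Matching pennies: the unregularized Nash equilibrium is uniform play.
        A = np.array([[1.0, -1.0], [-1.0, 1.0]])
        x, y = entropy_regularized_equilibrium(A, tau=0.5)
        print(x, y)   # both close to [0.5, 0.5]

With a small temperature, the regularized equilibrium is close to the unregularized Nash equilibrium; the regularization term is what makes each player's soft best response smooth and the fixed-point iteration well behaved.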
Date Issued
2020-07-27
Resource Type
Text
Resource Subtype
Thesis