Title:
Mastering Reconnaissance Blind Chess with Reinforcement Learning

dc.contributor.advisor Gombolay, Matthew
dc.contributor.author Savelyev, Sergey
dc.contributor.committeeMember Natarajan, Manisha
dc.contributor.committeeMember Paleja, Rohan R
dc.contributor.department Computer Science
dc.date.accessioned 2020-11-09T17:00:51Z
dc.date.available 2020-11-09T17:00:51Z
dc.date.created 2020-05
dc.date.issued 2020-05
dc.date.submitted May 2020
dc.date.updated 2020-11-09T17:00:52Z
dc.description.abstract Research within Artificial Intelligence has often set the goal of autonomously playing games (e.g., Chess or Go) at or above human level. Novel machine learning-based agents have recently advanced the state of the art by achieving superhuman performance in increasingly complicated games. We believe that solving imperfect information games (i.e., games in which players lack full knowledge of the opponent's activities) should be the next goal in Artificial Intelligence research. We study Reconnaissance Blind Multi-Chess (RBMC), an imperfect information variant of Chess that poses a novel set of challenges which must be overcome before a computer can attain superhuman performance. Prior works have largely focused on reducing the problem to a game of standard Chess (i.e., with perfect information) by attempting to determine the true state of the chessboard. This procedure separates the problem of acquiring and applying gathered information from the move policy, allowing existing Chess agents to be used to choose nearly optimal moves. In contrast, our method trains a triple-headed neural network through self-play reinforcement learning, handling both the information-gathering and move-selection processes within one model. Because this agent does not solve a restricted version of the problem, it can execute strategies that exploit the imperfect information aspect of the game. We believe that such a learning method, given enough training time, should be able to outperform agents that simply reduce the problem to a standard game of Chess. In this thesis, we explore this hypothesis and algorithms for playing RBMC.
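The abstract describes a triple-headed network that handles sensing and move selection in one model. The following is a minimal numpy sketch of that idea, not the thesis's actual architecture: the input encoding (20 feature planes over an 8x8 board), the hidden width, the AlphaZero-style move-logit count, and the choice of a value function as the third head are all assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions (not specified in the abstract):
PLANES, MOVES, SENSES, HIDDEN = 20, 4672, 64, 128
FLAT = PLANES * 8 * 8

# Shared trunk and three heads: move policy, sense policy, value.
W_trunk = rng.normal(0.0, 0.05, (FLAT, HIDDEN))
W_move = rng.normal(0.0, 0.05, (HIDDEN, MOVES))
W_sense = rng.normal(0.0, 0.05, (HIDDEN, SENSES))
W_value = rng.normal(0.0, 0.05, (HIDDEN, 1))

def forward(boards):
    """One forward pass: boards has shape (batch, PLANES, 8, 8)."""
    h = np.maximum(boards.reshape(boards.shape[0], -1) @ W_trunk, 0.0)  # ReLU trunk
    move_logits = h @ W_move      # logits over candidate moves
    sense_logits = h @ W_sense    # logits over squares to sense (information gathering)
    value = np.tanh(h @ W_value)  # predicted outcome in (-1, 1)
    return move_logits, sense_logits, value

# A single placeholder belief-state encoding of the board.
board = rng.normal(size=(1, PLANES, 8, 8))
move_logits, sense_logits, value = forward(board)
```

In self-play training, the sense head and move head would be queried at the two decision points of each RBMC turn, while the value head provides the learning signal, mirroring how a single model can cover both processes.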
dc.description.degree Undergraduate
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/1853/63890
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Reconnaissance blind multi-chess
dc.subject Reinforcement learning
dc.title Mastering Reconnaissance Blind Chess with Reinforcement Learning
dc.type Text
dc.type.genre Undergraduate Thesis
dspace.entity.type Publication
local.contributor.corporatename College of Computing
local.contributor.corporatename School of Computer Science
local.contributor.corporatename Undergraduate Research Opportunities Program
local.relation.ispartofseries Undergraduate Research Option Theses
relation.isOrgUnitOfPublication c8892b3c-8db6-4b7b-a33a-1b67f7db2021
relation.isOrgUnitOfPublication 6b42174a-e0e1-40e3-a581-47bed0470a1e
relation.isOrgUnitOfPublication 0db885f5-939b-4de1-807b-f2ec73714200
relation.isSeriesOfPublication e1a827bd-cf25-4b83-ba24-70848b7036ac
thesis.degree.level Undergraduate
Files
Original bundle
Name: SAVELYEV-UNDERGRADUATERESEARCHOPTIONTHESIS-2020.pdf
Size: 718.11 KB
Format: Adobe Portable Document Format

License bundle
Name: LICENSE.txt
Size: 3.87 KB
Format: Plain Text