Title
State Space Decomposition in Reinforcement Learning

Author(s)
Kumar, Saurabh
Advisor(s)
Isbell, Charles L.
Abstract
Typical reinforcement learning (RL) agents learn to complete tasks specified by reward functions tailored to their domain. As a result, the policies they learn do not generalize even to similar domains. To address this issue, we develop a framework through which a deep RL agent learns to generalize policies from smaller, simpler domains to larger, more complex ones using a recurrent attention mechanism. The task is presented to the agent as an image together with an instruction specifying the goal. The attention mechanism acts as a meta-controller that guides the agent toward its goal by designing a sequence of smaller subtasks over the part of the state space inside the attention window, effectively decomposing the state space. As a baseline, we also consider a setup without attention. Our experiments show that the meta-controller learns to create sub-goals within the attention window. These results have implications for interactive human-robot applications, in which a robot can transfer skills learned in one task to another and remain robust to variability in its environment.
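
One way to picture the recurrent attention meta-controller described in the abstract is the minimal sketch below. It is not the thesis implementation: it assumes PyTorch, illustrative layer sizes, and a hypothetical RecurrentAttentionMetaController in which an LSTM, conditioned on an instruction embedding, soft-attends over a flattened image feature grid and emits one sub-goal per attention step for a low-level policy to pursue.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentAttentionMetaController(nn.Module):
    """Illustrative sketch (not the thesis code): attends over a spatial
    feature grid of the task image, conditioned on an instruction embedding,
    and emits one sub-goal per attention step."""

    def __init__(self, feat_dim=64, instr_dim=32, hidden_dim=128, subgoal_dim=16):
        super().__init__()
        self.rnn = nn.LSTMCell(feat_dim + instr_dim, hidden_dim)
        # Scores each spatial cell against the current recurrent state.
        self.attn = nn.Linear(hidden_dim + feat_dim, 1)
        # Maps the recurrent state to a sub-goal for the low-level policy.
        self.subgoal = nn.Linear(hidden_dim, subgoal_dim)

    def forward(self, feats, instr, steps=4):
        # feats: (B, N, feat_dim) flattened image feature grid
        # instr: (B, instr_dim) instruction embedding
        B, N, _ = feats.shape
        h = feats.new_zeros(B, self.rnn.hidden_size)
        c = feats.new_zeros(B, self.rnn.hidden_size)
        ctx = feats.mean(dim=1)  # initial glimpse: global average of the grid
        subgoals, attn_maps = [], []
        for _ in range(steps):
            h, c = self.rnn(torch.cat([ctx, instr], dim=-1), (h, c))
            # Soft attention weights over the N spatial cells.
            scores = self.attn(torch.cat([h.unsqueeze(1).expand(B, N, -1), feats], dim=-1))
            alpha = F.softmax(scores.squeeze(-1), dim=-1)          # (B, N)
            ctx = torch.bmm(alpha.unsqueeze(1), feats).squeeze(1)  # attended features
            subgoals.append(self.subgoal(h))
            attn_maps.append(alpha)
        return torch.stack(subgoals, dim=1), torch.stack(attn_maps, dim=1)

# Example: a 7x7 feature grid (N=49) and a batch of 2 instructions.
model = RecurrentAttentionMetaController()
goals, maps = model(torch.randn(2, 49, 64), torch.randn(2, 32))
print(goals.shape, maps.shape)  # torch.Size([2, 4, 16]) torch.Size([2, 4, 49])

Each returned sub-goal corresponds to one attention step, so a sequence of sub-goals traces a decomposition of the state space into the attended regions.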
Date Issued
2018-05
Resource Type
Text
Resource Subtype
Undergraduate Thesis