Gaussian Mixture Belief Space Reinforcement Learning

Author(s)
An, Gabriel Nakajima
Associated Organization(s)
School of Computer Science
Abstract
In reinforcement learning and optimal control, one successful approach to handling system stochasticity and epistemic uncertainty in the dynamics model is to consider, rather than a single state, a distribution over states, i.e., a belief. This belief is usually represented by a single Gaussian distribution; however, that representation can be severely limited by its approximation error when describing multimodal distributions. This thesis addresses the problem by representing the belief with a mixture of Gaussians and presenting a reinforcement learning algorithm that optimizes a sequence of controls under this belief-space representation. We show that this method propagates beliefs far more accurately, and we apply the algorithm to trajectory-tracking tasks with stochastic dynamics.
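To illustrate the idea in the abstract, here is a minimal sketch of propagating a Gaussian-mixture belief through linear dynamics with additive Gaussian noise. This is not the thesis's algorithm; the function name, dynamics matrices `A`, `B`, noise covariance `Q`, and the example numbers are all illustrative assumptions. Under linear-Gaussian dynamics each mixture component stays Gaussian, so propagation reduces to a standard Kalman-style prediction per component.

```python
import numpy as np

def propagate_mixture(weights, means, covs, A, B, u, Q):
    """Propagate a Gaussian-mixture belief through x' = A x + B u + w,
    with process noise w ~ N(0, Q).

    Each component is propagated independently:
        mean' = A @ mean + B @ u
        cov'  = A @ cov @ A.T + Q
    and the mixture weights are unchanged by the (deterministic-in-
    distribution) prediction step.  Hypothetical helper, for illustration.
    """
    new_means = [A @ m + B @ u for m in means]
    new_covs = [A @ S @ A.T + Q for S in covs]
    return list(weights), new_means, new_covs

# A bimodal 1-D belief: two equally weighted modes at -1 and +1, which a
# single Gaussian would smear into one broad, inaccurate blob.
weights = [0.5, 0.5]
means = [np.array([-1.0]), np.array([1.0])]
covs = [np.eye(1) * 0.1, np.eye(1) * 0.1]

A = np.array([[1.0]])   # identity dynamics (toy example)
B = np.array([[1.0]])
u = np.array([0.5])     # control shifts both modes by +0.5
Q = np.eye(1) * 0.05    # process noise inflates each component covariance

w2, m2, c2 = propagate_mixture(weights, means, covs, A, B, u, Q)
# Modes move to -0.5 and +1.5; each covariance grows from 0.1 to 0.15.
```

The point of the mixture representation is visible even in this toy case: both modes survive propagation intact, whereas a single-Gaussian belief would have to merge them, incurring exactly the multimodal approximation error the abstract describes.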
Date
2018-12
Resource Type
Text
Resource Subtype
Undergraduate Thesis