Title
Co-evolution of shaping rewards and meta-parameters in reinforcement learning

Author(s)
Elfwing, Stefan
Uchibe, Eiji
Doya, Kenji
Christensen, Henrik I.
Abstract
In this article, we explore an evolutionary approach to the optimization of potential-based shaping rewards and meta-parameters in reinforcement learning. Reward shaping is a frequently used technique for improving the learning performance of reinforcement learning, with regard to both initial performance and convergence speed. Shaping rewards provide additional knowledge to the agent in the form of richer reward signals, which guide learning toward high-rewarding states. Reinforcement learning depends critically on a few meta-parameters that modulate the learning updates or the exploration of the environment, such as the learning rate α, the discount factor of future rewards γ, and the temperature τ that controls the trade-off between exploration and exploitation in softmax action selection. We validate the proposed approach in simulation using the mountain-car task. We also transfer shaping rewards and meta-parameters, obtained evolutionarily in simulation, to hardware, using a robotic foraging task.
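The two ingredients named in the abstract can be made concrete with a short sketch. The Python below is illustrative only and assumes a generic tabular setting, not the article's actual implementation; the potential values, meta-parameter settings, and function names are assumptions made for the example. It shows softmax action selection with temperature τ, a potential-based shaping term F(s, s') = γΦ(s') − Φ(s) added to the environment reward (which is known to leave the optimal policy unchanged; Ng, Harada, and Russell, 1999), and a single Q-learning update with learning rate α and discount factor γ.

    import numpy as np

    def softmax_action(q_values, tau, rng=np.random.default_rng()):
        """Softmax (Boltzmann) action selection with temperature tau.

        A high tau gives near-uniform exploration; a low tau is near-greedy.
        """
        prefs = np.asarray(q_values, dtype=float) / tau
        prefs -= prefs.max()              # subtract max for numerical stability
        probs = np.exp(prefs)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    def shaped_reward(r, phi_s, phi_s_next, gamma):
        """Potential-based shaping: r + gamma * Phi(s') - Phi(s)."""
        return r + gamma * phi_s_next - phi_s

    def q_update(Q, s, a, r_shaped, s_next, alpha, gamma):
        """One tabular Q-learning step driven by the shaped reward."""
        Q[s, a] += alpha * (r_shaped + gamma * Q[s_next].max() - Q[s, a])

    # Toy usage: a single learning step in a 3-state, 2-action problem.
    Q = np.zeros((3, 2))
    phi = np.array([0.0, 0.5, 1.0])       # hypothetical potential per state
    alpha, gamma, tau = 0.1, 0.99, 0.5    # illustrative meta-parameter values
    s, s_next, r = 0, 1, 0.0
    a = softmax_action(Q[s], tau)
    q_update(Q, s, a, shaped_reward(r, phi[s], phi[s_next], gamma),
             s_next, alpha, gamma)

In the article's setting, the potential function Φ and the meta-parameters α, γ, and τ are not hand-tuned as above but optimized by the evolutionary procedure.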
Date Issued
2008-12
Resource Type
Text
Resource Subtype
Article