Title:
Learning to walk using deep reinforcement learning and transfer learning

Author(s)
Yu, Wenhao
Advisor(s)
Turk, Greg
Liu, Cheng-Yun Karen
Abstract
We seek to develop computational tools that reproduce the locomotion of humans and animals in complex and unpredictable environments. Such tools can have a significant impact on computer graphics, robotics, machine learning, and biomechanics. However, there are two main hurdles to achieving this goal. First, synthesizing a successful locomotion policy requires precise control of a high-dimensional, under-actuated system and striking a balance among conflicting goals such as walking forward, conserving energy, and keeping balance. Second, the synthesized locomotion policy needs to generalize to environments that were not present during optimization and training in order to cope with novel situations during execution. In this thesis, we introduce a set of learning-based algorithms that tackle these challenges and make progress towards automated and generalizable motor learning. We demonstrate our methods on training simulated characters and robots to learn locomotion skills without using motion data, and on transferring the simulation-trained locomotion controllers to real robotic platforms. We first introduce a Deep Reinforcement Learning (DRL) approach for learning locomotion controllers for simulated legged creatures without motion data. We propose a loss term in the DRL objective that encourages the agent to exhibit symmetric behavior, together with a curriculum learning approach that provides modulated physical assistance, to successfully train energy-efficient controllers. We demonstrate this approach on a variety of simulated characters and show that, when the two ideas are combined, the characters achieve low-energy, symmetric locomotion gaits that are closer to those of real animals than the gaits produced by alternative DRL methods. Next, we introduce a set of Transfer Learning (TL) algorithms that generalize the learned locomotion controllers to novel environments. Specifically, we focus on transferring a simulation-trained locomotion controller to a real legged robot, also known as the Sim-to-Real transfer problem. Solving Sim-to-Real transfer would allow robots to leverage modern machine learning algorithms and compute power to learn complex motor skills safely and efficiently. However, it is also a challenging problem because the real world is noisy and unpredictable. Within this context, we first introduce a transfer learning algorithm that operates successfully under unknown and changing dynamics, provided they remain within the range of dynamics seen during training. To allow successful transfer beyond the training environments, we further propose an algorithm that uses a limited number of samples from the testing environment to adapt the simulation-trained policy. We demonstrate two variants of this algorithm, applied to achieve Sim-to-Real transfer for a biped robot, the Robotis Darwin OP2, and a quadruped robot, the Ghost Robotics Minitaur, respectively. Finally, we consider the problem of safety during policy execution and transfer. We propose training a universal safe policy (USP) that steers the robot away from unsafe states starting from a diverse set of states, and an algorithm that combines the USP with a task policy to complete the task while acting safely. We demonstrate that the resulting algorithm allows policies to adapt to notably different simulated dynamics with at most two failed trials, suggesting a promising path towards learning robust and safe control policies for Sim-to-Real transfer.
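
To make the symmetry term described in the abstract concrete, a minimal sketch of such a loss is given below. This is an illustration based on the abstract's description, not the thesis's exact notation: the mirroring maps \Phi_s and \Phi_a and the weight w are assumptions.

% Mirror-symmetry penalty on the policy \pi_\theta: the action chosen in a state should
% match the mirrored action chosen in the mirrored state.
L_{\mathrm{sym}}(\theta) = \sum_{t} \bigl\lVert \pi_\theta(s_t) - \Phi_a\bigl(\pi_\theta(\Phi_s(s_t))\bigr) \bigr\rVert^2

% The DRL objective then trades off expected return against the symmetry penalty,
% with an assumed weight w > 0:
\max_\theta \; \mathbb{E}\Bigl[\sum_t \gamma^t\, r(s_t, a_t)\Bigr] - w\, L_{\mathrm{sym}}(\theta)

Here \Phi_s and \Phi_a mirror states and actions about the character's sagittal plane, so a policy minimizing L_sym behaves the same way on its left and right sides.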
Date Issued
2020-05-17
Resource Type
Text
Resource Subtype
Dissertation