Title:
Physics-based reinforcement learning for autonomous manipulation

dc.contributor.advisor Isbell, Charles L.
dc.contributor.author Scholz, Jonathan
dc.contributor.committeeMember Thomaz, Andrea
dc.contributor.committeeMember Egerstedt, Magnus
dc.contributor.committeeMember Littman, Michael
dc.contributor.committeeMember Christensen, Henrik I.
dc.contributor.department Interactive Computing
dc.date.accessioned 2016-01-07T17:24:30Z
dc.date.available 2016-01-07T17:24:30Z
dc.date.created 2015-12
dc.date.issued 2015-08-21
dc.date.submitted December 2015
dc.date.updated 2016-01-07T17:24:30Z
dc.description.abstract With recent research advances, the dream of bringing domestic robots into our everyday lives has become more plausible than ever. Domestic robotics has grown dramatically in the past decade, with applications ranging from house cleaning to food service to health care. To date, the majority of the planning and control machinery for these systems is carefully designed by human engineers. A large portion of this effort goes into selecting the appropriate models and control techniques for each application, and these skills take years to master. Relieving the burden on human experts is therefore a central challenge in bringing robot technology to the masses. This work addresses that challenge by introducing a physics engine as a model space for an autonomous robot, and by defining procedures that enable robots to decide when and how to learn these models. We also present an appropriate space of motor controllers for these models, and introduce ways to intelligently select when to use each controller based on the estimated model parameters. We integrate these components into a framework called Physics-Based Reinforcement Learning, which features a stochastic physics engine as the core model structure. Together these methods enable a robot to adapt to unfamiliar environments without human intervention. The central focus of this thesis is fast online model learning for objects with under-specified dynamics. We develop our approach across a diverse range of domestic tasks, starting with a simple table-top manipulation task, followed by a mobile manipulation task involving a single utility cart, and finally an open-ended navigation task with multiple obstacles impeding robot progress. We also present simulation results illustrating the efficiency of our method compared to existing approaches in the learning literature.
dc.description.degree Ph.D.
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/1853/54366
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Machine learning
dc.subject Robotics
dc.subject Reinforcement learning
dc.title Physics-based reinforcement learning for autonomous manipulation
dc.type Text
dc.type.genre Dissertation
dspace.entity.type Publication
local.contributor.advisor Isbell, Charles L.
local.contributor.corporatename College of Computing
local.contributor.corporatename School of Interactive Computing
local.contributor.corporatename Institute for Robotics and Intelligent Machines (IRIM)
relation.isAdvisorOfPublication 3f357176-4c4b-402c-8b61-ec18ffb083a6
relation.isOrgUnitOfPublication c8892b3c-8db6-4b7b-a33a-1b67f7db2021
relation.isOrgUnitOfPublication aac3f010-e629-4d08-8276-81143eeaf5cc
relation.isOrgUnitOfPublication 66259949-abfd-45c2-9dcc-5a6f2c013bcf
thesis.degree.level Doctoral
Files
Original bundle
Name: SCHOLZ-DISSERTATION-2015.pdf
Size: 18.52 MB
Format: Adobe Portable Document Format
License bundle
Name: LICENSE.txt
Size: 3.87 KB
Format: Plain Text