Title:
Teaching robots about human environments: Leveraging human interaction to efficiently learn and use multisensory object affordances

dc.contributor.advisor Thomaz, Andrea L.
dc.contributor.advisor Chernova, Sonia
dc.contributor.author Chu, Vivian
dc.contributor.committeeMember Christensen, Henrik I.
dc.contributor.committeeMember Kemp, Charles C.
dc.contributor.committeeMember Srinivasa, Siddhartha
dc.contributor.department Interactive Computing
dc.date.accessioned 2018-05-31T18:12:29Z
dc.date.available 2018-05-31T18:12:29Z
dc.date.created 2018-05
dc.date.issued 2018-01-09
dc.date.submitted May 2018
dc.date.updated 2018-05-31T18:12:29Z
dc.description.abstract The real world is complex, unstructured, and contains high levels of uncertainty. Although past work shows that robots can successfully operate in situations where a single skill is needed, they will need a framework that enables them to reason and learn continuously so that they can operate effectively in human-centric environments. One way for robots to aggregate a library of skills is to model the world using affordances. In this thesis, we model affordances as the relationship between a robot's actions on its environment and the effects of those actions. By modeling the world with affordances, robots can reason about which actions they need to take to achieve a goal. This thesis provides a framework that allows robots to learn affordance models through interaction and human guidance. Within robot affordance learning, there has been a large focus on visual skill representations because it is difficult to get robots to physically interact with the environment. Furthermore, using multiple modalities (e.g., touch and sound) introduces challenges such as differing sampling rates and data resolutions. This thesis addresses these challenges by contributing a human-centered framework for affordance learning in which human teachers guide the robot throughout the entire modeling pipeline. We introduce several novel human-guided self-exploration algorithms that enable robots to efficiently explore the environment and learn affordance models for a diverse range of manipulation tasks. The work contributes a multisensory affordance model that integrates visual, haptic, and audio input, and a novel control framework that allows adaptive object manipulation using multisensory affordances.
dc.description.degree Ph.D.
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/1853/59839
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Robotics
dc.subject Robot learning
dc.subject Affordance learning
dc.subject Human robot interaction
dc.subject Multisensory data
dc.subject Robot object manipulation
dc.subject Human-guided robot exploration
dc.subject Machine learning
dc.subject Artificial intelligence
dc.subject Haptics
dc.subject Adaptable controllers
dc.subject Multisensory robot control
dc.subject Human-guided affordance learning
dc.subject Interactive multisensory perception
dc.subject Multimodal data
dc.subject Sensor fusion
dc.title Teaching robots about human environments: Leveraging human interaction to efficiently learn and use multisensory object affordances
dc.type Text
dc.type.genre Dissertation
dspace.entity.type Publication
local.contributor.advisor Chernova, Sonia
local.contributor.corporatename College of Computing
local.contributor.corporatename School of Interactive Computing
relation.isAdvisorOfPublication c2c8220e-349a-4030-9c58-0981da9d1c5d
relation.isOrgUnitOfPublication c8892b3c-8db6-4b7b-a33a-1b67f7db2021
relation.isOrgUnitOfPublication aac3f010-e629-4d08-8276-81143eeaf5cc
thesis.degree.level Doctoral
Files
Original bundle: CHU-DISSERTATION-2018.pdf (82.19 MB, Adobe Portable Document Format)
License bundle: LICENSE.txt (3.86 KB, Plain Text)