Title: Autonomously learning to visually detect where manipulation will succeed

dc.contributor.author Nguyen, Hai en_US
dc.contributor.author Kemp, Charles C. en_US
dc.contributor.corporatename Georgia Institute of Technology. Healthcare Robotics Lab en_US
dc.contributor.corporatename Georgia Institute of Technology. Institute for Robotics and Intelligent Machines en_US
dc.date.accessioned 2013-12-19T17:49:07Z
dc.date.available 2013-12-19T17:49:07Z
dc.date.issued 2013-09
dc.description © The Author(s) 2013. This article is published with open access at Springerlink.com en_US
dc.description DOI: 10.1007/s10514-013-9363-y en_US
dc.description.abstract Visual features can help predict if a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch. We present methods that enable a mobile manipulator to autonomously learn a function that takes an RGB image and a registered 3D point cloud as input and returns a 3D location at which a manipulation behavior is likely to succeed. With our methods, robots autonomously train a pair of support vector machine (SVM) classifiers by trying behaviors at locations in the world and observing the results. Our methods require a pair of manipulation behaviors that can change the state of the world between two sets (e.g., light switch up and light switch down), classifiers that detect when each behavior has been successful, and an initial hint as to where one of the behaviors will be successful. When given an image feature vector associated with a 3D location, a trained SVM predicts if the associated manipulation behavior will be successful at the 3D location. To evaluate our approach, we performed experiments with a PR2 robot from Willow Garage in a simulated home using behaviors that flip a light switch, push a rocker-type light switch, and operate a drawer. By using active learning, the robot efficiently learned SVMs that enabled it to consistently succeed at these tasks. After training, the robot also continued to learn in order to adapt in the event of failure. en_US
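The train-then-query loop described in the abstract can be sketched in a few lines: fit an SVM on image feature vectors from locations already tried, then pick the next location to try via margin-based uncertainty sampling, a common active-learning heuristic (the paper's exact criterion may differ). This is a minimal illustration assuming scikit-learn and NumPy; the feature dimensions, data, and function names below are hypothetical and not drawn from the paper.

import numpy as np
from sklearn.svm import SVC

def select_next_location(clf, candidate_features):
    # Uncertainty sampling: choose the candidate closest to the SVM
    # decision boundary (smallest absolute margin).
    margins = np.abs(clf.decision_function(candidate_features))
    return int(np.argmin(margins))

# Hypothetical training data: one feature vector per 3D location already
# tried, labeled 1 if the manipulation behavior succeeded there, else 0.
rng = np.random.default_rng(0)
tried_features = rng.normal(size=(20, 16))
tried_labels = (tried_features[:, 0] > 0).astype(int)

# One classifier per behavior (e.g., switch up vs. switch down in the paper).
clf = SVC(kernel="rbf")
clf.fit(tried_features, tried_labels)

# Query step: among untried candidate locations, try the most uncertain one,
# observe success or failure, append the result, and refit.
candidates = rng.normal(size=(100, 16))
next_idx = select_next_location(clf, candidates)
print("next location to try:", next_idx)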
dc.identifier.citation Autonomously Learning to Visually Detect Where Manipulation Will Succeed, Hai Nguyen and Charles C. Kemp, Autonomous Robots, September 2013. en_US
dc.identifier.doi 10.1007/s10514-013-9363-y
dc.identifier.issn 0929-5593
dc.identifier.issn 1573-7527
dc.identifier.uri http://hdl.handle.net/1853/49876
dc.language.iso en_US en_US
dc.publisher Georgia Institute of Technology en_US
dc.publisher.original Springer Verlag en_US
dc.subject Robot learning en_US
dc.subject Mobile manipulation en_US
dc.subject Home robots en_US
dc.subject Behavior-based systems en_US
dc.subject Active learning en_US
dc.title Autonomously learning to visually detect where manipulation will succeed en_US
dc.type Text
dc.type.genre Article
dspace.entity.type Publication
local.contributor.author Kemp, Charles C.
local.contributor.corporatename Healthcare Robotics Lab
local.contributor.corporatename Institute for Robotics and Intelligent Machines (IRIM)
relation.isAuthorOfPublication e4f743b9-0557-4889-a16e-00afe0715f4c
relation.isOrgUnitOfPublication c6394b0e-6e8b-42dc-aeed-0e22560bd6f1
relation.isOrgUnitOfPublication 66259949-abfd-45c2-9dcc-5a6f2c013bcf
Files
Original bundle (1 file)
Name: HCR_AR_001.pdf
Size: 1.01 MB
Format: Adobe Portable Document Format