Title:
A CRF that combines tactile sensing and vision for haptic mapping

dc.contributor.advisor Kemp, Charles C.
dc.contributor.author Asoka Kumar Shenoi, Ashwin Kumar
dc.contributor.committeeMember Vela, Patricio A.
dc.contributor.committeeMember Hays, James
dc.contributor.department Electrical and Computer Engineering
dc.date.accessioned 2016-05-27T13:24:31Z
dc.date.available 2016-05-27T13:24:31Z
dc.date.created 2016-05
dc.date.issued 2016-05-02
dc.date.submitted May 2016
dc.date.updated 2016-05-27T13:24:31Z
dc.description.abstract We consider the problem of enabling a robot to efficiently obtain a dense haptic map of its visible surroundings by using the complementary properties of vision and tactile sensing. Our approach assumes that visible surfaces that look similar to one another are likely to have similar haptic properties. In our previous work, we introduced an iterative algorithm that enabled a robot to infer dense haptic labels across visible surfaces in an RGB-D image when given a sequence of sparse haptic labels. In this work, we describe how dense conditional random fields (CRFs) can be applied to the same problem and present results from evaluating a dense CRF’s performance in simulated trials with idealized haptic labels. We evaluated our method using several publicly available RGB-D image datasets of cluttered indoor scenes pertinent to robot manipulation. In these simulated trials, the dense CRF substantially outperformed our previous algorithm, correctly assigning haptic labels to an average of 93% (versus 76% in our previous work) of all object pixels in an image given the highest number of contact points per object. Likewise, the dense CRF correctly assigned haptic labels to an average of 81% (versus 63% in our previous work) of all object pixels in an image given a low number of contact points per object. We also compared the performance of a dense CRF using a uniform prior against a dense CRF whose prior was obtained from the visible scene by a fully convolutional network (FCN) trained for visual material recognition; the use of the convolutional network further improved the algorithm's performance. Finally, we performed experiments with the humanoid robot DARCI reaching into a cluttered foliage environment while using our algorithm to create a haptic map. The algorithm correctly assigned labels to 82.52% of the scenes with trunks and leaves after 10 reaches into the environment.
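The core inference step the abstract describes — propagating sparse haptic labels to visually similar pixels via a dense CRF — can be illustrated with a minimal, brute-force mean-field sketch in NumPy. This is not the thesis's implementation (a practical dense CRF would use efficient high-dimensional filtering rather than the O(N²) kernel below); the function names, toy features, and parameters are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax."""
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def dense_crf_mean_field(unary, feats, n_iters=5, compat=1.0, bandwidth=1.0):
    """Mean-field inference for a fully connected (dense) CRF with a
    Gaussian pairwise kernel and Potts label compatibility.

    unary: (N, L) negative log-probabilities per pixel and label
           (e.g., from sparse haptic contacts or a visual material prior)
    feats: (N, D) per-pixel features (e.g., image position + color)
    Returns the (N, L) approximate label marginals Q.
    """
    # Pairwise kernel: pixels with similar features interact strongly.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    np.fill_diagonal(K, 0.0)  # no self-interaction

    Q = softmax(-unary)  # initialize beliefs from the unary potentials
    for _ in range(n_iters):
        msg = K @ Q  # aggregate neighbors' current beliefs
        # Potts compatibility: a label is penalized in proportion to
        # similar-looking neighbors voting for *other* labels.
        pairwise = compat * (msg.sum(axis=1, keepdims=True) - msg)
        Q = softmax(-unary - pairwise)
    return Q

# Toy "scene": 4 pixels forming two visually similar pairs; pixel 1 has a
# weak, wrong unary that the pairwise term should correct.
feats = np.array([[0.0], [0.1], [5.0], [5.1]])
unary = -np.log(np.array([[0.9, 0.1],
                          [0.4, 0.6],   # mislabeled on its own
                          [0.1, 0.9],
                          [0.1, 0.9]]))
Q = dense_crf_mean_field(unary, feats)
labels = Q.argmax(axis=1)  # pixel 1 flips to agree with its similar neighbor
```

In the toy example, the ambiguous pixel adopts the label of the pixel it resembles, which is the same mechanism that lets a few haptic contact points label an entire visually homogeneous surface.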
dc.description.degree M.S.
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/1853/55027
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Tactile
dc.subject Vision
dc.subject Haptic mapping
dc.subject CNN
dc.subject CRF
dc.title A CRF that combines tactile sensing and vision for haptic mapping
dc.type Text
dc.type.genre Thesis
dspace.entity.type Publication
local.contributor.advisor Kemp, Charles C.
local.contributor.corporatename School of Electrical and Computer Engineering
local.contributor.corporatename College of Engineering
relation.isAdvisorOfPublication e4f743b9-0557-4889-a16e-00afe0715f4c
relation.isOrgUnitOfPublication 5b7adef2-447c-4270-b9fc-846bd76f80f2
relation.isOrgUnitOfPublication 7c022d60-21d5-497c-b552-95e489a06569
thesis.degree.level Masters
Files
Original bundle
Name: ASOKAKUMARSHENOI-THESIS-2016.pdf
Size: 8.74 MB
Format: Adobe Portable Document Format
License bundle
Name: LICENSE.txt
Size: 3.88 KB
Format: Plain Text