Title
Mapping Grounded Object Properties Across Perceptually Heterogeneous Embodiments

Author(s)
Kira, Zsolt
Abstract
As robots become more common, it becomes increasingly useful for them to communicate and effectively share the knowledge they have learned through their individual experiences. Learning from experience, however, is often embodiment-specific; that is, the knowledge learned is grounded in the robot's unique sensors and actuators. This raises the question of how communication and knowledge exchange via social interaction can occur, since properties of the world can be grounded differently in different robots. This is especially true when the robots are heterogeneous, with different sensors and perceptual features used to define the properties. In this paper, we present methods and representations that allow heterogeneous robots to learn grounded property representations, such as color categories, and then build models of their similarities and differences in order to map their respective representations. We use a conceptual space representation, where object properties are learned and represented as regions in a metric space, implemented via supervised learning of Gaussian Mixture Models. We then propose to use confusion matrices, built from instances obtained by each robot in a shared context, to learn mappings between the properties of each robot. Results are demonstrated using two perceptually heterogeneous Pioneer robots, one with a web camera and the other with a camcorder.
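The two-step pipeline the abstract describes can be illustrated in a short sketch. The code below is a minimal illustration using scikit-learn's GaussianMixture, not the paper's actual implementation: the function names, the one-GMM-per-property design, and the row-wise argmax mapping rule are assumptions made here for clarity.

```python
# Illustrative sketch of the pipeline described in the abstract.
# Assumptions (not from the paper): one GMM per property category,
# scikit-learn as the GMM implementation, and a row-wise argmax over
# the confusion matrix as the mapping rule.
import numpy as np
from sklearn.mixture import GaussianMixture


def learn_property_models(features, labels, n_components=1):
    """Fit one GMM per property label (e.g., per color category),
    approximating that property as a region in the robot's own
    perceptual feature space."""
    features, labels = np.asarray(features), np.asarray(labels)
    models = {}
    for label in np.unique(labels):
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(features[labels == label])
        models[label] = gmm
    return models


def classify(models, features):
    """Assign each instance to the property whose GMM gives it the
    highest log-likelihood."""
    names = sorted(models)
    scores = np.column_stack(
        [models[n].score_samples(features) for n in names]
    )
    return [names[i] for i in np.argmax(scores, axis=1)]


def map_properties(preds_a, preds_b, names_a, names_b):
    """Build a confusion matrix from the two robots' predictions on
    instances observed in a shared context, then map each of robot A's
    properties to robot B's most frequently co-occurring property."""
    conf = np.zeros((len(names_a), len(names_b)))
    for pa, pb in zip(preds_a, preds_b):
        conf[names_a.index(pa), names_b.index(pb)] += 1
    return {
        names_a[i]: names_b[int(np.argmax(conf[i]))]
        for i in range(len(names_a))
    }
```

In this sketch, each robot fits its models in its own feature space; only the predicted property labels on the shared instances are ever compared, which is what allows a mapping to be learned across heterogeneous perceptual features without assuming a common representation.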
Date Issued
2009
Resource Type
Text
Resource Subtype
Paper