Organizational Unit: Socially Intelligent Machines Lab

Publication Search Results

  • Item
    Effective robot task learning by focusing on task-relevant objects
    (Georgia Institute of Technology, 2009-10) Lee, Kyu Hwa ; Lee, Jinhan ; Thomaz, Andrea L. ; Bobick, Aaron F.
    In a robot learning from demonstration framework involving environments with many objects, one of the key problems is to decide which objects are relevant to a given task. In this paper, we analyze this problem and propose a biologically-inspired computational model that enables the robot to focus on the task-relevant objects. To filter out incompatible task models, we compute a task relevance value (TRV) for each object, which captures a human demonstrator's implicit indication of its relevance to the task. By combining an intentional action representation with ‘motionese’, our model exhibits recognition capabilities compatible with the way that humans demonstrate. We evaluate the system on demonstrations from five different human subjects, showing its ability to correctly focus on the appropriate objects in these demonstrations. (See the illustrative sketch of this kind of relevance scoring after the results list.)
  • Item
    Effects of Social Exploration Mechanisms on Robot Learning
    (Georgia Institute of Technology, 2009) Cakmak, Maya ; DePalma, Nick ; Thomaz, Andrea L. ; Arriaga, Rosa I.
    Social learning in robotics has largely focused on imitation learning. Here we take a broader view and are interested in the multifaceted ways that a social partner can influence the learning process. We implement four social learning mechanisms on a robot: stimulus enhancement, emulation, mimicking, and imitation, and illustrate the computational benefits of each. In particular, we illustrate that some strategies are about directing the attention of the learner to objects and others are about actions. Taken together, these strategies form a rich repertoire allowing social learners to use a social partner to greatly impact their learning process. We demonstrate these results in simulation and with physical robot ‘playmates’. (See the illustrative sketch of these mechanisms after the results list.)
  • Item
    Computational Benefits of Social Learning Mechanisms: Stimulus Enhancement and Emulation
    (Georgia Institute of Technology, 2009) Cakmak, Maya ; DePalma, Nick ; Arriaga, Rosa I. ; Thomaz, Andrea L.
    Social learning in robotics has largely focused on imitation learning. In this work, we take a broader view of social learning and are interested in the multifaceted ways that a social partner can influence the learning process. We implement stimulus enhancement and emulation on a robot, and illustrate the computational benefits of social learning over individual learning. Additionally we characterize the differences between these two social learning strategies, showing that the preferred strategy is dependent on the current behavior of the social partner. We demonstrate these learning results both in simulation and with physical robot ‘playmates’.
  • Item
    Social Learning Mechanisms for Robots
    (Georgia Institute of Technology, 2009) Thomaz, Andrea L. ; Cakmak, Maya
    There is currently a surge of interest in service robotics—a desire to have robots leave the labs and factory floors to help solve critical issues facing our society, ranging from eldercare to education. A critical issue is that we cannot preprogram these robots with every skill they will need to play a useful role in society—they will need the ability to interact with ordinary people and acquire new relevant skills after they are deployed. Using human input with Machine Learning systems is not a new goal, but we believe that the problem needs reframing before the field will succeed in building robots that learn from everyday people. Many related works focus on machine performance gains, asking, “What can I get the person to do to help my robot learn better?” In an approach we call Socially Guided Machine Learning (SG-ML), we formulate the problem as a human-machine interaction, asking, “How can I improve the dynamics of this tightly coupled teacher-learner system?” With the belief that machines meant to learn from people can better take advantage of the ways in which people naturally approach teaching, our research aims to understand and computationally model mechanisms of human social learning in order to build machines that are natural and intuitive to teach. In this paper we focus on a particular aspect of SG-ML. When building a robot learner that takes advantage of human input, one of the design questions is “What is the right level of human guidance?” One has to determine how much and what kind of interaction to require of the human. We first review prior work with respect to these questions, and then summarize three recent projects. In the first two projects we investigate self versus social learning and demonstrate ways in which the two are mutually beneficial. In the third project we investigate a variety of social learning strategies, implementing four biologically inspired ways to take advantage of the social environment. We see computational benefits with each strategy depending on the environment, demonstrating the usefulness of non-imitative social learning. Taken together, these projects argue that robots need a variety of learning strategies working together, including self and several types of social mechanisms, in order to succeed in Socially Guided Machine Learning.
  • Item
    Learning about Objects with Human Teachers
    (Georgia Institute of Technology, 2009) Thomaz, Andrea L. ; Cakmak, Maya
    A general learning task for a robot in a new environment is to learn about objects and what actions/effects they afford. To approach this, we look at ways that a human partner can intuitively help the robot learn, an approach we call Socially Guided Machine Learning. We present experiments conducted with our robot, Junior, and make six observations characterizing how people approached teaching about objects. We show that Junior successfully used transparency to mitigate errors. Finally, we present the impact of “social” versus “nonsocial” data sets when training SVM classifiers. (See the illustrative classifier comparison sketch after the results list.)
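
The task relevance value (TRV) idea in “Effective robot task learning by focusing on task-relevant objects” can be pictured, very roughly, as a weighted combination of implicit demonstrator cues that is then thresholded to keep only the relevant objects. The sketch below is an assumption-laden illustration, not the paper's model: the cue features (interaction time, motionese cues, gaze fixations), the weights, the threshold, and all names are hypothetical.

```python
# Hypothetical sketch: score each object's task relevance value (TRV) from
# implicit cues in a human demonstration, then keep only high-scoring objects.
# Feature names, weights, and the threshold are illustrative assumptions,
# not the paper's actual model.
from dataclasses import dataclass

@dataclass
class ObjectDemoStats:
    name: str
    interaction_time: float  # seconds the demonstrator's hand spent on/near the object
    motionese_cues: int      # count of exaggerated or repeated motions directed at it
    gaze_fixations: int      # times the demonstrator looked at it during the action

def task_relevance(stats: ObjectDemoStats,
                   w_time: float = 0.5,
                   w_motion: float = 0.3,
                   w_gaze: float = 0.2) -> float:
    """Combine the demonstrator's implicit cues into a single TRV."""
    return (w_time * stats.interaction_time
            + w_motion * stats.motionese_cues
            + w_gaze * stats.gaze_fixations)

def relevant_objects(demo: list, threshold: float) -> list:
    """Filter out objects whose TRV falls below the threshold."""
    return [o.name for o in demo if task_relevance(o) >= threshold]

if __name__ == "__main__":
    demo = [
        ObjectDemoStats("cup", interaction_time=12.0, motionese_cues=4, gaze_fixations=6),
        ObjectDemoStats("stapler", interaction_time=0.5, motionese_cues=0, gaze_fixations=1),
    ]
    print(relevant_objects(demo, threshold=3.0))  # -> ['cup']
```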
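
For the two mechanism papers above (“Effects of Social Exploration Mechanisms on Robot Learning” and “Computational Benefits of Social Learning Mechanisms”), one way to picture the difference between the strategies is how each one biases what the learner tries next: some fix the object of attention, some fix the action, and some fix the goal. The sketch below is only a hedged reading of that distinction; the Observation fields, the object and action sets, and the function names are assumptions, not the papers' implementation.

```python
# Hypothetical sketch: how each social learning mechanism might bias the
# learner's next exploration step, given one observation of a social partner.
# The data model and selection rules here are illustrative assumptions only.
import random
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Observation:
    obj: str     # object the partner acted on
    action: str  # action the partner performed
    goal: str    # effect the partner produced

OBJECTS = ["ball", "block", "lever"]
ACTIONS = ["poke", "grasp", "shake"]

def choose_exploration(mechanism: str, obs: Observation) -> Tuple[str, str, Optional[str]]:
    """Return (object, action, target_effect) for the learner's next trial."""
    if mechanism == "stimulus_enhancement":
        # Object-directed: attention is drawn to the partner's object; action is free.
        return obs.obj, random.choice(ACTIONS), None
    if mechanism == "emulation":
        # Goal-directed: try to reproduce the observed effect with the learner's own actions.
        return obs.obj, random.choice(ACTIONS), obs.goal
    if mechanism == "mimicking":
        # Action-directed: copy the action without committing to the object or goal.
        return random.choice(OBJECTS), obs.action, None
    if mechanism == "imitation":
        # Copy object, action, and goal together.
        return obs.obj, obs.action, obs.goal
    # Individual (non-social) exploration as a baseline.
    return random.choice(OBJECTS), random.choice(ACTIONS), None

obs = Observation(obj="lever", action="poke", goal="light_on")
for m in ["stimulus_enhancement", "emulation", "mimicking", "imitation", "individual"]:
    print(m, choose_exploration(m, obs))
```

Under this reading, the observation that the preferred strategy depends on the social partner's current behavior corresponds to choosing which branch to invoke based on what the partner is doing at the moment.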
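
The “social” versus “nonsocial” SVM comparison mentioned in “Learning about Objects with Human Teachers” could, in outline, amount to training the same classifier on the two data sets and comparing held-out accuracy. The sketch below assumes scikit-learn and uses random placeholder feature arrays; it is not Junior's data or the paper's evaluation protocol.

```python
# Hypothetical sketch: compare an SVM trained on a "social" data set (examples
# gathered with a human teacher) against one trained on a "nonsocial" data set
# (self-collected examples). Feature arrays here are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mean_cv_accuracy(features: np.ndarray, labels: np.ndarray) -> float:
    """Mean 5-fold cross-validated accuracy of an RBF-kernel SVM."""
    return cross_val_score(SVC(kernel="rbf", C=1.0), features, labels, cv=5).mean()

rng = np.random.default_rng(0)
# Placeholder descriptors (e.g., color/shape features of observed objects) and labels.
X_social, y_social = rng.normal(size=(100, 8)), rng.integers(0, 2, size=100)
X_nonsocial, y_nonsocial = rng.normal(size=(100, 8)), rng.integers(0, 2, size=100)

print("social   :", mean_cv_accuracy(X_social, y_social))
print("nonsocial:", mean_cv_accuracy(X_nonsocial, y_nonsocial))
```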