Organizational Unit:
Socially Intelligent Machines Lab


Publication Search Results

  • Item
    Trajectories and Keyframes for Kinesthetic Teaching: A Human-Robot Interaction Perspective
    (Georgia Institute of Technology, 2012-03) Akgun, Baris ; Cakmak, Maya ; Yoo, Jae Wook ; Thomaz, Andrea L.
    Kinesthetic teaching is an approach to providing demonstrations to a robot in Learning from Demonstration whereby a human physically guides a robot to perform a skill. In the common usage of kinesthetic teaching, the robot's trajectory during a demonstration is recorded from start to end. In this paper we consider an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive keyframes that can be connected to perform the skill. We present a user study (n = 34) comparing the two approaches and highlighting their complementary nature. The study also tests and shows the potential benefits of iterative and adaptive versions of keyframe demonstrations. Finally, we introduce a hybrid method that combines trajectories and keyframes in a single demonstration.
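The distinction between the two demonstration styles can be made concrete: a trajectory demonstration records every sampled pose, while a keyframe demonstration records only a sparse set of poses that the robot must later connect. A minimal sketch, using linear interpolation between hypothetical 2-joint configurations (the paper's actual skill representation is richer than this):

```python
def connect_keyframes(keyframes, steps_per_segment=10):
    """Connect sparse keyframes into an executable trajectory by linearly
    interpolating between consecutive joint configurations."""
    path = []
    for start, end in zip(keyframes[:-1], keyframes[1:]):
        for i in range(steps_per_segment):
            t = i / steps_per_segment
            path.append([(1 - t) * s + t * e for s, e in zip(start, end)])
    path.append(list(keyframes[-1]))  # include the final keyframe exactly
    return path

# A sparse keyframe demonstration for a hypothetical 2-joint arm.
keyframes = [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]
trajectory = connect_keyframes(keyframes)
print(len(trajectory))  # 21 dense waypoints recovered from 3 sparse keyframes
```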
  • Item
    Towards Grounding Concepts for Transfer in Goal Learning from Demonstration
    (Georgia Institute of Technology, 2011-08) Chao, Crystal ; Cakmak, Maya ; Thomaz, Andrea L.
    We aim to build robots that frame the task learning problem as goal inference so that they are natural to teach and meet people's expectations for a learning partner. The focus of this work is the scenario of a social robot that learns task goals from human demonstrations without prior knowledge of high-level concepts. In the system that we present, these discrete concepts are grounded from low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned on them using Bayesian inference. The grounded concepts are derived from the structure of the Learning from Demonstration (LfD) problem and exhibit degrees of prototypicality. These concepts can be used to transfer knowledge to future tasks, resulting in faster learning of those tasks. Using sensor data taken during demonstrations to our robot from five human teachers, we show the expressivity of using grounded concepts when learning new tasks from demonstration. We then show how the learning curve improves when transferring the knowledge of grounded concepts to future tasks.
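The two-stage pipeline the abstract describes can be illustrated in miniature: continuous sensor values are first grounded into discrete concepts by unsupervised clustering, and a task goal is then inferred over those concepts with Bayesian counting. The clustering method (1-D k-means), the sensor values, and the Laplace-smoothed posterior below are all simplifying assumptions, not the paper's actual models:

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Ground continuous readings into k discrete concepts (unsupervised)."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda c: abs(v - centroids[c]))].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

def symbol(v, centroids):
    """Map a continuous reading to its nearest grounded concept."""
    return min(range(len(centroids)), key=lambda c: abs(v - centroids[c]))

# Hypothetical gripper-sensor readings: low ~ "closed", high ~ "open".
readings = [0.1, 0.15, 0.9, 0.95, 1.0, 0.05, 0.12, 0.88]
centroids = kmeans_1d(readings)

# Bayesian goal inference with Laplace smoothing over the grounded
# symbols observed at the end of each demonstration (assumed goal states).
goal_obs = [symbol(v, centroids) for v in [0.9, 0.95, 0.88, 1.0]]
counts = [goal_obs.count(s) + 1 for s in range(2)]
posterior = [c / sum(counts) for c in counts]
print(posterior)  # concept 1 ("open") is the most probable goal
```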
  • Item
    Transparent Active Learning for Robots
    (Georgia Institute of Technology, 2010) Chao, Crystal ; Cakmak, Maya ; Thomaz, Andrea L.
    This research aims to enable robots to learn from human teachers. Motivated by human social learning, we believe that a transparent learning process can help guide the human teacher to provide the most informative instruction. We believe active learning is an inherently transparent machine learning approach because the learner formulates queries to the oracle that reveal information about areas of uncertainty in the underlying model. In this work, we implement active learning on the Simon robot in the form of nonverbal gestures that query a human teacher about a demonstration within the context of a social dialogue. Our preliminary pilot study data show potential for transparency through active learning to improve the accuracy and efficiency of the teaching process. However, our data also seem to indicate possible undesirable effects from the human teacher’s perspective regarding balance of the interaction. These preliminary results argue for control strategies that balance leading and following during a social learning interaction.
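The core idea of formulating queries about areas of uncertainty is often realized as uncertainty sampling: the learner asks the teacher about the candidate it is least sure how to label. A minimal sketch with a hypothetical 1-D probability model (Simon's actual queries were nonverbal gestures about demonstrations, not raw numbers):

```python
def most_uncertain(candidates, predict_proba):
    """Uncertainty sampling: query the teacher about the candidate whose
    predicted label probability is closest to 0.5."""
    return min(candidates, key=lambda x: abs(predict_proba(x) - 0.5))

def predict(x):
    # Hypothetical model: probability of the positive label rises with x.
    return min(max((x - 2.0) / 4.0, 0.0), 1.0)

candidates = [0.5, 1.5, 3.9, 6.0, 7.5]
query = most_uncertain(candidates, predict)
print(query)  # 3.9 -> probability 0.475, the region of greatest uncertainty
```

Because the chosen query reveals where the model is uncertain, the query itself is what makes the learning process transparent to the teacher.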
  • Item
    Effects of Social Exploration Mechanisms on Robot Learning
    (Georgia Institute of Technology, 2009) Cakmak, Maya ; DePalma, Nick ; Thomaz, Andrea L. ; Arriaga, Rosa I.
    Social learning in robotics has largely focused on imitation learning. Here we take a broader view and are interested in the multifaceted ways that a social partner can influence the learning process. We implement four social learning mechanisms on a robot: stimulus enhancement, emulation, mimicking, and imitation, and illustrate the computational benefits of each. In particular, we illustrate that some strategies are about directing the attention of the learner to objects and others are about actions. Taken together these strategies form a rich repertoire allowing social learners to use a social partner to greatly impact their learning process. We demonstrate these results in simulation and with physical robot ‘playmates’.
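The attention-directing distinction among the mechanisms can be sketched as a bias on which (object, action) pairs the learner explores: stimulus enhancement restricts attention to the partner's objects, mimicking to the partner's actions, and imitation to both. The mechanism names follow the abstract, but this pairing scheme is an illustrative simplification (emulation, which reproduces observed goal states rather than actions or objects, does not fit it and is omitted):

```python
def socially_biased_exploration(mechanism, partner_objects, partner_actions,
                                all_objects, all_actions):
    """Return the (object, action) pairs a learner would explore under a
    given social learning mechanism."""
    if mechanism == "stimulus_enhancement":   # attend to the partner's objects
        return [(o, a) for o in partner_objects for a in all_actions]
    if mechanism == "mimicking":              # repeat the partner's actions
        return [(o, a) for o in all_objects for a in partner_actions]
    if mechanism == "imitation":              # partner's actions on partner's objects
        return [(o, a) for o in partner_objects for a in partner_actions]
    # Individual (non-social) exploration: the full search space.
    return [(o, a) for o in all_objects for a in all_actions]

objects, actions = ["ball", "box", "cup"], ["poke", "grasp", "shake"]
pairs = socially_biased_exploration("imitation", ["ball"], ["grasp"],
                                    objects, actions)
print(pairs)  # [('ball', 'grasp')] -- a far smaller space than all 9 pairs
```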
  • Item
    Computational Benefits of Social Learning Mechanisms: Stimulus Enhancement and Emulation
    (Georgia Institute of Technology, 2009) Cakmak, Maya ; DePalma, Nick ; Arriaga, Rosa I. ; Thomaz, Andrea L.
    Social learning in robotics has largely focused on imitation learning. In this work, we take a broader view of social learning and are interested in the multifaceted ways that a social partner can influence the learning process. We implement stimulus enhancement and emulation on a robot, and illustrate the computational benefits of social learning over individual learning. Additionally we characterize the differences between these two social learning strategies, showing that the preferred strategy is dependent on the current behavior of the social partner. We demonstrate these learning results both in simulation and with physical robot ‘playmates’.
  • Item
    Social Learning Mechanisms for Robots
    (Georgia Institute of Technology, 2009) Thomaz, Andrea L. ; Cakmak, Maya
    There is currently a surge of interest in service robotics—a desire to have robots leave the labs and factory floors to help solve critical issues facing our society, ranging from eldercare to education. A critical issue is that we cannot preprogram these robots with every skill they will need to play a useful role in society—they will need the ability to interact with ordinary people and acquire new relevant skills after they are deployed. Using human input with Machine Learning systems is not a new goal, but we believe that the problem needs reframing before the field will succeed in building robots that learn from everyday people. Many related works focus on machine performance gains, asking, “What can I get the person to do to help my robot learn better?” In an approach we call Socially Guided Machine Learning (SG-ML), we formulate the problem as a human-machine interaction, asking, “How can I improve the dynamics of this tightly coupled teacher-learner system?” With the belief that machines meant to learn from people can better take advantage of the ways in which people naturally approach teaching, our research aims to understand and computationally model mechanisms of human social learning in order to build machines that are natural and intuitive to teach. In this paper we focus on a particular aspect of SG-ML. When building a robot learner that takes advantage of human input, one of the design questions is “What is the right level of human guidance?” One has to determine how much and what kind of interaction to require of the human. We first review prior work with respect to these questions, and then summarize three recent projects. In the first two projects we investigate self versus social learning and demonstrate ways in which the two are mutually beneficial. In the third project we investigate a variety of social learning strategies, implementing four biologically inspired ways to take advantage of the social environment. We see computational benefits with each strategy depending on the environment, demonstrating the usefulness of non-imitative social learning. Taken together these projects argue that robots need a variety of learning strategies working together, including self and several types of social mechanisms, in order to succeed in Socially Guided Machine Learning.
  • Item
    Learning about Objects with Human Teachers
    (Georgia Institute of Technology, 2009) Thomaz, Andrea L. ; Cakmak, Maya
    A general learning task for a robot in a new environment is to learn about objects and what actions/effects they afford. To approach this, we look at ways that a human partner can intuitively help the robot learn, an approach we call Socially Guided Machine Learning. We present experiments conducted with our robot, Junior, and make six observations characterizing how people approached teaching about objects. We show that Junior successfully used transparency to mitigate errors. Finally, we present the impact of “social” versus “nonsocial” data sets when training SVM classifiers.