Organizational Unit:
Socially Intelligent Machines Lab


Publication Search Results

Now showing 1 - 8 of 8
  • Item
    Multi-Cue Contingency Detection
    (Georgia Institute of Technology, 2012-04) Lee, Jinhan ; Chao, Crystal ; Bobick, Aaron F. ; Thomaz, Andrea L.
The ability to detect a human's contingent response is an essential skill for a social robot attempting to engage new interaction partners or maintain ongoing turn-taking interactions. Prior work on contingency detection focuses on single cues from isolated channels, such as changes in gaze, motion, or sound. We propose a framework that integrates multiple cues for detecting contingency from multimodal sensor data in human-robot interaction scenarios. We describe three levels of integration and discuss our method for performing sensor fusion at each of these levels. We perform a Wizard-of-Oz data collection experiment in a turn-taking scenario in which our humanoid robot plays the turn-taking imitation game “Simon says” with human partners. Using this data set, which includes motion and body pose cues from a depth and color image and audio cues from a microphone, we evaluate our contingency detection module with the proposed integration mechanisms and show gains in accuracy of our multi-cue approach over single-cue contingency detection. We show the importance of selecting the appropriate level of cue integration as well as the implications of varying the referent event parameter.
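The abstract above describes fusing per-channel cues into a single contingency decision. The following is a minimal illustrative sketch of decision-level fusion; the cue names, weights, and weighted-average rule are assumptions for illustration, not the paper's actual method.

```python
# Decision-level cue fusion sketch: each channel reports a contingency
# probability, and a weighted average decides whether the human's
# behavior counts as a contingent response.

def fuse_cues(cue_scores, weights, threshold=0.5):
    """Combine per-channel contingency scores into one decision.

    cue_scores: dict mapping cue name -> probability in [0, 1]
    weights:    dict mapping cue name -> non-negative weight
    Returns True if the weighted average meets the threshold.
    """
    total = sum(weights[c] for c in cue_scores)
    fused = sum(cue_scores[c] * weights[c] for c in cue_scores) / total
    return fused >= threshold

# Example: motion responds strongly, gaze weakly, audio moderately.
scores = {"motion": 0.9, "gaze": 0.3, "audio": 0.6}
weights = {"motion": 2.0, "gaze": 1.0, "audio": 1.0}
print(fuse_cues(scores, weights))  # weighted average 0.675 -> True
```

A fixed threshold on a weighted average is the simplest fusion rule; fusing earlier (at the feature level) or later (by voting over per-cue decisions) corresponds to the different integration levels the abstract mentions.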
  • Item
    Timing in Multimodal Turn-Taking Interactions: Control and Analysis Using Timed Petri Nets
    (Georgia Institute of Technology, 2012) Chao, Crystal ; Thomaz, Andrea L.
Turn-taking interactions with humans are multimodal and reciprocal in nature. In addition, the timing of actions is of great importance, as it influences both social and task strategies. To enable the precise control and analysis of timed discrete events for a robot, we develop a system for multimodal collaboration based on a timed Petri net (TPN) representation. We also argue for action interruptions in reciprocal interaction and describe their implementation within our system. Using the system, our autonomously operating humanoid robot Simon collaborates with humans through both speech and physical action to solve the Towers of Hanoi, during which the human and the robot take turns manipulating objects in a shared physical workspace. We hypothesize that action interruptions have a positive impact on turn-taking and evaluate this in the Towers of Hanoi domain through two experimental methods. One is a between-groups user study with 16 participants. The other is a simulation experiment using 200 simulated users of varying speed, initiative, compliance, and correctness. In these experiments, action interruptions are either present or absent in the system. Our collective results show that action interruptions lead to increased task efficiency through increased user initiative, improved interaction balance, and higher sense of fluency. In arriving at these results, we demonstrate how these evaluation methods can be highly complementary in the analysis of interaction dynamics.
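The TPN representation above models turn-taking as tokens moving between places through timed transitions. Below is a minimal illustrative sketch of that idea; the class, place names, and delay semantics are assumptions for exposition, not the paper's system.

```python
# Minimal timed-Petri-net sketch: places hold tokens, each transition
# has a firing delay, and firing consumes input tokens, advances the
# clock by the delay, and produces output tokens.

class TimedPetriNet:
    def __init__(self):
        self.marking = {}       # place -> token count
        self.transitions = {}   # name -> (inputs, outputs, delay)
        self.clock = 0.0

    def add_transition(self, name, inputs, outputs, delay):
        self.transitions[name] = (inputs, outputs, delay)

    def enabled(self, name):
        inputs, _, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs, delay = self.transitions[name]
        assert self.enabled(name)
        for p in inputs:
            self.marking[p] -= 1
        self.clock += delay     # time advances by the firing delay
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A two-turn exchange: the robot acts, then the human takes the floor.
net = TimedPetriNet()
net.marking = {"robot_turn": 1}
net.add_transition("robot_speaks", ["robot_turn"], ["human_turn"], 1.5)
net.add_transition("human_acts", ["human_turn"], ["robot_turn"], 2.0)
net.fire("robot_speaks")
net.fire("human_acts")
print(net.clock)  # 3.5
```

In this representation, an action interruption would amount to disabling or preempting a transition mid-delay, which is exactly the kind of timed discrete event the TPN makes explicit.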
  • Item
    Controlling Social Dynamics with a Parametrized Model of Floor Regulation
    (Georgia Institute of Technology, 2012) Chao, Crystal ; Thomaz, Andrea L.
    Turn-taking is ubiquitous in human communication, yet turn-taking between humans and robots continues to be stilted and awkward for human users. The goal of our work is to build autonomous robot controllers for successfully engaging in human-like turn-taking interactions. Towards this end, we present CADENCE, a novel computational model and architecture that explicitly reasons about the four components of floor regulation: seizing the floor, yielding the floor, holding the floor, and auditing the owner of the floor. The model is parametrized to enable the robot to achieve a range of social dynamics for the human-robot dyad. In a between-groups experiment with 30 participants, our humanoid robot uses this turn-taking system at two contrasting parametrizations to engage users in autonomous object play interactions. Our results from the study show that: (1) manipulating these turn-taking parameters results in significantly different robot behavior; (2) people perceive the robot’s behavioral differences and consequently attribute different personalities to the robot; and (3) changing the robot’s personality results in different behavior from the human, manipulating the social dynamics of the dyad. We discuss the implications of this work for various contextual applications as well as the key limitations of the system to be addressed in future work.
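The four floor-regulation components above (seizing, yielding, holding, auditing) can be caricatured as a parametrized decision rule. The sketch below is purely illustrative; the parameter names and thresholds are assumptions, not CADENCE's actual model or API.

```python
# Hedged sketch of parametrized floor regulation: the robot picks a
# floor action from tunable timing parameters, so different settings
# yield different social dynamics for the dyad.

def floor_action(has_floor, silence, hold_time, params):
    """Choose a floor-regulation action for the robot.

    has_floor: robot currently owns the floor
    silence:   seconds since the human last acted
    hold_time: seconds the robot has held the floor
    params:    dict with 'seize_patience' and 'max_hold' (illustrative)
    """
    if has_floor:
        return "yield" if hold_time >= params["max_hold"] else "hold"
    # Without the floor: audit the owner, seizing only after enough silence.
    return "seize" if silence >= params["seize_patience"] else "audit"

# Two contrasting parametrizations, in the spirit of the study:
eager = {"seize_patience": 0.5, "max_hold": 5.0}
passive = {"seize_patience": 3.0, "max_hold": 1.0}
print(floor_action(False, 1.0, 0.0, eager))    # "seize"
print(floor_action(False, 1.0, 0.0, passive))  # "audit"
```

The point of the example is that a single rule with different parameter settings produces visibly different robot behavior, which is the mechanism the study manipulates between groups.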
  • Item
    Towards Grounding Concepts for Transfer in Goal Learning from Demonstration
    (Georgia Institute of Technology, 2011-08) Chao, Crystal ; Cakmak, Maya ; Thomaz, Andrea L.
    We aim to build robots that frame the task learning problem as goal inference so that they are natural to teach and meet people's expectations for a learning partner. The focus of this work is the scenario of a social robot that learns task goals from human demonstrations without prior knowledge of high-level concepts. In the system that we present, these discrete concepts are grounded from low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned on them using Bayesian inference. The grounded concepts are derived from the structure of the Learning from Demonstration (LfD) problem and exhibit degrees of prototypicality. These concepts can be used to transfer knowledge to future tasks, resulting in faster learning of those tasks. Using sensor data taken during demonstrations to our robot from five human teachers, we show the expressivity of using grounded concepts when learning new tasks from demonstration. We then show how the learning curve improves when transferring the knowledge of grounded concepts to future tasks.
  • Item
    Simon plays Simon says: The timing of turn-taking in an imitation game
    (Georgia Institute of Technology, 2011) Chao, Crystal ; Lee, Jinhan ; Begum, Momotaz ; Thomaz, Andrea L.
Turn-taking is fundamental to the way humans engage in information exchange, but robots currently lack the turn-taking skills required for natural communication. In order to bring effective turn-taking to robots, we must first understand the underlying processes in the context of what is possible to implement. We describe a data collection experiment with an interaction format inspired by “Simon says,” a turn-taking imitation game that engages the channels of gaze, speech, and motion. We analyze data from 23 human subjects interacting with a humanoid social robot and propose the principle of minimum necessary information (MNI) as a factor in determining the timing of the human response. We also describe the other observed phenomena of channel exclusion, efficiency, and adaptation. We discuss the implications of these principles and propose some ways to incorporate our findings into a computational model of turn-taking.
  • Item
    Designing Interactions for Robot Active Learners
    (Georgia Institute of Technology, 2010-06) Cakmak, Maya ; Chao, Crystal ; Thomaz, Andrea L.
This paper addresses some of the problems that arise when applying active learning to the context of human–robot interaction (HRI). Active learning is an attractive strategy for robot learners because it has the potential to improve the accuracy and the speed of learning, but it can cause issues from an interaction perspective. Here we present three interaction modes that enable a robot to use active learning queries. The three modes differ in when they make queries: the first makes a query every turn, the second makes a query only under certain conditions, and the third makes a query only when explicitly requested by the teacher. We conduct an experiment in which 24 human subjects teach concepts to our upper-torso humanoid robot, Simon, in each interaction mode, and we compare these modes against a baseline mode using only passive supervised learning. We report results from both a learning and an interaction perspective. The data show that the three modes using active learning are preferable to the mode using passive supervised learning both in terms of performance and human subject preference, but each mode has advantages and disadvantages. Based on our results, we lay out several guidelines that can inform the design of future robotic systems that use active learning in an HRI setting.
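The three query timings described above can be sketched as a single dispatch function. This is an illustrative sketch only; the mode names and the uncertainty-threshold condition are assumptions, not the paper's implementation.

```python
# Sketch of the three active-learning query timings: query every turn,
# query conditionally (here, when the learner is uncertain), or query
# only when the teacher explicitly asks; anything else falls back to
# passive supervised learning.

def should_query(mode, uncertainty, teacher_asked, threshold=0.5):
    """Return True if the robot should make an active-learning query."""
    if mode == "every_turn":
        return True
    if mode == "conditional":
        return uncertainty > threshold  # query only when uncertain
    if mode == "on_request":
        return teacher_asked
    return False                        # baseline: passive learning

print(should_query("every_turn", 0.1, False))   # True
print(should_query("conditional", 0.8, False))  # True
print(should_query("on_request", 0.8, False))   # False
```

The interaction trade-off the paper studies lives in this dispatch: querying every turn maximizes information but can frustrate the teacher, while querying on request keeps the teacher in control at the cost of fewer informative queries.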
  • Item
    Transparent Active Learning for Robots
    (Georgia Institute of Technology, 2010) Chao, Crystal ; Cakmak, Maya ; Thomaz, Andrea L.
    This research aims to enable robots to learn from human teachers. Motivated by human social learning, we believe that a transparent learning process can help guide the human teacher to provide the most informative instruction. We believe active learning is an inherently transparent machine learning approach because the learner formulates queries to the oracle that reveal information about areas of uncertainty in the underlying model. In this work, we implement active learning on the Simon robot in the form of nonverbal gestures that query a human teacher about a demonstration within the context of a social dialogue. Our preliminary pilot study data show potential for transparency through active learning to improve the accuracy and efficiency of the teaching process. However, our data also seem to indicate possible undesirable effects from the human teacher’s perspective regarding balance of the interaction. These preliminary results argue for control strategies that balance leading and following during a social learning interaction.
  • Item
    Turn-Taking for Human-Robot Interaction
    (Georgia Institute of Technology, 2010) Chao, Crystal ; Thomaz, Andrea L.