Organizational Unit:
Socially Intelligent Machines Lab


Publication Search Results

  • Item
    Keyframe-based Learning from Demonstration Method and Evaluation
    (Georgia Institute of Technology, 2012-06) Akgun, Baris ; Cakmak, Maya ; Jiang, Karl ; Thomaz, Andrea L.
    We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a human-robot interaction perspective. Our approach, Keyframe-based Learning from Demonstration (KLfD), takes demonstrations that consist of keyframes: a sparse set of points in the state space that produces the intended skill when visited in sequence. Conventional trajectory demonstrations, or a hybrid of the two, are also handled by KLfD through conversion to keyframes. Our method produces a skill model that consists of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D, and scooping, pouring, and placing skills on a humanoid robot. KLfD performs similarly to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when the demonstration type is suited to the skill.
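
    The sketch below is only a rough illustration of the reproduction step described in this abstract, not the authors' implementation: it aligns keyframes by index instead of clustering them, assumes a 2-D state (as in the mouse-gesture domain), and splines through the per-position means.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Hypothetical demonstrations: each is an ordered list of keyframe poses
    # (2-D points, loosely modeled on the mouse-gesture domain).
    demos = np.stack([
        [[0.0, 0.0], [1.0, 2.1], [2.0, 1.9], [3.0, 0.1]],
        [[0.1, 0.0], [0.9, 2.0], [2.1, 2.0], [3.0, 0.0]],
        [[0.0, 0.1], [1.1, 1.9], [1.9, 2.1], [2.9, 0.0]],
    ])  # shape: (demonstrations, keyframes, state_dim)

    # Group keyframes by ordinal position across demonstrations and take each
    # group's mean -- a crude stand-in for the paper's keyframe clusters
    # (Sequential Pose Distributions), which are learned rather than index-aligned.
    cluster_means = demos.mean(axis=0)                 # (K, 2)

    # Reproduce the skill by splining between the clusters.
    k = np.arange(cluster_means.shape[0])
    spline = CubicSpline(k, cluster_means, axis=0)
    trajectory = spline(np.linspace(0, k[-1], 100))    # dense (100, 2) reproduction
    print(trajectory.shape)
    ```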
  • Item
    Multi-Cue Contingency Detection
    (Georgia Institute of Technology, 2012-04) Lee, Jinhan ; Chao, Crystal ; Bobick, Aaron F. ; Thomaz, Andrea L.
    The ability to detect a human's contingent response is an essential skill for a social robot attempting to engage new interaction partners or maintain ongoing turn-taking interactions. Prior work on contingency detection focuses on single cues from isolated channels, such as changes in gaze, motion, or sound. We propose a framework that integrates multiple cues for detecting contingency from multimodal sensor data in human-robot interaction scenarios. We describe three levels of integration and discuss our method for performing sensor fusion at each of these levels. We perform a Wizard-of-Oz data collection experiment in a turn-taking scenario in which our humanoid robot plays the turn-taking imitation game “Simon says” with human partners. Using this data set, which includes motion and body pose cues from a depth and color image and audio cues from a microphone, we evaluate our contingency detection module with the proposed integration mechanisms and show gains in accuracy of our multi-cue approach over single-cue contingency detection. We show the importance of selecting the appropriate level of cue integration as well as the implications of varying the referent event parameter.
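
    A minimal sketch of decision-level cue fusion in the spirit of this abstract; the cue names, weights, and threshold are illustrative assumptions, not values from the paper.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CueScore:
        name: str
        probability: float   # detector's belief that a contingent response occurred
        weight: float        # assumed relative reliability of this cue

    def fuse_cues(cues, threshold=0.5):
        """Decision-level fusion: weighted average of per-cue probabilities."""
        total = sum(c.weight for c in cues)
        fused = sum(c.probability * c.weight for c in cues) / total
        return fused, fused >= threshold

    # Hypothetical scores from motion, body-pose, and audio detectors after
    # the robot's signaling behavior (the referent event).
    cues = [CueScore("motion", 0.72, 1.0),
            CueScore("body_pose", 0.55, 0.8),
            CueScore("audio", 0.30, 0.5)]
    print(fuse_cues(cues))   # -> (0.5695..., True)
    ```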
  • Item
    Vision-based Contingency Detection
    (Georgia Institute of Technology, 2011) Lee, Jinhan ; Kiser, Jeffrey F. ; Bobick, Aaron F. ; Thomaz, Andrea L.
    We present a novel method for the visual detection of a contingent response by a human to the stimulus of a robot action. Contingency is defined as a change in an agent's behavior within a specific time window in direct response to a signal from another agent; detection of such responses is essential to assess the willingness and interest of a human in interacting with the robot. Using motion-based features to describe the possible contingent action, our approach assesses the visual self-similarity of video subsequences captured before the robot exhibits its signaling behavior and statistically models the typical graph-partitioning cost of separating an arbitrary subsequence of frames from the others. After the behavioral signal, the video is similarly analyzed and the cost of separating the after-signal frames from the before-signal sequences is computed; a lower than typical cost indicates a likely contingent reaction. We present a preliminary study in which data were captured and analyzed for algorithmic performance.
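
    The following sketch illustrates the self-similarity idea described in this abstract under simplifying assumptions (synthetic motion features, an RBF affinity, and a plain normalized separation cost); it is not the paper's algorithm.

    ```python
    import numpy as np

    def cut_cost(features, window):
        """Normalized sum of affinities between frames inside `window` and all
        other frames -- a simple stand-in for the graph-partitioning cost."""
        d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
        affinity = np.exp(-d ** 2)                     # RBF self-similarity
        inside = np.zeros(len(features), dtype=bool)
        inside[window] = True
        return affinity[inside][:, ~inside].sum() / (inside.sum() * (~inside).sum())

    rng = np.random.default_rng(0)
    before = rng.normal(0.0, 0.1, size=(40, 8))        # hypothetical motion features
    after = rng.normal(1.0, 0.1, size=(10, 8))         # the response changes the motion
    frames = np.vstack([before, after])

    # Typical cost of separating arbitrary 10-frame windows of before-signal video.
    typical = [cut_cost(before, slice(s, s + 10)) for s in range(0, 30, 5)]
    observed = cut_cost(frames, slice(40, 50))         # after-signal vs. before-signal
    print(observed < np.mean(typical) - 2 * np.std(typical))  # True -> likely contingent
    ```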
  • Item
    Spatiotemporal Correspondence as a Metric for Human-like Robot Motion
    (Georgia Institute of Technology, 2011) Gielniak, Michael J. ; Thomaz, Andrea L.
    Coupled degrees of freedom exhibit correspondence, in that their trajectories influence each other. In this paper we add evidence to the hypothesis that spatiotemporal correspondence (STC) of distributed actuators is a component of human-like motion. We demonstrate a method for making robot motion more human-like by optimizing with respect to a nonlinear STC metric. Quantitative evaluation of STC between coordinated robot motion, human motion capture data, and retargeted human motion capture data projected onto an anthropomorphic robot suggests that coordinating robot motion with respect to the STC metric makes the motion more human-like. A user study based on mimicking shows that STC-optimized motion is (1) more often recognized as a common human motion, (2) more accurately identified as the originally intended motion, and (3) mimicked more accurately than a non-optimized version. We conclude that coordinating robot motion with respect to the STC metric makes the motion more human-like. Finally, we present and discuss data on potential reasons why coordinating motion increases recognition and the ability to mimic.
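
    As a simplified stand-in for the nonlinear STC metric (which this abstract does not specify), the sketch below scores coordination as the average absolute pairwise correlation between joint trajectories; the trajectories are synthetic assumptions.

    ```python
    import numpy as np

    def stc_score(trajectories):
        """Average absolute pairwise correlation between joint trajectories --
        a simplified linear stand-in for the paper's nonlinear STC metric."""
        corr = np.corrcoef(trajectories)               # (J, J) joint-by-joint
        off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
        return np.abs(off_diag).mean()

    t = np.linspace(0, 2 * np.pi, 200)
    coordinated = np.vstack([np.sin(t), np.sin(t + 0.3), 0.5 * np.sin(t + 0.6)])
    uncoordinated = np.vstack([np.sin(t),
                               np.random.default_rng(1).normal(size=t.shape),
                               np.cos(3 * t)])
    # The coordinated trajectories score markedly higher.
    print(stc_score(coordinated), stc_score(uncoordinated))
    ```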
  • Item
    Learning about Objects with Human Teachers
    (Georgia Institute of Technology, 2009) Thomaz, Andrea L. ; Cakmak, Maya
    A general learning task for a robot in a new environment is to learn about objects and what actions/effects they afford. To approach this, we look at ways that a human partner can intuitively help the robot learn, Socially Guided Machine Learning. We present experiments conducted with our robot, Junior, and make six observations characterizing how people approached teaching about objects. We show that Junior successfully used transparency to mitigate errors. Finally, we present the impact of “social” versus “nonsocial” data sets when training SVM classifiers.