Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)

Publication Search Results

  • Item
    Effective robot task learning by focusing on task-relevant objects
    (Georgia Institute of Technology, 2009-10) Lee, Kyu Hwa ; Lee, Jinhan ; Thomaz, Andrea L. ; Bobick, Aaron F.
    In a robot learning-from-demonstration framework involving environments with many objects, one of the key problems is to decide which objects are relevant to a given task. In this paper, we analyze this problem and propose a biologically inspired computational model that enables the robot to focus on the task-relevant objects. To filter out incompatible task models, we compute a task relevance value (TRV) for each object, which captures the human demonstrator's implicit indication of that object's relevance to the task. By combining an intentional action representation with 'motionese', our model exhibits recognition capabilities compatible with the way that humans demonstrate. We evaluate the system on demonstrations from five different human subjects, showing its ability to correctly focus on the appropriate objects in these demonstrations.
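A minimal sketch of the attention-focusing idea only, not the paper's TRV computation (the actual model combines an intentional action representation with motionese features): score each object by how much of the demonstration the teacher's hand spends near it. The trajectory format, object list, and distance threshold below are assumptions.

```python
import numpy as np

def task_relevance(hand_traj, object_positions, near_dist=0.15):
    """Toy task-relevance score: the fraction of demonstration frames in which
    the demonstrator's hand is within `near_dist` meters of each object.

    hand_traj: (T, 3) array of hand positions over the demonstration.
    object_positions: dict mapping object name -> (3,) position.
    Returns a dict mapping object name -> score in [0, 1].
    """
    scores = {}
    for name, pos in object_positions.items():
        dists = np.linalg.norm(hand_traj - np.asarray(pos), axis=1)
        scores[name] = float(np.mean(dists < near_dist))
    return scores

# Example: a demonstration that hovers around the cup and mostly ignores the box.
rng = np.random.default_rng(0)
hand = np.concatenate([rng.normal([0.5, 0.0, 0.1], 0.02, (80, 3)),
                       rng.normal([0.0, 0.5, 0.1], 0.02, (20, 3))])
objects = {"cup": [0.5, 0.0, 0.1], "box": [1.5, 1.5, 0.0]}
print(task_relevance(hand, objects))   # the cup scores high, the box near zero
```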
  • Item
    Bayesian Surprise and Landmark Detection
    (Georgia Institute of Technology, 2009-05) Ranganathan, Ananth ; Dellaert, Frank
    Automatic detection of landmarks, usually special places in the environment such as gateways, for topological mapping has proven to be a difficult task. We present the use of Bayesian surprise, introduced in computer vision, for landmark detection. Further, we provide a novel hierarchical, graphical model for the appearance of a place and use this model to perform surprise-based landmark detection. Our scheme is agnostic to the sensor type, and we demonstrate this by implementing a simple laser model for computing surprise. We evaluate our landmark detector using appearance and laser measurements in the context of a topological mapping algorithm, thus demonstrating the practical applicability of the detector.
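Bayesian surprise is the KL divergence between the posterior and the prior over a model of a place's appearance. A toy sketch under assumed simplifications (a single Dirichlet over quantized appearance features rather than the paper's hierarchical model; the 8-bin histograms and the landmark threshold are made up):

```python
import numpy as np
from scipy.special import digamma, gammaln

def dirichlet_kl(a, b):
    """Closed-form KL( Dir(a) || Dir(b) )."""
    a0 = a.sum()
    return (gammaln(a0) - gammaln(a).sum()
            - gammaln(b.sum()) + gammaln(b).sum()
            + np.dot(a - b, digamma(a) - digamma(a0)))

def surprise(prior_counts, obs_histogram):
    """Bayesian surprise of one observation: KL divergence from the prior
    Dirichlet to the posterior obtained by adding the observation's counts."""
    return dirichlet_kl(prior_counts + obs_histogram, prior_counts)

# Flag a landmark when an incoming appearance histogram is surprising.
prior = np.full(8, 2.0)                              # pseudo-counts over 8 feature bins
for hist in [np.array([3, 2, 3, 2, 3, 2, 3, 2]),     # looks like what came before
             np.array([0, 0, 18, 0, 0, 0, 0, 2])]:   # very different appearance
    s = surprise(prior, hist.astype(float))
    print(f"surprise = {s:.2f}", "-> landmark" if s > 5.0 else "")
    prior += hist                                    # the posterior becomes the prior
```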
  • Item
    Effects of Social Exploration Mechanisms on Robot Learning
    (Georgia Institute of Technology, 2009) Cakmak, Maya ; DePalma, Nick ; Thomaz, Andrea L. ; Arriaga, Rosa I.
    Social learning in robotics has largely focused on imitation learning. Here we take a broader view and are interested in the multifaceted ways that a social partner can influence the learning process. We implement four social learning mechanisms on a robot: stimulus enhancement, emulation, mimicking, and imitation, and illustrate the computational benefits of each. In particular, we illustrate that some strategies are about directing the attention of the learner to objects and others are about actions. Taken together these strategies form a rich repertoire allowing social learners to use a social partner to greatly impact their learning process. We demonstrate these results in simulation and with physical robot ‘playmates’.
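A schematic sketch of how the four mechanisms differ in what they borrow from the social partner (the object attended to, the action, the goal, or a combination); the object and action names and the exploration interface are assumptions, not the robot implementation from the paper:

```python
import random

# A partner's observed behavior: the object acted on, the action used, and the
# resulting goal state (e.g., the toy rattles).
partner = {"object": "red_ball", "action": "shake", "goal_state": "rattle"}

OBJECTS = ["red_ball", "blue_cube", "green_cup"]
ACTIONS = ["shake", "press", "flip"]

def stimulus_enhancement(partner):
    # Attention is drawn to the partner's object; actions are the learner's own.
    return {"object": partner["object"], "action": random.choice(ACTIONS), "goal": None}

def emulation(partner):
    # Attention is drawn to the observed outcome; the learner tries to reproduce
    # the goal state by its own means, on any object.
    return {"object": random.choice(OBJECTS), "action": random.choice(ACTIONS),
            "goal": partner["goal_state"]}

def mimicking(partner):
    # The partner's action is copied, with no commitment to its object or goal.
    return {"object": random.choice(OBJECTS), "action": partner["action"], "goal": None}

def imitation(partner):
    # Both the action and the object are copied, in service of the observed goal.
    return {"object": partner["object"], "action": partner["action"],
            "goal": partner["goal_state"]}

for mechanism in (stimulus_enhancement, emulation, mimicking, imitation):
    print(f"{mechanism.__name__:20s} -> {mechanism(partner)}")
```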
  • Item
    Computational Benefits of Social Learning Mechanisms: Stimulus Enhancement and Emulation
    (Georgia Institute of Technology, 2009) Cakmak, Maya ; DePalma, Nick ; Arriaga, Rosa I. ; Thomaz, Andrea L.
    Social learning in robotics has largely focused on imitation learning. In this work, we take a broader view of social learning and are interested in the multifaceted ways that a social partner can influence the learning process. We implement stimulus enhancement and emulation on a robot, and illustrate the computational benefits of social learning over individual learning. Additionally we characterize the differences between these two social learning strategies, showing that the preferred strategy is dependent on the current behavior of the social partner. We demonstrate these learning results both in simulation and with physical robot ‘playmates’.
  • Item
    Social Learning Mechanisms for Robots
    (Georgia Institute of Technology, 2009) Thomaz, Andrea L. ; Cakmak, Maya
    There is currently a surge of interest in service robotics—a desire to have robots leave the labs and factory floors to help solve critical issues facing our society, ranging from eldercare to education. A critical issue is that we cannot preprogram these robots with every skill they will need to play a useful role in society—they will need the ability to interact with ordinary people and acquire new relevant skills after they are deployed. Using human input with Machine Learning systems is not a new goal, but we believe that the problem needs reframing before the field will succeed in building robots that learn from everyday people. Many related works focus on machine performance gains, asking, “What can I get the person to do to help my robot learn better?” In an approach we call Socially Guided Machine Learning (SG-ML), we formulate the problem as a human-machine interaction, asking, “How can I improve the dynamics of this tightly coupled teacher-learner system?” With the belief that machines meant to learn from people can better take advantage of the ways in which people naturally approach teaching, our research aims to understand and computationally model mechanisms of human social learning in order to build machines that are natural and intuitive to teach. In this paper we focus on a particular aspect of SG-ML. When building a robot learner that takes advantage of human input, one of the design questions is “What is the right level of human guidance?” One has to determine how much and what kind of interaction to require of the human. We first review prior work with respect to these questions, and then summarize three recent projects. In the first two projects we investigate self versus social learning and demonstrate ways in which the two are mutually beneficial. In the third project we investigate a variety of social learning strategies, implementing four biologically inspired ways to take advantage of the social environment. We see computational benefits with each strategy depending on the environment, demonstrating the usefulness of non-imitative social learning. Taken together, these projects argue that robots need a variety of learning strategies working together, including self and several types of social mechanisms, in order to succeed in Socially Guided Machine Learning.
  • Item
    Learning about Objects with Human Teachers
    (Georgia Institute of Technology, 2009) Thomaz, Andrea L. ; Cakmak, Maya
    A general learning task for a robot in a new environment is to learn about objects and what actions/effects they afford. To approach this, we look at ways that a human partner can intuitively help the robot learn, an approach we call Socially Guided Machine Learning. We present experiments conducted with our robot, Junior, and make six observations characterizing how people approached teaching about objects. We show that Junior successfully used transparency to mitigate errors. Finally, we present the impact of “social” versus “nonsocial” data sets when training SVM classifiers.
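As a generic illustration of that last comparison only (the paper's “social” and “nonsocial” data sets are not reproduced here, so synthetic stand-ins are used), one can train the same SVM on two training sets and compare held-out accuracy:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_set(n, spread):
    """Two-class toy data; a larger `spread` mimics noisier, less curated examples."""
    X0 = rng.normal([0.0, 0.0], spread, (n, 2))
    X1 = rng.normal([2.0, 2.0], spread, (n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Hypothetical stand-ins: "social" examples selected by a teacher (cleaner),
# "nonsocial" examples gathered by undirected exploration (noisier).
train_sets = {"social": make_set(60, spread=0.4),
              "nonsocial": make_set(60, spread=1.2)}
X_test, y_test = make_set(200, spread=0.4)

for name, (X, y) in train_sets.items():
    clf = SVC(kernel="rbf").fit(X, y)
    print(f"{name:9s} training set -> test accuracy {clf.score(X_test, y_test):.2f}")
```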
  • Item
    Place Recognition-Based Fixed-Lag Smoothing for Environments with Unreliable GPS
    (Georgia Institute of Technology, 2008-05) Mottaghi, Roozbeh ; Kaess, Michael ; Ranganathan, Ananth ; Roberts, Richard ; Dellaert, Frank
    Pose estimation for outdoor robots presents distinct challenges due to the various uncertainties in robot sensing and action. In particular, global positioning sensors of outdoor robots do not always work perfectly, causing large drift in the location estimate of the robot. To overcome this common problem, we propose a new approach for global localization using place recognition. First, we learn the locations of some arbitrary key places using odometry measurements, with GPS measurements available only at the start and the end of the robot trajectory. In subsequent runs, when the robot perceives a key place, our fixed-lag smoother fuses odometry measurements with the relative location to the key place to improve its pose estimate. Outdoor mobile robot experiments show that place recognition measurements significantly improve the estimate of the smoother in the absence of GPS measurements.
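A minimal sketch of the underlying idea rather than the paper's fixed-lag smoother: dead-reckon with odometry, and when a previously learned key place is recognized, fuse the drifting estimate with the key place's known location (here the relative offset is assumed to be folded in, so recognition acts as a global position measurement). The key-place coordinates and noise values are made up.

```python
import numpy as np

# Key-place locations assumed learned in an earlier GPS-anchored run.
KEY_PLACES = {"gate": np.array([12.0, 3.0]), "tree": np.array([25.0, -4.0])}

def fuse(pose, cov, measured_xy, meas_var=0.5):
    """Kalman-style update of a 2D position estimate (isotropic covariance)
    with a position measurement derived from recognizing a key place."""
    k = cov / (cov + meas_var)
    return pose + k * (measured_xy - pose), (1.0 - k) * cov

pose, cov = np.zeros(2), 0.01                    # start at the origin, well known
steps = ([("odom", np.array([1.0, 0.2]))] * 12 + [("place", "gate")]
         + [("odom", np.array([1.0, -0.5]))] * 14 + [("place", "tree")])

for kind, data in steps:
    if kind == "odom":                           # dead reckoning: drift accumulates
        pose, cov = pose + data, cov + 0.05
    else:                                        # key place recognized: correct
        pose, cov = fuse(pose, cov, KEY_PLACES[data])
    print(f"{kind:5s} pose=({pose[0]:6.2f}, {pose[1]:6.2f}) cov={cov:.2f}")
```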
  • Item
    Stereo Tracking and Three-Point/One-Point Algorithms - A Robust Approach in Visual Odometry
    (Georgia Institute of Technology, 2006-10) Ni, Kai ; Dellaert, Frank
    In this paper, we present an approach to calculating visual odometry for outdoor robots equipped with a stereo rig. Instead of the typical feature matching or tracking, we use an improved stereo-tracking method that simultaneously determines the feature displacement in both cameras. Based on the matched features, a three-point algorithm for the resulting quadrifocal setting is carried out in a RANSAC framework to recover the unknown odometry. In addition, the change in rotation can be derived from the infinity homography, and the remaining translational unknowns can then be obtained even faster. Both approaches are quite robust and deal well with challenging conditions such as wheel slippage.
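A rough sketch of the overall pipeline under common simplifications (it estimates frame-to-frame motion from triangulated 3D stereo points with three-point minimal samples and SVD-based alignment inside RANSAC, rather than the paper's quadrifocal formulation):

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares R, t with B ≈ A @ R.T + t for corresponding 3D point sets
    A, B of shape (N, 3) (Kabsch / Horn closed-form alignment)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def ransac_motion(P0, P1, iters=200, thresh=0.05, seed=0):
    """RANSAC over minimal three-point samples of triangulated stereo features;
    P0, P1 are (N, 3) points in the previous and current camera frames."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P0), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P0), size=3, replace=False)
        R, t = rigid_transform(P0[idx], P1[idx])
        err = np.linalg.norm(P1 - (P0 @ R.T + t), axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_transform(P0[best], P1[best])   # refit on all inliers

# Synthetic check: rotate and translate a point cloud, then corrupt a few matches.
rng = np.random.default_rng(1)
P0 = rng.uniform(-2.0, 2.0, (60, 3)) + np.array([0.0, 0.0, 5.0])
a = 0.1
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
P1 = P0 @ R_true.T + np.array([0.3, 0.0, 0.1])
P1[:5] += rng.normal(0.0, 1.0, (5, 3))           # outliers from bad matches
R, t = ransac_motion(P0, P1)
print(np.round(t, 3))                            # close to [0.3, 0.0, 0.1]
```

Three points are the minimal sample for a 3D-to-3D rigid alignment, which keeps the chance of drawing an all-inlier sample high and the RANSAC loop short.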
  • Item
    A Rao-Blackwellized Particle Filter for Topological Mapping
    (Georgia Institute of Technology, 2006-05) Ranganathan, Ananth ; Dellaert, Frank
    We present a particle filtering algorithm to construct topological maps of an uninstrumented environment. The algorithm presented here constructs the posterior on the space of all possible topologies given measurements, and is based on our previous work on a Bayesian inference framework for topological maps [21]. Constructing the posterior solves the perceptual aliasing problem in a general, robust manner. The use of a Rao-Blackwellized Particle Filter (RBPF) for this purpose makes inference in the space of topologies incremental and real-time. The RBPF maintains the joint posterior on topological maps and locations of landmarks. We demonstrate that, using the landmark locations thus obtained, the global metric map can be obtained from the topological map generated by our algorithm through a simple post-processing step. A data-driven proposal is provided to overcome the degeneracy problem inherent in particle filters. The use of a Dirichlet process prior on landmark labels is also a novel aspect of this work. We present experimental results on a robot using laser range scans and odometry measurements.
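A toy 1D illustration of the Rao-Blackwellization idea only: data associations are sampled per particle while landmark positions are kept as closed-form Gaussians. The paper's posterior over topologies, Dirichlet process prior, and data-driven proposal are not reproduced; the noise values and the "new landmark" density below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 0.2          # measurement noise variance (assumed)
P_NEW = 0.05     # density assigned to "this is a new landmark" (assumed flat prior)
N_PART = 100

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Each particle carries landmark posteriors [(mean, var), ...] and a weight.
particles = [{"landmarks": [], "w": 1.0 / N_PART} for _ in range(N_PART)]

def update(particles, z):
    for p in particles:
        lms = p["landmarks"]
        # Likelihood of z under each existing landmark, with the landmark position
        # marginalized analytically (the Rao-Blackwellized part), plus "new".
        liks = np.array([normal_pdf(z, mu, var + R) for mu, var in lms] + [P_NEW])
        p["w"] *= liks.sum() / len(liks)                 # uniform prior over choices
        a = rng.choice(len(liks), p=liks / liks.sum())   # sample the association
        if a == len(lms):
            lms.append((z, R))                           # start a new landmark
        else:
            mu, var = lms[a]                             # closed-form Kalman update
            k = var / (var + R)
            lms[a] = (mu + k * (z - mu), (1.0 - k) * var)
    # Normalize and resample when the effective sample size gets small.
    w = np.array([p["w"] for p in particles])
    w /= w.sum()
    for p, wi in zip(particles, w):
        p["w"] = wi
    if 1.0 / np.sum(w ** 2) < N_PART / 2:
        idx = rng.choice(N_PART, size=N_PART, p=w)
        particles[:] = [{"landmarks": list(particles[i]["landmarks"]),
                         "w": 1.0 / N_PART} for i in idx]

# Two true landmarks near 0.0 and 5.0, observed repeatedly in random order.
for z in rng.permutation([0.1, 4.9, -0.2, 5.2, 0.0, 5.1, 0.05, 4.95]):
    update(particles, z)
best = max(particles, key=lambda p: p["w"])
print("landmark means:", [round(mu, 2) for mu, _ in best["landmarks"]])
```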
  • Item
    Data-Driven MCMC for Learning and Inference in Switching Linear Dynamic Systems
    (Georgia Institute of Technology, 2005-07) Oh, Sang Min ; Rehg, James M. ; Balch, Tucker ; Dellaert, Frank
    Switching Linear Dynamic System (SLDS) models are a popular technique for modeling complex nonlinear dynamic systems. An SLDS has significantly more descriptive power than an HMM, but inference in SLDS models is computationally intractable. This paper describes a novel inference algorithm for SLDS models based on the Data-Driven MCMC paradigm. We describe a new proposal distribution which substantially increases the convergence speed. Comparisons to standard deterministic approximation methods demonstrate the improved accuracy of our new approach. We apply our approach to the problem of learning an SLDS model of the bee dance. Honeybees communicate the location and distance to food sources through a dance that takes place within the hive. We learn SLDS model parameters from tracking data which is automatically extracted from video. We then demonstrate the ability to successfully segment novel bee dances into their constituent parts, effectively decoding the dance of the bees.
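A compact sketch of the data-driven MCMC idea on a toy two-mode switching system with known mode parameters (the paper also learns the parameters and uses more elaborate proposals): a Metropolis-Hastings sampler over the switch labels whose proposal resamples a label from the per-step likelihoods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy switching linear system with two known modes (the paper additionally
# learns the mode parameters, which is not reproduced here).
A = np.array([1.0, -1.0])   # x_t = A[m_t] * x_{t-1} + noise
Q = 0.1                     # process noise variance
STICK = 0.95                # prior probability of staying in the same mode

def step_loglik(x_prev, x_cur, m):
    return -0.5 * (x_cur - A[m] * x_prev) ** 2 / Q - 0.5 * np.log(2 * np.pi * Q)

def log_target(labels, x):
    """Unnormalized log posterior of a label sequence: dynamics likelihood
    plus a sticky Markov prior on the switch labels."""
    ll = sum(step_loglik(x[t - 1], x[t], labels[t]) for t in range(1, len(x)))
    prior = sum(np.log(STICK if labels[t] == labels[t - 1] else 1.0 - STICK)
                for t in range(1, len(x)))
    return ll + prior

# Simulate data: mode 0 for 50 steps, then mode 1 for 50 steps.
true = np.array([0] * 50 + [1] * 50)
x = np.zeros(100)
x[0] = 5.0
for t in range(1, 100):
    x[t] = A[true[t]] * x[t - 1] + rng.normal(0.0, np.sqrt(Q))

labels = rng.integers(0, 2, size=100)
labels[0] = 0
cur = log_target(labels, x)
for _ in range(5000):
    t = rng.integers(1, 100)
    # Data-driven proposal: resample the label at t from the per-step
    # likelihoods, so proposals concentrate where the data support a switch.
    l = np.array([step_loglik(x[t - 1], x[t], m) for m in (0, 1)])
    p = np.exp(l - l.max())
    p /= p.sum()
    new_m = rng.choice(2, p=p)
    if new_m == labels[t]:
        continue
    old_m = labels[t]
    labels[t] = new_m
    new = log_target(labels, x)
    # Metropolis-Hastings acceptance with the proposal correction, in log space.
    if np.log(rng.random()) < new - cur + l[old_m] - l[new_m]:
        cur = new
    else:
        labels[t] = old_m
print("mislabeled steps:", int(np.sum(labels != true)))
```

Because the proposal is proportional to the per-step likelihood, that likelihood ratio cancels in the acceptance test, so the data screen the proposals while the sticky prior arbitrates the remaining choices.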