Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)

Publication Search Results

Now showing 1 - 10 of 152
  • Item
    The Middle Child Problem: Revisiting Parametric Min-cut and Seeds for Object Proposals
    (Georgia Institute of Technology, 2015-12) Humayun, Ahmad ; Li, Fuxin ; Rehg, James M.
    Object proposals have recently fueled the progress in detection performance. These proposals aim to provide category-agnostic localizations for all objects in an image. One way to generate proposals is to perform parametric min-cuts over seed locations. This paper demonstrates that standard parametric-cut models are ineffective in obtaining medium-sized objects, which we refer to as the middle child problem. We propose a new energy minimization framework incorporating geodesic distances between segments which solves this problem. In addition, we introduce a new superpixel merging algorithm which can generate a small set of seeds that reliably cover a large number of objects of all sizes. We call our method POISE - "Proposals for Objects from Improved Seeds and Energies." POISE enables parametric min-cuts to reach their full potential. On PASCAL VOC it generates ~2,640 segments with an average overlap of 0.81, whereas the closest competing methods require more than 4,200 proposals to reach the same accuracy. We show detailed quantitative comparisons against 5 state-of-the-art methods on PASCAL VOC and Microsoft COCO segmentation challenges.
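The overlap figures quoted above are average best overlap scores, i.e. for each ground-truth object, the intersection-over-union (IoU) of its best-matching proposal, averaged over objects. A minimal sketch of that metric, using hypothetical axis-aligned boxes rather than the paper's segment masks:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def average_best_overlap(ground_truth, proposals):
    """For each ground-truth object, keep the IoU of its best-matching proposal."""
    return sum(max(iou(gt, p) for p in proposals) for gt in ground_truth) / len(ground_truth)

# Hypothetical data: two objects, three proposals.
gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
props = [(0, 0, 10, 10), (21, 21, 30, 30), (5, 5, 15, 15)]
abo = average_best_overlap(gt, props)   # (1.0 + 0.81) / 2 = 0.905
```

The paper's numbers are computed over segment masks on PASCAL VOC, but the averaging logic is the same.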
  • Item
    Temporal Heterogeneity and the Value of Slowness in Robotic Systems
    (Georgia Institute of Technology, 2015-12) Arkin, Ronald C. ; Egerstedt, Magnus B.
    Robot teaming is a well-studied area, but little research to date has been conducted on the fundamental benefits of heterogeneous teams and virtually none on temporal heterogeneity, where timescales of the various platforms are radically different. This paper explores this aspect of robot ecosystems consisting of fast and slow robots (SlowBots) working together, including the bio-inspiration for such systems.
  • Item
    Robots learning actions and goals from everyday people
    (Georgia Institute of Technology, 2015-11-16) Akgun, Baris
    Robots are destined to move beyond the caged factory floors towards domains where they will interact closely with humans. They will encounter highly varied environments, scenarios and user demands. As a result, programming robots after deployment will be an important requirement. To address this challenge, the field of Learning from Demonstration (LfD) emerged with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. The field of LfD within robotics has been around for more than 30 years and is still actively researched. However, very little research has been done on the implications of having a non-robotics expert as a teacher. This thesis aims to bridge this gap by developing learning from demonstration algorithms and interaction paradigms that allow non-expert people to teach robots new skills. The first step of the thesis was to evaluate how non-expert teachers provide demonstrations to robots. Keyframe demonstrations are introduced to the field of LfD to help people teach skills to robots and are compared with traditional trajectory demonstrations. The utility of keyframes is validated by a series of experiments with more than 80 participants. Based on the experiments, a hybrid of trajectory and keyframe demonstrations is proposed to take advantage of both, and a method was developed to learn from trajectory, keyframe and hybrid demonstrations in a unified way. A key insight from these user experiments was that teachers are goal oriented: they concentrated on achieving the goal of the demonstrated skills rather than on providing good-quality demonstrations. Based on this observation, this thesis introduces a method that can learn actions and goals from the same set of demonstrations. The action models are used to execute the skill and the goal models to monitor this execution. A user study with eight participants and two skills showed that successful goal models can be learned from non-expert teacher data even if the resulting action models are not as successful. Following these results, this thesis further develops a self-improvement algorithm that uses the goal monitoring output to improve the action models without further user input. This approach is validated with an expert user and two skills. Finally, this thesis builds an interactive LfD system that incorporates both goal learning and self-improvement and evaluates it with 12 naive users and three skills. The results suggest that teacher feedback during experiments increases skill execution and monitoring success. Moreover, non-expert data can be used as a seed for self-improvement to fix unsuccessful action models.
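The keyframe idea above can be caricatured in a few lines: a keyframe demonstration stores only sparse poses, and a dense trajectory for execution is recovered by interpolating between them. This is a hypothetical sketch (linear joint-space interpolation, made-up data), not the thesis's learning method:

```python
def interpolate_keyframes(keyframes, steps_between=4):
    """Linearly interpolate a dense joint-space trajectory from sparse keyframes.

    keyframes: list of (joint1, joint2, ...) tuples recorded by the teacher.
    Returns a dense list of poses, ending exactly on the final keyframe.
    """
    traj = []
    for a, b in zip(keyframes, keyframes[1:]):
        for s in range(steps_between):
            t = s / steps_between
            traj.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    traj.append(keyframes[-1])
    return traj

# Three sparse keyframes for a hypothetical 2-joint arm.
kfs = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)]
dense = interpolate_keyframes(kfs, steps_between=4)   # 9 poses
```

A trajectory demonstration, by contrast, records every pose directly; the hybrid representation described above mixes sampled trajectory segments with teacher-selected keyframes.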
  • Item
    TAR: Trajectory adaptation for recognition of robot tasks to improve teamwork
    (Georgia Institute of Technology, 2015-11-10) Novitzky, Michael
    One key to more effective cooperative interaction in a multi-robot team is the ability to understand the behavior and intent of other robots. Models of observed teammate action sequences can be learned and used for trajectory recognition, which in turn determines a teammate's current task. Previously, we have applied behavior histograms, hidden Markov models (HMMs), and conditional random fields (CRFs) to perform trajectory recognition as an approach to task monitoring in the absence of communication. To demonstrate trajectory recognition of various autonomous vehicles, we used trajectory-based techniques for model generation and trajectory discrimination in experiments using actual data. In addition to recognition of trajectories, we introduced strategies, based on the honeybee's waggle dance, in which cooperating autonomous teammates could leverage recognition during periods of communication loss. While the recognition methods were able to discriminate between the standard trajectories performed in a typical survey mission, there were inaccuracies and delays in identifying new trajectories after a transition had occurred. Inaccuracies in recognition lead to inefficiencies as cooperating teammates act on incorrect data. We then introduce the Trajectory Adaptation for Recognition (TAR) framework, which seeks to directly address difficulties in recognizing the trajectories of autonomous vehicles by modifying the trajectories they follow. Optimization techniques are used to modify the trajectories to increase the accuracy of recognition while also improving task objectives and maintaining vehicle dynamics. Experiments demonstrate that using trajectories optimized in this manner leads to improved recognition accuracy.
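HMM-based trajectory recognition of the kind described reduces to scoring an observed symbol sequence under each candidate task model and picking the likeliest. A toy forward-algorithm sketch with hand-set parameters (hypothetical two-state models, not the paper's learned ones):

```python
import math

def forward_loglike(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return math.log(sum(alpha))

# Two hypothetical trajectory models over observation symbols {0, 1}:
# "lawnmower" mostly emits symbol 0, "loiter" mostly emits symbol 1.
start = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
lawnmower_emit = [[0.9, 0.1], [0.8, 0.2]]
loiter_emit = [[0.2, 0.8], [0.1, 0.9]]

obs = [0, 0, 1, 0, 0]   # mostly symbol 0
scores = {
    "lawnmower": forward_loglike(obs, start, trans, lawnmower_emit),
    "loiter": forward_loglike(obs, start, trans, loiter_emit),
}
best = max(scores, key=scores.get)   # "lawnmower"
```

The recognition delays noted above arise because, after a task transition, several new observations must accumulate before the correct model's likelihood overtakes the old one's.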
  • Item
    Probabilistic Verification of Multi-robot Missions in Uncertain Environments
    (Georgia Institute of Technology, 2015-11) Lyons, Damian M. ; Arkin, Ronald C. ; Jiang, Shu ; Harrington, Dagan ; Tang, Feng ; Tang, Peng
    The effective use of autonomous robot teams in highly-critical missions depends on being able to establish performance guarantees. However, establishing a guarantee for the behavior of an autonomous robot operating in an uncertain environment with obstacles is a challenging problem. This paper addresses the challenges involved in building a software tool for verifying the behavior of a multi-robot waypoint mission that includes uncertain environment geometry as well as uncertainty in robot motion. One contribution of this paper is an approach to the problem of a priori specification of uncertain environments for robot program verification. A second contribution is a novel method to extend the Bayesian Network formulation to reason about random variables with different subpopulations, introduced to address the challenge of representing the effects of multiple sensory histories when verifying a robot mission. The third contribution is experimental validation showing the effectiveness of this approach on a two-robot, bounding overwatch mission.
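As a toy stand-in for the verification idea (a Monte Carlo estimate with made-up noise parameters, not the authors' Bayesian-network machinery), the probability that a waypoint-following robot with noisy motion reaches its goal within tolerance can be estimated by sampling:

```python
import random

def mission_success_prob(waypoints, step=0.5, noise=0.05, tol=0.3,
                         trials=2000, seed=42):
    """Estimate P(robot ends within tol of the final waypoint) under motion noise."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        x, y = 0.0, 0.0
        for wx, wy in waypoints:
            for _ in range(200):                     # bounded steps per waypoint
                dx, dy = wx - x, wy - y
                dist = (dx * dx + dy * dy) ** 0.5
                if dist < tol:
                    break
                s = min(step, dist)
                x += s * dx / dist + rng.gauss(0, noise)   # noisy actuation
                y += s * dy / dist + rng.gauss(0, noise)
        gx, gy = waypoints[-1]
        if ((x - gx) ** 2 + (y - gy) ** 2) ** 0.5 < tol:
            successes += 1
    return successes / trials

p = mission_success_prob([(3.0, 0.0), (3.0, 3.0)])
```

A verification tool of the kind the paper describes computes such probabilities analytically, before deployment, rather than by simulation.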
  • Item
    Mixed-Initiative Human-Robot Interaction: Definition, Taxonomy, and Survey
    (Georgia Institute of Technology, 2015-10) Jiang, Shu ; Arkin, Ronald C.
    The objectives of this article are: 1) to present a taxonomy for mixed-initiative human-robot interaction and 2) to survey its state of practice through the examination of past research along each taxonomical dimension. The paper starts with definitions of mixed-initiative interaction (MII) from the perspective of human-computer interaction (HCI) to introduce the basic concepts of MII. We then synthesize these definitions into the robotic context for mixed-initiative human-robot teams. A taxonomy for mixed-initiative human-robot interaction is then presented. The goal of the taxonomy is to inform the design of mixed-initiative human-robot systems by identifying key elements of these systems. The state of practice of mixed-initiative human-robot interaction is then surveyed and examined along each taxonomical dimension.
  • Item
    Towards a Robot Computational Model to Preserve Dignity in Stigmatizing Patient-Caregiver Relationships
    (Georgia Institute of Technology, 2015-10) Pettinati, Michael J. ; Arkin, Ronald C.
    Parkinson’s disease (PD) patients with an expressive mask are particularly vulnerable to stigmatization during interactions with their caregivers due to their inability to express affect through nonverbal channels. Our approach to uphold PD patient dignity is through the use of an ethical robot that mediates patient shame when it recognizes norm violations in the patient-caregiver interaction. This paper presents the basis for a computational model tasked with computing patient shame and the empathetic response of a caregiver during “empathetic opportunities” in their interaction. A PD patient is liable to suffer indignity when there is a substantial difference between his experienced shame and the empathy shown by the caregiver. When this difference falls outside of acceptable set bounds (norms), the robotic agent will act using subtle, nonverbal cues to guide the relationship back within these bounds, preserving patient dignity.
  • Item
    The Benefits of Robot Deception in Search and Rescue: Computational Approach for Deceptive Action Selection via Case-Based Reasoning
    (Georgia Institute of Technology, 2015-10) Shim, Jaeeun ; Arkin, Ronald C.
    As the use of autonomous rescue robots in search and rescue (SAR) increases, so does the chance of interaction between rescue robots and human victims. More specifically, when autonomous rescue robots are deployed in SAR, it is important for robots to handle human victims' emotions sensitively. Deception can potentially be used effectively by robots, as it is by human rescuers, to manage human victims' fear and shock. In this paper, we introduce robotic deception in SAR contexts and present a novel computational approach for an autonomous rescue robot's deceptive action selection mechanism.
  • Item
    Primate-inspired Autonomous Navigation Using Mental Rotation and Advice-Giving
    (Georgia Institute of Technology, 2015-09) Velayudhan, Lakshmi ; Arkin, Ronald C.
    The cognitive process that enables many primate species to efficiently traverse their environment has been the subject of numerous studies. Mental rotation is hypothesized to be one such process. The evolutionary causes for the dominance in primates of mental rotation over its counterpart, rotational invariance, are still not conclusively understood. Advice-giving offers a possible explanation for this dominance in more evolved primate species such as humans. This project aims at exploring the relationship between advice-giving and mental rotation by designing a system that combines the two processes in order to achieve successful navigation to a goal location. Two approaches to visual advice-giving were explored, namely segment-based and object-based advice-giving. The results obtained upon execution of the navigation algorithm on a Pioneer 2-DX robotic platform offer evidence of a linkage between advice-giving and mental rotation. Overall navigational accuracies of 90.9% and 71.43% were obtained for the segment-based and object-based methods, respectively. These results also indicate how the two processes can function together to accomplish a navigational task in the absence of any external aid, as is the case with primates.
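In its simplest computational reading, mental rotation means rotating a stored view until it best matches the current one. A toy grid-matching sketch (hypothetical occupancy maps, 90-degree rotations only, not the project's vision pipeline):

```python
def rotate90(grid):
    """Rotate a square occupancy grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def best_rotation(memory, observed):
    """Return the clockwise rotation (degrees) of `memory` best matching `observed`."""
    best_deg, best_score = 0, -1
    view = memory
    for deg in (0, 90, 180, 270):
        score = sum(a == b for mrow, orow in zip(view, observed)
                    for a, b in zip(mrow, orow))
        if score > best_score:
            best_deg, best_score = deg, score
        view = rotate90(view)
    return best_deg

memory = [[1, 1, 0],
          [0, 1, 0],
          [0, 1, 0]]                        # an "L" shape as remembered
observed = rotate90(rotate90(memory))       # the same scene seen upside-down
deg = best_rotation(memory, observed)       # 180
```

A rotation-invariant representation would instead match the two views without ever searching over orientations, which is the counterpart process contrasted above.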
  • Item
    Physics-based reinforcement learning for autonomous manipulation
    (Georgia Institute of Technology, 2015-08-21) Scholz, Jonathan
    With recent research advances, the dream of bringing domestic robots into our everyday lives has become more plausible than ever. Domestic robotics has grown dramatically in the past decade, with applications ranging from house cleaning to food service to health care. To date, the majority of the planning and control machinery for these systems are carefully designed by human engineers. A large portion of this effort goes into selecting the appropriate models and control techniques for each application, and these skills take years to master. Relieving the burden on human experts is therefore a central challenge for bringing robot technology to the masses. This work addresses this challenge by introducing a physics engine as a model space for an autonomous robot, and defining procedures for enabling robots to decide when and how to learn these models. We also present an appropriate space of motor controllers for these models, and introduce ways to intelligently select when to use each controller based on the estimated model parameters. We integrate these components into a framework called Physics-Based Reinforcement Learning, which features a stochastic physics engine as the core model structure. Together these methods enable a robot to adapt to unfamiliar environments without human intervention. The central focus of this thesis is on fast online model learning for objects with under-specified dynamics. We develop our approach across a diverse range of domestic tasks, starting with a simple table-top manipulation task, followed by a mobile manipulation task involving a single utility cart, and finally an open-ended navigation task with multiple obstacles impeding robot progress. We also present simulation results illustrating the efficiency of our method compared to existing approaches in the learning literature.