Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)


Publication Search Results

Now showing 1 - 10 of 319
  • Item
    Formal Performance Guarantees for Behavior-Based Localization Missions
    (Georgia Institute of Technology, 2016-11) Lyons, Damian M. ; Arkin, Ronald C.
    Localization and mapping algorithms can allow a robot to navigate well in an unknown environment. However, whether such algorithms enhance any specific robot mission is currently a matter for empirical validation. In this paper we apply our MissionLab/VIPARS mission design and verification approach to an autonomous robot mission that uses probabilistic localization software. Two approaches to modeling probabilistic localization for verification are presented: a high-level approach, and a sample-based approach which allows run-time code to be embedded in verification. Verification and experimental validation results are presented for two waypoint missions using each method, demonstrating the accuracy of verification, and both are compared with verification of an odometry-only mission, to show the mission-specific benefit of localization.
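The sample-based verification idea in the abstract above can be illustrated with a plain Monte Carlo sketch: sample localization error at each waypoint and estimate the probability that the mission's accuracy requirement holds. This is a hypothetical simplification, not the MissionLab/VIPARS implementation; the noise levels, tolerance, and waypoint mission are illustrative.

```python
import random

def estimate_mission_success(waypoints, pos_noise_std, tolerance,
                             n_samples=10000, seed=0):
    """Estimate the probability that the robot arrives within `tolerance`
    of every waypoint, under Gaussian localization error. A toy stand-in
    for model-based mission verification, not VIPARS itself."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_samples):
        ok = True
        for wx, wy in waypoints:
            # sampled arrival position under localization noise
            ax = wx + rng.gauss(0, pos_noise_std)
            ay = wy + rng.gauss(0, pos_noise_std)
            if ((ax - wx) ** 2 + (ay - wy) ** 2) ** 0.5 > tolerance:
                ok = False
                break
        successes += ok
    return successes / n_samples

# Tighter localization noise (localization on) vs. looser (odometry only)
# should yield a higher estimated mission success probability.
p_loc = estimate_mission_success([(0, 0), (5, 0), (5, 5)],
                                 pos_noise_std=0.1, tolerance=0.5)
p_odom = estimate_mission_success([(0, 0), (5, 0), (5, 5)],
                                  pos_noise_std=0.4, tolerance=0.5)
```

Comparing the two estimates mirrors the paper's comparison of a localization-equipped mission against an odometry-only one.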
  • Item
    Grasp selection strategies for robot manipulation using a superquadric-based object representation
    (Georgia Institute of Technology, 2016-07-29) Huaman, Ana Consuelo
    This thesis presents work on the implementation of a robotic system targeted to perform a set of basic manipulation tasks instructed by a human user. The core motivation behind the development of this system was enabling our robot to achieve these tasks reliably, in a time-efficient manner and under mildly realistic constraints. Robot manipulation as a field has grown exponentially in recent years, presenting us with a vast array of robots exhibiting skills as sophisticated as preparing dinner, making an espresso or operating a drill. These complex tasks are in general achieved by using equally complex frameworks that assume extensive pre-existing knowledge, such as perfect environment knowledge, sizable amounts of training data or availability of crowdsourcing resources. In this work we postulate that elementary tasks, such as pick-up, pick-and-place and pouring, can be realized with online algorithms and a limited knowledge of the objects to be manipulated. The presented work shows a fully implemented pipeline where each module is designed to meet the core requirements specified above. We present a number of experiments involving a database of 10 household objects used in 3 selected elementary manipulation tasks. Our contributions are distributed across the modules of our pipeline: (1) We demonstrate that superquadrics are useful primitive shapes suitable to represent on-the-fly a considerable number of convex household objects; their parametric nature (3 axis and 2 shape parameters) is shown to be helpful for representing simple semantic labels for objects (e.g., for a pouring task) useful for grasp and motion planning. (2) We introduce a hand-and-arm metric that considers both grasp robustness and arm end-comfort to select grasps for simple pick-up tasks.
We show with real and simulation results that considering both hand and arm aspects of the manipulation task helps to select grasps that are easier to execute in real environments without sacrificing grasp stability in the process. (3) We present grasp selection and planning strategies that exploit task constraints to select the most appropriate grasp to carry out a manipulation task in an online and efficient manner (in terms of planning and execution time).
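The 3-axis, 2-shape-parameter representation mentioned in the abstract is the standard superquadric inside-outside function, which can be sketched directly (a textbook formulation, not the thesis's exact code):

```python
def superquadric_f(x, y, z, a1, a2, a3, e1, e2):
    """Inside-outside function of a superquadric with axis lengths
    a1, a2, a3 and shape exponents e1, e2. Returns a value < 1 for
    points inside the surface, == 1 on it, and > 1 outside."""
    return (
        (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
        + abs(z / a3) ** (2.0 / e1)
    )

# The unit sphere is the special case a1 = a2 = a3 = 1, e1 = e2 = 1:
# the point (1, 0, 0) lies exactly on its surface.
on_surface = superquadric_f(1.0, 0.0, 0.0, 1, 1, 1, 1, 1)
```

Varying e1 and e2 morphs the same five-parameter model between box-like, cylindrical, and ellipsoidal shapes, which is what makes it a compact on-the-fly representation for convex household objects.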
  • Item
    Planning in constraint space for multi-body manipulation tasks
    (Georgia Institute of Technology, 2016-04-05) Erdogan, Can
    Robots are inherently limited by physical constraints on their link lengths, motor torques, battery power and structural rigidity. To thrive in circumstances that push these limits, such as in search and rescue scenarios, intelligent agents can use the available objects in their environment as tools. Reasoning about arbitrary objects and how they can be placed together to create useful structures such as ramps, bridges or simple machines is critical to push beyond one's physical limitations. Unfortunately, the solution space is combinatorial in the number of available objects and the configuration space of the chosen objects and the robot that uses the structure is high dimensional. To address these challenges, we propose using constraint satisfaction as a means to test the feasibility of candidate structures and adopt search algorithms in the classical planning literature to find sufficient designs. The key idea is that the interactions between the components of a structure can be encoded as equality and inequality constraints on the configuration spaces of the respective objects. Furthermore, constraints that are induced by a broadly defined action, such as placing an object on another, can be grouped together using logical representations such as Planning Domain Definition Language (PDDL). Then, a classical planning search algorithm can reason about which set of constraints to impose on the available objects, iteratively creating a structure that satisfies the task goals and the robot constraints. To demonstrate the effectiveness of this framework, we present both simulation and real robot results with static structures such as ramps, bridges and stairs, and quasi-static structures such as lever-fulcrum simple machines.
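The idea of encoding an action's interactions as equality and inequality constraints on object configurations can be sketched for a single "place a on b" action (an illustrative toy, not the thesis's PDDL encoding):

```python
def on_top_constraints(a, b, eps=1e-6):
    """Constraints induced by the action 'place box a on box b'.
    Boxes are dicts with center coordinates (x, y, z) and half-extents
    (hx, hy, hz). The placement is feasible when a's bottom face touches
    b's top face (an equality constraint) and a's footprint stays within
    b's (inequality constraints)."""
    touching = abs((a["z"] - a["hz"]) - (b["z"] + b["hz"])) < eps  # equality
    inside_x = abs(a["x"] - b["x"]) + a["hx"] <= b["hx"] + eps     # inequality
    inside_y = abs(a["y"] - b["y"]) + a["hy"] <= b["hy"] + eps
    return touching and inside_x and inside_y

# A small block resting on a larger base satisfies all three constraints.
base = {"x": 0.0, "y": 0.0, "z": 0.5, "hx": 0.5, "hy": 0.5, "hz": 0.5}
block = {"x": 0.1, "y": 0.0, "z": 1.25, "hx": 0.25, "hy": 0.25, "hz": 0.25}
```

A classical planner can then search over which such constraint sets to impose, checking feasibility of the combined system at each step.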
  • Item
    Navigation behavior design and representations for a people aware mobile robot system
    (Georgia Institute of Technology, 2016-01-15) Cosgun, Akansel
    There are millions of robots in operation around the world today, and almost all of them operate on factory floors in isolation from people. However, it is now becoming clear that robots can provide much more value assisting people in daily tasks in human environments. Perhaps the most fundamental capability for a mobile robot is navigating from one location to another. Advances in mapping and motion planning research in the past decades have made indoor navigation a commodity for mobile robots. Yet, questions remain on how the robots should move around humans. This thesis advocates the use of semantic maps and spatial rules of engagement to enable non-expert users to effortlessly interact with and control a mobile robot. A core concept explored in this thesis is the Tour Scenario, where the task is to familiarize a mobile robot to a new environment after it is first shipped and unpacked in a home or office setting. During the tour, the robot follows the user and creates a semantic representation of the environment. The user labels objects, landmarks and locations by performing pointing gestures and using the robot's user interface. The spatial semantic information is meaningful to humans, as it allows providing commands to the robot such as "bring me a cup from the kitchen table". While the robot is navigating towards the goal, it should not treat nearby humans as obstacles and should move in a socially acceptable manner. Three main navigation behaviors are studied in this work. The first behavior is point-to-point navigation. The navigation planner presented in this thesis borrows ideas from human-human spatial interactions, and takes into account personal spaces as well as reactions of people who are in close proximity to the trajectory of the robot. The second navigation behavior is person following. After the description of a basic following behavior, a user study on person following for telepresence robots is presented.
Additionally, situation awareness for person following is demonstrated, where the robot facilitates tasks by predicting the intent of the user and utilizing the semantic map. The third behavior is person guidance. A tour-guide robot is presented with a particular application for visually impaired users.
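A planner that respects personal space typically adds a people-centered penalty on top of ordinary obstacle costs. The following sketch is one common way to do that with Gaussian costs; the sigma and weight values are illustrative assumptions, not the parameters used in the thesis.

```python
import math

def social_cost(x, y, people, sigma=0.45, weight=10.0):
    """Navigation cost at (x, y): a Gaussian 'personal space' penalty
    centered on each nearby person. A path planner would add this to
    its usual obstacle and path-length costs so trajectories bend
    around people rather than grazing them."""
    cost = 0.0
    for px, py in people:
        d2 = (x - px) ** 2 + (y - py) ** 2
        cost += weight * math.exp(-d2 / (2.0 * sigma ** 2))
    return cost

# Cost is high right next to a person and negligible a couple of meters away.
near = social_cost(0.0, 0.0, [(0.0, 0.0)])
far = social_cost(2.0, 0.0, [(0.0, 0.0)])
```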
  • Item
    An Analysis of Displays for Probabilistic Robotic Mission Verification Results
    (Georgia Institute of Technology, 2016) O'Brien, Matthew ; Arkin, Ronald C.
    An approach for the verification of autonomous behavior-based robotic missions has been developed in a collaborative effort between Fordham University and Georgia Tech. This paper addresses the step after verification, how to present this information to users. The verification of robotic missions is inherently probabilistic, opening the possibility of misinterpretation by operators. A human study was performed to test three different displays (numeric, graphic, and symbolic) for summarizing the verification results. The displays varied by format and specificity. Participants made decisions about high-risk robotic missions using a prototype interface. Consistent with previous work, the type of display had no effect. The displays did not reduce the time participants took compared to a control group with no summary, but did improve the accuracy of their decisions. Participants showed a strong preference for more specific data, heavily using the full verification results. Based on these results, a different display paradigm is suggested.
  • Item
    The Middle Child Problem: Revisiting Parametric Min-cut and Seeds for Object Proposals
    (Georgia Institute of Technology, 2015-12) Humayun, Ahmad ; Li, Fuxin ; Rehg, James M.
    Object proposals have recently fueled the progress in detection performance. These proposals aim to provide category-agnostic localizations for all objects in an image. One way to generate proposals is to perform parametric min-cuts over seed locations. This paper demonstrates that standard parametric-cut models are ineffective in obtaining medium-sized objects, which we refer to as the middle child problem. We propose a new energy minimization framework incorporating geodesic distances between segments which solves this problem. In addition, we introduce a new superpixel merging algorithm which can generate a small set of seeds that reliably cover a large number of objects of all sizes. We call our method POISE - "Proposals for Objects from Improved Seeds and Energies." POISE enables parametric min-cuts to reach their full potential. On PASCAL VOC it generates ~2,640 segments with an average overlap of 0.81, whereas the closest competing methods require more than 4,200 proposals to reach the same accuracy. We show detailed quantitative comparisons against 5 state-of-the-art methods on PASCAL VOC and Microsoft COCO segmentation challenges.
  • Item
    Temporal Heterogeneity and the Value of Slowness in Robotic Systems
    (Georgia Institute of Technology, 2015-12) Arkin, Ronald C. ; Egerstedt, Magnus B.
    Robot teaming is a well-studied area, but little research to date has been conducted on the fundamental benefits of heterogeneous teams and virtually none on temporal heterogeneity, where timescales of the various platforms are radically different. This paper explores this aspect of robot ecosystems consisting of fast and slow robots (SlowBots) working together, including the bio-inspiration for such systems.
  • Item
    Robots learning actions and goals from everyday people
    (Georgia Institute of Technology, 2015-11-16) Akgun, Baris
    Robots are destined to move beyond the caged factory floors towards domains where they will be interacting closely with humans. They will encounter highly varied environments, scenarios and user demands. As a result, programming robots after deployment will be an important requirement. To address this challenge, the field of Learning from Demonstration (LfD) emerged with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. The field of LfD within robotics has been around for more than 30 years and is still an actively researched field. However, very little research has been done on the implications of having a non-robotics expert as a teacher. This thesis aims to bridge this gap by developing learning from demonstration algorithms and interaction paradigms that allow non-expert people to teach robots new skills. The first step of the thesis was to evaluate how non-expert teachers provide demonstrations to robots. Keyframe demonstrations are introduced to the field of LfD to help people teach skills to robots and compared with the traditional trajectory demonstrations. The utility of keyframes is validated by a series of experiments with more than 80 participants. Based on the experiments, a hybrid of trajectory and keyframe demonstrations is proposed to take advantage of both, and a method is developed to learn from trajectories, keyframes and hybrid demonstrations in a unified way. A key insight from these user experiments was that teachers are goal oriented. They concentrated on achieving the goal of the demonstrated skills rather than providing good quality demonstrations. Based on this observation, this thesis introduces a method that can learn actions and goals from the same set of demonstrations. The action models are used to execute the skill and the goal models to monitor this execution.
A user study with eight participants and two skills showed that successful goal models can be learned from non-expert teacher data even if the resulting action models are not as successful. Following these results, this thesis further develops a self-improvement algorithm that uses the goal monitoring output to improve the action models, without further user input. This approach is validated with an expert user and two skills. Finally, this thesis builds an interactive LfD system that incorporates both goal learning and self-improvement and evaluates it with 12 naive users and three skills. The results suggest that teacher feedback during experiments increases skill execution and monitoring success. Moreover, non-expert data can be used as a seed for self-improvement to fix unsuccessful action models.
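The split between an action model (used to execute) and a goal model (used to monitor) can be sketched in a heavily simplified 1-D form: fit per-keyframe statistics from a few demonstrations, then check whether an execution ended near the final keyframe. This is a toy illustration of the idea, not the thesis's actual models.

```python
def learn_keyframes(demos):
    """Learn a keyframe model from demonstrations. Each demo is a list
    of aligned 1-D keyframe positions; the model is the per-keyframe
    mean and standard deviation."""
    n_kf = len(demos[0])
    model = []
    for k in range(n_kf):
        pts = [d[k] for d in demos]
        mean = sum(pts) / len(pts)
        var = sum((p - mean) ** 2 for p in pts) / len(pts)
        model.append((mean, var ** 0.5))
    return model

def goal_reached(model, observed_end, n_std=3.0, floor=1e-3):
    """Goal monitoring: did the execution end within n_std standard
    deviations of the final keyframe? `floor` guards against
    zero-variance keyframes from near-identical demos."""
    mean, std = model[-1]
    return abs(observed_end - mean) <= n_std * max(std, floor)

# Two slightly different demonstrations of a three-keyframe skill.
model = learn_keyframes([[0.0, 1.0, 2.0], [0.1, 1.1, 2.1]])
```

In the thesis's setting the monitor's verdict is exactly what the later self-improvement loop consumes to refine the action model without further user input.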
  • Item
    TAR: Trajectory adaptation for recognition of robot tasks to improve teamwork
    (Georgia Institute of Technology, 2015-11-10) Novitzky, Michael
    One key to more effective cooperative interaction in a multi-robot team is the ability to understand the behavior and intent of other robots. Observed teammate action sequences can be learned to perform trajectory recognition, which can be used to determine their current task. Previously, we applied behavior histograms, hidden Markov models (HMMs), and conditional random fields (CRFs) to perform trajectory recognition as an approach to task monitoring in the absence of communication. To demonstrate trajectory recognition of various autonomous vehicles, we used trajectory-based techniques for model generation and trajectory discrimination in experiments using actual data. In addition to recognition of trajectories, we introduced strategies, based on the honeybee's waggle dance, in which cooperating autonomous teammates could leverage recognition during periods of communication loss. While the recognition methods were able to discriminate between the standard trajectories performed in a typical survey mission, there were inaccuracies and delays in identifying new trajectories after a transition had occurred. Inaccuracies in recognition led to inefficiencies as cooperating teammates acted on incorrect data. We then introduce the Trajectory Adaptation for Recognition (TAR) framework, which directly addresses difficulties in recognizing the trajectories of autonomous vehicles by modifying the trajectories the vehicles follow. Optimization techniques are used to modify the trajectories to increase the accuracy of recognition while also improving task objectives and maintaining vehicle dynamics. Experiments are performed which demonstrate that using trajectories optimized in this manner leads to improved recognition accuracy.
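HMM-based trajectory recognition, one of the methods named above, amounts to scoring an observation sequence under each candidate model with the forward algorithm and picking the model with the higher likelihood. The sketch below uses tiny made-up two-state models (not the models learned in the thesis):

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    with initial distribution pi, transition matrix A, and emission
    matrix B, computed with the scaled forward algorithm."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for t in range(1, len(obs)):
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]  # rescale to avoid underflow
        alpha = [
            sum(alpha[s] * A[s][j] for s in range(n)) * B[j][obs[t]]
            for j in range(n)
        ]
    return loglik + math.log(sum(alpha))

# Hypothetical models: a 'leg-following' trajectory with sticky states
# and distinct emissions, vs. an uninformative 'loiter' model.
leg_model = ([0.9, 0.1], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.2, 0.8]])
loiter    = ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]])
obs = [0, 0, 0, 1, 1, 1]  # recognition = argmax over model likelihoods
```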
  • Item
    Probabilistic Verification of Multi-robot Missions in Uncertain Environments
    (Georgia Institute of Technology, 2015-11) Lyons, Damian M. ; Arkin, Ronald C. ; Jiang, Shu ; Harrington, Dagan ; Tang, Feng ; Tang, Peng
    The effective use of autonomous robot teams in highly-critical missions depends on being able to establish performance guarantees. However, establishing a guarantee for the behavior of an autonomous robot operating in an uncertain environment with obstacles is a challenging problem. This paper addresses the challenges involved in building a software tool for verifying the behavior of a multi-robot waypoint mission that includes uncertain environment geometry as well as uncertainty in robot motion. One contribution of this paper is an approach to the problem of a priori specification of uncertain environments for robot program verification. A second contribution is a novel method to extend the Bayesian Network formulation to reason about random variables with different subpopulations, introduced to address the challenge of representing the effects of multiple sensory histories when verifying a robot mission. The third contribution is experimental validation results presented to show the effectiveness of this approach on a two-robot, bounding overwatch mission.
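The notion of a random variable with distinct subpopulations can be illustrated with a simple weighted mixture: each sensory history induces its own distribution over robot states, and overall queries average across them. This is a sketch of the general idea only, not the paper's Bayesian Network extension.

```python
def mixture_query(subpops):
    """Build a query function for a random variable with distinct
    subpopulations. `subpops` is a list of (weight, distribution)
    pairs, where each distribution maps outcomes to probabilities,
    e.g. robot positions conditioned on one sensory history."""
    def p(x):
        return sum(w * dist.get(x, 0.0) for w, dist in subpops)
    return p

# Hypothetical example: two sensory histories (weights 0.7 and 0.3)
# lead to different beliefs about which region the robot occupies.
p = mixture_query([(0.7, {"A": 0.9, "B": 0.1}),
                   (0.3, {"A": 0.2, "B": 0.8})])
```

Keeping the subpopulations separate, rather than collapsing them into one distribution up front, is what lets a verifier track how different histories contribute to mission outcomes.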