Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)

Publication Search Results

Now showing 1 - 10 of 496
  • Item
    Pulling Open Novel Doors and Drawers with Equilibrium Point Control
    (Georgia Institute of Technology, 2009-12) Jain, Advait ; Kemp, Charles C.
    A large variety of doors and drawers can be found within human environments. Humans regularly operate these mechanisms without difficulty, even if they have not previously interacted with a particular door or drawer. In this paper, we empirically demonstrate that equilibrium point control can enable a humanoid robot to pull open a variety of doors and drawers without detailed prior models, and infer their kinematics in the process. Our implementation uses a 7 DoF anthropomorphic arm with series elastic actuators (SEAs) at each joint, a hook as an end effector, and low mechanical impedance. For our control scheme, each SEA applies a gravity compensating torque plus a torque from a simulated, torsional, viscoelastic spring. Each virtual spring has constant stiffness and damping, and a variable equilibrium angle. These equilibrium angles form a joint space equilibrium point (JEP), which has a corresponding Cartesian space equilibrium point (CEP) for the arm's end effector. We present two controllers that generate a CEP at each time step (ca. 100 ms) and use inverse kinematics to command the arm with the corresponding JEP. One controller produces a linear CEP trajectory and the other alters its CEP trajectory based on real-time estimates of the mechanism's kinematics. We also present results from empirical evaluations of their performance (108 trials). In these trials, both controllers were robust with respect to variations in the mechanism, the pose of the base, the stiffness of the arm, and the way the handle was hooked. We also tested the more successful controller with 12 distinct mechanisms. In these tests, it was able to open 11 of the 12 mechanisms in a single trial, and successfully categorized the 11 mechanisms as having a rotary or prismatic joint, and opening to the right or left. Additionally, in the 7 out of 8 trials with rotary joints, the robot accurately estimated the location of the axis of rotation.
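
As a rough illustration of the control scheme described in the abstract (not the authors' code), the sketch below combines gravity compensation with a simulated torsional viscoelastic spring at each joint and advances a linear CEP trajectory that is mapped to JEPs through a placeholder `inverse_kinematics` routine; the stiffness, damping, and step values are illustrative.

```python
import numpy as np

def joint_torques(q, q_dot, q_eq, gravity_torque, k=1.0, d=0.1):
    """Torque for each SEA: gravity compensation plus a simulated torsional
    viscoelastic spring with constant stiffness k, constant damping d, and a
    variable equilibrium angle q_eq (the joint-space equilibrium point)."""
    return gravity_torque + k * (q_eq - q) - d * q_dot

def linear_cep_trajectory(cep_start, pull_direction, step=0.005):
    """Yield a new Cartesian-space equilibrium point (CEP) each control cycle
    (roughly every 100 ms in the paper), moving along a fixed pull direction."""
    cep = np.asarray(cep_start, dtype=float)
    direction = np.asarray(pull_direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    while True:
        cep = cep + step * direction
        yield cep

def inverse_kinematics(cep):
    """Hypothetical placeholder for a robot-specific IK solver that maps a
    CEP for the end effector to a joint-space equilibrium point (JEP)."""
    raise NotImplementedError("substitute the arm's IK solver here")

def control_step(cep_gen, q, q_dot, gravity_torque):
    """One control cycle: new CEP -> JEP via IK -> SEA torques."""
    q_eq = inverse_kinematics(next(cep_gen))
    return joint_torques(q, q_dot, q_eq, gravity_torque)
```
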
  • Item
    Constructing a high-performance robot from commercially available parts
    (Georgia Institute of Technology, 2009-12) Smith, Christian ; Christensen, Henrik I.
    Robot manipulators are the topic of this article. A large number of robot manipulators have been designed over the last half century, and several of these have become standard platforms for R&D efforts. The most widely used is the Unimate PUMA 560 series. Recently, there have been attempts to utilize standard platforms, as exemplified by the Learning Applied to Ground Robots (LAGR) program organized by the Defense Advanced Research Projects Agency (DARPA). The RobotCub project has also made a few robots available to the research community. As actuation systems have become more powerful and miniaturized, it has become possible to build dynamical robot systems that perform dynamic tasks. However, for research work it is often a challenge to get access to a high-performance robot that is also available to other researchers. In many respects, robotics has lacked standard systems on which comparative research could be performed, and too much research is performed on a basis that cannot be replicated, reproduced, or reused. For basic manipulation, there has until recently been limited access to lightweight manipulators with good dynamics. In this article, we describe the design of a high-performance robot manipulator built from off-the-shelf components to allow easy replication. In addition, it was designed to have enough dynamic performance to allow ball catching, which in practice implies that the system has adequate dynamics for most tasks.
  • Item
    The role of trust and relationships in human-robot social interaction
    (Georgia Institute of Technology, 2009-11-10) Wagner, Alan Richard
    Can a robot understand a human's social behavior? Moreover, how should a robot act in response to a human's behavior? If the goals of artificial intelligence are to understand, imitate, and interact with human-level intelligence, then researchers must also explore the social underpinnings of this intellect. Our endeavor is buttressed by work in biology, neuroscience, social psychology, and sociology. Initially developed by Kelley and Thibaut, social psychology's interdependence theory serves as a conceptual skeleton for the study of social situations, a computational process of social deliberation, and relationships (Kelley & Thibaut, 1978). We extend and expand their original work to explore the challenge of interaction with an embodied, situated robot. This dissertation investigates the use of outcome matrices as a means for computationally representing a robot's interactions. We develop algorithms that allow a robot to create these outcome matrices from perceptual information and then to use them to reason about the characteristics of its interactive partner. This work goes on to introduce algorithms that afford a means for reasoning about a robot's relationships and the trustworthiness of a robot's partners. Overall, this dissertation embodies a general, principled approach to human-robot interaction which results in a novel and scientifically meaningful approach to topics such as trust and relationships.
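
For readers unfamiliar with interdependence theory, an outcome matrix records each agent's outcome for every combination of the two agents' actions. The sketch below shows one such matrix and a naive measure of how much the robot's outcome depends on its partner's choice; the numbers and the measure are invented for illustration and are not the dissertation's algorithm.

```python
import numpy as np

# Illustrative 2x2 outcome matrices in the interdependence-theory style:
# rows index the robot's action, columns the partner's action. Entry [i, j]
# is that agent's outcome when the robot picks i and the partner picks j.
robot_outcomes = np.array([[5.0, -2.0],
                           [8.0,  0.0]])
partner_outcomes = np.array([[5.0,  8.0],
                             [-2.0, 0.0]])

def partner_dependence(robot_outcomes):
    """Rough measure of how exposed the robot is to its partner's choice:
    the average spread of the robot's outcome across the partner's actions.
    A hypothetical ingredient for reasoning about risk, not the thesis's
    actual trust computation."""
    spread = robot_outcomes.max(axis=1) - robot_outcomes.min(axis=1)
    return float(spread.mean())
```
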
  • Item
    Visual Place Categorization: Problem, Dataset, and Algorithm
    (Georgia Institute of Technology, 2009-10) Wu, Jianxin ; Rehg, James M. ; Christensen, Henrik I.
    In this paper we describe the problem of Visual Place Categorization (VPC) for mobile robotics, which involves predicting the semantic category of a place from image measurements acquired from an autonomous platform. For example, a robot in an unfamiliar home environment should be able to recognize the functionality of the rooms it visits, such as kitchen, living room, etc. We describe an approach to VPC based on sequential processing of images acquired with a conventional video camera. We identify two key challenges: dealing with non-characteristic views and integrating restricted-FOV imagery into a holistic prediction. We present a solution to VPC based upon a recently-developed visual feature known as CENTRIST (CENsus TRansform hISTogram). We describe a new dataset for VPC which we have recently collected and are making publicly available. We believe this is the first significant, realistic dataset for the VPC problem. It contains the interiors of six different homes with ground truth labels. We use this dataset to validate our solution approach, achieving promising results.
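
CENTRIST itself is simple enough to sketch: each interior pixel receives an 8-bit census code by comparing it with its eight neighbours, and the descriptor is the 256-bin histogram of those codes. The snippet below is a minimal version under one common sign convention, not the authors' implementation.

```python
import numpy as np

def centrist(gray):
    """CENsus TRansform hISTogram (CENTRIST) of a grayscale image: each
    interior pixel gets an 8-bit census code (one bit per neighbour, set
    when the neighbour is not larger than the centre), and the descriptor
    is the normalized 256-bin histogram of those codes."""
    g = np.asarray(gray, dtype=np.int32)
    center = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neighbour <= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```
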
  • Item
    Effective robot task learning by focusing on task-relevant objects
    (Georgia Institute of Technology, 2009-10) Lee, Kyu Hwa ; Lee, Jinhan ; Thomaz, Andrea L. ; Bobick, Aaron F.
    In a robot learning from demonstration framework involving environments with many objects, one of the key problems is to decide which objects are relevant to a given task. In this paper, we analyze this problem and propose a biologically-inspired computational model that enables the robot to focus on the task-relevant objects. To filter out incompatible task models, we compute a task relevance value (TRV) for each object, which captures a human demonstrator's implicit indication of the object's relevance to the task. By combining an intentional action representation with "motionese", our model exhibits recognition capabilities compatible with the way that humans demonstrate. We evaluate the system on demonstrations from five different human subjects, showing its ability to correctly focus on the appropriate objects in these demonstrations.
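
The abstract leaves the TRV computation at a high level. Purely as a hypothetical illustration of scoring objects by a demonstrator's implicit attention (not the paper's model, which combines an intentional action representation with "motionese" cues), the sketch below weights each object by how closely the demonstrator's hand trajectory approaches it.

```python
import numpy as np

def task_relevance_values(object_positions, hand_trajectory, sigma=0.15):
    """Hypothetical TRV sketch: score each object by the closest approach of
    the demonstrator's hand trajectory, then normalize the scores to sum to
    one so they can be compared across objects."""
    objs = np.asarray(object_positions, dtype=float)   # shape (N, 3)
    traj = np.asarray(hand_trajectory, dtype=float)    # shape (T, 3)
    dists = np.linalg.norm(traj[:, None, :] - objs[None, :, :], axis=2)
    scores = np.exp(-(dists.min(axis=0) ** 2) / (2 * sigma ** 2))
    return scores / scores.sum()
```
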
  • Item
    RF vision: RFID receive signal strength indicator (RSSI) images for sensor fusion and mobile manipulation
    (Georgia Institute of Technology, 2009-10) Deyle, Travis ; Nguyen, Hai ; Reynolds, Matt S. ; Kemp, Charles C.
    In this work we present a set of integrated methods that enable an RFID-enabled mobile manipulator to approach and grasp an object to which a self-adhesive passive (battery-free) UHF RFID tag has been affixed. Our primary contribution is a new mode of perception that produces images of the spatial distribution of received signal strength indication (RSSI) for each of the tagged objects in an environment. The intensity of each pixel in the 'RSSI image' is the measured RF signal strength for a particular tag in the corresponding direction. We construct these RSSI images by panning and tilting an RFID reader antenna while measuring the RSSI value at each bearing. Additionally, we present a framework for estimating a tagged object's 3D location using fused ID-specific features derived from an RSSI image, a camera image, and a laser range finder scan. We evaluate these methods using a robot with actuated, long-range RFID antennas and finger-mounted short-range antennas. The robot first scans its environment to discover which tagged objects are within range, creates a user interface, orients toward the user-selected object using RF signal strength, estimates the 3D location of the object using an RSSI image with sensor fusion, approaches and grasps the object, and uses its finger-mounted antennas to confirm that the desired object has been grasped. In our tests, the sensor fusion system with an RSSI image correctly located the requested object in 17 out of 18 trials (94.4%), an 11.1% improvement over the system's performance when not using an RSSI image. The robot correctly oriented to the requested object in 8 out of 9 trials (88.9%), and in 3 out of 3 trials the entire system successfully grasped the object selected by the user.
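
The RSSI-image construction lends itself to a short sketch: sweep the reader antenna over a grid of pan and tilt angles, record the signal strength for the tag of interest at each bearing, and treat the result as an image. The `read_rssi` driver call below is a hypothetical stand-in for the robot's actuated long-range RFID reader.

```python
import numpy as np

def build_rssi_image(read_rssi, pan_angles, tilt_angles, tag_id):
    """Build an 'RSSI image' for one tagged object: pan and tilt the reader
    antenna and record the received signal strength at each bearing.
    `read_rssi(tag_id, pan, tilt)` is an assumed driver call."""
    image = np.full((len(tilt_angles), len(pan_angles)), np.nan)
    for i, tilt in enumerate(tilt_angles):
        for j, pan in enumerate(pan_angles):
            image[i, j] = read_rssi(tag_id, pan, tilt)
    return image

def strongest_bearing(image, pan_angles, tilt_angles):
    """Bearing of the peak RSSI pixel, e.g. to orient the robot toward the
    selected object before fusing with camera and laser range data."""
    i, j = np.unravel_index(np.nanargmax(image), image.shape)
    return pan_angles[j], tilt_angles[i]
```
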
  • Item
    Assistive Formation Maintenance for Human-Led Multi-Robot Systems
    (Georgia Institute of Technology, 2009-10) Parker, Lonnie T. ; Howard, Ayanna M.
    In ground-based military maneuvers, group formations require flexibility when traversing from one point to the next. For a human-led team of semi-autonomous agents, a certain level of awareness demonstrated by the agents regarding the quality of the formation is preferable. Through the use of a Multi-Robot System (MRS), this work combines leader-follower principles augmented by an assistive formation maintenance (AFM) method to improve formation keeping and demonstrate a formation-in-motion concept. This is achieved using the Robot Mean Task Allocation method (RTMA), a strategy used to allocate formation positions to each unit within a continuously mobile MRS. The end goal is to provide a military application that allows a soldier to efficiently tele-operate a semi-autonomous MRS capable of holding formation amidst a cluttered environment. Baseline simulation is performed in Player/Stage to show the applicability of our developed model and its potential for expansive research.
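
The abstract does not spell out the RTMA computation, so the following is only a generic illustration of the underlying step of allocating formation positions to robots: solve an assignment problem that minimizes total travel distance using the Hungarian method, here via SciPy.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def allocate_formation_slots(robot_positions, slot_positions):
    """Generic slot-allocation sketch (not the paper's RTMA): assign each
    robot to one formation slot so that total travel distance is minimized."""
    robots = np.asarray(robot_positions, dtype=float)   # shape (N, 2)
    slots = np.asarray(slot_positions, dtype=float)     # shape (N, 2)
    cost = np.linalg.norm(robots[:, None, :] - slots[None, :, :], axis=2)
    robot_idx, slot_idx = linear_sum_assignment(cost)
    return dict(zip(robot_idx.tolist(), slot_idx.tolist()))
```
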
  • Item
    Wii-mote robot control using human motion models
    (Georgia Institute of Technology, 2009-10) Smith, Christian ; Christensen, Henrik I.
    As mass-market video game controllers have become more advanced, there has been a recent increase in interest for using these as intuitive and inexpensive control devices. In this paper we examine position control for a robot using a wiimote game controller. We show that human motion models can be used to achieve better precision than traditional tracking approaches, sufficient for simpler tasks. We also present an experiment that shows that very intuitive control can be achieved, as novice subjects can control a robot arm through simple tasks after just a few minutes of practice and minimal instructions.
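
The abstract does not say which human motion model is used; a common choice for point-to-point reaching is the minimum-jerk profile, so the sketch below should be read as an illustrative stand-in rather than the authors' model. A controller could fit such a profile to the first part of a noisy Wii-mote trajectory and aim the arm at the predicted end point instead of chasing raw samples.

```python
import numpy as np

def minimum_jerk(x0, xf, duration, t):
    """Minimum-jerk position profile between x0 and xf, a standard model of
    point-to-point human reaching. Returns one position sample per entry of
    t (clipped to the movement duration); x0 and xf may be 3-D points."""
    tau = np.clip(np.asarray(t, dtype=float) / duration, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return np.asarray(x0) + np.outer(s, np.asarray(xf) - np.asarray(x0))
```
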
  • Item
    PPS-Tags: Physical, Perceptual and Semantic Tags for Autonomous Mobile Manipulation
    (Georgia Institute of Technology, 2009-10) Nguyen, Hai ; Deyle, Travis ; Reynolds, Matt S. ; Kemp, Charles C.
    For many promising application areas, autonomous mobile manipulators do not yet exhibit sufficiently robust performance. We propose the use of tags applied to task-relevant locations in human environments in order to help autonomous mobile manipulators physically interact with the location, perceive the location, and understand the location’s semantics. We call these tags physical, perceptual and semantic tags (PPS-tags). We present three examples of PPS-tags, each of which combines compliant and colorful material with a UHF RFID tag. The RFID tag provides a unique identifier that indexes into a semantic database that holds information such as the following: what actions can be performed at the location, how can these actions be performed, and what state changes should be observed upon task success? We also present performance results for our robot operating on a PPS-tagged light switch, rocker light switch, lamp, drawer, and trash can. We tested the robot performing the available actions from 4 distinct locations with each of these 5 tagged devices. For the light switch, rocker light switch, lamp, and trash can, the robot succeeded in all trials (24/24). The robot failed to open the drawer when starting from an oblique angle, and thus succeeded in 6 out of 8 trials. We also tested the ability of the robot to detect failure in unusual circumstances, such as the lamp being unplugged and the drawer being stuck.
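
The semantic side of a PPS-tag can be pictured as a small keyed record indexed by the RFID identifier. The sketch below shows one plausible layout; the field names, tag identifier, and example actions are invented for illustration and are not taken from the paper's implementation.

```python
# Hypothetical semantic database keyed by a tag's unique RFID identifier,
# holding what actions are available at the tagged location, how to perform
# them, and what state change should confirm success.
PPS_TAG_DB = {
    "urn:tag:lamp-01": {
        "device": "lamp",
        "actions": {
            "turn_on": {
                "how": "press the compliant tag surface downward",
                "expected_state_change": "ambient light level increases",
            },
            "turn_off": {
                "how": "press the compliant tag surface downward",
                "expected_state_change": "ambient light level decreases",
            },
        },
    },
}

def lookup_action(tag_id, action):
    """Return how to perform an action at a tagged location and the state
    change that should confirm success, or None if the tag or action is
    unknown to the database."""
    entry = PPS_TAG_DB.get(tag_id, {})
    return entry.get("actions", {}).get(action)
```
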
  • Item
    Normalized graph-cuts for large scale visual SLAM
    (Georgia Institute of Technology, 2009-10) Rogers, John G. ; Christensen, Henrik I.
    Simultaneous Localization and Mapping (SLAM) suffers from quadratic space and time complexity per update step. Recent advances include approximate techniques that keep the posterior tractable by forcing the information matrix to remain sparse, as well as exact techniques for generating the posterior over both the trajectory and the map in the full SLAM solution. Current approximate techniques for maintaining an online map estimate while a robot explores make capacity-based decisions about when to split into sub-maps. This paper describes an alternative partitioning strategy for online, approximate, real-time SLAM that uses normalized graph cuts so that less information is removed from the full map.
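
For readers unfamiliar with the criterion, the normalized-cut score of a bipartition (Shi and Malik) is cut(A,B)/assoc(A,V) + cut(B,A)/assoc(B,V). The sketch below evaluates it for a candidate split of a weighted graph; reading the nodes as poses or landmarks and the edge weights as shared information is our interpretation of the abstract, not the paper's implementation.

```python
import numpy as np

def normalized_cut_value(weights, in_a):
    """Normalized-cut score of a bipartition of a weighted, undirected graph:
    cut(A,B)/assoc(A,V) + cut(B,A)/assoc(B,V). `weights` is the symmetric
    adjacency matrix and `in_a` a boolean mask selecting partition A. A low
    score suggests a split that removes little weight (information) from
    the graph."""
    W = np.asarray(weights, dtype=float)
    a = np.asarray(in_a, dtype=bool)
    b = ~a
    cut = W[np.ix_(a, b)].sum()
    assoc_a = W[a, :].sum()
    assoc_b = W[b, :].sum()
    return cut / assoc_a + cut / assoc_b
```
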