Organizational Unit: Healthcare Robotics Lab

Publication Search Results

  • Item
    RF vision: RFID receive signal strength indicator (RSSI) images for sensor fusion and mobile manipulation
    (Georgia Institute of Technology, 2009-10) Deyle, Travis ; Nguyen, Hai ; Reynolds, Matt S. ; Kemp, Charles C.
    In this work we present a set of integrated methods that enable an RFID-enabled mobile manipulator to approach and grasp an object to which a self-adhesive passive (battery-free) UHF RFID tag has been affixed. Our primary contribution is a new mode of perception that produces images of the spatial distribution of received signal strength indication (RSSI) for each of the tagged objects in an environment. The intensity of each pixel in the 'RSSI image' is the measured RF signal strength for a particular tag in the corresponding direction. We construct these RSSI images by panning and tilting an RFID reader antenna while measuring the RSSI value at each bearing. Additionally, we present a framework for estimating a tagged object's 3D location using fused ID-specific features derived from an RSSI image, a camera image, and a laser range finder scan. We evaluate these methods using a robot with actuated, long-range RFID antennas and finger-mounted short-range antennas. The robot first scans its environment to discover which tagged objects are within range, creates a user interface, orients toward the user-selected object using RF signal strength, estimates the 3D location of the object using an RSSI image with sensor fusion, approaches and grasps the object, and uses its finger-mounted antennas to confirm that the desired object has been grasped. In our tests, the sensor fusion system with an RSSI image correctly located the requested object in 17 out of 18 trials (94.4%), an 11.1% improvement over the system's performance when not using an RSSI image. The robot correctly oriented to the requested object in 8 out of 9 trials (88.9%), and in 3 out of 3 trials the entire system successfully grasped the object selected by the user.
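    (An illustrative code sketch of the RSSI-image scan appears after this listing.)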
  • Item
    PPS-Tags: Physical, Perceptual and Semantic Tags for Autonomous Mobile Manipulation
    (Georgia Institute of Technology, 2009-10) Nguyen, Hai ; Deyle, Travis ; Reynolds, Matt S. ; Kemp, Charles C.
    For many promising application areas, autonomous mobile manipulators do not yet exhibit sufficiently robust performance. We propose the use of tags applied to task-relevant locations in human environments in order to help autonomous mobile manipulators physically interact with the location, perceive the location, and understand the location’s semantics. We call these tags physical, perceptual and semantic tags (PPS-tags). We present three examples of PPS-tags, each of which combines compliant and colorful material with a UHF RFID tag. The RFID tag provides a unique identifier that indexes into a semantic database that holds information such as the following: what actions can be performed at the location, how can these actions be performed, and what state changes should be observed upon task success? We also present performance results for our robot operating on a PPS-tagged light switch, rocker light switch, lamp, drawer, and trash can. We tested the robot performing the available actions from 4 distinct locations with each of these 5 tagged devices. For the light switch, rocker light switch, lamp, and trash can, the robot succeeded in all trials (24/24). The robot failed to open the drawer when starting from an oblique angle, and thus succeeded in 6 out of 8 trials. We also tested the ability of the robot to detect failure in unusual circumstances, such as the lamp being unplugged and the drawer being stuck.
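    (An illustrative sketch of a PPS-tag semantic lookup appears after this listing.)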
  • Item
    Bio-inspired Assistive Robotics: Service Dogs as a Model for Human-Robot Interaction and Mobile Manipulation
    (Georgia Institute of Technology, 2008-10) Nguyen, Hai ; Kemp, Charles C.
    Service dogs have successfully provided assistance to thousands of motor-impaired people worldwide. As a step towards the creation of robots that provide comparable assistance, we present a biologically inspired robot capable of obeying many of the same commands and exploiting the same environmental modifications as service dogs. The robot responds to a subset of the 71 verbal commands listed in the service dog training manual used by Georgia Canines for Independence. In our implementation, the human directs the robot by giving a verbal command and illuminating a task-relevant location with an off-the-shelf green laser pointer. We also describe a novel and inexpensive way to engineer the environment in order to help assistive robots perform useful tasks with generality and robustness. In particular, we show that by tying or otherwise affixing colored towels to doors and drawers an assistive robot can robustly open these doors and drawers in a manner similar to a service dog. This is analogous to the common practice of tying bandannas or handkerchiefs to door handles and drawer handles in order to enable service dogs to operate them. This method has the advantage of simplifying both the perception and physical interaction required to perform the task. It also enables the robot to use the same small set of behaviors to perform a variety of tasks across distinct doors and drawers. We report quantitative results for our assistive robot when performing assistive tasks in response to user commands in a modified environment. In our tests, the robot successfully opened two different drawers in 18 out of 20 trials (90%), closed a drawer in 9 out of 10 trials (90%), and opened a door that required first operating a handle and then pushing it open in 8 out of 10 trials (80%). Additionally, the robot succeeded in single trial tests of opening a microwave, grasping an object, placing an object, delivering an object, and responding to various other commands, such as staying quiet.
  • Item
    A Clickable World: Behavior Selection Through Pointing and Context for Mobile Manipulation
    (Georgia Institute of Technology, 2008-09) Nguyen, Hai ; Jain, Advait ; Anderson, Cressel D. ; Kemp, Charles C.
    We present a new behavior selection system for human-robot interaction that maps virtual buttons overlaid on the physical environment to the robot's behaviors, thereby creating a clickable world. The user clicks on a virtual button and activates the associated behavior by briefly illuminating a corresponding 3D location with an off-the-shelf green laser pointer. As we have described in previous work, the robot can detect this click and estimate its 3D location using an omnidirectional camera and a pan/tilt stereo camera. In this paper, we show that the robot can select the appropriate behavior to execute using the 3D location of the click, the context around this 3D location, and its own state. For this work, the robot performs this selection process using a cascade of classifiers. We demonstrate the efficacy of this approach with an assistive object-fetching application. Through empirical evaluation, we show that the 3D location of the click, the state of the robot, and the surrounding context are sufficient for the robot to choose the correct behavior from a set of behaviors and perform the following tasks: pick up a designated object from a floor or table, deliver an object to a designated person, place an object on a designated table, go to a designated location, and touch a designated location with its end effector.
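    (An illustrative sketch of cascade-style behavior selection appears after this listing.)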
  • Item
    EL-E: An Assistive Mobile Manipulator that Autonomously Fetches Objects from Flat Surfaces
    (Georgia Institute of Technology, 2008-03-12) Nguyen, Hai ; Anderson, Cressel D. ; Trevor, Alexander J. B. ; Jain, Advait ; Xu, Zhe ; Kemp, Charles C.
    Objects within human environments are usually found on flat surfaces that are orthogonal to gravity, such as floors, tables, and shelves. We first present a new assistive robot that is explicitly designed to take advantage of this common structure in order to retrieve unmodeled, everyday objects for people with motor impairments. This compact, statically stable mobile manipulator has a novel kinematic and sensory configuration that facilitates autonomy and human-robot interaction within indoor human environments. Second, we present a behavior system that enables this robot to fetch objects selected with a laser pointer from the floor and tables. The robot can approach an object selected with the laser pointer interface, detect if the object is on an elevated surface, raise or lower its arm and sensors to this surface, and visually and tactilely grasp the object. Once the object is acquired, the robot can place the object on a laser-designated surface above the floor, follow the laser pointer on the floor, or deliver the object to a seated person selected with the laser pointer. Within this paper we present initial results for object acquisition and delivery to a seated, able-bodied individual. For this test, the robot succeeded in 6 out of 7 trials (86%).
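    (An illustrative sketch of this fetch-and-deliver sequence appears after this listing.)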
  • Item
    A Point-and-Click Interface for the Real World: Laser Designation of Objects for Mobile Manipulation
    (Georgia Institute of Technology, 2008-03) Kemp, Charles C. ; Anderson, Cressel D. ; Nguyen, Hai ; Trevor, Alexander J. B. ; Xu, Zhe
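
Illustrative code sketches

The "RF vision" item above describes building an RSSI image by panning and tilting an RFID reader antenna and recording the received signal strength for each tagged object at each bearing, with each pixel holding that tag's measured RSSI in the corresponding direction. The sketch below is a minimal reconstruction of that scan loop; the reader interface (`read_rssi`), the scan ranges, and the 5-degree step are assumptions, not details taken from the paper.

```python
"""Sketch of the RSSI-image scan described in "RF vision" (Deyle et al., 2009).

Illustrative reconstruction only: the reader interface, scan ranges, and
5-degree step are assumptions, not the authors' implementation.
"""
import numpy as np

PAN_DEG = np.arange(-90, 91, 5)    # assumed pan bearings (degrees)
TILT_DEG = np.arange(-30, 61, 5)   # assumed tilt bearings (degrees)

def build_rssi_image(read_rssi, tag_id):
    """Pan/tilt the reader antenna and record RSSI per bearing for one tag.

    `read_rssi(tag_id, pan, tilt)` stands in for the RFID reader driver; it
    should return a signal strength in dBm, or None if the tag did not respond.
    Each pixel of the returned image is the RSSI measured in that direction.
    """
    image = np.full((len(TILT_DEG), len(PAN_DEG)), np.nan)
    for i, tilt in enumerate(TILT_DEG):
        for j, pan in enumerate(PAN_DEG):
            rssi = read_rssi(tag_id, pan, tilt)
            if rssi is not None:
                image[i, j] = rssi
    return image

def strongest_bearing(image):
    """(pan, tilt) of the brightest pixel, i.e. the direction of peak RSSI."""
    i, j = np.unravel_index(np.nanargmax(image), image.shape)
    return PAN_DEG[j], TILT_DEG[i]

if __name__ == "__main__":
    # Simulated reader: RSSI falls off with angular distance from a tag at (20, 10).
    def fake_reader(tag_id, pan, tilt):
        d2 = (pan - 20.0) ** 2 + (tilt - 10.0) ** 2
        return -40.0 - 0.01 * d2 if d2 < 3600.0 else None

    img = build_rssi_image(fake_reader, "tagged_object")
    print("peak RSSI bearing (pan, tilt):", strongest_bearing(img))
```

The published system goes on to fuse the RSSI image with camera and laser range finder features to estimate the tagged object's 3D location; that fusion step is not shown here.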
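
The "PPS-Tags" item above explains that each tag's RFID identifier indexes into a semantic database describing which actions are available at that location, how to perform them, and what state change should confirm success. The sketch below shows one way such a lookup could be organized; the schema, tag IDs, and entries are invented, since the paper does not publish its database format.

```python
"""Sketch of the kind of semantic database a PPS-tag's RFID identifier could
index into (see "PPS-Tags", Nguyen et al., 2009). The schema, tag IDs, and
entries below are invented for illustration; this is not the authors' database.
"""
from dataclasses import dataclass, field

@dataclass
class TagEntry:
    """What the robot needs to know about one tagged location."""
    device: str                                  # human-readable name of the tagged device
    actions: dict = field(default_factory=dict)  # action -> {"behavior", "expected_change"}

# Hypothetical database keyed by each UHF RFID tag's unique identifier.
SEMANTIC_DB = {
    "tag-0001": TagEntry(
        device="rocker light switch",
        actions={
            "turn_on":  {"behavior": "push_top_of_tag",
                         "expected_change": "room brightness increases"},
            "turn_off": {"behavior": "push_bottom_of_tag",
                         "expected_change": "room brightness decreases"},
        },
    ),
    "tag-0002": TagEntry(
        device="drawer",
        actions={
            "open":  {"behavior": "grasp_tag_and_pull",
                      "expected_change": "tag moves toward the robot"},
            "close": {"behavior": "push_tag",
                      "expected_change": "tag moves away from the robot"},
        },
    ),
}

def lookup(tag_id: str, action: str):
    """Resolve a sensed tag ID and a requested action into the behavior to run
    and the state change that should be observed on success."""
    entry = SEMANTIC_DB[tag_id]
    spec = entry.actions[action]
    return entry.device, spec["behavior"], spec["expected_change"]

if __name__ == "__main__":
    print(lookup("tag-0001", "turn_on"))
```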
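
The "Clickable World" item above states that behavior selection runs the click's 3D location, the surrounding context, and the robot's state through a cascade of classifiers. The sketch below illustrates the cascade pattern with hand-written stages; the features and decision rules are invented and do not reproduce the paper's classifiers.

```python
"""Sketch of cascade-style behavior selection in the spirit of "A Clickable
World" (Nguyen et al., 2008). The features and decision rules below are
invented for illustration and do not reproduce the paper's classifiers.
"""
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClickContext:
    """The inputs the abstract names: the click, its context, and robot state."""
    click_height_m: float   # height of the clicked 3D location above the floor
    near_person: bool       # was the click close to a detected person?
    on_flat_surface: bool   # does the local geometry look like a floor or table?
    holding_object: bool    # robot state: is something already in the gripper?

def stage_deliver(ctx: ClickContext) -> Optional[str]:
    # Holding an object and clicking near a person most plausibly means
    # "deliver the object to that person".
    if ctx.holding_object and ctx.near_person:
        return "deliver_object_to_person"
    return None

def stage_place(ctx: ClickContext) -> Optional[str]:
    if ctx.holding_object and ctx.on_flat_surface and ctx.click_height_m > 0.3:
        return "place_object_on_table"
    return None

def stage_pickup(ctx: ClickContext) -> Optional[str]:
    if not ctx.holding_object and ctx.on_flat_surface:
        return "pick_up_object"
    return None

def stage_default(ctx: ClickContext) -> Optional[str]:
    return "drive_to_location"

CASCADE = [stage_deliver, stage_place, stage_pickup, stage_default]

def select_behavior(ctx: ClickContext) -> str:
    """Run the click through each stage in turn; the first stage that claims
    the click decides which behavior the robot executes."""
    for stage in CASCADE:
        behavior = stage(ctx)
        if behavior is not None:
            return behavior
    raise RuntimeError("cascade ended without a decision")

if __name__ == "__main__":
    ctx = ClickContext(click_height_m=0.75, near_person=False,
                       on_flat_surface=True, holding_object=False)
    print(select_behavior(ctx))  # -> pick_up_object
```

Each stage either claims the click and names a behavior or defers to the next stage, which lets earlier, more specific stages override the generic drive-to-location default.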
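
The EL-E item above walks through a fetch sequence: approach the laser-designated location, detect whether the object is on the floor or an elevated surface, move the arm and sensors to that height, grasp, and deliver. The sketch below strings those steps together around a simulated robot; every method name is a hypothetical stand-in for one of the robot's real behaviors, not the authors' code.

```python
"""Sketch of the fetch-and-deliver sequence described in the EL-E abstract
(Nguyen et al., 2008). SimulatedRobot and its method names are hypothetical
stand-ins so the sequence can run end to end.
"""

class SimulatedRobot:
    """Minimal stand-in for the robot's real behaviors."""
    def approach(self, xyz): print(f"driving to {xyz}")
    def detect_surface_height(self): return 0.72  # pretend the object is on a table
    def set_arm_height(self, height): print(f"raising arm and sensors to {height:.2f} m")
    def grasp_object(self): print("grasping (vision + touch)"); return True
    def hand_over_object(self): print("handing the object to the person")

def fetch_and_deliver(robot, object_xyz, person_xyz):
    """Approach a laser-designated object, adapt to its surface height,
    grasp it, and deliver it to a laser-designated seated person."""
    robot.approach(object_xyz)                           # drive to the clicked location
    robot.set_arm_height(robot.detect_surface_height())  # floor vs. elevated surface
    if not robot.grasp_object():                         # stop if the grasp fails
        return False
    robot.approach(person_xyz)                           # drive to the recipient
    robot.hand_over_object()
    return True

if __name__ == "__main__":
    ok = fetch_and_deliver(SimulatedRobot(),
                           object_xyz=(2.0, 0.5, 0.74),
                           person_xyz=(0.0, 3.0, 0.6))
    print("delivered" if ok else "failed")
```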
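
The point-and-click interface item above detects the user's laser "click" as a bright spot seen through a narrow-band green filter and then points the stereo pan/tilt camera at it to estimate the 3D position. The sketch below shows only a simplified detection step (threshold and centroid on the green channel); the published detector and the stereo 3D estimation are not reproduced here.

```python
"""Sketch of a simplified laser-spot detector in the spirit of "A Point-and-Click
Interface for the Real World" (Kemp et al., 2008). The threshold-and-centroid
approach and the synthetic test frame are illustrative assumptions.
"""
import numpy as np

def detect_laser_spot(green_channel: np.ndarray, threshold: int = 240):
    """Return the (row, col) centroid of the brightest green blob, or None.

    `green_channel` is a single-channel image as seen through the narrow-band
    green filter, so the laser spot should be close to saturation.
    """
    mask = green_channel >= threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

if __name__ == "__main__":
    # Synthetic 240x320 frame with a small saturated "laser spot" near (60, 200).
    frame = np.random.randint(0, 80, size=(240, 320), dtype=np.uint8)
    frame[58:62, 198:202] = 255
    print(detect_laser_spot(frame))  # roughly (59.5, 199.5)
```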
    We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously se- lect a 3D location in the world and communicate it to a mo- bile robot. The human points at a location of interest and illuminates it (“clicks it”) with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and esti- mates the location’s 3D position with respect to the robot’s frame of reference. Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications. We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by “clicking” on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off of the floor.