Person:
Kemp, Charles C.

Publication Search Results

  • Item
    RFID-Guided Robots for Pervasive Automation
    (Georgia Institute of Technology, 2010-01-15) Deyle, Travis; Nguyen, Hai; Reynolds, Matt S.; Kemp, Charles C.
    Passive UHF RFID tags are well matched to robots' needs. Unlike low-frequency (LF) and high-frequency (HF) RFID tags, passive UHF RFID tags are readable from across a room, enabling a mobile robot to efficiently discover and locate them. Using tags' unique IDs, a semantic database, and RF perception via actuated antennas, this paper shows how a robot can reliably interact with people and manipulate labeled objects. (A sketch of this ID-to-database lookup appears after this list.)
  • Item
    RF vision: RFID receive signal strength indicator (RSSI) images for sensor fusion and mobile manipulation
    (Georgia Institute of Technology, 2009-10) Deyle, Travis; Nguyen, Hai; Reynolds, Matt S.; Kemp, Charles C.
    In this work we present a set of integrated methods that enable an RFID-enabled mobile manipulator to approach and grasp an object to which a self-adhesive passive (battery-free) UHF RFID tag has been affixed. Our primary contribution is a new mode of perception that produces images of the spatial distribution of received signal strength indication (RSSI) for each of the tagged objects in an environment. The intensity of each pixel in the 'RSSI image' is the measured RF signal strength for a particular tag in the corresponding direction. We construct these RSSI images by panning and tilting an RFID reader antenna while measuring the RSSI value at each bearing. Additionally, we present a framework for estimating a tagged object's 3D location using fused ID-specific features derived from an RSSI image, a camera image, and a laser range finder scan. We evaluate these methods using a robot with actuated, long-range RFID antennas and finger-mounted short-range antennas. The robot first scans its environment to discover which tagged objects are within range, creates a user interface, orients toward the user-selected object using RF signal strength, estimates the 3D location of the object using an RSSI image with sensor fusion, approaches and grasps the object, and uses its finger-mounted antennas to confirm that the desired object has been grasped. In our tests, the sensor fusion system with an RSSI image correctly located the requested object in 17 out of 18 trials (94.4%), an 11.1% improvement over the system's performance when not using an RSSI image. The robot correctly oriented to the requested object in 8 out of 9 trials (88.9%), and in 3 out of 3 trials the entire system successfully grasped the object selected by the user. (A minimal sketch of the RSSI-image scan appears after this list.)
  • Item
    PPS-Tags: Physical, Perceptual and Semantic Tags for Autonomous Mobile Manipulation
    (Georgia Institute of Technology, 2009-10) Nguyen, Hai; Deyle, Travis; Reynolds, Matt S.; Kemp, Charles C.
    For many promising application areas, autonomous mobile manipulators do not yet exhibit sufficiently robust performance. We propose the use of tags applied to task-relevant locations in human environments in order to help autonomous mobile manipulators physically interact with the location, perceive the location, and understand the location’s semantics. We call these tags physical, perceptual and semantic tags (PPS-tags). We present three examples of PPS-tags, each of which combines compliant and colorful material with a UHF RFID tag. The RFID tag provides a unique identifier that indexes into a semantic database holding information such as: which actions can be performed at the location, how those actions can be performed, and what state changes should be observed upon task success. We also present performance results for our robot operating on a PPS-tagged light switch, rocker light switch, lamp, drawer, and trash can. We tested the robot performing the available actions from 4 distinct locations with each of these 5 tagged devices. For the light switch, rocker light switch, lamp, and trash can, the robot succeeded in all trials (24/24). The robot failed to open the drawer when starting from an oblique angle, and thus succeeded in 6 out of 8 trials. We also tested the ability of the robot to detect failure in unusual circumstances, such as the lamp being unplugged and the drawer being stuck. (A sketch of such a semantic database entry appears after this list.)
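
The first abstract describes a discovery-and-lookup pattern: the reader reports unique tag IDs, and each ID keys into a semantic database of labeled objects. Below is a minimal Python sketch of that pattern under stated assumptions; SEMANTIC_DB, the example tag IDs, and discover_tagged_objects are all invented for illustration and are not the paper's interfaces.

# Hypothetical semantic database: tag ID -> description of the labeled object.
SEMANTIC_DB = {
    "E200-3412-DCA9": {"name": "medication bottle", "graspable": True},
    "E200-3412-DCB3": {"name": "TV remote", "graspable": True},
}

def discover_tagged_objects(read_tag_ids):
    """Map unique IDs reported by a UHF RFID reader to known objects.

    `read_tag_ids` stands in for one inventory pass of the reader;
    IDs without a database entry are reported as unknown.
    """
    known, unknown = [], []
    for tag_id in read_tag_ids:
        entry = SEMANTIC_DB.get(tag_id)
        (known if entry else unknown).append((tag_id, entry))
    return known, unknown

known, unknown = discover_tagged_objects(["E200-3412-DCA9", "E200-0000-0000"])
print(known)    # [('E200-3412-DCA9', {'name': 'medication bottle', ...})]
print(unknown)  # [('E200-0000-0000', None)]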
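The second abstract describes building a per-tag 'RSSI image' by panning and tilting the reader antenna and recording signal strength at each bearing. The sketch below shows one way that scan loop could look, assuming a hypothetical measure_rssi(pan, tilt, tag_id) callable standing in for the actuated antenna and reader; it is an illustration, not the authors' implementation.

import numpy as np

def build_rssi_image(measure_rssi, pan_angles, tilt_angles, tag_id):
    """Build a per-tag RSSI image by scanning the antenna over a pan/tilt grid.

    `measure_rssi(pan, tilt, tag_id)` stands in for pointing the actuated
    antenna at a bearing and reading back the RSSI for one tag (or None if
    the tag was not read at that bearing).
    """
    image = np.full((len(tilt_angles), len(pan_angles)), np.nan)
    for i, tilt in enumerate(tilt_angles):
        for j, pan in enumerate(pan_angles):
            rssi = measure_rssi(pan, tilt, tag_id)
            if rssi is not None:
                image[i, j] = rssi  # pixel intensity = measured signal strength
    return image

def strongest_bearing(image, pan_angles, tilt_angles):
    """Return the (pan, tilt) bearing with the highest measured RSSI,
    e.g. to orient the robot toward a user-selected tagged object."""
    i, j = np.unravel_index(np.nanargmax(image), image.shape)
    return pan_angles[j], tilt_angles[i]

A real system would fuse such an image with camera and laser range data, as the abstract describes; this sketch covers only the image construction step.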
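The third abstract says each PPS-tag's unique ID indexes a semantic database holding which actions are available, how to perform them, and what state change signals success. A hypothetical entry with that shape might look like the following; all field names and the verification helper are assumptions, not the paper's schema.

# Hypothetical PPS-tag database entry mirroring the three kinds of
# information the abstract lists: available actions, how to perform
# them, and the state change expected on success.
PPS_DB = {
    "E200-5511-0042": {
        "device": "light switch",
        "actions": {
            "turn_on": {
                "how": "push up on the compliant tag surface",
                "expected_state_change": "room brightness increases",
            },
            "turn_off": {
                "how": "push down on the compliant tag surface",
                "expected_state_change": "room brightness decreases",
            },
        },
    },
}

def verify_success(observed_change, tag_id, action):
    """Compare an observed state change against the database's expectation,
    analogous to the check that lets the robot detect failures such as an
    unplugged lamp or a stuck drawer."""
    expected = PPS_DB[tag_id]["actions"][action]["expected_state_change"]
    return observed_change == expected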