Organizational Unit:
Healthcare Robotics Lab

Publication Search Results

Now showing 1 - 10 of 13
  • Item
    Autonomously learning to visually detect where manipulation will succeed
    (Georgia Institute of Technology, 2013-09) Nguyen, Hai ; Kemp, Charles C.
    Visual features can help predict if a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch. We present methods that enable a mobile manipulator to autonomously learn a function that takes an RGB image and a registered 3D point cloud as input and returns a 3D location at which a manipulation behavior is likely to succeed. With our methods, robots autonomously train a pair of support vector machine (SVM) classifiers by trying behaviors at locations in the world and observing the results. Our methods require a pair of manipulation behaviors that can change the state of the world between two sets (e.g., light switch up and light switch down), classifiers that detect when each behavior has been successful, and an initial hint as to where one of the behaviors will be successful. When given an image feature vector associated with a 3D location, a trained SVM predicts if the associated manipulation behavior will be successful at the 3D location. To evaluate our approach, we performed experiments with a PR2 robot from Willow Garage in a simulated home using behaviors that flip a light switch, push a rocker-type light switch, and operate a drawer. By using active learning, the robot efficiently learned SVMs that enabled it to consistently succeed at these tasks. After training, the robot also continued to learn in order to adapt in the event of failure.
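    A minimal sketch of the learning step is given below. It assumes visual feature vectors have already been computed for candidate 3D locations and uses scikit-learn's SVC as a stand-in for the paper's classifiers; the data, feature extraction, and robot behaviors are synthetic placeholders, not the authors' implementation.

        # Minimal sketch: train a success classifier from the robot's own trials.
        # Features and labels below are placeholders; in the paper they come from
        # visual features around 3D locations and from observing whether the
        # behavior actually succeeded there.
        import numpy as np
        from sklearn.svm import SVC

        features = np.random.rand(40, 16)          # one row per attempted location
        labels = np.array([0, 1] * 20)             # 1 = behavior succeeded, 0 = failed

        # One classifier per behavior in the pair (e.g., switch-up / switch-down).
        clf_up = SVC(kernel="rbf", probability=True).fit(features, labels)

        # Predict whether the behavior is likely to succeed at a new 3D location.
        new_location = np.random.rand(1, 16)
        print("P(success):", clf_up.predict_proba(new_location)[0, 1])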
  • Item
    ROS Commander (ROSCo): Behavior Creation for Home Robots
    (Georgia Institute of Technology, 2013-05) Nguyen, Hai ; Ciocarlie, Matei ; Hsiao, Kaijen ; Kemp, Charles C.
    We introduce ROS Commander (ROSCo), an open source system that enables expert users to construct, share, and deploy robot behaviors for home robots. A user builds a behavior in the form of a Hierarchical Finite State Machine (HFSM) out of generic, parameterized building blocks, with a real robot in the develop-and-test loop. Once constructed, users save behaviors in an open format for direct use with robots, or for use as parts of new behaviors. When the system is deployed, a user can show the robot where to apply behaviors relative to fiducial markers (AR Tags), which allows the robot to quickly become operational in a new environment. We show evidence that the underlying state machine representation and current building blocks are capable of spanning a variety of desirable behaviors for home robots, such as opening a refrigerator door with two arms, flipping a light switch, unlocking a door, and handing an object to someone. Our experiments show that sensor-driven behaviors constructed with ROSCo can be executed in realistic home environments with success rates between 80% and 100%. We conclude by describing a test in the home of a person with quadriplegia, in which the person was able to automate parts of his home using previously built behaviors.
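    The kind of hierarchical finite state machine ROSCo builds can be sketched in plain Python; the states, outcomes, and parameters below are hypothetical stand-ins for ROSCo's building blocks, not its actual interface.

        # Sketch of a hierarchical finite state machine (HFSM) assembled from
        # generic, parameterized states. A nested machine is itself an action,
        # which is what gives the hierarchy.
        class State:
            def __init__(self, name, action, transitions):
                self.name = name
                self.action = action              # callable returning an outcome string
                self.transitions = transitions    # outcome -> next state name

        class StateMachine:
            def __init__(self, states, start, terminal=("succeeded", "failed")):
                self.states = {s.name: s for s in states}
                self.start, self.terminal = start, terminal

            def run(self):
                current = self.start
                while current not in self.terminal:
                    state = self.states[current]
                    outcome = state.action()
                    current = state.transitions[outcome]
                return current

        # Hypothetical behavior: approach a tag, then press a switch.
        approach = StateMachine(
            [State("MOVE_TO_TAG", lambda: "at_tag", {"at_tag": "succeeded"})],
            start="MOVE_TO_TAG")
        flip_switch = StateMachine(
            [State("APPROACH", approach.run, {"succeeded": "PRESS", "failed": "failed"}),
             State("PRESS", lambda: "pressed", {"pressed": "succeeded"})],
            start="APPROACH")
        print(flip_switch.run())   # -> "succeeded"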
  • Item
    Robots for Humanity: A Case Study in Assistive Mobile Manipulation
    (Georgia Institute of Technology, 2013-03) Chen, Tiffany L. ; Ciocarlie, Matei ; Cousins, Steve ; Grice, Phillip M. ; Hawkins, Kelsey ; Hsiao, Kaijen ; Kemp, Charles C. ; King, Chih-Hung ; Lazewatsky, Daniel A. ; Nguyen, Hai ; Paepcke, Andreas ; Pantofaru, Caroline ; Smart, William D. ; Takayama, Leila
    Assistive mobile manipulators have the potential to one day serve as surrogates and helpers for people with disabilities, giving them the freedom to perform tasks such as scratching an itch, picking up a cup, or socializing with their families. This article introduces a collaborative project with the goal of putting assistive mobile manipulators into real homes to work with people with disabilities. Through a participatory design process in which users have been actively involved from day one, we are identifying and developing assistive capabilities for the PR2 robot. Our approach is to develop a diverse suite of open source software tools that blend the capabilities of the user and the robot. Within this article, we introduce the project, describe our progress, and discuss lessons we have learned.
  • Item
    Autonomous Active Learning of Task-Relevant Features for Mobile Manipulation
    (Georgia Institute of Technology, 2011) Nguyen, Hai ; Kemp, Charles C.
    We present an active learning approach that enables a mobile manipulator to autonomously learn task-relevant features. For a given behavior, our system trains a Support Vector Machine (SVM) that predicts the 3D locations at which the behavior will succeed. This decision is made based on visual features that surround each 3D location. After a quick initialization by the user, the robot efficiently collects and labels positive and negative examples fully autonomously. To demonstrate the efficacy of our approach, we present results for behaviors that flip a light switch up and down, push the top or bottom of a rocker-type light switch, and open or close a drawer. Our implementation uses a Willow Garage PR2 robot. We show that our approach produces classifiers that predict the success of these behaviors. In addition, we show that the robot can continuously learn from its experience. In our initial evaluation of 6 behaviors with learned classifiers, each behavior succeeded in 5 out of 5 trials with at most one retry.
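    One standard way to realize the efficient, autonomous data collection described above is uncertainty sampling: try the behavior at the candidate location the current classifier is least sure about. The sketch below illustrates that general technique with scikit-learn; it is an assumption for illustration, not the paper's exact selection rule.

        # Uncertainty sampling sketch: pick the unlabeled candidate closest to
        # the SVM decision boundary and try (and label) the behavior there.
        import numpy as np
        from sklearn.svm import SVC

        labeled_X = np.random.rand(10, 8)          # features of already-tried locations
        labeled_y = np.array([0, 1] * 5)           # observed failure / success
        candidates = np.random.rand(50, 8)         # untried candidate locations

        clf = SVC(kernel="rbf").fit(labeled_X, labeled_y)

        # Smallest |decision_function| = closest to the boundary = most uncertain.
        margins = np.abs(clf.decision_function(candidates))
        print("Next candidate to try:", int(np.argmin(margins)))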
  • Item
    Perceiving Clutter and Surfaces for Object Placement in Indoor Environments
    (Georgia Institute of Technology, 2010-12) Schuster, Martin J. ; Okerman, Jason ; Nguyen, Hai ; Rehg, James M. ; Kemp, Charles C.
    Handheld manipulable objects can often be found on flat surfaces within human environments. Researchers have previously demonstrated that perceptually segmenting a flat surface from the objects resting on it can enable robots to pick and place objects. However, methods for performing this segmentation can fail when applied to scenes with natural clutter. For example, low-profile objects and dense clutter that obscures the underlying surface can complicate the interpretation of the scene. As a first step towards characterizing the statistics of real-world clutter in human environments, we have collected and hand labeled 104 scans of cluttered tables using a tilting laser range finder (LIDAR) and a camera. Within this paper, we describe our method of data collection, present notable statistics from the dataset, and introduce a perceptual algorithm that uses machine learning to discriminate surface from clutter. We also present a method that enables a humanoid robot to place objects on uncluttered parts of flat surfaces using this perceptual algorithm. In cross-validation tests, the perceptual algorithm achieved a correct classification rate of 78.70% for surface and 90.66% for clutter, and outperformed our previously published algorithm. Our humanoid robot succeeded in 16 out of 20 object placing trials on 9 different unaltered tables, and performed successfully in several high-clutter situations. 3 out of 4 failures resulted from placing objects too close to the edge of the table.
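    The per-class accuracy figures above are the sort of numbers a cross-validated point classifier produces. The sketch below shows one way such an evaluation could be run; the features, labels, and classifier are synthetic placeholders, not the paper's algorithm.

        # Cross-validated surface-vs-clutter classification with per-class accuracy.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import recall_score

        rng = np.random.default_rng(1)
        X = rng.random((500, 12))                  # per-point features (height, color, ...)
        y = rng.integers(0, 2, 500)                # 0 = surface, 1 = clutter

        pred = cross_val_predict(RandomForestClassifier(), X, y, cv=5)
        surface_acc = recall_score(y, pred, pos_label=0)   # correct rate on surface points
        clutter_acc = recall_score(y, pred, pos_label=1)   # correct rate on clutter points
        print(f"surface: {surface_acc:.2%}  clutter: {clutter_acc:.2%}")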
  • Item
    The complex structure of simple devices: A survey of trajectories and forces that open doors and drawers
    (Georgia Institute of Technology, 2010-09) Jain, Advait ; Nguyen, Hai ; Rath, Mrinal ; Okerman, Jason ; Kemp, Charles C.
    Instrumental activities of daily living (IADLs) involve physical interactions with diverse mechanical systems found within human environments. In this paper, we describe our efforts to capture the everyday mechanics of doors and drawers, which form an important sub-class of mechanical systems for IADLs. We also discuss the implications of our results for the design of assistive robots. By answering questions such as “How high are the handles of most doors and drawers?” and “What forces are necessary to open most doors and drawers?”, our approach can inform robot designers as they make tradeoffs between competing requirements for assistive robots, such as cost, workspace, and power. Using a custom motion/force capture system, we captured kinematic trajectories and forces while operating 29 doors and 15 drawers in 6 homes and 1 office building in Atlanta, GA, USA. We also hand-measured the kinematics of 299 doors and 152 drawers in 11 area homes. We show that operation of these seemingly simple mechanisms involves significant complexities, including non-linear forces and large kinematic variation. We also show that the data exhibit significant structure. For example, 91.8% of the variation in the force sequences used to open doors can be represented using a 2-dimensional linear subspace. This complexity and structure suggests that capturing everyday mechanics may be a useful approach for improving the design of assistive robots.
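    The 91.8% figure is the kind of statistic a principal component analysis yields. The snippet below shows how such a number would be computed from force sequences resampled to a common length; the data here are synthetic placeholders, not the captured trajectories.

        # Fraction of variance in door-opening force sequences captured by the
        # first two principal components (synthetic data).
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        # Each row: one door trial, forces resampled to a fixed-length sequence.
        force_sequences = rng.random((29, 100))

        pca = PCA(n_components=2).fit(force_sequences)
        print("Variance explained by 2-D subspace:",
              pca.explained_variance_ratio_.sum())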
  • Item
    RFID-Guided Robots for Pervasive Automation
    (Georgia Institute of Technology, 2010-01-15) Deyle, Travis ; Nguyen, Hai ; Reynolds, Matt S. ; Kemp, Charles C.
    Passive UHF RFID tags are well matched to robots' needs. Unlike low-frequency (LF) and high-frequency (HF) RFID tags, passive UHF RFID tags are readable from across a room, enabling a mobile robot to efficiently discover and locate them. Using tags' unique IDs, a semantic database, and RF perception via actuated antennas, this paper shows how a robot can reliably interact with people and manipulate labeled objects.
  • Item
    RF vision: RFID receive signal strength indicator (RSSI) images for sensor fusion and mobile manipulation
    (Georgia Institute of Technology, 2009-10) Deyle, Travis ; Nguyen, Hai ; Reynolds, Matt S. ; Kemp, Charles C.
    In this work we present a set of integrated methods that enable an RFID-enabled mobile manipulator to approach and grasp an object to which a self-adhesive passive (battery-free) UHF RFID tag has been affixed. Our primary contribution is a new mode of perception that produces images of the spatial distribution of received signal strength indication (RSSI) for each of the tagged objects in an environment. The intensity of each pixel in the 'RSSI image' is the measured RF signal strength for a particular tag in the corresponding direction. We construct these RSSI images by panning and tilting an RFID reader antenna while measuring the RSSI value at each bearing. Additionally, we present a framework for estimating a tagged object's 3D location using fused ID-specific features derived from an RSSI image, a camera image, and a laser range finder scan. We evaluate these methods using a robot with actuated, long-range RFID antennas and finger-mounted short-range antennas. The robot first scans its environment to discover which tagged objects are within range, creates a user interface, orients toward the user-selected object using RF signal strength, estimates the 3D location of the object using an RSSI image with sensor fusion, approaches and grasps the object, and uses its finger-mounted antennas to confirm that the desired object has been grasped. In our tests, the sensor fusion system with an RSSI image correctly located the requested object in 17 out of 18 trials (94.4%), an 11.1% improvement over the system's performance when not using an RSSI image. The robot correctly oriented to the requested object in 8 out of 9 trials (88.9%), and in 3 out of 3 trials the entire system successfully grasped the object selected by the user.
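    The construction of an RSSI image can be sketched as a simple scan loop. The read_rssi() function below is a hypothetical stand-in for aiming the actuated antenna and querying the reader; the rest is just arranging measurements into a pan/tilt grid.

        # Build an 'RSSI image': one pixel of received signal strength per bearing.
        import numpy as np

        pan_angles = np.linspace(-60, 60, 121)     # degrees
        tilt_angles = np.linspace(-30, 30, 61)
        _rng = np.random.default_rng(0)

        def read_rssi(tag_id, pan, tilt):
            # Placeholder: a real system would aim the antenna and query the reader.
            return _rng.random()

        def rssi_image(tag_id):
            image = np.zeros((len(tilt_angles), len(pan_angles)))
            for i, tilt in enumerate(tilt_angles):
                for j, pan in enumerate(pan_angles):
                    image[i, j] = read_rssi(tag_id, pan, tilt)   # one pixel per bearing
            return image

        img = rssi_image("med_bottle_01")
        # The bearing with the strongest return gives a coarse estimate of tag direction.
        ti, pi = np.unravel_index(np.argmax(img), img.shape)
        print("Strongest RSSI at pan=%.1f, tilt=%.1f degrees"
              % (pan_angles[pi], tilt_angles[ti]))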
  • Item
    PPS-Tags: Physical, Perceptual and Semantic Tags for Autonomous Mobile Manipulation
    (Georgia Institute of Technology, 2009-10) Nguyen, Hai ; Deyle, Travis ; Reynolds, Matt S. ; Kemp, Charles C.
    For many promising application areas, autonomous mobile manipulators do not yet exhibit sufficiently robust performance. We propose the use of tags applied to task-relevant locations in human environments in order to help autonomous mobile manipulators physically interact with the location, perceive the location, and understand the location’s semantics. We call these tags physical, perceptual and semantic tags (PPS-tags). We present three examples of PPS-tags, each of which combines compliant and colorful material with a UHF RFID tag. The RFID tag provides a unique identifier that indexes into a semantic database that holds information such as the following: what actions can be performed at the location, how can these actions be performed, and what state changes should be observed upon task success? We also present performance results for our robot operating on a PPS-tagged light switch, rocker light switch, lamp, drawer, and trash can. We tested the robot performing the available actions from 4 distinct locations with each of these 5 tagged devices. For the light switch, rocker light switch, lamp, and trash can, the robot succeeded in all trials (24/24). The robot failed to open the drawer when starting from an oblique angle, and thus succeeded in 6 out of 8 trials. We also tested the ability of the robot to detect failure in unusual circumstances, such as the lamp being unplugged and the drawer being stuck.
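    The role of the semantic database can be illustrated with a small lookup keyed by tag ID; the field names and entries below are illustrative guesses at the kind of information described, not the authors' schema.

        # Hypothetical semantic database indexed by a PPS-tag's RFID ID: which
        # actions are available, how to perform them, and what state change
        # signals success.
        semantic_db = {
            "E200_3412_DCBA_0001": {
                "device": "rocker light switch",
                "actions": {
                    "turn_on":  {"behavior": "push_top",    "expect": "light_on"},
                    "turn_off": {"behavior": "push_bottom", "expect": "light_off"},
                },
            },
            "E200_3412_DCBA_0002": {
                "device": "drawer",
                "actions": {
                    "open":  {"behavior": "pull_handle", "expect": "drawer_open"},
                    "close": {"behavior": "push_front",  "expect": "drawer_closed"},
                },
            },
        }

        def lookup(tag_id, action):
            entry = semantic_db[tag_id]["actions"][action]
            return entry["behavior"], entry["expect"]

        print(lookup("E200_3412_DCBA_0001", "turn_on"))  # ('push_top', 'light_on')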
  • Item
    Bio-inspired Assistive Robotics: Service Dogs as a Model for Human-Robot Interaction and Mobile Manipulation
    (Georgia Institute of Technology, 2008-10) Nguyen, Hai ; Kemp, Charles C.
    Service dogs have successfully provided assistance to thousands of motor-impaired people worldwide. As a step towards the creation of robots that provide comparable assistance, we present a biologically inspired robot capable of obeying many of the same commands and exploiting the same environmental modifications as service dogs. The robot responds to a subset of the 71 verbal commands listed in the service dog training manual used by Georgia Canines for Independence. In our implementation, the human directs the robot by giving a verbal command and illuminating a task-relevant location with an off-the-shelf green laser pointer. We also describe a novel and inexpensive way to engineer the environment in order to help assistive robots perform useful tasks with generality and robustness. In particular, we show that by tying or otherwise affixing colored towels to doors and drawers an assistive robot can robustly open these doors and drawers in a manner similar to a service dog. This is analogous to the common practice of tying bandannas or handkerchiefs to door handles and drawer handles in order to enable service dogs to operate them. This method has the advantage of simplifying both the perception and physical interaction required to perform the task. It also enables the robot to use the same small set of behaviors to perform a variety of tasks across distinct doors and drawers. We report quantitative results for our assistive robot when performing assistive tasks in response to user commands in a modified environment. In our tests, the robot successfully opened two different drawers in 18 out of 20 trials (90%), closed a drawer in 9 out of 10 trials (90%), and opened a door that required first operating a handle and then pushing it open in 8 out of 10 trials (80%). Additionally, the robot succeeded in single trial tests of opening a microwave, grasping an object, placing an object, delivering an object, and responding to various other commands, such as staying quiet.
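    The perceptual simplification that a colored towel provides can be illustrated with basic HSV thresholding. The snippet assumes OpenCV, a synthetic image, and an arbitrary green color range; it is a sketch of the general idea, not the system's detector.

        # Detect a brightly colored "towel" patch via HSV thresholding and report
        # its centroid as a grasp target in image coordinates.
        import numpy as np
        import cv2

        # Synthetic BGR image with a saturated green patch standing in for a towel.
        img = np.zeros((240, 320, 3), dtype=np.uint8)
        img[100:180, 140:220] = (0, 200, 0)

        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (40, 100, 100), (80, 255, 255))   # green range

        m = cv2.moments(mask)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            print("Towel centroid at pixel (%.0f, %.0f)" % (cx, cy))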