Organizational Unit: Healthcare Robotics Lab

Publication Search Results

  • Autonomously learning to visually detect where manipulation will succeed
    (Georgia Institute of Technology, 2013-09) Nguyen, Hai; Kemp, Charles C.
    Visual features can help predict if a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch. We present methods that enable a mobile manipulator to autonomously learn a function that takes an RGB image and a registered 3D point cloud as input and returns a 3D location at which a manipulation behavior is likely to succeed. With our methods, robots autonomously train a pair of support vector machine (SVM) classifiers by trying behaviors at locations in the world and observing the results. Our methods require a pair of manipulation behaviors that can change the state of the world between two sets (e.g., light switch up and light switch down), classifiers that detect when each behavior has been successful, and an initial hint as to where one of the behaviors will be successful. When given an image feature vector associated with a 3D location, a trained SVM predicts if the associated manipulation behavior will be successful at the 3D location. To evaluate our approach, we performed experiments with a PR2 robot from Willow Garage in a simulated home using behaviors that flip a light switch, push a rocker-type light switch, and operate a drawer. By using active learning, the robot efficiently learned SVMs that enabled it to consistently succeed at these tasks. After training, the robot also continued to learn in order to adapt in the event of failure.
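    A minimal sketch of the pair-of-classifiers idea, assuming scikit-learn; the feature extraction and trial execution below are hypothetical stand-ins for the paper's RGB/point-cloud features and robot behaviors:

      # Sketch: train a pair of SVMs that predict where two complementary
      # behaviors (e.g., switch-up / switch-down) will succeed; labels come
      # from the robot trying a behavior and observing the outcome.
      import numpy as np
      from sklearn.svm import SVC

      def image_features_at(location):
          # Hypothetical stand-in for features computed from an RGB image
          # and registered 3D point cloud around a 3D location.
          return np.random.rand(16)

      def try_behavior(behavior, location):
          # Hypothetical stand-in for executing the behavior on the robot
          # and detecting success with a verification classifier.
          return np.random.rand() > 0.5

      def train_pair(candidate_locations, behaviors, n_trials=50):
          # Assumes the trials yield both successes and failures.
          classifiers = {}
          for behavior in behaviors:
              X, y = [], []
              for _ in range(n_trials):
                  loc = candidate_locations[np.random.randint(len(candidate_locations))]
                  X.append(image_features_at(loc))
                  y.append(try_behavior(behavior, loc))  # trial outcome is the label
              classifiers[behavior] = SVC(kernel='rbf').fit(X, y)
          return classifiers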
  • ROS Commander (ROSCo): Behavior Creation for Home Robots
    (Georgia Institute of Technology, 2013-05) Nguyen, Hai; Ciocarlie, Matei; Hsiao, Kaijen; Kemp, Charles C.
    We introduce ROS Commander (ROSCo), an open source system that enables expert users to construct, share, and deploy robot behaviors for home robots. A user builds a behavior in the form of a Hierarchical Finite State Machine (HFSM) out of generic, parameterized building blocks, with a real robot in the development and test loop. Once constructed, users save behaviors in an open format for direct use with robots, or for use as parts of new behaviors. When the system is deployed, a user can show the robot where to apply behaviors relative to fiducial markers (AR Tags), which allows the robot to quickly become operational in a new environment. We show evidence that the underlying state machine representation and current building blocks are capable of spanning a variety of desirable behaviors for home robots, such as opening a refrigerator door with two arms, flipping a light switch, unlocking a door, and handing an object to someone. Our experiments show that sensor-driven behaviors constructed with ROSCo can be executed in realistic home environments with success rates between 80% and 100%. We conclude by describing a test in the home of a person with quadriplegia, in which the person was able to automate parts of his home using previously built behaviors.
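    A minimal plain-Python sketch of the hierarchical finite state machine idea; the class, state, and parameter names here are hypothetical illustrations, not ROSCo's actual ROS-based API:

      # Sketch: an HFSM built from parameterized states. A StateMachine is
      # itself a State, which is what makes the hierarchy possible.
      class State:
          def execute(self):            # returns an outcome string
              raise NotImplementedError

      class MoveGripper(State):
          def __init__(self, pose):
              self.pose = pose          # parameter: hypothetical target pose
          def execute(self):
              print('moving gripper to', self.pose)
              return 'succeeded'

      class StateMachine(State):
          def __init__(self, start):
              self.start, self.transitions = start, {}
          def add(self, name, state, on):
              self.transitions[name] = (state, on)   # on: outcome -> next name
              return self
          def execute(self):
              name = self.start
              while name not in ('succeeded', 'failed'):
                  state, on = self.transitions[name]
                  name = on[state.execute()]
              return name

      flip_switch = (StateMachine('REACH')
                     .add('REACH', MoveGripper('pre_switch'), {'succeeded': 'FLIP'})
                     .add('FLIP', MoveGripper('through_switch'),
                          {'succeeded': 'succeeded'}))
      print(flip_switch.execute())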
  • Robots for Humanity: A Case Study in Assistive Mobile Manipulation
    (Georgia Institute of Technology, 2013-03) Chen, Tiffany L.; Ciocarlie, Matei; Cousins, Steve; Grice, Phillip M.; Hawkins, Kelsey; Hsiao, Kaijen; Kemp, Charles C.; King, Chih-Hung; Lazewatsky, Daniel A.; Nguyen, Hai; Paepcke, Andreas; Pantofaru, Caroline; Smart, William D.; Takayama, Leila
    Assistive mobile manipulators have the potential to one day serve as surrogates and helpers for people with disabilities, giving them the freedom to perform tasks such as scratching an itch, picking up a cup, or socializing with their families. This article introduces a collaborative project with the goal of putting assistive mobile manipulators into real homes to work with people with disabilities. Through a participatory design process in which users have been actively involved from day one, we are identifying and developing assistive capabilities for the PR2 robot. Our approach is to develop a diverse suite of open source software tools that blend the capabilities of the user and the robot. Within this article, we introduce the project, describe our progress, and discuss lessons we have learned.
  • Autonomous Active Learning of Task-Relevant Features for Mobile Manipulation
    (Georgia Institute of Technology, 2011) Nguyen, Hai; Kemp, Charles C.
    We present an active learning approach that enables a mobile manipulator to autonomously learn task-relevant features. For a given behavior, our system trains a Support Vector Machine (SVM) that predicts the 3D locations at which the behavior will succeed. This decision is made based on visual features that surround each 3D location. After a quick initialization by the user, the robot efficiently collects and labels positive and negative examples fully autonomously. To demonstrate the efficacy of our approach, we present results for behaviors that flip a light switch up and down, push the top or bottom of a rocker-type light switch, and open or close a drawer. Our implementation uses a Willow Garage PR2 robot. We show that our approach produces classifiers that predict the success of these behaviors. In addition, we show that the robot can continuously learn from its experience. In our initial evaluation of 6 behaviors with learned classifiers, each behavior succeeded in 5 out of 5 trials with at most one retry.
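    A hedged sketch of an uncertainty-driven selection step for this kind of active learning, assuming scikit-learn; try_behavior and the candidate features are stand-ins, and the paper's own query strategy may differ in detail:

      # Sketch: active learning with an SVM. The robot repeatedly tries the
      # candidate whose features lie closest to the decision boundary (the
      # most uncertain prediction), then retrains on the observed outcome.
      import numpy as np
      from sklearn.svm import SVC

      def active_learn(features, try_behavior, n_seed=4, n_queries=20):
          # Assumes the seed trials include both successes and failures.
          rng = np.random.default_rng(0)
          seed = rng.choice(len(features), size=n_seed, replace=False)
          X = [features[i] for i in seed]
          y = [try_behavior(i) for i in seed]    # initial labeled trials
          for _ in range(n_queries):
              clf = SVC(kernel='rbf').fit(X, y)
              margins = np.abs(clf.decision_function(features))
              i = int(np.argmin(margins))        # most uncertain candidate
              X.append(features[i])
              y.append(try_behavior(i))          # label = observed outcome
          return SVC(kernel='rbf').fit(X, y)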
  • Perceiving Clutter and Surfaces for Object Placement in Indoor Environments
    (Georgia Institute of Technology, 2010-12) Schuster, Martin J.; Okerman, Jason; Nguyen, Hai; Rehg, James M.; Kemp, Charles C.
    Handheld manipulable objects can often be found on flat surfaces within human environments. Researchers have previously demonstrated that perceptually segmenting a flat surface from the objects resting on it can enable robots to pick and place objects. However, methods for performing this segmentation can fail when applied to scenes with natural clutter. For example, low-profile objects and dense clutter that obscures the underlying surface can complicate the interpretation of the scene. As a first step towards characterizing the statistics of real-world clutter in human environments, we have collected and hand labeled 104 scans of cluttered tables using a tilting laser range finder (LIDAR) and a camera. Within this paper, we describe our method of data collection, present notable statistics from the dataset, and introduce a perceptual algorithm that uses machine learning to discriminate surface from clutter. We also present a method that enables a humanoid robot to place objects on uncluttered parts of flat surfaces using this perceptual algorithm. In cross-validation tests, the perceptual algorithm achieved a correct classification rate of 78.70% for surface and 90.66% for clutter, and outperformed our previously published algorithm. Our humanoid robot succeeded in 16 out of 20 object placing trials on 9 different unaltered tables, and performed successfully in several high-clutter situations. 3 out of 4 failures resulted from placing objects too close to the edge of the table.
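    A minimal sketch of surface-versus-clutter classification with per-class cross-validation rates, assuming scikit-learn; the features are synthetic stand-ins (the paper's real features would be per-point measurements such as height above the fitted table plane):

      # Sketch: discriminate supporting surface (0) from clutter (1) and
      # report per-class correct-classification rates via cross-validation.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import recall_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(400, 3))                  # stand-in point features
      y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(int)

      pred = cross_val_predict(SVC(kernel='rbf'), X, y, cv=5)
      surface_rate = recall_score(y, pred, pos_label=0)  # surface recall
      clutter_rate = recall_score(y, pred, pos_label=1)  # clutter recall
      print(f'surface: {surface_rate:.2%}  clutter: {clutter_rate:.2%}')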
  • The complex structure of simple devices: A survey of trajectories and forces that open doors and drawers
    (Georgia Institute of Technology, 2010-09) Jain, Advait; Nguyen, Hai; Rath, Mrinal; Okerman, Jason; Kemp, Charles C.
    Instrumental activities of daily living (IADLs) involve physical interactions with diverse mechanical systems found within human environments. In this paper, we describe our efforts to capture the everyday mechanics of doors and drawers, which form an important sub-class of mechanical systems for IADLs. We also discuss the implications of our results for the design of assistive robots. By answering questions such as “How high are the handles of most doors and drawers?” and “What forces are necessary to open most doors and drawers?”, our approach can inform robot designers as they make tradeoffs between competing requirements for assistive robots, such as cost, workspace, and power. Using a custom motion/force capture system, we captured kinematic trajectories and forces while operating 29 doors and 15 drawers in 6 homes and 1 office building in Atlanta, GA, USA. We also hand-measured the kinematics of 299 doors and 152 drawers in 11 area homes. We show that operation of these seemingly simple mechanisms involves significant complexities, including non-linear forces and large kinematic variation. We also show that the data exhibit significant structure. For example, 91.8% of the variation in the force sequences used to open doors can be represented using a 2-dimensional linear subspace. This complexity and structure suggests that capturing everyday mechanics may be a useful approach for improving the design of assistive robots.
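    The 2-dimensional-subspace claim is a principal-component-style statement; a minimal sketch, with synthetic force sequences standing in for the captured door-opening data:

      # Sketch: how much of the variation in a set of force sequences is
      # captured by a 2-D linear subspace (PCA via SVD). The sequences are
      # synthetic stand-ins: two dominant modes plus small noise.
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 100)
      F = (rng.normal(size=(40, 1)) * np.sin(np.pi * t)
           + rng.normal(size=(40, 1)) * t
           + 0.05 * rng.normal(size=(40, 100)))

      Fc = F - F.mean(axis=0)                    # center the sequences
      s = np.linalg.svd(Fc, compute_uv=False)    # singular values
      explained = (s[:2] ** 2).sum() / (s ** 2).sum()
      print(f'variance captured by 2-D subspace: {explained:.1%}')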
  • RFID-Guided Robots for Pervasive Automation
    (Georgia Institute of Technology, 2010-01-15) Deyle, Travis; Nguyen, Hai; Reynolds, Matt S.; Kemp, Charles C.
    Passive UHF RFID tags are well matched to robots' needs. Unlike low-frequency (LF) and high-frequency (HF) RFID tags, passive UHF RFID tags are readable from across a room, enabling a mobile robot to efficiently discover and locate them. Using tags' unique IDs, a semantic database, and RF perception via actuated antennas, this paper shows how a robot can reliably interact with people and manipulate labeled objects.
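    A minimal sketch of the tag-ID-to-semantics lookup, with hypothetical tag IDs and schema, and a crude strongest-antenna bearing estimate standing in for real RF perception:

      # Sketch: map a tag's unique ID to semantic information the robot can
      # act on. Tag IDs, schema, and RSSI values are hypothetical.
      SEMANTIC_DB = {
          'E200.6812.3389': {'object': 'medication bottle',
                             'grasp': 'top_pinch'},
          'E200.6812.4471': {'object': 'TV remote',
                             'grasp': 'side_wrap'},
      }

      def handle_tag_read(tag_id, rssi_by_antenna):
          entry = SEMANTIC_DB.get(tag_id)
          if entry is None:
              return None
          # Crude bearing cue: steer toward the actuated antenna that
          # reported the strongest return (stand-in for RF localization).
          bearing = max(rssi_by_antenna, key=rssi_by_antenna.get)
          return entry['object'], entry['grasp'], bearing

      print(handle_tag_read('E200.6812.3389', {'left': -61.0, 'right': -54.5}))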