Organizational Unit: Healthcare Robotics Lab

Publication Search Results

Now showing 1 - 7 of 7
  • Item
    Assistive Mobile Manipulation for Self-Care Tasks Around the Head
    (Georgia Institute of Technology, 2014) Hawkins, Kelsey P. ; Grice, Phillip M. ; Chen, Tiffany L. ; King, Chih-Hung ; Kemp, Charles C.
    Human-scale mobile robots with arms have the potential to assist people with a variety of tasks. We present a proof-of-concept system that has enabled a person with severe quadriplegia named Henry Evans to shave himself in his own home using a general purpose mobile manipulator (PR2 from Willow Garage). The robot primarily provides assistance by holding a tool (e.g., an electric shaver) at user-specified locations around the user’s head, while he/she moves his/her head against it. If the robot detects forces inappropriate for the task (e.g., shaving), it withdraws the tool. The robot also holds a mirror with its other arm, so that the user can see what he/she is doing. For all aspects of the task, the robot and the human work together. The robot uses a series of distinct semi-autonomous subsystems during the task to navigate to poses next to the wheelchair, attain initial arm configurations, register a 3D model of the person’s head, move the tool to coarse semantically-labeled tool poses (e.g., “Cheek”), and finely position the tool via incremental movements. Notably, while moving the tool near the user’s head, the robot uses an ellipsoidal coordinate system attached to the 3D head model. In addition to describing the complete robotic system, we report results from Henry Evans using it to shave both sides of his face while sitting in his wheelchair at home. He found the process to be long (54 minutes) and the interface unintuitive. Yet, he also found the system to be comfortable to use, felt safe while using it, was satisfied with it, and preferred it to a human caregiver.
  • Item
    An investigation of responses to robot-initiated touch in a nursing context
    (Georgia Institute of Technology, 2013-10) Chen, Tiffany L. ; King, Chih-Hung Aaron ; Thomaz, Andrea L. ; Kemp, Charles C.
    Physical human-robot interaction has the potential to be useful in a number of domains, but this will depend on how people respond to the robot’s actions. For some domains, such as healthcare, a robot is likely to initiate physical contact with a person’s body. In order to investigate how people respond to this type of interaction, we conducted an experiment with 56 people in which a robotic nurse autonomously touched and wiped each participant’s forearm. On average, participants had a favorable response to the first time the robot touched them. However, we found that the perceived intent of the robot significantly influenced people’s responses. If people believed that the robot intended to clean their arms, the participants tended to respond more favorably than if they believed the robot intended to comfort them, even though the robot’s manipulation behavior was the same. Our results suggest that roboticists should consider this social factor in addition to the mechanics of physical interaction. Surprisingly, we found that participants in our study responded less favorably when given a verbal warning prior to the robot’s actions. In addition to these main results, we present post-hoc analyses of participants’ galvanic skin responses (GSR), open-ended responses, attitudes towards robots, and responses to a second trial.
  • Item
    Autonomously learning to visually detect where manipulation will succeed
    (Georgia Institute of Technology, 2013-09) Nguyen, Hai ; Kemp, Charles C.
    Visual features can help predict if a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch. We present methods that enable a mobile manipulator to autonomously learn a function that takes an RGB image and a registered 3D point cloud as input and returns a 3D location at which a manipulation behavior is likely to succeed. With our methods, robots autonomously train a pair of support vector machine (SVM) classifiers by trying behaviors at locations in the world and observing the results. Our methods require a pair of manipulation behaviors that can change the state of the world between two sets (e.g., light switch up and light switch down), classifiers that detect when each behavior has been successful, and an initial hint as to where one of the behaviors will be successful. When given an image feature vector associated with a 3D location, a trained SVM predicts if the associated manipulation behavior will be successful at the 3D location. To evaluate our approach, we performed experiments with a PR2 robot from Willow Garage in a simulated home using behaviors that flip a light switch, push a rocker-type light switch, and operate a drawer. By using active learning, the robot efficiently learned SVMs that enabled it to consistently succeed at these tasks. After training, the robot also continued to learn in order to adapt in the event of failure.
  • Item
    Improving robot manipulation with data-driven object-centric models of everyday forces
    (Georgia Institute of Technology, 2013-06) Jain, Advait ; Kemp, Charles C.
    Based on a lifetime of experience, people anticipate the forces associated with performing a manipulation task. In contrast, most robots lack common sense about the forces involved in everyday manipulation tasks. In this paper, we present data-driven methods to inform robots about the forces that they are likely to encounter when performing specific tasks. In the context of door opening, we demonstrate that data-driven object-centric models can be used to haptically recognize specific doors, haptically recognize classes of door (e.g., refrigerator vs. kitchen cabinet), and haptically detect anomalous forces while opening a door, even when opening a specific door for the first time. We also demonstrate that two distinct robots can use forces captured from people opening doors to better detect anomalous forces. These results illustrate the potential for robots to use shared databases of forces to better manipulate the world and attain common sense about everyday forces.
  • Item
    Reaching in clutter with whole-arm tactile sensing
    (Georgia Institute of Technology, 2013-04) Jain, Advait ; Killpack, Marc D. ; Edsinger, Aaron ; Kemp, Charles C.
    Clutter creates challenges for robot manipulation, including a lack of non-contact trajectories and reduced visibility for line-of-sight sensors. We demonstrate that robots can use whole-arm tactile sensing to perceive clutter and maneuver within it, while keeping contact forces low. We first present our approach to manipulation, which emphasizes the benefits of making contact across the entire manipulator and assumes the manipulator has low-stiffness actuation and tactile sensing across its entire surface. We then present a novel controller that exploits these assumptions. The controller only requires haptic sensing, handles multiple contacts, and does not need an explicit model of the environment prior to contact. It uses model predictive control with a time horizon of length one and a linear quasi-static mechanical model. In our experiments, the controller enabled a real robot and a simulated robot to reach goal locations in a variety of environments, including artificial foliage, a cinder block, and randomly generated clutter, while keeping contact forces low. While reaching, the robots performed maneuvers that included bending objects, compressing objects, sliding objects, and pivoting around objects. In simulation, whole-arm tactile sensing also outperformed per-link force–torque sensing in moderate clutter, with the relative benefits increasing with the amount of clutter.
  • Item
    Robots for Humanity: A Case Study in Assistive Mobile Manipulation
    (Georgia Institute of Technology, 2013-03) Chen, Tiffany L. ; Ciocarlie, Matei ; Cousins, Steve ; Grice, Phillip M. ; Hawkins, Kelsey ; Hsiao, Kaijen ; Kemp, Charles C. ; King, Chih-Hung ; Lazewatsky, Daniel A. ; Nguyen, Hai ; Paepcke, Andreas ; Pantofaru, Caroline ; Smart, William D. ; Takayama, Leila
    Assistive mobile manipulators have the potential to one day serve as surrogates and helpers for people with disabilities, giving them the freedom to perform tasks such as scratching an itch, picking up a cup, or socializing with their families. This article introduces a collaborative project with the goal of putting assistive mobile manipulators into real homes to work with people with disabilities. Through a participatory design process in which users have been actively involved from day one, we are identifying and developing assistive capabilities for the PR2 robot. Our approach is to develop a diverse suite of open source software tools that blend the capabilities of the user and the robot. Within this article, we introduce the project, describe our progress, and discuss lessons we have learned.
  • Item
    RFID-Guided Robots for Pervasive Automation
    (Georgia Institute of Technology, 2010-01-15) Deyle, Travis ; Nguyen, Hai ; Reynolds, Matt S. ; Kemp, Charles C.
    Passive UHF RFID tags are well matched to robots' needs. Unlike low-frequency (LF) and high-frequency (HF) RFID tags, passive UHF RFID tags are readable from across a room, enabling a mobile robot to efficiently discover and locate them. Using tags' unique IDs, a semantic database, and RF perception via actuated antennas, this paper shows how a robot can reliably interact with people and manipulate labeled objects.
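
The item "Assistive Mobile Manipulation for Self-Care Tasks Around the Head" above moves the tool in an ellipsoidal coordinate system attached to the registered 3D head model. A minimal sketch of one plausible parameterization, with placeholder semi-axis values rather than the ones used in the paper:

    import numpy as np

    def ellipsoidal_to_cartesian(lat, lon, height, axes=(0.12, 0.09, 0.10)):
        """Map ellipsoidal coordinates (lat, lon in radians, height in meters
        above the surface) to a Cartesian point in the head frame.
        The semi-axes in `axes` are placeholders, not values from the paper."""
        a, b, c = axes
        # Point on the ellipsoid surface at this latitude/longitude.
        surface = np.array([a * np.cos(lat) * np.cos(lon),
                            b * np.cos(lat) * np.sin(lon),
                            c * np.sin(lat)])
        # Outward unit normal of the ellipsoid at that surface point.
        normal = surface / np.array([a**2, b**2, c**2])
        normal /= np.linalg.norm(normal)
        # Offset outward along the normal by the requested height.
        return surface + height * normal

    # A coarse, semantically labeled pose such as "Cheek" could be stored as
    # ellipsoidal coordinates and converted to the head frame before commanding
    # the arm (illustrative values only).
    cheek_point = ellipsoidal_to_cartesian(lat=0.0, lon=np.radians(60), height=0.02)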
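
The item "Autonomously learning to visually detect where manipulation will succeed" above trains a pair of SVM classifiers by trying behaviors in the world and observing the results, using active learning. A sketch of one round of an uncertainty-sampling loop with scikit-learn; the feature extraction and the try_behavior_at() helper are hypothetical stand-ins:

    import numpy as np
    from sklearn.svm import SVC

    def active_learning_round(svm, X_labeled, y_labeled, X_candidates):
        """Fit the SVM on the successes/failures observed so far, then pick the
        candidate feature vector closest to the decision boundary as the next
        3D location at which to try the behavior (uncertainty sampling)."""
        svm.fit(X_labeled, y_labeled)
        margins = np.abs(svm.decision_function(X_candidates))
        return int(np.argmin(margins))

    # Hypothetical usage: one classifier per behavior (e.g., "switch up"),
    # with features computed from the RGB image and registered point cloud at
    # each candidate location; try_behavior_at() would run the behavior on the
    # robot and report whether its success detector fired.
    svm_up = SVC(kernel="rbf")
    # idx = active_learning_round(svm_up, X_labeled, y_labeled, X_candidates)
    # success = try_behavior_at(candidate_locations[idx])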
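
The item "Improving robot manipulation with data-driven object-centric models of everyday forces" above haptically detects anomalous forces while opening a door. A sketch under the assumption of a simple per-angle mean and standard-deviation model, which is not necessarily the model used in the paper:

    import numpy as np

    def build_force_model(force_trials):
        """force_trials: (n_trials, n_angle_bins) array of opening forces recorded
        at fixed door-angle bins. Returns the per-angle mean and standard deviation,
        i.e. a simple object-centric model of the forces expected for this door."""
        return force_trials.mean(axis=0), force_trials.std(axis=0)

    def is_anomalous(angle_bin, measured_force, mean, std, n_sigma=3.0):
        """Flag a force well above what was observed when opening similar doors."""
        return measured_force > mean[angle_bin] + n_sigma * std[angle_bin]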
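
The item "Reaching in clutter with whole-arm tactile sensing" above uses model predictive control with a time horizon of length one and a linear quasi-static mechanical model. The sketch below is a simplified greedy stand-in for that controller (the paper's version solves a quadratic program); the scalar per-contact stiffness and all parameter values are assumptions:

    import numpy as np

    def one_step_reach(q, ee_error, J_ee, contact_forces, contact_jacobians,
                       stiffness=100.0, f_max=5.0, step=0.02):
        """Take a small joint-space step toward the goal, shrinking it whenever the
        linear quasi-static model (delta_f ~ stiffness * J_c @ delta_q) predicts a
        contact force above f_max."""
        dq = step * np.linalg.pinv(J_ee) @ ee_error           # reduce end-effector error
        for f, J_c in zip(contact_forces, contact_jacobians):
            f_pred = f + stiffness * (J_c @ dq)               # predicted normal force
            if f_pred > f_max:
                # Rescale the step so the predicted force does not exceed the limit.
                scale = (f_max - f) / (f_pred - f)
                dq *= float(np.clip(scale, 0.0, 1.0))
        return q + dq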
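
The item "RFID-Guided Robots for Pervasive Automation" above pairs each tag's unique ID with a semantic database so the robot can act on labeled objects. A sketch of what such a lookup might look like; the schema, tag IDs, and entries are illustrative only:

    # A tag's unique ID keys into a semantic database describing the labeled
    # object. The schema and entries below are illustrative, not those used in
    # the paper.
    SEMANTIC_DB = {
        "E200-3412-0123-4567": {"name": "medication bottle",
                                "category": "grasped object",
                                "delivery_height_m": 0.8},
        "E200-3412-89AB-CDEF": {"name": "light switch",
                                "category": "fixed device",
                                "behavior": "flip_switch"},
    }

    def lookup(tag_id):
        """Return what the robot knows about a tagged object, or None if the
        tag has not been entered into the database."""
        return SEMANTIC_DB.get(tag_id)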