Organizational Unit:
Rehabilitation Engineering Research Center on Technologies to Support Aging-in-Place for People with Long-Term Disabilities


Publication Search Results

Now showing 1–8 of 8
  • 3D Human Pose Estimation on a Configurable Bed from a Pressure Image
    (2018) Clever, Henry M.; Kapusta, Ariel; Park, Daehyung; Erickson, Zackory; Chitalia, Yash; Kemp, Charles C.
    Robots have the potential to assist people in bed, such as in healthcare settings, yet bedding materials like sheets and blankets can make observation of the human body difficult for robots. A pressure-sensing mat on a bed can provide pressure images that are relatively insensitive to bedding materials. However, prior work on estimating human pose from pressure images has been restricted to 2D pose estimates and flat beds. In this work, we present two convolutional neural networks to estimate the 3D joint positions of a person in a configurable bed from a single pressure image. The first network directly outputs 3D joint positions, while the second outputs a kinematic model that includes estimated joint angles and limb lengths. We evaluated our networks on data from 17 human participants with two bed configurations: supine and seated. Our networks achieved a mean joint position error of 77 mm when tested with data from people outside the training set, outperforming several baselines. We also present a simple mechanical model that provides insight into ambiguity associated with limbs raised off of the pressure mat, and demonstrate that Monte Carlo dropout can be used to estimate pose confidence in these situations. Finally, we provide a demonstration in which a mobile manipulator uses our network’s estimated kinematic model to reach a location on a person’s body in spite of the person being seated in a bed and covered by a blanket.
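The Monte Carlo dropout idea from the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's CNN: the "network" is an invented linear layer, and all numbers are illustrative. The key point is that leaving dropout active at test time and repeating the forward pass yields a spread of predictions whose standard deviation serves as a per-joint confidence estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a fixed linear layer standing in for a trained pose regressor.
W = rng.normal(size=(3, 8))          # maps an 8-d feature vector to a 3-d joint position
features = rng.normal(size=8)        # features from one pressure image (illustrative)

def mc_dropout_predict(features, n_samples=100, p_drop=0.2):
    """Run repeated stochastic forward passes with dropout left on.

    The spread of the predictions acts as a confidence estimate: high
    variance suggests an ambiguous pose, e.g., a limb raised off the mat.
    """
    preds = []
    for _ in range(n_samples):
        mask = rng.random(features.shape) >= p_drop   # drop each feature w.p. p_drop
        dropped = features * mask / (1.0 - p_drop)    # inverted-dropout scaling
        preds.append(W @ dropped)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

mean_pos, std_pos = mc_dropout_predict(features)
```

In the paper's setting, a large `std_pos` for a joint would flag the kinds of ambiguous configurations the mechanical model predicts, such as limbs lifted off the pressure mat.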
  • Haptic Simulation for Robot-Assisted Dressing
    (Georgia Institute of Technology, 2017) Yu, Wenhao; Kapusta, Ariel; Tan, Jie; Kemp, Charles C.; Turk, Greg; Liu, C. Karen
    There is a considerable need for assistive dressing among people with disabilities, and robots have the potential to fulfill this need. However, training such a robot would require extensive trials in order to learn the skills of assistive dressing. Such training would be time-consuming and require considerable effort to recruit participants and conduct trials. In addition, for some cases that might cause injury to the person being dressed, it is impractical and unethical to perform such trials. In this work, we focus on a representative dressing task of pulling the sleeve of a hospital gown onto a person’s arm. We present a system that learns a haptic classifier for the outcome of the task given only a few (2-3) real-world trials with one person. Our system first optimizes the parameters of a physics simulator using real-world data. Using the optimized simulator, the system then simulates more haptic sensory data with noise models that account for randomness in the experiment. We then train hidden Markov models (HMMs) on the simulated haptic data. The trained HMMs can then be used to classify and predict the outcome of the assistive dressing task based on haptic signals measured by a real robot’s end effector. This system achieves 92.83% accuracy in classifying the outcome of the robot-assisted dressing task with people not included in simulation optimization. We compare our classifiers to those trained on real-world data. We show that the classifiers from our system can categorize the dressing task outcomes more accurately than classifiers trained on ten times more real data.
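The first step of the pipeline above, calibrating the simulator against a few real trials, can be sketched as a one-parameter fit. Everything here is invented for illustration (the `simulate_force_trace` stand-in, the stiffness parameter, the grid search); the real system optimized the parameters of a full physics simulator, but the structure is the same: choose simulator parameters that minimize the mismatch with measured haptic traces.

```python
import numpy as np

def simulate_force_trace(stiffness, t):
    """Hypothetical 1-parameter stand-in for the physics simulator:
    pulling force rises over time, scaled by a stiffness-like parameter."""
    return stiffness * (1.0 - np.exp(-t))

t = np.linspace(0.0, 3.0, 50)
true_stiffness = 2.5                      # unknown in practice; set here to fake "real" data
rng = np.random.default_rng(1)
real_trace = simulate_force_trace(true_stiffness, t) + rng.normal(scale=0.05, size=t.size)

# Calibrate the simulator: pick the parameter whose simulated trace best
# matches the few real-world trials (a simple grid search for the sketch).
candidates = np.linspace(0.5, 5.0, 200)
errors = [np.mean((simulate_force_trace(s, t) - real_trace) ** 2) for s in candidates]
best = candidates[int(np.argmin(errors))]
```

Once calibrated, the simulator can generate many noisy haptic traces cheaply, which is what makes training the HMM classifiers on simulated rather than real data practical.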
  • Data-Driven Haptic Perception for Robot-Assisted Dressing
    (Georgia Institute of Technology, 2016-08) Kapusta, Ariel; Yu, Wenhao; Bhattacharjee, Tapomayukh; Liu, C. Karen; Turk, Greg; Kemp, Charles C.
    Dressing is an important activity of daily living (ADL) with which many people require assistance due to impairments. Robots have the potential to provide dressing assistance, but physical interactions between clothing and the human body can be complex and difficult to visually observe. We provide evidence that data-driven haptic perception can be used to infer relationships between clothing and the human body during robot-assisted dressing. We conducted a carefully controlled experiment with 12 human participants during which a robot pulled a hospital gown along the length of each person’s forearm 30 times. This representative task resulted in one of the following three outcomes: the hand missed the opening to the sleeve; the hand or forearm became caught on the sleeve; or the full forearm successfully entered the sleeve. We found that hidden Markov models (HMMs) using only forces measured at the robot’s end effector classified these outcomes with high accuracy. The HMMs’ performance generalized well to participants (98.61% accuracy) and velocities (98.61% accuracy) outside of the training data. They also performed well when we limited the force applied by the robot (95.8% accuracy with a 2N threshold), and could predict the outcome early in the process. Despite the lightweight hospital gown, HMMs that used forces in the direction of gravity substantially outperformed those that did not. The best performing HMMs used forces in the direction of motion and the direction of gravity.
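The classification scheme described above, one HMM per outcome scored by sequence likelihood, can be sketched with a self-contained Gaussian-emission forward algorithm. The two 2-state models and all their parameters are invented for the sketch; the paper's HMMs were trained on measured end-effector forces.

```python
import numpy as np

def log_gauss(x, mu, sigma):
    """Log-density of a scalar observation under per-state Gaussians."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def hmm_loglik(obs, pi, A, mus, sigmas):
    """Log-likelihood of a 1-D observation sequence under a Gaussian HMM
    (forward algorithm carried out in log space for numerical stability)."""
    log_alpha = np.log(pi) + log_gauss(obs[0], mus, sigmas)
    for x in obs[1:]:
        log_alpha = log_gauss(x, mus, sigmas) + \
            np.logaddexp.reduce(log_alpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(log_alpha)

# Two illustrative 2-state models: under "success" the pulling force starts
# low and rises modestly; under "caught" it is high throughout.
A = np.array([[0.9, 0.1], [0.05, 0.95]])
pi = np.array([0.99, 0.01])
models = {
    "success": dict(pi=pi, A=A, mus=np.array([0.2, 1.0]), sigmas=np.array([0.3, 0.3])),
    "caught":  dict(pi=pi, A=A, mus=np.array([2.0, 3.0]), sigmas=np.array([0.5, 0.5])),
}

def classify(obs):
    """Pick the outcome whose HMM assigns the force trace the highest likelihood."""
    return max(models, key=lambda k: hmm_loglik(obs, **models[k]))

trace = np.array([0.2, 0.3, 0.2, 0.9, 1.1, 1.0])   # low force, then a modest rise
label = classify(trace)
```

Because the forward algorithm yields a likelihood after every observation, the same machinery supports the early outcome prediction the abstract mentions: classify from the partial trace seen so far.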
  • Collaboration Between a Robotic Bed and a Mobile Manipulator May Improve Physical Assistance for People with Disabilities
    (Georgia Institute of Technology, 2016-08) Kapusta, Ariel; Chitalia, Yash; Park, Daehyung; Kemp, Charles C.
    We present a robotic system designed to provide physical assistance to a person in bed. The system consists of a robotic bed (Autobed) and a mobile manipulator (PR2) that work together. The 3 degree-of-freedom (DoF) robotic bed moves the person’s body and uses a pressure sensing mat to estimate the body’s position. The mobile manipulator positions itself with respect to the bed and compliantly moves a lightweight object with one of its 7-DoF arms. The system optimizes its motions with respect to a task model and a model of the human’s body. The user provides high-level supervision to the system via a web-based interface. We first evaluated the ability of the robotic bed to estimate the location of the head of a person in a supine configuration via a study with 7 able-bodied participants. This estimation was robust to bedding, including a pillow under the person’s head. We then evaluated the ability of the full system to autonomously reach task-relevant poses on a medical mannequin placed in a supine position on the bed. We found that the robotic bed’s motion and perception each improved the overall system’s performance. Our results suggest that this type of multi-robot system could more effectively bring objects to desired locations with respect to the user’s body than a mobile manipulator working alone. This may in turn lead to improved physical assistance for people with disabilities at home and in healthcare facilities, since many assistive tasks involve an object being moved with respect to a person’s body.
  • Optimization of Robot Configurations for Assistive Tasks
    (Georgia Institute of Technology, 2016) Kapusta, Ariel; Kemp, Charles C.
    Robots can provide assistance with activities of daily living (ADLs) to humans with motor impairments. Specialized robots, such as desktop robotic feeding systems, have been successful for specific assistive tasks when placed in fixed and designated positions with respect to the user. General-purpose mobile manipulators could act as a more versatile form of assistive technology, able to perform many tasks, but selecting a configuration for the robots from which to perform a task can be challenging due to the high number of degrees of freedom of the robots and the complexity of the tasks. As with the specialized, fixed robots, once in a good configuration, another system or the user can provide the fine control to perform the details of the task. In this short paper, we present Task-centric Optimization of robot Configurations (TOC), a method for selecting configurations for a PR2 and a robotic bed to allow the PR2 to provide effective assistance with ADLs. TOC builds upon previous work, Task-centric initial Configuration Selection (TCS), addressing some of the limitations of TCS. Notable alterations are selecting configurations from the continuous configuration space using a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization, introducing a joint-limit-weighted manipulability term, and changing the framework to move all optimization offline and using function approximation at run-time. To evaluate TOC, we created models of 13 ADLs and compared TOC’s and TCS’s performance with these 13 assistive tasks in a computer simulation of a PR2, a robotic bed, and a model of a human body. TOC performed as well as or better than TCS in most of our tests against state estimation error. We also implemented TOC on a real PR2 and a real robotic bed and found that from the TOC-selected configuration the PR2 could reach all task-relevant goals on a mannequin on the bed.
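The continuous configuration search above uses CMA-ES; as a stand-in, the sketch below runs a much simpler (mu, lambda) evolution strategy that adapts only the mean and a scalar step size, not the full covariance matrix. The 2-D "base placement" variable and the `task_score` objective are invented for illustration; TOC's real objective combines reachability and a joint-limit-weighted manipulability term over task models.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_score(xy):
    """Toy stand-in for TOC's objective: reward base placements near
    (1.0, -0.5), loosely imitating a reachability/manipulability score."""
    return -np.sum((xy - np.array([1.0, -0.5])) ** 2)

# Simplified evolution strategy in the spirit of CMA-ES: sample a population
# around the current mean, keep the elite, and shrink the step size.
mean, step = np.zeros(2), 1.0
for _ in range(60):
    pop = mean + step * rng.normal(size=(20, 2))          # 20 candidate configurations
    scores = np.array([task_score(x) for x in pop])
    elite = pop[np.argsort(scores)[-5:]]                  # keep the 5 best samples
    mean = elite.mean(axis=0)                             # recenter on the elite
    step = max(0.9 * step, 1e-3)                          # gently shrink the search

best_config = mean
```

Because this search is expensive, TOC runs it entirely offline and answers run-time queries with function approximation, which is the framework change the abstract highlights.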
  • A Robotic System for Reaching in Dense Clutter that Integrates Model Predictive Control, Learning, Haptic Mapping, and Planning
    (Georgia Institute of Technology, 2014-09) Bhattacharjee, Tapomayukh; Grice, Phillip M.; Kapusta, Ariel; Killpack, Marc D.; Park, Daehyung; Kemp, Charles C.
    We present a system that enables a robot to reach locations in dense clutter using only haptic sensing. Our system integrates model predictive control [1], learned initial conditions [2], tactile recognition of object types [3], haptic mapping, and geometric planning to efficiently reach locations using whole-arm tactile sensing [4]. We motivate our work, present a system architecture, summarize each component of the system, and present results from our evaluation of the system reaching to target locations in dense artificial foliage.
  • Learning to Reach into the Unknown: Selecting Initial Conditions When Reaching in Clutter
    (Georgia Institute of Technology, 2014-09) Park, Daehyung; Kapusta, Ariel; Kim, You Keun; Rehg, James M.; Kemp, Charles C.
    Often in highly-cluttered environments, a robot can observe the exterior of the environment with ease, but cannot directly view nor easily infer its detailed internal structure (e.g., dense foliage or a full refrigerator shelf). We present a data-driven approach that greatly improves a robot’s success at reaching to a goal location in the unknown interior of an environment based on observable external properties, such as the category of the clutter and the locations of openings into the clutter (i.e., apertures). We focus on the problem of selecting a good initial configuration for a manipulator when reaching with a greedy controller. We use density estimation to model the probability of a successful reach given an initial condition and then perform constrained optimization to find an initial condition with the highest estimated probability of success. We evaluate our approach with two simulated robots reaching in clutter, and provide a demonstration with a real PR2 robot reaching to locations through random apertures. In our evaluations, our approach significantly outperformed two alternative approaches when making two consecutive reach attempts to goals in distinct categories of unknown clutter. Notably, our approach only uses sparse readily-apparent features.
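The density-estimation-plus-constrained-optimization recipe above can be sketched in one dimension. The "approach height" variable, the synthetic success data, and the exhaustive search over the feasible interval are all invented for the sketch; the real system estimated densities over full initial manipulator configurations and used a proper constrained optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 1-D initial conditions (e.g., approach height)
# recorded on successful reaches into one category of clutter.
successes = rng.normal(loc=0.3, scale=0.05, size=40)

def kde_density(x, samples, bandwidth=0.03):
    """Gaussian kernel density estimate of the success density p(x)."""
    z = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

# Constrained optimization by exhaustive search over the feasible interval,
# standing in for the constraints imposed by joint limits and the aperture.
candidates = np.linspace(0.1, 0.5, 400)
best_initial = candidates[int(np.argmax(kde_density(candidates, successes)))]
```

Picking the feasible initial condition with the highest estimated success density is exactly the selection step the greedy controller then starts from.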
  • Interleaving Planning and Control for Efficient Haptically-guided Reaching in Unknown Environments
    (Georgia Institute of Technology, 2014) Park, Daehyung; Kapusta, Ariel; Hawke, Jeffrey; Kemp, Charles C.
    We present a new method for reaching in an initially unknown environment with only haptic sensing. In this paper, we propose a haptically-guided interleaving planning and control (HIPC) method with a haptic mapping framework. HIPC runs two planning methods, interleaving a task-space and a joint-space planner, to provide fast reaching performance. It continually replans a valid trajectory, alternating between planners and quickly reflecting collected tactile information from an unknown environment. One key idea is that tactile sensing can be used to directly map an immediate cause of interference when reaching. The mapping framework efficiently assigns raw tactile information from whole-arm tactile sensors into a 3D voxel-based collision map. Our method uses a previously published contact-regulating controller based on model predictive control (MPC). In our evaluation with a physics simulation of a humanoid robot, interleaving was superior at reaching in the 9 types of environments we used.
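The haptic mapping framework above assigns raw whole-arm contact events to a 3-D voxel-based collision map; its core bookkeeping can be sketched with a hash set of quantized voxel indices. The class name, 5 cm resolution, and API are invented for illustration.

```python
import numpy as np

VOXEL = 0.05  # 5 cm voxels (illustrative resolution)

def to_voxel(point):
    """Quantize a 3-D contact point into an integer voxel index."""
    return tuple(np.floor(np.asarray(point, dtype=float) / VOXEL).astype(int))

class HapticMap:
    """Minimal voxel-based collision map fed by whole-arm contact events."""

    def __init__(self):
        self.occupied = set()

    def add_contact(self, point):
        """Record a tactile contact as an occupied voxel."""
        self.occupied.add(to_voxel(point))

    def is_occupied(self, point):
        """Query the map, e.g., during replanning, to avoid known contacts."""
        return to_voxel(point) in self.occupied

hmap = HapticMap()
hmap.add_contact([0.51, 0.12, 0.33])   # a contact sensed by the forearm taxels
```

In the paper's loop, the interleaved task-space and joint-space planners replan against exactly this kind of incrementally growing map, so each new contact immediately constrains the next trajectory.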