Organizational Unit: Healthcare Robotics Lab

Publication Search Results

Visual Odometry and Control for an Omnidirectional Mobile Robot with a Downward-Facing Camera

2010-10; Killpack, Marc D.; Deyle, Travis; Anderson, Cressel D.; Kemp, Charles C.

An omnidirectional Mecanum base allows for more flexible mobile manipulation. However, slipping of the Mecanum wheels results in poor dead-reckoning estimates from wheel encoders, limiting the accuracy and overall utility of this type of base. We present a system with a downward-facing camera and light ring that provides robust visual odometry estimates. We mounted the system under the robot, which allows it to operate in conditions such as large crowds or low ambient lighting. We demonstrate that the visual odometry estimates are sufficient to generate closed-loop PID (Proportional Integral Derivative) and LQR (Linear Quadratic Regulator) controllers for motion control in three different scenarios: waypoint tracking, small disturbance rejection, and sideways motion. We report quantitative measurements that demonstrate superior control performance when using visual odometry compared to wheel encoders. Finally, we show that this system provides high-fidelity odometry estimates and is able to compensate for wheel slip on a four-wheeled omnidirectional mobile robot base.
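
The abstract describes closed-loop control (PID and LQR) driven by the visual odometry estimates. As a rough sketch of the PID side of that idea, and not the authors' implementation, the following Python fragment drives an omnidirectional base toward a 2D waypoint; the gains and the `get_odometry` / `send_velocity` callables are hypothetical.

```python
import numpy as np

class PID:
    """Minimal discrete PID controller for a single axis."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def track_waypoint(get_odometry, send_velocity, waypoint, dt=0.02, tol=0.01):
    """Drive toward a 2D waypoint using position feedback. get_odometry()
    is assumed to return np.array([x, y]) from the visual odometry system;
    send_velocity(vx, vy) commands the omnidirectional base. Gains are
    illustrative, not tuned values from the paper."""
    pid_x = PID(kp=1.2, ki=0.1, kd=0.05, dt=dt)
    pid_y = PID(kp=1.2, ki=0.1, kd=0.05, dt=dt)
    while True:
        error = np.asarray(waypoint) - get_odometry()
        if np.linalg.norm(error) < tol:
            send_velocity(0.0, 0.0)   # close enough: stop the base
            return
        send_velocity(pid_x.update(error[0]), pid_y.update(error[1]))
```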

Laser Pointers and a Touch Screen: Intuitive Interfaces for Autonomous Mobile Manipulation for the Motor Impaired

2008-10; Choi, Young Sang; Anderson, Cressel D.; Glass, Jonathan D.; Kemp, Charles C.

El-E (“Ellie”) is a prototype assistive robot designed to help people with severe motor impairments manipulate everyday objects. When given a 3D location, El-E can autonomously approach the location and pick up a nearby object. Based on interviews of patients with amyotrophic lateral sclerosis (ALS), we have developed and tested three distinct interfaces that enable a user to provide a 3D location to El-E and thereby select an object to be manipulated: an ear-mounted laser pointer, a hand-held laser pointer, and a touch screen interface. Within this paper, we present the results from a user study comparing these three user interfaces with a total of 134 trials involving eight patients with varying levels of impairment recruited from the Emory ALS Clinic. During this study, participants used the three interfaces to select everyday objects to be approached, grasped, and lifted off of the ground. The three interfaces enabled motor impaired users to command a robot to pick up an object with a 94.8% success rate overall after less than 10 minutes of learning to use each interface. On average, users selected objects 69% more quickly with the laser pointer interfaces than with the touch screen interface. We also found substantial variation in user preference. With respect to the Revised ALS Functional Rating Scale (ALSFRS-R), users with greater upper-limb mobility tended to prefer the hand-held laser pointer, while those with less upper-limb mobility tended to prefer the ear-mounted laser pointer. Despite the extra efficiency of the laser pointer interfaces, three patients preferred the touch screen interface, which has unique potential for manipulating remote objects out of the user’s line of sight. In summary, these results indicate that robots can enhance accessibility by supporting multiple interfaces. Furthermore, this work demonstrates that the communication of 3D locations during human-robot interaction can serve as a powerful abstraction barrier that supports distinct interfaces to assistive robots while using identical, underlying robotic functionality.
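
The closing observation, that a 3D location can act as an abstraction barrier between input devices and the robot's underlying functionality, can be illustrated with a minimal sketch; the class and method names below are hypothetical, not the authors' software.

```python
from typing import Protocol, Tuple

Point3D = Tuple[float, float, float]

class SelectionInterface(Protocol):
    """Any input device that can yield a 3D target location."""
    def get_target(self) -> Point3D: ...

class HandheldLaserPointer:
    def get_target(self) -> Point3D:
        # Detect the laser spot and triangulate it (details omitted);
        # a fixed point stands in for a real measurement here.
        return (1.0, 0.5, 0.0)

class TouchScreen:
    def get_target(self) -> Point3D:
        # Map a screen tap through the camera model to a world point.
        return (1.2, 0.4, 0.0)

def fetch(interface: SelectionInterface) -> None:
    """The fetching behavior depends only on the 3D location,
    regardless of which interface produced it."""
    x, y, z = interface.get_target()
    print(f"Approaching object near ({x:.2f}, {y:.2f}, {z:.2f})")

fetch(HandheldLaserPointer())   # same downstream behavior...
fetch(TouchScreen())            # ...for either input device
```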

EL-E: An Assistive Mobile Manipulator that Autonomously Fetches Objects from Flat Surfaces

2008-03-12; Nguyen, Hai; Anderson, Cressel D.; Trevor, Alexander J. B.; Jain, Advait; Xu, Zhe; Kemp, Charles C.

Objects within human environments are usually found on flat surfaces that are orthogonal to gravity, such as floors, tables, and shelves. We first present a new assistive robot that is explicitly designed to take advantage of this common structure in order to retrieve unmodeled, everyday objects for people with motor impairments. This compact, statically stable mobile manipulator has a novel kinematic and sensory configuration that facilitates autonomy and human-robot interaction within indoor human environments. Second, we present a behavior system that enables this robot to fetch objects selected with a laser pointer from the floor and tables. The robot can approach an object selected with the laser pointer interface, detect if the object is on an elevated surface, raise or lower its arm and sensors to this surface, and visually and tactilely grasp the object. Once the object is acquired, the robot can place the object on a laser-designated surface above the floor, follow the laser pointer on the floor, or deliver the object to a seated person selected with the laser pointer. Within this paper we present initial results for object acquisition and delivery to a seated, able-bodied individual. For this test, the robot succeeded in 6 out of 7 trials (86%).
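
The fetching sequence described here (approach, detect an elevated surface, match the arm height, grasp, deliver) is naturally expressed as a small state machine. Below is a minimal, hypothetical sketch under the assumption that a `robot` object exposes the listed primitives; it is not the paper's behavior system.

```python
from enum import Enum, auto

class State(Enum):
    APPROACH = auto()
    CHECK_SURFACE = auto()
    GRASP = auto()
    DELIVER = auto()
    DONE = auto()

def fetch_object(robot, target):
    """Run the fetch sequence; `robot` is assumed to expose drive_to,
    surface_height, set_arm_height, grasp, and deliver_to_person."""
    state = State.APPROACH
    while state is not State.DONE:
        if state is State.APPROACH:
            robot.drive_to(target)          # approach the designated spot
            state = State.CHECK_SURFACE
        elif state is State.CHECK_SURFACE:
            # Raise or lower the arm and sensors to an elevated surface,
            # or keep them at floor level if the object is on the ground.
            robot.set_arm_height(robot.surface_height())
            state = State.GRASP
        elif state is State.GRASP:
            robot.grasp()                   # visual and tactile grasping
            state = State.DELIVER
        elif state is State.DELIVER:
            robot.deliver_to_person()
            state = State.DONE
```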

Human-Robot Interaction Studies for Autonomous Mobile Manipulation for the Motor Impaired

2009-03; Choi, Young Sang; Anderson, Cressel D.; Deyle, Travis; Kemp, Charles C.

We are developing an autonomous mobile assistive robot named El-E to help individuals with severe motor impairments by performing various object manipulation tasks such as fetching, transporting, placing, and delivering. El-E can autonomously approach a location specified by the user through an interface such as a standard laser pointer and pick up a nearby object. The initial target user population of the robot is individuals suffering from amyotrophic lateral sclerosis (ALS). ALS, also known as Lou Gehrig’s disease, is a progressive neurodegenerative disease resulting in motor impairments throughout the entire body. Due to the severity and progressive nature of ALS, the results from developing robotic technologies to assist ALS patients could be applied to broader motor-impaired populations. To accomplish successful development and real-world application of assistive robot technology, we must become familiar with the needs and everyday living conditions of these individuals. We also believe the participation of prospective users throughout the design and development process is essential to improving the usability and accessibility of the robot for the target user population. To assess the needs of prospective users and to evaluate the technology being developed, we applied various human-study methodologies, including interviews, photographic documentation, and controlled experiments. We present an overview of research from the Healthcare Robotics Lab related to patient needs assessment and human experiments, with emphasis on the methods of our human-centered approach.

A foveated passive UHF RFID system for mobile manipulation

2008-09; Deyle, Travis; Anderson, Cressel D.; Kemp, Charles C.; Reynolds, Matt S.

We present a novel antenna and system architecture for mobile manipulation based on passive RFID technology operating in the 850–950 MHz ultra-high-frequency (UHF) spectrum. This system exploits the electromagnetic properties of UHF radio signals to present a mobile robot with both wide-angle “peripheral vision,” sensing multiple tagged objects in the area in front of the robot, and focused, high-acuity “central vision,” sensing only tagged objects close to the end effector of the manipulator. These disparate tasks are performed using the same UHF RFID tag, coupled in two different electromagnetic modes. Wide-angle sensing is performed with an antenna designed for far-field electromagnetic wave propagation, while focused sensing is performed with a specially designed antenna mounted on the end effector that optimizes near-field magnetic coupling. We refer to this RFID system as “foveated,” by analogy with the anatomy of the human eye. We report a series of experiments on an untethered autonomous mobile manipulator in a 2.5D environment that demonstrate the features of this architecture using two novel behaviors: one in which data from the far-field antenna is used to determine if a specific tagged object is present in the robot’s working area and to navigate to that object, and a second using data from the near-field antenna to grasp a specified object from a collection of visually identical objects. The same UHF RFID tag is used to facilitate both the navigation and grasping tasks.
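
As a loose illustration of the foveated sensing scheme, the sketch below combines far-field reads for presence detection and navigation with a near-field read for grasp confirmation. The reader objects, RSSI units, and thresholds are assumptions, not the paper's API.

```python
def find_and_confirm(far_field_reader, near_field_reader, drive_toward,
                     target_id, rssi_threshold=-60.0, max_steps=100):
    """Hypothetical sketch of the two-antenna ('foveated') idea: wide-angle
    far-field reads establish that the tagged object is in the working area
    and steer the robot toward it; the near-field antenna on the end
    effector then confirms the grasp target among visually identical
    objects. inventory() is assumed to return {tag_id: rssi}."""
    reads = far_field_reader.inventory()
    if target_id not in reads:
        return False                      # object not in the working area
    for _ in range(max_steps):            # servo on the far-field signal
        if reads.get(target_id, float("-inf")) >= rssi_threshold:
            break
        drive_toward(target_id)
        reads = far_field_reader.inventory()
    # Near-field magnetic coupling only occurs within a few centimeters,
    # so a read here confirms the end effector is at the right object.
    return target_id in near_field_reader.inventory()
```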

A Point-and-Click Interface for the Real World: Laser Designation of Objects for Mobile Manipulation

2008-03; Kemp, Charles C.; Anderson, Cressel D.; Nguyen, Hai; Trevor, Alexander J. B.; Xu, Zhe

We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to a mobile robot. The human points at a location of interest and illuminates it (“clicks it”) with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location’s 3D position with respect to the robot’s frame of reference. Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications. We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by “clicking” on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off of the floor.
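
With a narrow-band green filter in front of the camera, the laser spot should dominate the green channel, so spot detection can be approximated by thresholding and taking a centroid. The following is a minimal sketch under that assumption, not the paper's detector; `min_brightness` is an illustrative threshold.

```python
import numpy as np

def detect_laser_spot(frame_bgr, min_brightness=200):
    """Return the (u, v) pixel centroid of the brightest green region,
    or None if no candidate pixels are found. Assumes an 8-bit BGR frame
    captured through a narrow-band green filter (hypothetical setup)."""
    green = frame_bgr[:, :, 1].astype(np.float32)
    # Suppress pixels that are bright in all channels (e.g. white glare).
    others = np.maximum(frame_bgr[:, :, 0], frame_bgr[:, :, 2]).astype(np.float32)
    mask = (green - others > 0) & (green > min_brightness)
    if not mask.any():
        return None
    vs, us = np.nonzero(mask)     # row (v) and column (u) indices
    return float(us.mean()), float(vs.mean())
```

In the system described above, a pixel detection like this would then cue the stereo pan/tilt camera, which estimates the 3D position of the spot.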

Hand It Over or Set It Down: A User Study of Object Delivery with an Assistive Mobile Manipulator

2009; Choi, Young Sang; Chen, Tiffany L.; Jain, Advait; Anderson, Cressel D.; Glass, Jonathan D.; Kemp, Charles C.

Delivering an object to a user would be a generally useful capability for service robots. Within this paper, we look at this capability in the context of assistive object retrieval for motor-impaired users. We first describe a behavior-based system that enables our mobile robot EL-E to autonomously deliver an object to a motor-impaired user. We then present our evaluation of this system with 8 motor-impaired patients from the Emory ALS Center. As part of this study, we compared handing the object to the user (direct delivery) with placing the object on a nearby table (indirect delivery). We tested the robot delivering a cordless phone, a medicine bottle, and a TV remote, which were ranked as three of the top four most important objects for robotic delivery by ALS patients in a previous study. Overall, the robot successfully delivered these objects in 126 out of 144 trials (88%), with a success rate of 97% for indirect delivery and 78% for direct delivery. In an accompanying survey, participants reported high satisfaction with the robot; 4 preferred direct delivery and 4 preferred indirect delivery. Our results indicate that indirect delivery to a surface can be a robust and reliable delivery method with high user satisfaction, and that robust direct delivery will require methods that handle diverse postures and body types.

A Clickable World: Behavior Selection Through Pointing and Context for Mobile Manipulation

2008-09; Nguyen, Hai; Jain, Advait; Anderson, Cressel D.; Kemp, Charles C.

We present a new behavior selection system for human-robot interaction that maps virtual buttons overlaid on the physical environment to the robot’s behaviors, thereby creating a clickable world. The user clicks on a virtual button and activates the associated behavior by briefly illuminating a corresponding 3D location with an off-the-shelf green laser pointer. As we have described in previous work, the robot can detect this click and estimate its 3D location using an omnidirectional camera and a pan/tilt stereo camera. In this paper, we show that the robot can select the appropriate behavior to execute using the 3D location of the click, the context around this 3D location, and its own state. For this work, the robot performs this selection process using a cascade of classifiers. We demonstrate the efficacy of this approach with an assistive object-fetching application. Through empirical evaluation, we show that the 3D location of the click, the state of the robot, and the surrounding context are sufficient for the robot to choose the correct behavior from a set of behaviors and perform the following tasks: pick up a designated object from the floor or a table, deliver an object to a designated person, place an object on a designated table, go to a designated location, and touch a designated location with its end effector.
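
A cascade of classifiers for behavior selection can be approximated by a sequence of cheap tests, each of which either commits to a behavior or defers to the next stage. The sketch below is a hypothetical illustration using made-up context keys and thresholds, not the paper's trained cascade.

```python
def select_behavior(click_xyz, context, robot_state):
    """Map (3D click location, local context, robot state) to a behavior
    name via a cascade of simple rules. `context` and `robot_state` are
    assumed to be dictionaries; keys and thresholds are illustrative."""
    x, y, z = click_xyz
    holding = robot_state.get("holding_object", False)
    # Stage 1: a click on a detected person means delivery (if carrying).
    if context.get("person_detected") and holding:
        return "deliver_to_person"
    # Stage 2: clicks near floor level.
    if z < 0.1:
        return "place_object" if holding else "pick_up_from_floor"
    # Stage 3: clicks on an elevated flat surface (e.g. a table).
    if context.get("flat_surface"):
        return "place_on_table" if holding else "pick_up_from_table"
    # Fallback: treat the click as a navigation goal or touch target.
    return "touch_location" if holding else "go_to_location"
```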