Organizational Unit:
Healthcare Robotics Lab


Publication Search Results

  • Item
    A Clickable World: Behavior Selection Through Pointing and Context for Mobile Manipulation
    (Georgia Institute of Technology, 2008-09) Nguyen, Hai ; Jain, Advait ; Anderson, Cressel D. ; Kemp, Charles C.
    We present a new behavior selection system for human-robot interaction that maps virtual buttons overlaid on the physical environment to the robot's behaviors, thereby creating a clickable world. The user clicks on a virtual button and activates the associated behavior by briefly illuminating a corresponding 3D location with an off-the-shelf green laser pointer. As we have described in previous work, the robot can detect this click and estimate its 3D location using an omnidirectional camera and a pan/tilt stereo camera. In this paper, we show that the robot can select the appropriate behavior to execute using the 3D location of the click, the context around this 3D location, and its own state. For this work, the robot performs this selection process using a cascade of classifiers. We demonstrate the efficacy of this approach with an assistive object-fetching application. Through empirical evaluation, we show that the 3D location of the click, the state of the robot, and the surrounding context are sufficient for the robot to choose the correct behavior from a set of behaviors and perform the following tasks: pick up a designated object from a floor or table, deliver an object to a designated person, place an object on a designated table, go to a designated location, and touch a designated location with its end effector.
    (An illustrative sketch of this cascade-style selection appears after this list.)
  • Item
    EL-E: An Assistive Mobile Manipulator that Autonomously Fetches Objects from Flat Surfaces
    (Georgia Institute of Technology, 2008-03-12) Nguyen, Hai ; Anderson, Cressel D. ; Trevor, Alexander J. B. ; Jain, Advait ; Xu, Zhe ; Kemp, Charles C.
    Objects within human environments are usually found on flat surfaces that are orthogonal to gravity, such as floors, tables, and shelves. We first present a new assistive robot that is explicitly designed to take advantage of this common structure in order to retrieve unmodeled, everyday objects for people with motor impairments. This compact, statically stable mobile manipulator has a novel kinematic and sensory configuration that facilitates autonomy and human-robot interaction within indoor human environments. Second, we present a behavior system that enables this robot to fetch objects selected with a laser pointer from the floor and tables. The robot can approach an object selected with the laser pointer interface, detect if the object is on an elevated surface, raise or lower its arm and sensors to this surface, and visually and tactilely grasp the object. Once the object is acquired, the robot can place the object on a laser-designated surface above the floor, follow the laser pointer on the floor, or deliver the object to a seated person selected with the laser pointer. Within this paper we present initial results for object acquisition and delivery to a seated, able-bodied individual. For this test, the robot succeeded in 6 out of 7 trials (86%).
    (A minimal sketch of this fetch sequence appears after this list.)
  • Item
    A Point-and-Click Interface for the Real World: Laser Designation of Objects for Mobile Manipulation
    (Georgia Institute of Technology, 2008-03) Kemp, Charles C. ; Anderson, Cressel D. ; Nguyen, Hai ; Trevor, Alexander J. B. ; Xu, Zhe
    We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to a mobile robot. The human points at a location of interest and illuminates it (“clicks it”) with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location’s 3D position with respect to the robot’s frame of reference. Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications. We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by “clicking” on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off of the floor.
    (A sketch of the laser-spot detection step follows this list.)
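
The selection mechanism described in “A Clickable World” lends itself to a short illustration. Below is a minimal Python sketch of cascade-style behavior selection from a click's 3D location, its surrounding context, and the robot's state; the feature names, thresholds, and hand-written predicates are assumptions made for illustration and stand in for the paper's trained classifiers.

```python
# Illustrative sketch (not the authors' code): pick a robot behavior from
# a 3D click, local context features, and robot state via a cascade.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Click:
    xyz: Tuple[float, float, float]  # estimated 3D click location, robot frame (m)
    height_above_floor: float        # context feature: height of the clicked point
    near_person: bool                # context feature: person detected near the click?

@dataclass
class RobotState:
    holding_object: bool             # is the gripper currently occupied?

# Each stage pairs a behavior name with a predicate; the first stage that
# fires wins. Hand-written rules stand in for the paper's classifiers.
CASCADE: List[Tuple[str, Callable[[Click, RobotState], bool]]] = [
    ("deliver_to_person", lambda c, s: s.holding_object and c.near_person),
    ("place_on_surface",  lambda c, s: s.holding_object and c.height_above_floor > 0.3),
    ("pickup_from_table", lambda c, s: not s.holding_object and c.height_above_floor > 0.3),
    ("pickup_from_floor", lambda c, s: not s.holding_object and c.height_above_floor <= 0.1),
]

def select_behavior(click: Click, state: RobotState) -> str:
    """Walk the cascade in order and return the first matching behavior."""
    for name, fires in CASCADE:
        if fires(click, state):
            return name
    return "go_to_location"  # fallback when no specialized stage fires

# Example: empty gripper, click lands 0.74 m up on a table.
print(select_behavior(Click((1.2, 0.3, 0.74), 0.74, False), RobotState(False)))
# -> pickup_from_table
```

Because the stages are tested in order, more specific behaviors shadow more general ones, which is the practical appeal of a cascade.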
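
The EL-E abstract outlines a fetch sequence: approach the clicked point, detect whether the object rests on the floor or an elevated surface, move the arm and sensors to that height, then grasp with visual and tactile feedback. The sketch below assumes a hypothetical robot API; every class and method name is invented for illustration.

```python
# Minimal sketch of the fetch sequence described in the EL-E abstract.
class FakeRobot:
    """Stand-in for EL-E's drivers; all method names here are assumptions."""

    def approach(self, xyz):
        print(f"driving toward {xyz}")

    def detect_surface_height(self, xyz):
        # The real robot senses the supporting surface; this stub just
        # trusts the z coordinate of the 3D click.
        return max(0.0, xyz[2])

    def set_arm_height(self, height):
        print(f"raising arm and sensors to {height:.2f} m")

    def grasp(self, xyz):
        print("grasping with visual and tactile feedback")
        return True

def fetch(robot, click_xyz):
    """Fetch an object designated by a 3D laser-pointer click."""
    robot.approach(click_xyz)
    height = robot.detect_surface_height(click_xyz)  # floor vs. table/shelf
    robot.set_arm_height(height)                     # match the surface
    return robot.grasp(click_xyz)

fetch(FakeRobot(), (2.0, 0.5, 0.74))  # e.g. an object on a 0.74 m table
```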
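
The laser designation paper detects clicks with an omnidirectional catadioptric camera behind a narrow-band green optical filter. As a rough software stand-in, assuming an ordinary color camera and OpenCV, one could threshold a narrow hue band around the green of a ~532 nm laser pointer and take the brightest pixel in the mask:

```python
# Illustrative sketch, not the paper's implementation: find a bright green
# laser spot in a single camera frame with an HSV threshold.
import cv2
import numpy as np

def find_laser_spot(bgr_image):
    """Return the (x, y) pixel of the brightest green blob, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Narrow hue band around green, with high saturation and brightness;
    # these bounds are assumptions and would need tuning per camera.
    mask = cv2.inRange(hsv, (50, 100, 200), (70, 255, 255))
    if cv2.countNonZero(mask) == 0:
        return None
    # Keep the value channel only where the mask fires, then take its peak.
    v = cv2.bitwise_and(hsv[:, :, 2], hsv[:, :, 2], mask=mask)
    _, _, _, max_loc = cv2.minMaxLoc(v)
    return max_loc  # (x, y) in image coordinates

# Synthetic test: black frame with one saturated green pixel.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40, 80] = (0, 255, 0)  # pure green in BGR
print(find_laser_spot(frame))  # -> (80, 40)
```

A hardware narrow-band filter makes the real system far less sensitive to green clutter and ambient light than this software threshold can be.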