Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)

Publication Search Results

Showing 5 of 5 items
  • Item
    Towards Assistive Feeding with a General-Purpose Mobile Manipulator
    (Georgia Institute of Technology, 2016-05) Park, Daehyung ; Kim, You Keun ; Erickson, Zackory ; Kemp, Charles C.
    General-purpose mobile manipulators have the potential to serve as a versatile form of assistive technology. However, their complexity creates challenges, including the risk of being too difficult to use. We present a proof-of-concept robotic system for assistive feeding that consists of a Willow Garage PR2, a high-level web-based interface, and specialized autonomous behaviors for scooping and feeding yogurt. As a step towards use by people with disabilities, we evaluated our system with 5 able-bodied participants. All 5 successfully ate yogurt using the system and reported high rates of success for the system’s autonomous behaviors. Also, Henry Evans, a person with severe quadriplegia, operated the system remotely to feed an able-bodied person. In general, people who operated the system reported that it was easy to use, including Henry. The feeding system also incorporates corrective actions designed to be triggered either autonomously or by the user. In an offline evaluation using data collected with the feeding system, a new version of our multimodal anomaly detection system outperformed prior versions.
  • Item
    Monocular Parallel Tracking and Mapping with Odometry Fusion for MAV Navigation in Feature-Lacking Environments
    (Georgia Institute of Technology, 2013-11) Ta, Duy-Nguyen ; Ok, Kyel ; Dellaert, Frank
    Despite recent progress, autonomous navigation on Micro Aerial Vehicles with a single frontal camera is still a challenging problem, especially in feature-lacking environ- ments. On a mobile robot with a frontal camera, monoSLAM can fail when there are not enough visual features in the scene, or when the robot, with rotationally dominant motions, yaws away from a known map toward unknown regions. To overcome such limitations and increase responsiveness, we present a novel parallel tracking and mapping framework that is suitable for robot navigation by fusing visual data with odometry measurements in a principled manner. Our framework can cope with a lack of visual features in the scene, and maintain robustness during pure camera rotations. We demonstrate our results on a dataset captured from the frontal camera of a quad- rotor flying in a typical feature-lacking indoor environment.
  • Item
    Turn-Taking for Human-Robot Interaction
    (Georgia Institute of Technology, 2010) Chao, Crystal ; Thomaz, Andrea L.
  • Item
    4D View Synthesis: Navigating Through Time and Space
    (Georgia Institute of Technology, 2007) Sun, Mingxuan ; Schindler, Grant ; Kang, Sing Bing ; Dellaert, Frank
  • Item
    Rao-Blackwellized Importance Sampling of Camera Parameters from Simple User Input with Visibility Preprocessing in Line Space
    (Georgia Institute of Technology, 2006-06) Quennesson, Kevin ; Dellaert, Frank
    Users know what they see before where they are: it is more natural to talk about high level visibility information ("I see such object") than about one's location or orientation. In this paper we introduce a method to find in 3D worlds a density of viewpoints of camera locations from high level visibility constraints on objects in this world. Our method is based on Rao-Blackwellized importance sampling. For efficiency purposes, the proposal distribution used for sampling is extracted from a visibility preprocessing technique adapted from computer graphics. We apply the method for finding in a 3D city model of Atlanta the virtual locations of real-world cameras and viewpoints.