Organizational Unit: Humanoid Robotics Laboratory

Publication Search Results

  • Humanoid Teleoperation for Whole Body Manipulation
    (Georgia Institute of Technology, 2008-05) Stilman, Mike; Nishiwaki, Koichi; Kagami, Satoshi
    We present results of successful telemanipulation of large, heavy objects by a humanoid robot. Using a single joystick, the operator controls walking and whole body manipulation along arbitrary paths for up to ten minutes of continuous execution. The robot grasps, walks, pushes, pulls, turns and re-grasps loads of up to 55 kg on casters. Our telemanipulation framework changes reference frames online to let the operator steer the robot in free walking, its hands in grasping and the object during mobile manipulation. In the case of manipulation, our system computes a robot motion that satisfies the commanded object path as well as the kinematic and dynamic constraints of the robot. Furthermore, we achieve increased robot stability by learning dynamic friction models of manipulated objects.
  • Learning Object Models for Humanoid Manipulation
    (Georgia Institute of Technology, 2007-11) Stilman, Mike; Nishiwaki, Koichi; Kagami, Satoshi
    We present a successful implementation of rigid grasp manipulation for large objects moved along specified trajectories by a humanoid robot. HRP-2 manipulates tables on casters with a range of loads up to its own mass. The robot maintains dynamic balance by controlling its center of gravity to compensate for reflected forces. To achieve high performance for large objects with unspecified dynamics, the robot learns a friction model for each object and applies it to torso trajectory generation. We empirically compare this method to a purely reactive strategy and show a significant increase in predictive power and stability.
  • Planning and Executing Navigation Among Movable Obstacles
    (Georgia Institute of Technology, 2007) Stilman, Mike; Nishiwaki, Koichi; Kagami, Satoshi; Kuffner, James J.
    This paper explores autonomous locomotion, reaching, grasping and manipulation for the domain of Navigation Among Movable Obstacles (NAMO). The robot perceives and constructs a model of an environment filled with various fixed and movable obstacles, and automatically plans a navigation strategy to reach a desired goal location. The planned strategy consists of a sequence of walking and compliant manipulation operations. It is executed by the robot with online feedback. We give an overview of our NAMO system, as well as provide details of the autonomous planning, online grasping and compliant hand positioning during dynamically-stable walking. Finally, we present results of a successful implementation running on the Humanoid Robot HRP-2.
  • Humanoid HRP2-DHRC for Autonomous and Interactive Behavior
    (Georgia Institute of Technology, 2007) Kagami, Satoshi; Nishiwaki, K.; Kuffner, James; Thompson, S.; Chestnutt, J.; Stilman, Mike; Michel, P.
    Recently, research on humanoid-type robots has become increasingly active, and a broad array of fundamental issues are under investigation. However, in order to achieve a humanoid robot which can operate in human environments, not only the fundamental components themselves, but also the successful integration of these components will be required. At present, almost all humanoid robots that have been developed have been designed for bipedal locomotion experiments. In order to satisfy the functional demands of locomotion as well as high-level behaviors, humanoid robots require good mechanical design, hardware, and software which can support the integration of tactile sensing, visual perception, and motor control. Autonomous behaviors are currently still very primitive for humanoid-type robots. It is difficult to conduct research on high-level autonomy and intelligence in humanoids due to the development and maintenance costs of the hardware. We believe low-level autonomous functions will be required in order to conduct research on higher-level autonomous behaviors for humanoids.
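
The first two publications above describe learning a friction model for each manipulated object and using it for torso trajectory generation and stability. As a minimal, illustrative sketch only (not code from these papers; the function names and data below are hypothetical), a simple Coulomb-plus-viscous friction model, F ≈ mu*sign(v) + b*v, could be fit by least squares to logged hand forces and object velocities:

    # Illustrative sketch: fit a Coulomb + viscous friction model
    #   F = mu * sign(v) + b * v
    # from logged hand force and object velocity data.
    # Hypothetical names and data; not taken from the papers listed above.
    import numpy as np

    def fit_friction_model(velocities, forces):
        """Least-squares fit of (mu, b) for F = mu*sign(v) + b*v."""
        v = np.asarray(velocities, dtype=float)
        f = np.asarray(forces, dtype=float)
        A = np.column_stack([np.sign(v), v])   # regressors: sign(v) and v
        (mu, b), *_ = np.linalg.lstsq(A, f, rcond=None)
        return mu, b

    def predict_force(v, mu, b):
        """Predicted resistive force for a commanded object velocity v."""
        return mu * np.sign(v) + b * v

    if __name__ == "__main__":
        # Hypothetical log: object pushed at several speeds, forces measured.
        v_log = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # m/s
        f_log = np.array([21.0, 23.5, 26.0, 28.4, 31.1])   # N
        mu, b = fit_friction_model(v_log, f_log)
        print(f"mu = {mu:.2f} N, b = {b:.2f} N*s/m")
        print(f"predicted force at 0.3 m/s: {predict_force(0.3, mu, b):.1f} N")

The fitted coefficients give a feed-forward estimate of the resistive force for a commanded object velocity, which is the kind of prediction the abstracts describe feeding into torso trajectory generation.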