Organizational Unit:
Humanoid Robotics Laboratory

Publication Search Results

Now showing 1 - 2 of 2
  • Item
    Planning with Movable Obstacles in Continuous Environments with Uncertain Dynamics
    (Georgia Institute of Technology, 2013-05) Levihn, Martin; Scholz, Jonathan; Stilman, Mike
    In this paper we present a decision-theoretic planner for the problem of Navigation Among Movable Obstacles (NAMO) that operates under the conditions faced by real robotic systems. While planners for the NAMO domain exist, they typically assume a deterministic environment or rely on discretization of the configuration and action spaces, preventing their use in practice. In contrast, we propose a planner that operates under real-world conditions such as uncertainty about the parameters of workspace objects and continuous configuration and action (control) spaces. To achieve robust NAMO planning despite these conditions, we introduce a novel integration of Monte Carlo simulation with an abstract MDP construction. We present theoretical and empirical arguments for time complexity linear in the number of obstacles, as well as a detailed implementation and examples from a dynamic simulation environment.
  • Item
    Hierarchical Decision Theoretic Planning for Navigation Among Movable Obstacles
    (Georgia Institute of Technology, 2012-06) Levihn, Martin; Scholz, Jonathan; Stilman, Mike
    In this paper we present the first decision-theoretic planner for the problem of Navigation Among Movable Obstacles (NAMO). While efficient planners for NAMO exist, they are challenging to implement in practice due to the inherent uncertainty in both perception and control of real robots. Generalizing existing NAMO planners to nondeterministic domains is particularly difficult because of the sensitivity of MDP methods to task dimensionality. Our work addresses this challenge by combining ideas from Hierarchical Reinforcement Learning with Monte Carlo Tree Search, resulting in an algorithm that can be used for fast online planning in uncertain environments. We evaluate our algorithm in simulation and provide a theoretical argument for our results, which suggest linear time complexity in the number of obstacles for typical environments.
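
The first paper above integrates Monte Carlo simulation with an abstract MDP whose actions correspond to clearing individual obstacles. The Python sketch below only illustrates that general idea under assumed details and is not the authors' implementation: the parameter belief, the toy simulate_push cost model, and all function names are hypothetical.

```python
# Hypothetical sketch: Monte Carlo evaluation of abstract "clear obstacle i"
# actions under uncertain object parameters. Not the authors' code.
import random

def sample_parameters(obstacle):
    """Draw one plausible (mass, friction) pair from the obstacle's belief."""
    mass = random.gauss(obstacle["mass_mean"], obstacle["mass_std"])
    friction = random.uniform(*obstacle["friction_range"])
    return max(mass, 0.1), max(friction, 0.01)

def simulate_push(obstacle, mass, friction):
    """Stand-in for a continuous dynamic simulation of pushing the obstacle.
    Returns an estimated effort; the model here is purely illustrative."""
    effort = mass * friction * obstacle["push_distance"]
    return effort + random.gauss(0.0, 0.05 * effort)  # simulation noise

def monte_carlo_action_cost(obstacle, n_samples=100):
    """Average simulated cost of the abstract action 'clear this obstacle'."""
    total = 0.0
    for _ in range(n_samples):
        mass, friction = sample_parameters(obstacle)
        total += simulate_push(obstacle, mass, friction)
    return total / n_samples

if __name__ == "__main__":
    obstacles = [
        {"mass_mean": 2.0, "mass_std": 0.5, "friction_range": (0.2, 0.6), "push_distance": 1.0},
        {"mass_mean": 8.0, "mass_std": 2.0, "friction_range": (0.4, 0.9), "push_distance": 0.5},
    ]
    costs = [monte_carlo_action_cost(o) for o in obstacles]
    best = min(range(len(costs)), key=costs.__getitem__)
    print(f"estimated costs: {costs}, clear obstacle {best} first")
```

In this toy setup each abstract action is evaluated by its own batch of simulations, so the work grows linearly with the number of obstacles, mirroring the complexity property the abstract highlights.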
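The second paper combines hierarchical action abstraction with Monte Carlo Tree Search for fast online planning under uncertainty. Below is a minimal, generic UCT-style sketch over abstract actions; it is not the published algorithm, and the environment callbacks (actions_fn, step_fn, reward_fn, is_terminal) are placeholders a caller would supply.

```python
# Generic UCT-style Monte Carlo Tree Search sketch over abstract actions.
# Illustrative only; not the published NAMO planner.
import math
import random

class Node:
    """One search-tree node: a state reached by taking `action` from `parent`."""
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

def uct_select(node, c=1.4):
    """Pick the child maximizing the UCB1 score (exploitation + exploration)."""
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, actions_fn, step_fn, reward_fn, is_terminal, iters=500):
    """Return the best abstract action from root_state after `iters` simulations."""
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while the current node is fully expanded.
        while node.children and len(node.children) == len(actions_fn(node.state)):
            node = uct_select(node)
        # 2. Expansion: try one untried abstract action, if any remain.
        if not is_terminal(node.state):
            tried = {ch.action for ch in node.children}
            untried = [a for a in actions_fn(node.state) if a not in tried]
            if untried:
                a = random.choice(untried)
                node.children.append(Node(step_fn(node.state, a), node, a))
                node = node.children[-1]
        # 3. Rollout: random simulation from the new node to a terminal state.
        state, ret = node.state, 0.0
        while not is_terminal(state):
            state = step_fn(state, random.choice(actions_fn(state)))
            ret += reward_fn(state)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += ret
            node = node.parent
    return uct_select(root, c=0.0).action  # greedy choice at the root
```

A NAMO-style caller might, for instance, define the state as the robot pose plus obstacle poses and the abstract actions as "move obstacle i aside" or "drive to the goal"; keeping the action set small per state is what keeps each planning step cheap in such a sketch.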