Person:
Dellaert, Frank

Publication Search Results

Now showing 1 - 10 of 15
  • Item
    Towards Planning in Generalized Belief Space
    (Georgia Institute of Technology, 2013-12) Indelman, Vadim ; Carlone, Luca ; Dellaert, Frank
    We investigate the problem of planning under uncertainty, which is of interest in several robotic applications, ranging from autonomous navigation to manipulation. Recent effort from the research community has been devoted to designing planning approaches that work in a continuous domain, relaxing the assumption that the controls belong to a finite set. In this case the robot policy is computed from the current robot belief (planning in belief space), while the environment in which the robot moves is usually assumed to be known or partially known. We contribute to this branch of the literature by relaxing the assumption of a known environment; for this purpose we introduce the concept of generalized belief space (GBS), in which the robot maintains a joint belief over its state and the state of the environment. We use GBS within a Model Predictive Control (MPC) scheme; our formulation is valid for general cost functions and incorporates a dual-layer optimization: the outer layer computes the best control action, while the inner layer computes the generalized belief given the action. The resulting approach does not require prior knowledge of the environment and does not assume maximum likelihood observations. We also present an application to a specific family of cost functions and illustrate the theoretical derivation with numerical examples.
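    The dual-layer optimization described above can be sketched as follows. This is a minimal toy, not the paper's formulation: it assumes a 1D robot plus a single landmark in a joint Gaussian belief, a trivial linear motion and observation model, and a hypothetical cost that trades goal progress against joint uncertainty.

    ```python
    import numpy as np

    def propagate_belief(mean, cov, u, Q, R):
        # Inner layer: predict the joint robot+landmark belief under control u,
        # then fold in the information an anticipated observation contributes
        # (the covariance update does not depend on the measured value, so no
        # maximum-likelihood-observation assumption is needed for it).
        mean_p = mean.copy()
        mean_p[0] += u                      # toy 1D motion: only the robot state moves
        cov_p = cov + Q                     # process noise grows uncertainty
        H = np.array([[1.0, -1.0]])         # robot observes itself relative to the landmark
        S = H @ cov_p @ H.T + R
        K = cov_p @ H.T @ np.linalg.inv(S)
        cov_u = (np.eye(len(mean)) - K @ H) @ cov_p
        return mean_p, cov_u

    def plan_step(mean, cov, candidates, goal, Q, R, alpha=1.0):
        # Outer layer: evaluate a cost over the generalized belief produced by
        # each candidate control and return the best action (one MPC step).
        best_u, best_cost = None, np.inf
        for u in candidates:
            m, P = propagate_belief(mean, cov, u, Q, R)
            cost = abs(m[0] - goal) + alpha * np.trace(P)  # goal progress + uncertainty
            if cost < best_cost:
                best_u, best_cost = u, cost
        return best_u
    ```

    In the real approach the candidate controls live in a continuous domain and the cost is optimized rather than enumerated; the enumeration here only makes the two-layer structure explicit.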
  • Item
    Optical Flow Templates for Superpixel Labeling in Autonomous Robot Navigation
    (Georgia Institute of Technology, 2013-11) Roberts, Richard ; Dellaert, Frank
    Instantaneous image motion in a camera on-board a mobile robot contains rich information about the structure of the environment. We present a new framework, optical flow templates, for capturing this information and an experimental proof-of-concept that labels superpixels using them. Optical flow templates encode the possible optical flow fields due to egomotion for a specific environment shape and robot attitude. We label optical flow in superpixels with the environment shape they image according to how consistent they are with each template. Specifically, in this paper we employ templates highly relevant to mobile robot navigation. Image regions consistent with ground plane and distant structure templates likely indicate free and traversable space, while image regions consistent with neither of these are likely to be nearby objects that are obstacles. We evaluate our method qualitatively and quantitatively in an urban driving scenario, labeling the ground plane, and obstacles such as passing cars, lamp posts, and parked cars. One key advantage of this framework is low computational complexity, and we demonstrate per-frame computation times of 20ms, excluding optical flow and superpixel calculation.
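    The labeling step can be sketched as a subspace-consistency test. This is a simplified illustration, not the paper's implementation: it assumes each template is represented as a basis of possible flow vectors (columns spanning, say, the ground-plane or distant-structure flows), and scores a superpixel's stacked flow by its projection residual onto each template subspace.

    ```python
    import numpy as np

    def template_residual(flow, basis):
        # Project the superpixel's stacked flow vector onto the template's
        # flow subspace (columns of `basis`) and measure what is unexplained.
        coeffs, *_ = np.linalg.lstsq(basis, flow, rcond=None)
        return np.linalg.norm(flow - basis @ coeffs)

    def label_superpixel(flow, templates):
        # Assign the template whose subspace best explains the observed flow.
        residuals = {name: template_residual(flow, B) for name, B in templates.items()}
        return min(residuals, key=residuals.get)
    ```

    Because each label is just a small least-squares solve per superpixel, a per-frame budget on the order of the reported 20ms is plausible once flow and superpixels are already computed.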
  • Item
    DDF-SAM 2.0: Consistent Distributed Smoothing and Mapping
    (Georgia Institute of Technology, 2013-05) Cunningham, Alexander ; Indelman, Vadim ; Dellaert, Frank
    This paper presents a consistent decentralized data fusion approach for robust multi-robot SLAM in dangerous, unknown environments. The DDF-SAM 2.0 approach extends our previous work by combining local and neighborhood information in a single, consistent augmented local map, without the overly conservative approach to avoiding information double-counting in the previous DDF-SAM algorithm. We introduce the anti-factor as a means to subtract information in graphical SLAM systems, and illustrate its use both to replace information in an incremental solver and to cancel out neighborhood information from shared summarized maps. This paper presents and compares three summarization techniques, two exact approaches and an approximation. We evaluate the proposed system on a synthetic example and show that the augmented local system and the associated summarization technique do not double-count information, while keeping performance tractable.
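    The anti-factor idea is easiest to see in information (canonical) form, where a Gaussian factor adds a quadratic term to the information matrix and vector. A minimal sketch, assuming a linear factor z = Hx + noise with information W (this toy is not the DDF-SAM 2.0 code, just the additive/subtractive bookkeeping it relies on):

    ```python
    import numpy as np

    def add_factor(Lmbda, eta, H, W, z):
        # A linear-Gaussian factor z = H x + noise contributes information
        # additively to the canonical form (Lambda, eta).
        return Lmbda + H.T @ W @ H, eta + H.T @ W @ z

    def anti_factor(Lmbda, eta, H, W, z):
        # The anti-factor subtracts exactly what the factor added, e.g. to
        # cancel neighborhood information out of a shared summarized map
        # so it is not double-counted when fused again.
        return Lmbda - H.T @ W @ H, eta - H.T @ W @ z
    ```

    Adding a factor and then its anti-factor recovers the original system exactly, which is what makes replacement and cancellation in an incremental solver possible.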
  • Item
    Autonomous Flight in GPS-Denied Environments Using Monocular Vision and Inertial Sensors
    (Georgia Institute of Technology, 2013-04) Wu, Allen D. ; Johnson, Eric N. ; Kaess, Michael ; Dellaert, Frank ; Chowdhary, Girish
    A vision-aided inertial navigation system that enables autonomous flight of an aerial vehicle in GPS-denied environments is presented. In particular, feature point information from a monocular vision sensor is used to bound the drift resulting from integrating accelerations and angular rate measurements from an Inertial Measurement Unit (IMU) forward in time. An Extended Kalman filter framework is proposed for performing the tasks of vision-based mapping and navigation separately. When GPS is available, multiple observations of a single landmark point from the vision sensor are used to estimate the point’s location in inertial space. When GPS is not available, points that have been sufficiently mapped out can be used for estimating vehicle position and attitude. Simulation and flight test results of a vehicle operating autonomously in a simplified loss-of-GPS scenario verify the presented method.
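    Both modes described above (mapping a landmark when GPS is available, and correcting the vehicle against mapped landmarks when it is not) reduce to the same EKF measurement update applied to different states. A generic sketch, with a hypothetical identity observation model for the test (the paper's actual measurement model is a camera projection):

    ```python
    import numpy as np

    def ekf_update(x, P, z, h, H, R):
        # Standard EKF measurement update: fold observation z, with
        # measurement model h(x) and Jacobian H, into the estimate (x, P).
        y = z - h(x)                        # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new
    ```

    In mapping mode, x would hold a landmark position refined over multiple sightings; in navigation mode, x would hold vehicle position and attitude, with the mapped landmarks treated as known, bounding the IMU integration drift.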
  • Item
    Primate-Inspired Vehicle Navigation Using Optic Flow and Mental Rotations
    (Georgia Institute of Technology, 2013) Arkin, Ronald C. ; Dellaert, Frank ; Srinivasa, Natesh ; Kerwin, Ryan
    Robot navigation already has many relatively efficient solutions: reactive control, simultaneous localization and mapping (SLAM), Rapidly-Exploring Random Trees (RRTs), etc. But many primates possess an additional inherent spatial reasoning capability: mental rotation. Our research addresses the question of what role, if any, mental rotations can play in enhancing existing robot navigational capabilities. To answer this question we explore the use of optical flow as a basis for extracting abstract representations of the world, comparing these representations with a goal state of similar format and then iteratively providing a control signal to a robot to allow it to move in a direction consistent with achieving that goal state. We study a range of transformation methods to implement the mental rotation component of the architecture, including correlation and matching based on cognitive studies. We also include a discussion of how mental rotations may play a key role in understanding spatial advice giving, particularly from other members of the species, whether in map-based format, gestures, or other means of communication. Results to date are presented on our robotic platform.
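    The correlation-based transformation search mentioned above can be sketched in miniature. This is an illustrative toy, not the paper's architecture: it assumes the abstract representation is a small 2D point set, enumerates candidate rotation angles, and scores each by alignment error against the goal representation (a stand-in for the correlation-based matching studied in the paper).

    ```python
    import numpy as np

    def best_rotation(current, goal, angles):
        # Mental-rotation step: rotate the current abstract view by each
        # candidate angle and keep the one that best aligns with the goal
        # view; the chosen angle can then drive the control signal.
        best_a, best_score = None, -np.inf
        for a in angles:
            R = np.array([[np.cos(a), -np.sin(a)],
                          [np.sin(a),  np.cos(a)]])
            rotated = current @ R.T
            score = -np.sum((rotated - goal) ** 2)  # alignment proxy for correlation
            if score > best_score:
                best_a, best_score = a, score
        return best_a
    ```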
  • Item
    Accurate On-Line 3D Occupancy Grids Using Manhattan World Constraints
    (Georgia Institute of Technology, 2012-10) Peasley, Brian ; Birchfield, Stan ; Cunningham, Alexander ; Dellaert, Frank
    In this paper we present an algorithm for constructing nearly drift-free 3D occupancy grids of large indoor environments in an online manner. Our approach combines data from an odometry sensor with output from a visual registration algorithm, and it enforces a Manhattan world constraint by utilizing factor graphs to produce an accurate online estimate of the trajectory of a mobile robotic platform. We also examine the advantages and limitations of the octree data structure representation of a 3D environment. Through several experiments in environments with varying sizes and construction we show that our method reduces rotational and translational drift significantly without performing any loop closing techniques.
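    The effect of a Manhattan world constraint on rotational drift can be illustrated with a deliberately crude stand-in. The paper enforces the constraint through factors in a factor graph; the sketch below instead just snaps a heading estimate to the nearest multiple of 90 degrees when it is close enough, which conveys why the assumption cancels accumulated rotational error in rectilinear indoor environments.

    ```python
    import math

    def manhattan_correct(yaw, tol=math.radians(10.0)):
        # If the estimated heading is within `tol` of a Manhattan direction
        # (a multiple of 90 degrees), snap it there, cancelling drift.
        nearest = round(yaw / (math.pi / 2)) * (math.pi / 2)
        return nearest if abs(yaw - nearest) < tol else yaw
    ```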
  • Item
    Vistas and Wall-Floor Intersection Features: Enabling Autonomous Flight in Man-made Environments
    (Georgia Institute of Technology, 2012-10) Ok, Kyel ; Ta, Duy-Nguyen ; Dellaert, Frank
    We propose a solution to the problem of autonomous flight and exploration in man-made indoor environments with a micro aerial vehicle (MAV), using a frontal camera, a downward-facing sonar, and an IMU. We present a general method to detect and steer an MAV toward distant features that we call vistas while building a map of the environment to detect unexplored regions. Our method enables autonomous exploration capabilities while working reliably in textureless indoor environments that are challenging for traditional monocular SLAM approaches. We overcome the difficulties faced by traditional approaches with Wall-Floor Intersection Features, a novel type of low-dimensional landmark specifically designed for man-made environments to capture the geometric structure of the scene. We demonstrate our results on a small, commercially available quadrotor platform.
  • Item
    Attitude Heading Reference System with Rotation-Aiding Visual Landmarks
    (Georgia Institute of Technology, 2012-07) Beall, Chris ; Ta, Duy-Nguyen ; Ok, Kyel ; Dellaert, Frank
    In this paper we present a novel vision-aided attitude heading reference system for micro aerial vehicles (MAVs) and other mobile platforms, which does not rely on known landmark locations or full 3D map estimation as is common in the literature. Inertial sensors which are commonly found on MAVs suffer from additive biases and noise, and yaw error will grow without bounds. The bearing-only measurements, which we call vistas, aid the vehicle’s heading estimate and allow for long-term operation while correcting for sensor drift. Our method is experimentally validated on a commercially available low-cost quadrotor MAV.
  • Item
    Saliency Detection and Model-based Tracking: a Two Part Vision System for Small Robot Navigation in Forested Environments
    (Georgia Institute of Technology, 2012-05-01) Roberts, Richard ; Ta, Duy-Nguyen ; Straub, Julian ; Ok, Kyel ; Dellaert, Frank
    Towards the goal of fast, vision-based autonomous flight, localization, and map building to support local planning and control in unstructured outdoor environments, we present a method for incrementally building a map of salient tree trunks while simultaneously estimating the trajectory of a quadrotor flying through a forest. We make significant progress in a class of visual perception methods that produce low-dimensional, geometric information that is ideal for planning and navigation on aerial robots, while directing computational resources using motion saliency, which selects objects that are important to navigation and planning. By low-dimensional geometric information, we mean coarse geometric primitives, which for the purposes of motion planning and navigation are suitable proxies for real-world objects. Additionally, we develop a method for summarizing past image measurements that avoids expensive computations on a history of images while maintaining the key non-linearities that make full map and trajectory smoothing possible. We demonstrate results with data from a small, commercially-available quad-rotor flying in a challenging, forested environment.
  • Item
    Planar Segmentation of RGBD Images Using Fast Linear Fitting and Markov Chain Monte Carlo
    (Georgia Institute of Technology, 2012-05) Erdogan, Can ; Paluri, Manohar ; Dellaert, Frank
    With the advent of affordable RGBD sensors such as the Kinect, the collection of depth and appearance information from a scene has become effortless. However, neither the correct noise model for these sensors, nor a principled methodology for extracting planar segmentations, has been developed yet. In this work, we advance the state of the art with the following contributions: we correctly model the Kinect sensor data by observing that the data has inherent noise only over the measured disparity values; we formulate plane fitting as a linear least-squares problem that allows us to quickly merge different segments; and we apply an advanced Markov Chain Monte Carlo (MCMC) method, generalized Swendsen-Wang sampling, to efficiently search the space of planar segmentations. We evaluate our plane fitting and surface reconstruction algorithms with simulated and real-world data.
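    The linear least-squares formulation rests on the fact that a 3D plane projects to an affine function of pixel coordinates in disparity space, d = a·u + b·v + c, which is also where the sensor noise lives. A minimal sketch of that fit (illustrative only; the paper additionally merges segments and samples segmentations with generalized Swendsen-Wang):

    ```python
    import numpy as np

    def fit_disparity_plane(u, v, d):
        # A 3D plane seen by a stereo/structured-light sensor is affine in
        # disparity space: d = a*u + b*v + c. Fitting (a, b, c) is therefore
        # ordinary linear least squares over the pixels of a segment.
        A = np.column_stack([u, v, np.ones_like(u)])
        params, *_ = np.linalg.lstsq(A, d, rcond=None)
        return params
    ```

    Because the model is linear, the normal-equation statistics of two segments can be summed to evaluate a merged fit cheaply, which is what makes fast segment merging possible.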