Person:
Christensen, Henrik I.

Publication Search Results

Now showing 1 - 10 of 91
  • Item
    Occlusion-Aware Object Localization, Segmentation and Pose Estimation
    (Georgia Institute of Technology, 2015-09) Brahmbhatt, Samarth ; Ben Amor, Heni ; Christensen, Henrik I.
We present a learning approach for localization and segmentation of objects in an image in a manner that is robust to partial occlusion. Our algorithm produces a bounding box around the full extent of the object and labels pixels in the interior that belong to the object. Like existing segmentation-aware detection approaches, we learn an appearance model of the object and consider regions that do not fit this model as potential occlusions. However, in addition to the established use of pairwise potentials for encouraging local consistency, we use higher-order potentials which capture information at the level of image segments. We also propose an efficient loss function that targets both localization and segmentation performance. Our algorithm achieves 13.52% segmentation error and 0.81 area under the false-positive per image vs. recall curve on average over the challenging CMU Kitchen Occlusion Dataset. This is 42.44% less segmentation error and a 16.13% increase in localization performance compared to the state-of-the-art. Finally, we show that the visibility labeling produced by our algorithm can make full 3D pose estimation from a single image robust to occlusion.
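    The combination of unary appearance terms, pairwise smoothness, and segment-level higher-order potentials can be sketched as a toy energy function. This is an illustrative sketch only, not the paper's model: the weights, the four-pixel grid, and the minority-label penalty for the higher-order term are all assumptions.

    ```python
    # Toy energy: unary appearance terms + pairwise smoothness +
    # a higher-order term penalizing label disagreement within a segment.
    # All weights and the pixel grid below are illustrative assumptions.

    def energy(labels, unary, edges, segments, w_pair=1.0, w_seg=2.0):
        """labels: dict pixel -> 0 (occluder) or 1 (object)."""
        e = sum(unary[p][labels[p]] for p in labels)                 # appearance fit
        e += w_pair * sum(labels[a] != labels[b] for a, b in edges)  # smoothness
        for seg in segments:                                         # higher-order term
            ones = sum(labels[p] for p in seg)
            e += w_seg * min(ones, len(seg) - ones)                  # cost of minority labels
        return e

    # 4 toy pixels, one segment covering all of them
    unary = {0: [0.1, 0.9], 1: [0.8, 0.2], 2: [0.7, 0.3], 3: [0.9, 0.1]}
    edges = [(0, 1), (1, 2), (2, 3)]
    segments = [[0, 1, 2, 3]]
    labels_consistent = {0: 1, 1: 1, 2: 1, 3: 1}
    labels_noisy = {0: 1, 1: 1, 2: 0, 3: 1}
    print(energy(labels_consistent, unary, edges, segments))
    print(energy(labels_noisy, unary, edges, segments))
    ```

    The noisy labeling pays both the pairwise and the segment-level penalty, so a minimizer prefers segment-consistent labelings.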
  • Item
    Information-based Reduced Landmark SLAM
    (Georgia Institute of Technology, 2015-05) Choudhary, Siddharth ; Indelman, Vadim ; Christensen, Henrik I. ; Dellaert, Frank
In this paper, we present an information-based approach to select a reduced number of landmarks and poses for a robot to localize itself and simultaneously build an accurate map. We develop an information-theoretic algorithm to efficiently reduce the number of landmarks and poses in a SLAM estimate without compromising the accuracy of the estimated trajectory. We also propose an incremental version of the reduction algorithm that can be used within a SLAM framework, resulting in information-based reduced landmark SLAM. Results of the reduced landmark SLAM algorithm are shown on the Victoria Park dataset and a synthetic dataset and are compared with the standard graph SLAM (SAM [6]) algorithm. We demonstrate a reduction of 40-50% in the number of landmarks and around 55% in the number of poses with minimal estimation error compared to the standard SLAM algorithm.
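    The core idea of information-based reduction can be sketched as a greedy selection: keep the landmarks that contribute the most information, discard the rest. This is a rough sketch under stated assumptions, not the paper's algorithm — the paper works on the full SLAM information matrix, while here each landmark's contribution is reduced to a hypothetical scalar score.

    ```python
    # Hypothetical sketch: rank landmarks by a scalar information
    # contribution and keep only the top fraction. The scores and
    # landmark names below are toy assumptions.

    def reduce_landmarks(marginal_info, keep_ratio=0.5):
        """marginal_info: dict landmark_id -> scalar information contribution."""
        n_keep = max(1, int(len(marginal_info) * keep_ratio))
        ranked = sorted(marginal_info, key=marginal_info.get, reverse=True)
        return set(ranked[:n_keep])  # most informative landmarks survive

    info = {"L1": 9.0, "L2": 0.5, "L3": 4.2, "L4": 0.1, "L5": 7.7, "L6": 1.3}
    kept = reduce_landmarks(info, keep_ratio=0.5)
    print(sorted(kept))
    ```

    With `keep_ratio=0.5` this mirrors the 40-50% landmark reduction reported in the abstract, discarding the low-information half of the map.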
  • Item
    Predicting Daily Activities From Egocentric Images Using Deep Learning
    (Georgia Institute of Technology, 2015) Castro, Daniel ; Hickson, Steven ; Bettadapura, Vinay ; Thomaz, Edison ; Abowd, Gregory D. ; Christensen, Henrik I. ; Essa, Irfan
    We present a method to analyze images taken from a passive egocentric wearable camera along with the contextual information, such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6 month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data.
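    The late fusion idea — letting an image model and a context model each score the activity classes, then combining the scores after the fact — can be sketched in a few lines. The class names, score vectors, and weighted-average fusion rule are illustrative assumptions, not the paper's exact ensemble.

    ```python
    # Minimal late-fusion sketch: per-class scores from an image model are
    # combined with scores from contextual features (e.g. time of day)
    # after each model has scored independently. Weights and scores are
    # toy assumptions.

    def late_fusion(image_scores, context_scores, w=0.7):
        fused = [w * i + (1 - w) * c for i, c in zip(image_scores, context_scores)]
        return max(range(len(fused)), key=fused.__getitem__)  # argmax class

    # 3 toy activity classes: working, eating, commuting
    image_scores = [0.40, 0.35, 0.25]    # the image model alone is unsure
    context_scores = [0.05, 0.90, 0.05]  # context says it is lunchtime
    print(late_fusion(image_scores, context_scores))
    ```

    Here the context breaks the tie in favor of class 1 (eating), which is the kind of accuracy gain the abstract attributes to incorporating contextual information.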
  • Item
    The Confluence of Robotics and Automation for Next Generation Manufacturing
    (2014-09-24) Christensen, Henrik I.
    In this presentation we will discuss how hard automation is, in many cases, being replaced by flexible automation. The ideal factory is no longer a sequence of fixed processing stations, but a swarm of heterogeneous stationary and mobile manipulation systems that can be re-configured to produce one-off products in a system that is fully integrated from design to manufacturing. The integration of new sensors, mixed human-robot interaction, and cloud services offers an opportunity to rethink modern manufacturing.
  • Item
    SLAM with Object Discovery, Modeling and Mapping
    (Georgia Institute of Technology, 2014-09) Choudhary, Siddharth ; Trevor, Alexander J. B. ; Christensen, Henrik I. ; Dellaert, Frank
    Object discovery and modeling have been widely studied in the computer vision and robotics communities. SLAM approaches that make use of objects and higher level features have also recently been proposed. Using higher level features provides several benefits: these can be more discriminative, which helps data association, and can serve to inform service robotic tasks that require higher level information, such as object models and poses. We propose an approach for online object discovery and object modeling, and extend a SLAM system to utilize these discovered and modeled objects as landmarks to help localize the robot in an online manner. Such landmarks are particularly useful for detecting loop closures in larger maps. In addition to the map, our system outputs a database of detected object models for use in future SLAM or service robotic tasks. Experimental results are presented to demonstrate the approach’s ability to detect and model objects, as well as to improve SLAM results by detecting loop closures.
  • Item
    Trust Modeling in Multi-Robot Patrolling
    (Georgia Institute of Technology, 2014-06) Pippin, Charles ; Christensen, Henrik I.
    On typical multi-robot teams, there is an implicit assumption that robots can be trusted to effectively perform assigned tasks. The multi-robot patrolling task is an example of a domain that is particularly sensitive to reliability and performance of robots. Yet reliable performance of team members may not always be a valid assumption even within homogeneous teams. For instance, a robot’s performance may deteriorate over time or a robot may not estimate tasks correctly. Robots that can identify poorly performing team members as performance deteriorates can dynamically adjust the task assignment strategy. This paper investigates the use of an observation-based trust model for detecting unreliable robot team members. Robots can reason over this model to perform dynamic task reassignment to trusted team members. Experiments were performed in simulation and using a team of indoor robots in a patrolling task to demonstrate both centralized and decentralized approaches to task reassignment. The results clearly demonstrate that the use of a trust model can improve performance in the multi-robot patrolling task.
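    One common way to build an observation-based trust model is a Beta-distribution belief over a teammate's reliability, updated from observed task outcomes. The sketch below uses that formulation as an assumption — the update rule, prior, and 0.5 threshold are illustrative choices, not necessarily the paper's model.

    ```python
    # Hedged sketch of an observation-based trust model: a Beta(alpha, beta)
    # belief over a teammate's reliability, updated from observed outcomes.
    # A teammate whose expected trust falls below a threshold becomes a
    # candidate for task reassignment. Prior and threshold are assumptions.

    class TrustModel:
        def __init__(self):
            self.alpha = 1.0  # successes + 1 (uniform prior)
            self.beta = 1.0   # failures + 1

        def observe(self, success):
            if success:
                self.alpha += 1
            else:
                self.beta += 1

        def trust(self):
            return self.alpha / (self.alpha + self.beta)  # posterior mean

    t = TrustModel()
    for outcome in [True, False, False, False, False]:  # teammate degrading
        t.observe(outcome)
    print(round(t.trust(), 3))
    print(t.trust() < 0.5)  # below threshold -> candidate for reassignment
    ```

    As failures accumulate, the posterior mean drops, which is exactly the signal a team member can reason over to trigger centralized or decentralized reassignment.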
  • Item
    Efficient Hierarchical Graph-Based Segmentation of RGBD Videos
    (Georgia Institute of Technology, 2014-06) Hickson, Steven ; Birchfield, Stan ; Essa, Irfan ; Christensen, Henrik I.
    We present an efficient and scalable algorithm for segmenting 3D RGBD point clouds by combining depth, color, and temporal information using a multistage, hierarchical graph-based approach. Our algorithm processes a moving window over several point clouds to group similar regions over a graph, resulting in an initial over-segmentation. These regions are then merged to yield a dendrogram using agglomerative clustering via a minimum spanning tree algorithm. Bipartite graph matching at a given level of the hierarchical tree yields the final segmentation of the point clouds by maintaining region identities over arbitrarily long periods of time. We show that a multistage segmentation with depth then color yields better results than a linear combination of depth and color. Due to its incremental processing, our algorithm can process videos of any length and in a streaming pipeline. The algorithm’s ability to produce robust, efficient segmentation is demonstrated with numerous experimental results on challenging sequences from our own as well as public RGBD data sets.
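    The graph-based merging step can be illustrated with a Kruskal-style union-find pass: sort edges by dissimilarity and union the regions joined by cheap edges. This is a sketch in the spirit of graph-based segmentation only — the fixed threshold, edge weights, and single-stage merge are toy assumptions, whereas the paper runs a multistage hierarchy over depth, color, and time.

    ```python
    # Kruskal/union-find style merging sketch: cheap edges (similar regions)
    # are merged first; edges above the threshold keep regions apart.
    # Edge weights and threshold below are toy assumptions.

    def segment(n_nodes, edges, threshold):
        parent = list(range(n_nodes))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        for w, a, b in sorted(edges):          # cheapest merges first
            if w <= threshold:
                parent[find(a)] = find(b)
        return len({find(i) for i in range(n_nodes)})

    # 5 pixels: two tight clusters {0,1,2} and {3,4} joined by one costly edge
    edges = [(0.1, 0, 1), (0.2, 1, 2), (0.15, 3, 4), (0.9, 2, 3)]
    print(segment(5, edges, threshold=0.3))  # number of resulting segments
    ```

    Raising the threshold merges ever more regions, which is what lets agglomerative clustering over the merge order produce a dendrogram to cut at any level.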
  • Item
    Dynamic, cooperative multi-robot patrolling with a team of UAVs
    (Georgia Institute of Technology, 2013) Pippin, Charles E. ; Christensen, Henrik I. ; Weiss, Lora G.
    The multi-robot patrolling task has practical relevance in surveillance, search and rescue, and security applications. In this task, a team of robots must repeatedly visit areas in the environment, minimizing the time between visits to each. A team of robots can perform this task efficiently; however, challenges remain related to team formation and task assignment. This paper presents an approach for monitoring patrolling performance and dynamically adjusting the task assignment function based on observations of teammate performance. Experimental results are presented from realistic simulations of a cooperative patrolling scenario, using a team of UAVs.
  • Item
    Performance-based task assignment in multi-robot patrolling
    (Georgia Institute of Technology, 2013) Pippin, Charles E. ; Christensen, Henrik I. ; Weiss, Lora G.
    This article applies a performance metric to the multi-robot patrolling task to more efficiently distribute patrol areas among robot team members. The multi-robot patrolling task employs multiple robots to perform frequent visits to known areas in an environment, while minimizing the time between node visits. Conventional strategies for performing this task assume that the robots will perform as expected and do not address situations in which some team members patrol inefficiently. However, reliable performance of team members may not always be a valid assumption. This paper considers an approach for monitoring robot performance in a patrolling task and dynamically reassigning tasks from those team members that perform poorly. Experimental results from simulation and from a team of indoor robots demonstrate that, using this approach, tasks can be distributed dynamically and more efficiently in a multi-robot patrolling application.
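    A minimal sketch of performance-based reassignment, under assumptions: each robot's performance is summarized by the observed mean idleness (time between visits) of the nodes it patrols, and the worst performer's area is handed to the best performer. The robot names, idleness values, and the reassignment rule are all hypothetical.

    ```python
    # Illustrative reassignment rule: track observed mean idleness per
    # robot and move the worst-performing robot's patrol area to the best
    # performer. Lower idleness = better patrolling. Data is hypothetical.

    def reassign(observed_idleness):
        """observed_idleness: dict robot -> mean idleness over its area."""
        best = min(observed_idleness, key=observed_idleness.get)
        worst = max(observed_idleness, key=observed_idleness.get)
        return worst, best  # worst robot's area goes to the best robot

    idleness = {"robot_a": 12.0, "robot_b": 45.5, "robot_c": 14.3}
    worst, best = reassign(idleness)
    print(worst, "->", best)
    ```

    Running this periodically is one simple way a monitoring layer can react when a team member begins to patrol inefficiently.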
  • Item
    3D Textureless Object Detection and Tracking: An Edge-based Approach
    (Georgia Institute of Technology, 2012-10) Choi, Changhyun ; Christensen, Henrik I.
    This paper presents an approach to textureless object detection and 3D pose tracking. Our detection and tracking schemes are coherently integrated in a particle filtering framework on the special Euclidean group, SE(3), in which the visual tracking problem is tackled by maintaining multiple hypotheses of the object pose. For textureless object detection, efficient chamfer matching is employed so that a set of coarse pose hypotheses is estimated from the matching between 2D edge templates of an object and a query image. Particles are then initialized from the coarse pose hypotheses by random drawing based on the costs of the matching. To ensure the initialized particles are at or close to the global optimum, an annealing process is performed after the initialization. While standard edge-based tracking is employed after the annealed initialization, we employ a refinement process to establish improved correspondences between projected edge points from the object model and edge points from an input image. Comparative results for several image sequences with clutter are shown to validate the effectiveness of our approach.
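    The chamfer matching cost can be sketched with a distance transform: precompute, for every location in the query edge map, the distance to the nearest image edge, then score a template by the mean distance its edge points look up. The 1-D grid and two-pass transform below are a deliberately simplified illustration of the idea, not the paper's implementation.

    ```python
    # Chamfer matching sketch (1-D toy): the distance transform of the
    # query edge map lets each template edge point look up its distance
    # to the nearest image edge; the mean distance is the matching cost.

    def distance_transform(edge_mask):
        """For each cell, distance to the nearest True cell (two-pass, 1-D)."""
        inf = float("inf")
        d = [0 if e else inf for e in edge_mask]
        for i in range(1, len(d)):           # forward pass
            d[i] = min(d[i], d[i - 1] + 1)
        for i in range(len(d) - 2, -1, -1):  # backward pass
            d[i] = min(d[i], d[i + 1] + 1)
        return d

    def chamfer_cost(template_points, dt):
        return sum(dt[p] for p in template_points) / len(template_points)

    image_edges = [False, True, False, False, True, False, False]
    dt = distance_transform(image_edges)
    good_template = [1, 4]  # lands exactly on image edges
    bad_template = [2, 6]   # misses them
    print(chamfer_cost(good_template, dt), chamfer_cost(bad_template, dt))
    ```

    Low-cost template placements yield the coarse pose hypotheses, and drawing particles with probability inversely related to cost gives the initialization that the annealing step then refines.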