Title
Occlusion-Aware Object Localization, Segmentation and Pose Estimation

Author(s)
Brahmbhatt, Samarth
Ben Amor, Heni
Christensen, Henrik I.
Abstract
We present a learning approach for localization and segmentation of objects in an image that is robust to partial occlusion. Our algorithm produces a bounding box around the full extent of the object and labels pixels in the interior that belong to the object. Like existing segmentation-aware detection approaches, we learn an appearance model of the object and consider regions that do not fit this model as potential occlusions. However, in addition to the established use of pairwise potentials for encouraging local consistency, we use higher-order potentials which capture information at the level of image segments. We also propose an efficient loss function that targets both localization and segmentation performance. Our algorithm achieves 13.52% segmentation error and 0.81 area under the false positives per image vs. recall curve on average over the challenging CMU Kitchen Occlusion Dataset. This is 42.44% less segmentation error and a 16.13% increase in localization performance compared to the state of the art. Finally, we show that the visibility labeling produced by our algorithm can make full 3D pose estimation from a single image robust to occlusion.
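The abstract's combination of an appearance model, pairwise potentials for local consistency, and higher-order potentials over image segments corresponds to a conditional random field energy. The sketch below is an illustrative, generic form of such an energy over per-pixel visibility labels, written with placeholder notation; it is not the paper's exact formulation.

% Illustrative CRF energy over visibility labels y_i (1 = object visible at pixel i, 0 = occluded).
% The potentials psi are generic placeholders, not the paper's definitions.
E(\mathbf{y}) = \sum_{i \in \mathcal{V}} \psi_i(y_i)
              + \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(y_i, y_j)
              + \sum_{s \in \mathcal{S}} \psi_s(\mathbf{y}_s)
% \psi_i(y_i)          : unary term scoring how well pixel i fits the learned object appearance model
% \psi_{ij}(y_i, y_j)  : pairwise term over neighboring pixels (i, j) in \mathcal{E}, encouraging locally consistent labels
% \psi_s(\mathbf{y}_s) : higher-order term over an image segment s in \mathcal{S}, penalizing label disagreement within the segment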
Date Issued
2015-09
Resource Type
Text
Resource Subtype
Proceedings