How Should a Robot Perceive the World?

Author(s)
Saxena, Ashutosh
Abstract
In order for a robot to perform tasks in human environments, it first needs to figure out "what" to perceive. For some robotic tasks (such as an object-finding robot) this is relatively straightforward (e.g., infer object labels from RGB-D data), but many other tasks require a robot to be more creative about what to perceive. For example, to arrange a disorganized room, a robot would need to perceive human preferences about how objects are used, as well as the low-level manipulation strategies. In this talk, I will illustrate the issues surrounding "what to perceive" through a few examples. The key to figuring out "how" to perceive lies in being able to model the underlying "structure" in the problem. I propose that for reasoning about human environments, it is humans who are the true underlying structure in the problem. This is true not only for tasks that involve humans explicitly (such as human activity detection), but also for tasks in which a human was never observed! I will present learning algorithms that model this underlying structure. Finally, using the learned structure, I will present several robotic applications, ranging from single-image-based aerial vehicle navigation to personal robots unloading items from a dishwasher, loading a fridge, arranging a disorganized room, and performing assistive tasks in response to human activities.
Date
2012-11-28
Extent
56:46 minutes
Resource Type
Moving Image
Resource Subtype
Lecture