Title:
Safe Robot Planning and Control Using Uncertainty-Aware Deep Learning

Author(s)
Fan, David D.
Advisor(s)
Theodorou, Evangelos A.
Agha-mohammadi, Ali-agha
Abstract
In order for robots to operate autonomously in novel environments over extended periods of time, they must learn and adapt to changes in the dynamics of their motion and of the environment. Neural networks have been shown to be a versatile and powerful tool for learning dynamics and semantic information. However, there is reluctance to deploy these methods in safety-critical or high-risk applications, since neural networks tend to be black-box function approximators. Therefore, there is a need to investigate how these machine learning methods can be safely leveraged for learning-based control, planning, and traversability assessment. The aim of this thesis is to explore methods both for establishing safety guarantees and for accurately quantifying risks when using deep neural networks for robot planning, especially in high-risk environments. First, we consider uncertainty-aware Bayesian Neural Networks for adaptive control, and introduce a method for guaranteeing safety under certain assumptions. Second, we investigate deep quantile regression methods for learning time- and state-varying uncertainties, which we use to perform trajectory optimization with Model Predictive Control. Third, we introduce a complete framework for risk-aware traversability and planning, which we use to enable safe exploration of extreme environments. Fourth, we again leverage deep quantile regression and establish a method for accurately learning the distribution of traversability risks in these environments, which can be used to create safety constraints for planning and control.
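The quantile regression mentioned in the abstract rests on the pinball (quantile) loss: a predictor trained to minimize it converges to the tau-th quantile of the target distribution rather than its mean, which is what allows the learned networks to represent asymmetric, state-varying uncertainty. The sketch below is illustrative only, not the thesis's implementation; it uses a constant predictor and synthetic Gaussian data to show that minimizing the pinball loss recovers a chosen quantile.

```python
import random

def pinball_loss(errors, tau):
    # Pinball (quantile) loss averaged over residuals e = y - y_hat.
    # Its minimizer over constant predictors is the tau-th sample quantile.
    return sum(max(tau * e, (tau - 1.0) * e) for e in errors) / len(errors)

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(5000)]

# Grid-search a constant predictor q; the minimizing q approximates the
# 90th percentile of a standard normal (about 1.28).
tau = 0.9
candidates = [i / 100 for i in range(-300, 301)]
best_q = min(candidates,
             key=lambda q: pinball_loss([y - q for y in samples], tau))
print(best_q)
```

In a deep quantile regression model, the same loss is applied to a network's output at each state and time step (often for several values of tau at once), so the learned quantiles can feed directly into risk constraints for planning.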
Date Issued
2021-07-26
Resource Type
Text
Resource Subtype
Dissertation