Title:
The Statistical Foundations of Learning to Control

Author(s)
Recht, Benjamin
Abstract
Given the dramatic successes in machine learning and reinforcement learning over the past half decade, there has been a surge of interest in applying these techniques to continuous control problems in robotics and autonomous vehicles. Though such control applications appear to be straightforward generalizations of standard reinforcement learning, few fundamental baselines have been established prescribing how well one must know a system in order to control it. In this talk, I will discuss how one might merge techniques from statistical learning theory with robust control to derive such baselines for continuous control. I will explore several examples that balance parameter identification against controller design, demonstrating finite-sample tradeoffs between estimation fidelity and desired control performance. I will describe how these simple baselines give us insight into the shortcomings of existing reinforcement learning methodology. I will close by listing several exciting open problems that must be solved before we can build robust, safe learning systems that interact with an uncertain physical environment.
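
To make the "parameter identification versus controller design" tradeoff mentioned in the abstract concrete, here is a minimal Python sketch of a generic estimate-then-control pipeline: fit a linear model from noisy rollouts by least squares, then design a certainty-equivalent LQR controller for the estimated model. This is not material from the talk; the system matrices, noise level, and rollout length are made-up placeholders for illustration only.

```python
# Sketch of an estimate-then-control pipeline (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth linear system x_{t+1} = A x_t + B u_t + w_t.
A = np.array([[1.01, 0.1], [0.0, 0.98]])
B = np.array([[0.0], [1.0]])
n, m = B.shape

# Collect T transitions under random exploratory inputs.
T = 500
X, U, Xnext = [], [], []
x = np.zeros(n)
for _ in range(T):
    u = rng.normal(size=m)
    x_next = A @ x + B @ u + 0.01 * rng.normal(size=n)
    X.append(x); U.append(u); Xnext.append(x_next)
    x = x_next

# Least-squares estimate of [A B] from the regression x_{t+1} ~ [A B][x_t; u_t].
Z = np.hstack([np.array(X), np.array(U)])
Theta, *_ = np.linalg.lstsq(Z, np.array(Xnext), rcond=None)
A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]

# Certainty-equivalent LQR gain for the estimated model via Riccati iteration.
Q, R = np.eye(n), np.eye(m)
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
    P = Q + A_hat.T @ P @ (A_hat - B_hat @ K)

print("estimation error ||A - A_hat||:", np.linalg.norm(A - A_hat))
print("certainty-equivalent LQR gain K:\n", K)
```

The finite-sample question the abstract raises is, roughly, how the data budget T controls the estimation error and, in turn, how much closed-loop performance is lost when the controller K is designed for (A_hat, B_hat) instead of the true (A, B).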
Date Issued
2018-11-14
Extent
59:52 minutes
Resource Type
Moving Image
Resource Subtype
Lecture