Title
Conflict-Aware Risk-Averse and Safe Reinforcement Learning: A Meta-Cognitive Learning Framework

Author(s)
Modares, Hamidreza
Abstract
While the success of reinforcement learning (RL) in computer games has been an impressive engineering feat, safety-critical systems such as unmanned vehicles must, unlike computer games, operate in the real world, which makes the entire enterprise far less predictable. Standard RL practice generally embeds pre-specified performance metrics or objectives into the RL agent to encode the designer's intentions and preferences across different and sometimes conflicting goals (e.g., cost efficiency, safety, speed of response, accuracy). Optimizing pre-specified performance metrics, however, cannot provide safety and performance guarantees across the vast variety of circumstances that the system might encounter in non-stationary and hostile environments.

In this talk, I will discuss novel metacognitive RL algorithms that learn not only a control policy that optimizes accumulated reward, but also which reward functions to optimize in the first place, so as to formally assure safety with good enough performance. I will present safe RL algorithms that adapt the RL algorithm's focus of attention across its various performance and safety objectives to resolve conflicts and thus assure the feasibility of the reward function in new circumstances.

Moreover, model-free RL algorithms will be presented to solve the risk-averse optimal control (RAOC) problem, which optimizes the expected utility of outcomes while reducing the variance of the cost under aleatory uncertainty (i.e., randomness). This matters because performance-critical systems must not only optimize expected performance but also reduce its variance to avoid performance fluctuations during the RL agent's operation. To solve the RAOC problem, I will present three variants of RL algorithms and analyze their advantages and suitability for different situations and systems: 1) a one-shot RL algorithm based on a static convex program, 2) an iterative value iteration algorithm that solves a linear program at each iteration, and 3) an iterative policy iteration algorithm that solves a convex optimization at each iteration and guarantees the stability of the consecutive control policies.
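To make the mean-variance trade-off behind the RAOC problem concrete, below is a minimal, self-contained sketch. It is not the speaker's algorithm: the toy two-state chain, the two candidate policies, and the risk weight `lam` are all illustrative assumptions. It only demonstrates the criterion J(pi) = E[G] + lam * Var[G], under which a policy with a lower expected cost but high variance can lose to a steadier one.

```python
# Illustrative sketch of the mean-variance (risk-averse) criterion.
# NOT the talk's RL algorithms: the toy MDP, the two policies, and the
# risk weight `lam` are assumptions made for this example only.
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(policy, horizon=20):
    """Simulate one episode of a toy chain and return the total cost.

    Action 0 is cautious: a steady moderate per-step cost.
    Action 1 is aggressive: cheaper on average but with high-variance
    outcomes -- a stand-in for aleatory (random) uncertainty.
    """
    total = 0.0
    for _ in range(horizon):
        if policy() == 0:
            total += 1.0                      # cautious: deterministic cost
        else:
            total += rng.normal(0.8, 2.0)     # aggressive: mean 0.8, std 2.0
    return total

def risk_averse_objective(policy, lam=0.5, episodes=5000):
    """J(pi) = E[G] + lam * Var[G]; lam trades expected cost vs. variance."""
    returns = np.array([rollout_cost(policy) for _ in range(episodes)])
    return returns.mean() + lam * returns.var(), returns.mean(), returns.var()

cautious = lambda: 0
aggressive = lambda: 1

for name, pi in [("cautious", cautious), ("aggressive", aggressive)]:
    J, m, v = risk_averse_objective(pi)
    print(f"{name:10s}  mean={m:6.2f}  var={v:7.2f}  J={J:7.2f}")
```

With lam = 0 (risk-neutral), the aggressive policy wins on expected cost alone; with lam = 0.5 its variance penalty dominates and the cautious policy is preferred, which is the behavior the abstract's RAOC formulation is designed to capture.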
Date Issued
2021-04-14
Extent
60:34 minutes
Resource Type
Moving Image
Resource Subtype
Lecture