Title:
Deep Reinforcement Learning Using Data-Driven Reduced-Order Models Discovers and Stabilizes Low Dissipation Equilibria Part 2: Data-driven dimension reduction, dynamic modeling, and control of complex chaotic systems

Author(s)
Graham, Michael
Zeng, Kevin
Abstract
Mike Graham (overview): Our overall aim is to combine ideas from dynamical systems theory and machine learning to develop and apply reduced-order models of flow processes with complex chaotic dynamics. A particular aim is a minimal description of the dynamics on manifolds whose dimension is much smaller than the nominal state dimension, and the use of these models to develop effective control strategies for reducing energy dissipation.
Kevin Zeng: Deep reinforcement learning (RL), a data-driven method capable of discovering complex control strategies for high-dimensional systems, requires substantial interaction with the target system, making it costly to apply when the system is computationally or experimentally expensive (e.g., flow control). We mitigate this challenge by combining dimension reduction via an autoencoder with a neural ODE framework to learn a low-dimensional dynamical model, which we substitute in place of the true system during RL training to efficiently estimate the control policy. We apply our method to data from the Kuramoto-Sivashinsky equation. With the goal of minimizing dissipation, we extract control policies from the model using RL, show that the model-based strategies perform well on the full dynamical system, and highlight that the RL agent discovers and stabilizes a forced equilibrium solution despite never having been given explicit information about this state's existence.
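
As a rough illustration of the modeling step described above, the sketch below pairs an autoencoder with a neural-ODE latent model trained on snapshot pairs. It is a minimal PyTorch example under assumed names and sizes (Autoencoder, LatentODE, rk4_step, placeholder random data standing in for Kuramoto-Sivashinsky snapshots), not the authors' implementation.

# Minimal, illustrative sketch of the modeling idea: an autoencoder maps
# high-dimensional snapshots u(t) to a low-dimensional latent state h(t),
# and a neural ODE dh/dt = f_theta(h) is trained so that latent trajectories
# reproduce the data. All names, sizes, and training details are assumptions.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_full=64, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_full, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_full))

    def forward(self, u):
        h = self.encoder(u)
        return self.decoder(h), h

class LatentODE(nn.Module):
    """Right-hand side f_theta(h) of the latent dynamics dh/dt = f_theta(h)."""
    def __init__(self, n_latent=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                                 nn.Linear(64, n_latent))

    def forward(self, h):
        return self.net(h)

def rk4_step(f, h, dt):
    # Classical fourth-order Runge-Kutta step for advancing the latent ODE.
    k1 = f(h)
    k2 = f(h + 0.5 * dt * k1)
    k3 = f(h + 0.5 * dt * k2)
    k4 = f(h + dt * k3)
    return h + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy training loop on pairs of consecutive snapshots (u_t, u_{t+dt}).
ae, ode = Autoencoder(), LatentODE()
opt = torch.optim.Adam(list(ae.parameters()) + list(ode.parameters()), lr=1e-3)
dt = 0.1
u_t = torch.randn(256, 64)      # placeholder data; real data would be KS snapshots
u_next = torch.randn(256, 64)

for step in range(100):
    opt.zero_grad()
    u_rec, h = ae(u_t)
    h_next = rk4_step(ode, h, dt)
    u_pred = ae.decoder(h_next)
    # Reconstruction loss plus one-step latent prediction loss.
    loss = (nn.functional.mse_loss(u_rec, u_t)
            + nn.functional.mse_loss(u_pred, u_next))
    loss.backward()
    opt.step()

In the approach described in the abstract, a learned low-dimensional model of this kind then stands in for the full simulation as the environment during RL training, so the control policy can be estimated without repeated interaction with the expensive system.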
Date Issued
2021-10-27
Extent
37:52 minutes
Resource Type
Moving Image
Resource Subtype
Lecture