Modeling chaotic spatiotemporal dynamics with a minimal representation using Neural ODEs
Author(s)
Graham, Michael
Linot, Alec
Abstract
Mike Graham: Our overall aim is to combine ideas from dynamical systems theory and machine learning to develop and apply reduced-order models of flow processes with complex chaotic dynamics. A particular aim is a minimal description of the dynamics on manifolds of dimension much lower than the nominal state dimension, and the use of these models to develop effective control strategies for reducing energy dissipation.
Alec Linot: Solutions to dissipative partial differential equations that exhibit chaotic dynamics often evolve to attractors that exist on finite-dimensional manifolds. We describe a data-driven reduced-order modeling (ROM) method that finds coordinates on this manifold and an ordinary differential equation (ODE) in those coordinates. The manifold coordinates are found by reducing the system dimension via an undercomplete autoencoder (a neural network that reduces and then re-expands dimension), and an ODE is learned in this coordinate system with a Neural ODE. Learning an ODE, instead of a discrete time map, allows us to evolve trajectories arbitrarily far forward, and allows for training on data that is unevenly and/or widely spaced in time. We test on the Kuramoto-Sivashinsky equation for domain sizes that exhibit spatiotemporal chaos, and find that the ROM gives accurate short- and long-time statistics with training data separated by up to 0.7 Lyapunov times. https://arxiv.org/abs/2109.00060
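The two-stage structure described in the abstract (an undercomplete autoencoder supplying manifold coordinates, and a Neural ODE evolving them in time) can be sketched as follows. This is my own minimal illustration, not the authors' code: the network shapes, sizes, and the RK4 integrator are assumptions, and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, latent_dim, hidden = 64, 8, 32  # illustrative sizes, not from the talk

# Undercomplete autoencoder: the encoder reduces dimension to the manifold
# coordinates h, the decoder expands back to the full state. In the real
# method both halves are trained jointly; weights are random here.
W_e1 = rng.normal(0.0, 0.1, (state_dim, hidden))
W_e2 = rng.normal(0.0, 0.1, (hidden, latent_dim))
W_d1 = rng.normal(0.0, 0.1, (latent_dim, hidden))
W_d2 = rng.normal(0.0, 0.1, (hidden, state_dim))

def encode(u):
    return np.tanh(u @ W_e1) @ W_e2

def decode(h):
    return np.tanh(h @ W_d1) @ W_d2

# Neural-ODE right-hand side: dh/dt = f(h), a small network in latent space.
W_f1 = rng.normal(0.0, 0.1, (latent_dim, hidden))
W_f2 = rng.normal(0.0, 0.1, (hidden, latent_dim))

def f(h):
    return np.tanh(h @ W_f1) @ W_f2

def rk4_step(h, dt):
    # Because a continuous-time ODE is learned rather than a discrete map,
    # dt is free: trajectories can be evolved arbitrarily far forward, and
    # training data need not be evenly spaced in time.
    k1 = f(h)
    k2 = f(h + 0.5 * dt * k1)
    k3 = f(h + 0.5 * dt * k2)
    k4 = f(h + dt * k3)
    return h + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

u0 = rng.normal(size=state_dim)   # stand-in for one Kuramoto-Sivashinsky snapshot
h = encode(u0)                    # project to manifold coordinates
for _ in range(100):              # evolve the latent ODE forward in time
    h = rk4_step(h, dt=0.05)
u_pred = decode(h)                # map back to the full state space
```

The key design point the abstract emphasizes is that time evolution happens entirely in the low-dimensional latent space; the decoder is only applied when a full-state prediction is needed.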
Date
2021-10-20
Extent
56:27 minutes
Resource Type
Moving Image
Resource Subtype
Lecture