Sampling-based Dynamic Optimization: Theory, Analysis and Applications

Author(s)
Wang, Ziyi
Editor(s)
Associated Organization(s)
Organizational Unit
Daniel Guggenheim School of Aerospace Engineering
The Daniel Guggenheim School of Aeronautics was established in 1931, with a name change in 1962 to the School of Aerospace Engineering
Supplementary to:
Abstract
Dynamic optimization solves for the optimal strategy of a system that evolves over time. It is an integral part of many disciplines, such as optimal control theory and robotics. Sampling-based methods have gained much popularity in recent years for solving dynamic optimization problems due to their ability to handle discontinuities in the dynamics and cost function. While many sampling-based dynamic optimization algorithms have been developed, they are based on different theoretical foundations and are often designed for specific applications with heuristics. This thesis aims to bridge the gap between these theories and to derive the general forms of several state-of-the-art sampling-based dynamic optimization algorithms, along with their extensions to various problem formulations. We present three general perspectives for deriving sampling-based dynamic optimization algorithms: Variational Optimization, Variational Inference, and Stochastic Search. We show the equivalence between Variational Optimization and optimal control theory under certain assumptions, and we justify previously used heuristics with proper problem formulations and derivations. In addition, we demonstrate that the state-of-the-art Model Predictive Path Integral and Cross Entropy Method algorithms can be recovered as special cases under these perspectives. We discuss the connections between the perspectives and the unique algorithmic characteristics of each. A unifying analysis is performed on convergence, sampling complexity, and variance. Based on the three perspectives, we develop Model Predictive Control algorithms for several application scenarios. We first apply the different schemes to standard stochastic optimal control problems. A risk-sensitive extension is also derived to optimize with respect to the conditional value-at-risk of the cost function; the resulting algorithm performs optimization on non-Gaussian beliefs provided by a particle filter.
We also apply the Stochastic Search perspective to complex jump-diffusion processes and opinion dynamics. Finally, we propose a general distributed framework for scaling sampling-based dynamic optimizers to large-scale multi-agent control using consensus Alternating Direction Method of Multipliers. The effectiveness and applicability of the proposed algorithms are demonstrated through simulation results on various systems from control theory and robotics; in particular, the distributed framework is tested on a 200-agent Dubins vehicle formation control task.
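The abstract names Model Predictive Path Integral (MPPI) control as one of the algorithms recoverable under the proposed perspectives. For orientation, the following is a minimal sketch of the standard MPPI update — sample Gaussian perturbations around a nominal control sequence, evaluate rollout costs, and take an exponentially weighted average. This is the generic textbook form, not the thesis's derivation; the function names, the toy double-integrator cost, and all parameter values are illustrative assumptions.

```python
import numpy as np

def mppi_update(cost_fn, u_nominal, noise_std=0.5, n_samples=200,
                temperature=1.0, rng=None):
    """One generic MPPI iteration (illustrative sketch): perturb the nominal
    control sequence with Gaussian noise, evaluate rollout costs, and return
    the exponentially weighted average of the perturbed sequences."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.normal(0.0, noise_std, size=(n_samples,) + u_nominal.shape)
    costs = np.array([cost_fn(u_nominal + eps) for eps in noise])
    # Subtract the minimum cost before exponentiating, for numerical stability.
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    return u_nominal + np.tensordot(w, noise, axes=1)

# Toy cost (an assumption for illustration): drive a 1-D double integrator
# (x' = v, v' = u) from x = 1 toward the origin with a small control penalty.
def rollout_cost(u_seq, dt=0.1):
    x, v, cost = 1.0, 0.0, 0.0
    for u in u_seq:
        v += u * dt
        x += v * dt
        cost += x**2 + 0.01 * u**2
    return cost

u = np.zeros(20)
for _ in range(30):
    u = mppi_update(rollout_cost, u)
```

Because the update only needs cost evaluations of sampled rollouts, it tolerates discontinuous dynamics and costs — the property the abstract credits to sampling-based methods in general.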
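The Cross Entropy Method, the other special case named in the abstract, can likewise be sketched in its generic form: repeatedly sample candidate control sequences from a Gaussian, keep the lowest-cost "elite" fraction, and refit the Gaussian to the elites. Again, this is the standard formulation, not the thesis's general derivation; the toy cost and all parameters are illustrative assumptions.

```python
import numpy as np

def cross_entropy_method(cost_fn, mean, std, n_samples=100, n_elite=10,
                         n_iters=50, seed=0):
    """Generic CEM sketch: minimize cost_fn over a control sequence by
    iteratively refitting a diagonal Gaussian to the elite samples."""
    rng = np.random.default_rng(seed)
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(n_samples,) + mean.shape)
        costs = np.array([cost_fn(u) for u in samples])
        elites = samples[np.argsort(costs)[:n_elite]]
        mean = elites.mean(axis=0)
        std = elites.std(axis=0) + 1e-6  # small floor to avoid collapse
    return mean

# Toy cost (an assumption for illustration): same 1-D double integrator
# regulation task, steering x from 1 toward 0.
def rollout_cost(u_seq, dt=0.1):
    x, v, cost = 1.0, 0.0, 0.0
    for u in u_seq:
        v += u * dt
        x += v * dt
        cost += x**2 + 0.01 * u**2
    return cost

u_star = cross_entropy_method(rollout_cost, mean=np.zeros(20), std=np.ones(20))
```

Compared with the MPPI-style exponential weighting, CEM's hard elite truncation is the main algorithmic difference — one of the per-perspective characteristics the thesis contrasts.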
Sponsor
Date
2022-12-14
Extent
Resource Type
Text
Resource Subtype
Dissertation
Rights Statement
Rights URI