Learning and Optimization on Geodesic Metric Spaces
Author(s)
Hu, Zihao
Abstract
There has been growing interest in designing optimization algorithms with convergence guarantees when the parameter space is not a Euclidean set but a Riemannian manifold. This setting is attractive because it allows constraints to be encoded directly into the geometry and, in many cases, resolves the non-convexity of both the feasible set and the objective function. Thus far, however, research has focused primarily on vanilla convex optimization with first-order methods; far less is known in the online setting and for minimax problems, which are the focus of the present work.
In the first part, we explore minimizing dynamic regret on Riemannian manifolds. We introduce optimistic mirror descent on manifolds in the online improper learning setting and apply the technique to establish adaptive dynamic regret bounds.
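For reference, in the Euclidean case optimistic mirror descent with the squared-norm mirror map reduces to optimistic gradient descent: the learner plays a point shifted by a gradient "hint" (here, the previous gradient) before committing the base update. The sketch below is this Euclidean analogue only, not the thesis's Riemannian algorithm, and the name `optimistic_gd` is illustrative:

```python
def optimistic_gd(grad, x0, eta=0.1, steps=200):
    """Euclidean optimistic mirror descent with the squared-norm mirror map
    (i.e., optimistic gradient descent). The hint m is the previous gradient."""
    x_hat, m = x0, 0.0
    x = x_hat
    for _ in range(steps):
        x = x_hat - eta * m       # optimistic play using the hint m
        g = grad(x)               # observe the true gradient at the played point
        x_hat = x_hat - eta * g   # base mirror-descent update
        m = g                     # next round's hint: the last observed gradient
    return x

# Sanity check on f(x) = x^2 / 2, whose gradient is x; the iterate tends to 0.
x_star = optimistic_gd(lambda x: x, 1.0)
```

When the hints are accurate (slowly varying gradients), the optimistic play incurs less regret than vanilla mirror descent; the thesis transports this mechanism to Riemannian manifolds for adaptive dynamic regret.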
In the second part, we consider projection-free (online) optimization on Riemannian manifolds. We illustrate how to design algorithms that rely solely on a linear optimization oracle or a separation oracle to achieve sub-linear regret on manifolds.
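The prototypical projection-free method in the Euclidean setting is the Frank-Wolfe (conditional gradient) algorithm, which touches the feasible set only through a linear optimization oracle. A minimal sketch on the unit l1 ball, as a Euclidean point of reference for the manifold algorithms above (function names are illustrative, not from the thesis):

```python
def lmo_l1(g):
    """Linear optimization oracle for the unit l1 ball:
    argmin over ||s||_1 <= 1 of <g, s>, attained at a signed vertex."""
    i = max(range(len(g)), key=lambda j: abs(g[j]))
    s = [0.0] * len(g)
    s[i] = -1.0 if g[i] > 0 else 1.0
    return s

def frank_wolfe(grad, x0, steps=1000):
    """Projection-free conditional gradient: each iteration calls only the
    linear optimization oracle, never a projection onto the feasible set."""
    x = list(x0)
    for t in range(1, steps + 1):
        s = lmo_l1(grad(x))                  # oracle call
        gamma = 2.0 / (t + 2.0)              # standard Frank-Wolfe step size
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

# Minimize f(x) = 0.5 * ||x - b||^2 over the unit l1 ball, with b outside it;
# the constrained optimum is the vertex (1, 0).
b = [2.0, 0.5]
x = frank_wolfe(lambda x: [xi - bi for xi, bi in zip(x, b)], [0.0, 0.0])
```

Every iterate is a convex combination of oracle outputs, so feasibility is maintained for free; avoiding projections is what makes such methods attractive on manifolds, where projections can be expensive or unavailable.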
Lastly, we consider the last-iterate convergence of Riemannian extragradient-type methods, which can provably be employed to tackle Riemannian minimax problems. In Euclidean space, the extragradient method achieves a last-iterate convergence rate of $O\left(\frac{1}{\sqrt{T}}\right)$. We propose Riemannian extragradient and Riemannian past extragradient methods and show that both exhibit an analogous rate.
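For intuition on why the extrapolation step matters, consider the Euclidean bilinear toy problem min_x max_y xy: plain gradient descent-ascent spirals away from the saddle point (0, 0), while extragradient's look-ahead step makes the last iterate contract toward it. A minimal Euclidean-only sketch (the thesis treats the Riemannian counterparts):

```python
def extragradient(z0, eta=0.2, steps=300):
    """Extragradient on min_x max_y f(x, y) = x * y, whose saddle-point
    vector field is F(x, y) = (y, -x). The look-ahead (extrapolation) step
    makes the squared norm of the last iterate contract each round."""
    x, y = z0
    F = lambda x, y: (y, -x)
    for _ in range(steps):
        gx, gy = F(x, y)
        xh, yh = x - eta * gx, y - eta * gy   # extrapolation ("look-ahead") step
        gx, gy = F(xh, yh)
        x, y = x - eta * gx, y - eta * gy     # update using the midpoint gradient
    return x, y

# Starting away from the saddle point, the last iterate approaches (0, 0).
x_T, y_T = extragradient((1.0, 1.0))
```

On this bilinear problem one can check that the squared distance to the saddle point shrinks by the factor 1 - eta^2 + eta^4 per step, whereas descent-ascent multiplies it by 1 + eta^2; this contrast motivates extragradient-type updates in the minimax setting.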
Date
2023-12-10
Resource Type
Text
Resource Subtype
Dissertation