DYNAMIC OPTIMIZATION PROBLEMS ON GRAPHS WITH OVERLAPPING VARIABLES: ANALYSIS, ALGORITHMS, AND APPLICATIONS

Author(s)
Harari Hamam, Tomer
Abstract
This dissertation studies optimization problems on graphs with overlapping variables: problems whose objectives can be expressed as a sum of overlapping (variable-sharing) local loss functions. Local here means that each loss function depends only on a small subset of the optimization variables. Such problems arise in signal processing, machine learning, control, statistical inference, and many related fields, for example when the data defining the loss functions have significant spatial or temporal dependencies. To expose the local structure of these problems, we cast them on graphs: each node is associated with a loss function, and two nodes share an edge whenever the corresponding loss functions share variables. The goal is to use the graph’s topology to devise better ways to solve these problems. First, we consider problems on graphs with fixed topology; the emphasis is on deriving efficient decentralized algorithms for such large-scale problems. Our approach mimics how naturally coupled dynamical systems evolve: each agent solves a local problem and communicates its solution to the agents with which it shares variables. We show how algorithms such as the Jacobi method and ADMM solve these problems in a fully decentralized manner. We then move to graph problems with dynamic topologies, which arise in time-varying optimization programs and yield chain-like graphs. In particular, we are interested in how the solution for the t-th frame of variables changes over time: while adding a new loss function and a new set of variables does, in general, change the solution everywhere, we give conditions under which the local estimates converge to limit points at a linear rate. Consequently, we obtain theoretical guarantees for limited-memory algorithms and derive a new memory-efficient Newton online algorithm (NOA).
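To illustrate the decentralized flavor of this setup, here is a minimal sketch (not the dissertation's algorithm) of the Jacobi method applied to a chain-structured quadratic objective, where each loss couples a pair of adjacent variables. Every node updates its own variable using only its neighbors' current values, so the iteration is fully decentralized. The objective, the coupling weight `rho`, and the problem size are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy instance: a chain of overlapping quadratic losses,
#   F(x) = sum_t (x_t - c_t)^2 + rho * sum_t (x_{t+1} - x_t)^2
n = 8
rng = np.random.default_rng(0)
c = rng.normal(size=n)   # per-node data
rho = 0.5                # coupling weight between neighboring variables

# Setting grad F = 0 gives a linear system A x = b with a tridiagonal
# (chain-structured) Hessian: A = 2I + 2*rho*L, L the chain Laplacian.
A = 2.0 * np.eye(n)
for t in range(n - 1):
    A[t, t] += 2 * rho
    A[t + 1, t + 1] += 2 * rho
    A[t, t + 1] -= 2 * rho
    A[t + 1, t] -= 2 * rho
b = 2.0 * c

# Decentralized Jacobi iteration: node t's update reads only the current
# values of its chain neighbors (the nonzero off-diagonal entries of row t).
x = np.zeros(n)
for _ in range(200):
    x_new = np.empty(n)
    for t in range(n):
        off = A[t] @ x - A[t, t] * x[t]     # neighbor contributions only
        x_new[t] = (b[t] - off) / A[t, t]   # local scalar solve
    x = x_new

x_star = np.linalg.solve(A, b)  # centralized reference solution
err = np.max(np.abs(x - x_star))
```

Because the data-fit terms make the Hessian strictly diagonally dominant, the Jacobi iteration converges here, and `err` is negligible after a few hundred sweeps.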
We consider two natural extensions of the fundamental time-varying problem: convex optimization problems with local bounded constraints, and the case where there is no transparent growth model for the dynamics. We present numerous examples of applications where these results apply, including signal reconstruction, image denoising, graph min-cut, Bayesian inference, simultaneous localization and mapping (SLAM), multi-task learning, and estimating a neural rate function from spike trains.
Date
2022-01-14
Resource Type
Text
Resource Subtype
Dissertation