Distributed Optimization Architectures for Large-Scale Decision-Making
Author(s)
Saravanos, Augustine D.
Abstract
As modern decision-making systems in autonomy, robotics, artificial intelligence (AI), and other domains rapidly grow in scale and complexity, there is emerging interest in developing scalable and effective algorithmic frameworks capable of handling large-scale, networked, and uncertain environments. Several fundamental challenges must be addressed, including scalability, complex dynamics, robustness under uncertainty, interpretability, and generalizability. This thesis presents a series of novel distributed algorithmic architectures at the intersection of optimization, control theory, and deep learning, designed to address these challenges.
The main contributions of this thesis are summarized as follows: (i) Two novel distributed dynamic optimization architectures for large-scale multi-agent control are introduced, providing state-of-the-art scalability for optimal control in autonomous systems; their effectiveness is demonstrated through hardware experiments. (ii) A family of scalable decentralized methods for distribution steering in multi-agent stochastic systems is presented, offering a trade-off between safety capabilities and computational efficiency and accompanied by convergence guarantees. (iii) A model predictive control version of the decentralized distribution steering framework is then proposed, enabling its application to real-world multi-agent systems operating under uncertainty. (iv) A hierarchical distribution control architecture is presented for very-large-scale clustered multi-agent systems, which exploits underlying hierarchies to achieve improved scalability and robustness, as demonstrated through experiments with up to millions of agents. (v) Finally, a deep learning-aided distributed optimization architecture for large-scale quadratic programming (QP) is developed by unfolding a new consensus-based variant of the Operator Splitting QP (OSQP) solver into a supervised learning framework. Experiments demonstrate improved performance over traditional optimization, strong generalization to larger problems, and PAC-Bayes guarantees on the expected optimality gap for unseen instances.
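To give a flavor of the unfolding idea in contribution (v): the sketch below unrolls a fixed number of ADMM iterations for a simple box-constrained QP, with the per-iteration penalty parameters `rhos` standing in for the quantities that would be made learnable in a learning-to-optimize setup. This is a minimal illustration under stated assumptions, not the thesis's consensus-based OSQP variant or its training framework; the function name and parameterization are illustrative.

```python
import numpy as np

def unrolled_admm_qp(P, q, lo, hi, rhos):
    """Unrolled ADMM for  min 1/2 x^T P x + q^T x  s.t. lo <= x <= hi.

    Each entry of `rhos` is the penalty parameter of one unrolled
    iteration; in a learning-to-optimize setting such per-layer
    parameters would be trained from data, here they are fixed.
    """
    n = len(q)
    x = np.zeros(n)
    z = np.zeros(n)  # box-projected copy of x
    u = np.zeros(n)  # scaled dual variable
    for rho in rhos:
        # x-update: minimize the augmented Lagrangian in x (a linear solve)
        x = np.linalg.solve(P + rho * np.eye(n), -q + rho * (z - u))
        # z-update: project onto the box constraints
        z = np.clip(x + u, lo, hi)
        # dual update
        u = u + x - z
    return z
```

Unrolling turns the iteration count into a network depth, so standard supervised training can tune the per-layer parameters against solved problem instances.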
Date
2025-07-30
Resource Type
Text
Resource Subtype
Dissertation