Series
Doctor of Philosophy with a Major in Algorithms, Combinatorics, and Optimization

Series Type
Degree Series
Publication Search Results

Now showing 1 - 10 of 67
  • Item
    Fundamental Limits and Algorithms for Database and Graph Alignment
    (Georgia Institute of Technology, 2023-12-12) Dai, Osman Emre
    Data alignment refers to a class of problems where, given two sets of anonymized data pertaining to overlapping sets of users, the goal is to identify the correspondences between the two sets. If the data of a user is contained in both sets, the correlation between the two data points associated with the user might make it possible to determine that both belong to the same user and hence link the data points. Alignment problems are of practical interest in applications such as privacy and data fusion. Data alignment can be used to de-anonymize data; therefore, studying the feasibility of alignment allows for a more reliable understanding of the limitations of anonymization schemes put in place to protect against privacy breaches. Additionally, data alignment can aid in finding the correspondence between data from different sources, e.g., different sensors. The data fusion performed through data alignment in turn can help with a variety of inference problems that arise in scientific and engineering applications. This thesis considers two types of data alignment problems: database and graph alignment. Database alignment refers to the setting where each feature (i.e., data point) in a data set is associated with a single user. Graph alignment refers to the setting where data points in each data set are associated with pairs of users. For both problems, we are particularly interested in the asymptotic case where n, the number of users with data in both sets, goes to infinity. Nevertheless, our analyses often yield results applicable to the finite n case.
To develop a preliminary understanding of the database alignment problem, we first study the closely related problem of planted matching with Gaussian weights of unit variance, and derive tight achievability bounds that match our converse bounds: Specifically, we identify different inequalities between log n and the signal strength (which corresponds to the square of the difference between the mean weights of planted and non-planted edges) that guarantee upper bounds on the log of the expected number of errors. Then, we study the database alignment problem with Gaussian features in the low per-feature correlation setting where the number of dimensions of each feature scales as ω(log n): We derive inequalities between log n and signal strength (which, for database alignment, corresponds to the mutual information between correlated features) that guarantee error bounds matching those of the planted matching setting, supporting the claimed connection between the two problems. Then, relaxing the restriction on the number of dimensions of features, we derive conditions on signal strength and dimensionality that guarantee smaller upper bounds on the log of the expected number of errors. The stronger results in the O(log n)-dimensional-feature setting for Gaussian databases show how planted matching, while useful, is not a perfect substitute for understanding the dynamics of the more complex problem of database alignment. For graph alignment, we focus on the correlated Erdős–Rényi graph model where the data point (i.e. edge) associated with each pair of users in a graph is a Bernoulli random variable that is correlated with the data point associated with the same pair in the other graph. We study a canonical labeling algorithm for alignment and identify conditions on the density of the graphs and correlation between edges across graphs that guarantee the recovery of the true alignment with high probability.
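For jointly Gaussian features, the MAP alignment rule described above amounts to maximizing the total inner product of matched feature pairs. A minimal sketch, not the thesis's algorithm: it recovers a planted correspondence on a toy instance by brute force over permutations (all sizes and noise levels are illustrative):

```python
import itertools
import random

random.seed(0)
n, d, noise = 6, 20, 0.3   # toy instance sizes; illustrative only

# Latent user vectors; each database holds a noisy copy of each user's vector.
users = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
perm = list(range(n))
random.shuffle(perm)       # hidden correspondence between the databases

db_x = [[coord + random.gauss(0, noise) for coord in u] for u in users]
db_y = [[users[perm[i]][j] + random.gauss(0, noise) for j in range(d)]
        for i in range(n)]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

# MAP alignment for Gaussian features maximizes the total inner product
# of matched pairs; brute force over all n! maps (feasible only for tiny n).
best = max(itertools.permutations(range(n)),
           key=lambda p: sum(dot(db_y[i], db_x[p[i]]) for i in range(n)))
```

At this signal strength the matched inner products concentrate around d, well separated from the cross terms, so the planted correspondence is recovered.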
  • Item
    Scalable, Efficient, and Fair Algorithms for Structured Convex Optimization Problems
    (Georgia Institute of Technology, 2023-08-24) Ghadiri, Mehrdad
    The growth of machine learning and data science has necessitated the development of provably fast and scalable algorithms that incorporate ethical requirements. In this thesis, we present algorithms for fundamental optimization algorithms with theoretical guarantees on approximation quality and running time. We analyze the bit complexity and stability of efficient algorithms for problems including linear regression, $p$-norm regression, and linear programming by showing that a common subroutine, inverse maintenance, is backward stable and that iterative approaches for solving constrained weighted regression problems can be carried out with bounded-error pre-conditioners. We also present conjectures regarding the running time of computing symmetric factorizations for Hankel matrices that imply faster-than-matrix-multiplication time algorithms for solving sparse poly-conditioned linear programs. We present the first subquadratic algorithm for solving the Kronecker regression problem, which improves the running time of all steps of the alternating least squares algorithm for the Tucker decomposition of tensors. In addition, we introduce the Tucker packing problem for computing an approximately optimal core shape for the Tucker decomposition problem. We prove this problem is NP-hard and provide polynomial-time approximation schemes for it. Finally, we show that the popular $k$-means clustering algorithm (Lloyd's heuristic) can result in outcomes that are unfavorable to subgroups of data. We introduce the socially fair $k$-means problem for which we provide a very efficient and practical heuristic. For the more general problem of $(\ell_p,k)$-clustering problem, we provide bicriteria constant-factor approximation algorithms. Many of our algorithms improve the state-of-the-art in practice.
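The socially fair objective mentioned above replaces the total $k$-means cost with the maximum of per-group average costs. A toy illustration (the data, groups, and initialization are invented for this sketch) of how plain Lloyd's heuristic can leave a minority group with a much higher average cost:

```python
# 1-D toy data: group A dominates two tight clusters; group B sits between them.
data = [(-0.1, "A"), (0.0, "A"), (0.1, "A"),
        (9.9, "A"), (10.0, "A"), (10.1, "A"),
        (4.0, "B"), (4.2, "B")]
centers = [0.0, 10.0]                      # illustrative initialization, k = 2

for _ in range(20):                        # Lloyd's heuristic
    clusters = [[], []]
    for x, _ in data:
        clusters[min((0, 1), key=lambda c: (x - centers[c]) ** 2)].append(x)
    centers = [sum(c) / len(c) for c in clusters if c]

def group_cost(g):
    """Average squared distance to the nearest center within one group."""
    pts = [x for x, lab in data if lab == g]
    return sum(min((x - c) ** 2 for c in centers) for x in pts) / len(pts)

cost_A, cost_B = group_cost("A"), group_cost("B")
fair_objective = max(cost_A, cost_B)       # socially fair k-means objective
```

Here Lloyd's converges to centers serving group A's two clusters, so group B pays several times A's average cost; the socially fair objective is driven by the worst-off group.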
  • Item
    Hinted Data Structures with Applications to Optimization and Learning
    (Georgia Institute of Technology, 2023-07-27) Chen, Li
    This thesis investigates the interplay among data structures, graph algorithms, and machine learning, providing a fresh lens on conventional perspectives concerning worst-case scenarios and data structure performance. The thesis is divided into two main parts. The first part delves into the concept of Low Stretch Decomposition (LSD), a crucial component in graph algorithm design. The study applies LSDs to devise a nearly linear time algorithm to compute the terminal state of graph diffusions, specifically the 2-norm flow diffusion, and identify local clusters. It also pioneers the examination of LSD on dynamic graphs, leading to the creation of fully dynamic data structures for computing approximate cuts and distances. The second part of the thesis explores how data structures leverage 'hints' to enhance their efficiency. It demonstrates this by solving maximum flows and minimum-cost flow problems using dynamic LSD and an $\ell_1$ Interior Point Method (IPM). The hints derived from the IPM updates expedite the data structure and result in an almost linear time algorithm for the problem. This section also delves into learning-augmented B-trees, which benefit from advice produced by machine learning models. Throughout the thesis, a comprehensive understanding of how optimization algorithms and machine learning models interact with data structures in non-worst-case and non-adaptive ways is pursued.
  • Item
    Erdős-Pósa theorems for undirected group-labelled graphs
    (Georgia Institute of Technology, 2022-06-14) Yoo, Youngho
    Erdős and Pósa proved in 1965 that cycles satisfy an approximate packing-covering duality. Finding analogous approximate dualities for other families of graphs has since become a highly active area of research due in part to its algorithmic applications. In this thesis we investigate the Erdős-Pósa property of various families of constrained cycles and paths by developing new structural tools for undirected group-labelled graphs. Our first result is a refinement of the flat wall theorem of Robertson and Seymour to undirected group-labelled graphs. This structure theorem is then used to prove the Erdős-Pósa property of A-paths of length 0 modulo p for a fixed odd prime p, answering a question of Bruhn and Ulmer. Further, we obtain a characterization of the abelian groups Γ and elements l ∈ Γ for which A-paths of weight l satisfy the Erdős-Pósa property. These results are from joint work with Robin Thomas. We extend our structural tools to graphs labelled by multiple abelian groups and consider the Erdős-Pósa property of cycles whose weights avoid a fixed finite subset in each group. We find three types of topological obstructions and show that they are the only obstructions to the Erdős-Pósa property of such cycles. This is a far-reaching generalization of a theorem of Reed that Escher walls are the only obstructions to the Erdős-Pósa property of odd cycles. Consequently, we obtain a characterization of the sets of allowable weights in this setting for which the Erdős-Pósa property holds for such cycles, unifying a large number of results in this area into a general framework. As a special case, we characterize the integer pairs (l, z) for which cycles of length l mod z satisfy the Erdős-Pósa property. This resolves a question of Dejter and Neumann-Lara from 1987. Further, our description of the obstructions allows us to obtain an analogous characterization of the Erdős-Pósa property of cycles in graphs embeddable on a fixed compact orientable surface. 
This is joint work with Pascal Gollin, Kevin Hendrey, O-joung Kwon, and Sang-il Oum.
  • Item
    The Extremal Function for K10 Minors
    (Georgia Institute of Technology, 2021-12-15) Zhu, Dantong
    We prove that every graph on n >= 8 vertices and at least 8n-35 edges either has a K10 minor or is isomorphic to some graph included in a few families of exceptional graphs.
  • Item
    A combinatorial approach to biological structures and networks in predictive medicine
    (Georgia Institute of Technology, 2021-08-09) Kirkpatrick, Anna
    This work concerns the study of combinatorial models for biological structures and networks as motivated by questions in predictive medicine. Through multiple examples, the power of combinatorial models to simplify problems and facilitate computation is explored. First, continuous time Markov models are used as a model to study the progression of Alzheimer’s disease and identify which variables best predict progression at each stage. Next, RNA secondary structures are modeled by a thermodynamic Gibbs distribution on plane trees. The limiting distribution (as the number of edges in the tree goes to infinity) is studied to gain insight into the limits of the model. Additionally, a Markov chain is developed to sample from the distribution in the finite case, creating a tool for understanding what tree properties emerge from the thermodynamics. Finally, knowledge graphs are used to encode relationships extracted from the biomedical literature, and algorithms for efficient computation on these graphs are explored.
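A continuous-time Markov model of the kind used above for disease progression can be simulated with exponential holding times. A toy sketch (the stages and rates are invented, not the thesis's fitted model) estimating the mean time to reach an absorbing final stage:

```python
import random

random.seed(1)
# Toy progression chain: stage0 -> stage1 at rate a, stage1 -> stage2 (absorbing) at rate b.
a, b = 0.5, 1.0

def time_to_absorption():
    # Holding time in each transient state is Exponential(rate).
    return random.expovariate(a) + random.expovariate(b)

trials = 20000
estimate = sum(time_to_absorption() for _ in range(trials)) / trials
exact = 1 / a + 1 / b   # expected absorption time: 1/0.5 + 1/1.0 = 3.0
```

The Monte Carlo estimate concentrates around the analytic mean 1/a + 1/b, the kind of quantity such models expose for progression prediction.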
  • Item
    Applications of monodromy in solving polynomial systems
    (Georgia Institute of Technology, 2021-07-14) Duff, Timothy
    Polynomial systems of equations that occur in applications frequently have a special structure. Part of that structure can be captured by an associated Galois/monodromy group. This makes numerical homotopy continuation methods that exploit this monodromy action an attractive choice for solving these systems; by contrast, other symbolic-numeric techniques do not generally see this structure. Naturally, there are trade-offs when monodromy is chosen over other methods. Nevertheless, there is a growing literature demonstrating that the trade can be worthwhile in practice. In this thesis, we consider a framework for efficient monodromy computation which rivals the state-of-the-art in homotopy continuation methods. We show how its implementation in the package MonodromySolver can be used to efficiently solve challenging systems of polynomial equations. Among many applications, we apply monodromy to computer vision---specifically, the study and classification of minimal problems used in RANSAC-based 3D reconstruction pipelines. As a byproduct of numerically computing their Galois/monodromy groups, we observe that several of these problems have a decomposition into algebraic subproblems. Although precise knowledge of such a decomposition is hard to obtain in general, we determine it in some novel cases.
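The monodromy action underlying these methods can be seen in miniature on the family $x^2 - t$: tracking both roots numerically as $t$ traverses a loop around the branch point $t = 0$ swaps them, revealing the transposition in the Galois/monodromy group. A self-contained sketch (step count and tolerances are arbitrary):

```python
import cmath

steps = 200
roots = [1.0 + 0j, -1.0 + 0j]            # roots of x^2 - t at t = 1

# Move t once around the unit circle; at each step, re-solve and match
# each tracked root to the nearest new root (crude numerical continuation).
for k in range(1, steps + 1):
    t = cmath.exp(2j * cmath.pi * k / steps)
    r = cmath.sqrt(t)
    candidates = [r, -r]
    roots = [min(candidates, key=lambda c: abs(c - old)) for old in roots]

# After a full loop the two roots have exchanged places: the monodromy
# permutation of this family is the transposition of the two solutions.
swapped = abs(roots[0] + 1) < 1e-9 and abs(roots[1] - 1) < 1e-9
```

The same idea, carried out for multivariate parameterized systems with certified path tracking, is what lets monodromy-based solvers populate full solution sets from a single seed solution.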
  • Item
    Convex and structured nonconvex optimization for modern machine learning: Complexity and algorithms
    (Georgia Institute of Technology, 2020-07-22) Boob, Digvijay Pravin
    In this thesis, we investigate various optimization problems motivated by applications in modern-day machine learning. In the first part, we look at the computational complexity of training ReLU neural networks. We consider the following problem: given a fully-connected two-hidden-layer ReLU neural network with two ReLU nodes in the first layer and one ReLU node in the second layer, do there exist edge weights such that the neural network fits the given data? We show that this problem is NP-hard. The main contribution is the design of a gadget that allows reducing the Separation by Two Hyperplanes problem to the ReLU neural network training problem. In the second part of the thesis, we look at the design and complexity analysis of algorithms for function constrained optimization problems in both convex and nonconvex settings. These problems are becoming increasingly popular in machine learning due to their applications in multi-objective optimization and risk-averse learning, among others. For the convex function constrained optimization problem, we propose a novel Constraint Extrapolation (ConEx) method, which uses linear approximations of the constraint functions to define the extrapolation (or acceleration) step. We show that this method is a unified algorithm that achieves the best-known rate of convergence for solving different function constrained convex composite problems, including convex or strongly convex, and smooth or nonsmooth problems with a stochastic objective and/or stochastic constraints. Many of these convergence rates were obtained for the first time in the literature. In addition, ConEx is a single-loop algorithm that does not involve any penalty subproblems. Contrary to existing dual methods, it does not require the projection of Lagrange multipliers onto a (possibly unknown) bounded set. 
Moreover, in the stochastic function constrained setting, this is the first method that requires only bounded variance of the noise, a major relaxation of the restrictive sub-Gaussian noise assumption in existing algorithms. In the third part of this thesis, we investigate a nonconvex nonsmooth function constrained optimization problem, where we introduce a new proximal point method which transforms the initial nonconvex problem into a sequence of convex function constrained subproblems. For this algorithm, we establish asymptotic convergence as well as the rate of convergence to KKT points under different constraint qualifications. For practical use, we present inexact variants of this algorithm, in which approximate solutions of the subproblems are computed using the aforementioned ConEx method, and establish their associated rates of convergence under a strong feasibility constraint qualification. In the fourth part, we identify an important class of nonconvex function constrained problems for statistical machine learning applications where sparsity is imperative. We consider various nonconvex sparsity-inducing constraints. These are tighter approximations of the $\ell_0$-norm than the $\ell_1$-norm convex relaxation. For this class of problems, we relax the requirement of the strong feasibility constraint qualification to a weaker, well-known constraint qualification and still prove convergence to KKT points at the rate of gradient descent for nonconvex regularized problems. This work performs a systematic study of the structure of nonconvex sparsity-inducing constraints to obtain bounds on Lagrange multipliers and to solve certain subproblems faster, achieving a convergence rate that matches that of the nonconvex regularized version under a relaxed constraint qualification which is satisfied almost all the time. In the fifth part, we present a faster algorithm for solving mixed packing and covering (MPC) linear programs. 
The proposed algorithm is from a family of primal-dual type algorithms, similar to ConEx. Here, the main challenge comes from the feasible set of the primal variables, which is an $\ell_\infty$ ball for a general MPC. The diameter of this ball is at least $\Omega(\sqrt{n})$, where $n$ is the dimension of the LP, and this large diameter shows up in the complexity. We give specialized treatment to this problem and use a new regularization function which is weaker than a strongly convex function yet still yields an accelerated convergence rate. Using this regularizer, we replace the $\mathrm{poly}(n)$ term in the complexity with a logarithmic term.
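For concreteness, a mixed packing and covering LP in the form discussed above asks for a point in a box (the $\ell_\infty$ ball mentioned in the text) satisfying nonnegative packing and covering constraints simultaneously; in generic notation, not necessarily the thesis's:

```latex
\text{find } x \in \mathbb{R}^n,\quad 0 \le x_i \le 1,
\qquad \text{such that} \qquad P x \le p, \qquad C x \ge c,
```

where $P$ and $C$ are nonnegative matrices and $p$, $c$ are nonnegative vectors.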
  • Item
    Dual algorithms for the densest subgraph problem
    (Georgia Institute of Technology, 2020-05-19) Sawlani, Saurabh Sunil
    Dense subgraph discovery is an important primitive for many real-world graph mining applications. This dissertation tackles the densest subgraph problem via its dual linear programming formulation. In particular, our contributions in this thesis are the following: (i) We give a faster width-dependent algorithm to solve mixed packing and covering LPs, a class of problems that is fundamental to combinatorial optimization in computer science and operations research (the dual of the densest subgraph problem is an instance of this class of linear programs). Our work utilizes the framework of area convexity introduced by Sherman [STOC `17] to obtain accelerated rates of convergence. (ii) We devise an iterative algorithm for the densest subgraph problem which naturally generalizes Charikar's greedy algorithm. Our algorithm draws insights from iterative approaches in convex optimization and exploits the dual interpretation of the densest subgraph problem. We have empirical evidence that our algorithm is much more robust against the structural heterogeneities in real-world datasets and converges to the optimal subgraph density even when the simple greedy algorithm fails. (iii) Lastly, we design the first fully-dynamic algorithm which maintains a $(1-\epsilon)$-approximate densest subgraph in worst-case $\text{poly}(\log n, \epsilon^{-1})$ time per update. Our result improves upon the previous best approximation factor of $(1/4 - \epsilon)$ for fully dynamic densest subgraph.
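Charikar's greedy algorithm, which the iterative method above generalizes, repeatedly deletes a minimum-degree vertex and returns the densest intermediate subgraph; it is a 1/2-approximation for the density $|E|/|V|$. A compact sketch (the edge-list representation and test graph are our own):

```python
def greedy_peel_density(edges):
    """Charikar's peeling: best |E|/|V| seen over the removal sequence."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    m = len(edges)
    best = m / len(adj)
    while len(adj) > 1:
        v = min(adj, key=lambda x: len(adj[x]))   # minimum-degree vertex
        m -= len(adj[v])                          # its edges leave the graph
        for u in adj.pop(v):
            adj[u].discard(v)
        best = max(best, m / len(adj))
    return best

# K4 plus a pendant vertex: the densest subgraph is the K4, density 6/4 = 1.5.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
density = greedy_peel_density(edges)
```

On this instance peeling the pendant vertex first exposes the K4, so the greedy bound is tight; the dissertation's point is that on heterogeneous real-world graphs this simple heuristic can fall short of the optimum.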
  • Item
    Convergence in min-max optimization
    (Georgia Institute of Technology, 2020-04-20) Lai, Kevin A.
    Min-max optimization is a classic problem with applications in constrained optimization, robust optimization, and game theory. This dissertation covers new convergence rate results in min-max optimization. We show that the classic fictitious play dynamic with lexicographic tiebreaking converges quickly for diagonal payoff matrices, partly answering a conjecture by Karlin from 1959. We also show that linear last-iterate convergence rates are possible for the Hamiltonian Gradient Descent algorithm for the class of “sufficiently bilinear” min-max problems. Finally, we explore higher-order methods for min-max optimization and monotone variational inequalities, showing improved iteration complexity compared to first-order methods such as Mirror Prox.
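Fictitious play, whose convergence on diagonal payoff matrices is analyzed above, has each player best-respond to the opponent's empirical mixture of past plays. A sketch on the zero-sum game with payoff matrix diag(1, 2, 3), whose value is $1/(1 + 1/2 + 1/3) = 6/11$; the lexicographic tiebreaking matches the text, while the iteration count is arbitrary:

```python
# Zero-sum game: row player maximizes x^T A y, column player minimizes.
A = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
n = 3
row_counts = [0] * n          # empirical play counts for each player
col_counts = [0] * n

for _ in range(20000):
    # Best responses to the opponent's empirical mixed strategy; Python's
    # max/min return the first optimizer, i.e. lexicographic tiebreaking.
    i = max(range(n), key=lambda r: sum(A[r][c] * col_counts[c] for c in range(n)))
    j = min(range(n), key=lambda c: sum(A[r][c] * row_counts[r] for r in range(n)))
    row_counts[i] += 1
    col_counts[j] += 1

T = sum(row_counts)
x = [c / T for c in row_counts]
y = [c / T for c in col_counts]
# Weak duality: lower <= game value <= upper; the gap shrinks as play continues.
upper = max(sum(A[i][j] * y[j] for j in range(n)) for i in range(n))
lower = min(sum(A[i][j] * x[i] for i in range(n)) for j in range(n))
```

The empirical mixtures approach the optimal strategy (proportional to the reciprocals of the diagonal entries), and the duality gap between `lower` and `upper` brackets the value 6/11 ever more tightly; the rate of that shrinkage is the subject of Karlin's conjecture.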