Randomized Kernel Rounding for Routing, Scheduling, and Machine Learning

Author(s)
Farhadi, Majid
Advisor(s)
Editor(s)
Associated Organization(s)
Organizational Unit
School of Computer Science
School established in 2007
Series
Supplementary to:
Abstract
From computer science, machine learning, communications, and control systems to supply chains, finance, and policy making, we make sequences of choices whose optimality can be decisive in maximizing revenue or critical for ensuring robustness, security, and reliability. In many cases, solving the exact problem is not feasible, e.g., due to limited computational and storage resources, missing key data, or intrinsic hardness barriers. Researchers therefore develop algorithms that guarantee approximately correct/optimal solutions. In this line, we develop new algorithms with improved performance guarantees for several classical optimization problems. We show how simple modifications of the objective function can adapt these problems to various scenarios, while possibly changing their computational complexity significantly: for example, the Traveling Salesman Problem is easier to approximate than its counterpart, the Traveling Repairman Problem (Chapter 2), and Set Cover is harder than Minimum Sum Set Cover (Chapter 3). We study a continuous spectrum of objective functions that connects such problems and develop unified algorithms that provide close-to-optimal solutions for them. In particular, we discover that transforming the intermediate solutions obtained from convex/linear relaxations of these problems, using kernels that can be tuned for best performance, can significantly improve approximation guarantees for several of them.

The first set of problems we study, in Chapter 2, are routing problems, for which we provide both combinatorial and linear-programming rounding algorithms that post-process solutions to mild relaxations of the problems; this post-processing stage is optimized for the best approximation factors. In Chapter 3 we study a more abstract family of problems, known as Multiple Intents Ranking, in which we must order a set of items to minimize the average delay in covering a given collection of subsets. Our kernel rounding method significantly decreases the loss incurred when rounding intermediate solutions obtained from the linear programming relaxations of these problems. We show that the same framework, with different kernels, can improve approximation guarantees for several classical special cases of the set/vertex cover problem.

In Chapter 4 we study a generalization of the problems of the previous chapters, the Minimum Linear Ordering Problem, where the goal is again to find an ordering of a set of items, but with an arbitrary cost function aggregated over prefixes of the ordering (instead of a structured delay in coverage). We design a new approximation algorithm for the case where the cost function is monotone and submodular, and investigate several special cases from a computational complexity perspective. In Chapter 5 we study a set of problems for which the relaxation is not fractional but high-dimensional. For these problems, which arise at the intersection of dimension reduction (in machine learning), spectral graph theory, and the theory of Markov processes, we examine two approaches for post-processing the intermediate solution: random projections versus exploiting the structure of the underlying graph. For the latter, we provide tight NP-hardness results that shed further light on the non-convex optimization landscape of the studied problems.
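To make the kernel rounding idea above concrete, the sketch below illustrates (very roughly) random-threshold rounding of a fractional LP solution, where the threshold's distribution is shaped by a tunable kernel. This is a minimal illustration of the general technique under that assumption, not the dissertation's actual algorithms; the function `kernel_rounding` and the example kernels are hypothetical.

```python
import random

def kernel_rounding(x, kernel=lambda t: t, rng=random):
    """Round a fractional solution x in [0, 1]^n by a random threshold.

    A value t is drawn uniformly from (0, 1) and transformed by the
    kernel into a threshold alpha; every coordinate with x_i >= alpha
    is rounded up to 1.  Changing the kernel reshapes the threshold's
    distribution, which is the knob one would tune per problem.
    (Illustrative sketch only; not the algorithms from the thesis.)
    """
    alpha = kernel(rng.random())  # transformed random threshold in (0, 1)
    return [1 if xi >= alpha else 0 for xi in x]


if __name__ == "__main__":
    x = [0.9, 0.35, 0.6, 0.1]  # a hypothetical fractional LP solution
    print(kernel_rounding(x))                            # identity kernel: plain threshold rounding
    print(kernel_rounding(x, kernel=lambda t: t ** 2))   # skewed kernel: rounds coordinates up more often
```

In expectation, the identity kernel rounds coordinate i up with probability exactly x_i; a different kernel biases the threshold, which is the kind of degree of freedom one optimizes to improve approximation factors.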
Sponsor
Date
2022-05-04
Extent
Resource Type
Text
Resource Subtype
Dissertation
Rights Statement
Rights URI