Title:
Algorithmic, game theoretic and learning theoretic aspects of distributed optimization

dc.contributor.advisor Balcan, Nina
dc.contributor.advisor Shamma, Jeff S.
dc.contributor.author Ehrlich, Steven Jeremy
dc.contributor.committeeMember Blekherman, Greg
dc.contributor.committeeMember Fortnow, Lance
dc.contributor.committeeMember Mansour, Yishay
dc.contributor.committeeMember Randall, Dana
dc.contributor.department Computer Science
dc.date.accessioned 2017-01-11T14:01:15Z
dc.date.available 2017-01-11T14:01:15Z
dc.date.created 2016-12
dc.date.issued 2016-08-26
dc.date.submitted December 2016
dc.date.updated 2017-01-11T14:01:15Z
dc.description.abstract Distributed systems are fundamental to today's world. Many modern problems involve multiple agents either competing or coordinating across a network, and even tasks that are not inherently distributed are often divided to accommodate today's computing resources. In this thesis we consider distributed optimization through the lens of several problems. We first consider the fragility of distributed systems, with an investigation in game theory. The inefficiency, relative to total cooperation, of agents acting myopically in their own interest is well studied as the so-called Price of Anarchy. We assess how much further the social welfare can degrade due to repeated small disruptions. We consider two models of disruptions. In the first, agents perceive costs subject to a small adversarial perturbation; in the second, a small number of Byzantine players attempt to influence the system. For both models we improve upper and lower bounds on how much social welfare can degrade for several interesting classes of games. We next consider several problems in which agents have partial information and wish to efficiently coordinate on a solution. We measure the cost of their coordination by the amount of communication the agents must exchange. We begin with a problem in active and semi-supervised learning. After providing a novel algorithm for this problem in the centralized case, we consider the communication cost of this algorithm when the examples are distributed among several agents. We then turn to the problem of clustering when the data set has been distributed among many agents. Here we devise an algorithm for coordinating on a global approximation that can be communicated efficiently through the use of coresets. Finally, we consider a problem of submodular maximization where the objective function has been distributed among agents. We adapt a centralized approximation algorithm to the distributed setting with efficient communication between the agents.
dc.description.degree Ph.D.
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/1853/56235
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Distributed
dc.subject Learning Theory
dc.title Algorithmic, game theoretic and learning theoretic aspects of distributed optimization
dc.type Text
dc.type.genre Dissertation
dspace.entity.type Publication
local.contributor.corporatename College of Computing
local.contributor.corporatename School of Computer Science
relation.isOrgUnitOfPublication c8892b3c-8db6-4b7b-a33a-1b67f7db2021
relation.isOrgUnitOfPublication 6b42174a-e0e1-40e3-a581-47bed0470a1e
thesis.degree.level Doctoral
Files
Original bundle
Name: EHRLICH-DISSERTATION-2016.pdf
Size: 1.47 MB
Format: Adobe Portable Document Format
License bundle
Name: LICENSE.txt
Size: 3.87 KB
Format: Plain Text