Organizational Unit:
Algorithms and Randomness Center
Publication Search Results
Now showing 1-10 of 125

Item: Group Fairness in Combinatorial Optimization (2021-11-08)
Munagala, Kamesh

Consider the following classical network design model. There are n clients in a multigraph with a single sink node. Each edge has a cost to buy, and a length if bought; typically, costlier edges have smaller lengths. There is a budget B on the total cost of edges bought. Given a set of bought edges, the distance of a client to the sink is the shortest path according to the edge lengths. Such a model captures buy-at-bulk network design and facility location as special cases. Rather than pose this as a standard optimization problem, we ask a different question: if a provider is allocating budget B to build this network, how should it do so in a manner that is fair to the clients? We consider a classical model of group fairness termed the core in cooperative game theory: if each client contributes its share B/n of the budget as tax money, no subset of clients should be able to pool their tax money to deviate and build a different network that simultaneously improves all their distances to the sink. The question is: does such a solution always exist, or approximately exist? We consider an abstract “committee selection” model from the social choice literature that captures not only the above problem, but other combinatorial optimization problems where we need to provision public resources, subject to combinatorial constraints, in order to provide utility to clients. For this general model, we show that an approximately fair solution always exists, where the approximation scales down the tax money each client can use for deviation by only a constant factor. Our existence result relies on rounding an interesting fractional relaxation of this problem. In certain cases, such as the facility location problem, it also implies a polynomial-time algorithm. We also show similar results when the approximation is instead on the utility that clients derive by deviating.
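
The core condition in the abstract can be checked by brute force on tiny instances. Below is a hypothetical sketch in the committee-selection framing: `utils[i][c]` is client i's utility for candidate c, a committee of size b is funded by n clients, and a coalition S blocks if it can afford a committee T (of size at most |S|·b/n) that every member strictly prefers. All names are illustrative, and the exhaustive search is exponential, unlike the rounding-based approach the talk describes.

```python
from itertools import combinations

def blocking_coalition(utils, committee, b):
    """Brute-force core check for committee selection: return a
    blocking (coalition, committee) pair if one exists, else None.
    Exponential-time; a toy illustration of the core condition only."""
    n = len(utils)
    cands = range(len(utils[0]))

    def u(i, T):
        return sum(utils[i][c] for c in T)

    for s in range(1, n + 1):
        budget = (s * b) // n          # tax money the coalition can pool
        for S in combinations(range(n), s):
            for t in range(1, budget + 1):
                for T in combinations(cands, t):
                    # blocking requires a STRICT improvement for all of S
                    if all(u(i, T) > u(i, committee) for i in S):
                        return S, T
    return None
```

With two clients who both prefer candidate 1, the grand coalition blocks the committee {0}; if they both prefer candidate 0, no coalition blocks it.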

Item: Surprises in overparameterized linear classification (2021-10-25)
Muthukumar, Vidya

Seemingly counterintuitive phenomena in deep neural networks and kernel methods have prompted a recent reinvestigation of classical machine learning methods, like linear models. Of particular focus are sufficiently high-dimensional setups in which interpolation of the training data is possible. In this talk, we will first briefly review recent works showing that zero regularization, or fitting of noise, need not be harmful in regression tasks. Then, we will use this insight to uncover two new surprises for high-dimensional linear classification: least-2-norm interpolation can classify consistently even when the corresponding regression task fails, and the support-vector-machine and least-2-norm interpolation solutions exactly coincide in sufficiently high-dimensional linear models. These findings taken together imply that the linear SVM can generalize well in settings beyond those predicted by training-data-dependent complexity measures. This is joint work with Misha Belkin, Daniel Hsu, Adhyyan Narang, Anant Sahai, Vignesh Subramanian, Christos Thrampoulidis, Ke Wang and Ji Xu.
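
As a small numerical illustration of the least-2-norm interpolator the abstract refers to: the minimum-norm solution of Xw = y can be computed with the pseudoinverse, and in an overparameterized setup (d much larger than n) it fits every training label exactly. The dimensions and data below are made up; this sketch only sets up the object under study, and does not reproduce the talk's SVM-equivalence result.

```python
import numpy as np

def min_norm_interpolator(X, y):
    """Least-2-norm interpolating linear predictor: the minimum-norm
    solution of Xw = y, obtained via the Moore-Penrose pseudoinverse."""
    return np.linalg.pinv(X) @ y

# Toy overparameterized setup: d >> n, labels in {-1, +1}.
rng = np.random.default_rng(0)
n, d = 20, 500
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)
w = min_norm_interpolator(X, y)
# w interpolates the data, so sign(Xw) recovers every training label
```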

Item: Finding and Counting k-cuts in Graphs (2021-10-18)
Gupta, Anupam

For an undirected graph with edge weights, a k-cut is a set of edges whose deletion breaks the graph into at least k connected components. How fast can we find a minimum-weight k-cut? And how many minimum k-cuts can a graph have? The two problems are closely linked. In 1996 Karger and Stein showed how to find a minimum k-cut in approximately n^{2k-2} time; their proof also bounded the number of minimum k-cuts by n^{2k-2}, using the probabilistic method. Prior to our work, these were the best results known. Moreover, neither result was known to be tight, except for the case of k = 2 (which is the classical problem of finding graph min-cuts). In this talk, we show how both of these results can be improved to approximately n^k. We discuss how extremal bounds for set systems, plus a refined analysis of Karger's contraction algorithm, can give near-optimal bounds. This is joint work with Euiwoong Lee (U. Michigan), Jason Li (CMU), and David Harris (Maryland).
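
A minimal sketch of the contraction idea behind these bounds: repeatedly merge the endpoints of random edges until only k super-nodes remain, and take the best cut over many independent trials. This toy version contracts uniformly random edges rather than weight-proportional ones, and omits both the Karger-Stein recursive speedup and the refined analysis from the talk.

```python
import random

def contract_kcut(edges, n, k, trials=500):
    """Monte Carlo estimate of a minimum-weight k-cut by random edge
    contraction. edges is a list of (u, v, w) triples on vertices
    0..n-1. Toy sketch: uniform (not weight-proportional) contraction."""
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        comps = n
        pool = edges[:]
        random.shuffle(pool)
        # contract random edges until only k super-nodes remain
        for u, v, _w in pool:
            if comps == k:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                comps -= 1
        # the k-cut weight is the total weight crossing super-nodes
        cut = sum(w for u, v, w in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```

On two heavy triangles joined by one light bridge, the walk reliably finds the bridge as the minimum 2-cut.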

Item: Recent Advances on the Maximum Flow Problem (2021-10-04)
Sidford, Aaron

The maximum flow problem is an incredibly well-studied problem in combinatorial optimization. The problem encompasses a range of cut, matching, and scheduling problems, and is a key proving ground for new techniques in continuous optimization and algorithmic graph theory. In this talk I will survey recent, provably faster algorithms for solving this problem. Further, I will highlight how recent advances in solving mixed l2-lp flows can be coupled with interior point methods to obtain improved running times for solving the problem on unit-capacity graphs. The maximum flow problem on unit-capacity graphs encompasses fundamental combinatorial optimization problems including bipartite matching and computing disjoint paths, and I will discuss how this line of work has led to state-of-the-art, almost-m^(4/3)-time algorithms for solving these problems on m-edge graphs. This talk focuses on joint work with Yang P. Liu and Tarun Kathuria.
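
On unit-capacity graphs, the classical baseline that these newer methods improve on is augmenting along shortest paths in the residual graph. A minimal sketch, with the graph given as a dict of unit-capacity arcs (this is the textbook BFS-augmenting approach, not the interior-point machinery from the talk):

```python
from collections import deque

def max_flow_unit(adj, s, t):
    """Max flow on a unit-capacity directed graph via shortest
    augmenting paths (Edmonds-Karp style). adj maps each vertex to a
    list of out-neighbors; every arc has capacity 1."""
    cap = {(u, v): 1 for u in adj for v in adj[u]}
    nbrs = {u: set() for u in adj}
    for u in adj:
        for v in adj[u]:
            nbrs.setdefault(v, set())
            nbrs[u].add(v)
            nbrs[v].add(u)           # residual (reverse) arc
            cap.setdefault((v, u), 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        pred = {s: None}
        q = deque([s])
        while q and t not in pred:
            u = q.popleft()
            for v in nbrs[u]:
                if v not in pred and cap[(u, v)] > 0:
                    pred[v] = u
                    q.append(v)
        if t not in pred:
            return flow
        # augment one unit of flow along the path found
        v = t
        while pred[v] is not None:
            u = pred[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

Run on a tiny bipartite-matching-style graph, it finds both disjoint s-t paths.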

Item: Distribution testing: Classical and new paradigms (2020-03-02)
Aliakbarpour, Maryam

One of the most fundamental problems in learning theory is to view input data as random samples from an unknown distribution and then to make statistical inferences about the underlying distribution. In this talk, we focus on a notable example of such a statistical task: testing properties of distributions. The goal is to design an algorithm that uses as few samples as possible from a distribution and distinguishes whether the distribution has the property, or it is $\epsilon$-far in $\ell_1$-distance from any distribution which has the property. In this talk, we explore several questions in the framework of distribution testing, such as: (i) Is the distribution uniform, or is it far from being uniform? (ii) Is a pair of random variables independent or correlated? (iii) Is the distribution monotone? Moreover, we discuss extensions of the standard testing framework to more practical settings. For instance, we consider the case where the sensitivity of the input samples (e.g., patients’ medical records) requires the design of statistical tests that ensure the privacy of individuals. We address this case by designing differentially private testing algorithms for several testing questions with (nearly) optimal sample complexities.
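
Question (i), uniformity testing, is often approached through collision counting: among m samples, the fraction of colliding pairs concentrates near 1/n for the uniform distribution on n elements, and is noticeably larger for distributions that are far from uniform. A minimal sketch of that statistic (the textbook collision tester, not the differentially private algorithms the talk develops):

```python
from itertools import combinations

def collision_rate(samples):
    """Fraction of sample pairs that agree. For m samples from the
    uniform distribution on n elements this concentrates near 1/n;
    distributions far from uniform in l1 have a noticeably higher
    rate. Quadratic-time sketch, fine for small m."""
    m = len(samples)
    hits = sum(1 for a, b in combinations(samples, 2) if a == b)
    return hits / (m * (m - 1) / 2)
```

A point mass collides on every pair, while a balanced sample over 10 elements collides on roughly a tenth of them.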

Item: Improved Analysis of Higher Order Random Walks and Applications (2020-02-10)
Alev, Vedat Levi

Local spectral expansion is a very useful method for arguing about the spectral properties of several random walk matrices over simplicial complexes. The motivation of this work is to extend this method to analyze the mixing times of Markov chains for combinatorial problems. Our main result is a sharp upper bound on the second eigenvalue of the down-up walk on a pure simplicial complex, in terms of the second eigenvalues of its links. We show some applications of this result in analyzing mixing times of Markov chains, including sampling independent sets of a graph. (https://arxiv.org/abs/2001.02827) Joint work with Lap Chi Lau.
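
The quantity being bounded, the second eigenvalue of a walk matrix, can be computed directly for small chains. A generic numerical sketch (it assumes a row-stochastic matrix of a reversible chain, so the spectrum is real; purely illustrative, not the link-based bound from the talk):

```python
import numpy as np

def second_eigenvalue(P):
    """Second-largest eigenvalue of a row-stochastic walk matrix P.
    Assumes a reversible chain, so the spectrum is real (any tiny
    imaginary parts from numerical noise are discarded)."""
    vals = np.sort(np.linalg.eigvals(P).real)[::-1]
    return vals[1]
```

For the lazy-free walk on the complete graph K4, the spectrum is {1, -1/3, -1/3, -1/3}, so the second eigenvalue is -1/3.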

Item: Spectral Independence in High-Dimensional Expanders and Applications to the Hardcore Model (2020-01-27)
Liu, Kuikui

We say a probability distribution µ is spectrally independent if an associated correlation matrix has a bounded largest eigenvalue for the distribution and all of its conditional distributions. We prove that if µ is spectrally independent, then the corresponding high-dimensional simplicial complex is a local spectral expander. Using a line of recent works on the mixing time of high-dimensional walks on simplicial complexes [KM17; DK17; KO18; AL19], this implies that the corresponding Glauber dynamics mixes rapidly and generates (approximate) samples from µ. As an application, we show that the natural Glauber dynamics mixes rapidly (in polynomial time) to generate a random independent set from the hardcore model up to the uniqueness threshold. This improves the quasi-polynomial running time of Weitz’s deterministic correlation decay algorithm [Wei06] for estimating the hardcore partition function, also answering a long-standing open problem on the mixing time of Glauber dynamics [LV97; LV99; DG00; Vig01; Eft+16]. Joint work with Nima Anari and Shayan Oveis Gharan.
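
For concreteness, the Glauber dynamics for the hardcore model updates one vertex at a time: the chosen vertex leaves the independent set if a neighbor occupies it, and otherwise joins with probability λ/(1+λ). A minimal sketch of this chain (the mixing-time guarantee, not the chain itself, is the talk's contribution):

```python
import random

def hardcore_glauber(adj, lam, steps, seed=0):
    """Glauber dynamics for the hardcore model with fugacity lam on a
    graph given as an adjacency dict. Each step picks a uniformly
    random vertex; it joins the independent set with probability
    lam / (1 + lam) if no neighbor is occupied, else it leaves.
    Returns the (independent) set after the given number of steps."""
    rng = random.Random(seed)
    verts = list(adj)
    in_set = {v: False for v in verts}
    for _ in range(steps):
        v = rng.choice(verts)
        if any(in_set[u] for u in adj[v]):
            in_set[v] = False        # occupied neighbor forces v out
        else:
            in_set[v] = rng.random() < lam / (1 + lam)
    return {v for v in verts if in_set[v]}
```

The update rule preserves independence: a vertex never turns on while a neighbor is occupied, so on a triangle the returned set has at most one vertex.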

Item: Robust Mean Estimation in Nearly-Linear Time (2019-12-02)
Hopkins, Samuel

Robust mean estimation is the following basic estimation question: given i.i.d. copies of a random vector X in d-dimensional Euclidean space, of which a small constant fraction are corrupted, how well can you estimate the mean of the distribution? This is a classical problem in statistics, going back to the 60's and 70's, and it has recently found application to many problems in reliable machine learning. However, in high dimensions, classical algorithms for this problem were either (1) computationally intractable, or (2) lost poly(d) factors in their accuracy guarantees. Recently, polynomial-time algorithms have been demonstrated for this problem that still achieve (nearly) optimal error guarantees. However, the running times of these algorithms were at least quadratic in the dimension or in 1/(desired accuracy), overhead which renders them ineffective in practice. In this talk we give the first truly nearly-linear-time algorithm for robust mean estimation which achieves nearly optimal statistical performance. Our algorithm is based on the matrix multiplicative weights method. Based on joint work with Yihe Dong and Jerry Li, to appear in NeurIPS 2019.
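
A far simpler (and much weaker) relative of this problem's filtering approach is to iteratively discard the points farthest from the current mean estimate. The sketch below only illustrates the corruption model; it carries none of the nearly-linear-time or optimal-error guarantees from the talk, and against adaptive high-dimensional corruptions plain distance trimming provably loses poly(d) factors.

```python
import numpy as np

def trimmed_mean(X, eps, iters=3):
    """Naive robust mean heuristic: repeatedly drop the eps-fraction
    of points farthest (in Euclidean distance) from the current mean,
    then average what remains. X has one sample per row."""
    X = np.asarray(X, dtype=float)
    for _ in range(iters):
        mu = X.mean(axis=0)
        d = np.linalg.norm(X - mu, axis=1)
        X = X[d <= np.quantile(d, 1 - eps)]   # keep the closest 1-eps
    return X.mean(axis=0)
```

With ~10% of points replaced by a distant cluster, the trimmed estimate stays near the true mean while the naive mean is dragged far away.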

Item: Fast Approximation Algorithms and Complexity Analysis for Design of Networked Systems (2019-11-18)
Yi, Yuhao

This talk focuses on network design algorithms for optimizing average consensus dynamics, dynamics that are widely used for information diffusion and distributed coordination in networked control systems. Network design algorithms seek to modify the network to improve the performance of the dynamical system. This can be achieved by controlling a subset of vertices or adding/removing edges in the network. We provide new algorithmic and hardness results for two network design problems. The first problem is selecting at most k vertices as leaders so as to minimize the steady-state variance of the system. We prove the NP-hardness of the problem, and propose a greedy algorithm with an approximation factor arbitrarily close to (1 - k/(k-1) * 1/e), which runs in nearly-linear time in km, where m is the number of edges. The second problem is adding at most k edges from a candidate edge set to minimize network entropy. This problem is equivalent to maximizing the log of the number of spanning trees in a connected graph. We propose an algorithm that runs in nearly-linear time in m with an approximation factor arbitrarily close to (1 - 1/e), and we prove hardness of approximation of the problem. Finally, we summarize algorithmic and complexity results related to network design and discuss how our methods fit into context, and we also propose some ideas for future work.
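
Both problems are attacked with greedy-style selection, whose generic pattern is to repeatedly add the element with the largest marginal gain under the objective. A schematic sketch of that pattern (the `gain` oracle here is a placeholder; the talk's contribution lies in the specific consensus objectives, the fast gain estimation, and the approximation analysis):

```python
def greedy_select(ground, k, gain):
    """Generic greedy selection: add, k times, the element with the
    largest marginal gain given the elements chosen so far. gain is
    a user-supplied oracle gain(chosen, e) -> number."""
    chosen = []
    rest = set(ground)
    for _ in range(k):
        best = max(rest, key=lambda e: gain(chosen, e))
        chosen.append(best)
        rest.remove(best)
    return chosen
```

As a stand-in objective, a toy coverage function shows the mechanics: the set covering the most new elements is picked first.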

Item: Rapidly Mixing Random Walks via Log-Concave Polynomials (Part 2) (2019-11-06)
Anari, Nima

(This is Part 2, a continuation of Tuesday's lecture.) A fundamental tool used in sampling, counting, and inference problems is the Markov Chain Monte Carlo method, which uses random walks to solve computational problems. The main parameter defining the efficiency of this method is how quickly the random walk mixes (converges to the stationary distribution). The goal of these talks is to introduce a new approach for analyzing the mixing time of random walks on high-dimensional discrete objects. This approach works by directly relating the mixing time to analytic properties of a certain multivariate generating polynomial. As our main application we will analyze basis-exchange random walks on the set of bases of a matroid. We will show that the corresponding multivariate polynomial is log-concave over the positive orthant, and use this property to show three progressively improving mixing time bounds for a matroid of rank r on a ground set of n elements:
- We will first show a mixing time of O(r^2 log n) by analyzing the spectral gap of the random walk (based on related works on high-dimensional expanders).
- Then we will show a mixing time of O(r log r + r log log n) based on the modified log-Sobolev inequality (MLSI), due to Cryan, Guo, and Mousa.
- We will then completely remove the dependence on n, and show the tight mixing time of O(r log r), by appealing to variants of well-studied notions in discrete convexity.
Time permitting, I will discuss further recent developments, including relaxed notions of log-concavity of a polynomial, and applications to further sampling/counting problems. Based on joint works with Kuikui Liu, Shayan Oveis Gharan, and Cynthia Vinzant.
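
The basis-exchange (down-up) walk itself is simple to state: drop a uniformly random element of the current basis, then add back a uniformly random element that keeps the set a basis. A sketch on the uniform matroid U(r, n), where every r-subset of the ground set is a basis; for a general matroid one would swap in an independence oracle in place of the `allowed` test. The mixing bounds above, not the walk, are the talks' subject.

```python
import random

def basis_exchange_walk(n, r, steps, seed=0):
    """Down-up (basis-exchange) walk on the uniform matroid U(r, n):
    each step removes a uniformly random element of the current
    r-subset, then adds a uniformly random element that keeps the set
    a basis (in U(r, n), any element not already present)."""
    rng = random.Random(seed)
    basis = set(range(r))              # start from an arbitrary basis
    for _ in range(steps):
        e = rng.choice(sorted(basis))  # "down" step: drop an element
        basis.remove(e)
        # "up" step: re-add any element keeping the set a basis
        allowed = [x for x in range(n) if x not in basis]
        basis.add(rng.choice(allowed))
    return basis
```

Every state of the walk is a basis, so the returned set always has exactly r elements of the ground set.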