Organizational Unit: Transdisciplinary Research Institute for Advancing Data Science
Publication Search Results
Now showing 1 - 10 of 16

Lecture 4: Mathematics for Deep Neural Networks: Statistical theory for deep ReLU networks (2019-03-15)
Schmidt-Hieber, Johannes; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; University of Twente. Dept. of Applied Mathematics
We outline the theory underlying the recent bounds on the estimation risk of deep ReLU networks. In the lecture, we discuss specific properties of the ReLU activation function that relate to skip connections and efficient approximation of polynomials. Based on this, we show how risk bounds can be obtained for sparsely connected networks.
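
As a rough, non-authoritative illustration of the two ReLU properties mentioned in the abstract, the sketch below shows (1) the exact identity x = relu(x) - relu(-x), which lets a ReLU layer pass a signal through unchanged, and (2) a Yarotsky-style approximation of x² on [0, 1] by composing a ReLU-built tent map. The construction is a standard one from the approximation literature, and the function names and toy grid are my own; it is not necessarily the exact argument used in the lecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def identity_via_relu(x):
    # A pair of ReLU units reproduces x exactly, so a layer can forward its
    # input unchanged (the mechanism behind synthesized skip connections).
    return relu(x) - relu(-x)

def tent(x):
    # Tent map on [0, 1] built from three ReLU units: 2x on [0, 1/2], 2(1 - x) on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def square_approx(x, depth):
    # f_m(x) = x - sum_{k=1}^m tent^{(k)}(x) / 4^k approximates x^2 on [0, 1]
    # with sup-error 4^(-(m+1)); each extra term costs one more composed "layer".
    out, t = x.copy(), x.copy()
    for k in range(1, depth + 1):
        t = tent(t)
        out -= t / 4.0 ** k
    return out

x = np.linspace(0.0, 1.0, 1001)
assert np.allclose(identity_via_relu(x), x)
for m in (1, 3, 5):
    err = np.max(np.abs(square_approx(x, m) - x ** 2))
    print(f"depth {m}: sup-error {err:.2e} (bound {4.0 ** -(m + 1):.2e})")
```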

Mean Estimation: Median of Means Tournaments (Georgia Institute of Technology, 2018-10-25)
Lugosi, Gabor; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; Georgia Institute of Technology. School of Mathematics; Universitat Pompeu Fabra
In these lectures we discuss some statistical problems with an interesting combinatorial structure behind them. We start by reviewing the "hidden clique" problem, a simple prototypical example with a surprisingly rich structure. We also discuss various "combinatorial" testing problems and their connections to high-dimensional random geometric graphs. Time permitting, we study the problem of estimating the mean of a random variable.
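
A minimal sketch of the median-of-means estimator referenced in the lecture title, offered only as orientation: split the sample into blocks, average each block, and report the median of the block means. The block count and the heavy-tailed toy distribution below are arbitrary choices of mine, and the full tournament construction for multivariate means is not shown.

```python
import numpy as np

def median_of_means(x, n_blocks):
    """Median of the block means of a 1-D sample; robust to heavy-tailed noise."""
    x = np.random.default_rng(0).permutation(x)        # shuffle before blocking
    blocks = np.array_split(x, n_blocks)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(1)
sample = rng.standard_t(df=2.5, size=10_000)           # heavy-tailed sample, true mean 0
print("empirical mean  :", sample.mean())
print("median of means :", median_of_means(sample, n_blocks=30))
```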

Lecture 5: Inference and Uncertainty Quantification for Noisy Matrix Completion (2019-09-05)
Chen, Yuxin; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; Princeton University. Dept. of Electrical Engineering
Noisy matrix completion aims at estimating a low-rank matrix given only partial and corrupted entries. Despite substantial progress in designing efficient estimation algorithms, it remains largely unclear how to assess the uncertainty of the obtained estimates and how to perform statistical inference on the unknown matrix (e.g. constructing a valid and short confidence interval for an unseen entry). This talk takes a step towards inference and uncertainty quantification for noisy matrix completion. We develop a simple procedure to compensate for the bias of the widely used convex and nonconvex estimators. The resulting de-biased estimators admit nearly precise non-asymptotic distributional characterizations, which in turn enable optimal construction of confidence intervals / regions for, say, the missing entries and the low-rank factors. Our inferential procedures do not rely on sample splitting, thus avoiding unnecessary loss of data efficiency. As a byproduct, we obtain a sharp characterization of the estimation accuracy of our de-biased estimators, which, to the best of our knowledge, are the first tractable algorithms that provably achieve full statistical efficiency (including the preconstant). The analysis herein is built upon the intimate link between convex and nonconvex optimization. This is joint work with Cong Ma, Yuling Yan, Yuejie Chi, and Jianqing Fan.
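
For orientation only, here is a hedged sketch of the general flavor of one-step bias compensation used in the matrix-completion literature: start from some low-rank estimate, add back the inverse-propensity-weighted residual on the observed entries, and re-project onto rank r. The helper functions, constants, and this particular correction are illustrative assumptions of mine and not necessarily the procedure developed in the talk, which also covers the confidence-interval construction omitted here.

```python
import numpy as np

def rank_r_truncation(A, r):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def debias(Z, M_obs, mask, p, r):
    """One-step correction: Z_d = P_r( Z + (1/p) * mask * (M_obs - Z) )."""
    return rank_r_truncation(Z + (mask * (M_obs - Z)) / p, r)

# Toy experiment: rank-2 ground truth, entries observed independently with probability p.
rng = np.random.default_rng(0)
n, r, p, sigma = 200, 2, 0.3, 0.1
M_star = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < p
M_obs = mask * (M_star + sigma * rng.standard_normal((n, n)))

Z0 = rank_r_truncation(M_obs / p, r)       # crude spectral initial estimate
Z1 = debias(Z0, M_obs, mask, p, r)
for name, Z in [("initial", Z0), ("debiased", Z1)]:
    print(name, "relative error:", np.linalg.norm(Z - M_star) / np.linalg.norm(M_star))
```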

Compatibility and the Lasso (Georgia Institute of Technology, 2018-09-04)
van de Geer, Sara; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; Georgia Institute of Technology. School of Mathematics; ETH Zürich
There will be three lectures, which in principle will be independent units. Their common theme is exploiting sparsity in high-dimensional statistics. Sparsity means that the statistical model is allowed to have quite a few parameters, but that it is believed that most of these parameters are actually not relevant. We let the data themselves decide which parameters to keep by applying a regularization method. The aim is then to derive so-called sparsity oracle inequalities. In the first lecture, we consider a statistical procedure called M-estimation. "M" stands here for "minimum": one tries to minimize a risk function, in order to obtain the best fit to the data. Least squares is a prominent example. Regularization is done by adding a sparsity-inducing penalty that discourages too good a fit to the data. An example is the l₁-penalty, which together with least squares gives rise to an estimation procedure called the Lasso. We address the question: why does the l₁-penalty lead to sparsity oracle inequalities and how does this generalize to other norms? We will see in the first lecture that one needs conditions which relate the penalty to the risk function. They have in a certain sense to be "compatible". We discuss these compatibility conditions in the second lecture in the context of the Lasso, where the l₁-penalty needs to be compatible with the least squares risk, i.e. with the l₂-norm. We give as example the total variation penalty. For D := {x₁, …, xₙ} ⊂ ℝ an increasing sequence, the total variation of a function f : D → ℝ is the sum of the absolute values of its jump sizes. We derive compatibility and, as a consequence, a sparsity oracle inequality which shows adaptation to the number of jumps. In the third lecture we use sparsity to establish confidence intervals for a parameter of interest. The idea is to use the penalized estimator as an initial estimator in a one-step Newton-Raphson procedure. Functionals of this new estimator can under certain conditions be shown to be asymptotically normally distributed. We show that in the high-dimensional case, one may further profit from sparsity conditions if the inverse Hessian of the problem is not sparse.
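
As a small illustration of the l₁-penalized least squares (Lasso) procedure discussed above, here is a hedged sketch that solves the Lasso by proximal gradient descent (ISTA), where the l₁-penalty enters only through a soft-thresholding step that produces the sparsity. The data-generating choices and the tuning parameter below are arbitrary, and this is not code from the lectures.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    # minimize (1/2n) ||y - X beta||_2^2 + lam * ||beta||_1 via proximal gradient descent.
    n, d = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)    # 1 / Lipschitz constant of the gradient
    beta = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(0)
n, d, s = 100, 500, 5                               # high-dimensional: d >> n, s-sparse truth
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:s] = 3.0
y = X @ beta_star + 0.5 * rng.standard_normal(n)
beta_hat = lasso_ista(X, y, lam=0.3)
print("indices of nonzero coefficients:", np.flatnonzero(np.abs(beta_hat) > 1e-6))
```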

Lecture 3: Mathematics for Deep Neural Networks: Advantages of Additional Layers (2019-03-13)
Schmidt-Hieber, Johannes; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; University of Twente. Dept. of Applied Mathematics
Why are deep networks better than shallow networks? We provide a survey of the existing ideas in the literature. In particular, we discuss localization of deep networks, functions that can be easily approximated by deep networks and, finally, the Kolmogorov-Arnold representation theorem.

Lecture 3: Projected Power Method: An Efficient Algorithm for Joint Discrete Assignment (2019-09-03)
Chen, Yuxin; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; Princeton University. Dept. of Electrical Engineering
Various applications involve assigning discrete label values to a collection of objects based on some pairwise noisy data. Due to the discrete---and hence nonconvex---structure of the problem, computing the optimal assignment (e.g. maximum likelihood assignment) becomes intractable at first sight. This paper makes progress towards efficient computation by focusing on a concrete joint discrete alignment problem---that is, the problem of recovering n discrete variables given noisy observations of their modulo differences. We propose a low-complexity and model-free procedure, which operates in a lifted space by representing distinct label values in orthogonal directions, and which attempts to optimize quadratic functions over hypercubes. Starting with a first guess computed via a spectral method, the algorithm successively refines the iterates via projected power iterations. We prove that for a broad class of statistical models, the proposed projected power method makes no error---and hence converges to the maximum likelihood estimate---in a suitable regime. Numerical experiments have been carried out on both synthetic and real data to demonstrate the practicality of our algorithm. We expect this algorithmic framework to be effective for a broad range of discrete assignment problems. This is joint work with Emmanuel Candes.
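
To convey the shape of the projected power iterations described above, here is a hedged sketch of the simplest binary special case (two label values, where the lifted one-hot representation and the hypercube projection collapse to a sign step): initialize with the leading eigenvector, then repeatedly apply the data matrix and project each coordinate back onto {-1, +1}. The noise model and all constants are my own toy choices, not those of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n, flip_prob = 300, 0.25
x_star = rng.choice([-1.0, 1.0], size=n)              # ground-truth binary labels
A = np.outer(x_star, x_star)                          # clean pairwise agreements x_i * x_j

# Flip a fraction of the pairwise signs, keeping the observation matrix symmetric.
noise = np.where(rng.random((n, n)) < flip_prob, -1.0, 1.0)
noise = np.triu(noise, 1)
noise = noise + noise.T + np.eye(n)
A = A * noise

# Spectral initialization: leading eigenvector projected onto {-1, +1}^n.
eigvals, eigvecs = np.linalg.eigh(A)
x = np.sign(eigvecs[:, -1])
x[x == 0] = 1.0

for _ in range(20):                                   # projected power iterations
    x = np.sign(A @ x)
    x[x == 0] = 1.0

# Labels are identifiable only up to a global sign flip.
accuracy = max(np.mean(x == x_star), np.mean(x == -x_star))
print("fraction of labels recovered:", accuracy)
```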

Lecture 4: Spectral Methods Meet Asymmetry: Two Recent Stories (2019-09-04)
Chen, Yuxin; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; Princeton University. Dept. of Electrical Engineering
This talk is concerned with the interplay between asymmetry and spectral methods. Imagine that we have access to an asymmetrically perturbed low-rank data matrix. We attempt estimation of the low-rank matrix via eigen-decomposition --- an uncommon approach when dealing with non-symmetric matrices. We provide two recent stories to demonstrate the advantages and effectiveness of this approach. The first story is concerned with top-K ranking from pairwise comparisons, for which the spectral method enables un-improvable ranking accuracy. The second story is concerned with matrix de-noising and spectral estimation, for which the eigen-decomposition method significantly outperforms the (unadjusted) SVD-based approach and is fully adaptive to heteroscedasticity without the need for careful bias correction. The first part of this talk is based on joint work with Cong Ma, Kaizheng Wang, and Jianqing Fan; the second part of this talk is based on joint work with Chen Cheng and Jianqing Fan.
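
A hedged numerical illustration of the second story, with my own toy rank-1 setup and noise level rather than the talk's: for a low-rank matrix observed under entrywise independent (hence asymmetric) noise, the leading eigenvalue of the asymmetric data matrix is compared with the leading singular value as an estimate of the true leading eigenvalue; the singular value picks up a noticeable upward bias that the eigenvalue largely avoids.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam_star, sigma = 1000, 8.0, 0.1
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
M_star = lam_star * np.outer(u, u)                    # symmetric rank-1 ground truth
H = sigma * rng.standard_normal((n, n))               # independent noise => asymmetric data matrix
M = M_star + H

top_eigval = np.linalg.eigvals(M).real.max()          # eigen-decomposition of the asymmetric matrix
top_singval = np.linalg.svd(M, compute_uv=False)[0]   # unadjusted SVD
print("true leading eigenvalue :", lam_star)
print("top eigenvalue estimate :", top_eigval)
print("top singular value      :", top_singval)
```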

Combinatorial Testing Problems (Georgia Institute of Technology, 2018-10-15)
Lugosi, Gabor; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; Georgia Institute of Technology. School of Mathematics; Universitat Pompeu Fabra
In these lectures we discuss some statistical problems with an interesting combinatorial structure behind them. We start by reviewing the "hidden clique" problem, a simple prototypical example with a surprisingly rich structure. We also discuss various "combinatorial" testing problems and their connections to high-dimensional random geometric graphs. Time permitting, we study the problem of estimating the mean of a random variable.
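
As a rough illustration of the hidden-clique testing problem mentioned in the abstract (with my own toy sizes and the simplest possible statistic; the lectures discuss far sharper tests): under the null the graph is G(n, 1/2), under the alternative a clique of size k is planted, and a plain edge-count statistic already separates the two once k is on the order of the square root of n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 120

def sample_graph(planted):
    # Strict upper triangle of the adjacency matrix of G(n, 1/2).
    A = np.triu(rng.random((n, n)) < 0.5, 1).astype(int)
    if planted:
        clique = np.sort(rng.choice(n, size=k, replace=False))
        A[np.ix_(clique, clique)] = np.triu(np.ones((k, k), dtype=int), 1)  # force all clique edges
    return A

def edge_count_statistic(A):
    N = n * (n - 1) // 2                           # number of potential edges
    return (A.sum() - N / 2) / np.sqrt(N / 4)      # standardized edge count, ~ N(0, 1) under the null

for planted in (False, True):
    print("planted clique:", planted, "| standardized edge count:",
          edge_count_statistic(sample_graph(planted)))
```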

Lecture 5: Mathematics for Deep Neural Networks: Energy landscape and open problems (2019-03-18)
Schmidt-Hieber, Johannes; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; University of Twente. Dept. of Applied Mathematics
To derive a theory for gradient descent methods, it is important to have some understanding of the energy landscape. In this lecture, an overview of existing results is given. The second part of the lecture is devoted to future challenges in the field. We describe important steps needed for the future development of the statistical theory of deep networks.

Lecture 2: Mathematics for Deep Neural Networks: Theory for shallow networks (2019-03-08)
Schmidt-Hieber, Johannes; Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science; University of Twente. Dept. of Applied Mathematics
We start with the universal approximation theorem and discuss several proof strategies that provide some insights into functions that can be easily approximated by shallow networks. Based on this, a survey on approximation rates for shallow networks is given. It is shown how this leads to estimation rates. In the lecture, we also discuss methods that fit shallow networks to data.
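
As a hedged, self-contained sketch of the last point above (fitting a shallow network to data), here is a one-hidden-layer ReLU network trained by plain gradient descent on a toy target; the width, learning rate, number of iterations, and target function are arbitrary choices of mine, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x)                          # smooth target to approximate

width, lr = 50, 0.05
w = rng.standard_normal(width)               # hidden-layer weights
b = rng.standard_normal(width)               # hidden-layer biases
c = 0.1 * rng.standard_normal(width)         # output weights

for _ in range(5000):
    pre = np.outer(x, w) + b                 # (n_samples, width) pre-activations
    h = np.maximum(pre, 0.0)                 # ReLU hidden layer
    resid = h @ c - y
    grad_c = h.T @ resid / len(x)
    grad_pre = np.outer(resid, c) * (pre > 0)
    grad_w = x @ grad_pre / len(x)
    grad_b = grad_pre.mean(axis=0)
    c -= lr * grad_c
    w -= lr * grad_w
    b -= lr * grad_b

fit = np.maximum(np.outer(x, w) + b, 0.0) @ c
print("max |f_hat - sin(3x)| on the grid:", np.max(np.abs(fit - y)))
```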