Title:
Second Order Machine Learning

Author(s)
Mahoney, Michael
Abstract
A major challenge for large-scale machine learning, and one that will only grow in importance as we develop models that are more and more domain-informed, is to move beyond high-variance first-order optimization methods to more robust second-order methods. Here, we consider the problem of minimizing the sum of a large number of functions over a convex constraint set, a problem that arises in many data analysis, machine learning, and more traditional scientific computing applications, as well as non-convex variants of this basic problem. While this is of interest in many situations, it has received particular attention recently due to the challenges associated with training so-called deep neural networks. We establish improved bounds for algorithms that incorporate sub-sampling as a way to improve computational efficiency while maintaining the original convergence properties of these algorithms. These methods exploit recent results from Randomized Linear Algebra on approximate matrix multiplication. Within the context of second-order optimization, they provide quantitative convergence results for variants of Newton's method in which the Hessian and/or the gradient is uniformly or non-uniformly sub-sampled, under much weaker assumptions than prior work.
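
To give a concrete feel for the kind of method discussed in the abstract, the following is a minimal Python sketch of a sub-sampled Newton iteration for regularized logistic regression, in which the Hessian is estimated from a uniform sub-sample of the loss terms. This is an illustrative assumption-laden sketch, not the specific algorithms, sampling schemes, or theoretical conditions analyzed in the talk; all function names and parameters are hypothetical.

# Illustrative sketch (not the speaker's exact algorithm): one flavor of
# sub-sampled Newton's method, where the Hessian of a finite-sum objective
# is approximated on a uniformly sampled subset of the data points.
import numpy as np

def logistic_loss_grad(w, X, y, lam):
    """Exact gradient of (1/n) * sum_i log(1 + exp(-y_i x_i^T w)) + (lam/2)||w||^2."""
    z = y * (X @ w)
    sig = 1.0 / (1.0 + np.exp(z))              # sigmoid(-z)
    return -(X.T @ (y * sig)) / X.shape[0] + lam * w

def subsampled_hessian(w, X, y, lam, sample_size, rng):
    """Hessian estimated from a uniform sub-sample of the n loss terms."""
    n, d = X.shape
    idx = rng.choice(n, size=sample_size, replace=False)
    Xs = X[idx]
    p = 1.0 / (1.0 + np.exp(-(Xs @ w)))
    D = p * (1.0 - p)                          # per-example curvature weights
    return (Xs.T * D) @ Xs / sample_size + lam * np.eye(d)

def subsampled_newton(X, y, lam=1e-3, sample_size=256, steps=20, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        g = logistic_loss_grad(w, X, y, lam)   # gradient kept exact in this sketch
        H = subsampled_hessian(w, X, y, lam, min(sample_size, X.shape[0]), rng)
        w -= np.linalg.solve(H, g)             # full Newton step; damping omitted
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 10))
    w_true = rng.standard_normal(10)
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(2000))
    w_hat = subsampled_newton(X, y)
    print("cosine similarity with true weights:",
          float(w_true @ w_hat / (np.linalg.norm(w_true) * np.linalg.norm(w_hat))))

In this toy setting, forming the curvature estimate costs O(sample_size * d^2) instead of O(n * d^2), which is the basic efficiency trade-off that the sub-sampling analysis in the talk aims to make rigorous; non-uniform sampling and sub-sampled gradients are natural variations on the same template.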
Date Issued
2017-09-22
Extent
51:32 minutes
Resource Type
Moving Image
Resource Subtype
Lecture