Title:
Learning to Optimize from Data: Faster, Better, and Guaranteed

dc.contributor.author Wang, Zhangyang
dc.contributor.corporatename Georgia Institute of Technology. Machine Learning en_US
dc.contributor.corporatename Texas A&M University. Dept. of Computer Science and Engineering en_US
dc.date.accessioned 2019-12-04T17:54:38Z
dc.date.available 2019-12-04T17:54:38Z
dc.date.issued 2019-11-20
dc.description Presented on November 20, 2019 at 12:15 p.m. in the Marcus Nanotechnology Building, Room 1116. en_US
dc.description Zhangyang (Atlas) Wang is an Assistant Professor of Computer Science and Engineering at Texas A&M University. Dr. Wang is broadly interested in the fields of machine learning, computer vision, optimization, and their interdisciplinary applications. His latest interests focus on automated machine learning (AutoML), learning-based optimization, and efficient deep learning. en_US
dc.description Runtime: 67:35 minutes en_US
dc.description.abstract Learning and optimization are closely related: state-of-the-art learning problems hinge on the sophisticated design of optimizers. Conversely, optimization cannot be treated as independent of data, since data may implicitly contain important information that guides optimization, as seen in the recent wave of meta-learning, or learning to optimize. This talk will discuss Learning Augmented Optimization (LAO), a nascent area that bridges classical optimization with the latest data-driven learning by augmenting classical model-based optimization with learning-based components. By adapting their behavior to the properties of the input distribution, the "augmented" algorithms may reduce their complexity by orders of magnitude and/or improve their accuracy, while still preserving favorable theoretical guarantees such as convergence. I will start with a case study on exploiting deep learning to solve the convex LASSO problem, showing linear convergence in addition to superior parameter efficiency. The discussion will then be extended to applying LAO approaches to plug-and-play (PnP) optimization and population-based optimization. I will next present our recent results on ensuring the robustness of LAO, that is, how well the learned algorithm continues to perform when the testing problem instances deviate from the training problem distribution. The talk will conclude with a few thoughts and reflections, as well as pointers to potential future directions. en_US
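dc.description For illustration only (not part of the original record or the speaker's implementation): a minimal sketch of the kind of learning-augmented solver the abstract alludes to, contrasting classical ISTA for LASSO with an unrolled variant whose per-layer weights and thresholds could be trained on a distribution of problem instances (LISTA-style). All names and parameter choices below are assumptions made for this sketch.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, num_iters=200):
    # Classical ISTA: fixed step size 1/L and fixed threshold lam/L.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

def unrolled_ista(A, b, lam, num_layers=200, W1=None, W2=None, thetas=None):
    # Unrolled ISTA with per-layer parameters (W1, W2, theta_k), initialized here
    # to the classical ISTA values. In a learned ("LISTA"-style) solver these
    # parameters would instead be trained on sample problem instances, which is
    # what can cut the number of layers/iterations dramatically.
    L = np.linalg.norm(A, 2) ** 2
    W1 = A.T / L if W1 is None else W1
    W2 = np.eye(A.shape[1]) - (A.T @ A) / L if W2 is None else W2
    thetas = [lam / L] * num_layers if thetas is None else thetas
    x = np.zeros(A.shape[1])
    for k in range(num_layers):
        x = soft_threshold(W1 @ b + W2 @ x, thetas[k])
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = 1.0
    b = A @ x_true
    print("ISTA error:    ", np.linalg.norm(ista(A, b, 0.1) - x_true))
    print("Unrolled error:", np.linalg.norm(unrolled_ista(A, b, 0.1) - x_true))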
dc.format.extent 67:35 minutes
dc.identifier.uri http://hdl.handle.net/1853/62075
dc.language.iso en_US en_US
dc.relation.ispartofseries Machine Learning @ Georgia Tech (ML@GT) Seminar Series
dc.subject Machine learning en_US
dc.subject Optimization en_US
dc.title Learning to Optimize from Data: Faster, Better, and Guaranteed en_US
dc.title.alternative Learning Augmented Optimization: Faster, Better and Guaranteed en_US
dc.type Moving Image
dc.type.genre Lecture
dspace.entity.type Publication
local.contributor.corporatename Machine Learning Center
local.contributor.corporatename College of Computing
local.relation.ispartofseries ML@GT Seminar Series
relation.isOrgUnitOfPublication 46450b94-7ae8-4849-a910-5ae38611c691
relation.isOrgUnitOfPublication c8892b3c-8db6-4b7b-a33a-1b67f7db2021
relation.isSeriesOfPublication 9fb2e77c-08ff-46d7-b903-747cf7406244
Files
Original bundle
Name: zwang.mp4
Size: 541.92 MB
Format: MP4 Video file
Description: Download video
Name: zwang_videostream.html
Size: 1.32 KB
Format: Hypertext Markup Language
Description: Streaming video
Name: transcript.txt
Size: 59.59 KB
Format: Plain Text
Description: Transcription
Name: thumbnail.jpg
Size: 63.59 KB
Format: Joint Photographic Experts Group/JPEG File Interchange Format (JFIF)
Description: Thumbnail
License bundle
Name: license.txt
Size: 3.13 KB
Format: Item-specific license agreed to upon submission