Title:
Towards Secure and Interpretable AI: Scalable Methods, Interactive Visualizations, and Practical Tools

dc.contributor.author Chau, Duen Horng
dc.contributor.corporatename Georgia Institute of Technology. GVU Center en_US
dc.contributor.corporatename Georgia Institute of Technology. College of Computing en_US
dc.contributor.corporatename Georgia Institute of Technology. School of Computational Science and Engineering en_US
dc.date.accessioned 2019-09-10T17:00:03Z
dc.date.available 2019-09-10T17:00:03Z
dc.date.issued 2019-08-29
dc.description Presented on August 29, 2019 at 11:30 a.m.-1:00 p.m. in the Technology Square Research Building (TSRB), 1st Floor Auditorium, Georgia Institute of Technology. en_US
dc.description Polo Chau is an Associate Professor of Computing at Georgia Tech, where he co-directs Georgia Tech's MS Analytics program. His research group bridges machine learning and visualization to synthesize scalable interactive tools for making sense of massive datasets, interpreting complex AI models, and solving real-world problems in cybersecurity, human-centered AI, graph visualization and mining, and social good. His Ph.D. in Machine Learning from Carnegie Mellon University won CMU's Computer Science Dissertation Award, Honorable Mention. He has received awards and grants from NSF, NIH, NASA, DARPA, Intel (Intel Outstanding Researcher), Symantec, Google, Nvidia, IBM, Yahoo, Amazon, Microsoft, eBay, and LexisNexis; the Raytheon Faculty Fellowship; the Edenfield Faculty Fellowship; the Outstanding Junior Faculty Award; the Lester Endowment Award; the Symantec Fellowship (twice); best student paper awards at SDM'14 and KDD'16 (runner-up); best demo at SIGMOD'17 (runner-up); and the Chinese CHI'18 best paper award. His research has led to technologies open-sourced or deployed by Intel (for ISTC-ARSA: ShapeShifter, SHIELD, ADAGIO, MLsploit), Google, Facebook, Symantec (Polonium; AESOP, which protects 120M people from malware), and the Atlanta Fire Rescue Department. His security and fraud detection research has made headlines. en_US
dc.description Runtime: 59:01 minutes en_US
dc.description.abstract We have witnessed tremendous growth in Artificial Intelligence (AI) and machine learning (ML) in recent years. However, research shows that AI and ML models are often vulnerable to adversarial attacks, and their predictions can be difficult to understand, evaluate, and ultimately act upon. Discovering real-world vulnerabilities of deep neural networks, and countermeasures to mitigate such threats, has become essential to the successful deployment of AI in security settings. We present our joint work with Intel, which includes the first targeted physical adversarial attack (ShapeShifter) that fools state-of-the-art object detectors; a fast defense (SHIELD) that removes digital adversarial noise through stochastic data compression; and interactive systems (ADAGIO and MLsploit) that further democratize the study of adversarial machine learning and facilitate real-time experimentation for deep learning practitioners. Finally, we present how scalable interactive visualization can amplify people's ability to understand and interact with large-scale data and complex models. We sample from projects where interactive visualization has provided key leaps of insight, from increased model interpretability (Gamut, with Microsoft Research), to explorability of models trained on millions of instances (ActiVis, deployed at Facebook), to increased usability of state-of-the-art AI for non-experts (GAN Lab, open-sourced with Google Brain; it went viral!), and our latest work, Summit, an interactive system that scalably summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. We conclude by highlighting the next visual analytics research frontiers in AI. en_US
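To give a flavor of the "stochastic data compression" idea behind a SHIELD-style defense, the sketch below quantizes each window of an image at a randomly chosen coarseness, perturbing the fine-grained pixel noise that adversarial attacks rely on. This is a hypothetical toy stand-in, not the actual SHIELD implementation (which re-encodes image windows with real JPEG compression at randomized quality levels); the function and parameter names are illustrative only.

```python
import numpy as np

def stochastic_quantize(img, levels_choices=(8, 16, 32, 64), window=8, seed=0):
    """Toy stand-in for stochastic-compression preprocessing:
    quantize each window to a randomly chosen number of gray
    levels, disrupting small adversarial perturbations."""
    rng = np.random.default_rng(seed)
    out = img.astype(np.float32).copy()
    h, w = img.shape[:2]
    for y in range(0, h, window):
        for x in range(0, w, window):
            levels = rng.choice(levels_choices)       # random coarseness per window
            step = 256.0 / levels
            patch = out[y:y + window, x:x + window]
            # Snap each pixel to the center of its quantization bin.
            out[y:y + window, x:x + window] = np.floor(patch / step) * step + step / 2
    return out.astype(np.uint8)
```

The randomness matters: because the attacker cannot know which coarseness each window will receive, crafting noise that survives the preprocessing becomes harder than defeating a single fixed transform.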
dc.format.extent 59:01 minutes
dc.identifier.uri http://hdl.handle.net/1853/61842
dc.language.iso en_US en_US
dc.publisher Georgia Institute of Technology en_US
dc.relation.ispartofseries GVU Brown Bag
dc.subject AI en_US
dc.subject Cyber security en_US
dc.subject Deep learning en_US
dc.subject Visualization en_US
dc.title Towards Secure and Interpretable AI: Scalable Methods, Interactive Visualizations, and Practical Tools en_US
dc.title.alternative Towards Secure and Interpretable AI... en_US
dc.type Moving Image
dc.type.genre Lecture
dspace.entity.type Publication
local.contributor.author Chau, Duen Horng
local.contributor.corporatename GVU Center
local.relation.ispartofseries GVU Brown Bag Seminars
relation.isAuthorOfPublication fb5e00ae-9fb7-475d-8eac-50c48a46ea23
relation.isOrgUnitOfPublication d5666874-cf8d-45f6-8017-3781c955500f
relation.isSeriesOfPublication 34739bfe-749f-4bc5-a716-21883cd1bbd0
Files

Original bundle (4 items):
- pchau.mp4: 474.09 MB, MP4 video (Download Video)
- pchau_videostream.html: 1.06 KB, Hypertext Markup Language (Streaming Video)
- transcript.txt: 55.75 KB, plain text (Transcription Text)
- thumbnail.jpg: 50.82 KB, JPEG/JFIF thumbnail image

License bundle (1 item):
- license.txt: 3.13 KB, item-specific license agreed upon at submission