Title:
Interactive Scalable Discovery Of Concepts, Evolutions, And Vulnerabilities In Deep Learning

Author(s)
Park, Haekyu
Advisor(s)
Chau, Duen Horng
Abstract
Deep Neural Networks (DNNs) are increasingly prevalent, yet deciphering their operations remains challenging. This lack of clarity undermines trust and hinders problem-solving during deployment, highlighting the urgent need for interpretability. How can we efficiently summarize the concepts models learn? How do these concepts evolve during training? When models are at risk from potential threats, how do we explain their vulnerabilities? We address these questions with a human-centered approach, developing novel systems to interpret learned concepts, their evolution, and potential vulnerabilities in deep learning. This thesis focuses on three key thrusts:

(1) Scalable Automatic Visual Summarization of Concepts. We develop NeuroCartography, an interactive system that scalably summarizes and visualizes the concepts learned by a large-scale DNN, such as InceptionV1 trained on 1.2M images. A large-scale human evaluation with 244 participants shows that NeuroCartography discovers coherent, human-meaningful concepts.

(2) Insights to Reveal Model Vulnerabilities. We develop scalable interpretation techniques to visualize and identify internal elements of DNNs that are susceptible to potential harms, aiming to understand how these defects lead to incorrect predictions. We develop first-of-their-kind interactive systems such as Bluff, which visually compares the activation pathways of benign and attacked images in DNNs, and SkeletonVis, which explains how attacks manipulate human joint detection in human action recognition models.

(3) Scalable Discovery of Concept Evolution During Training. Our first-of-its-kind unified interpretation framework, ConceptEvo, holistically reveals the inception and evolution of learned concepts and their relationships during training. ConceptEvo enables powerful new ways to monitor model training and discover training issues, addressing critical limitations of existing post-training interpretation research. A large-scale human evaluation with 260 participants demonstrates that ConceptEvo identifies concept evolutions that are both meaningful to humans and important for class predictions.

This thesis contributes to information visualization, deep learning, and, crucially, their intersection. We have developed open-source interactive interfaces, scalable algorithms, and a unified framework for interpreting DNNs across different models. Our work impacts academia, industry, and government. For example, it has contributed to the DARPA GARD program (Guaranteeing AI Robustness against Deception). Additionally, our work has been recognized through a J.P. Morgan AI PhD Fellowship and 2022 Rising Stars in IEEE EECS. NeuroCartography was highlighted as a top visualization publication (top 1%) and invited to SIGGRAPH.
Date Issued
2023-12-05
Resource Type
Text
Resource Subtype
Dissertation