Organizational Unit:
School of Computational Science and Engineering

Publication Search Results

  • Safe Explanations and Explainable Models for Neuroimaging Data through a Framework of Constraints
    (Georgia Institute of Technology, 2023-12-11) Lewis, Noah Jerome
    Neuroimaging data, which can be highly complex and occasionally inscrutable, requires robust, reproducible, and domain-specific methods. Deep learning and model explainability have become common methods for analyzing neuroimaging data. However, the complex, obscure, and sometimes flawed nature of both deep learning and explainability compounds the difficulties of neuroimaging analysis. This dissertation addresses several of these issues with explainability through a framework of constraint-based solutions. These constraints span the entire modeling pipeline, including initialization, model parameters and gradients, and the loss functions. To familiarize readers with the field, the dissertation begins with a comprehensive investigation of current explainability methods, both in general and specific to neuroimaging, and then describes the three constraint-based methodologies that make up this framework. First, we develop an attention-based constraint for recurrent models that resolves vanishing saliency. Vanishing saliency is closely related to vanishing gradients, a common training issue in which gradients lose value during backpropagation. Our second proposed method is a set of initialization constraints that target underspecification and its implications for post-hoc explanations. Our final proposed method leverages geometric information inherent in neuroimaging inputs to constrain the optimization procedure and produce more interpretable models. Together, these three constraint methods form a broad framework that provides a robust and reproducible explanatory system appropriate for neuroimaging.
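
As an illustrative aside (not taken from the dissertation, and not its proposed method), the vanishing-saliency effect mentioned in the abstract can be observed directly: in a plain recurrent network, the gradient of the final prediction with respect to early time steps is typically much smaller than for late time steps. The sketch below uses PyTorch with a random input sequence; the model, shapes, and data are all hypothetical and chosen only to make the effect visible.

# Minimal sketch of vanishing saliency in a vanilla RNN (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

seq_len, batch, features = 50, 1, 16
rnn = nn.RNN(input_size=features, hidden_size=32, nonlinearity="tanh")
head = nn.Linear(32, 1)

# Random input sequence; requires_grad so we can read per-time-step gradients.
x = torch.randn(seq_len, batch, features, requires_grad=True)

out, _ = rnn(x)                  # out: (seq_len, batch, hidden)
score = head(out[-1]).sum()      # prediction from the final hidden state
score.backward()

# "Saliency" here is the L2 norm of the input gradient at each time step.
saliency = x.grad.norm(dim=(1, 2))
print("saliency at t=0:   ", saliency[0].item())
print("saliency at t=last:", saliency[-1].item())

In such a setup, early time steps usually receive far smaller gradients than late ones, which is the vanishing-saliency behavior the dissertation's attention-based constraint is designed to counteract.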