Novel Explainability Approaches for Analyzing Functional Neuroinformatics Data with Supervised and Unsupervised Machine Learning

Author(s)
Ellis, Charles Anthony
Associated Organization(s)
Wallace H. Coulter Department of Biomedical Engineering
Abstract
The use of artificial intelligence in healthcare is growing increasingly common. However, the use of artificial intelligence in neuropsychiatric settings with functional neuroinformatics modalities such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) is still in its infancy. While a number of obstacles prevent the development of diagnostic and prognostic tools for clinical neuroinformatics, a key obstacle is the lack of explainability approaches uniquely adapted to the field. This lack of explainability methods has implications in both clinical and biomedical research contexts. In this dissertation, we propose a series of novel explainability approaches for systematically evaluating what deep learning models trained for both classification and clustering have learned from raw EEG data. These approaches provide insight into key spatial, spectral, temporal, and interaction features uncovered by the models. Within the context of fMRI, the dissertation expands upon existing explainability approaches for neuroimaging classification by combining them with approaches that estimate the degree of model confidence in predictions. It also presents several novel analyses that can be applied to the explanations of supervised fMRI classifiers to gain insight into the models, as well as several novel explainability approaches for gaining insight into both hard and soft clustering algorithms applied to fMRI data.
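
To make the idea of spectral explainability for EEG classifiers concrete, the following is a minimal sketch of a generic perturbation-based band-importance analysis. It is an illustrative assumption, not the dissertation's exact method: the model interface (predict_proba), the frequency-band definitions, and the zero-out perturbation scheme are all hypothetical choices made for the example.

import numpy as np

# Hypothetical canonical EEG frequency bands (Hz); an assumption for illustration.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 25), "gamma": (25, 45)}

def band_importance(model, x, fs=250.0):
    """Estimate per-band importance for one EEG sample.

    model : object with a predict_proba(batch) -> (n, n_classes) method (assumed API)
    x     : array of shape (channels, timepoints), raw EEG
    fs    : sampling rate in Hz
    """
    baseline = model.predict_proba(x[None, ...])[0]        # unperturbed prediction
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    importance = {}
    for name, (lo, hi) in BANDS.items():
        spectrum = np.fft.rfft(x, axis=-1)                 # fresh spectrum per band
        spectrum[:, (freqs >= lo) & (freqs < hi)] = 0.0    # remove one band
        x_pert = np.fft.irfft(spectrum, n=x.shape[-1], axis=-1)
        perturbed = model.predict_proba(x_pert[None, ...])[0]
        # A larger drop in the originally predicted class's probability
        # indicates a more important frequency band.
        importance[name] = float(baseline.max() - perturbed[baseline.argmax()])
    return importance

In practice, such scores would typically be averaged over many samples (and could be computed per channel) to characterize which spectral and spatial features a trained model relies on.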
Date
2023-05-01
Resource Type
Text
Resource Subtype
Dissertation