Self-Supervised Multimodal Representation Learning for Neuroimaging
Author(s)
Fedorov, Aleksandr
Advisor(s)
Plis, Sergey
Abstract
In pursuit of foundation models for understanding neuropsychiatric disorders, we must build computationally and data-efficient multimodal models while ensuring interpretability, robustness, and fairness. We take two significant steps toward this goal. First, we introduce a fast, accurate, multimodal architecture for segmenting brain tissues and atlas regions that scales to large morphometry studies by training in a semi-supervised manner on imperfect labels. Second, we leverage mutual information to learn self-supervised representations from structural and functional brain imaging data. Our models show promising results in predicting neuropsychiatric disorders, identifying disorder-relevant brain regions, and uncovering multimodal links. We conclude this dissertation by highlighting potential advancements in multimodal self-supervision for neuroimaging.
Date
2023-09-05
Resource Type
Text
Resource Subtype
Dissertation