Generalizable and Explainable Methods for Learning from Physiological Data and Beyond
Author(s)
Liu, Ran
Abstract
Deep learning (DL) methods have significantly advanced the fields of neuroscience and physiology. However, conventional DL methods tailored to specific populations and tasks are no longer adequate for comprehending large-scale, multimodal, and multitask physiological datasets. In this thesis, we propose methods that aim to improve DL along two dimensions: (i) generalizability, enabling applications across diverse modalities, tasks, and subjects; and (ii) explainability, enabling researchers to understand and, where needed, customize the learning process to suit specific distributions. These improvements are not only crucial for physiological datasets, which typically require domain knowledge to comprehend, but also advance deep learning methodology in general.
In Sections 3 and 4, our discussion pivots to DL methods for time-series datasets, with a particular emphasis on biosignals and neural recordings, where generalizable and explainable methods are especially needed: the volume of available data is inherently limited, the diversity of training subjects is restricted, and the data are intrinsically complex, typically governed by unknown signal-generating processes. To navigate these challenges, we introduce and elaborate on contrastive learning approaches and transformer architectures, and we explore how these approaches can be synergistically applied to address the dual objectives of generalizability and explainability.
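As a minimal illustration of the contrastive objectives referenced above, the sketch below computes a generic InfoNCE-style loss between two augmented views of a batch of embeddings. This is a standard formulation for intuition only, not the thesis's specific method; the function name, temperature value, and toy data are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Generic InfoNCE contrastive loss between two batches of embeddings,
    where row i of z1 and row i of z2 are views of the same sample."""
    # L2-normalize each embedding so similarities are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # pairwise similarity matrix
    # Row i's positive is z2[i]; every other row of z2 acts as a negative.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: correctly paired views should incur a lower loss than
# views paired with the wrong samples.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce_loss(z, z)        # each sample matched with itself
mismatched = info_nce_loss(z, z[::-1])  # positives deliberately shuffled
print(aligned, mismatched)
```

Minimizing this loss pulls the two views of each sample together in embedding space while pushing apart views of different samples, which is the mechanism that lets such objectives learn subject- and task-general representations without labels.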
In Sections 5 and 6, we shift to learning methodologies tailored for image datasets, including both medical and natural images, where we encounter distinct challenges impeding generalizability and explainability. We introduce a set of strategies to address these issues, including hierarchical modeling, innovative data augmentation techniques, and rigorous evaluation methods for assessing representation quality. We demonstrate that addressing generalizability and explainability is not only invaluable within the biomedical domain but also benefits learning in a broader context.
Date
2024-05-31
Resource Type
Text
Resource Subtype
Dissertation