Title:
Multi-task learning for neural image classification and segmentation using a 3D/2D contextual U-Net model
Author(s)
Miano, Joseph D.
Advisor(s)
Dyer, Eva L.
Abstract
We present a 3D/2D Contextual U-Net model and apply it to segment and classify samples from a heterogeneous mouse brain dataset obtained via X-ray microtomography, spanning four distinct brain areas: Striatum, Ventral Posterior Thalamic Nucleus (VP), Cortex, and Zona Incerta (ZI). Our multi-task model takes in a 3D volume and outputs both a 2D segmentation of the central plane and the volume's brain area class, which can then be used to generate 3D reconstructions across samples with heterogeneous microstructure distributions. We investigate several properties of the model: its quantitative segmentation and classification performance across the four brain regions, its qualitative performance via 3D reconstructions, and its interpretability via analysis of the network's latent representations. Because our model performs both classification and segmentation, we also investigate how changing their relative weight during training, via a parameter we call lambda, affects performance and latent representations. Quantitative and qualitative results demonstrate that our model achieves reasonable segmentation and classification performance and scales to large, heterogeneous brain regions. This technique could help neuroscientists automate the creation of multi-scale brain maps that incorporate both microstructure and brain area information.
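The abstract describes a multi-task objective in which classification and segmentation terms are balanced by a parameter called lambda. A minimal sketch of such a weighted combination follows; the convex-sum form, the toy loss functions, and all names here are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def multitask_loss(seg_loss, cls_loss, lam):
    """Hypothetical lambda-weighted sum of the two task losses.
    lam = 1.0 trains on classification only; lam = 0.0 on segmentation only."""
    return lam * cls_loss + (1.0 - lam) * seg_loss

def bce(pred, target, eps=1e-7):
    """Toy per-pixel binary cross-entropy for a 2D segmentation map."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def ce(probs, label, eps=1e-7):
    """Toy cross-entropy for a 4-way brain-area classification."""
    return float(-np.log(max(probs[label], eps)))

# Illustrative values only: a 2x2 segmentation patch and softmax outputs
# over the four areas (Striatum, VP, Cortex, ZI).
seg = bce(np.array([[0.9, 0.1], [0.8, 0.2]]), np.array([[1, 0], [1, 0]]))
cls = ce(np.array([0.7, 0.1, 0.1, 0.1]), 0)
total = multitask_loss(seg, cls, lam=0.5)
```

Under this sketch, sweeping lam between 0 and 1 would trade off the two tasks during training, which is the kind of experiment the abstract describes.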
Date Issued
2020-05
Resource Type
Text
Resource Subtype
Undergraduate Thesis