Missing Modality Robustness in Semi-supervised Multi-modal Semantic Segmentation
Author(s)
Maheshwari, Harsh
Abstract
Using multiple spatial modalities has been proven helpful in improving semantic segmentation performance. However, several real-world challenges have yet to be addressed: (a) improving label efficiency and (b) enhancing robustness in realistic scenarios where modalities are missing at test time. To address these challenges, we first propose a simple yet efficient multi-modal fusion mechanism, Linear Fusion, that performs better than state-of-the-art multi-modal models even with limited supervision. Second, we propose M3L: Multi-modal Teacher for Masked Modality Learning, a semi-supervised framework that not only improves multi-modal performance but also makes the model robust to the realistic missing-modality scenario by using unlabeled data. We create the first benchmark for semi-supervised multi-modal semantic segmentation and also report robustness to missing modalities. Our proposal shows an absolute improvement of up to 10% in robust mIoU over the most competitive baselines.
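The abstract names two ideas: a linear fusion of per-modality features and training that tolerates a missing modality. The following is a minimal PyTorch sketch of how such a fusion layer and a modality-masking step could look; the layer shapes, class names, and masking probability are illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn as nn


class LinearFusion(nn.Module):
    """Fuse per-modality feature maps with a learned 1x1 linear projection (sketch)."""

    def __init__(self, channels: int, num_modalities: int = 2):
        super().__init__()
        # Concatenate modality features along channels, then project back.
        self.proj = nn.Conv2d(num_modalities * channels, channels, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        return self.proj(torch.cat(feats, dim=1))


def mask_modality(feats: list[torch.Tensor], p: float = 0.3) -> list[torch.Tensor]:
    """Randomly zero out one modality to simulate a missing input at train time."""
    if torch.rand(1).item() < p:
        drop = torch.randint(len(feats), (1,)).item()
        feats = [f if i != drop else torch.zeros_like(f) for i, f in enumerate(feats)]
    return feats


# Usage: fuse RGB and depth encoder features (hypothetical shapes).
rgb_feat = torch.randn(2, 256, 64, 64)
depth_feat = torch.randn(2, 256, 64, 64)
fused = LinearFusion(channels=256)(mask_modality([rgb_feat, depth_feat]))
```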
Date
2023-04-27
Resource Type
Text
Resource Subtype
Thesis