FreMIM: Fourier Transform Meets Masked Image Modeling for Medical Image Segmentation

WACV 2024

¹University of Science and Technology Beijing, ²University of Central Florida, ³University of Birmingham

Abstract

The research community has witnessed the powerful potential of self-supervised Masked Image Modeling (MIM), which enables models to learn visual representations from unlabeled data. In this paper, to incorporate both the crucial global structural information and local details for dense prediction tasks, we shift the perspective to the frequency domain and present a new MIM-based framework named FreMIM for self-supervised pre-training, aiming to better accomplish medical image segmentation tasks. Based on the observations that detailed structural information mainly lies in the high-frequency components while high-level semantics are abundant in the low-frequency counterparts, we further incorporate multi-stage supervision to guide representation learning during the pre-training phase. Extensive experiments on three benchmark datasets show the advantage of our FreMIM over previous state-of-the-art MIM methods. Compared with various baselines trained from scratch, FreMIM consistently brings considerable improvements in model performance.
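As a minimal, self-contained sketch of this frequency-domain observation (our illustration, not the authors' code; the frequency_split helper and the 0.1 cutoff ratio are assumptions), one can split an image into its low- and high-frequency parts with a circular mask on the centered Fourier spectrum:

import torch

def frequency_split(img: torch.Tensor, cutoff: float = 0.1):
    # Split a (C, H, W) image into low- and high-frequency parts using a
    # circular mask on the centered Fourier spectrum. The cutoff radius
    # (a fraction of min(H, W)) is an assumed hyperparameter.
    _, H, W = img.shape
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    yy = torch.arange(H).view(-1, 1) - H // 2
    xx = torch.arange(W).view(1, -1) - W // 2
    radius = torch.sqrt(yy.float() ** 2 + xx.float() ** 2)
    low_mask = (radius <= cutoff * min(H, W)).float()
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * low_mask, dim=(-2, -1))).real
    high = torch.fft.ifft2(torch.fft.ifftshift(spec * (1 - low_mask), dim=(-2, -1))).real
    return low, high  # low keeps global semantics, high keeps fine structure

Applied to a brain MRI slice, the low branch retains the global anatomy while the high branch keeps edges and fine texture, which is the intuition behind using both spectra as learning signals.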

Methodology

The overall architecture of our proposed FreMIM. The input medical image is first corrupted by the foreground masking strategy and then fed into the encoder, which consists of several hierarchical stages. The feature maps captured at different stages (i.e., S1, S2, ..., Sn) are fused by a bilateral aggregation decoder to generate aggregated high-level and low-level feature representations (i.e., A_high and A_low). For the fused feature of each semantic level, an FMB is applied to learn its hidden information in the frequency domain, yielding P_low and P_high. Finally, the low-pass and high-pass Fourier spectra are both adopted as reconstruction targets to better guide the model in capturing local details and global information.
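For concreteness, below is a hedged sketch of how such a dual-spectrum reconstruction objective could look (our illustration: the dual_spectrum_loss name, the circular low-pass filter, and the L1 spectral distance are all assumptions; the page does not specify FreMIM's exact filter shape or loss):

import torch

def dual_spectrum_loss(p_low, p_high, target, cutoff: float = 0.1):
    # p_low / p_high: (B, C, H, W) outputs obtained after the FMBs;
    # target: the original, uncorrupted image. An L1 distance between
    # band-limited complex spectra is assumed purely for illustration.
    _, _, H, W = target.shape
    def spectrum(x):
        return torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    yy = torch.arange(H, device=target.device).view(-1, 1) - H // 2
    xx = torch.arange(W, device=target.device).view(1, -1) - W // 2
    radius = torch.sqrt(yy.float() ** 2 + xx.float() ** 2)
    low_mask = (radius <= cutoff * min(H, W)).float()  # 1 inside the low band
    t_spec = spectrum(target)
    loss_low = (spectrum(p_low) * low_mask - t_spec * low_mask).abs().mean()
    loss_high = (spectrum(p_high) * (1 - low_mask) - t_spec * (1 - low_mask)).abs().mean()
    return loss_low + loss_high

Penalizing P_low only on the low-pass band and P_high only on the high-pass band keeps each branch focused on global semantics and local detail, respectively.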

Quantitative Results

Focusing on solely exploiting the given training samples for 2D medical image segmentation (i.e., the pre-training data only includes the specific downstream datasets without introducing any extra data; e.g., only BraTS 2019 is used for pre-training when evaluating brain tumor segmentation), we conduct extensive experiments on three benchmark datasets to fully verify the effectiveness of FreMIM. Note that the numbers in parentheses represent the gains with respect to the corresponding baselines trained from scratch, where red and blue denote accuracy increases and decreases, respectively.

Qualitative Results

We compare the segmentation performance of different self-supervised methods, including MAE, DINO, and FreMIM, on the BraTS 2019 dataset with visualization results. As shown in the figure, our method improves the detailed pixel-level delineation of brain tumors and yields more accurate predictions.

BibTeX

@inproceedings{wang2024fremim,
  title={FreMIM: Fourier Transform Meets Masked Image Modeling for Medical Image Segmentation},
  author={Wang, Wenxuan and Wang, Jing and Chen, Chen and Jiao, Jianbo and Cai, Yuanxiu and Song, Shanshan and Li, Jiangyun},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={7860--7870},
  year={2024}
}