Continual Learning for Breast Lesion Segmentation on Mammograms via Mamba and CLIP
Project Leaders
Rongsheng Wang
Despite the promising potential of deep learning (DL) in medical image segmentation (e.g., breast lesion segmentation), continual learning is necessary to adapt such models to new tasks or datasets and improve their generalization. In clinical practice, however, previously used sensitive patient data are often inaccessible, so current DL methods tend to forget previously learned tasks when trained on new ones, a phenomenon known as Catastrophic Forgetting (CF). We therefore introduce an innovative continual learning (CL) method that leverages high-quality pseudo-labels derived from new-task data in place of the inaccessible old data. We are the first to introduce Visual Mamba into continual learning for breast imaging, where it excels at feature extraction while offering efficient inference and low computational complexity. We further incorporate contrastive language-image pre-training (CLIP) embeddings, obtained by pre-training on pairs of breast lesion images and their corresponding label texts, into a dedicated head structure. These embeddings encapsulate semantic information for each class, learned through extensive joint training on image-text pairs. We evaluate our method on five public mammogram datasets. Results show that the proposed approach improves performance on new tasks while maintaining robust performance on previously learned tasks, outperforming existing methods.
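
The abstract does not give implementation details, but the pseudo-label replay idea can be illustrated with a minimal PyTorch-style sketch. Everything here is an assumption for illustration: the model handles (`old_model`, `new_model`), the confidence threshold, and the loss weighting `lambda_old` are hypothetical and not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def continual_training_step(new_model, old_model, images, new_task_masks,
                            optimizer, lambda_old=1.0, conf_thresh=0.9):
    """One training step of pseudo-label replay (illustrative sketch only).

    `old_model` is the frozen model from the previous task; its confident
    predictions on the current images serve as pseudo-labels for the old
    classes, standing in for the inaccessible old data.
    """
    new_model.train()
    with torch.no_grad():
        old_logits = old_model(images)                 # [B, C_old, H, W]
        old_probs = torch.sigmoid(old_logits)
        pseudo_labels = (old_probs > 0.5).float()
        # keep only high-confidence pixels as pseudo-supervision
        keep = ((old_probs > conf_thresh) | (old_probs < 1 - conf_thresh)).float()

    logits = new_model(images)                         # [B, C_old + C_new, H, W]
    c_old = old_logits.shape[1]

    # old-class loss from pseudo-labels, masked by confidence
    loss_old = (F.binary_cross_entropy_with_logits(
        logits[:, :c_old], pseudo_labels, reduction="none") * keep).mean()

    # new-class loss from the ground-truth masks of the current task
    loss_new = F.binary_cross_entropy_with_logits(
        logits[:, c_old:], new_task_masks)

    loss = loss_new + lambda_old * loss_old
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```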
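
Likewise, the abstract only states that CLIP embeddings of class label texts are injected into a dedicated head. One plausible, simplified reading, again a sketch rather than the authors' design, is a head that predicts each class mask from the similarity between projected pixel features and a frozen per-class text embedding; the projection layer and logit scale below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLIPTextHead(nn.Module):
    """Segmentation head conditioned on frozen CLIP text embeddings (sketch).

    `text_embeds` holds one CLIP embedding per class label text
    (e.g., "benign mass", "malignant mass"); each class mask is scored as
    the cosine similarity between projected pixel features and its embedding.
    """

    def __init__(self, feat_dim: int, text_embeds: torch.Tensor):
        super().__init__()
        self.register_buffer("text_embeds", text_embeds)        # [C, D_clip]
        self.pixel_proj = nn.Conv2d(feat_dim, text_embeds.shape[1], kernel_size=1)
        self.logit_scale = nn.Parameter(torch.tensor(10.0))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:     # feats: [B, F, H, W]
        pix = F.normalize(self.pixel_proj(feats), dim=1)        # [B, D_clip, H, W]
        txt = F.normalize(self.text_embeds, dim=1)              # [C, D_clip]
        # per-class logits from pixel-text cosine similarity
        return torch.einsum("bdhw,cd->bchw", pix, txt) * self.logit_scale
```

One appeal of such a text-conditioned head in a continual setting is that adding classes for a new task only requires appending new text embeddings, though the abstract does not state whether the proposed head works exactly this way.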