SAMed-2: Selective Memory Enhanced Medical Segment Anything Model
Zhiling Yan, Sifan Song, Dingjie Song, Yiwei Li, Rong Zhou, Weixiang Sun, Zhennong Chen, Sekeun Kim, Hui Ren, Tianming Liu, Quanzheng Li, Xiang Li, Lifang He, Lichao Sun
2025-07-09
Summary
This paper introduces SAMed-2, a medical image segmentation model built on the SAM-2 architecture and adapted to handle diverse and challenging medical images. It adds a temporal adapter to capture correlations across image slices or frames and a confidence-driven memory mechanism to retain important knowledge and avoid forgetting previous tasks.
What's the problem?
Medical images are diverse and often noisy, and existing segmentation models tend to forget previously learned knowledge when trained on new tasks or multiple imaging types (catastrophic forgetting). This makes them less reliable for clinical use.
What's the solution?
The researchers built SAMed-2 by adding a temporal adapter to the image encoder, which exploits information from adjacent image slices or frames, and a confidence-driven memory that selectively stores high-certainty features during training for later retrieval. They also assembled MedBank-100k, a large dataset spanning many medical tasks and imaging modalities, to train and evaluate the model.
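To make the confidence-driven memory idea concrete, here is a minimal sketch of one way such a mechanism could work: features are stored only when their confidence clears a threshold, low-confidence entries are evicted when the bank is full, and retrieval returns the stored feature most similar to a query. The class name, capacity/threshold parameters, and eviction policy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class ConfidenceMemoryBank:
    """Toy confidence-driven memory (illustrative, not the paper's code):
    keep only high-certainty feature vectors, evicting the least-confident
    entry when the bank is full."""

    def __init__(self, capacity=4, threshold=0.8):
        self.capacity = capacity
        self.threshold = threshold
        self.features = []      # stored feature vectors
        self.confidences = []   # matching confidence scores

    def write(self, feature, confidence):
        """Store a feature only if its confidence clears the threshold."""
        if confidence < self.threshold:
            return False
        if len(self.features) >= self.capacity:
            # Evict the least-confident stored entry, but only if the
            # new feature is more confident than it.
            worst = int(np.argmin(self.confidences))
            if self.confidences[worst] >= confidence:
                return False
            self.features.pop(worst)
            self.confidences.pop(worst)
        self.features.append(np.asarray(feature, dtype=float))
        self.confidences.append(float(confidence))
        return True

    def read(self, query):
        """Return the stored feature most similar (cosine) to the query."""
        if not self.features:
            return None
        q = np.asarray(query, dtype=float)
        sims = [f @ q / (np.linalg.norm(f) * np.linalg.norm(q) + 1e-8)
                for f in self.features]
        return self.features[int(np.argmax(sims))]
```

Gating writes on confidence is what keeps noisy, low-quality features from overwriting reliable ones; at inference time, the retrieved high-confidence feature can serve as an extra conditioning signal.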
Why does it matter?
This matters because SAMed-2 makes AI-based medical image analysis more robust and adaptable across varied clinical scenarios, which can better support doctors and enable faster, more accurate diagnoses.
Abstract
SAMed-2, an adaptation of SAM-2 for medical image segmentation, incorporates a temporal adapter and confidence-driven memory to handle noise and catastrophic forgetting, achieving superior performance across multiple tasks and modalities.