A Federated Learning-Friendly Approach for Parameter-Efficient Fine-Tuning of SAM in 3D Segmentation

Asokan, Mothilal
Benjamin, Joseph Geo
Yaqub, Mohammad
Nandakumar, Karthik
Department
Computer Vision
Type
Conference proceeding
Date
2025
Language
English
Abstract
Adapting foundation models for medical image analysis requires fine-tuning them on a considerable amount of data because of extreme distribution shifts between the natural (source) data used for pretraining and the medical (target) data. However, collecting task-specific medical data for such fine-tuning at a central location raises many privacy concerns. Although federated learning (FL) provides an effective means for training on private decentralized data, communication costs in federating large foundation models can quickly become a significant bottleneck, impacting the solution's scalability. In this work, we address this problem of 'efficient communication while ensuring effective learning in FL' by combining the strengths of Parameter-Efficient Fine-tuning (PEFT) with FL. Specifically, we study plug-and-play Low-Rank Adapters (LoRA) in a federated manner to adapt the Segment Anything Model (SAM) for 3D medical image segmentation. Unlike prior works that utilize LoRA and fine-tune the entire decoder, we critically analyze the contribution of each granular component of SAM to fine-tuning performance. Thus, we identify specific layers to be federated that are very efficient in terms of communication cost while producing on-par accuracy. Our experiments show that retaining the parameters of the SAM model (including most of the decoder) in their original state during adaptation is beneficial because fine-tuning on small datasets tends to distort the inherent capabilities of the underlying foundation model. On Fed-KiTS, our approach decreases communication cost (~48×) compared to full fine-tuning while increasing performance (~6% Dice score) in 3D segmentation tasks. Our approach performs similarly to SAMed while achieving a ~2.8× reduction in communication and parameters to be fine-tuned. We further validate our approach with experiments on the Fed-IXI and Prostate MRI datasets. Our code is available at https://github.com/BioMedIA-MBZUAI/FLAP-SAM.
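The core idea in the abstract — keeping the pretrained weights frozen and federating only low-rank adapter factors — can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the authors' implementation: a LoRA-adapted linear layer whose frozen weight W never leaves the client, plus a FedAvg-style aggregation that averages only the small A and B factors, which is what makes the communication cost so low.

```python
import numpy as np

class LoRALinear:
    """Toy LoRA layer: effective weight is W + B @ A, with W frozen."""

    def __init__(self, d_in, d_out, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
        self.B = np.zeros((d_out, rank))                   # trainable, zero-initialized

    def forward(self, x):
        # At initialization B = 0, so the output equals the pretrained W @ x.
        return (self.W + self.B @ self.A) @ x


def fedavg_lora(clients):
    """FedAvg over the LoRA factors only; the frozen W is never communicated."""
    A_avg = np.mean([c.A for c in clients], axis=0)
    B_avg = np.mean([c.B for c in clients], axis=0)
    for c in clients:
        c.A, c.B = A_avg.copy(), B_avg.copy()


# Communicated parameters per round: rank * (d_in + d_out) per layer,
# versus d_in * d_out for full fine-tuning — the source of the ~48x saving.
```

With rank r much smaller than the layer dimensions, each round transmits r·(d_in + d_out) values per adapted layer instead of d_in·d_out, which is the mechanism behind the communication reduction the abstract reports.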
Citation
M. Asokan, J. G. Benjamin, M. Yaqub, and K. Nandakumar, “A Federated Learning-Friendly Approach for Parameter-Efficient Fine-Tuning of SAM in 3D Segmentation,” pp. 226–235, Jul. 2025, doi: 10.1007/978-3-031-77610-6_21.
Source
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 Workshops
Keywords
3D Medical Image Segmentation, Federated Learning, Foundation Model, Parameter-Efficient Fine-Tuning
Publisher
Springer Nature