
A Modular Federated Suite for Low-Rank, Expressive, and Efficient LLM Fine-Tuning

Vepakomma, Praneeth
Ponkshe, Kaustubh
Singhal, Raghav
Department
Machine Learning
Type
Conference proceeding
Date
2025
Language
English
Abstract
This work presents a modular federated suite of four methods: LoRA-SilverBullet (LoRA-SB), ABBA, Fed-SilverBullet (Fed-SB), and FedEx-LoRA. Together, these methods enable low-rank, expressive, and resource-efficient fine-tuning of large language models (LLMs). LoRA-SB approximates full fine-tuning within low-rank subspaces via a principled initialization strategy, provably preserving gradient directions and reducing trainable parameters by 27–90× without any additional hyperparameter tuning. ABBA reparameterizes weight updates as the Hadamard product of two low-rank matrices, formally increasing expressivity under a fixed parameter budget. FedEx-LoRA introduces a lightweight residual correction that recovers exact LoRA adapter updates under standard federated averaging, preserving efficiency with minimal overhead. Fed-SB combines LoRA-SB’s low-rank update with FedEx-LoRA’s exact aggregation for differentially private federated learning, cutting communication costs by up to 230×. We detail theoretical results on convergence, reconstruction bounds, communication complexity, and privacy loss, alongside empirical evaluations on reasoning and language benchmarks. This suite offers a principled path to deploying LLM fine-tuning in resource-constrained and privacy-sensitive federated environments.
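The two core ideas in the abstract admit short illustrative sketches. First, a minimal PyTorch sketch of ABBA-style Hadamard reparameterization: the shapes, scaling, and function name below are illustrative assumptions, not the paper's exact parameterization, but the rank property it demonstrates (the elementwise product of two rank-r factors can reach rank r * r) is the expressivity gain the abstract refers to.

import torch

def abba_delta(B1, A1, B2, A2, scale=1.0):
    # Weight update as the Hadamard (elementwise) product of two
    # low-rank products. Since rank(X * Y) can reach rank(X) * rank(Y),
    # the update can exceed the rank of either low-rank factor alone
    # under the same trainable-parameter budget.
    return scale * (B1 @ A1) * (B2 @ A2)

d_out, d_in, r1, r2 = 64, 64, 4, 4
B1, A1 = torch.randn(d_out, r1), torch.randn(r1, d_in)
B2, A2 = torch.randn(d_out, r2), torch.randn(r2, d_in)
# A plain LoRA update B @ A would have rank at most r; this is typically ~ r1 * r2.
print(torch.linalg.matrix_rank(abba_delta(B1, A1, B2, A2)))

Second, a minimal sketch of the residual-correction idea behind FedEx-LoRA's exact aggregation, assuming the simple unweighted-average case; variable names are illustrative and the paper's exact update rule may differ.

def fedex_lora_aggregate(Bs, As, W_frozen):
    # Naive FedAvg averages the B and A adapters separately, but
    # mean_i(B_i @ A_i) != mean_i(B_i) @ mean_i(A_i) in general.
    # Folding that gap into the frozen base weights makes aggregation
    # exact while clients keep exchanging only low-rank factors.
    B_avg = torch.stack(Bs).mean(dim=0)
    A_avg = torch.stack(As).mean(dim=0)
    exact = torch.stack([B @ A for B, A in zip(Bs, As)]).mean(dim=0)
    residual = exact - B_avg @ A_avg  # error of the naive average
    return B_avg, A_avg, W_frozen + residual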
Citation
P. Vepakomma, K. Ponkshe, and R. Singhal, “A Modular Federated Suite for Low-Rank, Expressive, and Efficient LLM Fine-Tuning,” in Proceedings of the 61st Allerton Conference on Communication, Control, and Computing, Sep. 2025. [Online]. Available: https://hdl.handle.net/2142/130308
Source
Proceedings of the 61st Allerton Conference on Communication, Control, and Computing
Conference
61st Allerton Conference on Communication, Control, and Computing
Keywords
LLM Fine-tuning, Federated LLM Fine-tuning, Efficiency-Performance Trade-off
Publisher
Allerton Conference on Communication, Control, and Computing