Collaborative and Efficient Personalization with Mixtures of Adaptors
Almansoori Abdulla Jasem ; Horváth Samuel ; Takáč Martin
Department
Machine Learning
Type
Conference proceeding
Date
2025
Language
English
Abstract
Heterogeneous data is prevalent in real-world federated learning. We propose a parameter-efficient framework, Federated Low-Rank Adaptive Learning (FLoRAL), that allows clients to personalize in groups by mixing between low-rank adaptors, where the mixtures are client-specific. FLoRAL is a model parameterization that casts personalized federated learning as a multi-task learning problem, with weight sharing as an implicit regularizer. It is memory-efficient, as the personalized parameters (i.e., base model + adaptors) are all federated. Our results show that FLoRAL can generalize better than a mixture of full models when data are scarce. It can also consistently personalize better than models with a locally tuned adaptor per client. This demonstrates the benefits of “federated personalization” and its robustness against overfitting. We derive the convergence rates and show theoretically that FLoRAL can lead to better variance reduction of the base model’s gradients.
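The parameterization described above (a shared base weight plus a client-specific mixture of low-rank adaptors) can be sketched as follows. This is a hypothetical, minimal illustration using numpy; the function name `floral_linear` and the exact shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def floral_linear(x, W, A, B, pi):
    """Hypothetical sketch of one FLoRAL-style linear layer.

    x  : (d_in,)        input vector
    W  : (d_out, d_in)  shared (federated) base weight
    A  : (K, r, d_in)   'down' factors of K low-rank adaptors, rank r << min(d_in, d_out)
    B  : (K, d_out, r)  'up' factors of the K adaptors
    pi : (K,)           client-specific mixture weights (nonnegative, sum to 1)
    """
    out = W @ x  # shared base model
    for k in range(len(pi)):
        # each adaptor contributes a rank-r correction, weighted by the client's mixture
        out += pi[k] * (B[k] @ (A[k] @ x))
    return out
```

Note that only `pi` differs between clients, while `W`, `A`, and `B` are all federated, which is why the personalized parameter overhead per client is just the K mixture weights.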
Citation
A. J. Almansoori, S. Horváth, and M. Takáč, “Collaborative and Efficient Personalization with Mixtures of Adaptors,” in Proc. 2nd Conf. Parsimony and Learning (CPAL), Stanford, CA, USA, Mar. 24–27, 2025, Proc. Mach. Learn. Res., vol. 280, pp. 1328–1364.
Source
Proceedings of Machine Learning Research
Conference
2nd Conference on Parsimony and Learning, CPAL 2025
Keywords
Collaborative Learning, Learning Systems, Learning To Rank, Multi-task Learning, Parameterization, Adaptive Learning, Base Models, Client Specific, Heterogenous Data, Learning Problem, Model Parameterization, Multitask Learning, Personalizations, Real-world, Regularizer, Federated Learning
Publisher
ML Research Press
