HEXA: Heterogeneity-aware Exact Aggregation for Efficient Fine-Tuning in Federated Learning
Garofalo, Marco ; Villari, Massimo ; Karray, Fakhreddine
Department
Machine Learning
Type
Conference proceeding
Abstract
Federated Learning (FL) combined with Parameter-Efficient Fine-Tuning (PEFT) methods, such as Low-Rank Adaptation (LoRA), has emerged as a promising approach to addressing data scarcity in fine-tuning Large Language Models (LLMs) while ensuring privacy and computational efficiency. However, when applying LoRA in traditional FL, averaging the adapters separately during aggregation results in non-exact aggregation. While recent research has investigated this issue, its application to heterogeneous data settings remains largely unexplored. Data heterogeneity across clients can significantly affect the effectiveness of parameter-efficient adaptation and complicate the aggregation process. In this work, we explore the concept of exact aggregation in heterogeneous federated fine-tuning settings, focusing specifically on LoRA-based approaches. We propose HEXA (Heterogeneity-aware EXact Aggregation), a novel method that mitigates the effects of data heterogeneity while preserving the benefits of exact aggregation in LoRA-enabled FL. We present a comprehensive theoretical framework for extending exact aggregation to heterogeneous settings and validate our approach through extensive empirical evaluation on the GLUE benchmark. Our results show that HEXA improves model performance in heterogeneous contexts while maintaining the computational efficiency of PEFT methods.
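The non-exact aggregation issue the abstract refers to can be illustrated with a minimal sketch (not the paper's HEXA method, just the underlying observation): each client's LoRA update is a product of two low-rank factors, and averaging the factors separately is not the same as averaging the products, because the mean of products differs from the product of means.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k = 8, 2, 3  # hidden dim, LoRA rank, number of clients (illustrative sizes)

# Each client i holds LoRA factors B_i (d x r) and A_i (r x d);
# its effective weight update is the product B_i @ A_i.
Bs = [rng.standard_normal((d, r)) for _ in range(k)]
As = [rng.standard_normal((r, d)) for _ in range(k)]

# Naive FedAvg-style aggregation: average B and A separately, then multiply.
naive = np.mean(Bs, axis=0) @ np.mean(As, axis=0)

# Exact aggregation: average the full products B_i @ A_i.
exact = np.mean([B @ A for B, A in zip(Bs, As)], axis=0)

# The two aggregates differ in general, which is the gap
# that exact-aggregation methods aim to close.
print(np.linalg.norm(naive - exact))
```

With heterogeneous client data the per-client factors diverge further, which widens this gap; this is the setting HEXA targets.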
Source
2025 International Joint Conference on Neural Networks (IJCNN)
Conference
2025 International Joint Conference on Neural Networks (IJCNN)
Keywords
46 Information and Computing Sciences, 4605 Data Management and Data Science
Publisher
IEEE
