
Advancing Parameter-Efficient Federated Learning via Low-Rank Decomposition and Independent Subnetworks

Alhammadi, Hamad Jassem Salem Jassem
Department
Machine Learning
Embargo End Date
2025-05-30
Type
Thesis
Date
2025
Language
English
Abstract
Recent years have seen breakthroughs in Deep Learning, with rapidly growing capabilities in both Computer Vision (CV) and Natural Language Processing (NLP) applications. A byproduct of this progress is Large Language Models (LLMs), which demand massive amounts of high-quality data; such data is often unavailable, whether because of regulations such as the General Data Protection Regulation (GDPR) or because firms withhold private data, under the banner of privacy, to hinder the growth of open-source large models. To address these issues, Federated Learning (FL) emerged as a potential solution that combines distributed learning with privacy, offering a feasible path to training models on private and scarce data. However, the challenges that come with these techniques are not trivially solved: real-world edge devices often hold non-IID data and face memory, computation, and network constraints. This thesis explores solutions to these problems through Low-Rank Decomposition (LRD) for model compression and speedup, Low-Rank Adaptation (LoRA) for efficient fine-tuning, and low-rank Independent Subnetwork Training (IST) to tackle model heterogeneity. The thesis proposes an LRD residual approach that allows low-rank models severely degraded by aggressive compression to recover by reintroducing input information through a residual block. This parameter-free modification increases accuracy by 10%, demonstrating the method's effectiveness over naive LoRA. More broadly, the methods developed in this thesis pave the way for more efficient and robust Federated Learning systems by combining parameter-efficient strategies with privacy-preserving practices. By integrating low-rank techniques with existing federated frameworks, we demonstrate reduced computation and storage requirements and a noticeable boost in resilience against data heterogeneity and limited communication bandwidth.
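
To make the compression idea concrete, below is a minimal PyTorch sketch of low-rank decomposition of a linear layer via truncated SVD, with an optional parameter-free residual path in the spirit of the abstract's description. The class name, the use of SVD for factorization, and the exact placement of the skip connection are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Sketch: approximate a dense linear layer with two low-rank factors.

    Illustrative LRD via truncated SVD; the `residual` flag mimics the
    abstract's parameter-free recovery idea (an assumption, not the
    author's exact design).
    """

    def __init__(self, linear: nn.Linear, rank: int, residual: bool = False):
        super().__init__()
        W = linear.weight.data  # shape: (out_features, in_features)
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        # Keep the top-`rank` singular triplets: W ~= (U_r * S_r) @ Vh_r
        self.A = nn.Parameter(U[:, :rank] * S[:rank])  # (out, rank)
        self.B = nn.Parameter(Vh[:rank, :])            # (rank, in)
        self.bias = (nn.Parameter(linear.bias.data.clone())
                     if linear.bias is not None else None)
        # A plain skip path only type-checks when in/out dimensions match.
        self.residual = residual and W.shape[0] == W.shape[1]

    def forward(self, x):
        y = x @ self.B.T @ self.A.T  # low-rank approximation of W @ x
        if self.bias is not None:
            y = y + self.bias
        if self.residual:
            y = y + x  # parameter-free path carrying input information
        return y
```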
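For comparison, here is a minimal sketch of the naive LoRA baseline mentioned above, again in PyTorch: the pretrained weight is frozen and only a low-rank update is trained. The rank and scaling hyperparameters are assumptions chosen for illustration.

```python
class LoRALinear(nn.Module):
    """Sketch of naive LoRA: freeze W, learn a low-rank update B @ A."""

    def __init__(self, linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = linear
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        in_f, out_f = linear.in_features, linear.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))  # zero-init: update starts at 0
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```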
Citation
Hamad Jassem Salem Jassem Alhammadi, “Advancing Parameter-Efficient Federated Learning via Low-Rank Decomposition and Independent Subnetworks,” Master of Science thesis, Machine Learning, MBZUAI, 2025.
Keywords
Federated Learning, Low Rank Decomposition, Independent Subnetwork Training, Low Rank Adaptation