Communication-Efficient Federated Learning for Edge Computing with Gradient Leakage Defense
Yang, Xihong ; Cui, Haixia ; Dai, Feipeng ; Xie, Bo ; He, Yejun ; Guizani, Mohsen
Department
Machine Learning
Type
Journal article
Date
2025
Language
English
Abstract
Federated learning (FL) has emerged as a promising paradigm for privacy-preserving model training across distributed edge devices, enabling local data utilization without explicit sharing. However, in edge computing environments characterized by heterogeneous resources and intermittent connectivity, FL remains vulnerable to gradient leakage attacks (GLA), where adversaries reconstruct private data from shared model updates. Although existing defenses, such as differential privacy (DP) and gradient compression, offer partial mitigation, they often cause significant performance degradation or increased communication overhead. In this paper, we show that the risk of privacy leakage is highly sensitive to client-side training configurations and gradient magnitudes. Based on this observation, we propose a risk-aware FL framework tailored to edge scenarios, which not only performs per-device privacy risk assessment but also introduces subtractive dithering quantization to inject controllable Gaussian noise into local models. Additionally, a noise-aware aggregation strategy adjusts each client's contribution to preserve global model utility. Experimental results on FashionMNIST and CIFAR-10 demonstrate that the proposed framework achieves strong defense against GLA, reduces communication costs by over 50%, and maintains competitive accuracy.
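The quantization component of the framework builds on subtractive dithering. As a rough illustration only (not the authors' exact scheme, which injects controllable Gaussian noise), the sketch below shows classical subtractive dithered quantization in NumPy: client and server derive the same pseudo-random dither from a shared seed, so the server can subtract it after receiving the quantized update, leaving an error that is uniform and independent of the data. The function names, step size, and seed handling are hypothetical.

```python
import numpy as np

def sd_quantize(x: np.ndarray, step: float, seed: int) -> np.ndarray:
    """Client side: add a shared dither, then quantize to a uniform grid."""
    # Dither u ~ Uniform(-step/2, step/2), reproducible from the shared seed.
    u = np.random.default_rng(seed).uniform(-step / 2, step / 2, x.shape)
    return np.round((x + u) / step) * step  # values actually transmitted

def sd_dequantize(q: np.ndarray, step: float, seed: int) -> np.ndarray:
    """Server side: regenerate the same dither and subtract it."""
    u = np.random.default_rng(seed).uniform(-step / 2, step / 2, q.shape)
    return q - u  # residual error is uniform, independent of the input

# Hypothetical usage: client and server agree on (step, seed) each round.
rng = np.random.default_rng(0)
grad = rng.normal(size=1000)                 # stand-in for a local update
q = sd_quantize(grad, step=0.05, seed=42)    # coarse, communication-cheap
rec = sd_dequantize(q, step=0.05, seed=42)   # server-side reconstruction
err = rec - grad                             # ~ Uniform(-0.025, 0.025)
```

Because the reconstruction error behaves as data-independent noise, a coarser step simultaneously cuts the bits per parameter and raises the effective noise floor seen by a gradient-leakage adversary, which is the trade-off the abstract's noise-aware aggregation then compensates for.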
Citation
X. Yang, H. Cui, F. Dai, B. Xie, Y. He and M. Guizani, "Communication-Efficient Federated Learning for Edge Computing with Gradient Leakage Defense," in IEEE Journal on Selected Areas in Communications, doi: 10.1109/JSAC.2025.3638286
Source
IEEE Journal on Selected Areas in Communications
Keywords
Communication efficiency, Dithering quantization, Federated learning, Gradient leakage attack
Publisher
IEEE
