
Flashback: Understanding and Mitigating Forgetting in Federated Learning

Aljahdali, Mohammed
Abdelmoniem, Ahmed M
Canini, Marco
Horvath, Samuel
Department
Machine Learning
Type
Conference proceeding
Abstract
Federated Learning (FL) addresses the growing need to perform large-scale model training directly on distributed data sources, eliminating the overhead and privacy risks of transferring data to a central location. However, in FL, forgetting (the loss of knowledge across rounds) hampers algorithm convergence, especially under severe data heterogeneity among clients. This study explores the nuances of this issue, emphasizing the critical role forgetting plays in FL's inefficient learning within heterogeneous data contexts. Knowledge loss occurs in both client-local updates and server-side aggregation steps; addressing one without the other fails to mitigate forgetting. We introduce a metric that measures forgetting granularly, keeping it distinct from new knowledge acquisition. Based on this metric, we propose Flashback, a novel FL algorithm with a dynamic distillation approach that regularizes the local models and effectively aggregates their knowledge. Results from extensive experiments across different benchmarks show that Flashback mitigates forgetting and outperforms other state-of-the-art methods, achieving faster round-to-target accuracy by converging in 6 to 16 rounds, up to 27× faster.
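
For intuition only, the sketch below illustrates two ideas the abstract describes: a granular forgetting measure that counts only per-class accuracy drops between rounds (so gains on newly learned classes do not mask losses), and a distillation-regularized local update in which the global model acts as a teacher. This is a minimal PyTorch-style sketch under assumed interfaces (per_class_forgetting, local_update, model, global_model, loader, and lam are all hypothetical names), not the authors' Flashback implementation.

```python
# Illustrative sketch only; NOT the paper's exact metric or algorithm.
import torch
import torch.nn.functional as F

def per_class_forgetting(prev_acc, curr_acc):
    """One plausible granular forgetting measure (an assumption): sum only
    the per-class accuracy *drops* between consecutive rounds, keeping
    forgetting distinct from new knowledge acquisition."""
    return sum(max(prev_acc[c] - curr_acc[c], 0.0) for c in prev_acc)

def local_update(model, global_model, loader, lam=1.0, lr=0.01, epochs=1):
    """Distillation-regularized client update: the frozen global model is a
    teacher, and a KL penalty discourages the local model from drifting away
    from (i.e., forgetting) the global knowledge during local training."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    global_model.eval()
    for _ in range(epochs):
        for x, y in loader:
            logits = model(x)
            with torch.no_grad():
                teacher_logits = global_model(x)
            # Task loss on local labels plus distillation toward the teacher.
            ce = F.cross_entropy(logits, y)
            kd = F.kl_div(F.log_softmax(logits, dim=1),
                          F.softmax(teacher_logits, dim=1),
                          reduction="batchmean")
            loss = ce + lam * kd
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.state_dict()
```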
Citation
M. Aljahdali, A. M. Abdelmoniem, M. Canini, S. Horvath, "Flashback: Understanding and Mitigating Forgetting in Federated Learning," in International Conference on Federated Learning Technologies and Applications (FLTA), IEEE, 2026, pp. 408-415.
Conference
International Conference on Federated Learning Technologies and Applications (FLTA)
Keywords
46 Information and Computing Sciences, 4611 Machine Learning
Publisher
IEEE