Quasi-Newton Methods for Federated Learning with Error Feedback
Author: Wu, Yanlin
Department: Machine Learning
Embargo End Date: 2024-01-01
Type: Thesis
Date: 2024
Language: English
Abstract
Federated learning (FL) enables collaborative model training across distributed devices without sharing raw data, preserving the privacy of each participant. Because communication cost in the FL setting is typically far higher than computation cost, compression is used to address this bottleneck, promising improved efficiency and scalability. When compression is employed, an error feedback mechanism is essential to ensure convergence. In this thesis, we introduce novel Quasi-Newton methods tailored for federated learning and integrate them with the error feedback framework, with particular emphasis on the EF21 mechanism. EF21 offers a comprehensive theoretical understanding and superior practical performance, overcoming previous limitations associated with heavy reliance on strong assumptions and increased communication costs. Leveraging the efficiency of Quasi-Newton methods, especially the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, our proposed EF21+LBFGS method achieves a convergence rate of O(1/T) in nonconvex regimes and a linear convergence rate under the Polyak-Łojasiewicz condition. Through theoretical analysis and empirical evaluation, we demonstrate the efficacy of our approach, showing accelerated convergence and improved model performance compared to existing methods. Our findings suggest promising prospects for enhancing the effectiveness and scalability of federated learning in practice.
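As a rough illustration of the EF21 mechanism referenced in the abstract (not the thesis's actual EF21+LBFGS implementation), each client maintains a local gradient estimate and transmits only a compressed correction, and the server averages the updated estimates. A minimal sketch, assuming a Top-K sparsifier and plain gradient steps; the names `top_k` and `ef21_step` are illustrative:

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsifier: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef21_step(x, g_states, grads, lr, k):
    """One EF21 round: each client i sends only the compressed difference
    C(grad_i - g_i) to update its estimate g_i; the server averages the
    estimates and takes a gradient-type step."""
    new_states = []
    for g_i, grad_i in zip(g_states, grads):
        g_i = g_i + top_k(grad_i - g_i, k)   # client-side estimate update
        new_states.append(g_i)
    g = np.mean(new_states, axis=0)          # server aggregation
    return x - lr * g, new_states
```

On smooth problems the estimates `g_i` track the true local gradients, so the averaged direction approaches the full gradient while each client communicates only `k` coordinates per round; the thesis replaces the plain gradient step here with an L-BFGS-type update.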
Citation
Y. Wu, "Quasi-Newton Methods for Federated Learning with Error Feedback", M.S. thesis, Machine Learning, MBZUAI, Abu Dhabi, UAE, 2024.
