
SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning

Mangold, Paul
Samsonov, Sergey
Labbi, Safwan
Levin, Ilya
Alami, Reda
Naumov, Alexey
Moulines, Eric
Department
Machine Learning
Type
Conference proceeding
Date
2024
Language
English
Abstract
In this paper, we analyze the sample and communication complexity of the federated linear stochastic approximation (FedLSA) algorithm, explicitly quantifying the effects of local training under agent heterogeneity. We show that the communication complexity of FedLSA scales polynomially with the inverse of the desired accuracy ε. To overcome this, we propose SCAFFLSA, a new variant of FedLSA that uses control variates to correct for client drift, and establish its sample and communication complexities. We show that for statistically heterogeneous agents, its communication complexity scales logarithmically with the desired accuracy, similarly to Scaffnew. An important finding is that, compared to the existing results for Scaffnew, the sample complexity scales with the inverse of the number of agents, a property referred to as linear speed-up. Achieving this linear speed-up requires completely new theoretical arguments. We apply the proposed method to federated temporal difference learning with linear function approximation and analyze the corresponding complexity improvements.
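As a rough illustration of the control-variate idea described in the abstract (not the paper's exact algorithm or analysis), the sketch below implements federated linear stochastic approximation with a Scaffold-style drift correction. The function name scafflsa_sketch, the step size, the number of local steps, and the control-variate update rule are all illustrative assumptions.

```python
import numpy as np

def scafflsa_sketch(A_list, b_list, n_rounds=200, n_local=10, eta=0.05,
                    noise=0.1, seed=0):
    """Toy federated LSA with Scaffold-style control variates (hypothetical).

    Each agent c takes noisy local steps toward the root of A_c @ theta = b_c,
    corrected by its control variate, and the server averages the local
    iterates after every `n_local` steps.
    """
    rng = np.random.default_rng(seed)
    n_agents, d = len(A_list), b_list[0].shape[0]
    theta = np.zeros(d)                       # global iterate
    xi = np.zeros((n_agents, d))              # per-agent control variates
    xi_bar = xi.mean(axis=0)                  # their average

    for _ in range(n_rounds):
        local_iterates = []
        for c in range(n_agents):
            th = theta.copy()
            for _ in range(n_local):
                # noisy observation of the local mean field A_c th - b_c
                g = A_list[c] @ th - b_list[c] + noise * rng.standard_normal(d)
                # drift-corrected local step
                th -= eta * (g - xi[c] + xi_bar)
            # Scaffold-style control-variate update from the realized drift
            xi[c] = xi[c] - xi_bar + (theta - th) / (eta * n_local)
            local_iterates.append(th)
        theta = np.mean(local_iterates, axis=0)   # server averaging
        xi_bar = xi.mean(axis=0)
    return theta

if __name__ == "__main__":
    # usage: four heterogeneous agents; the corrected method should land
    # near the fixed point of the *averaged* system, mean(A) theta = mean(b)
    rng = np.random.default_rng(1)
    d, n = 3, 4
    A_list = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(n)]
    b_list = [rng.standard_normal(d) for _ in range(n)]
    theta = scafflsa_sketch(A_list, b_list)
    theta_star = np.linalg.solve(sum(A_list), sum(b_list))
    print(np.linalg.norm(theta - theta_star))     # small residual error
```

Without the control variates, the same loop would be biased toward the average of the agents' individual fixed points; the correction term is what removes the heterogeneity bias that the abstract refers to as client drift.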
Citation
P. Mangold et al., “SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning,” Adv Neural Inf Process Syst, vol. 37, pp. 13927–13981, Dec. 2024.
Source
Advances in Neural Information Processing Systems (NeurIPS 2024)
Keywords
Federated linear stochastic approximation, Client heterogeneity, Communication complexity, Control variates, Temporal difference learning
Publisher
NeurIPS