OFedED: One-shot Federated Learning with Model Ensemble and Dataset Distillation
Li, Xuhui ; Luo, Zhengquan ; Cui, Zihui ; Cao, Xin ; Xu, Zhiqiang
Files
3746252.3761309.pdf
Adobe PDF, 1.36 MB
Department
Machine Learning
Type
Conference proceeding
Date
2025
Language
English
Abstract
One-shot federated learning (FL) has gained traction for its communication efficiency and scalability. However, unlike traditional FL, which repeatedly aligns client models through multiple rounds of client training and server aggregation, one-shot FL permits only a single communication round, so each client easily overfits its local data and pursues a divergent objective. With no opportunity to iteratively correct these biases or mitigate heterogeneity, the aggregated model deviates significantly from the optimum achieved by centralized training on the pooled dataset. To address this challenge, we propose OFedED, a one-shot FL framework that preserves privacy and fully exploits client data by combining local dataset distillation with server-side ensemble learning. Each client distills its own dataset into an ultra-compact coreset that retains the essential distributional characteristics; the server aggregates these coresets to guide ensemble training that captures inter-client heterogeneity, harnesses complementary knowledge, corrects local bias, and drives performance close to centralized training. In addition, we show theoretically that, under mild assumptions on the local dataset distillation, the server can simulate a centralized optimization process by finetuning on the aggregated distilled data, effectively bypassing the need for multiple communication rounds; properly distilled data thus encodes sufficient task-relevant information to support centralized-level optimization. Extensive experiments show that OFedED consistently and significantly outperforms SOTA methods, with improvements of up to 9.17% on MNIST and 3.97% on CIFAR-10; its robustness is further verified by experiments with ResNet and various server-client architectures.
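The abstract describes a single-round protocol: each client trains a local model and distills its data into a tiny coreset, uploads both exactly once, and the server then trains an ensemble guided by the union of coresets. The PyTorch sketch below is a minimal schematic of that flow under loose assumptions, not the paper's algorithm: distill_coreset, train_local, and the stacked linear head are hypothetical stand-ins, and per-class feature means replace a real dataset-distillation step (which would optimize synthetic samples, e.g. by gradient or distribution matching).

import torch
import torch.nn.functional as F

def distill_coreset(x, y, n_classes):
    # Placeholder for dataset distillation: one per-class mean sample.
    # (Real distillation would optimize synthetic samples; this only
    # keeps the sketch small and self-contained.)
    xs, ys = [], []
    for c in range(n_classes):
        xc = x[y == c]
        if len(xc) > 0:
            xs.append(xc.mean(dim=0, keepdim=True))
            ys.append(torch.tensor([c]))
    return torch.cat(xs), torch.cat(ys)

def train_local(x, y, dim, n_classes, steps=200, lr=0.1):
    # Each client fits a simple linear model on its own (biased) data.
    model = torch.nn.Linear(dim, n_classes)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model

torch.manual_seed(0)
dim, n_classes, n_clients = 20, 4, 3

# Heterogeneous clients: each holds only two of the four classes.
clients = []
for k in range(n_clients):
    y = torch.randint(2 * (k % 2), 2 * (k % 2) + 2, (200,))
    x = torch.randn(200, dim) + 2.0 * F.one_hot(y, num_classes=dim).float()
    clients.append((x, y))

# One-shot round: every client uploads (model, coreset) exactly once.
uploads = [(train_local(x, y, dim, n_classes), distill_coreset(x, y, n_classes))
           for x, y in clients]

# Server: aggregate the coresets and finetune an ensemble head on them.
sx = torch.cat([cx for _, (cx, _) in uploads])
sy = torch.cat([cy for _, (_, cy) in uploads])
ensemble = [m for m, _ in uploads]
head = torch.nn.Linear(n_classes * n_clients, n_classes)
opt = torch.optim.SGD(head.parameters(), lr=0.1)
for _ in range(300):
    feats = torch.cat([m(sx).detach() for m in ensemble], dim=1)
    opt.zero_grad()
    F.cross_entropy(head(feats), sy).backward()
    opt.step()

# Evaluate on the pooled client data, a proxy for "centralized" performance.
xt = torch.cat([x for x, _ in clients])
yt = torch.cat([y for _, y in clients])
with torch.no_grad():
    feats = torch.cat([m(xt) for m in ensemble], dim=1)
    acc = (head(feats).argmax(dim=1) == yt).float().mean().item()
print(f"server-side ensemble accuracy on pooled data: {acc:.2%}")

Note that communication happens exactly once in this sketch: each client uploads its (model, coreset) pair a single time and the server never broadcasts anything back, which is the communication-efficiency property the abstract emphasizes.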
Citation
X. Li, Z. Luo, Z. Cui, X. Cao, and Z. Xu, “OFedED: One-shot Federated Learning with Model Ensemble and Dataset Distillation,” in Proceedings of the 34th ACM International Conference on Information and Knowledge Management (CIKM '25), Nov. 2025, pp. 1706–1715, doi: 10.1145/3746252.3761309.
Source
CIKM '25: Proceedings of the 34th ACM International Conference on Information and Knowledge Management
Conference
34th ACM International Conference on Information and Knowledge Management
Keywords
One-shot Federated Learning, Dataset Distillation, Model Ensemble, Heterogeneous Client Distributions, Communication Efficiency, Differential Privacy, Synthetic Client Data Aggregation, Near-Centralized Training Performance
Publisher
Association for Computing Machinery
