Sharpness-aware Federated Graph Learning
Li, Ruiyu ; Zhao, Peige ; Li, Guangxia ; Wu, Pengcheng ; Gao, Xingyu ; Xu, Zhiqiang
Department
Machine Learning
Type
Conference proceeding
Abstract
One of many impediments to applying graph neural networks (GNNs) to large-volume real-world graph-structured data is that privacy concerns preclude a centralized training scheme that gathers data belonging to different organizations. As a distributed data processing scheme, federated graph learning (FGL) enables GNN models to be learned collaboratively without sharing participants' private data. Though theoretically feasible, a core challenge in FGL systems is the variation of local training data distributions among clients, also known as the data heterogeneity problem. Most existing solutions suffer from two problems: (1) the typical optimizer, based on empirical risk minimization, tends to drive local models into sharp valleys of the loss surface and weakens their generalization to out-of-distribution graph data; (2) the prevalent dimensional collapse in the learned representations of local graph data harms the classification capacity of the GNN model. To this end, we formulate a novel optimization objective that is aware of the sharpness (i.e., the curvature of the loss surface) of local GNN models. By minimizing the loss function and its sharpness simultaneously, we seek model parameters in a flat region with uniformly low loss values, thus improving generalization over heterogeneous data. By introducing a regularizer based on the correlation matrix of local representations, we relax the correlations among representations generated by individual local graph samples, thereby alleviating the dimensional collapse of the learned model. The proposed Sharpness-aware fEderated grAph Learning (SEAL) algorithm enhances the classification accuracy and generalization ability of local GNN models in federated graph learning. Experimental studies on several graph classification benchmarks show that SEAL consistently outperforms state-of-the-art FGL baselines and continues to provide gains as more participants join.
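The two ingredients described above can be sketched generically. The first function below is a standard sharpness-aware minimization (SAM) step: ascend to the worst-case perturbation within a small ball, then descend using the gradient taken there. The second is an illustrative decorrelation penalty on the correlation matrix of a batch of representations, pushing off-diagonal entries toward zero to discourage dimensional collapse. Both are minimal NumPy sketches under assumed names (`sam_update`, `decorrelation_penalty`, radius `rho`); SEAL's exact local update and regularizer are defined in the paper, not here.

```python
import numpy as np

def sam_update(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware step on parameter vector w.

    grad_fn(w) returns the loss gradient at w. We first move to the
    (first-order) worst-case point within an L2 ball of radius rho,
    then apply a descent step from w using the gradient computed there.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed point
    return w - lr * g_sharp

def decorrelation_penalty(z):
    """Sum of squared off-diagonal entries of the correlation matrix
    of representations z (shape: n_samples x d). Zero when feature
    dimensions are uncorrelated; large when they collapse onto each other.
    """
    z = z - z.mean(axis=0)
    z = z / (z.std(axis=0) + 1e-8)      # standardize each dimension
    corr = (z.T @ z) / z.shape[0]       # d x d correlation matrix
    off_diag = corr - np.diag(np.diag(corr))
    return float((off_diag ** 2).sum())
```

For example, minimizing f(w) = w^2 with `sam_update` converges to a neighborhood of the minimum, and two perfectly correlated representation dimensions yield a penalty of 2 (both off-diagonal entries equal 1), while orthogonal dimensions yield 0.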
Citation
R. Li, P. Zhao, G. Li, P. Wu, X. Gao, Z. Xu, "Sharpness-aware Federated Graph Learning," 2026, pp. 345-355.
Conference
Proceedings of the Nineteenth ACM International Conference on Web Search and Data Mining
Keywords
46 Information and Computing Sciences, 4611 Machine Learning
Source
Proceedings of the Nineteenth ACM International Conference on Web Search and Data Mining
Publisher
Association for Computing Machinery
