Authors
Chenhan Zhang, Weiqi Wang, James J.Q. Yu, and Shui Yu*
Published in
Proc. ACM ASIA Conference on Computer and Communications Security, Melbourne, Australia, July 2023
Abstract
As graphs grow ever larger, federated graph learning (FGL), which trains graph neural networks (GNNs) on distributed graph data, is increasingly adopted. However, the privacy of graph data in FGL systems is an inevitable concern due to multiparty participation. Recent studies indicate that gradients leaked from a trained GNN can be exploited by model inversion attacks (MIA) to infer private graph data. Moreover, the central server can legitimately access the local GNN gradients, which makes MIA difficult to counter when the attacker resides at the central server. In this paper, we first identify a realistic crowdsourcing-based FGL scenario in which MIA launched from the central server against clients' subgraph structures is a non-negligible threat. We then propose a defense scheme, Subgraph-Out-of-Subgraph (SOS), to mitigate such MIA while maintaining prediction accuracy. We leverage the information bottleneck (IB) principle to extract task-relevant subgraphs from the clients' original subgraphs. The extracted IB-subgraphs are used for local GNN training, so the local model updates carry less information about the original subgraphs, which makes it harder for MIA to infer the original subgraph structures. In particular, we devise a novel neural network-powered approach to overcome the intractability of estimating mutual information on graph data during IB optimization. Additionally, we design a subgraph generation algorithm that yields reasonable IB-subgraphs from the optimization results. Extensive experiments demonstrate the efficacy of the proposed scheme: the FGL system trained on IB-subgraphs is more robust against MIA with minuscule accuracy loss.
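For reference, a minimal sketch of the information-bottleneck objective alluded to above, assuming the standard graph-IB formulation; the notation G_sub and the trade-off coefficient beta are illustrative assumptions, not taken from the paper:

% Illustrative (sub)graph information-bottleneck objective:
% find a compressed subgraph G_sub of a client's original subgraph G that
% remains predictive of the task labels Y while discarding structure of G.
\min_{G_{\mathrm{sub}} \subseteq G} \; -\, I(G_{\mathrm{sub}}; Y) \; + \; \beta \, I(G_{\mathrm{sub}}; G)
% I(\cdot;\cdot) denotes mutual information; \beta > 0 balances prediction against compression.
% The neural estimator mentioned in the abstract targets the intractability of these MI terms on graph data.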