余剑峤
James Jianqiao Yu

Lecturer (Assistant Professor)

Department of Computer Science

University of York

CSE/139, York YO10 5GH, United Kingdom

jqyu(at)ieee.org Google Scholar
PPGAN: Privacy-Preserving Generative Adversarial Network

Authors
Yi Liu, Jialiang Peng, James J.Q. Yu, and Yi Wu

Published in
Proc. IEEE International Conference on Parallel and Distributed Systems, Tianjin, China, December 2019

Abstract
Generative Adversarial Networks (GANs) and their variants serve as effective data generation models, providing researchers with large amounts of high-quality generated data and offering a promising direction for research where data availability is limited. When a GAN learns a semantically rich data distribution from a dataset, the density of the generated distribution tends to concentrate on the training data. Because the gradient parameters of the deep neural network encode the distribution of the training samples, the model can easily memorize those samples. Consequently, when a GAN is applied to private or sensitive data, for instance, patient medical records, private information may be leaked. To address this issue, we propose a Privacy-Preserving Generative Adversarial Network (PPGAN) model, which achieves differential privacy in GANs by adding well-designed noise to the gradients during model training. In addition, we introduce the Moments Accountant strategy into the PPGAN training process to improve the stability and compatibility of the model by controlling the privacy loss. We also give a mathematical proof that the discriminator satisfies differential privacy. Through extensive case studies on benchmark datasets, we demonstrate that PPGAN can generate high-quality synthetic data while retaining the required data utility under a reasonable privacy budget.
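For illustration, the sketch below shows the general kind of differentially private discriminator update the abstract describes: each sample's gradient is clipped to an L2 bound and Gaussian noise is added before the parameter step, in the style of DP-SGD. This is not the authors' released code; the function name and the hyperparameters (clip_norm, noise_multiplier, learning rate) are illustrative assumptions, and the Moments Accountant that tracks the cumulative privacy loss is omitted.

```python
import numpy as np

def dp_discriminator_step(params, per_sample_grads, lr=0.01,
                          clip_norm=1.0, noise_multiplier=1.1,
                          rng=np.random.default_rng(0)):
    """One hypothetical discriminator update with gradient clipping and Gaussian noise."""
    clipped = []
    for g in per_sample_grads:
        # Bound each sample's influence on the update (L2 clipping).
        norm = np.linalg.norm(g)
        clipped.append(g / max(1.0, norm / clip_norm))
    grad_sum = np.sum(clipped, axis=0)
    # Add Gaussian noise calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad_sum.shape)
    noisy_mean = (grad_sum + noise) / len(per_sample_grads)
    # Plain gradient-descent step on the noisy, averaged gradient.
    return params - lr * noisy_mean

# Toy usage: four per-sample gradients for a 3-dimensional parameter vector.
params = np.zeros(3)
grads = [np.array([0.5, -1.2, 0.3]), np.array([2.0, 0.1, -0.4]),
         np.array([-0.7, 0.9, 1.5]), np.array([0.2, -0.3, 0.8])]
params = dp_discriminator_step(params, grads)
print(params)
```

In a full training loop, an accountant would be queried after each such step to track the accumulated (ε, δ) privacy loss and stop training once the privacy budget is exhausted.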