Citation: | ZHOU Zhiping, QIAN Xinyu. Differential Privacy Algorithm under Deep Neural Networks[J]. Journal of Electronics & Information Technology, 2022, 44(5): 1773-1781. doi: 10.11999/JEIT210276 |
[1] |
刘睿瑄, 陈红, 郭若杨, 等. 机器学习中的隐私攻击与防御[J]. 软件学报, 2020, 31(3): 866–892. doi: 10.13328/j.cnki.jos.005904
LIU Ruixuan, CHEN Hong, GUO Ruoyang, et al. Survey on privacy attacks and defenses in machine learning[J]. Journal of Software, 2020, 31(3): 866–892. doi: 10.13328/j.cnki.jos.005904
|
[2] |
NASR M, SHOKRI R, and HOUMANSADR A. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning[C]. 2019 IEEE Symposium on Security and Privacy, San Francisco, USA, 2019: 739–753. doi: 10.1109/SP.2019.00065.
|
[3] |
HITAJ B, ATENIESE G, and PEREZ-CRUZ F. Deep models under the GAN: Information leakage from collaborative deep learning[C]. The 2017 ACM SIGSAC Conference on Computer and Communications Security, New York, USA, 2017: 603–618.
|
[4] |
JUUTI M, SZYLLER S, MARCHAL S, et al. PRADA: Protecting against DNN model stealing attacks[C]. 2019 IEEE European Symposium on Security and Privacy (EuroS&P), Stockholm, Sweden, 2019: 512–527.
|
[5] |
冯登国, 张敏, 叶宇桐. 基于差分隐私模型的位置轨迹发布技术研究[J]. 电子与信息学报, 2020, 42(1): 74–88. doi: 10.11999/JEIT190632
FENG Dengguo, ZHANG Min, and YE Yutong. Research on differentially private trajectory data publishing[J]. Journal of Electronics &Information Technology, 2020, 42(1): 74–88. doi: 10.11999/JEIT190632
|
[6] |
ABADI M, CHU A, GOODFELLOW I, et al. Deep learning with differential privacy[C]. The 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, The Republic of Austria, 2016: 308–318.
|
[7] |
XU Chugui, REN Ju, ZHANG Deyu, et al. GANobfuscator: Mitigating information leakage under GAN via differential privacy[J]. IEEE Transactions on Information Forensics and Security, 2019, 14(9): 2358–2371. doi: 10.1109/TIFS.2019.2897874
|
[8] |
PHAN N, VU M N, LIU Yang, et al. Heterogeneous Gaussian mechanism: Preserving differential privacy in deep learning with provable robustness[C]. The Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, 2019: 4753–4759.
|
[9] |
PHAN N, WU Xintao, HU Han, et al. Adaptive Laplace mechanism: Differential privacy preservation in deep learning[C]. 2017 IEEE International Conference on Data Mining (ICDM), New Orleans, USA, 2017: 385–394.
|
[10] |
GONG Maoguo, PAN Ke, and XIE Yu. Differential privacy preservation in regression analysis based on relevance[J]. Knowledge-Based Systems, 2019, 173: 140–149. doi: 10.1016/j.knosys.2019.02.028
|
[11] |
ADESUYI T A and KIM B M. Preserving privacy in convolutional neural network: An ∈-tuple differential privacy approach[C]. 2019 IEEE 2nd International Conference on Knowledge Innovation and Invention (ICKII), Seoul, South Korea, 2019: 570–573.
|
[12] |
WU Bingzhe, ZHAO Shiwan, SUN Guangyu, et al. P3SGD: Patient privacy preserving SGD for regularizing deep CNNs in pathological image classification[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 2094–2103.
|
[13] |
ZHOU Yingxue, WU Zhiwei, and BANERJEE A. Bypassing the ambient dimension: Private SGD with gradient subspace identification[EB/OL]. https://arxiv.org/abs/2007.03813,2020.
|
[14] |
SUN Lichao, ZHOU Yingbo, YU P S, et al. Differentially private deep learning with smooth sensitivity[EB/OL]. https://arxiv.org/abs/2003.00505, 2020.
|
[15] |
THAKURTA A. Beyond worst case sensitivity in private data analysis[M]. KAO M Y. Encyclopedia of Algorithms. Boston: Springer, 2016: 192–199.
|
[16] |
XU Jincheng and DU Qingfeng. Adversarial attacks on text classification models using layer-wise relevance propagation[J]. International Journal of Intelligent Systems, 2020, 35(9): 1397–1415. doi: 10.1002/int.22260
|