Citation: REN Yizhi, LIU Rongke, WANG Dong, YUAN Lifeng, SHEN Yanzhao, WU Guohua, WANG Qiuhua, YANG Changtian. A Study of Local Differential Privacy Mechanisms Based on Federated Learning[J]. Journal of Electronics & Information Technology, 2023, 45(3): 784-792. doi: 10.11999/JEIT221064
[1] WANG Shuai, KANG Bo, MA Jinlu, et al. A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19)[J]. European Radiology, 2021, 31(8): 6096–6104. doi: 10.1007/s00330-021-07715-1
[2] MCMAHAN H B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[EB/OL]. https://doi.org/10.48550/arXiv.1602.05629, 2016.
[3] YANG Qiang. AI and data privacy protection: The way to federated learning[J]. Journal of Information Security Research, 2019, 5(11): 961–965. doi: 10.3969/j.issn.2096-1057.2019.11.003
[4] WARNAT-HERRESTHAL S, SCHULTZE H, SHASTRY K L, et al. Swarm Learning for decentralized and confidential clinical machine learning[J]. Nature, 2021, 594(7862): 265–270. doi: 10.1038/s41586-021-03583-3
[5] SONG Congzheng, RISTENPART T, and SHMATIKOV V. Machine learning models that remember too much[C]. 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, USA, 2017: 587–601.
[6] FREDRIKSON M, JHA S, and RISTENPART T. Model inversion attacks that exploit confidence information and basic countermeasures[C]. 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, USA, 2015: 1322–1333.
[7] SUN Lichao, QIAN Jianwei, and CHEN Xun. LDP-FL: Practical private aggregation in federated learning with local differential privacy[C]. The Thirtieth International Joint Conference on Artificial Intelligence, Montreal, Canada, 2021: 1571–1578.
[8] PAPERNOT N, ABADI M, ERLINGSSON Ú, et al. Semi-supervised knowledge transfer for deep learning from private training data[C]. 5th International Conference on Learning Representations, Toulon, France, 2017.
[9] PAPERNOT N, MCDANIEL P, SINHA A, et al. SoK: Security and privacy in machine learning[C]. 2018 IEEE European Symposium on Security and Privacy, London, UK, 2018: 399–414.
[10] TRAMÈR F, ZHANG Fan, JUELS A, et al. Stealing machine learning models via prediction APIs[C]. The 25th USENIX Conference on Security Symposium, Austin, USA, 2016: 601–618.
[11] WANG Binghui and GONG N Z. Stealing hyperparameters in machine learning[C]. 2018 IEEE Symposium on Security and Privacy, San Francisco, USA, 2018: 36–52.
[12] LYU Lingjuan, YU Han, MA Xingjun, et al. Privacy and robustness in federated learning: Attacks and defenses[J]. IEEE Transactions on Neural Networks and Learning Systems, to be published. doi: 10.1109/TNNLS.2022.3216981
[13] SUN Lichao and LYU Lingjuan. Federated model distillation with noise-free differential privacy[C]. The Thirtieth International Joint Conference on Artificial Intelligence, Montreal, Canada, 2021: 1563–1570.
[14] MCMAHAN H B, RAMAGE D, TALWAR K, et al. Learning differentially private recurrent language models[C]. 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
[15] GEYER R C, KLEIN T, and NABI M. Differentially private federated learning: A client level perspective[EB/OL]. https://doi.org/10.48550/arXiv.1712.07557, 2017.
[16] NGUYÊN T T, XIAO Xiaokui, YANG Yin, et al. Collecting and analyzing data from smart device users with local differential privacy[EB/OL]. https://doi.org/10.48550/arXiv.1606.05053, 2016.
[17] DUCHI J C, JORDAN M I, and WAINWRIGHT M J. Local privacy, data processing inequalities, and statistical minimax rates[EB/OL]. https://doi.org/10.48550/arXiv.1302.3203, 2013.
[18] WANG Ning, XIAO Xiaokui, YANG Yin, et al. Collecting and analyzing multidimensional data with local differential privacy[C]. 2019 IEEE 35th International Conference on Data Engineering (ICDE), Macao, China, 2019: 638–649.
[19] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278–2324. doi: 10.1109/5.726791
[20] XIAO Han, RASUL K, and VOLLGRAF R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms[EB/OL]. https://doi.org/10.48550/arXiv.1708.07747, 2017.
[21] KRIZHEVSKY A. Learning multiple layers of features from tiny images[R]. Technical Report TR-2009, 2009.
[22] QIU Xiaohui, YANG Bo, ZHAO Mengchen, et al. Survey on federated learning security defense and privacy protection technology[J]. Application Research of Computers, 2022, 39(11): 3220–3231. doi: 10.19734/j.issn.1001-3695.2022.03.0164
[23] ZHU Ligeng, LIU Zhijian, and HAN Song. Deep leakage from gradients[C]. The 33rd International Conference on Neural Information Processing Systems, Vancouver, Canada, 2019: 1323.
[24] YANG Ziqi, ZHANG Jiyi, CHANG E C, et al. Neural network inversion in adversarial setting via background knowledge alignment[C]. The 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 2019: 225–240.
[25] ZHANG Yuheng, JIA Ruoxi, PEI Hengzhi, et al. The secret revealer: Generative model-inversion attacks against deep neural networks[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 250–258.
[26] BHOWMICK A, DUCHI J, FREUDIGER J, et al. Protection against reconstruction and its applications in private federated learning[EB/OL]. https://doi.org/10.48550/arXiv.1812.00984, 2018.
[27] TRUEX S, LIU Ling, CHOW K H, et al. LDP-Fed: Federated learning with local differential privacy[C]. The Third ACM International Workshop on Edge Systems, Analytics and Networking, Heraklion, Greece, 2020: 61–66.
[28] SHOKRI R, STRONATI M, SONG Congzheng, et al. Membership inference attacks against machine learning models[C]. 2017 IEEE Symposium on Security and Privacy (SP), San Jose, USA, 2017: 3–18.