Citation: SUN Yu, YAN Yu, CUI Jian, XIONG Gaojian, LIU Jianhua. Review of Deep Gradient Inversion Attacks and Defenses in Federated Learning[J]. Journal of Electronics & Information Technology, 2024, 46(2): 428-442. doi: 10.11999/JEIT230541

Review of Deep Gradient Inversion Attacks and Defenses in Federated Learning

doi: 10.11999/JEIT230541
Funds: The National Natural Science Foundation of China (32071775)
  • Received Date: 2023-06-01
  • Revised Date: 2023-12-01
  • Available Online: 2023-12-23
  • Publish Date: 2024-02-29
Abstract: As a distributed machine learning approach that preserves data ownership while enabling data use, federated learning overcomes the data-silo problem that hinders large-scale modeling with big data. However, sharing only gradients rather than raw training data during federated training does not by itself guarantee the confidentiality of users' training data. In recent years, deep gradient inversion attacks have demonstrated that adversaries can reconstruct private training data from shared gradients, posing a serious threat to the privacy of federated learning. As gradient inversion techniques evolve, adversaries can reconstruct ever larger volumes of data from deep neural networks, challenging even Privacy-Preserving Federated Learning (PPFL) schemes built on encrypted gradients. Effective defenses mainly rely on perturbation transformations that obscure the original gradients, inputs, or features to conceal sensitive information. This survey first highlights the gradient inversion vulnerability in PPFL and presents the threat model of gradient inversion. It then reviews deep gradient inversion attacks in detail from the perspectives of paradigms, capabilities, and targets. Perturbation-based defenses are divided into three categories according to the perturbed object: gradient perturbation, input perturbation, and feature perturbation, with representative works in each category analyzed in detail. Finally, an outlook on future research directions is provided.
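
To make the threat concrete, below is a minimal PyTorch sketch of the optimization-based gradient inversion paradigm popularized by DLG (Zhu et al., "Deep Leakage from Gradients", NeurIPS 2019): the adversary initializes dummy data and a soft dummy label at random, then optimizes them so that the gradients they induce on the shared model match the gradients uploaded by the victim. All identifiers here (invert_gradients, true_grads) and the hyperparameter values are illustrative assumptions, not any specific published implementation.

    import torch
    import torch.nn.functional as F

    def invert_gradients(model, true_grads, input_shape, num_classes,
                         steps=300, lr=1.0):
        # Adversary's starting point: a random dummy input and soft dummy label.
        dummy_x = torch.randn(1, *input_shape, requires_grad=True)
        dummy_y = torch.randn(1, num_classes, requires_grad=True)
        optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

        for _ in range(steps):
            def closure():
                optimizer.zero_grad()
                pred = model(dummy_x)
                # Cross-entropy of the prediction against the soft dummy label.
                loss = torch.sum(-F.softmax(dummy_y, dim=-1)
                                 * F.log_softmax(pred, dim=-1))
                dummy_grads = torch.autograd.grad(
                    loss, model.parameters(), create_graph=True)
                # Match the dummy gradients to the observed shared gradients
                # with a squared L2 distance, as in DLG.
                grad_diff = sum(((dg - tg) ** 2).sum()
                                for dg, tg in zip(dummy_grads, true_grads))
                grad_diff.backward()
                return grad_diff
            optimizer.step(closure)

        # If the attack converges, dummy_x approximates the private input.
        return dummy_x.detach(), dummy_y.detach()

Conversely, a representative gradient-perturbation defense clips and noises the gradients on the client before they are shared, in the spirit of differentially private SGD (Abadi et al., 2016); again a hedged sketch with illustrative parameter values:

    def perturb_gradients(grads, clip_norm=1.0, sigma=0.01):
        # Clip the global gradient norm, then add Gaussian noise, so the
        # shared gradients no longer match any single input exactly.
        total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        return [g * scale + sigma * torch.randn_like(g) for g in grads]

The injected noise directly degrades the gradient-matching objective above, though at some cost in model utility, which is why the choice of perturbed object (gradient, input, or feature) matters in practice.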
  • [1]
    JORDAN M I and MITCHELL T M. Machine learning: Trends, perspectives, and prospects[J]. Science, 2015, 349(6245): 255–260. doi: 10.1126/science.aaa8415.
    [2]
    LECUN Y, BENGIO Y, and HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436–444. doi: 10.1038/nature14539.
    [3]
    FANG Binxing. Breaking the conflict between data element flows and privacy protection[EB/OL]. http://event.chinaaet.com/huodong/cite2022/, 2022.
    [4]
    MCMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]. The 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, USA, 2017: 1273–1282.
    [5]
    YANG Qiang, LIU Yang, CHENG Yong, et al. Federated Learning[M]. San Rafael: Morgan & Claypool, 2020: 1–207.
    [6]
    YANG Qiang, LIU Yang, CHEN Tianjian, et al. Federated machine learning: Concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 12. doi: 10.1145/3298981.
    [7]
    LIU Yang, FAN Tao, CHEN Tianjian, et al. FATE: An industrial grade platform for collaborative learning with data protection[J]. Journal of Machine Learning Research, 2021, 22: 1–1.
    [8]
    马艳军, 于佃海, 吴甜, 等. 飞桨: 源于产业实践的开源深度学习平台[J]. 数据与计算发展前沿, 2019, 1(1): 105–115. doi: 10.11871/jfdc.issn.2096.742X.2019.01.011.

    MA Yanjun, YU Dianhai, WU Tian, et al. Paddlepaddle: An open-source deep learning platform from industrial practice[J]. Frontiers of Data and Computing, 2019, 1(1): 105–115. doi: 10.11871/jfdc.issn.2096.742X.2019.01.011.
    [9]
    BONAWITZ K A, EICHNER H, GRIESKAMP W, et al. Towards federated learning at scale: System design[C]. Machine Learning and Systems 2019, Stanford, USA, 2019: 374–388. doi: 10.48550/arXiv.1902.01046.
    [10]
    RYFFEL T, TRASK A, DAHL M, et al. A generic framework for privacy preserving deep learning[EB/OL]. https://arxiv.org/pdf/1811.04017v2.pdf, 2018.
    [11]
    HAO Meng, LI Hongwei, LUO Xizhao, et al. Efficient and privacy-enhanced federated learning for industrial artificial intelligence[J]. IEEE Transactions on Industrial Informatics, 2020, 16(10): 6532–6542. doi: 10.1109/TII.2019.2945367.
    [12]
    RIEKE N, HANCOX J, LI Wenqi, et al. The future of digital health with federated learning[J]. NPJ Digital Medicine, 2020, 3: 119. doi: 10.1038/s41746-020-00323-1.
    [13]
    XU Jie, GLICKSBERG B S, SU Chang, et al. Federated learning for healthcare informatics[J]. Journal of Healthcare Informatics Research, 2021, 5(1): 1–19. doi: 10.1007/s41666-020-00082-4.
    [14]
    MILLS J, HU Jia, and MIN Geyong. Communication-efficient federated learning for wireless edge intelligence in iot[J]. IEEE Internet of Things Journal, 2020, 7(7): 5986–5994. doi: 10.1109/JIOT.2019.2956615.
    [15]
    YANG Wensi, ZHANG Yuhang, YE Kejiang, et al. FFD: A federated learning based method for credit card fraud detection[C]. Proceedings of the 8th International Conference on Big Data, San Diego, USA, 2019: 18–32. doi: 10.1007/978-3-030-23551-2_2.
    [16]
    LONG Guodong, TAN Yue, JIANG Jing, et al. Federated learning for open banking[M]. YANG Qiang, FAN Lixin, and YU Han. Federated Learning: Privacy and Incentive. Cham: Springer, 2020: 240–254. doi 10.1007/978-3-030-63076-8_17.
    [17]
    NASR M, SHOKRI R, and HOUMANSADR A. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning[C]. 2019 IEEE Symposium on Security and Privacy, San Francisco, USA, 2019: 739–753. doi: 10.1109/SP.2019.00065.
    [18]
    MELIS L, SONG Congzheng, DE CRISTOFARO E, et al. Exploiting unintended feature leakage in collaborative learning[C]. 2019 IEEE Symposium on Security and Privacy, San Francisco, USA, 2019: 691–706. doi: 10.1109/SP.2019.00029.
    [19]
    WANG Zhibo, SONG Mengkai, ZHANG Zhifei, et al. Beyond inferring class representatives: User-level privacy leakage from federated learning[C]. Proceedings of 2019 IEEE Conference on Computer Communications, Paris, France, 2019: 2512–2520. doi: 10.1109/INFOCOM.2019.8737416.
    [20]
    ZHU Ligeng, LIU Zhijian, and HAN Song. Deep leakage from gradients[C]. Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, Canada, 2019: 1323. doi: 10.5555/3454287.3455610.
    [21]
    PHONG L T, AONO Y, HAYASHI T, et al. Privacy-preserving deep learning via additively homomorphic encryption[J]. IEEE Transactions on Information Forensics and Security, 2018, 13(5): 1333–1345. doi: 10.1109/TIFS.2017.2787987.
    [22]
    DONG Ye, CHEN Xiaojun, SHEN Liyan, et al. Eastfly: Efficient and secure ternary federated learning[J]. Computers & Security, 2020, 94: 101824. doi: 10.1016/j.cose.2020.101824.
    [23]
    ZHANG Chengliang, LI Suyi, XIA Junzhe, et al. Batchcrypt: Efficient homomorphic encryption for cross-silo federated learning[C/OL]. 2020 USENIX Annual Technical Conference, 2020: 493–506.
    [24]
    ZHU Hangyu, WANG Rui, JIN Yaochu, et al. Distributed additive encryption and quantization for privacy preserving federated deep learning[J]. Neurocomputing, 2021, 463: 309–327. doi: 10.1016/j.neucom.2021.08.062.
    [25]
    ZHANG Jiale, CHEN Bing, YU Shui, et al. PEFL: A privacy-enhanced federated learning scheme for big data analytics[C]. 2019 IEEE Global Communications Conference, Waikoloa, USA, 2019: 1–6. doi: 10.1109/GLOBECOM38437.2019.9014272.
    [26]
    BONAWITZ K, IVANOV V, KREUTER B, et al. Practical secure aggregation for privacy-preserving machine learning[C]. 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, USA, 2017: 1175–1191. doi: 10.1145/3133956.3133982.
    [27]
    XU Guowen, LI Hongwei, LIU Sen, et al. Verifynet: Secure and verifiable federated learning[J]. IEEE Transactions on Information Forensics and Security, 2019, 15: 911–926. doi: 10.1109/TIFS.2019.2929409.
    [28]
    GUO Xiaojie, LIU Zheli, LI Jin, et al. VeriFL: Communication-efficient and fast verifiable aggregation for federated learning[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 1736–1751. doi: 10.1109/TIFS.2020.3043139.
    [29]
    LUO Fucai, AL-KUWARI S, and DING Yong. SVFL: Efficient secure aggregation and verification for cross-silo federated learning[J]. IEEE Transactions on Mobile Computing, 2024, 23(1): 850–864. doi: 10.1109/TMC.2022.3219485.
    [30]
    HAHN C, KIM H, KIM M, et al. VerSA: Verifiable secure aggregation for cross-device federated learning[J]. IEEE Transactions on Dependable and Secure Computing, 2023, 20(1): 36–52. doi: 10.1109/TDSC.2021.3126323.
    [31]
    WANG Yijue, DENG Jieren, GUO Dan, et al. SAPAG: A self-adaptive privacy attack from gradients[EB/OL]. https://arxiv.org/pdf/2009.06228.pdf, 2020.
    [32]
    WEI Wenqi, LIU Ling, LOPER M, et al. A framework for evaluating gradient leakage attacks in federated learning[EB/OL]. https://arxiv.org/pdf/2004.10397v2.pdf, 2020.
    [33]
    GEIPING Jonas, BAUERMEISTER H, DRÖGE H, et al. Inverting gradients - how easy is it to break privacy in federated learning?[C]. The 34th Conference on Neural Information Processing Systems, Vancouver, Canada, 2020: 16937–16947. doi: 10.48550/arXiv.2003.14053.
    [34]
    YIN Hongxu, MALLYA A, VAHDAT A, et al. See through gradients: Image batch recovery via gradinversion[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 16332–16341. doi: 10.1109/CVPR46437.2021.01607.
    [35]
    HATAMIZADEH A, YIN Hongxu, ROTH H, et al. GradViT: Gradient inversion of vision transformers[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 10011–10020. doi: 10.1109/CVPR52688.2022.00978.
    [36]
    JEON J, KIM J, LEE K, et al. Gradient inversion with generative image prior[C/OL]. The 35th Conference on Neural Information Processing Systems, 2021: 29898–29908.
    [37]
    LI Zhuohang, ZHANG Jiaxin, LIU Luyang, et al. Auditing privacy defenses in federated learning via generative gradient leakage[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 10122–10132. doi: 10.1109/CVPR52688.2022.00989.
    [38]
    HUANG Yangsibo, GUPTA S, SONG Zhao, et al. Evaluating gradient inversion attacks and defenses in federated learning[C/OL]. The 35th Conference on Neural Information Processing Systems, 2021: 7232–7241.
    [39]
    YANG Haomiao, GE Mengyu, XIANG Kunlan, et al. Using highly compressed gradients in federated learning for data reconstruction attacks[J]. IEEE Transactions on Information Forensics and Security, 2022, 18: 818–830. doi: 10.1109/TIFS.2022.3227761.
    [40]
    SUN Jingwei, LI Ang, WANG Binghui, et al. Soteria: Provable defense against privacy leakage in federated learning from representation perspective[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 9307–9315. doi: 10.1109/CVPR46437.2021.00919.
    [41]
    DENG Jieren, WANG Yijue, LI Ji, et al. TAG: Gradient attack on transformer-based language models[C]. Findings of the Association for Computational Linguistics, Punta Cana, Dominican Republic, 2021: 3600–3610. doi: 10.18653/v1/2021.findings-emnlp.305.
    [42]
    BALUNOVIĆ M, DIMITROV D I, JOVANOVIĆ N, et al. LAMP: Extracting text from gradients with language model priors[C]. The 36th Conference on Neural Information Processing Systems, New Orleans, USA, 2022: 7641–7654. doi: 10.48550/arXiv.2202.08827.
    [43]
    LI Zhuohang, ZHANG Jiaxin, and LIU Jian. Speech privacy leakage from shared gradients in distributed learning[C]. 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, Rhodes Island, Greece, 2023: 1–5. doi: 10.1109/ICASSP49357.2023.10095443.
    [44]
    VERO M, BALUNOVIĆ M, DIMITROV D I, et al. TabLeak: Tabular data leakage in federated learning[C]. The 40th International Conference on Machine Learning, Hawaii, USA, 2023: 1460. doi: 10.5555/3618408.3619868.
    [45]
    ZHU Junyi and BLASCHKO M B. R-Gap: Recursive gradient attack on privacy[C/OL]. The 9th International Conference on Learning Representations, 2021: 1–17.
    [46]
    CHEN Cangxiong and CAMPBELL N D F. Understanding training-data leakage from gradients in neural networks for image classification[EB/OL]. https://arxiv.org/pdf/2111.10178.pdf, 2021.
    [47]
    KARIYAPPA S, GUO Chuan, MAENG K, et al. Cocktail party attack: Breaking aggregation-based privacy in federated learning using independent component analysis[C]. The 40th International Conference on Machine Learning, Honolulu, USA, 2023: 651.
    [48]
    GUPTA S, HUANG Yangsibo, ZHONG Zexuan, et al. Recovering private text in federated learning of language models[C]. The 36th Conference on Neural Information Processing Systems, New Orleans, USA, 2022: 8130–8143.
    [49]
    LAM M, WEI G Y, BROOKS D, et al. Gradient disaggregation: Breaking privacy in federated learning by reconstructing the user participant matrix[C/OL]. The 38th International Conference on Machine Learning, 2021: 5959–5968.
    [50]
    BOENISCH F, DZIEDZIC A, SCHUSTER R, et al. When the curious abandon honesty: Federated learning is not private[C]. The 2023 IEEE 8th European Symposium on Security and Privacy, Delft, Netherlands, 2021: 175–199,doi: 10.1109/EuroSP57164.2023.00020.
    [51]
    WEN Yuxin, GEIPING J A, FOWL L, et al. Fishing for user data in large-batch federated learning via gradient magnification[C]. The 39th International Conference on Machine Learning, Baltimore, USA, 2022: 23668–23684. doi: 10.48550/arXiv.2202.00580.
    [52]
    PASQUINI D, FRANCATI D, and ATENIESE G. Eluding secure aggregation in federated learning via model inconsistency[C]. 2022 ACM SIGSAC Conference on Computer and Communications Security, Los Angeles, USA, 2022: 2429–2443. doi: 10.1145/3548606.3560557.
    [53]
    FOWL L, GEIPING J, CZAJA W, et al. Robbing the fed: Directly obtaining private data in federated learning with modified models[C/OL]. The 10th International Conference on Learning Representations, 2021: 1–25. doi: 10.48550/arXiv.2110.13057.
    [54]
    ZHAO J C, ELKORDY A R, SHARMA A, et al. The resource problem of using linear layer leakage attack in federated learning[C]. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 3974–3983. doi: 10.1109/CVPR52729.2023.00387.
    [55]
    PAN Xudong, ZHANG Mi, YAN Yifan, et al. Exploring the security boundary of data reconstruction via neuron exclusivity analysis[C]. The 31st USENIX Security Symposium, Boston, USA, 2020: 3989–4006. doi: 10.48550/arXiv.2010.13356.
    [56]
    FOWL L H, GEIPING J, REICH S, et al. Decepticons: Corrupted transformers breach privacy in federated learning for language models[C]. The 11th International Conference on Learning Representations, Kigali, Rwanda, 2022: 1–23. doi: 10.48550/arXiv.2201.12675.
    [57]
    ZHAO Bo, MOPURI K R, and BILEN H. iDLG: Improved deep leakage from gradients[EB/OL]. https://arxiv.org/pdf/2001.02610.pdf, 2020.
    [58]
    DANG T, THAKKAR O, RAMASWAMY S, et al. Revealing and protecting labels in distributed training[C]. The 35th Conference on Neural Information Processing Systems, Sydney, Australia, 2021: 1727–1738. doi: 10.48550/arXiv.2111.00556.
    [59]
    MA Kailang, SUN Yu, CUI Jian, et al. Instance-wise batch label restoration via gradients in federated learning[C]. The 11th International Conference on Learning Representations, Kigali, Rwanda, 2023: 1–15.
    [60]
    ABADI M, CHU A, GOODFELLOW I, et al. Deep learning with differential privacy[C]. 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 2016: 308–318. doi: 10.1145/2976749.2978318.
    [61]
    WEI Wenqi, LIU Ling, WU Yanzhao, et al. Gradient-leakage resilient federated learning[C]. The 2021 IEEE 41st International Conference on Distributed Computing Systems, Washington, USA, 2021: 797–807. doi: 10.1109/ICDCS51616.2021.00081.
    [62]
    WEI Wenqi and LIU Ling. Gradient leakage attack resilient deep learning[J]. IEEE Transactions on Information Forensics and Security, 2021, 17: 303–316. doi: 10.1109/TIFS.2021.3139777.
    [63]
    WANG Junxiao, GUO Song, XIE Xin, et al. Protect privacy from gradient leakage attack in federated learning[C]. 2022 IEEE Conference on Computer Communications, London, UK, 2022: 580–589. doi: 10.1109/INFOCOM48880.2022.9796841.
    [64]
    HUANG Yangsibo, SONG Zhao, LI Kai, et al. InstaHide: Instance-hiding schemes for private distributed learning[C/OL]. The 37th International Conference on Machine Learning, 2020: 419. doi: 10.5555/3524938.3525357.
    [65]
    GAO Wei, GUO Shangwei, ZHANG Tianwei, et al. Privacy-preserving collaborative learning with automatic transformation search[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 114–123. doi: 10.1109/CVPR46437.2021.00018.
    [66]
    MELLOR J, TURNER J, STORKEY A, et al. Neural architecture search without training[C/OL]. The 38th International Conference on Machine Learning, 2021: 7588–7598.
    [67]
    CUBUK E D, ZOPH B, MANÉ D, et al. Autoaugment: Learning augmentation strategies from data[C]. The 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 113–123. doi: 10.1109/CVPR.2019.00020.
    [68]
    HUANG Yangsibo, SONG Zhao, CHEN Danqi, et al. TextHide: Tackling data privacy in language understanding tasks[C/OL]. Findings of the Association for Computational Linguistics, 2020: 1368–1382. doi: 10.18653/v1/2020.findings-emnlp.123.
    [69]
    SCHELIGA D, MÄDER P, and SEELAND M. PRECODE - a generic model extension to prevent deep gradient leakage[C]. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, USA, 2022: 3605–3614. doi: 10.1109/WACV51458.2022.00366.
    [70]
    BALUNOVIĆ M, DIMITROV D I, STAAB R, et al. Bayesian framework for gradient leakage[C/OL]. The 10th International Conference on Learning Representations, 2021: 1–16. doi: 10.48550/arXiv.2111.04706.
    [71]
    CARLINI N, DENG S, GARG S, et al. Is private learning possible with instance encoding?[C]. Proceedings of 2021 IEEE Symposium on Security and Privacy, San Francisco, USA, 2021: 410–427. doi: 10.1109/SP40001.2021.00099.