Volume 46 Issue 3
Mar.  2024
CHEN Zhuo, JIANG Hui, ZHOU Yang. A Selective Defense Strategy for Federated Learning Against Attacks[J]. Journal of Electronics & Information Technology, 2024, 46(3): 1119-1127. doi: 10.11999/JEIT230137

A Selective Defense Strategy for Federated Learning Against Attacks

doi: 10.11999/JEIT230137
Funds:  The National Natural Science Foundation of China (61471089, 61401076)
  • Received Date: 2023-03-07
  • Accepted Date: 2023-08-15
  • Rev Recd Date: 2023-08-03
  • Available Online: 2023-08-18
  • Publish Date: 2024-03-27
  • Federated Learning (FL) trains a model through local training on clients and continuous exchange of model parameters between terminals and the server, which effectively mitigates the data-leakage and privacy risks of centralized machine learning. However, malicious terminals participating in FL can mount adversarial attacks by injecting small perturbations during local learning, causing the global model to output incorrect results. An effective federated defense strategy, SelectiveFL, is proposed in this paper. The strategy first establishes a selective federated defense framework; the terminals then extract attack characteristics through adversarial training and upload their local models to the server. The server selectively aggregates the uploaded models according to these attack characteristics, yielding multiple adaptive defense models. Finally, the proposed defense method is evaluated on several representative benchmark datasets. Experimental results show that, compared with existing work, model accuracy is improved by 2% to 11%.
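The selective-aggregation step described above can be sketched in a few lines: the server groups the uploaded client updates by the attack characteristic reported from local adversarial training, then averages each group separately, producing one adaptive defense model per attack type. This is a minimal illustration of the idea only; the function names (`fed_avg`, `aggregate_selectively`) and the flat-list parameter representation are assumptions, not the paper's implementation.

```python
from collections import defaultdict

def fed_avg(models):
    """Element-wise average of a list of parameter vectors (plain lists)."""
    n = len(models)
    return [sum(vals) / n for vals in zip(*models)]

def aggregate_selectively(updates):
    """updates: list of (attack_tag, params) pairs uploaded by terminals.

    Returns {attack_tag: aggregated_params} - one defense model per
    observed attack characteristic, instead of a single global model."""
    groups = defaultdict(list)
    for tag, params in updates:
        groups[tag].append(params)
    return {tag: fed_avg(models) for tag, models in groups.items()}

# Example: two clients observed FGSM-style perturbations, one PGD-style.
updates = [
    ("fgsm", [1.0, 2.0]),
    ("fgsm", [3.0, 4.0]),
    ("pgd",  [10.0, 20.0]),
]
print(aggregate_selectively(updates))
# → {'fgsm': [2.0, 3.0], 'pgd': [10.0, 20.0]}
```

Grouping before averaging is what makes the defense "selective": a plain FedAvg over all clients would blend updates adapted to different attacks into one compromise model.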
  • [1]
    WU Yulei, DAI Hongning, and WANG Hao. Convergence of blockchain and edge computing for secure and scalable IIoT critical infrastructures in industry 4.0[J]. IEEE Internet of Things Journal, 2021, 8(4): 2300–2317. doi: 10.1109/JIOT.2020.3025916.
    [2]
    LIU Yi, YU J J Q, KANG Jiawen, et al. Privacy-preserving traffic flow prediction: A federated learning approach[J]. IEEE Internet of Things Journal, 2020, 7(8): 7751–7763. doi: 10.1109/JIOT.2020.2991401.
    [3]
    KHAN L U, YAQOOB I, TRAN N H, et al. Edge-computing-enabled smart cities: A comprehensive survey[J]. IEEE Internet of Things Journal, 2020, 7(10): 10200–10232. doi: 10.1109/JIOT.2020.2987070.
    [4]
    WAN C P and CHEN Qifeng. Robust federated learning with attack-adaptive aggregation[EB/OL]. https://doi.org/10.48550/arXiv.2102.05257, 2021.
    [5]
    HONG Junyuan, WANG Haotao, WANG Zhangyang, et al. Federated robustness propagation: Sharing adversarial robustness in federated learning[C/OL]. The Tenth International Conference on Learning Representations, 2022.
    [6]
    REN Huali, HUANG Teng, and YAN Hongyang. Adversarial examples: Attacks and defenses in the physical world[J]. International Journal of Machine Learning and Cybernetics, 2021, 12(11): 3325–3336. doi: 10.1007/s13042–020-01242-z.
    [7]
    GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. The 3rd International Conference on Learning Representations, San Diego, USA, 2015: 1–11.
    [8]
    KURAKIN A, GOODFELLOW I J, and BENGIO S. Adversarial examples in the physical world[C]. The 5th International Conference on Learning Representations, Toulon, France, 2017: 1–14.
    [9]
    MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]. The 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
    [10]
    DONG Yinpeng, LIAO Fangzhou, PANG Tianyu, et al. Boosting adversarial attacks with momentum[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 9185–9193.
    [11]
    MOOSAVI-DEZFOOLI S M, FAWZI A, and FROSSARD P. DeepFool: A simple and accurate method to fool deep neural networks[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 2574–2582.
    [12]
    CARLINI N and WAGNER D. Towards evaluating the robustness of neural networks[C]. 2017 IEEE Symposium on Security and Privacy (SP), San Jose, USA, 2017: 39–57.
    [13]
    CHEN Peng, DU Xin, LU Zhihui, et al. Universal adversarial backdoor attacks to fool vertical federated learning in cloud-edge collaboration[EB/OL]. https://doi.org/10.48550/arXiv.2304.11432, 2023.
    [14]
    CHEN Jinyin, HUANG Guohan, ZHENG Haibin, et al. Graph-fraudster: Adversarial attacks on graph neural network-based vertical federated learning[J]. IEEE Transactions on Computational Social Systems, 2023, 10(2): 492–506. doi: 10.1109/TCSS.2022.3161016.
    [15]
    PAPERNOT N, MCDANIEL P, WU Xi, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]. 2016 IEEE Symposium on Security and Privacy (SP), San Jose, USA, 2016: 582–597.
    [16]
    GUO Feng, ZHAO Qingjie, LI Xuan, et al. Detecting adversarial examples via prediction difference for deep neural networks[J]. Information Sciences, 2019, 501: 182–192. doi: 10.1016/j.ins.2019.05.084.
    [17]
    CHEN Chen, LIU Yuchen, MA Xingjun, et al. CalFAT: Calibrated federated adversarial training with label skewness[EB/OL]. https://doi.org/10.48550/arXiv.2205.14926, 2022.
    [18]
    IBITOYE O, SHAFIQ M O, and MATRAWY A. Differentially private self-normalizing neural networks for adversarial robustness in federated learning[J]. Computers & Security, 2022, 116: 102631. doi: 10.1016/j.cose.2022.102631.
    [19]
    SONG Yunfei, LIU Tian, WEI Tongquan, et al. FDA3: Federated defense against adversarial attacks for cloud-based IIoT applications[J]. IEEE Transactions on Industrial Informatics, 2021, 17(11): 7830–7838. doi: 10.1109/TII.2020.3005969.
    [20]
    FENG Jun, YANG L T, ZHU Qing, et al. Privacy-preserving tensor decomposition over encrypted data in a federated cloud environment[J]. IEEE Transactions on Dependable and Secure Computing, 2020, 17(4): 857–868. doi: 10.1109/TDSC.2018.2881452.
    [21]
    FENG Jun, YANG L T, REN Bocheng, et al. Tensor recurrent neural network with differential privacy[J]. IEEE Transactions on Computers, 2023: 1–11.
  • 加载中
