| Citation: | CHEN Zhuo, JIANG Hui, ZHOU Yang. A Selective Defense Strategy for Federated Learning Against Attacks[J]. Journal of Electronics & Information Technology, 2024, 46(3): 1119-1127. doi: 10.11999/JEIT230137 | 
 
| [1] | WU Yulei, DAI Hongning, and WANG Hao. Convergence of blockchain and edge computing for secure and scalable IIoT critical infrastructures in industry 4.0[J]. IEEE Internet of Things Journal, 2021, 8(4): 2300–2317. doi:  10.1109/JIOT.2020.3025916. | 
| [2] | LIU Yi, YU J J Q, KANG Jiawen, et al. Privacy-preserving traffic flow prediction: A federated learning approach[J]. IEEE Internet of Things Journal, 2020, 7(8): 7751–7763. doi:  10.1109/JIOT.2020.2991401. | 
| [3] | KHAN L U, YAQOOB I, TRAN N H, et al. Edge-computing-enabled smart cities: A comprehensive survey[J]. IEEE Internet of Things Journal, 2020, 7(10): 10200–10232. doi:  10.1109/JIOT.2020.2987070. | 
| [4] | WAN C P and CHEN Qifeng. Robust federated learning with attack-adaptive aggregation[EB/OL]. https://doi.org/10.48550/arXiv.2102.05257, 2021. | 
| [5] | HONG Junyuan, WANG Haotao, WANG Zhangyang, et al. Federated robustness propagation: Sharing adversarial robustness in federated learning[C/OL]. The Tenth International Conference on Learning Representations, 2022. | 
| [6] | REN Huali, HUANG Teng, and YAN Hongyang. Adversarial examples: Attacks and defenses in the physical world[J]. International Journal of Machine Learning and Cybernetics, 2021, 12(11): 3325–3336. doi:  10.1007/s13042-020-01242-z. | 
| [7] | GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. The 3rd International Conference on Learning Representations, San Diego, USA, 2015: 1–11. | 
| [8] | KURAKIN A, GOODFELLOW I J, and BENGIO S. Adversarial examples in the physical world[C]. The 5th International Conference on Learning Representations, Toulon, France, 2017: 1–14. | 
| [9] | MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]. The 6th International Conference on Learning Representations, Vancouver, Canada, 2018. | 
| [10] | DONG Yinpeng, LIAO Fangzhou, PANG Tianyu, et al. Boosting adversarial attacks with momentum[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 9185–9193. | 
| [11] | MOOSAVI-DEZFOOLI S M, FAWZI A, and FROSSARD P. DeepFool: A simple and accurate method to fool deep neural networks[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 2574–2582. | 
| [12] | CARLINI N and WAGNER D. Towards evaluating the robustness of neural networks[C]. 2017 IEEE Symposium on Security and Privacy (SP), San Jose, USA, 2017: 39–57. | 
| [13] | CHEN Peng, DU Xin, LU Zhihui, et al. Universal adversarial backdoor attacks to fool vertical federated learning in cloud-edge collaboration[EB/OL]. https://doi.org/10.48550/arXiv.2304.11432, 2023. | 
| [14] | CHEN Jinyin, HUANG Guohan, ZHENG Haibin, et al. Graph-fraudster: Adversarial attacks on graph neural network-based vertical federated learning[J]. IEEE Transactions on Computational Social Systems, 2023, 10(2): 492–506. doi:  10.1109/TCSS.2022.3161016. | 
| [15] | PAPERNOT N, MCDANIEL P, WU Xi, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]. 2016 IEEE Symposium on Security and Privacy (SP), San Jose, USA, 2016: 582–597. | 
| [16] | GUO Feng, ZHAO Qingjie, LI Xuan, et al. Detecting adversarial examples via prediction difference for deep neural networks[J]. Information Sciences, 2019, 501: 182–192. doi:  10.1016/j.ins.2019.05.084. | 
| [17] | CHEN Chen, LIU Yuchen, MA Xingjun, et al. CalFAT: Calibrated federated adversarial training with label skewness[EB/OL]. https://doi.org/10.48550/arXiv.2205.14926, 2022. | 
| [18] | IBITOYE O, SHAFIQ M O, and MATRAWY A. Differentially private self-normalizing neural networks for adversarial robustness in federated learning[J]. Computers & Security, 2022, 116: 102631. doi:  10.1016/j.cose.2022.102631. | 
| [19] | SONG Yunfei, LIU Tian, WEI Tongquan, et al. FDA3: Federated defense against adversarial attacks for cloud-based IIoT applications[J]. IEEE Transactions on Industrial Informatics, 2021, 17(11): 7830–7838. doi:  10.1109/TII.2020.3005969. | 
| [20] | FENG Jun, YANG L T, ZHU Qing, et al. Privacy-preserving tensor decomposition over encrypted data in a federated cloud environment[J]. IEEE Transactions on Dependable and Secure Computing, 2020, 17(4): 857–868. doi:  10.1109/TDSC.2018.2881452. | 
| [21] | FENG Jun, YANG L T, REN Bocheng, et al. Tensor recurrent neural network with differential privacy[J]. IEEE Transactions on Computers, 2023: 1–11. | 
