Volume 45, Issue 12, Dec. 2023
Citation: HU Jun, SHI Yijie. Adversarial Defense Algorithm Based on Momentum Enhanced Feature Map[J]. Journal of Electronics & Information Technology, 2023, 45(12): 4548-4555. doi: 10.11999/JEIT221414

Adversarial Defense Algorithm Based on Momentum Enhanced Feature Map

doi: 10.11999/JEIT221414
Funds:  The National Natural Science Foundation of China (61936001, 62276038), The Key Cooperation Project of Chongqing Municipal Education Commission (HZ2021008), The National Natural Science Foundation of Chongqing (cstc2019jcyj-cxttX0002, cstc2021ycjh-bgzxm0013)
  • Received Date: 2022-11-09
  • Rev Recd Date: 2023-03-05
  • Available Online: 2023-03-10
  • Publish Date: 2023-12-26
Abstract: Deep Neural Networks (DNN) are widely used because of their excellent performance, but their vulnerability to adversarial examples exposes them to serious security risks. By visualizing the convolution process of a DNN, it is found that as the convolutional layers deepen, the perturbation added to the original input by an adversarial attack becomes increasingly pronounced. Based on this finding, a defense algorithm based on Momentum Enhanced Feature maps (MEF) is proposed, which adopts the momentum-method idea of correcting later results with earlier ones. The MEF algorithm deploys a feature enhancement layer on the convolutional layers of the DNN to form a Feature Enhancement Block (FEB). The FEB combines the original input with the feature maps of the shallow convolutional layers to generate a feature enhancement map, which is then used to enhance the deeper feature maps. Meanwhile, to keep the feature enhancement map effective at every layer, the enhanced feature map in turn updates the feature enhancement map. To verify the effectiveness of the MEF algorithm, DNN models deployed with it are subjected to various white-box and black-box attacks. The results show that under Projected Gradient Descent (PGD) and Fast Gradient Sign Method (FGSM) attacks, the recognition accuracy of the MEF algorithm on adversarial examples is 3%~5% higher than that of Adversarial Training (AT), and its accuracy on clean examples is also improved. Furthermore, when tested with adversarial attacks stronger than those used during training, the MEF algorithm exhibits greater robustness than the state-of-the-art Parametric Noise Injection (PNI) and Learn2Perturb (L2P) algorithms.
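To make the enhancement-and-update loop concrete, the following PyTorch sketch shows one plausible reading of the abstract: a Feature Enhancement Block that corrects a deeper feature map with an enhancement map built from the input and shallow features, then refreshes that map with a momentum-style update. The class name, the 1×1 projection, the bilinear resizing, the concatenation used to build the initial map, and the coefficient `beta` are illustrative assumptions rather than the authors' exact formulation.

```python
# Minimal sketch of a momentum-style Feature Enhancement Block (FEB), assuming PyTorch.
# The projection layer, resizing strategy, and momentum coefficient `beta` are
# illustrative assumptions; the paper's exact formulation may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureEnhancementBlock(nn.Module):
    """Enhances a deep feature map with a running feature enhancement map."""

    def __init__(self, map_channels: int, feat_channels: int, beta: float = 0.9):
        super().__init__()
        self.beta = beta  # momentum coefficient (assumed hyper-parameter)
        # 1x1 convolution projects the enhancement map onto this layer's channels
        self.project = nn.Conv2d(map_channels, feat_channels, kernel_size=1)

    def forward(self, feat, enhance_map):
        # Resize the enhancement map to the spatial size of the current feature map
        e = F.interpolate(enhance_map, size=feat.shape[-2:],
                          mode="bilinear", align_corners=False)
        e = self.project(e)
        # Correct the current (later) feature map with the accumulated (earlier) map
        enhanced = feat + e
        # The enhanced feature map in turn updates the enhancement map
        new_map = self.beta * e + (1.0 - self.beta) * enhanced
        return enhanced, new_map


# Usage sketch: the initial enhancement map is formed from the original input and a
# shallow feature map (here simply concatenated), then applied to a deeper layer.
if __name__ == "__main__":
    x = torch.randn(1, 3, 32, 32)             # original input
    shallow = torch.randn(1, 16, 32, 32)      # shallow conv feature map (assumed)
    deep = torch.randn(1, 64, 16, 16)         # deeper conv feature map (assumed)

    init_map = torch.cat([x, shallow], dim=1)  # 3 + 16 = 19 channels
    feb = FeatureEnhancementBlock(map_channels=19, feat_channels=64)
    enhanced, new_map = feb(deep, init_map)
    print(enhanced.shape, new_map.shape)       # both torch.Size([1, 64, 16, 16])
```

In this reading, `beta` plays the same role as the momentum coefficient in gradient descent with momentum: the accumulated enhancement map dominates, while each newly enhanced feature map contributes a smaller correction.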
References
    [1]
    MAULUD D H, ZEEBAREE S R M, JACKSI K, et al. State of art for semantic analysis of natural language processing[J]. Qubahan Academic Journal, 2021, 1(2): 21–28. doi: 10.48161/qaj.v1n2a44
    [2]
    ALHARBI S, ALRAZGAN M, ALRASHED A, et al. Automatic speech recognition: Systematic literature review[J]. IEEE Access, 2021, 9: 131858–131876. doi: 10.1109/ACCESS.2021.3112535
    [3]
    CHEN Yi, TANG Di, and ZOU Wei. Android malware detection based on deep learning: Achievements and challenges[J]. Journal of Electronics & Information Technology, 2020, 42(9): 2082–2094. doi: 10.11999/JEIT200009
    [4]
    SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]. 2nd International Conference on Learning Representations, Banff, Canada, 2014.
    [5]
    GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. 3rd International Conference on Learning Representations, San Diego, USA, 2015.
    [6]
    MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]. 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
    [7]
    CARLINI N and WAGNER D. Towards evaluating the robustness of neural networks[C]. 2017 IEEE Symposium on Security and Privacy (SP), San Jose, USA, 2017: 39–57.
    [8]
    LIU Yanpei, CHEN Xinyun, LIU Chang, et al. Delving into transferable adversarial examples and black-box attacks[C]. 5th International Conference on Learning Representations, Toulon, France, 2017.
    [9]
    ANDRIUSHCHENKO M, CROCE F, FLAMMARION N, et al. Square attack: A query-efficient black-box adversarial attack via random search[C]. 16th European Conference on Computer Vision, Glasgow, UK, 2020: 484–501.
    [10]
    LIN Jiadong, SONG Chuanbiao, HE Kun, et al. Nesterov accelerated gradient and scale invariance for adversarial attacks[C]. 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.
    [11]
    ZOU Junhua, DUAN Yexin, REN Chuanlun, et al. Perturbation initialization, Adam-Nesterov and Quasi-Hyperbolic momentum for adversarial examples[J]. Acta Electronica Sinica, 2022, 50(1): 207–216. doi: 10.12263/DZXB.20200839
    [12]
    XU Weilin, EVANS D, and QI Yanjun. Feature squeezing: Detecting adversarial examples in deep neural networks[C]. 2018 Network and Distributed System Security Symposium (NDSS), San Diego, USA, 2018.
    [13]
    SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: A simple way to prevent neural networks from overfitting[J]. The Journal of Machine Learning Research, 2014, 15(1): 1929–1958.
    [14]
    DHILLON G S, AZIZZADENESHELI K, LIPTON Z C, et al. Stochastic activation pruning for robust adversarial defense[C]. 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
    [15]
    VIVEK B S and BABU R V. Single-step adversarial training with dropout scheduling[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 947–956.
    [16]
    MENG Dongyu and CHEN Hao. MagNet: A two-pronged defense against adversarial examples[C]. The 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, USA, 2017: 135–147.
    [17]
    SONG Yang, KIM T, NOWOZIN S, et al. PixelDefend: Leveraging generative models to understand and defend against adversarial examples[C]. 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
    [18]
    PAPERNOT N, MCDANIEL P, WU Xi, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]. 2016 IEEE Symposium on Security and Privacy (SP), San Jose, USA, 2016: 582–597.
    [19]
    XIE Cihang, WU Yuxin, VAN DER MAATEN L, et al. Feature denoising for improving adversarial robustness[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 501–509.
    [20]
    HE Zhezhi, RAKIN A S, and FAN Deliang. Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 588–597.
    [21]
    JEDDI A, SHAFIEE M J, KARG M, et al. Learn2Perturb: An end-to-end feature perturbation learning to improve adversarial robustness[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 1238–1247.
    [22]
    ZHANG Xiaoqin, WANG Jinxin, WANG Tao, et al. Robust feature learning for adversarial defense via hierarchical feature alignment[J]. Information Sciences, 2021, 560: 256–270. doi: 10.1016/J.INS.2020.12.042
    [23]
    XIAO Chang and ZHENG Changxi. One man's trash is another man's treasure: Resisting adversarial examples by adversarial examples[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 409–418.
    [24]
    ATHALYE A, CARLINI N, and WAGNER D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples[C]. The 35th International Conference on Machine Learning, Stockholm, Sweden, 2018: 274–283.