Citation: QIAN Yaguan, KONG Yaxin, CHEN Kecheng, SHEN Yunkai, BAO Qiqi, JI Shouling. Adversarial Transferability Attack on Deep Neural Networks Through Spectral Coefficient Decay[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250157

Adversarial Transferability Attack on Deep Neural Networks Through Spectral Coefficient Decay

doi: 10.11999/JEIT250157 cstr: 32379.14.JEIT250157
Funds:  The Special Fund for Basic Scientific Research of Zhejiang University of Science and Technology (2025QN060), The National Key Research and Development Program of China (2022YFB3102100), The National Natural Science Foundation of China (62476250), Zhejiang Provincial Natural Science Foundation Key Project (LZ22F020007, LQN25F010018)
  • Received Date: 2025-03-12
  • Rev Recd Date: 2025-07-16
  • Available Online: 2025-07-25
  • Objective  The rapid advancement of machine learning has accelerated the development of artificial intelligence applications built on big data. Deep Neural Networks (DNNs) are now widely used in real-world systems, including autonomous driving and facial recognition. Despite their advantages, DNNs remain vulnerable to adversarial attacks, particularly in image recognition, which exposes significant security risks. Attackers can generate carefully crafted adversarial examples that appear visually indistinguishable from benign inputs yet cause incorrect model predictions. In critical applications, such attacks may lead to system failures. Studying adversarial attack strategies is therefore crucial for designing effective defenses and improving model robustness in practical deployments.
  • Methods  This paper proposes a black-box adversarial attack method termed Spectral Coefficient Attenuation (SCA), based on Spectral Coefficient Decay (SCD). SCA perturbs spectral coefficients during the iterative generation of adversarial examples by attenuating the amplitude of each frequency component (see the illustrative sketch after this abstract). This attenuation introduces diverse perturbation patterns and reduces the dependence of adversarial examples on the source model, thereby improving their transferability across models.
  • Results and Discussions  Adversarial examples were generated on three naturally trained deep models with Convolutional Neural Network (CNN) architectures and evaluated by their attack success rates against several widely used convolutional image recognition models. Compared with MI-FGSM, the proposed method raised the average attack success rate by 4.7%, 4.1%, and 4.9% for the three source models, respectively. To further assess the effectiveness of the SCA algorithm, additional experiments were conducted on Transformer-based vision models, where SCA improved the average success rate by 4.6%, 4.5%, and 3.7% over MI-FGSM.
  • Conclusions  The proposed method attenuates the energy of individual spectral components during the iterative, frequency-domain generation of adversarial examples. This reduces reliance on source-model-specific features, mitigates the risk of overfitting, and enhances black-box transferability across models with different architectures. Extensive experiments on the ImageNet validation set confirm the effectiveness of the SCA method: whether targeting CNN-based or Transformer-based models, the SCA algorithm consistently achieves high transferability and robust attack performance.
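
The abstract only outlines the attenuation mechanism. Below is a minimal, hypothetical Python sketch of one way it could be realized, assuming a Discrete Cosine Transform (DCT) as the frequency transform and an MI-FGSM-style momentum loop; the function names and parameters (attenuate_spectrum, the decay range, num_samples, step sizes) are illustrative assumptions, not the authors' released implementation.

    import numpy as np
    from scipy.fft import dctn, idctn

    def attenuate_spectrum(x, low=0.7, high=1.0, rng=None):
        # x: float image array of shape (H, W, C) with values in [0, 1].
        # Scale every DCT coefficient by an independent factor drawn from
        # U[low, high], damping the amplitude of each frequency component.
        rng = rng or np.random.default_rng()
        coeffs = dctn(x, type=2, norm="ortho", axes=(0, 1))   # to frequency domain
        decay = rng.uniform(low, high, size=coeffs.shape)     # per-coefficient decay
        return idctn(coeffs * decay, type=2, norm="ortho", axes=(0, 1))

    def sca_attack(x, grad_fn, eps=16/255, steps=10, mu=1.0, num_samples=5):
        # grad_fn(x) -> gradient of the classification loss w.r.t. x on the
        # source (surrogate) model. Gradients are averaged over several
        # randomly attenuated copies to reduce overfitting to that model.
        alpha = eps / steps
        x_adv, g = x.copy(), np.zeros_like(x)
        for _ in range(steps):
            grad = np.mean(
                [grad_fn(attenuate_spectrum(x_adv)) for _ in range(num_samples)],
                axis=0)
            g = mu * g + grad / (np.abs(grad).sum() + 1e-12)   # momentum, L1-normalized
            x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
            x_adv = np.clip(x_adv, 0.0, 1.0)                   # keep a valid image
        return x_adv

Resampling the per-coefficient decay factors at every iteration is what diversifies the perturbation patterns: the adversarial example cannot lock onto any single spectral signature of the source model, which is the intuition behind the improved black-box transferability reported above.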