Citation: QIAN Yaguan, KONG Yaxin, CHEN Kecheng, SHEN Yunkai, BAO Qiqi, JI Shouling. Adversarial Transferability Attack on Deep Neural Networks Through Spectral Coefficient Decay[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250157
[1] TSUZUKU Y and SATO I. On the structural sensitivity of deep convolutional networks to the directions of Fourier basis functions[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 51–60. doi: 10.1109/CVPR.2019.00014.
[2] YIN Dong, LOPES R G, SHLENS J, et al. A Fourier perspective on model robustness in computer vision[C]. The 33rd International Conference on Neural Information Processing Systems, Vancouver, Canada, 2019: 1189.
[3] PANNU A. Artificial intelligence and its application in different areas[J]. International Journal of Engineering and Innovative Technology, 2015, 4(10): 79–84.
[4] WANG Haohan, WU Xindi, HUANG Zeyi, et al. High-frequency component helps explain the generalization of convolutional neural networks[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 8681–8691. doi: 10.1109/CVPR42600.2020.00871.
[5] GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. 3rd International Conference on Learning Representations, San Diego, USA, 2015.
[6] ZHANG Jiaming, YI Qi, and SANG Jitao. Towards adversarial attack on vision-language pre-training models[C]. The 30th ACM International Conference on Multimedia, Lisboa, Portugal, 2022: 5005–5013. doi: 10.1145/3503161.3547801.
[7] LIU Yanpei, CHEN Xinyun, LIU Chang, et al. Delving into transferable adversarial examples and black-box attacks[C]. 5th International Conference on Learning Representations, Toulon, France, 2017.
[8] GAO Sensen, JIA Xiaojun, REN Xuhong, et al. Boosting transferability in vision-language attacks via diversification along the intersection region of adversarial trajectory[C]. 18th European Conference on Computer Vision, Milan, Italy, 2024: 442–460. doi: 10.1007/978-3-031-72998-0_25.
[9] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]. 2nd International Conference on Learning Representations, Banff, Canada, 2014.
[10] KURAKIN A, GOODFELLOW I J, and BENGIO S. Adversarial examples in the physical world[C]. 5th International Conference on Learning Representations, Toulon, France, 2017.
[11] DONG Yinpeng, LIAO Fangzhou, PANG Tianyu, et al. Boosting adversarial attacks with momentum[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 9185–9193. doi: 10.1109/CVPR.2018.00957.
[12] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]. 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
[13] CARLINI N and WAGNER D. Towards evaluating the robustness of neural networks[C]. 2017 IEEE Symposium on Security and Privacy, San Jose, USA, 2017: 39–57. doi: 10.1109/SP.2017.49.
[14] MOOSAVI-DEZFOOLI S M, FAWZI A, and FROSSARD P. DeepFool: A simple and accurate method to fool deep neural networks[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 2574–2582. doi: 10.1109/CVPR.2016.282.
[15] ZHAO Yunqing, PANG Tianyu, DU Chao, et al. On evaluating adversarial robustness of large vision-language models[C]. The 37th International Conference on Neural Information Processing Systems, New Orleans, USA, 2023: 2355.
[16] QIAN Yaguan, HE Shuke, ZHAO Chenyu, et al. LEA2: A lightweight ensemble adversarial attack via non-overlapping vulnerable frequency regions[C]. The IEEE/CVF International Conference on Computer Vision, Paris, France, 2023: 4487–4498. doi: 10.1109/ICCV51070.2023.00416.
[17] QIAN Yaguan, CHEN Kecheng, WANG Bin, et al. Enhancing transferability of adversarial examples through mixed-frequency inputs[J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 7633–7645. doi: 10.1109/TIFS.2024.3430508.
[18] LIN Jiadong, SONG Chuanbiao, HE Kun, et al. Nesterov accelerated gradient and scale invariance for adversarial attacks[C]. International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.
[19] WANG Xiaosen and HE Kun. Enhancing the transferability of adversarial attacks through variance tuning[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 1924–1933. doi: 10.1109/CVPR46437.2021.00196.
[20] ZHU Hegui, REN Yuchen, SUI Xiaoyan, et al. Boosting adversarial transferability via gradient relevance attack[C]. 2023 IEEE/CVF International Conference on Computer Vision, Paris, France, 2023: 4718–4727. doi: 10.1109/ICCV51070.2023.00437.
[21] XIE Cihang, ZHANG Zhishuai, ZHOU Yuyin, et al. Improving transferability of adversarial examples with input diversity[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 2725–2734. doi: 10.1109/CVPR.2019.00284.
[22] LONG Yuyang, ZHANG Qilong, ZENG Boheng, et al. Frequency domain model augmentation for adversarial attack[C]. 17th European Conference on Computer Vision, Tel Aviv, Israel, 2022: 549–566. doi: 10.1007/978-3-031-19772-7_32.
[23] WANG Xiaosen, ZHANG Zeliang, and ZHANG Jianping. Structure invariant transformation for better adversarial transferability[C]. 2023 IEEE/CVF International Conference on Computer Vision, Paris, France, 2023: 4584–4596. doi: 10.1109/ICCV51070.2023.00425.
[24] ZHANG Jianping, HUANG J T, WANG Wenxuan, et al. Improving the transferability of adversarial samples by path-augmented method[C]. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 8173–8182. doi: 10.1109/CVPR52729.2023.00790.
[25] ZHAO Hongzhi, HAO Lingguang, HAO Kuangrong, et al. Remix: Towards the transferability of adversarial examples[J]. Neural Networks, 2023, 163: 367–378. doi: 10.1016/j.neunet.2023.04.012.
[26] XU Yonghao and GHAMISI P. Universal adversarial examples in remote sensing: Methodology and benchmark[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5619815. doi: 10.1109/TGRS.2022.3156392.
[27] XIAO Jun, LYU Zihang, ZHANG Cong, et al. Towards progressive multi-frequency representation for image warping[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2024: 2995–3004. doi: 10.1109/CVPR52733.2024.00289.
[28] LIN Xinmiao, LI Yikang, HSIAO J, et al. Catch missing details: Image reconstruction with frequency augmented variational autoencoder[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 1736–1745. doi: 10.1109/CVPR52729.2023.00173.
[29] RAO K R and YIP P. Discrete Cosine Transform: Algorithms, Advantages, Applications[M]. San Diego, USA: Academic Press Professional, Inc., 1990.
[30] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90.
[31] HUANG Gao, LIU Zhuang, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]. IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2261–2269. doi: 10.1109/CVPR.2017.243.
[32] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. 3rd International Conference on Learning Representations, San Diego, USA, 2015.
[33] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[C]. International Conference on Learning Representations, Vienna, Austria, 2021.
[34] TOUVRON H, CORD M, SABLAYROLLES A, et al. Going deeper with image transformers[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 32–42. doi: 10.1109/ICCV48922.2021.00010.
[35] CHEN Zhengsu, XIE Lingxi, NIU Jianwei, et al. Visformer: The vision-friendly transformer[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 569–578. doi: 10.1109/ICCV48922.2021.00063.
[36] D’ASCOLI S, TOUVRON H, LEAVITT M L, et al. ConViT: Improving vision transformers with soft convolutional inductive biases[C]. The 38th International Conference on Machine Learning, 2021: 2286–2296.
[37] LIU Ze, LIN Yutong, CAO Yue, et al. Swin transformer: Hierarchical vision transformer using shifted windows[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 9992–10002. doi: 10.1109/ICCV48922.2021.00986.