
Patch-based Adversarial Example Generation Method for Multi-spectral Object Tracking

MA Jiayi, XIANG Xinyu, YAN Qinglong, ZHANG Hao, HUANG Jun, MA Yong

Citation: MA Jiayi, XIANG Xinyu, YAN Qinglong, ZHANG Hao, HUANG Jun, MA Yong. Patch-based Adversarial Example Generation Method for Multi-spectral Object Tracking[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT240891


doi: 10.11999/JEIT240891
Funds: The National Natural Science Foundation of China (U23B2050, 62473297)
Details
    Author biographies:

    MA Jiayi: male, Ph.D., professor; research interests include computer vision, pattern recognition, and image processing

    XIANG Xinyu: male, Ph.D. candidate; research interests include adversarial attacks and object tracking

    YAN Qinglong: male, master's student; research interests include adversarial attacks and image fusion

    ZHANG Hao: male, Ph.D. candidate; research interests include computer vision and machine learning

    HUANG Jun: male, Ph.D., professor; research interests include infrared thermal imaging and machine vision

    MA Yong: male, Ph.D., professor; research interests include infrared thermal imaging, infrared hyperspectral imaging, and machine vision

    Corresponding author:

    MA Yong  mayong@whu.edu.cn

  • CLC number: TN911.73; TP391.41

  • Abstract: Existing research on generating adversarial examples against trackers focuses mainly on the visible band and cannot mount effective attacks on trackers under multi-spectral conditions. To fill this gap, this paper proposes a patch-based adversarial example generation network for multi-spectral object tracking, which markedly improves the attack effectiveness of adversarial examples across spectral bands. Specifically, the network comprises an adversarial texture generation module and an adversarial shape optimization strategy: the former semantically disturbs the tracker's understanding of target texture in the visible band, while the latter significantly impairs the extraction of features associated with thermally salient targets. In addition, a mis-regression loss and a mask interference loss are designed according to the characteristics of different trackers to guide patch-based adversarial example generation for multi-spectral tracking models, causing the predicted bounding box to expand or drift off the target, and a maximum feature difference loss is introduced to weaken the correlation between the template frame and the search frame in feature space, thereby achieving an effective attack on the tracker. Qualitative and quantitative experiments demonstrate that the proposed adversarial examples effectively raise the attack success rate against trackers in multi-spectral environments.
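The loss design described in the abstract can be sketched in a minimal form. The snippet below is an illustrative sketch only, assuming a cosine-similarity formulation for the maximum feature difference loss and a box-expansion formulation for the mis-regression loss; the function names, the (cx, cy, w, h) box parameterization, and the weighting parameter `lam` are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def max_feature_difference_loss(template_feat, search_feat):
    """Cosine similarity between template and search features:
    minimizing it decorrelates the two feature maps."""
    t = template_feat.ravel()
    s = search_feat.ravel()
    return np.dot(t, s) / (np.linalg.norm(t) * np.linalg.norm(s) + 1e-8)

def mis_regression_loss(pred_box, target_box):
    """Encourage the predicted box (cx, cy, w, h) to expand relative to
    the clean target box: penalize small width/height ratios."""
    _, _, pw, ph = pred_box
    _, _, tw, th = target_box
    return -(pw / tw + ph / th)  # minimizing this grows the predicted box

def total_attack_loss(template_feat, search_feat, pred_box, target_box, lam=1.0):
    """Combined attack objective: expand/mislead the regression while
    weakening template-search feature correlation."""
    return (mis_regression_loss(pred_box, target_box)
            + lam * max_feature_difference_loss(template_feat, search_feat))
```

Minimizing the combined objective pushes the tracker's predicted box to expand or drift while simultaneously making the search-frame features dissimilar to the template, which matches the two attack effects the abstract describes.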
  • Figure 1  Patch-based adversarial example generation network for multi-spectral object tracking

    Figure 2  Details of multi-spectral patch generation

    Figure 3  Comparative experiments of multi-spectral adversarial example attacks

    Figure 4  Qualitative experiments of tracking attacks on different targets in different multi-spectral scenes

    Table 1  Quantitative comparison of visible-band attack results for SiamRPN/SiamMask adversarial examples

    | Tracker  | Scene | Result  | Clean | PAT | ASR (%) | MTD | ASR (%) | Ours | ASR (%) |
    |----------|-------|---------|-------|-----|---------|-----|---------|------|---------|
    | SiamRPN  | Day   | Success | 38    | 22  | 42.11   | 21  | 44.71   | 7    | 81.57   |
    | SiamRPN  | Day   | Failure | 12    | 28  |         | 29  |         | 43   |         |
    | SiamRPN  | Night | Success | 27    | 15  | 44.44   | 11  | 59.25   | 5    | 81.48   |
    | SiamRPN  | Night | Failure | 23    | 35  |         | 39  |         | 45   |         |
    | SiamMask | Day   | Success | 41    | 27  | 34.15   | 36  | 12.19   | 19   | 53.65   |
    | SiamMask | Day   | Failure | 9     | 23  |         | 14  |         | 31   |         |
    | SiamMask | Night | Success | 36    | 21  | 41.06   | 29  | 19.44   | 17   | 52.77   |
    | SiamMask | Night | Failure | 14    | 29  |         | 21  |         | 33   |         |

    (Each cell gives the number of videos tracked successfully or unsuccessfully under each condition; "ASR" is the attack success rate.)
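The attack success rates in Tables 1–3 are consistent with the fraction of clean-success videos on which tracking fails after the attack. A minimal sketch of that computation follows; the formula is inferred from the table values rather than stated explicitly on this page.

```python
def attack_success_rate(clean_success: int, adv_success: int) -> float:
    """Percentage of videos tracked successfully on clean input
    but no longer tracked successfully after the attack."""
    return 100.0 * (clean_success - adv_success) / clean_success

# Table 1, SiamRPN, daytime: 38 clean successes, 7 remain after the proposed attack.
print(round(attack_success_rate(38, 7), 2))  # 81.58 (reported as 81.57 in Table 1)
```

The same formula reproduces, for example, the 44.44% rate for the PAT attack on SiamRPN at night ((27 − 15)/27) and the 68.35% rate for the mis-regression loss in Table 3 ((79 − 25)/79).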

    Table 2  Quantitative comparison of infrared-band attack results for SiamRPN/SiamMask adversarial examples

    | Tracker  | Scene | Result  | Clean | HOTCOLD | ASR (%) | Ours | ASR (%) |
    |----------|-------|---------|-------|---------|---------|------|---------|
    | SiamRPN  | Day   | Success | 42    | 32      | 23.81   | 12   | 71.43   |
    | SiamRPN  | Day   | Failure | 8     | 18      |         | 38   |         |
    | SiamRPN  | Night | Success | 37    | 20      | 45.94   | 7    | 81.08   |
    | SiamRPN  | Night | Failure | 13    | 30      |         | 43   |         |
    | SiamMask | Day   | Success | 47    | 39      | 17.02   | 16   | 65.95   |
    | SiamMask | Day   | Failure | 3     | 11      |         | 34   |         |
    | SiamMask | Night | Success | 41    | 21      | 48.78   | 14   | 65.85   |
    | SiamMask | Night | Failure | 9     | 29      |         | 36   |         |

    Table 3  Ablation study of the loss functions

    | Tracker  | Loss function                                      | Clean-success videos | Adv-success videos | ASR (%) |
    |----------|----------------------------------------------------|----------------------|--------------------|---------|
    | SiamRPN  | Mis-regression loss                                | 79                   | 25                 | 68.35   |
    | SiamRPN  | Maximum feature difference loss                    | 79                   | 33                 | 58.23   |
    | SiamRPN  | Mis-regression + maximum feature difference loss   | 79                   | 19                 | 75.95   |
    | SiamMask | Mask loss                                          | 88                   | 37                 | 57.95   |
    | SiamMask | Maximum feature difference loss                    | 88                   | 52                 | 40.91   |
    | SiamMask | Mask + maximum feature difference loss             | 88                   | 30                 | 65.91   |
  • [1] LU Huchuan, LI Peixia, and WANG Dong. Visual object tracking: A survey[J]. Pattern Recognition and Artificial Intelligence, 2018, 31(1): 61–67. doi: 10.16451/j.cnki.issn1003-6059.201801006.
    [2] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]. The 2nd International Conference on Learning Representations, Banff, Canada, 2014.
    [3] PAN Wenwen, WANG Xinyu, SONG Mingli, et al. Survey on generating adversarial examples[J]. Journal of Software, 2020, 31(1): 67–81. doi: 10.13328/j.cnki.jos.005884.
    [4] JIA Shuai, MA Chao, SONG Yibing, et al. Robust tracking against adversarial attacks[C]. The 16th European Conference on Computer Vision, Glasgow, UK, 2020: 69–84. doi: 10.1007/978-3-030-58529-7_5.
    [5] CHEN Fei, WANG Xiaodong, ZHAO Yunxiang, et al. Visual object tracking: A survey[J]. Computer Vision and Image Understanding, 2022, 222: 103508. doi: 10.1016/j.cviu.2022.103508.
    [6] CHEN Xuesong, FU Canmiao, ZHENG Feng, et al. A unified multi-scenario attacking network for visual object tracking[C]. The 35th AAAI Conference on Artificial Intelligence, Vancouver, Canada, 2021: 1097–1104. doi: 10.1609/aaai.v35i2.16195.
    [7] YAN Bin, PENG Houwen, FU Jianlong, et al. Learning spatio-temporal transformer for visual tracking[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 10428–10437. doi: 10.1109/ICCV48922.2021.01028.
    [8] TANG Chuanming, WANG Xiao, BAI Yuanchao, et al. Learning spatial-frequency transformer for visual object tracking[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(9): 5102–5116. doi: 10.1109/TCSVT.2023.3249468.
    [9] LI Bo, YAN Junjie, WU Wei, et al. High performance visual tracking with Siamese region proposal network[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 8971–8980. doi: 10.1109/CVPR.2018.00935.
    [10] HU Weiming, WANG Qiang, ZHANG Li, et al. SiamMask: A framework for fast online object tracking and segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(3): 3072–3089. doi: 10.1109/TPAMI.2022.3172932.
    [11] LIN Liting, FAN Heng, ZHANG Zhipeng, et al. SwinTrack: A simple and strong baseline for transformer tracking[C]. The 36th International Conference on Neural Information Processing Systems, New Orleans, USA, 2022: 1218. doi: 10.5555/3600270.3601488.
    [12] LIN Xixun, ZHOU Chuan, WU Jia, et al. Exploratory adversarial attacks on graph neural networks for semi-supervised node classification[J]. Pattern Recognition, 2023, 133: 109042. doi: 10.1016/j.patcog.2022.109042.
    [13] HUANG Hao, CHEN Ziyan, CHEN Huanran, et al. T-SEA: Transfer-based self-ensemble attack on object detection[C]. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 20514–20523. doi: 10.1109/CVPR52729.2023.01965.
    [14] JIA Shuai, SONG Yibing, MA Chao, et al. IoU attack: Towards temporally coherent black-box adversarial attack for visual object tracking[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 6705–6714. doi: 10.1109/CVPR46437.2021.00664.
    [15] DING Li, WANG Yongwei, YUAN Kaiwen, et al. Towards universal physical attacks on single object tracking[C]. The 35th AAAI Conference on Artificial Intelligence, Vancouver, Canada, 2021: 1236–1245. doi: 10.1609/aaai.v35i2.16211.
    [16] HUANG Xingsen, MIAO Deshui, WANG Hongpeng, et al. Context-guided black-box attack for visual tracking[J]. IEEE Transactions on Multimedia, 2024, 26: 8824–8835. doi: 10.1109/TMM.2024.3382473.
    [17] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139–144. doi: 10.1145/3422622.
    [18] CHEN Zhaoyu, LI Bo, WU Shuang, et al. Shape matters: Deformable patch attack[C]. The 17th European Conference on Computer Vision, Tel Aviv, Israel, 2022: 529–548. doi: 10.1007/978-3-031-19772-7_31.
    [19] LI Chenglong, LIANG Xinyan, LU Yijuan, et al. RGB-T object tracking: Benchmark and baseline[J]. Pattern Recognition, 2019, 96: 106977. doi: 10.1016/j.patcog.2019.106977.
    [20] WIYATNO R and XU Anqi. Physical adversarial textures that fool visual object tracking[C]. 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), 2019: 4821–4830. doi: 10.1109/ICCV.2019.00492.
    [21] WEI Hui, WANG Zhixiang, JIA Xuemei, et al. HOTCOLD block: Fooling thermal infrared detectors with a novel wearable design[C]. The 37th AAAI Conference on Artificial Intelligence, Washington, USA, 2023: 15233–15241. doi: 10.1609/aaai.v37i12.26777.
Publication history
  • Received: 2024-10-21
  • Revised: 2025-02-25
  • Published online: 2025-03-13
