Volume 46 Issue 4
Apr.  2024
HOU Zhiqiang, WANG Zhuo, MA Sugang, ZHAO Jiaxin, YU Wangsheng, FAN Jiulun. Target Drift Discriminative Network Based on Dual-template Siamese Structure in Long-term Tracking[J]. Journal of Electronics & Information Technology, 2024, 46(4): 1458-1467. doi: 10.11999/JEIT230496

Target Drift Discriminative Network Based on Dual-template Siamese Structure in Long-term Tracking

doi: 10.11999/JEIT230496
Funds:  The National Natural Science Foundation of China (62072370), The Natural Science Foundation of Shaanxi Province (2023-JC-YB-598)
  • Received Date: 2023-05-26
  • Rev Recd Date: 2023-12-27
  • Available Online: 2024-01-02
  • Publish Date: 2024-04-24
  • In long-term visual tracking, most target-loss discrimination methods rely on manually set thresholds, and selecting an optimal threshold is difficult, which weakens the generalization ability of long-term tracking algorithms. A target drift Discriminative Network (DNet) that requires no manually selected threshold is proposed. The network adopts a Siamese structure and uses both a static template and a dynamic template to judge whether the tracking result has drifted from the target; the dynamic template, in particular, improves the algorithm's ability to adapt to changes in target appearance. To train the proposed network, a sample-rich dataset is constructed. To verify its effectiveness, a complete long-term tracking algorithm is built by combining the network with a base tracker and a re-detection module, and it is evaluated on the classical visual tracking datasets UAV20L, LaSOT, VOT2018-LT and VOT2020-LT. Experimental results show that, compared with the base tracker, tracking precision and success rate on the UAV20L dataset improve by 10.4% and 7.5%, respectively.
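The idea of replacing a hand-tuned score threshold with a learned two-class decision over dual-template similarities can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's DNet: the feature vectors, cosine similarity, and the small linear classifier weights are all assumptions introduced for the example; the actual network operates on deep Siamese features and learns its parameters from the training dataset described above.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (toy stand-in
    for a Siamese cross-correlation response)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def drift_decision(candidate, static_tpl, dynamic_tpl):
    """Judge whether the tracker output has drifted off the target.

    Instead of comparing a single response score against a manually
    chosen threshold, combine the similarities to BOTH templates and
    take an argmax over two class scores (on-target vs. drifted).
    The weights below are illustrative, not learned values.
    """
    s_static = cosine(candidate, static_tpl)
    s_dynamic = cosine(candidate, dynamic_tpl)
    feats = [s_static, s_dynamic, 1.0]  # 1.0 is a bias term
    # Hypothetical classifier rows: (on-target score, drifted score).
    W = [[2.0, 2.0, -1.0],
         [-2.0, -2.0, 3.0]]
    on_target, drifted = (sum(w * f for w, f in zip(row, feats)) for row in W)
    return drifted > on_target  # argmax decision, no hand-set threshold

# A candidate matching both templates is judged on-target; an
# unrelated candidate is judged drifted.
print(drift_decision([1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
print(drift_decision([0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
```

Keeping a fixed static template anchors the decision to the original appearance, while the dynamic template (periodically refreshed from confident frames) lets the discriminator tolerate gradual appearance change, which is the role the abstract attributes to the dual-template design.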
  • [1]
    周治国, 荆朝, 王秋伶, 等. 基于时空信息融合的无人艇水面目标检测跟踪[J]. 电子与信息学报, 2021, 43(6): 1698–1705. doi: 10.11999/JEIT200223.

    ZHOU Zhiguo, JING Zhao, WANG Qiuling, et al. Object detection and tracking of unmanned surface vehicles based on spatial-temporal information fusion[J]. Journal of Electronics & Information Technology, 2021, 43(6): 1698–1705. doi: 10.11999/JEIT200223.
    [2]
    MUELLER M, SMITH N, and GHANEM B. A benchmark and simulator for UAV tracking[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 445–461. Doi: 10.1007/978-3-319-46448-0_27.
    [3]
    FAN Heng, LIN Liting, YANG Fan, et al. LaSOT: A high-quality benchmark for large-scale single object tracking[C]. The 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 5369–5378. doi: 10.1109/CVPR.2019.00552.
    [4]
    LUKEŽIČ A, ZAJC L Č, VOJÍŘ T, et al. Now you see me: Evaluating performance in long-term visual tracking[EB/OL]. https://arxiv.org/abs/1804.07056, 2018.
    [5]
    KRISTAN M, LEONARDIS A, MATAS J, et al. The eighth visual object tracking VOT2020 challenge results[C]. European Conference on Computer Vision, Glasgow, UK, 2020: 547–601. doi: 10.1007/978-3-030-68238-5_39.
    [6]
    BOLME D S, BEVERIDGE J R, DRAPER B A, et al. Visual object tracking using adaptive correlation filters[C]. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 2544–2550. doi: 10.1109/CVPR.2010.5539960.
    [7]
    WANG Mengmeng, LIU Yong, and HUANG Zeyi. Large margin object tracking with circulant feature maps[C]. The 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 4800–4808. doi: 10.1109/CVPR.2017.510.
    [8]
    LUKEŽIČ A, ZAJC L Č, VOJÍŘ T, et al. Fucolot–a fully-correlational long-term tracker[C]. 14th Asian Conference on Computer Vision, Perth, Australia, 2018: 595–611. doi: 10.1007/978-3-030-20890-5_38.
    [9]
    YAN Bin, ZHAO Haojie, WANG Dong, et al. 'Skimming-Perusal' tracking: A framework for real-time and robust long-term tracking[C]. The 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), 2019: 2385–2393. doi: 10.1109/ICCV.2019.00247.
    [10]
    KRISTAN M, LEONARDIS A, MATAS J, et al. The sixth visual object tracking vot2018 challenge results[C]. The European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 2018: 3–53. doi: 10.1007/978-3-030-11009-3_1.
    [11]
    ZHANG Yunhua, WANG Dong, WANG Lijun, et al. Learning regression and verification networks for long-term visual tracking[EB/OL].https://arxiv.org/abs/1809.04320, 2018.
    [12]
    XUAN Shiyu, LI Shengyang, ZHAO Zifei, et al. Siamese networks with distractor-reduction method for long-term visual object tracking[J]. Pattern Recognition, 2021, 112: 107698. doi: 10.1016/j.patcog.2020.107698.
    [13]
    DAI Kenan, ZHANG Yunhua, WANG Dong, et al. High-performance long-term tracking with meta-updater[C]. The 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 6297–6306. doi: 10.1109/CVPR42600.2020.00633.
    [14]
    WU Yi, LIM J, and YANG M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834–1848. doi: 10.1109/TPAMI.2014.2388226.
    [15]
    BHAT G, DANELLJAN M, VAN GOOL L, et al. Learning discriminative model prediction for tracking[C]. The 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), 2019: 6181–6190. doi: 10.1109/ICCV.2019.00628.
    [16]
    HUANG Lianghua, ZHAO Xin, and HUANG Kaiqi. Globaltrack: A simple and strong baseline for long-term tracking[C]. The 34th AAAI Conference on Artificial Intelligence, New York, USA, 2020: 11037–11044. doi: 10.1609/aaai.v34i07.6758.
    [17]
    CHENG Siyuan, ZHONG Bineng, LI Guorong, et al. Learning to filter: Siamese relation network for robust tracking[C]. The 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 4419–4429. doi: 10.1109/CVPR46437.2021.00440.
    [18]
    CAO Ziang, FU Changhong, YE Junjie, et al. HiFT: Hierarchical feature transformer for aerial tracking[C]. The 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 15457–15466. doi: 10.1109/ICCV48922.2021.01517.
    [19]
    CAO Ziang, HUANG Ziyuan, PAN Liang, et al. TCTrack: Temporal contexts for aerial tracking[C]. The 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 14778–14788. doi: 10.1109/CVPR52688.2022.01438.
    [20]
    FU Zhihong, LIU Qingjie, FU Zehua, et al. STMTrack: Template-free visual tracking with space-time memory networks[C]. The 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 13769–13778. doi: 10.1109/CVPR46437.2021.01356.
    [21]
    ZHAO Haojie, YAN Bin, WANG Dong, et al. Effective local and global search for fast long-term tracking[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2023, 45(1): 460–474. doi: 10.1109/TPAMI.2022.3153645.
    [22]
    https://www.votchallenge.net/vot2019/results.html.
  • 加载中

    Figures(10)  / Tables(7)
