Volume 43 Issue 5
May 2021
Citation: Rui SUN, Linfeng FANG, Qili LIANG, Xudong ZHANG. Siamese Network Combined Learning Saliency and Online Learning Interference for Aerial Object Tracking Algorithm[J]. Journal of Electronics & Information Technology, 2021, 43(5): 1414-1423. doi: 10.11999/JEIT200140

Siamese Network Combined Learning Saliency and Online Learning Interference for Aerial Object Tracking Algorithm

doi: 10.11999/JEIT200140
Funds:  The National Natural Science Foundation of China (61471154, 61876057), The Key Research Plan of Anhui Province - Strengthening Police with Science and Technology (202004d07020012)
  • Received Date: 2020-03-03
  • Rev Recd Date: 2020-10-21
  • Available Online: 2020-11-19
  • Publish Date: 2021-05-18
  • In response to the special challenges of aerial imagery that general-purpose trackers handle poorly, such as low resolution, wide fields of view, and frequent viewpoint changes, an Unmanned Aerial Vehicle (UAV) tracking algorithm is proposed that combines target saliency learning with an online-learned interference factor. Because the deep features of a generically pre-trained model cannot effectively discriminate aerial targets, the tracker weights the features of each convolutional filter according to the importance of their back-propagated gradients, thereby highlighting the features of the aerial target. In addition, the algorithm makes full use of the rich contextual information in continuous video and learns an interference factor for the dynamic target online, guiding the target appearance model to remain as similar as possible to the current frame and thus achieving reliable adaptive matching during tracking. Experiments show that on the challenging UAV123 dataset the success rate and precision of the proposed algorithm are 5.3% and 3.6% higher, respectively, than those of the baseline Siamese network algorithm, while the average speed reaches 28.7 frames per second, essentially meeting the accuracy and real-time requirements of aerial target tracking.
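The abstract describes two mechanisms: selecting salient convolutional features by the importance of back-propagated gradients, and learning an interference factor online so that the appearance model stays close to the current frame. The sketch below illustrates both ideas in PyTorch; it is not the authors' implementation, and the function names, tensor shapes, and hyper-parameters (learning rate, number of update steps) are assumptions made only for illustration.

```python
# Minimal PyTorch sketch of the two ideas summarized in the abstract.
# Everything here (names, shapes, hyper-parameters) is illustrative,
# not the authors' released code.
import torch
import torch.nn.functional as F


def salient_feature_weights(feature_map, score):
    """Re-weight each convolutional channel by the importance of its
    back-propagated gradient (Grad-CAM style, cf. [15]), emphasising
    channels that contribute most to the tracking score.

    feature_map: (1, C, H, W) activations of one backbone layer that are
                 part of the graph producing `score`.
    score:       scalar tracking score, e.g. the peak of the response map.
    """
    # d(score)/d(feature_map); retain the graph so tracking can continue.
    grads, = torch.autograd.grad(score, feature_map, retain_graph=True)
    # Global-average-pool the gradients: one importance weight per channel.
    weights = grads.mean(dim=(2, 3), keepdim=True)        # (1, C, 1, 1)
    # Keep positively contributing channels and re-weight the features.
    return F.relu(weights) * feature_map                  # (1, C, H, W)


def update_interference(template_feat, current_feat, interference,
                        lr=0.1, steps=3):
    """Online update of an additive interference factor so that the adjusted
    appearance model (template + interference) stays as close as possible to
    the current-frame features, as the abstract describes."""
    template_feat = template_feat.detach()
    current_feat = current_feat.detach()
    interference = interference.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([interference], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(template_feat + interference, current_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return interference.detach()
```

In such a pipeline the re-weighted template features would feed the Siamese matching branch, and the interference factor would be refreshed frame by frame; the abstract does not specify how the two terms are fused, so the additive form above is only one plausible choice.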
  • [1]
    TRILAKSONO B R, TRIADHITAMA R, ADIPRAWITA W, et al. Hardware-in-the-loop simulation for visual target tracking of octorotor UAV[J]. Aircraft Engineering and Aerospace Technology, 2011, 83(6): 407–419. doi: 10.1108/00022661111173289
    [2]
    黄静琪, 胡琛, 孙山鹏, 等. 一种基于异步传感器网络的空间目标分布式跟踪方法[J]. 电子与信息学报, 2020, 42(5): 1132–1139. doi: 10.11999/JEIT190460

    HUANG Jingqi, HU Chen, SUN Shanpeng, et al. A distributed space target tracking algorithm based on asynchronous multi-sensor networks[J]. Journal of Electronics &Information Technology, 2020, 42(5): 1132–1139. doi: 10.11999/JEIT190460
    [3]
    KRISTAN M, MATAS J, LEONARDIS A, et al. A novel performance evaluation methodology for single-target trackers[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(11): 2137–2155. doi: 10.1109/TPAMI.2016.2516982
    [4]
    HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583–596. doi: 10.1109/TPAMI.2014.2345390
    [5]
    SUN Chong, WANG Dong, LU Huchuan, et al. Correlation tracking via joint discrimination and reliability learning[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 489–497. doi: 10.1109/CVPR.2018.00058.
    [6]
    SUN Chong, WANG Dong, LU Huchuan, et al. Learning spatial-aware regressions for visual tracking[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 8962–8970. doi: 10.1109/CVPR.2018.00934.
    [7]
    QI Yuankai, ZHANG Shengping, QIN Lei, et al. Hedging deep features for visual tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(5): 1116–1130. doi: 10.1109/TPAMI.2018.2828817
    [8]
    SONG Yibing, MA Chao, WU Xiaohe, et al. VITAL: Visual tracking via adversarial learning[C]. The 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA, 2018: 8990–8999. doi: 10.1109/CVPR.2018.00937.
    [9]
    BERTINETTO L, VALMADRE J, HENRIQUES J F, et al. Fully-convolutional Siamese networks for object tracking[C]. The 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 2016: 850–865. doi: 10.1007/978-3-319-48881-3_56.
    [10]
    KRISTAN M, LEONARDIS A, MATAS J, et al. The sixth visual object tracking vot2018 challenge results[C]. Computer Vision ECCV 2018 Workshops, Munich, Germany, 2018: 3–53. doi: 10.1007/978-3-030-11009-3_1.
    [11]
    CHEN Kai and TAO Wenbing. Once for all: A two-flow convolutional neural network for visual tracking[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(12): 3377–3386. doi: 10.1109/TCSVT.2017.2757061
    [12]
    HELD D, THRUN S, SAVARESE S, et al. Learning to track at 100 fps with deep regression networks [C] The 14th European Conference on Computer Vision (ECCV) , Amsterdam, Netherlands, 2016, 9905: 749–765. doi: 10.1007/978-3-319-46448-0_45.
    [13]
    侯志强, 陈立琳, 余旺盛, 等. 基于双模板Siamese网络的鲁棒视觉跟踪算法[J]. 电子与信息学报, 2019, 41(9): 2247–2255. doi: 10.11999/JEIT181018

    HOU Zhiqiang, CHEN Lilin, YU Wangsheng, et al. Robust visual tracking algorithm based on Siamese network with dual templates[J]. Journal of Electronics &Information Technology, 2019, 41(9): 2247–2255. doi: 10.11999/JEIT181018
    [14]
    HUANG Chen, LUCEY S, RAMANAN D, et al. Learning policies for adaptive tracking with deep feature cascades[C]. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 105–114. doi: 10.1109/ICCV.2017.21.
    [15]
    SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization[C]. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 618–626. doi: 10.1109/ICCV.2017.74.
    [16]
    MUELLER M, SMITH N, and GHANEM B. A benchmark and simulator for UAV tracking[C]. The 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 2016: 445–461. doi: 10.1007/978-3-319-46448-0_27.
    [17]
    HENRIQUES J F, CASEIRO R, MARTINS P, et al. Exploiting the circulant structure of tracking-by-detection with kernels[C]. The 12th European Conference on Computer Vision (ECCV), Florence, Italy, 2012: 702–715. doi: 10.1007/978-3-642-33765-9_50.
    [18]
    DANELLJAN M, HÄGER G, SHAHBAZ K, et al. Accurate scale estimation for robust visual tracking[C]. The British Machine Vision Conference (BMVC), Nottingham, UK, 2014: 65.1–65.11. doi: 10.5244/C.28.65.
    [19]
    HARE S, GOLODETZ S, and SAFFARI A. Struck: Structured output tracking with kernels[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 2096–2109. doi: 10.1109/TPAMI.2015.2509974
    [20]
    ZHANG Jianming, MA Shugao, and SCLAROFF S. MEEM: Robust tracking via multiple experts using entropy minimization[C]. The 13th European Conference on Computer Vision (ECCV), Zurich, Switzerland, 2014: 188–203. doi: 10.1007/978-3-319-10599-4_13.
    [21]
    HONG Zhibin, CHEN Zhe, WANG Chaohui, et al. Multi-store tracker (MUSTer): A cognitive psychology inspired approach to object tracking[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, USA, 2015: 749–758. doi: 10.1109/CVPR.2015.7298675.
    [22]
    KRISTAN M, PFLUGFELDER R, LEONARDIS A, et al. The visual object tracking VOT2014 challenge results[C]. The 13th European Conference on Computer Vision (ECCV), Zurich, Switzerland, 2014: 191–217. doi: 10.1007/978-3-319-16181-5_14.
    [23]
    JIA Xu, LU Huchuan, and YANG M H. Visual tracking via adaptive structural local sparse appearance model[C]. 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, USA, 2012: 1822–1829. doi: 10.1109/CVPR.2012.6247880.
    [24]
    VALMADRE J, BERTINETTO L, HENRIQUES J, et al. End-to-end representation learning for correlation filter based tracking[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 2017: 5000–5008. doi: 10.1109/CVPR.2017.531.
    [25]
    DANELLJAN M, HÄGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking[C]. 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 4310–4318. doi: 10.1109/ICCV.2015.490.
    [26]
    LI Bo, YAN Junjie, WU Wei, et al. High performance visual tracking with Siamese region proposal network[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 8971–8980. doi: 10.1109/CVPR.2018.00935.