Volume 39, Issue 5, May 2017
Citation: HOU Zhiqiang, WANG Xin, YU Wangsheng, DAI Bo, JIN Zefenfen. Online Visual Tracking via Adaptive Deep Sparse Neural Network[J]. Journal of Electronics & Information Technology, 2017, 39(5): 1079-1087. doi: 10.11999/JEIT160762

Online Visual Tracking via Adaptive Deep Sparse Neural Network

doi: 10.11999/JEIT160762
Funds:

The National Natural Science Foundation of China (61473309); The Natural Science Basic Research Plan in Shaanxi Province (2015JM6269, 2016JM6050)

  • Received Date: 2016-07-20
  • Revised Date: 2016-12-16
  • Publish Date: 2017-05-19
  • Abstract: In visual tracking, efficient and robust feature representation is the key to avoiding tracking drift in complex environments. To address both the complex, time-consuming pre-training of deep neural networks and the drift that arises when tracking with a single network, an online tracking method based on an adaptive deep sparse network is proposed within the particle filter tracking framework. A deep sparse neural network, whose architecture can be selected adaptively according to the type of target, is built with the Rectified Linear Unit (ReLU) activation function, so that a robust deep tracking network is obtained through online training on only a limited number of labeled samples. Experimental results show that, compared with state-of-the-art trackers, the proposed algorithm achieves the highest average success rate and precision, improving on the deep-learning-based Deep Learning Tracker (DLT) by 20.64% and 17.72%, respectively. The proposed method effectively alleviates tracking drift and shows better robustness, especially in complex environments such as illumination change and background clutter.
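To make the pipeline described in the abstract concrete, the sketch below shows one plausible arrangement of its three ingredients: a small fully connected ReLU network that scores candidate patches, a particle filter that proposes those candidates, and an online fine-tuning step driven by a handful of labeled patches. It is written in Python with PyTorch and NumPy as an illustration only; the function names (build_scorer, crop, track_frame, finetune), layer sizes, particle counts, noise levels, and training steps are assumptions for the sketch, not the settings used in the paper.

import numpy as np
import torch
import torch.nn as nn

PATCH = 32  # candidate patches are resized to PATCH x PATCH pixels (assumed)

def build_scorer(depth=2, width=256):
    """Fully connected ReLU network that outputs a confidence logit.
    depth/width stand in for the paper's adaptive architecture selection."""
    layers, in_dim = [], PATCH * PATCH
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, 1))
    return nn.Sequential(*layers)

def crop(gray, cx, cy, s):
    """Crop a square of side s centred at (cx, cy) from a grayscale image and
    resize it to PATCH x PATCH by nearest-neighbour sampling (dependency-free)."""
    h, w = gray.shape
    xs = np.clip(np.linspace(cx - s / 2, cx + s / 2, PATCH).astype(int), 0, w - 1)
    ys = np.clip(np.linspace(cy - s / 2, cy + s / 2, PATCH).astype(int), 0, h - 1)
    return gray[np.ix_(ys, xs)].astype(np.float32) / 255.0

def track_frame(net, gray, state, n_particles=200, sigma=(4.0, 4.0, 2.0)):
    """One particle-filter step: diffuse the previous state (cx, cy, s) with
    Gaussian noise, score every candidate patch, and return the best particle."""
    particles = np.array(state) + np.random.randn(n_particles, 3) * np.array(sigma)
    particles[:, 2] = np.maximum(particles[:, 2], 8.0)  # keep the scale positive
    patches = np.stack([crop(gray, *p) for p in particles]).reshape(n_particles, -1)
    with torch.no_grad():
        scores = net(torch.from_numpy(patches)).squeeze(1)
    best = int(torch.argmax(scores))
    return tuple(particles[best]), float(scores[best])

def finetune(net, pos, neg, steps=20, lr=1e-3):
    """Online update from a limited set of labeled patches (lists of crops)."""
    x = torch.from_numpy(np.stack(pos + neg).reshape(len(pos) + len(neg), -1))
    y = torch.cat([torch.ones(len(pos)), torch.zeros(len(neg))]).unsqueeze(1)
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

In such a loop, each new frame would be handed to track_frame, positive patches would be harvested near the returned state and negatives farther away, and finetune would be called periodically so the scorer keeps up with appearance changes; the paper's adaptive structure selection and sparsity constraints are not reproduced here.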
  • SMEULDERS A W M, CHU D M, CUCCHIARA R, et al. Visual tracking: An experimental survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(7): 1442-1468. doi: 10.1109/TPAMI.2013.230.
    WANG Naiyan, SHI Jianping, YEUNG Dityan, et al. Understanding and diagnosing visual tracking systems[C]. IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 3101-3109. doi: 10.1109/ICCV.2015.355.
    ROSS D A, LIM J, LIN R S, et al. Incremental learning for robust visual tracking[J]. International Journal of Computer Vision, 2008, 77(1-3): 125-141. doi: 10.1007/s11263-007-0075-7.
    BABENKO B, YANG M, and BELONGIE S. Robust object tracking with online multiple instance learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(8): 1619-1632. doi: 10.1109/TPAMI.2010.226.
    KALAL Z, MIKOLAJCZYK K, and MATAS J. Tracking-learning-detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(7): 1409-1422. doi: 10.1109/TPAMI.2011.239.
    ZHANG Kaihua, ZHANG Lei, and YANG Minghsuan. Real-time compressive tracking[C]. European Conference on Computer Vision, Florence, Italy, 2012: 864-877.
    WU Yi, LIM Jongwoo, and YANG Minghsuan. Online object tracking: A benchmark[C]. IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 2411-2418. doi: 10.1109/CVPR.2013.312.
    MA Chao, HUANG Jiabin, YANG Xiaokang, et al. Hierarchical convolutional features for visual tracking[C]. IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 3074-3082.
    NAJAFABADI M M, VILLANUSTRE F, KHOSHGOFTAAR T M, et al. Deep learning applications and challenges in big data analytics[J]. Journal of Big Data, 2015, 2(1): 1-21. doi: 10.1186/s40537-014-0007-7.
    LI Huanyu, BI Duyan, YANG Yuan, et al. Research on visual tracking algorithm based on deep feature expression and learning[J]. Journal of Electronics & Information Technology, 2015, 37(9): 2033-2039. doi: 10.11999/JEIT150031.
    JIN J, DUNDAR A, BATES J, et al. Tracking with deep neural networks[C]. Annual Conference on Information Sciences and Systems, Baltimore, MD, USA, 2013: 213-217.
    WANG Naiyan and YEUNG Dityan. Learning a deep compact image representation for visual tracking[C]. Advances in Neural Information Processing Systems, South Lake Tahoe, Nevada, USA, 2013: 809-817.
    RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252. doi: 10.1007/s11263-015-0816-y.
    HOU Zhiqiang, DAI Bo, HU Dan, et al. Robust visual tracking via perceptive deep neural network[J]. Journal of Electronics & Information Technology, 2016, 38(7): 1616-1623. doi: 10.11999/JEIT151449.
    GLOROT X and BENGIO Y. Understanding the difficulty of training deep feedforward neural networks[C]. International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 2010: 249-256.
    GLOROT X, BORDES A, and BENGIO Y. Deep sparse rectifier neural networks[C]. International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 2011: 315-323.
    TOTH L. Phone recognition with deep sparse rectifier neural networks[C]. IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 2013: 6985-6989. doi: 10.1109/ICASSP.2013.6639016.
    VINCENT P, LAROCHELLE H, LAJOIE I, et al. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010, 11(6): 3371-3408.
    HE K, ZHANG X, REN S, et al. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification[C]. IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1026-1034.
    WU Y W, ZHAO H H, and ZHANG L Q. Image denoising with rectified linear units[J]. Lecture Notes in Computer Science, 2014, 8836: 142-149. doi: 10.1007/978-3-319-12643-2_18.
    LI X, HU W, SHEN C, et al. A survey of appearance models in visual object tracking[J]. ACM Transactions on Intelligent Systems and Technology, 2013, 4(4): 1-48. doi: 10.1145/2508037.2508039.
    WANG F S. Particle filters for visual tracking[C]. International Conference on Advanced Research on Computer Science and Information Engineering, Zhengzhou, China, 2011: 107-112.
    GRABNER H, GRABNER M, and BISCHOF H. Real-time tracking via on-line boosting[C]. British Machine Vision Conference, Edinburgh, Scotland, 2006: 47-56.
    ADAM A, RIVLIN E, and SHIMSHONI I. Robust fragments-based tracking using the integral histogram[C]. IEEE Conference on Computer Vision and Pattern Recognition, New York, USA, 2006: 798-805.
