Online Visual Tracking via Adaptive Deep Sparse Neural Network

HOU Zhiqiang, WANG Xin, YU Wangsheng, DAI Bo, JIN Zefenfen

Citation: HOU Zhiqiang, WANG Xin, YU Wangsheng, DAI Bo, JIN Zefenfen. Online Visual Tracking via Adaptive Deep Sparse Neural Network[J]. Journal of Electronics & Information Technology, 2017, 39(5): 1079-1087. doi: 10.11999/JEIT160762

doi: 10.11999/JEIT160762

Funds: 

The National Natural Science Foundation of China (61473309); the Natural Science Basic Research Plan in Shaanxi Province (2015JM6269, 2016JM6050)

  • Abstract: In visual tracking, efficient and robust feature representation is the key to overcoming tracking drift in complex environments. To address the complexity and time cost of pre-training deep networks and the tendency of single-network trackers to drift, this paper proposes an online tracking algorithm based on an adaptive deep sparse network within a particle filter framework. Using the ReLU activation function, the algorithm builds a deep sparse network structure that adapts its selectivity to different types of targets, and a robust tracking network is obtained through online training with only a limited number of labeled samples. Experimental results show that, compared with current mainstream tracking algorithms, the proposed algorithm achieves the best average tracking success rate and precision, improving on the deep-learning-based DLT algorithm by 20.64% and 17.72%, respectively. In complex scenarios such as illumination change and similar backgrounds, the algorithm exhibits good robustness and effectively alleviates tracking drift.
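
To make the pipeline described in the abstract concrete, below is a minimal sketch (not the authors' code) of the general idea: a small ReLU-activated network scores candidate patches drawn by a particle filter, and is fine-tuned online with a handful of labeled samples around the current estimate. The network sizes, particle count, patch feature, and sampling parameters are illustrative assumptions rather than values from the paper; the paper's adaptive structure selection and sparsity handling are omitted.

```python
# Minimal sketch of a particle-filter tracker with an online-trained ReLU network.
# All sizes and hyperparameters are illustrative assumptions, not the paper's values.
import numpy as np
import torch
import torch.nn as nn


class SparseReLUNet(nn.Module):
    """Small fully connected network; ReLU activations yield sparse hidden codes."""

    def __init__(self, in_dim=1024, hidden=(512, 256)):
        super().__init__()
        layers, d = [], in_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.ReLU()]
            d = h
        layers.append(nn.Linear(d, 1))  # confidence that a patch is the target
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)


def extract_feature(frame, box, size=32):
    """Crude patch feature: crop the box from a grayscale frame and subsample it
    to size x size (a stand-in for a learned representation)."""
    x, y, w, h = (int(round(v)) for v in box)
    x, y = max(x, 0), max(y, 0)
    patch = frame[y:y + h, x:x + w]
    if patch.size == 0:
        return torch.zeros(size * size)
    rows = np.linspace(0, patch.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, patch.shape[1] - 1, size).astype(int)
    patch = patch[np.ix_(rows, cols)].astype(np.float32) / 255.0
    return torch.from_numpy(patch.reshape(-1))


def track_step(net, frame, prev_box, n_particles=200, motion_std=8.0):
    """Particle-filter step: sample candidate boxes around the previous state,
    score them with the network, and return the highest-confidence candidate."""
    x, y, w, h = prev_box
    dx = np.random.normal(0.0, motion_std, n_particles)
    dy = np.random.normal(0.0, motion_std, n_particles)
    candidates = [(x + a, y + b, w, h) for a, b in zip(dx, dy)]
    feats = torch.stack([extract_feature(frame, c) for c in candidates])
    with torch.no_grad():
        scores = net(feats)
    return candidates[int(scores.argmax())]


def online_update(net, optimizer, frame, box, n_neg=16, steps=10):
    """Online fine-tuning with a few labeled samples: the current estimate is the
    positive sample, clearly shifted boxes around it serve as negatives."""
    x, y, w, h = box

    def shifted():
        dx = np.random.choice([-1, 1]) * np.random.uniform(0.5 * w, 2.0 * w)
        dy = np.random.choice([-1, 1]) * np.random.uniform(0.5 * h, 2.0 * h)
        return (x + dx, y + dy, w, h)

    pos = extract_feature(frame, box).unsqueeze(0)
    negs = torch.stack([extract_feature(frame, shifted()) for _ in range(n_neg)])
    feats = torch.cat([pos, negs])
    labels = torch.cat([torch.ones(1), torch.zeros(n_neg)])
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(net(feats), labels).backward()
        optimizer.step()


# Usage sketch: initialize from the first frame's ground-truth box, then
# alternate tracking and online updates on subsequent frames.
# net = SparseReLUNet(); opt = torch.optim.Adam(net.parameters(), lr=1e-3)
# online_update(net, opt, first_frame, init_box, steps=50)
# for frame in frames:
#     box = track_step(net, frame, box)
#     online_update(net, opt, frame, box)
```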
Publication History
  • Received: 2016-07-20
  • Revised: 2016-12-16
  • Published: 2017-05-19
