A Deep Convolutional Network for Saliency Object Detection with Balanced Accuracy and High Efficiency

Wenming ZHANG, Zhenfei YAO, Yakun GAO, Haibin LI

Citation: Wenming ZHANG, Zhenfei YAO, Yakun GAO, Haibin LI. A Deep Convolutional Network for Saliency Object Detection with Balanced Accuracy and High Efficiency[J]. Journal of Electronics & Information Technology, 2020, 42(5): 1201-1208. doi: 10.11999/JEIT190229


doi: 10.11999/JEIT190229
Funds: The Natural Science Foundation of Hebei Province (F2015203212, F2019203195)
Article information
    Author biographies:

    Wenming ZHANG: Male, born in 1979, associate professor. Research interests: industrial process control and machine vision

    Zhenfei YAO: Male, born in 1992, master's student. Research interests: machine vision and image processing

    Yakun GAO: Male, born in 1988, Ph.D. student. Research interests: machine vision and image processing

    Haibin LI: Male, born in 1978, professor. Research interests: industrial process control, machine vision, and artificial intelligence

    Corresponding author:

    Yakun GAO, gaoyakun6@163.com

  • CLC number: TN911.73; TP391.41

  • Abstract:

    Current salient object detection algorithms cannot achieve a good balance between accuracy and efficiency. To address this problem, this paper proposes a new deep convolutional network model for salient object detection that balances accuracy and efficiency. First, traditional convolutions are replaced with decomposed convolutions, which greatly reduces the amount of computation and improves detection efficiency. Second, to make better use of features at different scales, a sparse cross-layer connection structure and a multi-scale fusion structure are adopted to improve detection accuracy. Extensive evaluations show that, compared with existing methods, the proposed algorithm achieves leading performance in both efficiency and accuracy.
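
    To make the decomposition concrete, the sketch below factors a k×k convolution into a k×1 followed by a 1×k convolution, the usual reading of "decomposed convolution". This is a minimal illustration assuming a PyTorch implementation; the module name, channel counts, and ReLU placement are our assumptions, not taken from the paper.

    import torch
    import torch.nn as nn

    class DecomposedConv(nn.Module):
        """Factor a k x k convolution into a k x 1 followed by a 1 x k convolution.

        With C input and C output channels, a 3x3 conv uses 9*C^2 weights,
        while the 3x1 + 1x3 pair uses 6*C^2, roughly a one-third saving.
        """
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            self.vertical = nn.Conv2d(in_ch, out_ch, (k, 1), padding=(k // 2, 0))
            self.horizontal = nn.Conv2d(out_ch, out_ch, (1, k), padding=(0, k // 2))
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.horizontal(self.act(self.vertical(x))))

    x = torch.randn(1, 16, 448, 448)    # one 448x448 map, the input size in Table 3
    y = DecomposedConv(16, 16)(x)       # spatial size is preserved
    print(y.shape)                      # torch.Size([1, 16, 448, 448])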

  • Figure 1  Overall framework

    Figure 2  Illustration of convolution decomposition

    Figure 3  Comparison of direct-connection and sparse cross-layer connection network structures

    Figure 4  Comparison of results with different connection structures

    Figure 5  Illustration of multi-scale fusion

    Figure 6  Visual comparison of different models

    Figure 7  P-R curves of different algorithms on five datasets

    Table 1  Comparison of different convolution structures

    Structure          Parameters (10^6)   Accuracy (%)   Time (s)
    2D convolution     5.16                89.3           0.026
    Decomposed conv.   3.75                89.7           0.017
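
    As a rough check on these numbers (our back-of-the-envelope reasoning, not the paper's): a 3×3 convolution with C input and C output channels uses 9C^2 weights, while the factored 3×1 plus 1×3 pair uses 6C^2, a one-third saving per decomposed layer. The drop in Table 1 from 5.16M to 3.75M parameters (about 27%) is of the same order, plausibly because not every layer in the network is decomposed.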

    Table 2  Comparison of different connection structures

    Structure                    Accuracy (%)   Time (s)
    Without cross-layer links    89.7           0.017
    With cross-layer links       91.7           0.023
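
    Table 3 below labels each cross-layer branch "conv3, rate=N", which reads naturally as a 3×3 convolution with dilation rate N: a large rate widens the receptive field of a shallow feature map without downsampling it. A minimal sketch under that assumption (the function name and channel counts are illustrative):

    import torch
    import torch.nn as nn

    # One sparse cross-layer branch: a dilated 3x3 convolution applied to a
    # shallow feature map, whose output is later fused with a deeper stage.
    # With padding == dilation, a stride-1 3x3 conv keeps the spatial size.
    def cross_layer_conv3(in_ch, out_ch, rate):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=rate, dilation=rate),
            nn.ReLU(inplace=True),
        )

    shallow = torch.randn(1, 16, 448, 448)       # e.g. the convblock1 output
    branch = cross_layer_conv3(16, 32, rate=16)  # matches the "rate=16" row
    print(branch(shallow).shape)                 # torch.Size([1, 32, 448, 448])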

    Table 3  Detailed overall network structure

    Block         Layer type             Output size    Output id
    convblock1    reconv ×2              448×448×16     1
    cross-layer   conv3, rate=16         448×448×32     1'
    cross-layer   conv3, rate=24         448×448×256    1''
    convblock2    maxpool (downsample)   –              –
                  reconv ×2              224×224×32     2
    concat1       fusion                 224×224×64     (1'+2)
    conv1         dim. reduction         224×224×32     3
    cross-layer   conv3, rate=8          224×224×64     3'
    cross-layer   conv3, rate=18         224×224×256    3''
    convblock3    maxpool (downsample)   –              –
                  reconv ×3              112×112×64     4
    concat2       fusion                 112×112×128    (3'+4)
    conv1         dim. reduction         112×112×64     5
    cross-layer   conv3, rate=4          224×224×128    5'
    cross-layer   conv3, rate=12         224×224×256    5''
    convblock4    maxpool (downsample)   –              –
                  reconv ×3              56×56×128      6
    concat3       fusion                 56×56×256      (5'+6)
    conv1         dim. reduction         56×56×128      7
    cross-layer   conv3, rate=6          56×56×256      7''
    convblock5    maxpool (downsample)   –              –
                  reconv ×3              28×28×256      8
    concat4       fusion                 28×28×1280     (1''+3''+5''+7''+8)
    conv1         dim. reduction         28×28×256      9
    upblock1      deconv (upsample)      –              –
                  reconv ×3              112×112×64     –
    upblock2      deconv (upsample)      448×448×2      final
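
    The concat4 row above fuses the cross-layer outputs 1'', 3'', 5'' and 7'' with the deepest feature 8 into a 28×28×1280 tensor, which a 1×1 convolution then reduces to 256 channels (output 9). The sketch below illustrates such a fusion stage, assuming bilinear resizing to the common 28×28 resolution; the paper may bring the maps to that size differently.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fuse_multi_scale(features, out_ch=256, size=(28, 28)):
        """Resize same-channel features to a common size, concatenate along
        the channel axis, and reduce channels with a 1x1 convolution."""
        resized = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
                   for f in features]
        fused = torch.cat(resized, dim=1)              # 5 * 256 = 1280 channels
        reduce = nn.Conv2d(fused.shape[1], out_ch, kernel_size=1)
        return reduce(fused)

    # Shapes matching the 1'', 3'', 5'', 7'' and 8 rows of Table 3
    feats = [torch.randn(1, 256, s, s) for s in (448, 224, 224, 56, 28)]
    print(fuse_multi_scale(feats).shape)               # torch.Size([1, 256, 28, 28])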

    Table 4  F-measure (F-m) and MAE scores

    Algorithm   MSRA           ECSSD          PASCAL-S       SOD            HKU-IS
                F-m     MAE    F-m     MAE    F-m     MAE    F-m     MAE    F-m     MAE
    Proposed    0.914   0.045  0.893   0.060  0.814   0.113  0.832   0.119  0.893   0.036
    DCL         0.905   0.052  0.890   0.088  0.805   0.125  0.820   0.139  0.885   0.072
    ELD         0.904   0.062  0.867   0.080  0.771   0.121  0.760   0.154  0.839   0.074
    NLDF        0.911   0.048  0.905   0.063  0.831   0.099  0.810   0.143  0.902   0.048
    MST         0.839   0.128  0.653   0.171  0.584   0.236  –       –      –       –
    DSR         0.812   0.119  0.737   0.173  0.646   0.204  0.655   0.234  0.735   0.140

    (–: not reported; the three reported MST pairs are shown in source order.)

    Table 5  Processing time comparison of different algorithms (s)

    Model   Proposed   DCL       ELD       NLDF      MST       DSR
    Time    0.023      1.200     0.300     0.080     0.025     13.580
    Env.    GTX1080    GTX1080   GTX1080   Titan X   i7 CPU    i7 CPU
    Size    448×448    300×400   400×300   300×400   300×400   400×300
  • References:

    WANG Lijun, LU Huchuan, RUAN Xiang, et al. Deep networks for saliency detection via local estimation and global search[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 3183–3192. doi: 10.1109/CVPR.2015.7298938.
    LI Guanbin and YU Yizhou. Visual saliency based on multiscale deep features[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 5455–5463. doi: 10.1109/CVPR.2015.7299184.
    LEE G, TAI Y W, and KIM J. Deep saliency with encoded low level distance map and high level features[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 660–668. doi: 10.1109/CVPR.2016.78.
    LIU Nian and HAN Junwei. DHSNet: Deep hierarchical saliency network for salient object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 678–686. doi: 10.1109/CVPR.2016.80.
    WANG Linzhao, WANG Lijun, LU Huchuan, et al. Saliency detection with recurrent fully convolutional networks[C]. The 14th European Conference on Computer Vision, Amsterdam, Netherlands, 2016: 825–841. doi: 10.1007/978-3-319-46493-0_50.
    ZHANG Xinsheng, GAO Teng, and GAO Dongdong. A new deep spatial transformer convolutional neural network for image saliency detection[J]. Design Automation for Embedded Systems, 2018, 22(3): 243–256. doi: 10.1007/s10617-018-9209-0
    ZHANG Jing, ZHANG Tong, DAI Yuchao, et al. Deep unsupervised saliency detection: A multiple noisy labeling perspective[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 9029–9038. doi: 10.1109/CVPR.2018.00941.
    CAO Feilong, LIU Yuehua, and WANG Dianhui. Efficient saliency detection using convolutional neural networks with feature selection[J]. Information Sciences, 2018, 456: 34–49. doi: 10.1016/j.ins.2018.05.006
    ZHU Dandan, DAI Lei, LUO Ye, et al. Multi-scale adversarial feature learning for saliency detection[J]. Symmetry, 2018, 10(10): 457–471. doi: 10.3390/sym10100457
    ZENG Yu, ZHUGE Yunzhi, LU Huchuan, et al. Multi-source weak supervision for saliency detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 6067–6076.
    SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. 2014, arXiv: 1409.1556.
    ALVAREZ J and PETERSSON L. DecomposeMe: Simplifying convNets for end-to-end learning[J]. 2016, arXiv: 1606.05426v1.
    LIU Tie, YUAN Zejian, SUN Jian, et al. Learning to detect a salient object[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(2): 353–367. doi: 10.1109/TPAMI.2010.70
    YAN Qiong, XU Li, SHI Jianping, et al. Hierarchical saliency detection[C]. 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 1155–1162. doi: 10.1109/CVPR.2013.153.
    LI Yin, HOU Xiaodi, KOCH C, et al. The secrets of salient object segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 280–287. doi: 10.1109/CVPR.2014.43.
    MOVAHEDI V and ELDER J H. Design and perceptual validation of performance measures for salient object segmentation[C]. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 49–56. doi: 10.1109/CVPRW.2010.5543739.
    LI Guanbin and YU Yizhou. Deep contrast learning for salient object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 478–487. doi: 10.1109/CVPR.2016.58.
    LUO Zhiming, MISHRA A, ACHKAR A, et al. Non-local deep features for salient object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6593–6601. doi: 10.1109/CVPR.2017.698.
    TU W C, HE Shengfeng, YANG Qingxiong, et al. Real-time salient object detection with a minimum spanning tree[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 2334–2342. doi: 10.1109/CVPR.2016.256.
    LI Xiaohui, LU Huchuan, ZHANG Lihe, et al. Saliency detection via dense and sparse reconstruction[C]. 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 2013: 2976–2983. doi: 10.1109/ICCV.2013.370.
Publication history
  • Received:  2019-04-08
  • Revised:  2019-08-30
  • Published online:  2020-01-21
  • Issue published:  2020-06-04
