Multi-scale Semantic Information Fusion for Object Detection

Hongkun CHEN, Huilan LUO

Citation: Hongkun CHEN, Huilan LUO. Multi-scale Semantic Information Fusion for Object Detection[J]. Journal of Electronics & Information Technology, 2021, 43(7): 2087-2095. doi: 10.11999/JEIT200147


doi: 10.11999/JEIT200147
Funds: The National Natural Science Foundation of China (61862031, 61462035); the Science and Technology Research Project of Jiangxi Provincial Department of Education (GJJ200859, GJJ200884); the Ganzhou City, Jiangxi Province "Technology Innovation Talent Program" Project
Details
    Author biographies:

    Hongkun CHEN: male, born in 1995, M.S.; research interest: object detection

    Huilan LUO: female, born in 1974, professor with postdoctoral experience; research interests: machine learning and pattern recognition

    Corresponding author:

    Huilan LUO, luohuilan@sina.com

  • CLC number: TN911.73; TP391.4

  • Abstract: To address the poor performance of current object detection algorithms on small and densely packed objects, this paper proposes the Shallow Enhanced Feature Network (SEFN), which fuses multiple features and strengthens the representational power of shallow features. First, the features extracted from the Conv4_3 and Conv5_3 layers of the VGG16 backbone are fused to form base fused features. These base fused features are then fed into a compact multi-scale semantic information fusion module to obtain semantic features rich in contextual and spatial detail information; at the same time, the semantic features and the base fused features pass through a feature re-utilization module to produce shallow enhanced features. Finally, a series of convolutions over the shallow enhanced features yields feature maps at multiple scales, each fed to a detection branch, and non-maximum suppression produces the final detections. On the PASCAL VOC2007 and MS COCO2014 datasets the model achieves mean Average Precision (mAP) of 81.2% and 33.7%, respectively, 2.7% and 4.9% higher than the classic Single Shot multibox Detector (SSD). In addition, the method significantly improves both precision and recall in scenes with small and densely packed objects. The experimental results show that the feature pyramid structure enriches the semantic information of shallow features, and the feature re-utilization module effectively preserves shallow detail information for detection, improving performance on small and dense objects.
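The pipeline described above ends with standard greedy non-maximum suppression over the per-branch detections. A minimal sketch of that final step (an illustrative implementation, not the paper's exact code; the [x1, y1, x2, y2] box layout and the 0.45 IoU threshold are assumptions):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes; boxes as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it
    above the threshold, and repeat on the remainder."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep
```

In a one-stage detector such as SSD or SEFN, this is typically run per class after score thresholding, which is why dense scenes stress it: heavily overlapping true objects can be suppressed along with duplicates.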
  • Figure 1  Shallow enhanced feature network

    Figure 2  Concatenation fusion module

    Figure 3  Multi-scale semantic information fusion module

    Figure 4  Feature re-utilization module

    Figure 5  Detection results of different algorithms on the PASCAL VOC2007 dataset

    Table 1  Comparison between the proposed method and other methods on the PASCAL VOC2007 test set

    Method | Backbone | Input size | GPU | fps (frame/s) | mAP(%), IOU=0.5
    Faster RCNN[16] | VGG16 | 1000×600 | Titan X | 7.0 | 73.2
    Faster RCNN[16] | ResNet-101 | 1000×600 | K40 | 2.4 | 76.4
    HyperNet[17] | VGG16 | 1000×600 | Titan X | 5.0 | 76.3
    OHEM[18] | VGG16 | 1000×600 | Titan X | 7.0 | 74.6
    ION[19] | VGG16 | 1000×600 | Titan X | 1.3 | 76.5
    R-FCN[12] | ResNet-101 | 1000×600 | K40 | 5.8 | 79.5
    YOLOv1[14] | GoogleNet | 448×448 | Titan X | 45.0 | 63.4
    YOLOv2[15] | Darknet-19 | 352×352 | Titan X | 81.0 | 73.7
    SSD300[1] | VGG16 | 300×300 | Titan X | 46.0 | 77.2
    DSSD321[4] | ResNet-101 | 321×321 | Titan X | 9.5 | 78.6
    RSSD300[13] | VGG16 | 300×300 | Titan X | 35.0 | 78.5
    FSSD300[6] | VGG16 | 300×300 | 1080Ti | 65.8 | 78.8
    RFB300[7] | VGG16 | 300×300 | 1080Ti | 83.0 | 80.5
    SEFN300 (ours) | VGG16 | 300×300 | Tesla P100 | 55.0 | 79.6
    YOLOv2[15] | Darknet-19 | 544×544 | Titan X | 40.0 | 78.6
    SSD512[1] | VGG16 | 512×512 | Titan X | 19.0 | 78.5
    DSSD513[4] | ResNet-101 | 513×513 | Titan X | 5.5 | 81.5
    RSSD512[13] | VGG16 | 512×512 | Titan X | 16.6 | 80.8
    FSSD512[6] | VGG16 | 512×512 | 1080Ti | 35.7 | 80.9
    RFB512[7] | VGG16 | 512×512 | 1080Ti | 38.0 | 82.2
    SEFN512 (ours) | VGG16 | 512×512 | Tesla P100 | 30.0 | 81.2

    Table 2  Comparison between the proposed method and other methods on the MS COCO2014_minival test set

    Method | Backbone | mAP(%) IOU=0.5:0.95 | mAP(%) IOU=0.5 | mAP(%) IOU=0.75 | mAP(%) area: S | M | L | AR(%) area: S | M | L
    Faster R-CNN[16] | VGG16 | 24.2 | 45.3 | 23.5 | 7.7 | 26.4 | 37.1 | – | – | –
    R-FCN[12] | ResNet-101 | 29.2 | 51.5 | – | 10.3 | 32.4 | 43.3 | – | – | –
    YOLOv2[15] | Darknet-19 | 21.6 | 44.0 | 19.2 | 5.0 | 22.4 | 35.5 | 9.8 | 36.5 | 54.4
    SSD512[1] | VGG16 | 28.8 | 48.5 | 30.3 | 10.9 | 31.8 | 43.5 | 16.5 | 46.6 | 60.8
    DSSD513[4] | ResNet-101 | 33.2 | 53.3 | 35.2 | 13.0 | 35.4 | 51.5 | 21.8 | 49.1 | 66.4
    FSSD512[6] | VGG16 | 31.8 | 52.8 | 33.5 | 14.2 | 35.1 | 45.0 | 22.3 | 49.9 | 62.0
    RFB512[7] | VGG16 | 34.4 | 55.7 | 36.4 | 17.6 | 37.0 | 49.7 | 27.3 | 52.3 | 65.4
    SEFN512 (ours) | VGG16 | 33.7 | 54.7 | 35.6 | 19.2 | 38.0 | 47.3 | 29.1 | 52.5 | 63.2
  • [1] LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: Single shot MultiBox detector[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 21–37.
    [2] LUO Huilan, LU Fei, and KONG Fansheng. Image semantic segmentation based on region and deep residual network[J]. Journal of Electronics & Information Technology, 2019, 41(11): 2777–2786. doi: 10.11999/JEIT190056
    [3] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 936–944.
    [4] FU Chengyang, LIU Wei, RANGA A, et al. DSSD: Deconvolutional single shot detector[EB/OL]. http://arxiv.org/abs/1701.06659, 2017.
    [5] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [6] LI Zuoxin and ZHOU Fuqiang. FSSD: Feature fusion single shot multibox detector[EB/OL]. https://arxiv.org/abs/1712.00960, 2017.
    [7] LIU Songtao, HUANG Di, and WANG Yunhong. Receptive field block net for accurate and fast object detection[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 404–419.
    [8] EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL Visual Object Classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303–338. doi: 10.1007/s11263-009-0275-4
    [9] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common objects in context[C]. 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 740–755.
    [10] LI Hanchao, XIONG Pengfei, AN Jie, et al. Pyramid attention network for semantic segmentation[C]. British Machine Vision Conference, Newcastle, UK, 2018.
    [11] LUO Huilan, LU Fei, and YAN Yuan. Action recognition based on multi-model voting with cross layer fusion[J]. Journal of Electronics & Information Technology, 2019, 41(3): 649–655. doi: 10.11999/JEIT180373
    [12] DAI Jifeng, LI Yi, HE Kaiming, et al. R-FCN: Object detection via region-based fully convolutional networks[C]. The 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 2016: 379–387.
    [13] JEONG J, PARK H, and KWAK N. Enhancement of SSD by concatenating feature maps for object detection[C]. British Machine Vision Conference, London, UK, 2017.
    [14] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 779–788.
    [15] REDMON J and FARHADI A. YOLO9000: Better, faster, stronger[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6517–6525.
    [16] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149. doi: 10.1109/TPAMI.2016.2577031
    [17] KONG Tao, YAO Anbang, CHEN Yurong, et al. HyperNet: Towards accurate region proposal generation and joint object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 845–853.
    [18] SHRIVASTAVA A, GUPTA A, and GIRSHICK R. Training region-based object detectors with online hard example mining[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016.
    [19] BELL S, ZITNICK C L, BALA K, et al. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 2874–2883.

Publication history
  • Received: 2020-03-03
  • Revised: 2020-11-27
  • Published online: 2020-12-07
  • Issue date: 2021-07-10
