
Salient Object Detection Based on Multiple Graph Neural Networks Collaborative Learning

LIU Bing, WANG Tiantian, GAO Lina, XU Mingzhu, FU Ping

Citation: LIU Bing, WANG Tiantian, GAO Lina, XU Mingzhu, FU Ping. Salient Object Detection Based on Multiple Graph Neural Networks Collaborative Learning[J]. Journal of Electronics & Information Technology, 2023, 45(7): 2561-2570. doi: 10.11999/JEIT220706

doi: 10.11999/JEIT220706
Funds: The National Natural Science Foundation of China (62171156)
Details
    Author biographies:

    LIU Bing: male, associate professor; research interests include machine learning and image processing, and embedded artificial intelligence

    WANG Tiantian: female, M.S. candidate; her research interest is computer vision

    GAO Lina: female, Ph.D. candidate; her research interest is computer vision

    XU Mingzhu: male, assistant researcher; research interests include multimedia information computing and computer vision

    FU Ping: male, professor; research interests include machine learning and image processing, and information detection and processing

    Corresponding author:

    XU Mingzhu, xumingzhu@sdu.edu.cn

  • CLC number: TN911.73; TP391.41

  • Abstract: Current salient object detection methods based on deep convolutional neural networks are difficult to apply to irregularly structured data in non-Euclidean space, and in complex visual scenes they tend to lose high-frequency information such as the edges and structure of salient objects, degrading detection performance. To address this, an end-to-end multiple graph neural network collaborative learning framework is proposed for salient object detection, in which salient edge features and salient region features are learned collaboratively. Within this framework, a dynamic information-enhanced graph convolution operator is constructed: by strengthening message passing both between different graph nodes and between different channels within the same node, it captures global contextual structure in non-Euclidean space and fully mines salient edge information and salient region information. Furthermore, an attention-aware fusion module is introduced to fuse salient edge information and salient region information complementarily, providing complementary cues for both mining processes. Finally, salient edge information is explicitly encoded to guide the feature learning of salient regions, so that salient regions in complex scenes are located more precisely. Experiments on four public benchmark datasets show that the proposed method outperforms current mainstream salient object detection methods based on deep convolutional neural networks, with strong robustness and generalization ability.
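The dynamic information-enhanced graph convolution operator described in the abstract extends standard graph convolution with cross-node and cross-channel message passing. The operator itself is not reproduced on this page; as a minimal illustrative sketch, a plain symmetrically normalized graph convolution layer in the style of Kipf and Welling — the building block such operators extend — over region-graph node features could look like the following, where the function name, shapes, and toy graph are all assumptions for illustration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (N, N) adjacency matrix of the region graph
    H: (N, C) node features (e.g. pooled region descriptors)
    W: (C, C_out) learnable weight matrix
    """
    N = A.shape[0]
    A_hat = A + np.eye(N)                   # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    H_out = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_out, 0.0)           # ReLU

# toy graph: 4 nodes in a chain, 3-dim input features, 2-dim output
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.RandomState(0).rand(4, 3)
W = np.random.RandomState(1).rand(3, 2)
H1 = gcn_layer(A, H, W)
print(H1.shape)  # (4, 2)
```

Stacking such layers propagates information along graph edges, which is how non-Euclidean global context is aggregated; the paper's operator additionally modulates this propagation dynamically across channels.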
  • Fig. 1  Overall framework of the proposed method

    Fig. 2  Illustration of the initial graph interaction

    Fig. 3  Dynamic information-enhanced graph convolution module

    Fig. 4  Attention-aware fusion module

    Fig. 5  P-R curves of the 9 methods on 3 benchmark datasets

    Fig. 6  Visual comparison results

    Fig. 7  Effect of the number of relation types on $S_\alpha$, $F_\beta^\omega$, and MAE

    Table 1  Performance under different settings of parameters $N$ and $k$

                           ECSSD                       PASCAL-S
                           Sα(↑)   Fβω(↑)  MAE(↓)     Sα(↑)   Fβω(↑)  MAE(↓)
    Ours (N=48, k=8)       0.933   0.924   0.024      0.881   0.841   0.047
    Ours (N=32, k=8)       0.932   0.926   0.024      0.886   0.850   0.047
    Ours (N=16, k=8)       0.929   0.923   0.028      0.879   0.830   0.057
    Ours (N=32, k=16)      0.932   0.928   0.024      0.879   0.851   0.047
    Ours (N=32, k=8)       0.932   0.926   0.024      0.886   0.850   0.047
    Ours (N=32, k=4)       0.931   0.922   0.026      0.875   0.833   0.052

    Table 2  $S_\alpha$, $F_\beta^\omega$, and MAE of the 9 methods on 4 benchmark datasets

    Method     DUTS-TE                   ECSSD                     PASCAL-S                  DUT-OMRON
               Sα(↑)  Fβω(↑)  MAE(↓)    Sα(↑)  Fβω(↑)  MAE(↓)    Sα(↑)  Fβω(↑)  MAE(↓)    Sα(↑)  Fβω(↑)  MAE(↓)
    PoolNet    0.883  0.807   0.040     0.921  0.896   0.039     0.851  0.799   0.075     0.836  0.729   0.055
    BASNet     0.866  0.803   0.048     0.916  0.904   0.037     0.836  0.795   0.077     0.836  0.751   0.057
    EGNet      0.879  0.798   0.044     0.919  0.892   0.041     0.847  0.791   0.078     0.836  0.727   0.056
    AFNet      0.867  0.785   0.046     0.914  0.887   0.042     0.850  0.797   0.071     0.826  0.717   0.057
    DFI        0.886  0.817   0.039     0.927  0.906   0.035     0.866  0.819   0.065     0.839  0.736   0.055
    LDF        0.892  0.845   0.034     0.924  0.915   0.034     0.862  0.825   0.061     0.839  0.751   0.052
    PFSNet     0.900  0.898   0.036     0.927  0.912   0.031     0.844  0.791   0.063     0.802  0.743   0.055
    DCN        0.892  0.840   0.035     0.928  0.920   0.032     0.862  0.825   0.062     0.845  0.760   0.051
    Ours       0.920  0.893   0.027     0.932  0.926   0.024     0.886  0.850   0.047     0.867  0.807   0.048
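The tables report three standard saliency metrics: $S_\alpha$ (structure-measure) and $F_\beta^\omega$ (weighted F-measure), where higher is better, and MAE, where lower is better. The first two follow the definitions in the saliency literature; MAE is simply the mean absolute difference between the predicted saliency map and the binary ground truth. A minimal sketch (the function name and toy arrays are illustrative):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and ground truth.

    Both maps are expected to have the same spatial size, with values in [0, 1].
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.abs(pred - gt).mean())

# toy example: a perfect prediction scores 0, a fully inverted one scores 1
gt = np.array([[1.0, 0.0],
               [0.0, 1.0]])
print(mae(gt, gt))        # 0.0
print(mae(1.0 - gt, gt))  # 1.0
```

On this scale, the gap between e.g. 0.047 and 0.061 on PASCAL-S corresponds to the average per-pixel deviation from the ground-truth mask.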

    Table 3  Impact of the individual modules on performance (ECSSD)

    BD    MEGC (R=2)    AMF        Sα(↑)   Fβω(↑)  MAE(↓)
                                   0.725   0.708   0.189
                                   0.911   0.898   0.040
                                   0.932   0.926   0.024
Publication history
  • Received: 2022-05-31
  • Revised: 2022-12-05
  • Available online: 2022-12-22
  • Issue published: 2023-07-10
