Volume 46 Issue 7
Jul. 2024
Citation: XIA Chenxing, CHEN Xinyu, SUN Yanguang, GE Bin, FANG Xianjin, GAO Xiuju, ZHANG Yan. Integrating Multiple Context and Hybrid Interaction for Salient Object Detection[J]. Journal of Electronics & Information Technology, 2024, 46(7): 2918-2931. doi: 10.11999/JEIT230719

Integrating Multiple Context and Hybrid Interaction for Salient Object Detection

doi: 10.11999/JEIT230719
Funds:  The National Natural Science Foundation of China (62102003), The Natural Science Foundation of Anhui Province (2108085QF258), Anhui Postdoctoral Science Foundation (2022B623), Huainan City Science and Technology Plan Project (2023A316), The University Synergy Innovation Program of Anhui Province (GXXT-2021-006, GXXT-2022-038), The University-level General Projects of Anhui University of Science and Technology (xjyb2020-04), The Central Guiding Local Technology Development Special Funds (202107d06020001)
  • Received Date: 2023-07-18
  • Rev Recd Date: 2024-01-06
  • Available Online: 2024-01-28
  • Publish Date: 2024-07-29
  • Abstract: Salient Object Detection (SOD) aims to recognize and segment visually salient objects in images and is an important research topic in computer vision and related fields. Existing Fully Convolutional Network (FCN)-based SOD methods have achieved good performance. However, the types and sizes of salient objects vary widely in real-world scenes, so detecting and segmenting them accurately and completely remains a considerable challenge. To address this, a novel method that integrates multiple contexts and hybrid interaction is proposed for the SOD task; it efficiently predicts salient objects through the collaboration of a Dense Context Information Exploration (DCIE) module and a Multi-source Feature Hybrid Interaction (MFHI) module. The DCIE module uses dilated convolution, asymmetric convolution, and dense guided connections to progressively capture strongly correlated multi-scale, multi-receptive-field context information, and it enhances the representation ability of each initial input feature by aggregating this context. The MFHI module contains diverse feature aggregation operations and adaptively interacts with complementary information from multi-level features to generate high-quality feature representations for accurately predicting saliency maps. The performance of the proposed method is evaluated on five public datasets. Experimental results demonstrate that it achieves superior prediction performance compared with 19 state-of-the-art SOD methods under different evaluation metrics.
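A minimal, runnable PyTorch sketch of the two ideas described in the abstract is given below. It is not the authors' implementation: the class names (DenseContextSketch, HybridInteractionSketch), channel widths, dilation rates (1, 2, 4), and the multiply-plus-add fusion are illustrative assumptions; only the general mechanisms named in the abstract (dilated and asymmetric convolutions with dense connections for context, and interaction of multi-level features for fusion) are taken from the text.

```python
# Illustrative sketch only (not the paper's code): a DCIE-like context branch
# that stacks dilated and asymmetric convolutions with dense connections, and
# an MFHI-like fusion of two feature levels. All names, channel sizes and
# dilation rates are assumptions chosen for readability.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedAsymBranch(nn.Module):
    """One context branch: 1x3 and 3x1 (asymmetric) convs followed by a dilated 3x3 conv."""
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.asym = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3), padding=(0, 1)),
            nn.Conv2d(out_ch, out_ch, kernel_size=(3, 1), padding=(1, 0)),
            nn.ReLU(inplace=True),
        )
        # The dilated 3x3 conv enlarges the receptive field without adding
        # parameters compared to a plain 3x3 conv.
        self.dilated = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.dilated(self.asym(x))


class DenseContextSketch(nn.Module):
    """DCIE-like module: each branch also sees the outputs of all earlier branches
    (dense guided connections), so context with a growing receptive field accumulates."""
    def __init__(self, in_ch=64, branch_ch=32, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList()
        for i, d in enumerate(dilations):
            # Input = original feature + outputs of all previous branches.
            self.branches.append(DilatedAsymBranch(in_ch + i * branch_ch, branch_ch, d))
        self.fuse = nn.Conv2d(in_ch + len(dilations) * branch_ch, in_ch, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        # Residual-style aggregation keeps the initial feature and adds context.
        return x + self.fuse(torch.cat(feats, dim=1))


class HybridInteractionSketch(nn.Module):
    """MFHI-like fusion of a coarse (deep) and a fine (shallow) feature map,
    using element-wise multiplication plus addition as the 'hybrid' interaction."""
    def __init__(self, ch=64):
        super().__init__()
        self.refine = nn.Conv2d(ch, ch, kernel_size=3, padding=1)

    def forward(self, low, high):
        # Upsample the coarse feature to the fine resolution before interacting.
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear", align_corners=False)
        return self.refine(low * high + low + high)


if __name__ == "__main__":
    x_low = torch.randn(1, 64, 88, 88)    # fine, shallow feature
    x_high = torch.randn(1, 64, 22, 22)   # coarse, deep feature
    context = DenseContextSketch()(x_low)
    fused = HybridInteractionSketch()(context, x_high)
    print(fused.shape)  # torch.Size([1, 64, 88, 88])
```

The dense connections mean each later branch receives the outputs of all earlier branches alongside the original feature, which is one simple way to keep progressively larger receptive fields correlated with the initial input, as the abstract describes for the DCIE module.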
  • [1]
    LIU Mingyuan, SCHONFELD D, and TANG Wei. Exploit visual dependency relations for semantic segmentation[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 9726–9735. doi: 10.1109/CVPR46437.2021.00960.
    [2]
    ZHANG Xi and WU Xiaolin. Attention-guided image compression by deep reconstruction of compressive sensed saliency skeleton[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 13354–13364. doi: 10.1109/cvpr46437.2021.01315.
    [3]
    LEE S, SEONG H, LEE S, et al. Correlation verification for image retrieval[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 5374–5384. doi: 10.1109/CVPR52688.2022.00530.
    [4]
    ZHU Junyan, WU Jianjun, XU Yan, et al. Unsupervised object class discovery via saliency-guided multiple class learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(4): 862–875. doi: 10.1109/tpami.2014.2353617.
    [5]
    GUPTA D K, ARYA D, and GAVVES E. Rotation equivariant Siamese networks for tracking[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 12362–12371. doi: 10.1109/cvpr46437.2021.01218.
    [6]
    PANG Youwei, ZHAO Xiaoqi, XIANG Tianzhu, et al. Zoom in and out: A mixed-scale triplet network for camouflaged object detection[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 2160–2170. doi: 10.1109/cvpr52688.2022.00220.
    [7]
    PENG Houwen, LI Bing, LING Haibin, et al. Salient object detection via structured matrix decomposition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 818–832. doi: 10.1109/TPAMI.2016.2562626.
    [8]
    SHEN Xiaohui and WU Ying. A unified approach to salient object detection via low rank matrix recovery[C]. 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 853–860. doi: 10.1109/cvpr.2012.6247758.
    [9]
    唐红梅, 白梦月, 韩力英, 等. 基于低秩背景约束与多线索传播的图像显著性检测[J]. 电子与信息学报, 2021, 43(5): 1432–1440. doi: 10.11999/JEIT200193.

    TANG Hongmei, BAI Mengyue, HAN Liying, et al. Image saliency detection based on background constraint of low rank and multi-cue propagation[J]. Journal of Electronics & Information Technology, 2021, 43(5): 1432–1440. doi: 10.11999/JEIT200193.
    [10]
    MARGOLIN R, TAL A, and ZELNIK-MANOR L. What makes a patch distinct?[C]. 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 1139–1146. doi: 10.1109/cvpr.2013.151.
    [11]
    TONG Na, LU Huchuan, RUAN Xiang, et al. Salient object detection via bootstrap learning[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 1884–1892. doi: 10.1109/cvpr.2015.7298798.
    [12]
    JIANG Zhuolin and DAVIS L S. Submodular salient region detection[C]. 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 2043–2050. doi: 10.1109/cvpr.2013.266.
    [13]
    LI Guanbin and YU Yizhou. Visual saliency based on multiscale deep features[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 5455–5463. doi: 10.1109/cvpr.2015.7299184.
    [14]
    LIU Nian and HAN Junwei. DHSnet: Deep hierarchical saliency network for salient object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 678–686. doi: 10.1109/cvpr.2016.80.
    [15]
    SHELHAMER E, LONG J, and DARRELL T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 640–651. doi: 10.1109/TPAMI.2016.2572683.
    [16]
    ZHAO Ting and WU Xiangqian. Pyramid feature attention network for saliency detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 3085–3094. doi: 10.1109/cvpr.2019.00320.
    [17]
    SIRIS A, JIAO Jianbo, TAM G K L, et al. Scene context-aware salient object detection[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 4156–4166. doi: 10.1109/iccv48922.2021.00412.
    [18]
    CHEN Zuyao, XU Qianqian, CONG Runmin, et al. Global context-aware progressive aggregation network for salient object detection[C]. The 34th AAAI Conference on Artificial Intelligence, New York, USA, 2020: 10599–10606. doi: 10.1609/aaai.v34i07.6633.
    [19]
    WU Zhenyu, LI Shuai, CHEN Chenglizhao, et al. Salient object detection via dynamic scale routing[J]. IEEE Transactions on Image Processing, 2022, 31: 6649–6663. doi: 10.1109/tip.2022.3214332.
    [20]
    ZHANG Pingping, WANG Dong, LU Huchuan, et al. Amulet: Aggregating multi-level convolutional features for salient object detection[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 202–211. doi: 10.1109/iccv.2017.31.
    [21]
    HOU Qibin, CHENG Mingming, HU Xiaowei, et al. Deeply supervised salient object detection with short connections[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(4): 815–828. doi: 10.1109/TPAMI.2018.2815688.
    [22]
    LI Junxia, PAN Zefeng, LIU Qingshan, et al. Stacked U-shape network with channel-wise attention for salient object detection[J]. IEEE Transactions on Multimedia, 2021, 23: 1397–1409. doi: 10.1109/TMM.2020.2997192.
    [23]
    ZHANG Lu, DAI Ju, LU Huchuan, et al. A bi-directional message passing model for salient object detection[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 1741–1750. doi: 10.1109/cvpr.2018.00187.
    [24]
    雷大江, 杜加浩, 张莉萍, 等. 联合多流融合和多尺度学习的卷积神经网络遥感图像融合方法[J]. 电子与信息学报, 2022, 44(1): 237–244. doi: 10.11999/JEIT200792.

    LEI Dajiang, DU Jiahao, ZHANG Liping, et al. Multi-stream architecture and multi-scale convolutional neural network for remote sensing image fusion[J]. Journal of Electronics & Information Technology, 2022, 44(1): 237–244. doi: 10.11999/JEIT200792.
    [25]
    李珣, 李林鹏, LAZOVIK A, 等. 基于改进双流卷积递归神经网络的RGB-D物体识别方法[J]. 光电工程, 2021, 48(2): 200069. doi: 10.12086/oee.2021.200069.

    LI Xun, LI Linpeng, LAZOVIK A, et al. RGB-D object recognition algorithm based on improved double stream convolution recursive neural network[J]. Opto-Electronic Engineering, 2021, 48(2): 200069. doi: 10.12086/oee.2021.200069.
    [26]
    邓箴, 王一斌, 刘立波. 视觉注意机制的注意残差稠密神经网络弱光照图像增强[J]. 液晶与显示, 2021, 36(11): 1463–1473. doi: 10.37188/CJLCD.2021-0098.

    DENG Zhen, WANG Yibin, and LIU Libo. Attentive residual dense network of visual attention mechanism for weakly illuminated image enhancement[J]. Chinese Journal of Liquid Crystals and Displays, 2021, 36(11): 1463–1473. doi: 10.37188/CJLCD.2021-0098.
    [27]
    HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778. doi: 10.1109/cvpr.2016.90.
    [28]
    LIU Jiangjiang, HOU Qibin, CHENG Mingming, et al. A simple pooling-based design for real-time salient object detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 3917–3926. doi: 10.1109/cvpr.2019.00404.
    [29]
    卢珊妹, 郭强, 王任, 等. 基于多特征注意力循环网络的显著性检测[J]. 计算机辅助设计与图形学学报, 2020, 32(12): 1926–1937. doi: 10.3724/sp.j.1089.2020.18240.

    LU Shanmei, GUO Qiang, WANG Ren, et al. Salient object detection using multi-scale features with attention recurrent mechanism[J]. Journal of Computer-Aided Design & Computer Graphics, 2020, 32(12): 1926–1937. doi: 10.3724/sp.j.1089.2020.18240.
    [30]
    HUANG Gao, LIU Zhuang, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 4700–4708. doi: 10.1109/cvpr.2017.243.
    [31]
    ZHANG Xiangyu, ZHOU Xinyu, LIN Mengxiao, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 6848–6856. doi: 10.1109/cvpr.2018.00716.
    [32]
    ZHANG Pingping, WANG Dong, LU Huchuan, et al. Learning uncertain convolutional features for accurate saliency detection[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 212–221. doi: 10.1109/iccv.2017.32.
    [33]
    WANG Tiantian, ZHANG Lihe, WANG Shuo, et al. Detect globally, refine locally: A novel approach to saliency detection[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 3127–3135. doi: 10.1109/cvpr.2018.00330.
    [34]
    FENG Mengyang, LU Huchuan, and DING Errui. Attentive feedback network for boundary-aware salient object detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 1623–1632. doi: 10.1109/cvpr.2019.00172.
    [35]
    WU Zhe, SU Li, and HUANG Qingming. Cascaded partial decoder for fast and accurate salient object detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 3907–3916. doi: 10.1109/cvpr.2019.00403.
    [36]
    FENG Mengyang, LU Huchuan, and YU Yizhou. Residual learning for salient object detection[J]. IEEE Transactions on Image Processing, 2020, 29: 4696–4708. doi: 10.1109/tip.2020.2975919.
    [37]
    ZHAO Xiaoqi, PANG Youwei, ZHANG Lihe, et al. Suppress and balance: A simple gated network for salient object detection[C]. The 16th European Conference on Computer Vision, Glasgow, United Kingdom, 2020: 35–51. doi: 10.1007/978-3-030-58536-5_3.
    [38]
    ZHOU Huajun, XIE Xiaohua, LAI Jianhuang, et al. Interactive two-stream decoder for accurate and fast saliency detection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 9141–9150. doi: 10.1109/cvpr42600.2020.00916.
    [39]
    PANG Youwei, ZHAO Xiaoqi, ZHANG Lihe, et al. Multi-scale interactive network for salient object detection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 9413–9422. doi: 10.1109/cvpr42600.2020.00943.
    [40]
    REN Qinghua, LU Shijian, ZHANG Jinxia, et al. Salient object detection by fusing local and global contexts[J]. IEEE Transactions on Multimedia, 2021, 23: 1442–1453. doi: 10.1109/tmm.2020.2997178.
    [41]
    WANG Liansheng, CHEN Rongzhen, ZHU Lei, et al. Deep sub-region network for salient object detection[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(2): 728–741. doi: 10.1109/tcsvt.2020.2988768.
    [42]
    LIU Nian, ZHANG Ni, WAN Kaiyuan, et al. Visual saliency transformer[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 4722–4732. doi: 10.1109/iccv48922.2021.00468.
    [43]
    LIU Yun, CHENG Mingming, ZHANG Xinyu, et al. DNA: Deeply supervised nonlinear aggregation for salient object detection[J]. IEEE Transactions on Cybernetics, 2022, 52(7): 6131–6142. doi: 10.1109/tcyb.2021.3051350.
    [44]
    MEI Haiyang, LIU Yuanyuan, WEI Ziqi, et al. Exploring dense context for salient object detection[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(3): 1378–1389. doi: 10.1109/tcsvt.2021.3069848.
    [45]
    FANG Chaowei, TIAN Haibin, ZHANG Dingwen, et al. Densely nested top-down flows for salient object detection[J]. Science China Information Sciences, 2022, 65(8): 182103. doi: 10.1007/s11432-021-3384-y.
    [46]
    ZHUGE Mingchen, FAN Dengping, LIU Nian, et al. Salient object detection via integrity learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(3): 3738–3752. doi: 10.1109/tpami.2022.3179526.
    [47]
    LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2117–2125. doi: 10.1109/cvpr.2017.106.
    [48]
    CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834–848. doi: 10.1109/tpami.2017.2699184.
    [49]
    LIU Songtao, HUANG Di, and WANG Yunhong. Receptive field block net for accurate and fast object detection[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 385–400. doi: 10.1007/978-3-030-01252-6_24.
  • 加载中
