Volume 44, Issue 3, March 2022
Citation: ZHU Shiping, XIE Wentao, ZHAO Congyang, LI Qinghai. Salient Object Detection via Feature Permutation and Space Activation[J]. Journal of Electronics & Information Technology, 2022, 44(3): 1093-1101. doi: 10.11999/JEIT210133

Salient Object Detection via Feature Permutation and Space Activation

doi: 10.11999/JEIT210133
Funds:  The National Key Research and Development Program (2016YFB0500505), The National Natural Science Foundation of China (61375025, 61075011, 60675018)
  • Received Date: 2021-02-05
  • Rev Recd Date: 2021-08-19
  • Available Online: 2021-09-04
  • Publish Date: 2022-03-28
  • Abstract: Salient object detection plays an important role in computer vision, and handling feature information at different scales is key to obtaining accurate predictions. This article makes two contributions. First, a feature permutation method for salient object detection is proposed. The method is a convolutional neural network built on an auto-encoding (encoder-decoder) structure: using the concept of scale representation introduced in this paper, it groups and permutes the multiscale feature maps taken from different layers of the network, which yields a more generalized salient object detection model and more accurate predictions. Second, a double-convolution residual block with FReLU activation is applied to the model output, so that more complete pixel information is retained and spatial information is activated as well. The two mechanisms are fused to guide the learning and training of the model. Finally, the proposed algorithm is compared with mainstream salient object detection algorithms, and the experimental results show that it achieves the best performance among them.
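
The abstract describes two mechanisms in prose only: grouping and permuting the multiscale feature maps drawn from different layers, and a double-convolution residual head with FReLU (funnel) activation on the model output. Below is a minimal PyTorch sketch of how such components are commonly built. It is not the authors' released implementation; the names scale_permute, FReLU, and DoubleConvResidual, the group count, and the channel sizes are illustrative assumptions.

import torch
import torch.nn as nn

def scale_permute(features, groups=4):
    """Group the channels of each multiscale feature map and permute the groups,
    i.e. a channel shuffle applied independently at every scale."""
    permuted = []
    for f in features:                          # f: (N, C, H, W), one tensor per scale
        n, c, h, w = f.shape
        assert c % groups == 0, "channel count must be divisible by the group count"
        f = f.view(n, groups, c // groups, h, w)
        f = f.transpose(1, 2).contiguous()      # interleave channels across groups
        permuted.append(f.view(n, c, h, w))
    return permuted

class FReLU(nn.Module):
    """Funnel activation (Ma et al., ECCV 2020): max(x, T(x)), where T is a
    depthwise 3x3 convolution followed by batch normalization."""
    def __init__(self, channels):
        super().__init__()
        self.t = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.max(x, self.t(x))

class DoubleConvResidual(nn.Module):
    """Two stacked 3x3 convolutions with a skip connection, activated by FReLU
    so that spatial context enters the activation as well."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.act1 = FReLU(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act2 = FReLU(channels)

    def forward(self, x):
        out = self.act1(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act2(out + x)               # residual addition, then activation

In a network of the kind the abstract describes, scale_permute would presumably be applied to the per-scale encoder features before they are fused by the decoder, and DoubleConvResidual would refine the final prediction map; both placements are assumptions made for illustration.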
  • [1]
    GIBSON K B, VO D T, and NGUYEN T Q. An investigation of dehazing effects on image and video coding[J]. IEEE Transactions on Image Processing, 2012, 21(2): 662–673. doi: 10.1109/TIP.2011.2166968
    [2]
    SULLIVAN G J, OHM J R, HAN W J, et al. Overview of the high efficiency video coding (HEVC) standard[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2012, 22(12): 1649–1668. doi: 10.1109/TCSVT.2012.2221191
    [3]
    OHM J R, SULLIVAN G J, SCHWARZ H, et al. Comparison of the coding efficiency of video coding standards—including high efficiency video coding (HEVC)[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2012, 22(12): 1669–1684. doi: 10.1109/TCSVT.2012.2221192
    [4]
    TREISMAN A M and GELADE G. A feature-integration theory of attention[J]. Cognitive Psychology, 1980, 12(1): 97–136. doi: 10.1016/0010-0285(80)90005-5
    [5]
    KOCH C and ULLMAN S. Shifts in selective visual attention: Towards the underlying neural circuitry[J]. Human Neurobiology, 1985, 4(4): 219–227.
    [6]
    ITTI L, KOCH C, and NIEBUR E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254–1259. doi: 10.1109/34.730558
    [7]
    LIU Nian, HAN Junwei, and YANG M H. PiCANet: Learning pixel-wise contextual attention for saliency detection[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 3089–3098.
    [8]
    CHEN Shuhan, TAN Xiuli, WANG Ben, et al. Reverse attention for salient object detection[C]. The 15th European Conference on Computer Vision, Munich, Germany, Springer, 2018: 236–252.
    [9]
    LI Xin, YANG Fan, CHENG Hong, et al. Contour knowledge transfer for salient object detection[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 370–385.
    [10]
    QIN Xuebin, ZHANG Zichen, HUANG Chenyang, et al. BASNet: Boundary-aware salient object detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 7471–7481.
    [11]
    WU Zhe, SU Li, and HUANG Qingming. Cascaded partial decoder for fast and accurate salient object detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 3902–3911.
    [12]
    LIU Jiangjiang, HOU Qibin, CHENG Mingming, et al. A simple pooling-based design for real-time salient object detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 3912–3921.
    [13]
    MA Guangxiao, CHEN Chenglizhao, LI Shuai, et al. Salient object detection via multiple instance joint re-learning[J]. IEEE Transactions on Multimedia, 2020, 22(2): 324–336. doi: 10.1109/TMM.2019.2929943
    [14]
    WEI Jun, WANG Shuhui, WU Zhe, et al. Label decoupling framework for salient object detection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 13025–13034.
    [15]
    PANG Youwei, ZHAO Xiaoqi, ZHANG Lihe, et al. Multi-scale interactive network for salient object detection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 9413–9422.
    [16]
    LI Haofeng, LI Guanbin, and YU Yizhou. ROSA: Robust salient object detection against adversarial attacks[J]. IEEE Transactions on Cybernetics, 2020, 50(11): 4835–4847. doi: 10.1109/TCYB.2019.2914099
    [17]
    CHEN Shuhan, TAN Xiuli, WANG Ben, et al. Reverse attention-based residual network for salient object detection[J]. IEEE Transactions on Image Processing, 2020, 29: 3763–3776. doi: 10.1109/TIP.2020.2965989
    [18]
    HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [19]
    ZHANG Xianyu, ZHOU Xinyu, LIN Mengxiao, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 6848–6856.
    [20]
    LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 936–944.
    [21]
    MA Ningning, ZHANG Xiangyu, and SUN Jian. Funnel activation for visual recognition[C]. The 16th European Conference on Computer Vision, Glasgow, UK, 2020: 1–17.
    [22]
    RAHMAN M A and WANG Yang. Optimizing intersection-over-union in deep neural networks for image segmentation[C]. The 12th International Symposium on Visual Computing, Las Vegas, USA, 2016: 234–244.
    [23]
    MARGOLIN R, ZELNIK-MANOR L, and TAL A. How to evaluate foreground maps[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 248–255.
    [24]
    FAN Dengping, CHENG Mingming, LIU Yun, et al. Structure-measure: A new way to evaluate foreground maps[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 4558–4567.
    [25]
    PERAZZI F, KRÄHENBÜHL P, PRITCH Y, et al. Saliency filters: Contrast based filtering for salient region detection[C]. 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 733–740.
