A Medical Image Segmentation Network with Boundary Enhancement

SUN Junmei, GE Qingqing, LI Xiumei, ZHAO Baoqi

Citation: SUN Junmei, GE Qingqing, LI Xiumei, ZHAO Baoqi. A Medical Image Segmentation Network with Boundary Enhancement[J]. Journal of Electronics & Information Technology, 2022, 44(5): 1643-1652. doi: 10.11999/JEIT210784


doi: 10.11999/JEIT210784
    About the Authors:

    SUN Junmei: female, born in 1974, Ph.D., associate professor; research interests include deep learning and intelligent software systems

    GE Qingqing: female, born in 1999, M.S. candidate; research interests include medical image processing and object detection

    LI Xiumei: female, born in 1978, Ph.D., professor; research interests include time-frequency analysis and its applications, compressed sensing, and deep learning

    ZHAO Baoqi: male, born in 1994, M.S.; research interests include computer vision and medical image processing

    Corresponding author:

    LI Xiumei, lixiumei@hznu.edu.cn

  • CLC number: TN911.73; TP391.41


Funds: The National Natural Science Foundation of China (61801159, 61571174), The Open Fund of Engineering Research Center for Software Testing and Evaluation of Fujian Province (ST2019004), The Science and Technology Plan Project of Hangzhou (20201203B124)
  • Abstract: To address the unclear boundary segmentation and large missing regions produced by traditional medical image segmentation networks, this paper proposes a medical image segmentation network with boundary enhancement (AS-UNet). A mask-edge extraction algorithm is used to obtain mask edge maps; a Boundary Attention Block (BAB) combining multi-scale feature maps is introduced into the last three layers of the UNet expansive path; and a combined loss function is proposed to improve segmentation accuracy. At test time, the BAB is discarded to reduce the parameter count. Experiments on three medical image segmentation datasets of different types, Glas, DRIVE, and ISIC2018, show that AS-UNet outperforms the other segmentation methods compared.
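The mask-edge extraction step mentioned in the abstract can be illustrated with a minimal sketch. The paper's exact algorithm is not reproduced on this page; a common way to obtain an edge map from a binary segmentation mask is the morphological gradient (dilation minus erosion with a 3×3 structuring element), shown here in plain NumPy. The function name `mask_edge` is illustrative, not from the paper.

```python
import numpy as np

def mask_edge(mask: np.ndarray) -> np.ndarray:
    """Extract a thin edge map from a binary mask.

    Morphological gradient: 3x3 dilation minus 3x3 erosion,
    implemented with plain NumPy shifted views.
    """
    m = (mask > 0).astype(np.uint8)
    padded = np.pad(m, 1, mode="edge")
    # Stack the 3x3 neighbourhood (centre included) of every pixel.
    shifts = [padded[i:i + m.shape[0], j:j + m.shape[1]]
              for i in range(3) for j in range(3)]
    stack = np.stack(shifts)
    dilated = stack.max(axis=0)  # morphological dilation
    eroded = stack.min(axis=0)   # morphological erosion
    return dilated - eroded      # 1 on the boundary, 0 elsewhere
```

For a 3×3 square object inside a 5×5 mask, the result is 1 everywhere except the square's single interior pixel.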
  • Figure 1  AS-UNet network architecture

    Figure 2  BAB structure

    Figure 3  Image masks and their corresponding edge maps

    Figure 4  Attention module

    Figure 5  Transfer model

    Figure 6  Segmentation results of different models on the Glas dataset

    Figure 7  Segmentation results of different models on the DRIVE dataset

    Figure 8  Segmentation results of different models on the ISIC2018 dataset

    Table 1  Segmentation results of different models on the three datasets

    | Method  | Glas: Mean Dice / F1 / Hausdorff | DRIVE: Mean Dice / F1 / Hausdorff | ISIC2018: Mean Dice / F1 / Hausdorff | Params (M) |
    |---------|----------------------------------|-----------------------------------|--------------------------------------|------------|
    | UNet    | 0.8620 / 0.9120 / 120.82         | 0.7403 / 0.8806 / 55.50           | 0.8684 / 0.8932 / 42.48              | 7.93       |
    | UNet++  | 0.8679 / 0.9238 / 89.19          | 0.7545 / 0.8992 / 52.17           | 0.8693 / 0.9016 / 38.19              | 9.24       |
    | DRU-Net | 0.8724 / 0.9131 / 128.09         | 0.7402 / 0.8939 / 95.23           | 0.8731 / 0.9050 / 41.22              | 3.57       |
    | KiU-Net | 0.8668 / 0.9154 / 101.45         | 0.7436 / 0.8828 / 50.79           | 0.8667 / 0.9159 / 36.23              | 0.75       |
    | Ours    | 0.8839 / 0.9341 / 89.02          | 0.7619 / 0.9070 / 44.61           | 0.8837 / 0.9223 / 34.95              | 7.94       |

    Table 2  Ablation experiments (comparison of base networks)

    | Method     | Glas: Mean Dice / F1 / Hausdorff | DRIVE: Mean Dice / F1 / Hausdorff | ISIC2018: Mean Dice / F1 / Hausdorff | Params (M) |
    |------------|----------------------------------|-----------------------------------|--------------------------------------|------------|
    | UNet       | 0.8620 / 0.9120 / 120.82         | 0.7403 / 0.8806 / 55.50           | 0.8684 / 0.8932 / 42.48              | 7.93       |
    | UNet +BAB  | 0.8842 / 0.9340 / 90.11          | 0.7619 / 0.9071 / 44.90           | 0.8835 / 0.9223 / 34.77              | 8.95       |
    | UNet +Sub  | 0.8839 / 0.9341 / 89.02          | 0.7619 / 0.9070 / 44.61           | 0.8837 / 0.9223 / 34.95              | 7.94       |
    | FCN        | 0.7931 / 0.7171 / 135.12         | 0.6671 / 0.5863 / 59.12           | 0.8026 / 0.8015 / 50.30              | 9.31       |
    | FCN +BAB   | 0.8175 / 0.7346 / 120.47         | 0.7038 / 0.6034 / 49.35           | 0.8294 / 0.8203 / 44.88              | 10.09      |
    | FCN +Sub   | 0.8174 / 0.7346 / 121.33         | 0.7038 / 0.6032 / 49.81           | 0.8296 / 0.8201 / 44.95              | 9.31       |

    Table 3  Segmentation results of different attention modules within the BAB

    | Attention module in BAB | Glas: Mean Dice / F1 / Hausdorff | DRIVE: Mean Dice / F1 / Hausdorff | ISIC2018: Mean Dice / F1 / Hausdorff |
    |-------------------------|----------------------------------|-----------------------------------|--------------------------------------|
    | None                    | 0.8768 / 0.8993 / 108.51         | 0.7535 / 0.8754 / 49.05           | 0.8799 / 0.9096 / 38.18              |
    | scSE                    | 0.8803 / 0.9208 / 97.08          | 0.7595 / 0.8812 / 46.18           | 0.8812 / 0.9166 / 37.51              |
    | Ours                    | 0.8839 / 0.9341 / 89.02          | 0.7619 / 0.9070 / 44.61           | 0.8837 / 0.9223 / 34.95              |

    Table 4  Segmentation results with different loss functions

    | Loss function        | Glas: Mean Dice / F1 / Hausdorff | DRIVE: Mean Dice / F1 / Hausdorff | ISIC2018: Mean Dice / F1 / Hausdorff |
    |----------------------|----------------------------------|-----------------------------------|--------------------------------------|
    | Dice loss            | 0.8779 / 0.9250 / 90.19          | 0.7605 / 0.8891 / 45.02           | 0.8800 / 0.9195 / 35.30              |
    | Boundary loss        | 0.8648 / 0.9189 / 92.53          | 0.7518 / 0.8773 / 47.91           | 0.8777 / 0.9173 / 36.19              |
    | Dice + Boundary loss | 0.8839 / 0.9341 / 89.02          | 0.7619 / 0.9070 / 44.61           | 0.8837 / 0.9223 / 34.95              |
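The combined loss that performs best in Table 4 can be sketched as follows. This is an illustrative NumPy version under common formulations (soft Dice loss, plus a Kervadec-style boundary loss that averages the predicted probabilities weighted by a precomputed signed distance map of the ground truth); the paper's exact weighting between the two terms is not given on this page, so `alpha` is a hypothetical parameter.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss over probability maps in [0, 1].
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_loss(pred, dist_map):
    # Boundary loss in the style of Kervadec et al.: predicted
    # probabilities weighted by a signed distance map of the ground-truth
    # boundary (negative inside the object, positive outside), averaged.
    return (pred * dist_map).mean()

def combined_loss(pred, target, dist_map, alpha=0.5):
    # alpha is a hypothetical trade-off weight, not taken from the paper.
    return alpha * dice_loss(pred, target) + (1.0 - alpha) * boundary_loss(pred, dist_map)
```

A perfect prediction drives the Dice term to zero, while predictions that stay inside the object (negative signed distance) make the boundary term negative, rewarding tight boundary agreement.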
Figures (8) / Tables (4)
Metrics
  • Article views: 1441
  • HTML full-text views: 1212
  • PDF downloads: 275
  • Citations: 0
Publication History
  • Received: 2021-08-06
  • Revised: 2021-12-10
  • Accepted: 2021-12-14
  • Available online: 2021-12-26
  • Published: 2022-05-25
