Polyp Segmentation Using Stair-structured U-Net

SHI Yonggang, LI Yi, ZHOU Zhiguo, ZHANG Yue, XIA Zhuoyan

Citation: SHI Yonggang, LI Yi, ZHOU Zhiguo, ZHANG Yue, XIA Zhuoyan. Polyp Segmentation Using Stair-structured U-Net[J]. Journal of Electronics & Information Technology, 2022, 44(1): 39-47. doi: 10.11999/JEIT210916


doi: 10.11999/JEIT210916
Funds: The National Natural Science Foundation of China (60971133, 61271112)
Article information
    About the authors:

    SHI Yonggang: Male, born in 1969, associate professor. His research interests include medical image segmentation, object detection and recognition, object classification, image restoration, and super-resolution reconstruction

    LI Yi: Female, born in 1996, master's student. Her research interest is medical image segmentation

    ZHOU Zhiguo: Male, born in 1977, associate professor. His research interests include intelligent perception and navigation

    ZHANG Yue: Male, born in 1996, master's student. His research interests include medical image processing and deep learning

    XIA Zhuoyan: Male, born in 1997, master's student. His research interests include image segmentation, object detection and classification, and object recognition

    Corresponding author:

    SHI Yonggang, ygshi@bit.edu.cn

  • CLC number: TN911.73; R735.34

  • Abstract: Accurate segmentation of colon polyps is of great significance for the diagnosis and treatment of colorectal cancer, yet existing segmentation methods commonly suffer from artifacts and low accuracy. This paper proposes a Stair-structured U-Net (SU-Net) for colon polyp segmentation. Built on the U-shaped structure of U-Net, SU-Net uses the Kronecker product to expand the standard dilated convolution kernel, forming a Kronecker dilated convolution downsampling that effectively enlarges the receptive field and recovers the detail features that conventional dilated convolution tends to lose. A fusion module with a stair structure, which follows the principles of expansion and stacking to form a stair-like hierarchy, captures contextual information and aggregates features from multiple scales. In the decoder, a convolutional reconstruction upsampling module is introduced to generate dense pixel-level prediction maps, recovering the fine information missed by bilinear-interpolation upsampling. Evaluated on the Kvasir-SEG and CVC-EndoSceneStill datasets, the model reaches Dice similarity coefficients of 87.51% and 82.30% and Intersection over Union (IoU) scores of 88.75% and 85.64%, respectively. Experimental results show that the proposed method alleviates the low segmentation accuracy caused by overexposure and low contrast, eliminates image artifacts outside the polyp boundary and incoherence inside it, and outperforms other polyp segmentation methods.
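
The Kronecker dilated convolution at the heart of the KACD module expands a standard kernel with a Kronecker product, following the Kronecker convolution of TKCN [16]. Below is a minimal PyTorch sketch of that kernel expansion, assuming the K ⊗ F formulation in which F is an r1 × r1 binary factor whose top-left r2 × r2 block is ones (standard dilated convolution is the special case r2 = 1); the function name and the r1, r2 defaults are illustrative, not taken from the paper.

```python
# Minimal sketch of Kronecker kernel expansion, assuming the K (x) F
# formulation of Kronecker convolution [16]; standard dilated convolution
# is the special case r2 = 1 (a single 1 in the factor's corner).
import torch
import torch.nn.functional as F


def kronecker_dilated_weight(weight, r1=3, r2=2):
    """Expand a (out_c, in_c, k, k) conv weight with a binary Kronecker factor."""
    out_c, in_c, k, _ = weight.shape
    factor = torch.zeros(r1, r1, dtype=weight.dtype, device=weight.device)
    factor[:r2, :r2] = 1.0  # the ones block retains detail that pure dilation skips
    expanded = torch.kron(weight.reshape(out_c * in_c, k, k), factor)
    return expanded.reshape(out_c, in_c, k * r1, k * r1)


x = torch.randn(1, 3, 64, 64)
w = torch.randn(8, 3, 3, 3)                   # a standard 3x3 kernel bank
w_kron = kronecker_dilated_weight(w)          # 9x9 effective kernel
y = F.conv2d(x, w_kron, padding=w_kron.shape[-1] // 2)
print(y.shape)                                # torch.Size([1, 8, 64, 64])
```

A strided version of the same conv2d call would give the downsampling variant; how SU-Net arranges these layers exactly is described in the paper body, not reproduced here.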
  • Figure 1. Overall framework of SU-Net

    Figure 2. Different types of convolution kernels and the KACD module

    Figure 3. Stair-structured fusion module

    Figure 4. Convolutional reconstruction upsampling module

    Figure 5. Segmentation results of SU-Net and other models on the EndoSceneStill dataset

    Figure 6. Segmentation results of SU-Net and other models on the Kvasir-SEG dataset
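
The convolutional reconstruction upsampling module of Figure 4 replaces bilinear interpolation with a learned, dense pixel-level reconstruction. Since the paper cites the sub-pixel convolution of Shi et al. [19], here is a hedged sketch in that spirit; the class name and layer shapes are assumptions rather than the published module.

```python
import torch
import torch.nn as nn


class ConvReconstructionUpsample(nn.Module):
    """Hypothetical 2x upsampling: a convolution produces scale^2 feature
    maps per output channel, then PixelShuffle rearranges them into a
    (scale x scale) finer spatial grid, as in sub-pixel convolution [19]."""

    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))


up = ConvReconstructionUpsample(64, 32)
print(up(torch.randn(1, 64, 28, 28)).shape)  # torch.Size([1, 32, 56, 56])
```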

    Table 1. List of SU-Net ablation experiments

    No.  Experiment description
    1    Baseline
    2    Replace only the dilated convolutions in the baseline with Kronecker dilated convolutions
    3    Replace the downsampling in Experiment 2 with Kronecker dilated convolution downsampling
    4    Add the stair-structured fusion module between the encoder and decoder of Experiment 3
    5    SU-Net (full model)
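
Experiment 4 inserts the stair-structured fusion module between encoder and decoder. The abstract describes it only as following an "expand and stack" principle to aggregate multi-scale context, so the sketch below is a speculative reading: each branch widens the receptive field by dilation (expansion) and consumes the previous branch's output (stacking), with a 1x1 convolution aggregating all scales. Every name and dilation rate here is an assumption, not the authors' design.

```python
import torch
import torch.nn as nn


class StairFusion(nn.Module):
    """Speculative 'expand and stack' stair: successive dilated branches,
    each fed by the previous one, aggregated by a 1x1 convolution."""

    def __init__(self, ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(ch * len(rates), ch, kernel_size=1)

    def forward(self, x):
        outs, prev = [], x
        for branch in self.branches:
            prev = torch.relu(branch(prev))  # stack: output feeds the next step
            outs.append(prev)                # keep every scale for fusion
        return self.fuse(torch.cat(outs, dim=1))


print(StairFusion(64)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```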

    Table 2. Quantitative results of the ablation experiments on the EndoSceneStill dataset

    Metric        Exp. 1   Exp. 2   Exp. 3   Exp. 4   Exp. 5
    Recall        0.7819   0.8195   0.8028   0.8027   0.8237
    Specificity   0.9931   0.9908   0.9946   0.9947   0.9929
    Precision     0.9185   0.8747   0.9179   0.9119   0.9007
    F1            0.7899   0.7994   0.8088   0.8174   0.8230
    F2            0.7791   0.8025   0.7980   0.8046   0.8175
    IoU           0.7194   0.7214   0.7360   0.7450   0.7499
    IoUB          0.9601   0.9599   0.9269   0.9627   0.9630
    IoUM          0.8397   0.8407   0.8494   0.8538   0.8564
    Dice          0.7899   0.7994   0.8088   0.8174   0.8230
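
The metrics above follow the standard binary-segmentation definitions; F1 and Dice coincide for binary masks, which is why those two rows are identical. Judging from the numbers, IoU is computed on the polyp (foreground) class, IoUB on the background, and IoUM is their mean: the IoUM column matches the average of the IoU and IoUB columns in almost every case. A minimal sketch of the per-image computation on binarized masks (the exact averaging protocol over the test set is an assumption):

```python
import numpy as np


def segmentation_metrics(pred, gt, eps=1e-8):
    """Standard binary-mask metrics; pred and gt are arrays of 0/1 labels."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    recall = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    specificity = tn / (tn + fp + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)          # equals F1 on binary masks
    iou = tp / (tp + fp + fn + eps)                   # foreground (polyp) IoU
    iou_b = tn / (tn + fn + fp + eps)                 # background IoU
    f2 = 5 * precision * recall / (4 * precision + recall + eps)
    return dict(recall=recall, specificity=specificity, precision=precision,
                dice=dice, iou=iou, iou_b=iou_b, iou_m=(iou + iou_b) / 2, f2=f2)
```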

    Table 3. Quantitative results of the ablation experiments on the Kvasir-SEG dataset

    Metric        Exp. 1   Exp. 2   Exp. 3   Exp. 4   Exp. 5
    Recall        0.8664   0.8631   0.8636   0.8750   0.8752
    Specificity   0.9840   0.9854   0.9844   0.9858   0.9866
    Precision     0.8921   0.9006   0.9163   0.9021   0.9207
    F1            0.8560   0.8607   0.8654   0.8689   0.8751
    F2            0.8574   0.8673   0.8602   0.8681   0.8718
    IoU           0.7866   0.7920   0.7957   0.8032   0.8173
    IoUB          0.9534   0.9539   0.9520   0.9532   0.9577
    IoUM          0.8700   0.8730   0.8738   0.8782   0.8875
    Dice          0.8560   0.8607   0.8654   0.8689   0.8751

    Table 4. Quantitative evaluation of different models on the EndoSceneStill dataset

    Model            Recall   Specificity  Precision  F1       F2       IoU      IoUB     IoUM     Dice
    U-Net            0.6839   0.9954       0.9222     0.7113   0.6910   0.6314   0.9515   0.7914   0.7113
    Attention U-Net  0.6744   0.9962       0.9373     0.7084   0.6833   0.6260   0.9504   0.7882   0.7084
    TKCN             0.8110   0.9866       0.8565     0.7819   0.7875   0.7023   0.9536   0.8280   0.7819
    Xception         0.8017   0.9920       0.8964     0.7940   0.7906   0.7220   0.9575   0.8398   0.7940
    DeepLabV3+       0.7611   0.9919       0.8543     0.7542   0.7505   0.6833   0.9545   0.8189   0.7542
    PraNet           0.7973   0.9937       0.9215     0.8016   0.7945   0.7349   0.9610   0.8480   0.8016
    SU-Net           0.8237   0.9929       0.9007     0.8230   0.8175   0.7499   0.9630   0.8564   0.8230

    Table 5. Quantitative evaluation of different models on the Kvasir-SEG dataset

    Model            Recall   Specificity  Precision  F1       F2       IoU      IoUB     IoUM     Dice
    U-Net            0.8408   0.9707       0.8315     0.8017   0.8161   0.7099   0.9331   0.8215   0.8017
    Attention U-Net  0.8576   0.9682       0.8317     0.8105   0.8283   0.7249   0.9340   0.8294   0.8105
    TKCN             0.8651   0.9826       0.8989     0.8552   0.8567   0.7811   0.9473   0.8642   0.8552
    Xception         0.8702   0.9831       0.9041     0.8662   0.8639   0.7982   0.9504   0.8743   0.8662
    DeepLabV3+       0.8879   0.9812       0.8938     0.8725   0.8770   0.8110   0.9550   0.8830   0.8725
    PraNet           0.8763   0.9859       0.9154     0.8743   0.8718   0.8110   0.9557   0.8833   0.8743
    SU-Net           0.8752   0.9866       0.9207     0.8751   0.8718   0.8173   0.9577   0.8875   0.8751
  • [1] GSCHWANTLER M, KRIWANEK S, LANGNER E, et al. High-grade dysplasia and invasive carcinoma in colorectal adenomas: A multivariate analysis of the impact of adenoma and patient characteristics[J]. European Journal of Gastroenterology & Hepatology, 2002, 14(2): 183–188. doi: 10.1097/00042737-200202000-00013
    [2] ARNOLD M, SIERRA M S, LAVERSANNE M, et al. Global patterns and trends in colorectal cancer incidence and mortality[J]. Gut, 2017, 66(4): 683–691. doi: 10.1136/gutjnl-2015-310912
    [3] PUYAL J G B, BHATIA K K, BRANDAO P, et al. Endoscopic polyp segmentation using a hybrid 2D/3D CNN[C]. 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 2020: 295–305.
    [4] TASHK A, HERP J, and NADIMI E. Fully automatic polyp detection based on a novel u-net architecture and morphological post-process[C]. 2019 IEEE International Conference on Control, Artificial Intelligence, Robotics & Optimization, Athens, Greece, 2019: 37–41.
    [5] WANG Pu, XIAO Xiao, BROWN J R G, et al. Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy[J]. Nature Biomedical Engineering, 2018, 2(10): 741–748. doi: 10.1038/s41551-018-0301-3
    [6] SORNAPUDI S, MENG F, and YI S. Region-based automated localization of colonoscopy and wireless capsule endoscopy polyps[J]. Applied Sciences, 2019, 9(12): 2404. doi: 10.3390/app9122404
    [7] FAN Dengping, JI Gepeng, ZHOU Tao, et al. PraNet: Parallel reverse attention network for polyp segmentation[C]. 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 2020: 263–273.
    [8] FENG Ruiwei, LEI Biwen, WANG Wenzhe, et al. SSN: A stair-shape network for real-time polyp segmentation in colonoscopy images[C]. 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, USA, 2020: 225–229.
    [9] JI Gepeng, CHOU Yucheng, FAN Dengping, et al. Progressively normalized self-attention network for video polyp segmentation[J]. arXiv: 2105.08468, 2021.
    [10] LIN Ailiang, CHEN Bingzhi, XU Jiayu, et al. DS-TransUNet: Dual swin transformer U-Net for medical image segmentation[J]. arXiv: 2106.06716, 2021.
    [11] ZHANG Yundong, LIU Huiye, and HU Qiang. TransFuse: Fusing transformers and CNNs for medical image segmentation[J]. arXiv: 2102.08005, 2021.
    [12] JHA D, SMEDSRUD P H, RIEGLER M A, et al. Kvasir-SEG: A segmented polyp dataset[C]. 26th International Conference on Multimedia Modeling, Daejeon, Korea, 2020: 451–462.
    [13] VÁZQUEZ D, BERNAL J, SÁNCHEZ F J, et al. A benchmark for endoluminal scene segmentation of colonoscopy images[J]. Journal of Healthcare Engineering, 2017, 2017: 4037190. doi: 10.1155/2017/4037190
    [14] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241.
    [15] LONG J, SHELHAMER E, and DARRELL T. Fully convolutional networks for semantic segmentation[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, USA, 2015: 3431–3440.
    [16] WU Tianyi, TANG Sheng, ZHANG Rui, et al. Tree-structured kronecker convolutional network for semantic segmentation[C]. 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 2019: 940–945.
    [17] CHOLLET F. Xception: Deep learning with depthwise separable convolutions[C]. The 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 2017: 1800–1807.
    [18] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. The 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 770–778.
    [19] SHI Wenzhe, CABALLERO J, HUSZÁR F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]. The 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 1874–1883.
    [20] KINGMA D P and BA J. Adam: A method for stochastic optimization[J]. arXiv: 1412.6980, 2017.
    [21] OKTAY O, SCHLEMPER J, LE FOLGOC L, et al. Attention U-Net: Learning where to look for the pancreas[J]. arXiv: 1804.03999v3, 2018.
    [22] CHEN L C, ZHU Yukun, PAPANDREOU G, et al. Encoder-decoder with Atrous separable convolution for semantic image segmentation[C]. The 15th European Conference on Computer Vision (ECCV), Munich, Germany, 2018: 833–851.
Publication history
  • Received: 2021-09-01
  • Revised: 2021-12-21
  • Accepted: 2021-12-21
  • Available online: 2021-12-27
  • Published: 2022-01-10
