
Computed-Tomography Image Segmentation of Cerebral Hemorrhage Based on Improved U-shaped Neural Network

HU Min, ZHOU Xiudong, HUANG Hongcheng, ZHANG Guanghua, TAO Yang

HU Min, ZHOU Xiudong, HUANG Hongcheng, ZHANG Guanghua, TAO Yang. Computed-Tomography Image Segmentation of Cerebral Hemorrhage Based on Improved U-shaped Neural Network[J]. Journal of Electronics & Information Technology, 2022, 44(1): 127-137. doi: 10.11999/JEIT200996


doi: 10.11999/JEIT200996
Funds: The National Key Research and Development Program of China (2019YFB2102001); The Research Project of Shanxi Scholarship Council of China (2020-149)
Detailed information
    Author biographies:

    HU Min: Female, born in 1971, Professor. Her research interests include digital media technology and human-computer interaction theory and technology applications.

    ZHOU Xiudong: Male, born in 1995, Master's student. His research interest is intelligent multimedia information processing.

    HUANG Hongcheng: Male, born in 1979, Associate Professor. His research interests include human-machine fusion computational intelligence and intelligent multimedia information processing.

    ZHANG Guanghua: Male, born in 1986, Associate Professor. His main research interests include quantum-dot miniature multispectral imaging, multispectral image processing, and medical image processing.

    TAO Yang: Male, born in 1964, Professor. His research interests include artificial intelligence, big data, and computational intelligence.

    Corresponding author:

    HUANG Hongcheng    huanghc@cqupt.edu.cn

  • CLC number: TN911.73; TP391.41

  • Abstract: To address the low segmentation accuracy caused by the multi-scale nature of lesion regions in cerebral hemorrhage CT images, this paper proposes an image segmentation model based on an improved U-shaped neural network (AU-Net+). First, the model uses the U-Net encoder to encode the features of cerebral hemorrhage CT images and applies the proposed Residual Octave Convolution (ROC) block to the skip connections of the U-shaped network, so that features from different levels are fused more effectively. Second, a hybrid attention mechanism is applied to the fused features to improve feature extraction for the target regions. Finally, an improved Dice loss function further strengthens the model's learning of small target regions in cerebral hemorrhage CT images. To verify the effectiveness of the model, experiments are conducted on a cerebral hemorrhage CT image dataset: compared with U-Net, Attention U-Net, UNet++, and CE-Net, the mIoU metric improves by 20.9%, 3.6%, 7.0%, and 3.1%, respectively, showing that the AU-Net+ model achieves better segmentation performance.
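The page does not spell out the improved Dice loss, but Figure 13 examines how the exponent applied to ${y_{{\rm{pred}}}}$ affects segmentation, which suggests an exponent-weighted Dice term. The snippet below is only a hedged sketch of such a loss; the function name `exponent_dice_loss`, the exponent `gamma`, and the exact form are assumptions, not the authors' formulation.

```python
import tensorflow as tf

def exponent_dice_loss(y_true, y_pred, gamma=1.5, eps=1e-6):
    """Hedged sketch of an exponent-weighted Dice loss: raising the
    predicted probabilities to a power re-weights small lesion regions.
    `gamma` and the exact form are assumptions, not the paper's loss."""
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    y_pred_pow = tf.pow(y_pred, gamma)           # exponent on y_pred (cf. Figure 13)
    intersection = tf.reduce_sum(y_true * y_pred_pow)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred_pow)
    return 1.0 - (2.0 * intersection + eps) / (union + eps)
```

If used, such a loss can be passed directly to `model.compile(loss=exponent_dice_loss)` in a Keras workflow.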
  • Figure 1  AU-Net+ network architecture

    Figure 2  Hybrid attention mechanism

    Figure 3  Position attention mechanism

    Figure 4  Channel attention mechanism

    Figure 5  Octave convolution computation process

    Figure 6  Residual octave convolution (ROC) module

    Figure 7  Experimental flowchart

    Figure 8  Comparison of preprocessing results

    Figure 9  Segmentation results for typical cases

    Figure 10  AU-Net+ model training curves

    Figure 11  Segmentation results

    Figure 12  Segmentation results of the experiments

    Figure 13  Effect of the exponent of ${y_{{\rm{pred}}}}$ on segmentation

    Table 1  AU-Net+ network structure

    Encoder-decoder                     Skip connection
    conv2d_1 (UConv2D)                  up_sampling2d_4 (Conv2DTrans)
    max_pooling2d_1 (MaxPooling2D)      concatenate_4 (Concatenate)
    conv2d_2 (UConv2D)                  roc_1 (Roc)
    max_pooling2d_2 (MaxPooling2D)      up_sampling2d_5 (Conv2DTrans)
    conv2d_3 (UConv2D)                  concatenate_5 (Concatenate)
    max_pooling2d_3 (MaxPooling2D)      roc_2 (Roc)
    conv2d_4 (UConv2D)                  up_sampling2d_6 (Conv2DTrans)
    dropout_1 (Dropout)                 add_1 (Add)
    up_sampling2d_1 (Conv2DTrans)       att_1 (Attention)
    concatenate_1 (Concatenate)         up_sampling2d_7 (Conv2DTrans)
    conv2d_5 (UConv2D)                  concatenate_6 (Concatenate)
    up_sampling2d_2 (Conv2DTrans)       roc_3 (Roc)
    concatenate_2 (Concatenate)         up_sampling2d_8 (Conv2DTrans)
    conv2d_6 (UConv2D)                  add_2 (Add)
    up_sampling2d_3 (Conv2DTrans)       att_2 (Attention)
    concatenate_3 (Concatenate)         up_sampling2d_9 (Conv2DTrans)
    conv2d_7 (UConv2D)                  add_3 (Add)
    conv2d_8 (EConv2D)                  att_3 (Attention)
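The layer names in Table 1 (Conv2D, MaxPooling2D, Conv2DTrans, Concatenate, Dropout) follow Keras conventions. The sketch below reproduces only the plain encoder-decoder wiring those names imply; it deliberately omits the Roc and Attention blocks of the skip-connection column, and the filter counts, input size, and internals of UConv2D/EConv2D are assumptions rather than the paper's exact AU-Net+ configuration.

```python
from tensorflow.keras import layers, Model, Input

def double_conv(x, filters):
    """Stand-in for the UConv2D blocks of Table 1 (assumed to be two
    3x3 convolutions with ReLU; the actual block may differ)."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet_skeleton(input_shape=(256, 256, 1)):
    """Plain U-Net wiring implied by the Keras-style names in Table 1;
    the ROC and attention modules of AU-Net+ are not included."""
    inputs = Input(shape=input_shape)

    # Encoder (conv2d_1..conv2d_4 with max pooling, then dropout_1)
    c1 = double_conv(inputs, 64)
    p1 = layers.MaxPooling2D()(c1)
    c2 = double_conv(p1, 128)
    p2 = layers.MaxPooling2D()(c2)
    c3 = double_conv(p2, 256)
    p3 = layers.MaxPooling2D()(c3)
    c4 = layers.Dropout(0.5)(double_conv(p3, 512))

    # Decoder: transposed convolutions plus skip connections
    u3 = layers.Conv2DTranspose(256, 2, strides=2, padding="same")(c4)
    c5 = double_conv(layers.Concatenate()([u3, c3]), 256)
    u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(c5)
    c6 = double_conv(layers.Concatenate()([u2, c2]), 128)
    u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c6)
    c7 = double_conv(layers.Concatenate()([u1, c1]), 64)

    # conv2d_8 (EConv2D) is assumed to be a 1x1 sigmoid output layer
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c7)
    return Model(inputs, outputs)
```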

    Table 2  Confusion matrix for classification results

    Predicted \ Actual      Positive        Negative
    Positive                ${\rm{TP}}$     ${\rm{FP}}$
    Negative                ${\rm{FN}}$     ${\rm{TN}}$
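For reference, the evaluation metrics reported in Tables 3-6 can be derived from the confusion-matrix counts in Table 2. The helper below uses the standard definitions (with mIoU averaged over foreground and background), which are assumed, not stated on this page, to match the paper's usage.

```python
def segmentation_metrics(tp, fp, fn, tn, eps=1e-8):
    """Standard metrics computed from the Table 2 confusion-matrix counts.
    mIoU is averaged over foreground and background; the definitions are
    assumed to match those used in the paper."""
    iou_fg = tp / (tp + fp + fn + eps)            # foreground IoU
    iou_bg = tn / (tn + fp + fn + eps)            # background IoU
    return {
        "mIoU": (iou_fg + iou_bg) / 2.0,
        "VOE": 1.0 - iou_fg,                      # volumetric overlap error
        "Recall": tp / (tp + fn + eps),           # sensitivity
        "DICE": 2.0 * tp / (2.0 * tp + fp + fn + eps),
        "Specificity": tn / (tn + fp + eps),
    }
```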

    Table 3  Statistics of the evaluation metrics

    Metric      mIoU     VOE      Recall   DICE     Specificity
    Mean        0.862    0.021    0.912    0.924    0.987
    Variance    0.009    0.001    0.004    0.002    0.002
    Median      0.901    0.023    0.935    0.953    0.998

    Table 4  Comparison of experimental results

    Method (parameters)         Iterations   mIoU     VOE      Recall   DICE     Specificity
    U-Net (31377858)            4600         0.653    0.043    0.731    0.706    0.974
    Attention U-Net (31901542)  4600         0.826    0.021    0.861    0.905    0.977
    U-Net++ (36165192)          4800         0.792    0.025    0.833    0.883    0.976
    CE-Net (29003094)           4500         0.831    0.022    0.873    0.911    0.981
    AU-Net+ (37646416)          5000         0.862    0.021    0.912    0.924    0.987

    Table 5  Metric comparison for the analysis of the hybrid attention mechanism and the ROC structure

    Model        mIoU     VOE      Recall   DICE     Specificity
    Network_1    0.661    0.042    0.735    0.714    0.976
    Network_2    0.835    0.025    0.841    0.893    0.974
    Network_3    0.781    0.041    0.744    0.723    0.985
    Network_4    0.842    0.023    0.862    0.905    0.986
    AU-Net+      0.862    0.021    0.912    0.924    0.987

    Table 6  Comparison of experimental results

    Model               Parameters   mIoU     VOE      Recall   DICE     Specificity
    Attention U-Net*    38654416     0.804    0.027    0.853    0.896    0.956
    Attention U-Net     31901542     0.826    0.021    0.861    0.905    0.977
    AU-Net+             37646416     0.862    0.021    0.912    0.924    0.987
  • [1] TAN Shanfeng, FANG Fang, CHEN Bing, et al. Analysis of prognostic factors of cerebral infarction after cerebral hernia[J]. Hainan Medical Journal, 2014, 25(3): 400–402. doi: 10.3969/j.issn.1003-6350.2014.03.0152
    [2] SUN Mingjie, HU R, YU Huimin, et al. Intracranial hemorrhage detection by 3D voxel segmentation on brain CT images[C]. 2015 International Conference on Wireless Communications & Signal Processing (WCSP), Nanjing, China, 2015: 1–5. doi: 10.1109/WCSP.2015.7341238.
    [3] WANG Nian, TONG Fei, TU Yongcheng, et al. Extraction of cerebral hemorrhage and calculation of its volume on CT image using automatic segmentation algorithm[J]. Journal of Physics: Conference Series, 2019, 1187(4): 042088. doi: 10.1088/1742-6596/1187/4/042088
    [4] BHADAURIA H S, SINGH A, and DEWAL M L. An integrated method for hemorrhage segmentation from brain CT Imaging[J]. Computers & Electrical Engineering, 2013, 39(5): 1527–1536. doi: 10.1016/j.compeleceng.2013.04.010
    [5] SHAHANGIAN B and POURGHASSEM H. Automatic brain hemorrhage segmentation and classification in CT scan images[C]. 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP), Zanjan, Iran, 2013: 467–471. doi: 10.1109/IranianMVIP.2013.6780031.
    [6] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84–90. doi: 10.1145/3065386
    [7] WANG Shuxin, CAO Shilei, WEI Dong, et al. LT-Net: Label transfer by learning reversible voxel-wise correspondence for one-shot medical image segmentation[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 9159–9168. doi: 10.1109/CVPR42600.2020.00918.
    [8] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. The 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241. doi: 10.1007/978-3-319-24574-4_28.
    [9] PENG Jialin and JIE Ping. Liver segmentation from CT image based on sequential constraint and multi-view information fusion[J]. Journal of Electronics & Information Technology, 2018, 40(4): 971–978. doi: 10.11999/JEIT170933
    [10] MILLETARI F, NAVAB N, and AHMADI S A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation[C]. 2016 Fourth International Conference on 3D Vision (3DV), Stanford, USA, 2016: 565–571. doi: 10.1109/3DV.2016.79.
    [11] GUAN S, KHAN A A, SIKDAR S, et al. Fully dense UNet for 2-D sparse photoacoustic tomography artifact removal[J]. IEEE Journal of Biomedical and Health Informatics, 2020, 24(2): 568–576. doi: 10.1109/JBHI.2019.2912935
    [12] XIAO Xiao, LIAN Shen, LUO Zhiming, et al. Weighted Res-UNet for high-quality retina vessel segmentation[C]. 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 2018: 327–331. doi: 10.1109/ITME.2018.00080.
    [13] OKTAY O, SCHLEMPER J, LE FOLGOC L, et al. Attention U-Net: Learning where to look for the pancreas[C]. The 1st Conference on Medical Imaging with Deep Learning, Amsterdam, Netherlands, 2018: 1–10.
    [14] IBTEHAZ N and RAHMAN M S. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation[J]. Neural Networks, 2020, 121: 74–87. doi: 10.1016/j.neunet.2019.08.025
    [15] ALOM M Z, YAKOPCIC C, TAHA T M, et al. Nuclei segmentation with recurrent residual convolutional neural networks based U-Net (R2U-Net)[C]. NAECON 2018-IEEE National Aerospace and Electronics Conference, Dayton, USA, 2018: 228–233. doi: 10.1109/NAECON.2018.8556686.
    [16] ZHOU Zongwei, RAHMAN M M, TAJBAKHSH N, et al. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation[J]. IEEE Transactions on Medical Imaging, 2020, 39(6): 1856–1867. doi: 10.1109/TMI.2019.2959609
    [17] GU Zaiwang, CHENG Jun, FU Huazhu, et al. CE-Net: Context encoder network for 2D medical image segmentation[J]. IEEE Transactions on Medical Imaging, 2019, 38(10): 2281–2292. doi: 10.1109/TMI.2019.2903562
    [18] FU Jun, LIU Jing, TIAN Haijie, et al. Dual attention network for scene segmentation[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 3141–3149. doi: 10.1109/CVPR.2019.00326.
    [19] CHEN Yunpeng, FAN Haoqi, XU Bing, et al. Drop an Octave: Reducing spatial redundancy in convolutional neural networks with Octave convolution[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, 2019: 3434–3443. doi: 10.1109/ICCV.2019.00353.
Publication history
  • Received:  2020-11-25
  • Revised:  2021-05-27
  • Published online:  2021-08-16
  • Issue date:  2022-01-10
