Left Atrial Scar Segmentation Method Combining Cross-Modal Feature Excitation and Dual Branch Cross Attention Fusion

RUAN Dongsheng, SHI Zhebin, WANG Jiahui, LI Yang, JIANG Mingfeng

Citation: RUAN Dongsheng, SHI Zhebin, WANG Jiahui, LI Yang, JIANG Mingfeng. Left Atrial Scar Segmentation Method Combining Cross-Modal Feature Excitation and Dual Branch Cross Attention Fusion[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT240775


doi: 10.11999/JEIT240775
Funds: The National Key Research Program of China (2023YFE0205600), The National Natural Science Foundation of China (62272415 and 62401508), the Special Support Program for High-level Talents of Zhejiang Province (2023R5216), and The Key Research and Development Project of Ningxia Autonomous Region (2023BEG02065)
Details
    About the authors:

    RUAN Dongsheng: Male, specially appointed associate professor. His research interests include computer vision, attention mechanisms, object detection, and semantic segmentation

    SHI Zhebin: Male, Master. His research interests include deep learning and medical image processing

    WANG Jiahui: Male, Master's student. His research interest is medical image analysis

    LI Yang: Male, associate professor. His research interests include medical image analysis, computer-aided diagnosis, and machine learning

    Corresponding author:

    JIANG Mingfeng, m.jiang@zstu.edu.cn

  • CLC number: TN912.34

  • Abstract: The distribution and severity of left atrial scars provide important information for studying the physiology and pathology of atrial fibrillation, so automated segmentation of left atrial scars is of great significance for the clinical diagnosis and treatment of atrial fibrillation. However, because left atrial scars vary in shape, are small, and are discretely distributed, existing segmentation methods often struggle to achieve good results. Exploiting the prior knowledge that scars are usually distributed on the left atrial wall, this paper proposes a left atrial scar segmentation method based on enhancement of left atrial boundary features. The proposed Cross-Modal feature Excitation (CME) module and Dual-Branch Cross Attention (DBCA) fusion module act on the encoder and the bottleneck of a U-shaped network, respectively, providing feature-enhancement guidance and deep semantic fusion of the magnetic resonance image and the signed distance map of the left atrial boundary, thereby raising the model's attention to left atrial boundary information at the feature level. The proposed model is validated on the LAScarQS2022 dataset, where its segmentation results clearly outperform current mainstream methods; its Dice score and Accuracy are 2.17% and 4.82% higher than those of the baseline network, respectively.
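The second network input described in the abstract is a Signed Distance Map (SDM) of the left atrial boundary (see Fig. 3 and the boundary-width study in Table 6). Below is a minimal sketch of how such a boundary SDM could be derived from a binary left-atrium mask; it is an illustrative assumption rather than the authors' actual preprocessing, and the function name, voxel-spacing handling, and clipping/normalization choices are hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_sdm(la_mask: np.ndarray, spacing=(1.0, 1.0, 1.0), width_mm=5.0) -> np.ndarray:
    """Illustrative boundary SDM: signed distance to the left-atrium surface,
    clipped to a narrow band around the atrial wall (width_mm echoes Table 6)."""
    mask = la_mask.astype(bool)
    # Euclidean distance to the LA surface, measured from outside and from inside.
    dist_outside = distance_transform_edt(~mask, sampling=spacing)  # > 0 outside the LA
    dist_inside = distance_transform_edt(mask, sampling=spacing)    # > 0 inside the LA
    # Signed distance: negative inside the atrium, positive outside, zero on the wall.
    sdm = dist_outside - dist_inside
    # Keep only the band around the atrial wall and scale it to [-1, 1].
    return (np.clip(sdm, -width_mm, width_mm) / width_mm).astype(np.float32)
```

With a 5 mm band (the best-performing width in Table 6), voxels far from the atrial wall saturate at ±1, so the resulting map mainly encodes the wall region where scars are expected to lie.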
  • Figure 1  Location of the left atrium and scars

    Figure 2  Overall network architecture

    Figure 3  Extraction process of the left atrial boundary SDM

    Figure 4  Cross-modal feature excitation module

    Figure 5  Dual-branch cross attention fusion module (a rough sketch follows this figure list)

    Figure 6  Visual comparison of segmentation results

    Figure 7  Visual comparison of features
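As a loose illustration of the dual-branch cross attention fusion named in Fig. 5, the sketch below lets the LGE-MRI branch and the boundary-SDM branch attend to each other at the bottleneck and then fuses the two streams. The module structure, feature shapes, and the use of nn.MultiheadAttention are assumptions made for illustration only, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DualBranchCrossAttentionSketch(nn.Module):
    """Illustrative only: two bottleneck feature streams attend to each other.

    x_img : flattened bottleneck features of the LGE-MRI branch, shape (B, N, C)
    x_sdm : flattened bottleneck features of the boundary-SDM branch, shape (B, N, C)
    """
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.img_from_sdm = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.sdm_from_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_img = nn.LayerNorm(dim)
        self.norm_sdm = nn.LayerNorm(dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, x_img: torch.Tensor, x_sdm: torch.Tensor) -> torch.Tensor:
        # Each branch queries the other branch's keys and values (cross attention).
        img_ctx, _ = self.img_from_sdm(query=x_img, key=x_sdm, value=x_sdm)
        sdm_ctx, _ = self.sdm_from_img(query=x_sdm, key=x_img, value=x_img)
        # Residual connections keep each branch's own information before fusion.
        img_out = self.norm_img(x_img + img_ctx)
        sdm_out = self.norm_sdm(x_sdm + sdm_ctx)
        # Concatenate the two enriched streams and project back to C channels.
        return self.fuse(torch.cat([img_out, sdm_out], dim=-1))

# Hypothetical usage: B=2 volumes, N=64 bottleneck positions, C=256 channels.
# fused = DualBranchCrossAttentionSketch(dim=256)(torch.randn(2, 64, 256),
#                                                 torch.randn(2, 64, 256))
```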

    Table 1  Results of the comparison experiments

    Method               Dice(%)  95HD(voxel)  ASD(voxel)  Sensitivity(%)  Accuracy(%)
    2D  U-Net            42.64    11.23        2.47        37.85           68.90
        AttentionUNet    43.22    10.85        2.59        39.03           69.48
        TransUNet        47.44    9.10         1.83        44.18           72.06
        Swin-UNet        48.62    9.13         1.83        48.01           73.96
    3D  V-Net            49.48    10.03        2.27        54.50           77.17
        nnUNet           52.41    9.05         1.79        51.34           75.62
        Proposed         54.58    6.59         1.41        61.02           80.44

    Table 2  Comparison with baseline networks of different sizes

    Model      Dice(%)  95HD(voxel)  ASD(voxel)  Sensitivity(%)  Accuracy(%)  Params(M)
    nnUNet     52.41    9.05         1.79        51.34           75.62        126.22
    nnUNet-L   52.48    9.32         1.85        51.87           75.93        189.33
    Proposed   54.58    6.59         1.41        61.02           80.44        166.53

    Table 3  Ablation results for different model structures

    Model                                 Dice(%)  95HD(voxel)  ASD(voxel)  Sensitivity(%)  Accuracy(%)  Params(M)
    Shared encoder + CME + DBCA           53.72    7.09         1.61        53.80           76.85        127.67
    Independent encoders + conv + conv    53.33    9.16         1.73        53.20           76.55        165.87
    Independent encoders + CME + conv     53.21    8.68         1.69        52.72           76.31        165.91
    Independent encoders + conv + DBCA    53.17    9.17         1.74        52.71           76.31        166.48
    Independent encoders + CME + DBCA     54.58    6.59         1.41        61.02           80.44        166.53

    Table 4  Ablation results for different placement combinations of the CBAM, CME, and DBCA modules

    Encoder       Bottleneck         Dice(%)  95HD(voxel)  ASD(voxel)  Sensitivity(%)  Accuracy(%)
    CBAM  CME     CBAM  CME  DBCA
                                     53.28    8.95         1.65        55.71           77.80
                                     53.73    7.60         1.53        58.46           79.17
                                     53.23    9.64         1.82        52.61           76.26
                                     52.17    9.65         1.85        50.45           75.18
                                     53.33    9.03         1.83        54.41           77.15
                                     53.64    8.64         1.64        53.30           76.60
                                     52.88    9.33         1.80        51.15           75.53
                                     50.50    9.85         1.80        45.86           72.89
                                     54.58    6.59         1.41        61.02           80.44

    Table 6  Effect of the left atrial boundary width on model performance

    Boundary width (mm)  Dice(%)  95HD(voxel)  ASD(voxel)  Sensitivity(%)  Accuracy(%)
    2.5                  54.59    7.09         1.43        59.06           79.47
    5.0                  54.58    6.59         1.41        61.02           80.44
    7.5                  53.40    8.47         1.67        54.40           77.15

    Table 5  Ablation results for different values of $\alpha$ and $\beta$

    $\alpha$, $\beta$               Dice(%)  95HD(voxel)  ASD(voxel)  Sensitivity(%)  Accuracy(%)
    $\alpha$ = 0.1, $\beta$ = 0.9   52.72    9.73         1.84        50.98           75.45
    $\alpha$ = 0.3, $\beta$ = 0.7   53.82    8.43         1.65        54.11           77.01
    $\alpha$ = 0.5, $\beta$ = 0.5   53.48    9.26         1.74        53.47           76.69
    $\alpha$ = 0.7, $\beta$ = 0.3   54.58    6.59         1.41        61.02           80.44
    $\alpha$ = 0.9, $\beta$ = 0.1   54.47    7.51         1.49        56.09           77.99
  • [1] LIPPI G, SANCHIS-GOMAR F, and CERVELLIN G. Global epidemiology of atrial fibrillation: An increasing epidemic and public health challenge[J]. International Journal of Stroke, 2021, 16(2): 217–221. doi: 10.1177/1747493019897870.
    [2] AKOUM N, DACCARETT M, MCGANN C, et al. Atrial fibrosis helps select the appropriate patient and strategy in catheter ablation of atrial fibrillation: A DE-MRI guided approach[J]. Journal of Cardiovascular Electrophysiology, 2011, 22(1): 16–22. doi: 10.1111/j.1540-8167.2010.01876.x.
    [3] GU Xiangting and HUANG Rui. Research progress on pathogenesis and maintaining mechanism of atrial fibrillation[J]. Practical Journal of Cardiac Cerebral Pneumal and Vascular Disease, 2019, 27(1): 112–115, 120. doi: 10.3969/j.issn.1008-5971.2019.01.025.
    [4] WOO S, PARK J, LEE J Y, et al. CBAM: Convolutional block attention module[C]. Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 2018: 3–19. doi: 10.1007/978-3-030-01234-2_1.
    [5] ISENSEE F, JAEGER P F, KOHL S A A, et al. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation[J]. Nature Methods, 2021, 18(2): 203–211. doi: 10.1038/s41592-020-01008-z.
    [6] SUN Junmei, GE Qingqing, LI Xiumei, et al. A medical image segmentation network with boundary enhancement[J]. Journal of Electronics & Information Technology, 2022, 44(5): 1643–1652. doi: 10.11999/JEIT210784.
    [7] ZHOU Tao, LIU Yuncan, LU Huiling, et al. ResNet and its application to medical image processing: Research progress and challenges[J]. Journal of Electronics & Information Technology, 2022, 44(1): 149–167. doi: 10.11999/JEIT210914.
    [8] SHOTTON J, JOHNSON M, and CIPOLLA R. Semantic texton forests for image categorization and segmentation[C]. Proceedings of 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, USA, 2008: 1–8. doi: 10.1109/CVPR.2008.4587503.
    [9] ZHOU Tao, HOU Senbao, LU Huiling, et al. C2 Transformer U-Net: A medical image segmentation model for cross-modality and contextual semantics[J]. Journal of Electronics & Information Technology, 2023, 45(5): 1807–1816. doi: 10.11999/JEIT220445.
    [10] ALBAWI S, MOHAMMED T A, and AL-ZAWI S. Understanding of a convolutional neural network[C]. Proceedings of 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 2017: 1–6. doi: 10.1109/ICEngTechnol.2017.8308186.
    [11] NIYAS S, PAWAN S J, ANAND KUMAR M, et al. Medical image segmentation with 3D convolutional neural networks: A survey[J]. Neurocomputing, 2022, 493: 397–413. doi: 10.1016/j.neucom.2022.04.065.
    [12] ZHANG Shujun, PENG Zhong, and LI Hui. SAU-Net: Medical image segmentation method based on U-Net and self-attention[J]. Acta Electronica Sinica, 2022, 50(10): 2433–2442. doi: 10.12263/DZXB.20200984.
    [13] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Munich, Germany, 2015: 234–241. doi: 10.1007/978-3-319-24574-4_28.
    [14] ÇİÇEK Ö, ABDULKADIR A, LIENKAMP S S, et al. 3D U-Net: Learning dense volumetric segmentation from sparse annotation[C]. Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Athens, Greece, 2016: 424–432. doi: 10.1007/978-3-319-46723-8_49.
    [15] MILLETARI F, NAVAB N, and AHMADI S A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation[C]. Proceedings of 2016 Fourth International Conference on 3D Vision (3DV), Stanford, USA, 2016: 565–571. doi: 10.1109/3DV.2016.79.
    [16] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, USA, 2017: 6000–6010.
    [17] OKTAY O, SCHLEMPER J, LE FOLGOC L, et al. Attention U-Net: Learning where to look for the pancreas[C]. Proceedings of the 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands, 2018.
    [18] CHEN Jieneng, LU Yongyi, YU Qihang, et al. TransUNet: Transformers make strong encoders for medical image segmentation[J]. arXiv preprint arXiv: 2102.04306, 2021.
    [19] CAO Hu, WANG Yueyue, CHEN J, et al. Swin-Unet: UNet-Like pure transformer for medical image segmentation[C]. Proceedings of Computer Vision – ECCV 2022 Workshops, Tel Aviv, Israel, 2022: 205–218. doi: 10.1007/978-3-031-25066-8_9.
    [20] PERRY D, MORRIS A, BURGON N, et al. Automatic classification of scar tissue in late gadolinium enhancement cardiac MRI for the assessment of left-atrial wall injury after radiofrequency ablation[C]. Proceedings of Medical Imaging 2012: Computer-Aided Diagnosis, San Diego, USA, 2012: 83151D. doi: 10.1117/12.910833.
    [21] KARIM R, ARUJUNA A, BRAZIER A, et al. Automatic segmentation of left atrial scar from delayed-enhancement magnetic resonance imaging[C]. Proceedings of the 6th International Conference on Functional Imaging and Modeling of the Heart, New York City, USA, 2011: 63–70.
    [22] LI Lei, ZIMMER V A, SCHNABEL J A, et al. AtrialJSQnet: A new framework for joint segmentation and quantification of left atrium and scars incorporating spatial and shape information[J]. Medical Image Analysis, 2022, 76: 102303. doi: 10.1016/j.media.2021.102303.
    [23] LIU Tianyi, HOU Size, ZHU Jiayuan, et al. UGformer for robust left atrium and scar segmentation across scanners[C]. Proceedings of the 1st Challenge on Left Atrial and Scar Quantification and Segmentation, Singapore, Singapore, 2022: 36–48. doi: 10.1007/978-3-031-31778-1_4.
    [24] OGBOMO-HARMITT S, GRZELAK J, QURESHI A, et al. TESSLA: Two-Stage ensemble scar segmentation for the left atrium[C]. Proceedings of the 1st Challenge on Left Atrial and Scar Quantification and Segmentation, Singapore, Singapore, 2022: 106–114. doi: 10.1007/978-3-031-31778-1_10.
    [25] KHAN A, ALWAZZAN O, BENNING M, et al. Sequential segmentation of the left atrium and atrial scars using a multi-scale weight sharing network and boundary-based processing[C]. Proceedings of the 1st Challenge on Left Atrial and Scar Quantification and Segmentation, Singapore, Singapore, 2022: 69–82. doi: 10.1007/978-3-031-31778-1_7.
    [26] DANGI S, LINTE C A, and YANIV Z. A distance map regularized CNN for cardiac cine MR image segmentation[J]. Medical Physics, 2019, 46(12): 5637–5651. doi: 10.1002/mp.13853.
    [27] HU Jie, SHEN Li, and SUN Gang. Squeeze-and-excitation networks[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA, 2018: 7132–7141. doi: 10.1109/CVPR.2018.00745.
    [28] JADERBERG M, SIMONYAN K, ZISSERMAN A, et al. Spatial transformer networks[C]. Proceedings of the 29th International Conference on Neural Information Processing Systems, Montreal, Canada, 2015: 2017–2025.
    [29] ROMBACH R, BLATTMANN A, LORENZ D, et al. High-resolution image synthesis with latent diffusion models[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 10674–10685. doi: 10.1109/CVPR52688.2022.01042.
    [30] LI Lei, ZIMMER V A, SCHNABEL J A, et al. Medical image analysis on left atrial LGE MRI for atrial fibrillation studies: A review[J]. Medical Image Analysis, 2022, 77: 102360. doi: 10.1016/j.media.2022.102360.
    [31] LI Lei, ZIMMER V A, SCHNABEL J A, et al. AtrialGeneral: Domain generalization for left atrial segmentation of multi-center LGE MRIs[C]. Proceedings of the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention – MICCAI 2021, Strasbourg, France, 2021: 557–566. doi: 10.1007/978-3-030-87231-1_54.
Publication history
  • Received: 2024-09-09
  • Revised: 2025-04-01
  • Published online: 2025-04-15
