Predict the ISUP Grade of Clear Cell Renal Cell Carcinoma Using Pathological Images Based on sECANet Channel Attention

YANG Kun, CHANG Shilong, WANG Yucheng, GAO Cong, LIU Xiao, LIU Shuang, XUE Linyan

Citation: YANG Kun, CHANG Shilong, WANG Yucheng, GAO Cong, LIU Xiao, LIU Shuang, XUE Linyan. Predict the ISUP Grade of Clear Cell Renal Cell Carcinoma Using Pathological Images Based on sECANet Channel Attention[J]. Journal of Electronics & Information Technology, 2022, 44(1): 138-148. doi: 10.11999/JEIT210900

doi: 10.11999/JEIT210900
Funds: The Natural Science Foundation of Hebei Province (H2019201378)

    About the authors:

    YANG Kun: Male, born in 1976, Professor. His research interests include molecular medical imaging technology and artificial-intelligence-based medical image processing

    CHANG Shilong: Male, born in 1995, M.S. candidate. His research interests include intelligent instruments and machine vision

    WANG Yucheng: Male, born in 1996, M.S. candidate. His research interests include intelligent instruments and machine vision

    GAO Cong: Male, born in 1995, M.S. candidate. His research interests include intelligent instruments and machine vision

    LIU Xiao: Female, born in 1996, M.S. candidate. Her research interests include intelligent instruments and machine vision

    LIU Shuang: Female, born in 1981, Associate Professor. Her research interests include biomedical image processing

    XUE Linyan: Female, born in 1981, Associate Professor. Her research interests include biomedical image processing and visual neural mechanisms

    Corresponding author:

    XUE Linyan, lyxue@hbu.edu.cn

  • CLC number: TN911.73; TP391.72

  • Abstract: To accurately grade the nuclei of clear cell Renal Cell Carcinoma (ccRCC) and thereby improve the treatment and prognosis of renal cancer, a new channel attention module, sECANet, is proposed; it captures more useful features by computing the interactions between each channel of the feature map and its neighboring channels, as well as between each channel and distant channels. Renal histopathological images from 90 patients were collected, cropped, and augmented, and the improved network was evaluated at the patch level with five-fold cross-validation. The experimental results show that the proposed model discriminates ISUP grades at the patch level with an accuracy of 78.48±3.17%, a precision of 79.95±4.37%, a recall of 78.43±2.44%, and an F1 score of 78.51±3.04%. Image-level results were then obtained by majority voting over the predictions of all patches of each case, yielding an accuracy of 88.89%, a precision of 89.88%, a recall of 87.65%, and an F1 score of 88.51% across all cases. sECANet therefore outperforms the other attention mechanisms and the ResNet50 baseline at both the patch level and the image level. The proposed ccRCC ISUP grading model for pathological images thus shows good diagnostic performance and can provide a useful reference for patient treatment and prognosis.
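    As a rough illustration of the channel attention idea summarized in the abstract, the following PyTorch sketch squeezes each feature map into a channel descriptor, applies an ECA-style 1D convolution over adjacent channels plus a dilated 1D convolution that reaches more distant channels, and uses the fused result to re-weight the channels. This is an assumption-laden sketch, not the paper's exact sECANet implementation; the class name ChannelAttentionSketch and the parameters local_k, long_k, and dilation are illustrative, and the image-level aggregation below simply mirrors the majority-vote step described in the abstract.

```python
import collections
from typing import Sequence

import torch
import torch.nn as nn


class ChannelAttentionSketch(nn.Module):
    """Hypothetical channel attention: ECA-style local channel interactions
    plus a dilated branch for distant channels (not the authors' exact sECANet)."""

    def __init__(self, local_k: int = 3, long_k: int = 3, dilation: int = 4):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # Local interaction: 1D convolution over adjacent channel descriptors (ECA-Net style)
        self.local_conv = nn.Conv1d(1, 1, kernel_size=local_k,
                                    padding=local_k // 2, bias=False)
        # Longer-range interaction: dilated 1D convolution linking distant channels
        self.long_conv = nn.Conv1d(1, 1, kernel_size=long_k, dilation=dilation,
                                   padding=dilation * (long_k // 2), bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        y = self.avg_pool(x).view(b, 1, c)           # (B, 1, C) channel descriptor
        y = self.local_conv(y) + self.long_conv(y)   # fuse local and distant interactions
        weights = self.sigmoid(y).view(b, c, 1, 1)   # per-channel attention weights
        return x * weights                           # recalibrate the input feature map


def image_level_vote(patch_predictions: Sequence[int]) -> int:
    """Majority vote over the patch-level predictions of one case,
    mirroring the image-level aggregation described in the abstract."""
    return collections.Counter(patch_predictions).most_common(1)[0][0]
```

    For example, ChannelAttentionSketch()(torch.randn(2, 256, 14, 14)) returns a tensor of the same shape with re-weighted channels; where such a block is inserted inside ResNet50, and with which hyperparameters, is specified by the paper itself (see Figure 2).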
  • Figure 1  The sECANet channel attention module

    Figure 2  Overall architecture of the ResNet50 network and the insertion positions of the sECANet module

    Figure 3  Patch extraction and augmentation of pathological images at 40× magnification

    Figure 4  Confusion matrices of the patch-level classification results of ResNet50 models with different attention modules

    Figure 5  Typical examples of the four pathological grades in the dataset

    Figure 6  Confusion matrices of the image-level classification results of ResNet50 models with different attention modules

    Figure 7  Grad-CAM maps of ResNet50 models with different attention modules for the same input image

    Table 1  Clinical information of tissue microarray No. U090KI01

    Item             Characteristic    Number
    Age (years)      ≤50               33
                     51~70             49
                     >70               8
    Sex              Male              28
                     Female            62
    ISUP grade       ISUP grade 1      38
                     ISUP grade 2      25
                     ISUP grade 3      17
                     Normal            10
    Stage            Stage I           59
                     Stage II          19
                     Stage III         2
    TNM stage        T1N0M0            59
                     T2N0M0            19
                     T3N0M0            2
    Tumor location   Left kidney       10
                     Right kidney      8
                     Not specified     62

    Table 2  Patch-level classification performance (%) of ResNet50 models with different attention modules

    Method                    Acc           Pre           Rec           F1
    ResNet50                  76.57±4.38    78.26±5.49    76.18±4.49    76.67±4.46
    ResNet50+SENet            77.64±3.42    78.92±4.74    77.69±3.03    77.39±3.51
    ResNet50+ECANet           77.27±3.90    79.78±5.03    77.82±3.76    78.01±3.87
    ResNet50+sECANet (ours)   78.48±3.17    79.95±4.37    78.43±2.44    78.51±3.04

    Table 3  Image-level classification performance (%) of ResNet50 models with different attention modules

    Method                    Acc      Pre      Rec      F1
    ResNet50                  85.56    85.7     84.52    84.46
    ResNet50+SENet            85.56    84.72    84.52    84.36
    ResNet50+ECANet           86.67    88.59    85.99    86.99
    ResNet50+sECANet (ours)   88.89    89.88    87.65    88.51

    Table 4  Patch-level classification performance (%) of different networks

    Method                    Acc           Pre           Rec           F1
    ShuffleNet V2 [20]        75.87±5.03    77.78±5.64    75.87±5.34    75.79±5.13
    DenseNet121 [24]          76.64±3.81    76.96±5.06    76.42±3.49    75.93±3.72
    VGG16 [15]                75.89±3.60    77.20±5.38    74.97±4.88    74.66±4.70
    ResNet50+sECANet (ours)   78.48±3.17    79.95±4.37    78.43±2.44    78.51±3.04

    Table 5  Image-level classification performance (%) of different networks

    Method                    Acc      Pre      Rec      F1
    ShuffleNet V2 [20]        85.56    87.6     83.71    84.92
    DenseNet121 [24]          86.67    86.46    86.33    86.15
    VGG16 [15]                84.44    84.95    82.49    83.38
    ResNet50+sECANet (ours)   88.89    89.88    87.65    88.51
  • [1] SUNG H, FERLAY J, SIEGEL R L, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries[J]. CA: A Cancer Journal for Clinicians, 2021, 71(3): 209–249. doi: 10.3322/caac.21660
    [2] HAN Sujun, WANG Dong, LI Changling, et al. Analysis for the incidence trends of renal cell carcinoma in China, 1998–2008[J]. Oncology Progress, 2018, 16(10): 1234–1237.
    [3] ZIMPFER A, GLASS Ä, ZETTL H, et al. Histopathologische diagnose und prognose des nierenzellkarzinoms im kontext der WHO-klassifikation 2016[J]. Der Urologe, 2019, 58(9): 1057–1065. doi: 10.1007/s00120-019-0952-z
    [4] MOCH H, CUBILLA A L, HUMPHREY P A, et al. The 2016 WHO classification of tumours of the urinary system and male genital organs—Part A: Renal, penile, and testicular tumours[J]. European Urology, 2016, 70(1): 93–105. doi: 10.1016/j.eururo.2016.02.029
    [5] CHEN Hao, DUAN Hongbai, GUO Ziyuan, et al. Malignancy grading of lung nodules based on CT signs quantization analysis[J]. Journal of Electronics & Information Technology, 2021, 43(5): 1405–1413. doi: 10.11999/JEIT200167
    [6] HAN Dong, ZHANG Xirong, JIA Yongjun, et al. A neural network model based on enhanced CT for distinguishing ISUP grade of clear cell renal cell carcinoma[J]. Cancer Research on Prevention and Treatment, 2021, 48(1): 55–59. doi: 10.3971/j.issn.1000-8578.2021.20.0440
    [7] CHEN Xinyi, MING Ying, HAN Yuqing, et al. Construction of graded artificial intelligence model for predicting renal clear cell carcinoma based on CT arterial phase imaging and its efficacy analysis[J]. Journal of Medical Imaging, 2020, 30(6): 1033–1036.
    [8] KANG Qinqin, TIAN Bing, BIAN Yun, et al. Prediction models based on CT features and RENAL score for clear cell renal cell carcinoma Fuhrman grade[J]. Diagnostic Imaging & Interventional Radiology, 2021, 30(2): 103–110. doi: 10.3969/j.issn.1005-8001.2021.02.004
    [9] LIN Fan, MA Changyi, XU Jinpeng, et al. A CT-based deep learning model for predicting the nuclear grade of clear cell renal cell carcinoma[J]. European Journal of Radiology, 2020, 129: 109079. doi: 10.1016/j.ejrad.2020.109079
    [10] HADJIYSKI N. Kidney cancer staging: Deep learning neural network based approach[C]. 2020 International Conference on E-Health and Bioengineering, Iasi, Romania, 2020: 1–4.
    [11] SHI Bowen, YE Jing, DUAN Shaofeng, et al. Radiomics based on T2WI sequence texture analysis combined with machine learning imaging in predicting the pathological grade of clear cell renal cell carcinoma before surgery[J]. Chinese Journal of Medical Imaging, 2020, 28(8): 629–632. doi: 10.3969/j.issn.1005-5185.2020.08.018
    [12] ZHANG Yu, CHEN Xinyuan, XU Ning, et al. Prediction of nuclear grade of renal clear cell carcinoma based on MRI texture analysis in combination with imaging features[J]. Chinese Journal of Radiology, 2021, 55(1): 53–58. doi: 10.3760/cma.j.cn112149-20200712-00912
    [13] LI Zewen, YANG Wenjie, PENG Shouheng, et al. A survey of convolutional neural networks: Analysis, applications, and prospects[EB/OL]. https://arxiv.org/abs/2004.02806, 2020.
    [14] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84–90. doi: 10.1145/3065386
    [15] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. The 3rd International Conference on Learning Representations, San Diego, USA, 2015: 7–12.
    [16] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [17] XU Cong’an, LÜ Yafei, ZHANG Xiaohan, et al. A discriminative feature representation method based on dual attention mechanism for remote sensing image scene classification[J]. Journal of Electronics & Information Technology, 2021, 43(3): 683–691. doi: 10.11999/JEIT200568
    [18] HU Jie, SHEN Li, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011–2023. doi: 10.1109/TPAMI.2019.2913372
    [19] WANG Qilong, WU Banggu, ZHU Pengfei, et al. ECA-Net: Efficient channel attention for deep convolutional neural networks[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 11531–11539.
    [20] ZHANG Xiangyu, ZHOU Xinyu, LIN Mengxiao, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 6848–6856.
    [21] LI Xiang, WANG Wenhai, HU Xiaolin, et al. Selective kernel networks[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 510–519.
    [22] SHORTEN C and KHOSHGOFTAAR T M. A survey on image data augmentation for deep learning[J]. Journal of Big Data, 2019, 6(1): 60. doi: 10.1186/s40537-019-0197-0
    [23] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization[J]. International Journal of Computer Vision, 2020, 128(2): 336–359. doi: 10.1007/s11263-019-01228-7
    [24] HUANG Gao, LIU Zhuang, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2261–2269.
Figures (7) / Tables (5)
Metrics
  • Article views:  894
  • Full-text HTML views:  401
  • PDF downloads:  80
  • Citations:  0
Publication history
  • Received:  2021-08-30
  • Revised:  2021-12-26
  • Accepted:  2021-12-27
  • Published online:  2022-01-04
  • Issue date:  2022-01-10
