Ultrasound Image Lesion Detection Algorithm Optimized by Feature Feedback Mechanism

DING Jianrui, WANG Lingtao, TANG Fenghe, NING Chunping

Citation: DING Jianrui, WANG Lingtao, TANG Fenghe, NING Chunping. Ultrasound Image Lesion Detection Algorithm Optimized by Feature Feedback Mechanism[J]. Journal of Electronics & Information Technology, 2024, 46(3): 1013-1021. doi: 10.11999/JEIT230385


doi: 10.11999/JEIT230385
Funds: The National Natural Science Foundation of China (U22A2033), The Natural Science Foundation of Shandong Province (ZR2020MH290)
Details
    About the authors:

    DING Jianrui: male, master's supervisor, associate professor; research interests: pattern recognition and computer vision

    WANG Lingtao: male, master's student; research interest: computer vision

    TANG Fenghe: male, master's student; research interest: computer vision

    NING Chunping: female, master's supervisor, associate chief physician; research interest: ultrasound medicine

    Corresponding author:

    DING Jianrui, jrding@hit.edu.cn

  • CLC number: TN911.73; TP391.41

  • Abstract: An ultrasound image lesion detection method based on a feature feedback mechanism is proposed to achieve accurate real-time localization and detection of lesions in ultrasound images. The method consists of two parts: a feature extraction network built on a feature feedback mechanism, and an adaptive detection head based on a divide-and-conquer strategy. Through feedback feature selection and weighted fusion, the feature feedback network fully learns both the global context of the ultrasound image and its local low-level semantic details, improving the discrimination of local lesion features. The adaptive detection head applies divide-and-conquer preprocessing to the multi-level features extracted by the feedback network: by combining physiological prior knowledge with feature convolution, it adaptively models lesion shape and scale at each feature level, strengthening detection of lesions of different sizes across the feature hierarchy. Evaluated on a thyroid ultrasound image dataset, the method achieves 70.3% AP, 99.0% AP50, and 88.4% AP75. Experimental results show that, compared with mainstream detectors, the proposed algorithm delivers more accurate real-time lesion detection and localization in ultrasound images.
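The paper's implementation is not reproduced on this page. As a minimal PyTorch sketch of the idea the abstract describes (high-level feedback features selecting and re-weighting low-level features before a weighted fusion), the following shows one plausible form; the class name, gating design, and fusion weighting are assumptions, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedbackFeatureSelection(nn.Module):
    """Hypothetical sketch of feedback feature selection with weighted fusion:
    a deeper (feedback) feature gates a shallow feature channel-wise, then the
    two paths are blended with a learnable weight. Not the paper's exact module."""
    def __init__(self, channels: int):
        super().__init__()
        # Channel-attention gate computed from the feedback feature
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Learnable scalar for the weighted fusion of the two paths
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, low: torch.Tensor, feedback: torch.Tensor) -> torch.Tensor:
        # Bring the deeper feedback feature up to the shallow feature's resolution
        feedback = F.interpolate(feedback, size=low.shape[-2:], mode="nearest")
        # Select: emphasize low-level detail channels supported by global context
        selected = low * self.gate(feedback)
        # Weighted fusion of selected local detail and global context
        return self.alpha * selected + (1.0 - self.alpha) * feedback

# Usage: fuse a P3-level map with feedback from a deeper P5-level map
p3 = torch.randn(1, 256, 80, 80)
p5 = torch.randn(1, 256, 20, 20)
print(FeedbackFeatureSelection(256)(p3, p5).shape)  # torch.Size([1, 256, 80, 80])
```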
  • Figure 1  Structure of the feature feedback network

    Figure 2  Feedback feature selection module

    Figure 3  Improved ConvNeXt stage pipeline

    Figure 4  Structure of the preprocessing block

    Figure 5  Example thyroid ultrasound images

    Figure 6  Examples of lesion detection results

    Figure 7  Gradient heat maps

    Figure 8  Feature map simulation examples

    Table 1  Comparison of thyroid ultrasound lesion detection accuracy (%)

    Method            Backbone         AP    AP50  AP75  AP(benign)  AP(malignant)
    Faster RCNN[7]    ResNet50         64.3  96.6  79.2  61.5        67.1
    RetinaNet[10]     ResNet50         65.2  97.6  80.3  62.4        67.9
    Yolov3[11]        Darknet53        64.7  95.2  81.5  62.5        66.8
    FCOS[12]          ResNet50         65.8  95.5  80.8  63.5        68.2
    EfficientDet[15]  EfficientNet-B1  66.1  98.7  77.1  63.8        68.5
    VarifocalNet[13]  ResNet50         64.5  97.3  78.5  64.4        64.6
    Yolof[14]         ResNet50         65.9  99.2  81.4  64.8        66.9
    Yolox[16]         Darknet53        67.0  98.1  83.4  64.4        69.5
    Yolov7[17]        CBS+ELAN         67.3  98.3  84.0  65.3        69.2
    DETR[20]          ResNet50         63.4  93.6  76.2  61.2        65.7
    DAB-DETR[21]      ResNet50         64.9  96.3  78.9  64.1        65.8
    DINO[22]          ResNet50         66.1  95.8  83.6  62.5        69.7
    Ours              ResNet50         69.6  99.0  87.7  68.2        71.0
    Ours              ConvNeXt-tiny    70.3  99.0  88.4  68.9        71.6

    Table 2  Ablation experiments on lesion detection accuracy (%)

    Method                                                          AP    AP50  AP75
    Baseline                                                        65.8  95.5  80.8
    +ConvNeXt                                                       67.5  98.7  84.8
    +ConvNeXt + adaptive detection head                             68.5  98.6  86.8
    +ConvNeXt + adaptive detection head + feature feedback network  70.3  99.0  88.4

    Table 3  Comparison of different detection heads (%)

    Method                    AP    AP50  AP75
    Baseline head (FCOS)      67.5  98.7  84.8
    Coupled head (Yolov3)     65.6  97.1  82.0
    Decoupled head (Yolox)    67.1  98.5  87.0
    Adaptive head (ours)      68.5  98.6  86.8
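For context on the heads compared in Table 3: a coupled head predicts class scores and box offsets from a single shared branch, while a decoupled head (the YOLOX style) gives each task its own convolution branch. A minimal sketch follows, with illustrative depths and channel counts; it is not the paper's adaptive head:

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Minimal YOLOX-style decoupled head: separate branches for
    classification and box regression (depths/channels illustrative)."""
    def __init__(self, in_ch: int = 256, num_classes: int = 2):
        super().__init__()
        self.cls_branch = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, num_classes, 1),  # per-location class scores
        )
        self.reg_branch = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, 4, 1),  # per-location box offsets
        )

    def forward(self, x: torch.Tensor):
        # A coupled head would emit num_classes + 4 channels from one branch
        return self.cls_branch(x), self.reg_branch(x)

cls_out, reg_out = DecoupledHead()(torch.randn(1, 256, 20, 20))
print(cls_out.shape, reg_out.shape)  # (1, 2, 20, 20) and (1, 4, 20, 20)
```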

    Table 4  Comparison of different feedback schemes

    Method                                                                                                            AP(%)  AP50(%)  AP75(%)  FPS (frames/s)
    No feedback network                                                                                               68.5   98.6     86.8     46
    $ {\boldsymbol{P}}_3^1 $ to $ {\boldsymbol{P}}_5^1 $ feedback network + ASPP                                      69.6   98.6     87.4     40
    $ {\boldsymbol{P}}_3^1 $ to $ {\boldsymbol{P}}_7^1 $ feedback network + ASPP                                      69.6   98.4     87.8     34
    $ {\boldsymbol{P}}_3^1 $ to $ {\boldsymbol{P}}_5^1 $ feedback network + feedback feature selection module (ours)  70.3   99.0     88.4     39
    $ {\boldsymbol{P}}_3^1 $ to $ {\boldsymbol{P}}_7^1 $ feedback network + feedback feature selection module         70.1   98.5     88.2     30
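ASPP, the fusion baseline in Table 4, is the standard atrous spatial pyramid pooling block from the DeepLab family: parallel dilated convolutions at several rates capture multi-scale context, and a 1x1 convolution fuses them. A compact sketch, using common default dilation rates that are not necessarily those used in the paper:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Compact Atrous Spatial Pyramid Pooling sketch; dilation rates are
    illustrative defaults, not necessarily the paper's configuration."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 dilated convolution per rate, all preserving spatial size
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        # 1x1 convolution fuses the concatenated multi-scale responses
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

print(ASPP(256, 256)(torch.randn(1, 256, 20, 20)).shape)  # (1, 256, 20, 20)
```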
  • [1] WATAYA T, YANAGAWA M, TSUBAMOTO M, et al. Radiologists with and without deep learning–based computer-aided diagnosis: Comparison of performance and interobserver agreement for characterizing and diagnosing pulmonary nodules/masses[J]. European Radiology, 2023, 33(1): 348–359. doi: 10.1007/s00330-022-08948-4.
    [2] SOLYMOSI T, HEGEDŰS L, BONNEMA S J, et al. Considerable interobserver variation calls for unambiguous definitions of thyroid nodule ultrasound characteristics[J]. European Thyroid Journal, 2023, 12(2): e220134. doi: 10.1530/ETJ-22-0134.
    [3] YAP M H, GOYAL M, OSMAN F, et al. Breast ultrasound region of interest detection and lesion localisation[J]. Artificial Intelligence in Medicine, 2020, 107: 101880. doi: 10.1016/j.artmed.2020.101880.
    [4] LI Yujie, GU Hong, WANG Hongyu, et al. BUSnet: A deep learning model of breast tumor lesion detection for ultrasound images[J]. Frontiers in Oncology, 2022, 12: 848271. doi: 10.3389/fonc.2022.848271.
    [5] MENG Hui, LIU Xuefeng, NIU Jianwei, et al. DGANet: A dual global attention neural network for breast lesion detection in ultrasound images[J]. Ultrasound in Medicine and Biology, 2023, 49(1): 31–44. doi: 10.1016/j.ultrasmedbio.2022.07.006.
    [6] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 580–587.
    [7] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149. doi: 10.1109/TPAMI.2016.2577031.
    [8] LIANG Tingting, CHU Xiaojie, LIU Yudong, et al. CBNet: A composite backbone network architecture for object detection[J]. IEEE Transactions on Image Processing, 2022, 31: 6893–6906. doi: 10.1109/TIP.2022.3216771.
    [9] QIAO Siyuan, CHEN L C, and YUILLE A. DetectoRS: Detecting objects with recursive feature pyramid and switchable atrous convolution[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 10208–10219.
    [10] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 2999–3007.
    [11] REDMON J and FARHADI A. YOLOv3: An incremental improvement[J]. arXiv preprint arXiv: 1804.02767, 2018.
    [12] TIAN Zhi, SHEN Chunhua, CHEN Hao, et al. FCOS: Fully convolutional one-stage object detection[C]. 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 2019: 9626–9635.
    [13] ZHANG Haoyang, WANG Ying, DAYOUB F, et al. VarifocalNet: An IoU-aware dense object detector[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 8510–8519.
    [14] CHEN Qiang, WANG Yingming, YANG Tong, et al. You only look one-level feature[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 13034–13043.
    [15] TAN Mingxing, PANG Ruoming, and LE Q V. EfficientDet: Scalable and efficient object detection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 10778–10787.
    [16] GE Zheng, LIU Songtao, WANG Feng, et al. YOLOX: Exceeding YOLO series in 2021[J]. arXiv preprint arXiv: 2107.08430, 2021.
    [17] WANG C Y, BOCHKOVSKIY A, and LIAO H Y M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[J]. arXiv preprint arXiv: 2207.02696, 2022.
    [18] WANG Wen, ZHANG Jing, CAO Yang, et al. Towards data-efficient detection transformers[C]. The 17th European Conference on Computer Vision, Tel Aviv, Israel, 2022: 88–105.
    [19] CHEN Xiangyu, HU Qinghao, LI Kaidong, et al. Accumulated trivial attention matters in vision transformers on small datasets[C]. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, USA, 2023: 3973–3981.
    [20] CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[C]. Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, 2020: 213–229.
    [21] LIU Shilong, LI Feng, ZHANG Hao, et al. DAB-DETR: Dynamic anchor boxes are better queries for DETR[C]. The Tenth International Conference on Learning Representations (Virtual), 2022: 1–20. doi: 10.48550/arXiv.2201.12329.
    [22] ZHANG Hao, LI Feng, LIU Shilong, et al. DINO: DETR with improved DeNoising anchor boxes for end-to-end object detection[C]. The Eleventh International Conference on Learning Representations, Kigali, Rwanda, 2023: 1–19. doi: 10.48550/arXiv.2203.03605.
    [23] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [24] LIU Zhuang, MAO Hanzi, WU Chaoyuan, et al. A ConvNet for the 2020s[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 11966–11976.
    [25] PENG Zhiliang, GUO Zonghao, HUANG Wei, et al. Conformer: Local features coupling global representations for recognition and detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 9454–9468. doi: 10.1109/TPAMI.2023.3243048.
    [26] WANG Wenhai, DAI Jifeng, CHEN Zhe, et al. InternImage: Exploring large-scale vision foundation models with deformable convolutions[C]. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 14408–14419.
    [27] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 618–626.
Publication history
  • Received: 2023-05-08
  • Revised: 2023-08-18
  • Accepted: 2023-08-21
  • Available online: 2023-08-24
  • Published in issue: 2024-03-27
