
Channel Adaptive Ultrasound Image Denoising Method Based on Residual Encoder-decoder Networks

ZENG Xianhua, LI Yancheng, GAO Ge, ZHAO Xueting

Citation: ZENG Xianhua, LI Yancheng, GAO Ge, ZHAO Xueting. Channel Adaptive Ultrasound Image Denoising Method Based on Residual Encoder-decoder Networks[J]. Journal of Electronics & Information Technology, 2022, 44(7): 2547-2558. doi: 10.11999/JEIT210331


doi: 10.11999/JEIT210331
Funds: The National Natural Science Foundation of China (62076044), The Key Project of Chongqing Natural Science Foundation (cstc2019jcyjzdxmX0011)
Details
    Author information:

    ZENG Xianhua: male, born in 1973, professor and Ph.D. supervisor; research interests include image processing, machine learning, and data mining

    LI Yancheng: female, born in 1995, master's student; research interest: medical image processing

    GAO Ge: male, born in 1994, master's student; research interest: medical image processing

    ZHAO Xueting: female, born in 1982, master's degree; research interest: prenatal fetal ultrasound

    Corresponding author:

    ZENG Xianhua, zengxh@cqupt.edu.cn

  • 1) The experiment code is available at https://github.com/liyanch/RED-SENet
  • CLC number: TN911.73

  • Abstract: Ultrasound image denoising is crucial both for improving the visual quality of ultrasound images and for downstream computer vision tasks. Because the feature information in ultrasound images is similar to the speckle noise signal, applying existing denoising methods to ultrasound images easily destroys texture features, which seriously interferes with the accuracy of clinical diagnosis. Therefore, while removing speckle noise, the edge and texture information of the image should be preserved as much as possible to accomplish the ultrasound denoising task well. This paper proposes a channel-adaptive denoising model based on a residual encoder-decoder (RED-SENet) that can effectively remove speckle noise from ultrasound images. An attention deconvolution residual block is introduced into the decoder part of the denoising model, so that the model can learn and exploit global information, selectively emphasize the content features of key channels, and suppress useless features, which improves denoising performance. The model is evaluated qualitatively and analyzed quantitatively on two private datasets and two public datasets; compared with several state-of-the-art methods, its denoising performance is significantly improved, with good results in both noise suppression and structure preservation.
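The channel-adaptive mechanism summarized above is a squeeze-and-excitation style gate: global average pooling condenses each feature map into a single descriptor, a small fully connected bottleneck produces per-channel weights through a sigmoid, and the decoder features are rescaled channel by channel. The PyTorch sketch below illustrates only this gating step; the module name `ChannelGate` and the ReLU between the two linear layers are illustrative assumptions (the 96/6 channel sizes follow Table 2 below), and this is not the authors' released code — see https://github.com/liyanch/RED-SENet for the reference implementation.

```python
# Minimal sketch of the squeeze-and-excitation style channel gate described in
# the abstract (global pooling -> FC 96 -> 6 -> 96 -> sigmoid -> rescale).
# Names and internal wiring are illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, channels: int = 96, bottleneck: int = 6):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: one value per channel
        self.fc = nn.Sequential(                     # excitation: 96 -> 6 -> 96
            nn.Linear(channels, bottleneck),
            nn.ReLU(inplace=True),                   # assumption: ReLU between the two FC layers
            nn.Linear(bottleneck, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # emphasize key channels, suppress the rest


if __name__ == "__main__":
    feats = torch.randn(2, 96, 32, 32)               # dummy batch of decoder feature maps
    print(ChannelGate()(feats).shape)                # torch.Size([2, 96, 32, 32])
```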
  • Figure 1  Overall architecture of the RED-SENet model

    Figure 2  Attention Deconvolution Residual (ADR) block

    Figure 3  Visualization of the denoising results of different methods at the same noise level ($ \sigma = 25 $)

    Figure 4  Absolute difference images relative to the noise-free image

    Figure 5  Absolute difference images relative to the noisy image

    Figure 6  Average evaluation metrics of denoised fetal heart (FH) ultrasound images under different noise variances $ \sigma $

    Figure 7  Subjective comparison of ultrasound images not acquired by physical means, before and after denoising

    Figure 8  Absolute difference images relative to the noisy image
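Figures 4, 5, and 8 show absolute difference images. Read in the usual way, such a map is the per-pixel absolute difference between the denoised output and a reference image (the noise-free image in Fig. 4, the noisy input in Figs. 5 and 8). A minimal NumPy sketch, with hypothetical array names:

```python
# Sketch: absolute difference maps as used in Figs. 4, 5 and 8.
# `denoised`, `clean` and `noisy` are hypothetical grayscale arrays in [0, 255].
import numpy as np

def abs_diff(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Per-pixel absolute difference, suitable for display as a grayscale map."""
    return np.abs(a.astype(np.float64) - b.astype(np.float64))

# diff_vs_clean = abs_diff(denoised, clean)   # Fig. 4: structure lost or distorted
# diff_vs_noisy = abs_diff(denoised, noisy)   # Figs. 5 and 8: what the model removed
```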

    Table 1  Training algorithm of the RED-SENet ultrasound image denoising model

     Input: training set ${\boldsymbol{S}} = \left\{ \left( {\boldsymbol{X}}_1,{\boldsymbol{Y}}_1 \right),\left( {\boldsymbol{X}}_2,{\boldsymbol{Y}}_2 \right), \cdots ,\left( {\boldsymbol{X}}_N,{\boldsymbol{Y}}_N \right) \right\}$,
        where $\left( {\boldsymbol{X}}_i,{\boldsymbol{Y}}_i \right)$ is a pair consisting of a noisy image and the corresponding clean image
     Output: RED-SENet denoising model $\varphi_{\rm d}$
     Initialization: learning rate ${\rm lr}$; number of training epochs $T$; batch size $M$
     Procedure:
     (1) while $t < T$ do:
     (2)   Randomly shuffle the ultrasound images in the training set ${\boldsymbol{S}}$
     (3)   $\left( {\boldsymbol{X}}_i,{\boldsymbol{Y}}_i \right) \leftarrow f_{\rm data}\left( {\boldsymbol{S}} \right)$ // randomly select ultrasound image pairs from the training set ${\boldsymbol{S}}$
     (4)   for $k \in \left( 1,2, \cdots ,N/M \right)$ do:
     (5)     ${\boldsymbol{X}}^k = \left\{ {\boldsymbol{X}}_i^k \right\}_{i = (k - 1)M + 1}^{kM}$, ${\boldsymbol{Y}}^k = \left\{ {\boldsymbol{Y}}_i^k \right\}_{i = (k - 1)M + 1}^{kM}$ // the $k$-th batch of noisy images ${\boldsymbol{X}}^k$ and clean images ${\boldsymbol{Y}}^k$; ${\boldsymbol{X}}_i^k$ and ${\boldsymbol{Y}}_i^k$ denote the $i$-th noisy and clean image in the $k$-th batch
     (6)     ${\boldsymbol{P}}^k \leftarrow \varphi_{\rm d}\left( {\boldsymbol{X}}^k \right)$ // feed the $k$-th batch of noisy images ${\boldsymbol{X}}^k$ into the denoising model $\varphi_{\rm d}$ to obtain the $k$-th batch of predicted images ${\boldsymbol{P}}^k = \left\{ {\boldsymbol{P}}_i^k \right\}_{i = (k - 1)M + 1}^{kM}$
     (7)     $L_{\rm M} \leftarrow \dfrac{1}{2M}\sum\limits_{i = 1}^M \left\| {\boldsymbol{Y}}_i^k - {\boldsymbol{P}}_i^k \right\|^2$ // compute the training loss of the denoising model on the $k$-th batch
     (8)     $\dfrac{\partial L_{\rm M}}{\partial \theta_{\rm d}} \leftarrow \nabla_{\theta_{\rm d}} L_{\rm M}$ // compute the gradient with respect to the trainable parameters $\theta_{\rm d}$ of the denoising model $\varphi_{\rm d}$
     (9)     $\theta_{\rm d} \leftarrow \theta_{\rm d} - {\rm lr} \cdot \partial L_{\rm M}/\partial \theta_{\rm d}$ // update the parameters $\theta_{\rm d}$ of the denoising model $\varphi_{\rm d}$
     (10)  end for
     (11) end while
     (12) Save the trained denoising model $\varphi_{\rm d}$
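For readers who prefer code, the procedure in Table 1 is a standard mini-batch loop over image pairs. The PyTorch sketch below mirrors steps (1)–(12) using plain gradient descent on the ½·MSE loss; the data wrapper, the `model` argument, and the hyperparameter values are placeholders and not the paper's actual training configuration.

```python
# Sketch of the training procedure in Table 1 (mini-batch gradient descent on the
# 1/(2M) MSE loss). `model` stands in for RED-SENet or any image-to-image network;
# the dataset, learning rate and epoch count here are illustrative placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, noisy, clean, lr=1e-4, epochs=10, batch_size=8, device="cpu"):
    """noisy, clean: float tensors of shape (N, 1, H, W) forming the pairs (X_i, Y_i)."""
    model = model.to(device)
    loader = DataLoader(TensorDataset(noisy, clean), batch_size=batch_size,
                        shuffle=True, drop_last=True)          # steps (2)-(5): shuffle and batch
    opt = torch.optim.SGD(model.parameters(), lr=lr)            # step (9) as plain gradient descent
    for _ in range(epochs):                                     # step (1): while t < T
        for xb, yb in loader:                                   # step (4): loop over the N/M batches
            xb, yb = xb.to(device), yb.to(device)
            pred = model(xb)                                    # step (6): P^k = phi_d(X^k)
            loss = 0.5 * torch.mean((yb - pred) ** 2)           # step (7): 1/2 MSE (per-pixel mean here)
            opt.zero_grad()
            loss.backward()                                     # step (8): gradients w.r.t. theta_d
            opt.step()                                          # step (9): parameter update
    torch.save(model.state_dict(), "red_senet.pth")             # step (12): save the trained model
    return model
```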

    Table 2  RED-SENet denoising network structure and parameter configuration

    Type          Configuration
    Conv1         Kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0, ReLU
    Conv2         Kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0, ReLU
    Conv3         Kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0, ReLU
    Conv4         Kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0, ReLU
    Conv5         Kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0, ReLU
    ADR block 1   Kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0
                  Pooling layer 1: global average pooling
                  Fully connected layer 1: input 96, output 6
                  Fully connected layer 2: input 6, output 96, Sigmoid
                  Element-wise multiplication (channel weights broadcast over the feature maps)
                  ReLU; kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0, ReLU
    ADR block 2   Kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0
                  Pooling layer 2: global average pooling
                  Fully connected layer 3: input 96, output 6
                  Fully connected layer 4: input 6, output 96, Sigmoid
                  Element-wise multiplication (channel weights broadcast over the feature maps)
                  ReLU; kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0, ReLU
    Deconv1       Kernel: $ 96 \times 5 \times 5 $, stride: 1, padding: 0
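Building on the channel gate sketched after the abstract, the following PyTorch sketch assembles one ADR block from the rows of Table 2: a 96-channel 5×5 deconvolution, global average pooling, fully connected layers 96→6 and 6→96 with a sigmoid, channel-wise rescaling, ReLU, and a second deconvolution. The residual addition with an encoder feature map (`skip`) is an assumption inferred from the block's name, Fig. 2, and the residual encoder-decoder design of RED-CNN [10]; it is not taken verbatim from the paper.

```python
# Sketch of one Attention Deconvolution Residual (ADR) block following Table 2:
# deconv (96 filters, 5x5, stride 1, padding 0) -> global average pooling ->
# FC 96->6 -> FC 6->96 + Sigmoid -> channel-wise rescaling -> ReLU -> deconv.
# The residual addition with an encoder feature map (`skip`) is an assumption.
import torch
import torch.nn as nn

class ADRBlock(nn.Module):
    def __init__(self, channels: int = 96, kernel_size: int = 5, bottleneck: int = 6):
        super().__init__()
        self.deconv1 = nn.ConvTranspose2d(channels, channels, kernel_size, stride=1, padding=0)
        self.pool = nn.AdaptiveAvgPool2d(1)                    # global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, bottleneck),                   # FC: 96 -> 6
            nn.Linear(bottleneck, channels),                   # FC: 6 -> 96
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)
        self.deconv2 = nn.ConvTranspose2d(channels, channels, kernel_size, stride=1, padding=0)

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        out = self.deconv1(x)
        b, c, _, _ = out.shape
        w = self.fc(self.pool(out).view(b, c)).view(b, c, 1, 1)
        out = self.relu(out * w)                               # rescale channels, then ReLU
        out = self.deconv2(out)
        return self.relu(out + skip)                           # residual from a matching encoder map (assumed)


if __name__ == "__main__":
    x = torch.randn(1, 96, 24, 24)                             # decoder input feature map
    skip = torch.randn(1, 96, 32, 32)                          # two 5x5 padding-0 deconvs grow 24 -> 32
    print(ADRBlock()(x, skip).shape)                           # torch.Size([1, 96, 32, 32])
```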

    Table 3  Experimental comparison on the fetal heart ultrasound dataset under different noise levels

    Method                  σ=10                         σ=15                         σ=25                         σ=30
                            PSNR     SSIM    RMSE        PSNR     SSIM    RMSE        PSNR     SSIM    RMSE        PSNR     SSIM    RMSE
    MLP[18]                 37.0250  0.9208  3.9435      33.0038  0.8425  6.1543      33.6847  0.8740  5.2637      31.2649  0.7862  7.1016
    BM3D[19]                32.2814  0.8092  6.6343      32.1621  0.8071  6.6271      31.9999  0.8057  6.8974      31.9389  0.8061  6.8914
    K-SVD[20]               36.2125  0.7041  4.6770      35.0379  0.6820  4.7779      32.8427  0.5996  5.7918      31.7136  0.5140  6.6300
    CNN10[21]               36.7547  0.9224  3.7259      35.6082  0.9057  4.2510      33.8123  0.8816  5.2287      33.0003  0.8680  5.7462
    RDN10[22]               37.3945  0.9315  3.4531      35.9948  0.9140  4.0620      33.9530  0.8842  5.1464      33.1601  0.8705  5.6412
    RED-CNN[10]             37.1554  0.9270  3.5472      35.8805  0.9109  4.1167      33.9443  0.8827  5.1485      33.0720  0.8677  5.6960
    RED-SENet (proposed)    37.2824  0.9290  3.4960      36.0049  0.9135  4.0540      34.0361  0.8854  5.0957      33.0493  0.8671  5.7712
    Original (noisy) image  28.0914  0.6192  10.0497     24.9016  0.4519  14.5086     20.7652  0.2594  23.3596     19.2920  0.2051  27.6783
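The PSNR, SSIM, and RMSE values in Tables 3–7 can be reproduced per image pair with scikit-image and NumPy. The sketch below assumes 8-bit grayscale images (data range 255), which matches the scale of the RMSE values in the tables but is otherwise an assumption about the evaluation setup.

```python
# Sketch: the PSNR / SSIM / RMSE metrics of Tables 3-7 for one image pair,
# assuming 8-bit grayscale images (data_range=255).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean: np.ndarray, denoised: np.ndarray) -> dict:
    clean = clean.astype(np.float64)
    denoised = denoised.astype(np.float64)
    return {
        "PSNR": peak_signal_noise_ratio(clean, denoised, data_range=255),
        "SSIM": structural_similarity(clean, denoised, data_range=255),
        "RMSE": float(np.sqrt(np.mean((clean - denoised) ** 2))),
    }
```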

    Table 4  Experimental comparison on the gallbladder stone ultrasound dataset under different noise levels

    Method                  σ=10                         σ=15                         σ=25                         σ=30
                            PSNR     SSIM    RMSE        PSNR     SSIM    RMSE        PSNR     SSIM    RMSE        PSNR     SSIM    RMSE
    MLP[18]                 36.3828  0.9081  4.01526     34.4231  0.8567  5.7431      32.7020  0.8243  6.2067      30.9754  0.7986  6.0047
    BM3D[19]                36.5529  0.9108  3.8123      34.7211  0.8749  5.4593      32.6278  0.8224  5.8419      31.2606  0.8093  6.1137
    K-SVD[20]               35.2304  0.5846  4.4622      33.6199  0.5208  5.3631      31.5914  0.4078  6.7474      30.6680  0.3783  7.4955
    CNN10[21]               36.5396  0.9169  3.8205      34.7340  0.8833  4.7085      32.6350  0.8343  6.0030      31.9594  0.8163  6.4927
    RDN10[22]               36.3706  0.9155  3.8980      34.4965  0.8801  4.8414      32.4087  0.8268  6.1628      31.2559  0.8048  7.0266
    RED-CNN[10]             35.7836  0.9042  4.1613      33.9175  0.8646  5.1630      31.7466  0.8053  6.6363      31.1536  0.7791  7.1057
    RED-SENet (proposed)    36.6488  0.9203  3.7712      34.7828  0.8848  4.6817      32.7704  0.8368  5.9116      31.9853  0.8185  6.4720
    Original (noisy) image  28.0615  0.5771  10.0848     24.7550  0.4127  14.7581     20.5992  0.2353  23.8186     19.1407  0.1868  28.1763

    Table 5  Experimental comparison on the fetal head ultrasound dataset under different noise levels

    Method                  σ=10                         σ=15                         σ=25                         σ=30
                            PSNR     SSIM    RMSE        PSNR     SSIM    RMSE        PSNR     SSIM    RMSE        PSNR     SSIM    RMSE
    MLP[18]                 39.4777  0.9562  3.0018      36.4658  0.9033  3.9986      35.1046  0.9112  4.6981      33.7753  0.8544  6.2157
    BM3D[19]                36.8696  0.9287  4.0438      36.8041  0.9262  4.0323      35.1324  0.9121  4.6795      34.0274  0.8802  6.4565
    K-SVD[20]               39.0871  0.7306  2.8706      36.4840  0.9259  3.8646      34.2191  0.8896  4.8119      32.3865  0.4321  6.1468
    CNN10[21]               39.5865  0.9560  2.7229      36.9170  0.9342  3.7053      34.7147  0.9074  4.7567      32.9189  0.8658  5.8005
    RDN10[22]               39.9455  0.9606  2.6035      37.7666  0.9421  3.3447      35.0829  0.9096  4.5505      34.1220  0.8960  5.0764
    RED-CNN[10]             36.5529  0.9188  3.8123      34.7045  0.8826  4.7225      32.7723  0.8368  5.9098      31.9544  0.8136  6.4951
    RED-SENet (proposed)    39.9177  0.9614  2.6097      37.6201  0.9412  3.3990      35.1370  0.9123  4.5182      34.2254  0.8978  5.0211
    Original (noisy) image  28.5245  0.5621  9.5626      25.2038  0.3990  14.0193     21.1366  0.2297  22.3967     19.7165  0.1836  26.3751

    Table 6  Experimental comparison on the CAMUS ultrasound dataset under different noise levels

    Method                  σ=10                         σ=15                         σ=25                         σ=30
                            PSNR     SSIM    RMSE        PSNR     SSIM    RMSE        PSNR     SSIM    RMSE        PSNR     SSIM    RMSE
    MLP[18]                 35.7896  0.9158  3.9976      31.7483  0.8305  7.1075      31.7362  0.8100  6.4872      29.4522  0.7907  8.7962
    BM3D[19]                32.0905  0.8624  6.3538      31.0343  0.8256  7.1731      29.8946  0.7925  8.1778      29.5096  0.8011  8.5485
    K-SVD[20]               33.4149  0.3969  5.4508      32.3028  0.3839  6.1989      30.8348  0.3405  5.9092      30.6680  0.3783  7.4955
    CNN10[21]               36.0644  0.9402  4.0146      34.1212  0.9087  5.0213      32.0312  0.8630  6.3881      31.3639  0.8436  6.8986
    RDN10[22]               36.3061  0.9430  3.9044      34.2546  0.9115  4.9451      30.2393  0.6911  7.8501      31.4248  0.8407  6.8504
    RED-CNN[10]             36.0761  0.9406  4.0092      34.1174  0.9087  5.0237      32.0123  0.8625  6.4021      31.0123  0.8225  6.4021
    RED-SENet (proposed)    36.1652  0.9416  3.9680      34.2439  0.9107  4.9510      32.0942  0.8623  6.3422      31.4028  0.8458  6.8680
    Original (noisy) image  28.0615  0.5771  10.0848     24.7550  0.4127  14.7581     20.5992  0.2353  23.8186     19.1407  0.1868  28.1763

    Table 7  Quantitative results of different methods on the four datasets ($ \sigma = 50 $)
    (Datasets: FH – fetal heart; GS – gallbladder stone; HC18 – fetal head; CAMUS – cardiac ultrasound)

    Method                  Dataset   PSNR              SSIM               RMSE
    MLP[18]                 FH        30.4846 ± 1.02    0.8071 ± 0.0126    7.8512 ± 0.0015
                            GS        29.9573 ± 1.30    0.7534 ± 0.0065    8.4215 ± 0.0013
                            HC18      31.7773 ± 1.48    0.8523 ± 0.0085    7.1479 ± 0.0014
                            CAMUS     29.1950 ± 1.25    0.6216 ± 0.0116    10.1547 ± 0.0019
    BM3D[19]                FH        30.3689 ± 1.61    0.7814 ± 0.0127    8.2808 ± 0.0012
                            GS        29.8486 ± 1.17    0.6962 ± 0.0144    8.1261 ± 0.0016
                            HC18      30.0354 ± 1.13    0.7731 ± 0.0072    6.9196 ± 0.0009
                            CAMUS     28.2329 ± 1.06    0.7253 ± 0.0075    9.9010 ± 0.0016
    K-SVD[20]               FH        28.4602 ± 1.43    0.4255 ± 0.0168    9.6388 ± 0.0018
                            GS        27.8136 ± 1.33    0.2403 ± 0.0096    10.3910 ± 0.0012
                            HC18      30.7154 ± 1.44    0.8076 ± 0.0078    7.3864 ± 0.0008
                            CAMUS     27.5461 ± 1.74    0.2438 ± 0.0067    10.7001 ± 0.0011
    CNN10[21]               FH        30.7148 ± 1.16    0.8249 ± 0.0129    7.4758 ± 0.0011
                            GS        30.0116 ± 1.58    0.7641 ± 0.0082    8.1224 ± 0.0010
                            HC18      31.4681 ± 1.23    0.8526 ± 0.0089    6.8891 ± 0.0010
                            CAMUS     29.6280 ± 1.60    0.7987 ± 0.0064    8.4264 ± 0.0009
    RDN10[22]               FH        30.7415 ± 1.32    0.8223 ± 0.0074    7.4541 ± 0.0013
                            GS        29.6762 ± 1.49    0.7439 ± 0.0087    8.4430 ± 0.0007
                            HC18      31.4794 ± 1.46    0.8451 ± 0.0121    6.8725 ± 0.0009
                            CAMUS     29.6327 ± 1.63    0.7976 ± 0.0066    8.4254 ± 0.0015
    RED-CNN[10]             FH        30.6400 ± 1.35    0.8152 ± 0.0086    7.5361 ± 0.0009
                            GS        28.3693 ± 1.55    0.6696 ± 0.0079    9.7852 ± 0.0008
                            HC18      30.0165 ± 1.52    0.7598 ± 0.0102    8.1241 ± 0.0007
                            CAMUS     29.7174 ± 1.57    0.7996 ± 0.0082    8.3412 ± 0.0008
    RED-SENet (proposed)    FH        30.8809 ± 1.57    0.8261 ± 0.0098    7.3438 ± 0.0007
                            GS        30.0983 ± 1.63    0.7668 ± 0.0091    8.0507 ± 0.0009
                            HC18      31.8705 ± 1.54    0.8613 ± 0.0117    6.5709 ± 0.0011
                            CAMUS     29.7151 ± 1.59    0.8006 ± 0.0087    8.3431 ± 0.0006
    Original (noisy) image  FH        15.2754 ± 1.87    0.0992 ± 0.0834    43.9527 ± 0.0072
                            GS        15.2147 ± 1.11    0.0922 ± 0.0659    44.2876 ± 0.0064
                            HC18      15.8068 ± 1.52    0.0917 ± 0.0744    41.3597 ± 0.0058
                            CAMUS     15.2147 ± 2.03    0.0922 ± 0.0988    44.2876 ± 0.0083
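Table 7 reports each metric with a ± spread over the test images of a dataset (read here as mean ± standard deviation, the conventional interpretation). Given per-image scores such as those produced by the metric sketch after Table 3, the aggregation is straightforward; the function below is illustrative only.

```python
# Sketch: turning per-image metric scores into the "mean ± spread" entries of
# Table 7, interpreting the spread as a standard deviation (assumption).
import numpy as np

def mean_std(per_image_scores, key):
    """per_image_scores: list of dicts like {"PSNR": ..., "SSIM": ..., "RMSE": ...}."""
    vals = np.array([s[key] for s in per_image_scores], dtype=np.float64)
    return f"{vals.mean():.4f} ± {vals.std():.4f}"

# Example with hypothetical scores:
# mean_std([{"PSNR": 30.1}, {"PSNR": 31.5}, {"PSNR": 29.8}], "PSNR")  ->  '30.4667 ± 0.7409'
```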
  • [1] OUAHABI A and TALEB-AHMED A. Deep learning for real-time semantic segmentation: Application in ultrasound imaging[J]. Pattern Recognition Letters, 2021, 144: 27–34. doi: 10.1016/j.patrec.2021.01.010
    [2] LOUPAS T, MCDICKEN W, and ALLAN P L. An adaptive weighted median filter for speckle suppression in medical ultrasonic images[J]. IEEE Transactions on Circuits and Systems, 1989, 36(1): 129–135. doi: 10.1109/31.16577
    [3] GARG A and KHANDELWAL V. Despeckling of medical ultrasound images using fast bilateral filter and NeighShrinkSURE filter in wavelet domain[M]//RAWAT B, TRIVEDI A, MANHAS S, et al. Advances in Signal Processing and Communication. Singapore: Springer, 2019: 271–280.
    [4] YANG Qingsong, YAN Pingkun, ZHANG Yanbo, et al. Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1348–1357. doi: 10.1109/TMI.2018.2827462
    [5] SHAHDOOSTI H R and RAHEMI Z. Edge-preserving image denoising using a deep convolutional neural network[J]. Signal Processing, 2019, 159: 20–32. doi: 10.1016/j.sigpro.2019.01.017
    [6] LIU Denghong, LI Jie, and YUAN Qiangqiang. A spectral grouping and attention-driven residual dense network for hyperspectral image super-resolution[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(9): 7711–7725. doi: 10.1109/TGRS.2021.3049875
    [7] XIA Hao, CAI Nian, WANG Huiheng, et al. Brain MR image super-resolution via a deep convolutional neural network with multi-unit upsampling learning[J]. Signal, Image and Video Processing, 2021, 15(5): 931–939. doi: 10.1007/s11760-020-01817-x
    [8] MAO Xiaojiao, SHEN Chunhua, and YANG Yubin. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections[C]. The 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 2016: 2810–2818.
    [9] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [10] CHEN Hu, ZHANG Yi, KALRA M K, et al. Low-dose CT with a residual encoder-decoder convolutional neural network[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2524–2535. doi: 10.1109/TMI.2017.2715284
    [11] CHANG Meng, LI Qi, FENG Huajun, et al. Spatial-adaptive network for single image denoising[C]. The 16th European Conference on Computer Vision, Glasgow, UK, 2020: 171–187.
    [12] MATEO J L and FERNÁNDEZ-CABALLERO A. Finding out general tendencies in speckle noise reduction in ultrasound images[J]. Expert Systems with Applications, 2009, 36(4): 7786–7797. doi: 10.1016/j.eswa.2008.11.029
    [13] OYEDOTUN O K, AL ISMAEIL K, and AOUADA D. Training very deep neural networks: Rethinking the role of skip connections[J]. Neurocomputing, 2021, 441: 105–117. doi: 10.1016/j.neucom.2021.02.004
    [14] HU Jie, SHEN Li, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011–2023. doi: 10.1109/TPAMI.2019.2913372
    [15] KINGMA D P and BA J. Adam: A method for stochastic optimization[J]. arXiv: 1412.6980, 2014.
    [16] VAN DEN HEUVEL T L A, DE BRUIJN D, DE KORTE C L, et al. Automated measurement of fetal head circumference using 2D ultrasound images[J]. PLoS One, 2018, 13(8): e0200412. doi: 10.1371/journal.pone.0200412
    [17] LECLERC S, SMISTAD E, PEDROSA J, et al. Deep learning for segmentation using an open large-scale dataset in 2D echocardiography[J]. IEEE Transactions on Medical Imaging, 2019, 38(9): 2198–2210. doi: 10.1109/TMI.2019.2900516
    [18] BURGER H C, SCHULER C J, and HARMELING S. Image denoising: Can plain neural networks compete with BM3D?[C]. 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 2392–2399.
    [19] DABOV K, FOI A, KATKOVNIK V, et al. Image denoising by sparse 3-D transform-domain collaborative filtering[J]. IEEE Transactions on Image Processing, 2007, 16(8): 2080–2095. doi: 10.1109/TIP.2007.901238
    [20] CHEN Yang, YIN Xindao, SHI Luyao, et al. Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing[J]. Physics in Medicine & Biology, 2013, 58(16): 5803–5820. doi: 10.1088/0031-9155/58/16/5803
    [21] CHEN Hu, ZHANG Yi, ZHANG Weihua, et al. Low-dose CT via convolutional neural network[J]. Biomedical Optics Express, 2017, 8(2): 679–694. doi: 10.1364/BOE.8.000679
    [22] ZHANG Yulun, TIAN Yapeng, KONG Yu, et al. Residual dense network for image restoration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(7): 2480–2495. doi: 10.1109/TPAMI.2020.2968521
Publication history
  • Received: 2021-04-20
  • Revised: 2021-12-24
  • Accepted: 2022-03-07
  • Available online: 2022-03-19
  • Published in issue: 2022-07-10
