
Wave-MambaCT: Low-dose CT Artifact Suppression Method Based on Wavelet Mamba

CUI Xueying, WANG Yuhang, LIU Bin, SHANGGUAN Hong, ZHANG Xiong

Citation: CUI Xueying, WANG Yuhang, LIU Bin, SHANGGUAN Hong, ZHANG Xiong. Wave-MambaCT: Low-dose CT Artifact Suppression Method Based on Wavelet Mamba[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250489


doi: 10.11999/JEIT250489 cstr: 32379.14.JEIT250489
Funds: The National Natural Science Foundation of China Youth Fund (62001321), The Natural Science Foundation of Shanxi Province (202303021221144, 202403021221140, 202403021221139)
Details
    Author biographies:

    CUI Xueying: Female, Ph.D., Associate Professor, Master's supervisor. Her research interests include medical image processing and reconstruction

    WANG Yuhang: Male, Master's student. His research interest is medical image processing

    LIU Bin: Male, Ph.D., Associate Professor, Master's supervisor. His research interest is applied statistics

    SHANGGUAN Hong: Male, Ph.D., Associate Professor, Master's supervisor. His research interests include pattern recognition and medical image processing

    ZHANG Xiong: Male, Master, Professor, Master's supervisor. His research interests include pattern recognition, medical image processing, and video object tracking

    Corresponding author:

    CUI Xueying, xueyingcui@tyust.edu.cn

  • CLC number: TN911.73; TP391

  • Abstract: Artifacts and noise in low-dose CT (LDCT) images hamper the early diagnosis and treatment of disease. Denoising methods based on convolutional neural networks have limited capacity for long-range modeling. Compared with Transformer-based long-range modeling, Mamba-based models offer lower computational complexity, but existing Mamba models suffer from information loss or residual noise. To address this, this paper proposes Wave-MambaCT, a denoising model based on wavelet Mamba. First, the multi-scale decomposition of the wavelet transform is used to decouple noise from low-frequency content. Second, residual modules are combined with state-space-model Mamba modules to extract local and global information from the high- and low-frequency subbands, and the noise-free low-frequency features are used to correct and enhance the same-scale high-frequency features through an attention-based cross-frequency Mamba module, preserving more detail while removing noise. Finally, the inverse wavelet transform is applied stage by stage to progressively restore the image, and corresponding loss functions are designed to improve the stability of the network. Experimental results show that, at low computational complexity and parameter counts, Wave-MambaCT not only improves the visual quality of low-dose CT images but also outperforms existing denoising methods on four quantitative metrics: PSNR, SSIM, VIF, and MSE.
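The abstract's first step is a wavelet decomposition that separates low-frequency content from high-frequency subbands, which carry most of the noise. As background only (the paper's actual transform and network layers are not reproduced on this page), a minimal sketch of a single-level 2D decomposition assuming a Haar basis, together with its exact inverse:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar decomposition of an even-sized image.
    Returns the low-frequency subband LL and high-frequency subbands
    LH, HL, HH (subband naming conventions vary between libraries)."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # local averages: smooth content
    lh = (a + b - c - d) / 2.0   # vertical detail
    hl = (a - b + c - d) / 2.0   # horizontal detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: reconstructs the original image exactly,
    so no information is lost by the decomposition itself."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x
```

Because the transform is invertible, a denoiser can process the subbands separately and still recover a full-resolution image through the inverse transform, which is the property the staged restoration in the abstract relies on.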
  • Figure 1  Wave-MambaCT network architecture

    Figure 2  Two-dimensional selective scan module

    Figure 3  Mamba module

    Figure 4  Visual results of images denoised by different methods on the Mayo test set

    Figure 5  Visual results of images denoised by different methods on the Mayo test set

    Figure 6  Visual results of difference images for different methods

    Figure 7  PSNR and SSIM of different algorithms at four dose levels on the Piglet dataset

    Figure 8  Local visual results of images denoised by different algorithms on the Piglet test set

    Figure 9  Visual results on the DeepLesion dataset before and after processing by the proposed method

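Figures 2 and 3 depict the 2D selective scan and Mamba modules. As background, the recurrence underlying state-space models such as Mamba can be sketched as a per-channel linear scan h_t = A·h_{t-1} + B·x_t, y_t = C·h_t. The toy version below uses fixed diagonal parameters, whereas Mamba makes A, B, C input-dependent ("selective") and, for images, runs the scan along several 2D traversal paths; none of the paper's actual parameterization is reproduced here.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal discretized state-space recurrence on a 1D sequence x:
        h_t = A * h_{t-1} + B * x_t,   y_t = C . h_t
    A and B are per-channel (diagonal) parameters of length N, so the
    scan costs O(L * N): linear in sequence length, unlike attention."""
    N = A.shape[0]            # state dimension
    h = np.zeros(N)
    y = np.empty(len(x))
    for t in range(len(x)):
        h = A * h + B * x[t]  # elementwise update (diagonal A)
        y[t] = C @ h          # project the hidden state to an output
    return y
```

With A = 0 the state has no memory and the scan reduces to a pointwise map; with A = 1 it accumulates all past inputs, illustrating the long-range dependence the recurrence can carry.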
    Table 1  Quantitative metrics of different denoising algorithms on the Mayo test set

    Method          | PSNR↑          | SSIM↑         | VIF↑          | MSE↓
    LDCT            | 26.7891±1.9782 | 0.8244±0.0503 | 0.3642±0.0580 | 0.00232±0.00105
    RED-CNN(2018)   | 31.0990±1.7724 | 0.8773±0.0390 | 0.4114±0.0568 | 0.00084±0.00036
    DESD-GAN(2022)  | 30.8887±2.0041 | 0.8789±0.0406 | 0.3958±0.0514 | 0.00091±0.00049
    CFMH-GAN(2023)  | 30.4322±2.5962 | 0.8703±0.0447 | 0.3521±0.0653 | 0.00098±0.00027
    TransCT(2021)   | 30.5113±1.5217 | 0.8724±0.0386 | 0.3844±0.0513 | 0.00094±0.00033
    CTformer(2023)  | 30.5176±2.8725 | 0.8764±0.0366 | 0.3895±0.0496 | 0.00095±0.00031
    LD2ND(2024)     | 31.2681±1.5187 | 0.8811±0.0351 | 0.4185±0.0456 | 0.00081±0.00035
    DenoMamba(2024) | 31.4219±1.7312 | 0.8835±0.0391 | 0.4326±0.0672 | 0.00082±0.00032
    Wave-MambaCT    | 31.6528±1.6959 | 0.8851±0.0391 | 0.4629±0.0547 | 0.00074±0.00031
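Tables 1, 3, and 4 report PSNR and MSE (alongside SSIM and VIF). For reference, these two metrics can be computed as below; the normalization and data range used in the paper are assumptions, not stated on this page.

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a test image."""
    diff = ref.astype(np.float64) - img.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB. data_range is the dynamic
    range of the images (1.0 here, assuming intensities in [0, 1])."""
    m = mse(ref, img)
    if m == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / m))
```

The tables report these metrics as mean ± standard deviation over the test slices; higher PSNR (↑) and lower MSE (↓) indicate a closer match to the normal-dose reference.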

    Table 2  Complexity comparison of different denoising networks

    Metric                    | RED-CNN | DESD-GAN | CFMH-GAN | TransCT | CTformer | LD2ND    | DenoMamba | Ours
    FLOPs (G)                 | 5.0861  | 156.0901 | 30.1766  | 38.6857 | 18.1652  | 112.1565 | 31.1652   | 17.2135
    Params (M)                | 0.4755  | 36.8598  | 46.3634  | 7.8608  | 1.4568   | 13.83    | 6.3403    | 5.3913
    Training time (min/epoch) | 3.72    | 4.88     | 4.74     | 5.90    | 5.72     | 5.48     | 4.62      | 4.12
    Test time (s/image)       | 0.2190  | 0.1478   | 0.1291   | 0.1252  | 0.2287   | 0.1426   | 0.1513    | 0.1463

    Table 3  Quantitative metrics of different denoising algorithms with Piglet as the training set and Mayo as the test set

    Method       | PSNR↑          | SSIM↑         | VIF↑          | MSE↓
    RED-CNN      | 30.0628±1.7213 | 0.8725±0.0411 | 0.4028±0.0568 | 0.00086±0.00032
    DESD-GAN     | 30.3409±1.9024 | 0.8789±0.0406 | 0.3867±0.0547 | 0.00093±0.00041
    CFMH-GAN     | 30.4258±2.2897 | 0.8684±0.0439 | 0.3509±0.0653 | 0.00097±0.00029
    TransCT      | 30.4974±1.6866 | 0.8709±0.0423 | 0.3762±0.0556 | 0.00096±0.00031
    CTformer     | 30.5032±2.3696 | 0.8727±0.0377 | 0.3848±0.0581 | 0.00098±0.00044
    LD2ND        | 31.1396±1.5978 | 0.8795±0.0349 | 0.4123±0.0515 | 0.00087±0.00037
    DenoMamba    | 31.2989±1.7121 | 0.8814±0.0326 | 0.4279±0.0537 | 0.00083±0.00036
    Wave-MambaCT | 31.5983±1.5216 | 0.8839±0.0315 | 0.4592±0.0547 | 0.00076±0.00033

    Table 4  Quantitative metrics of different denoising algorithms with Mayo as the training set and Piglet as the test set

    Method       | PSNR↑          | SSIM↑         | VIF↑          | MSE↓
    RED-CNN      | 31.4024±1.5122 | 0.9027±0.0175 | 0.4097±0.0468 | 0.00081±0.00038
    DESD-GAN     | 32.1595±1.4358 | 0.9139±0.0211 | 0.4303±0.0511 | 0.00076±0.00034
    CFMH-GAN     | 32.8126±1.3738 | 0.9233±0.0173 | 0.4369±0.0499 | 0.00069±0.00030
    TransCT      | 31.1629±1.4127 | 0.9037±0.0128 | 0.4178±0.0512 | 0.00087±0.00035
    CTformer     | 32.1805±1.2748 | 0.9146±0.0129 | 0.4417±0.0556 | 0.00077±0.00037
    LD2ND        | 32.4385±1.3039 | 0.9194±0.0137 | 0.4432±0.0569 | 0.00070±0.00034
    DenoMamba    | 33.0016±1.3076 | 0.9233±0.0128 | 0.4461±0.0582 | 0.00066±0.00029
    Wave-MambaCT | 33.2817±1.2037 | 0.9319±0.0131 | 0.4496±0.0529 | 0.00062±0.00031

    Table 5  Ablation study on the Mayo dataset

    Method | PSNR↑          | SSIM↑         | VIF↑          | MSE↓
    A      | 31.4121±1.6476 | 0.8817±0.0391 | 0.4459±0.0270 | 0.00079±0.00028
    B      | 31.5284±1.6782 | 0.8845±0.0385 | 0.4456±0.0258 | 0.00077±0.00030
    C      | 31.5741±1.7367 | 0.8849±0.0391 | 0.4619±0.0272 | 0.00075±0.00029
    D      | 31.5431±1.7321 | 0.8846±0.0392 | 0.4598±0.0256 | 0.00076±0.00032
    Ours   | 31.6528±1.6959 | 0.8851±0.0391 | 0.4629±0.0547 | 0.00074±0.00029

    Table 6  Comparison of module counts in the proposed network on the Mayo dataset

    Setting  | Stage 1 | Stage 2 | PSNR           | SSIM          | FLOPs (G) | Params (M)
    1        | 3       | 3       | 31.2542±1.7098 | 0.8761±0.0371 | 15.9218   | 4.7142
    2 (Ours) | 4       | 4       | 31.6528±1.6959 | 0.8851±0.0391 | 17.2135   | 5.3913
    3        | 5       | 5       | 31.4813±1.6546 | 0.8750±0.0405 | 21.6529   | 5.8974

    Table 7  Quantitative results for different coefficient ratios in the loss function

    Coefficient ratio | PSNR    | SSIM
    1:0.05:0.1        | 31.2517 | 0.8721
    1:0.05:0.15       | 31.6528 | 0.8851
    1:0.05:0.2        | 31.3219 | 0.8782
    1:0.01:0.15       | 31.4813 | 0.8812
    1:0.1:0.15        | 31.5235 | 0.8795
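Table 7 tunes the ratio of three loss-term coefficients, with 1:0.05:0.15 performing best. Such a weighted objective can be sketched as below; the individual loss terms are hypothetical placeholders, since their definitions are not given on this page.

```python
def total_loss(l_main, l_aux1, l_aux2, weights=(1.0, 0.05, 0.15)):
    """Combine three scalar loss values with fixed coefficients.
    The default weights follow the best 1 : 0.05 : 0.15 ratio of
    Table 7; the term names (l_main, l_aux1, l_aux2) are assumptions."""
    w0, w1, w2 = weights
    return w0 * l_main + w1 * l_aux1 + w2 * l_aux2

# e.g. total_loss(0.8, 0.4, 0.2) -> 0.8 + 0.05*0.4 + 0.15*0.2
```

Keeping the dominant weight on the main term while lightly weighting the auxiliary terms is the usual reason such ratios are swept, which matches the small values in the second and third positions.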
  • [1] ZHANG Quan. A study on some problems in image reconstruction for low-dose CT system[D]. Ph.D. dissertation, Southeast University, 2015. (in Chinese)
    [2] DE BASEA M B, THIERRY-CHEF I, HARBRON R, et al. Risk of hematological malignancies from CT radiation exposure in children, adolescents and young adults[J]. Nature Medicine, 2023, 29(12): 3111–3119. doi: 10.1038/s41591-023-02620-0.
    [3] CHEN Hu, ZHANG Yi, KALRA M K, et al. Low-dose CT with a residual encoder-decoder convolutional neural network (RED-CNN)[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2524–2535. doi: 10.1109/TMI.2017.2715284.
    [4] LIANG Tengfei, JIN Yi, LI Yidong, et al. EDCNN: Edge enhancement-based densely connected network with compound loss for low-dose CT denoising[C]. The 15th IEEE International Conference on Signal Processing, Beijing, China, 2020: 193–198, doi: 10.1109/ICSP48669.2020.9320928.
    [5] SAIDULU N and MUDULI P R. Asymmetric convolution-based GAN framework for low-dose CT image denoising[J]. Computers in Biology and Medicine, 2025, 190: 109965. doi: 10.1016/j.compbiomed.2025.109965.
    [6] ZHANG Xiong, YANG Linlin, SHANGGUAN Hong, et al. A low-dose CT image denoising method based on generative adversarial network and noise level estimation[J]. Journal of Electronics & Information Technology, 2021, 43(8): 2404–2413. doi: 10.11999/JEIT200591. (in Chinese)
    [7] HAN Zefang, SHANGGUAN Hong, ZHANG Xiong, et al. A dual-encoder-single-decoder based low-dose CT denoising network[J]. IEEE Journal of Biomedical and Health Informatics, 2022, 26(7): 3251–3260. doi: 10.1109/JBHI.2022.3155788.
    [8] HAN Zefang, SHANGGUAN Hong, ZHANG Xiong, et al. A coarse-to-fine multi-scale feature hybrid low-dose CT denoising network[J]. Signal Processing: Image Communication, 2023, 118: 117009. doi: 10.1016/j.image.2023.117009.
    [9] ZHAO Haoyu, GU Yuliang, ZHAO Zhou, et al. WIA-LD2ND: Wavelet-based image alignment for self-supervised low-dose CT denoising[C]. The 27th International Conference on Medical Image Computing and Computer Assisted Intervention, Marrakesh, Morocco, 2024: 764–774. doi: 10.1007/978-3-031-72104-5_73.
    [10] LUTHRA A, SULAKHE H, MITTAL T, et al. Eformer: Edge enhancement based transformer for medical image denoising[J]. arXiv preprint arXiv:2109.08044, 2021.
    [11] ZHANG Zhicheng, YU Lequan, LIANG Xiaokun, et al. TransCT: Dual-path transformer for low dose computed tomography[C]. The 24th International Conference on Medical Image Computing and Computer Assisted Intervention, Strasbourg, France, 2021: 55–64. doi: 10.1007/978-3-030-87231-1_6.
    [12] WANG Dayang, FAN Fenglei, WU Zhan, et al. CTformer: Convolution-free Token2Token dilated vision transformer for low-dose CT denoising[J]. Physics in Medicine & Biology, 2023, 68(6): 065012. doi: 10.1088/1361-6560/acc000.
    [13] JIAN Muwei, YU Xiaoyang, ZHANG Haoran, et al. SwinCT: Feature enhancement based low-dose CT images denoising with swin transformer[J]. Multimedia Systems, 2024, 30(1): 1. doi: 10.1007/s00530-023-01202-x.
    [14] LI Haoran, YANG Xiaomin, YANG Sihan, et al. Transformer with double enhancement for low-dose CT denoising[J]. IEEE Journal of Biomedical and Health Informatics, 2023, 27(10): 4660–4671. doi: 10.1109/JBHI.2022.3216887.
    [15] GU A and DAO T. Mamba: Linear-time sequence modeling with selective state spaces[J]. arXiv preprint arXiv:2312.00752, 2024.
    [16] DAO T and GU A. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality[C]. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria, 2024: 399.
    [17] LIU Yue, TIAN Yunjie, ZHAO Yuzhong, et al. VMamba: Visual state space model[C]. Proceedings of the 38th International Conference on Neural Information Processing Systems, Vancouver, Canada, 2024: 3273.
    [18] ÖZTÜRK Ş, DURAN O C, and ÇUKUR T. DenoMamba: A fused state-space model for low-dose CT denoising[J]. arXiv preprint arXiv:2409.13094, 2024.
    [19] LI Linxuan, WEI Wenjia, YANG Luyao, et al. CT-Mamba: A hybrid convolutional state space model for low-dose CT denoising[J]. Computerized Medical Imaging and Graphics, 2025, 124: 102595. doi: 10.1016/j.compmedimag.2025.102595.
    [20] XU Guoping, LIAO Wentao, ZHANG Xuan, et al. Haar wavelet downsampling: A simple but effective downsampling module for semantic segmentation[J]. Pattern Recognition, 2023, 143: 109819. doi: 10.1016/j.patcog.2023.109819.
    [21] AAPM. Low dose CT grand challenge[EB/OL]. http://www.aapm.org/GrandChallenge/LowDoseCT/, 2017.
    [22] Piglet dataset[EB/OL]. https://universe.roboflow.com/piglet-dataset, 2025.
    [23] YAN Ke, WANG Xiaosong, LU Le, et al. DeepLesion: Automated mining of large-scale lesion annotations and universal lesion detection with deep learning[J]. Journal of Medical Imaging, 2018, 5(3): 036501. doi: 10.1117/1.JMI.5.3.036501.
Publication history
  • Received: 2025-06-03
  • Revised: 2033-05-10
  • Published online: 2025-10-23
