
Infrared and Visible Image Fusion Method Based on Degradation Model

JIANG Yichun, LIU Yunqing, ZHAN Weida, ZHU Depeng

Citation: JIANG Yichun, LIU Yunqing, ZHAN Weida, ZHU Depeng. Infrared and Visible Image Fusion Method Based on Degradation Model[J]. Journal of Electronics & Information Technology, 2022, 44(12): 4405-4415. doi: 10.11999/JEIT211112


doi: 10.11999/JEIT211112
Funds: Special Project for Innovation Capacity Building of the Jilin Province Development and Reform Commission (2021C045-5)
Author information
    Biographies:

    JIANG Yichun: male, Ph.D. candidate; research interests include deep learning, infrared image super-resolution reconstruction, image enhancement, and image fusion

    LIU Yunqing: male, Ph.D. supervisor; research interests include digital signal processing, automatic control, and measurement and test technology

    ZHAN Weida: male, Ph.D. supervisor; research interests include digital image processing, infrared imaging technology, and automatic target recognition

    ZHU Depeng: male, Ph.D. candidate; research interests include infrared and visible image fusion, image registration, and target recognition

    Corresponding author: LIU Yunqing, liuyunqing@cust.edu.cn

  • CLC number: TN911.73; TP391

  • Abstract: Deep-learning-based infrared and visible image fusion algorithms rely on hand-crafted similarity functions to measure the similarity between inputs and output. This unsupervised learning scheme cannot fully exploit the ability of neural networks to extract deep features, which leads to unsatisfactory fusion results. To address this problem, this paper first proposes a new degradation model for infrared and visible image fusion, which regards the infrared and visible images as degraded observations of an ideal fused image produced by different degradation processes. Second, a data augmentation scheme that simulates image degradation is proposed, using high-definition datasets to generate large numbers of simulated degraded images for network training. Finally, a simple and efficient end-to-end network and its training framework are designed on the basis of the proposed degradation model. Experimental results show that the proposed method not only achieves good visual quality and quantitative metrics, but also effectively suppresses interference such as illumination, smoke, and noise.
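The degradation-simulating data augmentation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the kernel size, blur width, and noise level are assumed values, and the function names (`gaussian_kernel`, `simulate_degradation`) are hypothetical.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Build a normalized 2-D Gaussian blur kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def simulate_degradation(ideal, sigma=1.5, noise_std=0.02, seed=0):
    """Degrade an 'ideal' image in [0, 1]: blur with a Gaussian kernel,
    then add white Gaussian noise as a stand-in for sensor noise."""
    rng = np.random.default_rng(seed)
    k = gaussian_kernel(sigma=sigma)
    pad = k.shape[0] // 2
    padded = np.pad(ideal, pad, mode="reflect")
    blurred = np.zeros_like(ideal)
    h, w = ideal.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    noisy = blurred + rng.normal(0.0, noise_std, size=ideal.shape)
    return np.clip(noisy, 0.0, 1.0)

# Example: degrade a synthetic "ideal" gradient image
ideal = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
degraded = simulate_degradation(ideal)
```

Pairs of such degraded images and their clean originals can then serve as supervised training samples, which is the advantage the abstract claims over hand-crafted similarity functions.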
  • Figure 1  Proposed ideal-image degradation model

    Figure 2  Comparison of blur kernels

    Figure 3  Examples of simulated degraded images

    Figure 4  Architecture of the infrared and visible image fusion network

    Figure 5  Overall framework for network training

    Figure 6  Fusion results for scene 1

    Figure 7  Fusion results for scene 2

    Figure 8  Fusion results for scene 3

    Figure 9  Fusion results for scene 4

    Figure 10  Fusion results with different degradation processes

    Figure 11  Fusion results with different loss functions

    Table 1  Quantitative comparison of fusion algorithms on the TNO dataset

    | Method | PSNR | SSIM | RMSE | QAB/F | QCB | QCV | MI | CE | AG | EI |
    | CSR | 59.6039 | 1.5411 | 0.0749 | 0.5061 | 0.4184 | 1127.6 | 1.4239 | 2.3364 | 2.7586 | 28.1597 |
    | ADF | 59.6072 | 1.4897 | 0.0748 | 0.4100 | 0.4294 | 1109.4 | 1.3295 | 2.2883 | 3.0174 | 30.0279 |
    | CBF | 59.0803 | 1.1989 | 0.0901 | 0.4386 | 0.4149 | 1232.6 | 1.7015 | 1.7311 | 5.1555 | 53.3435 |
    | CNN | 59.7454 | 1.4630 | 0.0856 | 0.5583 | 0.4273 | 1290.5 | 1.7308 | 1.5476 | 3.7210 | 37.6166 |
    | DenseFuse | 59.6121 | 1.5503 | 0.0747 | 0.3274 | 0.3975 | 1132.5 | 1.5057 | 1.9123 | 2.0987 | 21.2670 |
    | FusionGAN | 57.1494 | 1.2190 | 0.1319 | 0.2243 | 0.3338 | 2311.5 | 0.9305 | 2.7054 | 2.0468 | 21.0156 |
    | GANMcC | 58.2399 | 1.3488 | 0.1077 | 0.2486 | 0.3656 | 1510.7 | 1.5257 | 2.3445 | 2.1575 | 22.3753 |
    | IFCNN | 59.4001 | 1.4637 | 0.0833 | 0.4718 | 0.4239 | 878.5 | 1.7012 | 2.3640 | 3.8218 | 38.0881 |
    | SEDR | 58.7989 | 1.4187 | 0.0947 | 0.4410 | 0.4148 | 1016.6 | 1.7085 | 2.0540 | 3.5073 | 35.5028 |
    | Proposed | 60.2263 | 1.5615 | 0.0650 | 0.3997 | 0.4296 | 1169.2 | 1.7760 | 1.4619 | 3.2250 | 33.7834 |
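For reference, the PSNR and RMSE columns above can be computed as in the following minimal sketch. It assumes 8-bit images with peak value 255; the journal's exact implementation and dynamic range may differ.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images of equal shape."""
    return float(np.sqrt(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    e = rmse(a, b)
    if e == 0:
        return float("inf")
    return 20.0 * np.log10(peak / e)

a = np.zeros((8, 8), dtype=np.uint8)
b = a + 1  # every pixel off by exactly one gray level
print(round(psnr(a, b), 2))  # 48.13 dB, since RMSE is 1
```

In the fusion setting these metrics are evaluated between the fused output and each source image (or a reference), so higher PSNR and lower RMSE indicate better fidelity.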

    Table 2  Quantitative comparison of fusion algorithms on the VIFB dataset

    | Method | PSNR | SSIM | RMSE | QAB/F | QCB | QCV | MI | CE | AG | EI |
    | CSR | 58.3143 | 1.4293 | 0.1178 | 0.5957 | 0.4909 | 748.8 | 1.9155 | 1.4851 | 5.1440 | 52.8505 |
    | ADF | 58.4053 | 1.4001 | 0.1043 | 0.5202 | 0.4743 | 777.8 | 1.9211 | 1.4641 | 4.5821 | 46.5293 |
    | CBF | 57.5951 | 1.1711 | 0.1257 | 0.5786 | 0.5263 | 1575.3 | 2.1612 | 0.9946 | 7.1541 | 74.5901 |
    | CNN | 57.9323 | 1.3914 | 0.1178 | 0.6580 | 0.6221 | 512.6 | 2.6533 | 1.0301 | 5.8082 | 60.2415 |
    | DenseFuse | 58.4449 | 1.4586 | 0.1035 | 0.3637 | 0.4386 | 763.2 | 2.0259 | 1.3293 | 3.5263 | 35.9694 |
    | FusionGAN | 57.4476 | 1.3001 | 0.1295 | 0.2395 | 0.3641 | 1632.0 | 1.5988 | 2.2331 | 3.0546 | 31.7554 |
    | GANMcC | 57.5574 | 1.3471 | 0.1258 | 0.3029 | 0.3986 | 1012.6 | 1.9665 | 1.9955 | 3.2732 | 34.5361 |
    | IFCNN | 56.4970 | 1.1285 | 0.1535 | 0.3339 | 0.4507 | 1021.9 | 2.1392 | 2.5603 | 3.9612 | 40.5405 |
    | SEDR | 57.7989 | 1.4187 | 0.0947 | 0.4410 | 0.4148 | 1016.6 | 1.7085 | 2.0540 | 3.5073 | 35.5028 |
    | Proposed | 58.6251 | 1.4815 | 0.0850 | 0.5936 | 0.5149 | 669.2 | 2.7560 | 0.9544 | 3.2250 | 33.7834 |

    Table 3  Effect of each degradation process on fusion results

    | Case | PSNR | SSIM | RMSE | QAB/F | QCB | QCV | MI | CE | AG | EI |
    | Scheme 1 | 59.7091 | 1.2766 | 0.0764 | 0.3588 | 0.4171 | 1761.7 | 1.5652 | 1.4759 | 4.8728 | 52.6447 |
    | Scheme 2 | 59.7536 | 1.4893 | 0.0727 | 0.4059 | 0.4256 | 1537.8 | 1.4174 | 1.5975 | 3.5434 | 36.5904 |
    | Scheme 3 | 59.4052 | 1.4518 | 0.0782 | 0.2327 | 0.3686 | 1665.1 | 1.2652 | 1.7959 | 1.8075 | 18.5773 |
    | Full model (proposed) | 60.2263 | 1.5615 | 0.0650 | 0.3997 | 0.4296 | 1169.2 | 1.7760 | 1.4619 | 3.2250 | 33.7834 |

    Table 4  Effect of different loss functions on fusion results

    | Case | PSNR | SSIM | RMSE | QAB/F | QCB | QCV | MI | CE | AG | EI |
    | Pixel loss only | 60.0821 | 1.5826 | 0.0684 | 0.4045 | 0.4223 | 1682.4 | 2.0132 | 1.2808 | 2.9093 | 30.8049 |
    | Perceptual loss only | 60.2380 | 1.4970 | 0.0652 | 0.3676 | 0.4309 | 1342.6 | 1.7313 | 1.1493 | 3.4832 | 36.7749 |
    | Mixed loss | 60.2263 | 1.5615 | 0.0650 | 0.3997 | 0.4296 | 1169.2 | 1.7760 | 1.4619 | 3.2250 | 33.7834 |
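The pixel/perceptual/mixed comparison in Table 4 can be illustrated with a minimal numpy sketch. The "feature extractor" here is a single fixed Laplacian filter standing in for the deep-network activations a real perceptual loss would use, and the weights `alpha` and `beta` are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Fixed 3x3 Laplacian filter as a toy "feature extractor"
# (a real perceptual loss would use pretrained network activations).
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)

def features(img):
    """Convolve with the Laplacian to get an edge-response 'feature map'."""
    padded = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * LAPLACIAN)
    return out

def pixel_loss(pred, target):
    """L1 distance in pixel space."""
    return float(np.mean(np.abs(pred - target)))

def perceptual_loss(pred, target):
    """MSE between feature maps rather than raw pixels."""
    return float(np.mean((features(pred) - features(target)) ** 2))

def mixed_loss(pred, target, alpha=1.0, beta=0.1):
    """Weighted sum of the two terms (weights are assumptions)."""
    return alpha * pixel_loss(pred, target) + beta * perceptual_loss(pred, target)
```

The intuition matching Table 4: the pixel term anchors absolute intensities (high PSNR/SSIM), the perceptual term preserves structure, and the mixed loss trades a little of each for the best overall balance.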
  • [1] LI Shutao, KANG Xudong, and HU Jianwen. Image fusion with guided filtering[J]. IEEE Transactions on Image Processing, 2013, 22(7): 2864–2875. doi: 10.1109/TIP.2013.2244222
    [2] LIU Chunhui and DING Wenrui. Variational model for infrared and visible light image fusion with saliency preservation[J]. Journal of Electronic Imaging, 2019, 28(2): 023023. doi: 10.1117/1.JEI.28.2.023023
    [3] ZHAO Jufeng, ZHOU Qiang, CHEN Yueting, et al. Fusion of visible and infrared images using saliency analysis and detail preserving based image decomposition[J]. Infrared Physics & Technology, 2013, 56: 93–99. doi: 10.1016/j.infrared.2012.11.003
    [4] HE Guiqing, XING Siyuan, HE Xingjian, et al. Image fusion method based on simultaneous sparse representation with non-subsampled contourlet transform[J]. IET Computer Vision, 2019, 13(2): 240–248. doi: 10.1049/iet-cvi.2018.5496
    [5] YANG Bin and LI Shutao. Visual attention guided image fusion with sparse representation[J]. Optik, 2014, 125(17): 4881–4888. doi: 10.1016/j.ijleo.2014.04.036
    [6] WANG Jun, PENG Jinye, FENG Xiaoyi, et al. Fusion method for infrared and visible images by using non-negative sparse representation[J]. Infrared Physics & Technology, 2014, 67: 477–489. doi: 10.1016/j.infrared.2014.09.019
    [7] LEI Dajiang, DU Jiahao, ZHANG Liping, et al. Multi-stream architecture and multi-scale convolutional neural network for remote sensing image fusion[J]. Journal of Electronics & Information Technology, 2022, 44(1): 237–244. doi: 10.11999/JEIT200792
    [8] CHEN Shuzhen, CAO Shipeng, CUI Meiyue, et al. Image blind deblurring algorithm based on deep multi-level wavelet transform[J]. Journal of Electronics & Information Technology, 2021, 43(1): 154–161. doi: 10.11999/JEIT190947
    [9] LI Minghong, CHANG Kan, LI Hengxin, et al. Lightweight image super-resolution network via two-stage information distillation[J]. Journal of Image and Graphics, 2021, 26(5): 991–1005. doi: 10.11834/jig.200265
    [10] PRABHAKAR K R, SRIKAR V S, and BABU R V. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]. The 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 4714–4722.
    [11] LI Hui and WU Xiaojun. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614–2623. doi: 10.1109/TIP.2018.2887342
    [12] LIU Yu, CHEN Xun, PENG Hu, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 2017, 36: 191–207. doi: 10.1016/j.inffus.2016.12.001
    [13] XU Han, MA Jiayi, JIANG Junjun, et al. U2Fusion: A unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502–518. doi: 10.1109/TPAMI.2020.3012548
    [14] MA Jiayi, TANG Linfeng, XU Meilong, et al. STDFusionNet: An infrared and visible image fusion network based on salient target detection[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 5009513. doi: 10.1109/TIM.2021.3075747
    [15] ZHU Depeng, ZHAN Weida, JIANG Yichun, et al. MIFFuse: A multi-level feature fusion network for infrared and visible images[J]. IEEE Access, 2021, 9: 130778–130792. doi: 10.1109/ACCESS.2021.3111905
    [16] JIANG Yichun, LIU Yunqing, ZHAN Weida, et al. Lightweight dual-stream residual network for single image super-resolution[J]. IEEE Access, 2021, 9: 129890–129901. doi: 10.1109/ACCESS.2021.3112002
    [17] JIANG Haijun, CHEN Fei, LIU Xining, et al. Thermal wave image deblurring based on depth residual network[J]. Infrared Physics & Technology, 2021, 117: 103847. doi: 10.1016/j.infrared.2021.103847
    [18] HAN J, LEE H, and KANG M G. Thermal image restoration based on LWIR sensor statistics[J]. Sensors, 2021, 21(16): 5443. doi: 10.3390/s21165443
    [19] ZHANG Kai, ZUO Wangmeng, and ZHANG Lei. Learning a single convolutional super-resolution network for multiple degradations[C]. The 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 3262–3271.
    [20] IGNATOV A, TIMOFTE R, VAN VU T, et al. PIRM challenge on perceptual image enhancement on smartphones: Report[C]. The European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 2018: 315–333.
    [21] WANG Zhou, BOVIK A C, SHEIKH H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600–612. doi: 10.1109/TIP.2003.819861
    [22] ZHANG Xingchen, YE Ping, and XIAO Gang. VIFB: A visible and infrared image fusion benchmark[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, USA, 2020: 104–105.
    [23] LIU Yu, CHEN Xun, WARD R K, et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 2016, 23(12): 1882–1886. doi: 10.1109/LSP.2016.2618776
    [24] BAVIRISETTI D P and DHULI R. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform[J]. IEEE Sensors Journal, 2016, 16(1): 203–209. doi: 10.1109/JSEN.2015.2478655
    [25] SHREYAMSHA KUMAR B K. Image fusion based on pixel significance using cross bilateral filter[J]. Signal, Image and Video Processing, 2015, 9(5): 1193–1204. doi: 10.1007/s11760-013-0556-9
    [26] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11–26. doi: 10.1016/j.inffus.2018.09.004
    [27] MA Jiayi, ZHANG Hao, SHAO Zhenfeng, et al. GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 5005014. doi: 10.1109/TIM.2020.3038013
    [28] ZHANG Yu, LIU Yu, SUN Peng, et al. IFCNN: A general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99–118. doi: 10.1016/j.inffus.2019.07.011
    [29] JIAN Lihua, YANG Xiaomin, LIU Zheng, et al. SEDRFuse: A symmetric encoder–decoder with residual block network for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 5002215. doi: 10.1109/TIM.2020.3022438
Publication history
  • Received: 2021-10-11
  • Revised: 2022-04-29
  • Published online: 2022-05-08
  • Issue date: 2022-12-16
