Infrared and Visible Image Fusion Based on Improved Dual Path Generation Adversarial Network

YANG Shen, TIAN Lifan, LIANG Jiaming, HUANG Zefeng

Citation: YANG Shen, TIAN Lifan, LIANG Jiaming, HUANG Zefeng. Infrared and Visible Image Fusion Based on Improved Dual Path Generation Adversarial Network[J]. Journal of Electronics & Information Technology, 2023, 45(8): 3012-3021. doi: 10.11999/JEIT220819


doi: 10.11999/JEIT220819
Funds: The National Natural Science Foundation of China (61702384), The Foundation of Wuhan University of Science and Technology (2017xz008)
Article information
    About the authors:

    YANG Shen: Female, Associate Professor, Ph.D. in Engineering. Her research interests include multimedia communication and signal processing

    TIAN Lifan: Male, M.S. in Engineering. His research interests include image processing and pattern recognition

    LIANG Jiaming: Male, M.S. in Engineering. His research interests include image processing and pattern recognition

    HUANG Zefeng: Male, M.S. in Engineering. His research interests include image processing and pattern recognition

    Corresponding author:

    TIAN Lifan, 1811108099@qq.com

  • CLC number: TN911.73

  • Abstract: To make the fused image retain more information from the source images, an end-to-end Generative Adversarial Network (GAN) with dual fusion paths is proposed. First, the generator adopts a dual-path densely connected network with identical structure but independent parameters, building an infrared-difference path and a visible-difference path to raise the contrast of the fused image, and a channel attention mechanism is introduced so that the network focuses more on typical infrared targets and visible texture details. Second, the two source images are fed directly into every layer of the network to extract more source-image feature information. Finally, exploiting the complementarity among loss functions, a difference intensity loss, a difference gradient loss, and a structural similarity loss are added to obtain a fused image with stronger contrast. Experiments show that, compared with related fusion algorithms such as the GAN with Multiclassification Constraints (GANMcC) and the residual fusion network (RFnest), the proposed method not only achieves the best results on multiple evaluation metrics but also produces fused images with better visual quality, more consistent with human visual perception.
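The three loss terms named in the abstract (difference intensity, difference gradient, structural similarity) suggest the following shape for the generator objective. This is a minimal, hedged PyTorch sketch, not the paper's formulation: the weights `w_int`, `w_grad`, `w_ssim`, the mean-filter SSIM, and the max-gradient target are illustrative assumptions, and the paper defines its intensity and gradient terms on the difference images of its two paths.

```python
import torch
import torch.nn.functional as F

def _grad(img):
    # First-order finite differences as a simple gradient operator.
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return dx, dy

def _ssim(x, y, win=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # Mean-filter SSIM (a Gaussian window is more common but longer).
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def fusion_loss(fused, ir, vis, w_int=1.0, w_grad=10.0, w_ssim=1.0):
    """Composite generator loss: intensity + gradient + SSIM terms (sketch)."""
    # Intensity: keep fused pixel intensities close to both sources.
    l_int = F.mse_loss(fused, ir) + F.mse_loss(fused, vis)
    # Gradient: match the elementwise-stronger source gradient.
    fx, fy = _grad(fused)
    ix, iy = _grad(ir)
    vx, vy = _grad(vis)
    l_grad = F.l1_loss(fx, torch.where(ix.abs() > vx.abs(), ix, vx)) \
           + F.l1_loss(fy, torch.where(iy.abs() > vy.abs(), iy, vy))
    # Structure: penalize loss of structural similarity to each source.
    l_ssim = (1 - _ssim(fused, ir)) + (1 - _ssim(fused, vis))
    return w_int * l_int + w_grad * l_grad + w_ssim * l_ssim
```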
  • Figure 1  Source images and their difference images

    Figure 2  Overall fusion framework

    Figure 3  Network structure of the generator

    Figure 4  Channel attention module (a minimal code sketch follows this list)

    Figure 5  Network structure of the discriminator

    Figure 6  Five pairs of infrared and visible source images

    Figure 7  Qualitative comparison results

    Figure 8  Quantitative evaluation metrics for the 20 image sets

    Figure 9  Qualitative comparison results of the four fusion models
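The channel attention module of Figure 4 itself is not reproduced on this page; for reference, below is a minimal PyTorch sketch of a squeeze-and-excitation-style channel attention block. The reduction ratio, pooling choice, and gating are assumptions of this sketch, not necessarily the module used in the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative).

    Hypothetical stand-in for the module in Figure 4; the paper's
    reduction ratio, pooling, and gating details may differ.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))            # squeeze: global average pool -> (B, C)
        w = self.fc(w).view(b, c, 1, 1)   # excite: per-channel gate in (0, 1)
        return x * w                      # reweight the feature channels


# Usage: gate a 64-channel feature map from one of the dense paths.
feat = torch.randn(1, 64, 128, 128)
out = ChannelAttention(64)(feat)          # same shape, channels reweighted
```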

    Table 1  Quantitative comparison results on the TNO dataset

    Method     SF      AG      EI       EN      VIF     Var
    DWT        6.8154  2.6473  26.0032  6.3753  0.2901  24.9950
    DBN        6.1192  2.4574  24.8012  6.3375  0.2814  24.3822
    DIDF       7.5609  2.9884  29.5566  6.5825  0.3417  30.0428
    FusionGAN  6.2395  2.4168  24.1424  6.5761  0.2575  31.1204
    GANMcC     6.1391  2.5457  25.8946  6.7474  0.4217  33.6386
    MFEIF      7.2104  2.9034  29.3522  6.6568  0.3587  33.0184
    RFnest     5.8727  2.6821  28.6441  6.9907  0.5133  37.2477
    Ours       9.0860  3.5805  35.1696  7.0731  0.4112  33.6727
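For readers reproducing these tables, the SF, AG, and EN columns follow standard definitions from the cited metric literature (SF from [16], EN from [22]); the sketch below is a minimal NumPy version and may differ from the paper's exact evaluation code in normalization details.

```python
import numpy as np

def spatial_frequency(img: np.ndarray) -> float:
    # SF: root of summed mean-squared row/column first differences [16].
    img = np.asarray(img, dtype=np.float64)   # avoid uint8 wraparound
    rf = np.mean(np.diff(img, axis=1) ** 2)   # row frequency (horizontal)
    cf = np.mean(np.diff(img, axis=0) ** 2)   # column frequency (vertical)
    return float(np.sqrt(rf + cf))

def average_gradient(img: np.ndarray) -> float:
    # AG: mean local gradient magnitude over the image.
    img = np.asarray(img, dtype=np.float64)
    dx = np.diff(img, axis=1)[:-1, :]         # crop to a common shape
    dy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2)))

def entropy(img: np.ndarray) -> float:
    # EN: Shannon entropy of the 256-bin grey-level histogram [22].
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```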

    Table 2  Quantitative comparison results on the FLIR dataset

    Method     SF      AG      EI       EN      VIF     Var
    DWT        9.0511  3.5425  37.0015  6.8426  0.3336  31.4294
    DBN        8.3459  3.3623  35.3199  6.7845  0.3306  31.0446
    DIDF       9.3434  3.6905  38.6017  6.7863  0.2943  31.5181
    FusionGAN  8.1142  3.2045  34.4298  7.0167  0.2892  37.4859
    GANMcC     8.6665  3.6744  39.4219  7.2089  0.4269  42.4833
    MFEIF      9.4752  3.7719  39.8841  7.0171  0.3807  37.8447
    RFnest     7.6279  3.3103  36.2151  7.2968  0.4503  44.1210
    Ours       9.7488  4.1359  44.1298  7.4163  0.4394  47.7148

    Table 3  Runtime comparison of the different fusion methods (s)

    Method     Time (s)
    DWT        40.0470
    DBN        7.6031
    DIDF       10.0652
    FusionGAN  18.4680
    GANMcC     31.8253
    MFEIF      15.4883
    RFnest     19.8578
    Ours       6.0944

    Table 4  Quantitative comparison results of the four fusion models

    Model        SF      AG      EI       EN      VIF     Var
    No_both      4.3894  1.5310  15.7062  6.5230  0.1749  34.9732
    No_dir       5.6119  2.2964  23.0950  6.7165  0.3504  41.0505
    No_resource  4.9631  1.7396  18.6436  6.6073  0.2943  39.2500
    Ours         6.9145  2.8615  29.0777  7.1594  0.4093  48.0217
  • [1] GOSHTASBY A A and NIKOLOV S. Image fusion: Advances in the state of the art[J]. Information Fusion, 2007, 8(2): 114–118. doi: 10.1016/j.inffus.2006.04.001
    [2] TOET A, HOGERVORST M A, NIKOLOV S G, et al. Towards cognitive image fusion[J]. Information Fusion, 2010, 11(2): 95–113. doi: 10.1016/j.inffus.2009.06.008
    [3] ZHU Haoran, LIU Yunqing, and ZHANG Wenying. Infrared and visible image fusion based on contrast enhancement and multi-scale edge-preserving decomposition[J]. Journal of Electronics & Information Technology, 2018, 40(6): 1294–1300. doi: 10.11999/JEIT170956
    [4] GAO Yuan, MA Jiayi, and YUILLE A L. Semi-supervised sparse representation based classification for face recognition with insufficient labeled samples[J]. IEEE Transactions on Image Processing, 2017, 26(5): 2545–2560. doi: 10.1109/TIP.2017.2675341
    [5] LIU C H, QI Y, and DING W R. Infrared and visible image fusion method based on saliency detection in sparse domain[J]. Infrared Physics & Technology, 2017, 83: 94–102. doi: 10.1016/j.infrared.2017.04.018
    [6] HE Changtao, LIU Quanxi, LI Hongliang, et al. Multimodal medical image fusion based on IHS and PCA[J]. Procedia Engineering, 2010, 7: 280–285. doi: 10.1016/j.proeng.2010.11.045
    [7] ZHANG Jiesong, HUANG Yingping, and ZHANG Rui. Fusing point cloud with image for object detection using convolutional neural networks[J]. Opto-electronic Engineering, 2021, 48(5): 200418. doi: 10.12086/oee.2021.200418
    [8] CHEN Yong, ZHANG Jiaojiao, and WANG Zhen. Infrared and visible image fusion based on multi-scale dense attention connection network[J]. Optics and Precision Engineering, 2022, 30(18): 2253–2266. doi: 10.37188/OPE.20223018.2253
    [9] AN Wenbo and WANG Hongmei. Infrared and visible image fusion with supervised convolutional neural network[J]. Optik, 2020, 219: 165120. doi: 10.1016/j.ijleo.2020.165120
    [10] LI Jing, HUO Hongtao, LIU Kejian, et al. Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance[J]. Information Sciences, 2020, 529: 28–41. doi: 10.1016/j.ins.2020.04.035
    [11] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11–26. doi: 10.1016/j.inffus.2018.09.004
    [12] MA Jiayi, ZHANG Hao, SHAO Zhenfeng, et al. GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 5005014. doi: 10.1109/TIM.2020.3038013
    [13] QU Guihong, ZHANG Dali, and YAN Pingfan. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313–315. doi: 10.1049/el:20020212
    [14] XYDEAS C S and PETROVIĆ V. Objective image fusion performance measure[J]. Electronics Letters, 2000, 36(4): 308–309. doi: 10.1049/el:20000267
    [15] CUI Guangmang, FENG Huajun, XU Zhihai, et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition[J]. Optics Communications, 2015, 341: 199–209. doi: 10.1016/j.optcom.2014.12.032
    [16] ESKICIOGLU A M and FISHER P S. Image quality measures and their performance[J]. IEEE Transactions on Communications, 1995, 43(12): 2959–2965. doi: 10.1109/26.477498
    [17] LI H, MANJUNATH B S, and MITRA S K. Multisensor image fusion using the wavelet transform[J]. Graphical Models and Image Processing, 1995, 57(3): 235–245. doi: 10.1006/gmip.1995.1022
    [18] FU Yu and WU Xiaojun. A dual-branch network for infrared and visible image fusion[C]. 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 2021: 10675–10680.
    [19] ZHAO Zixiang, XU Shuang, ZHANG Chunxia, et al. DIDFuse: Deep image decomposition for infrared and visible image fusion[C]. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, 2020: 970–976.
    [20] LIU Jinyuan, FAN Xin, JIANG Ji, et al. Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(1): 105–119. doi: 10.1109/TCSVT.2021.3056725
    [21] LI Hui, WU Xiaojun, and KITTLER J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72–86. doi: 10.1016/j.inffus.2021.02.023
    [22] ROBERTS J W, AARDT J A V, and AHMED F B. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2008, 2(1): 023522. doi: 10.1117/1.2945910
    [23] RAO Yunjiang. In-fibre Bragg grating sensors[J]. Measurement Science and Technology, 1997, 8(4): 355–375. doi: 10.1088/0957-0233/8/4/002
    [24] HAN Yu, CAI Yunze, CAO Yin, et al. A new image fusion performance metric based on visual information fidelity[J]. Information Fusion, 2013, 14(2): 127–135. doi: 10.1016/j.inffus.2011.08.002
    [25] ZHANG Xingchen, YE Ping, and XIAO Gang. VIFB: A visible and infrared image fusion benchmark[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, USA, 2020: 468–478.
Figures (9) / Tables (4)
Publication history
  • Received: 2022-06-21
  • Revised: 2023-01-15
  • Published online: 2023-02-03
  • Issue published: 2023-08-21
