Multi-Scenario Aware Infrared and Visible Image Fusion Framework Based on Visual Multi-Pathway Mechanism

GAO Shaobing, ZHAN Zongyi, KUANG Mei

Citation: GAO Shaobing, ZHAN Zongyi, KUANG Mei. Multi-Scenario Aware Infrared and Visible Image Fusion Framework Based on Visual Multi-Pathway Mechanism[J]. Journal of Electronics & Information Technology, 2023, 45(8): 2749-2758. doi: 10.11999/JEIT221361


doi: 10.11999/JEIT221361
Funds: The National Natural Science Foundation of China (62076170); The Open Project of the Intelligent Terminal Key Laboratory of Sichuan Province (SCITLAB-20001)
Article Information
    About the authors:

    GAO Shaobing: male, Ph.D., Associate Research Fellow; research interests include brain-inspired intelligence, visual cognitive computing, and image processing

    ZHAN Zongyi: male, M.S. candidate; research interests include image fusion and computer vision

    KUANG Mei: female, M.S. candidate; research interests include medical image processing and information fusion

    Corresponding author:

    ZHAN Zongyi, zhan_zongyi@163.com

  • CLC number: TN911.73; TP391

  • Abstract: Existing infrared and visible image fusion algorithms usually treat fusion in daytime scenes and fusion in nighttime scenes as the same problem. This ignores the differences between fusing images in the two settings and limits fusion performance. The strong adaptivity of the biological visual system allows it to capture the maximum amount of useful information from input visual stimuli across different scenes and to process visual information adaptively, which may inspire new ideas for building infrared and visible image fusion algorithms with better performance. To address this problem, this paper proposes a multi-scenario aware infrared and visible image fusion framework inspired by the visual multi-pathway mechanism. Motivated by the multi-pathway property of biological vision, the framework contains two information-processing pathways that perceive daytime scene information and nighttime scene information, respectively: the source images are first fed into the daytime-aware and nighttime-aware fusion networks to obtain two intermediate fused images, and a learnable weighting network then generates the final fused image. In addition, a center-surround convolution module that models the center-surround receptive field structure widely found in biological vision is designed and applied in the proposed framework. Qualitative and quantitative experimental results show that the proposed method markedly improves the subjective quality of the fused images and outperforms existing fusion algorithms on objective evaluation metrics.
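The abstract outlines the framework's structure: two scene-specific fusion pathways, a learnable weighting network that blends their intermediate outputs, and a center-surround convolution module. Below is a minimal PyTorch sketch of that structure for illustration only; the module names (CenterSurroundConv, TwoPathwayFusion), layer sizes, and the center-minus-surround formulation are assumptions on my part, and the paper's actual architectures are those shown in Figs. 2-4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterSurroundConv(nn.Module):
    # Hypothetical center-surround convolution: a small-kernel "center"
    # branch antagonized by a large-kernel "surround" branch, loosely
    # mimicking the center-surround receptive fields the paper models.
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.center = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.surround = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu(self.center(x) - self.surround(x))

class TwoPathwayFusion(nn.Module):
    # Two scene-specific fusion networks plus a learnable weighting network
    # that blends their intermediate results into the final fused image.
    def __init__(self):
        super().__init__()
        self.day_net = nn.Sequential(CenterSurroundConv(2, 16), nn.Conv2d(16, 1, 3, padding=1))
        self.night_net = nn.Sequential(CenterSurroundConv(2, 16), nn.Conv2d(16, 1, 3, padding=1))
        self.weight_net = nn.Conv2d(2, 1, 3, padding=1)

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        x = torch.cat([ir, vis], dim=1)          # stack the two source images
        f_day = self.day_net(x)                  # daytime-aware intermediate fusion
        f_night = self.night_net(x)              # nighttime-aware intermediate fusion
        w = torch.sigmoid(self.weight_net(torch.cat([f_day, f_night], dim=1)))
        return w * f_day + (1.0 - w) * f_night   # pixel-wise weighted combination

# Example: fuse a pair of single-channel 256x256 images.
model = TwoPathwayFusion()
fused = model(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
```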
  • Figure 1. Fusion results of the hybrid, daytime, and nighttime models in daytime and nighttime scenes

    Figure 2. Overall framework of the proposed infrared and visible image fusion method

    Figure 3. Network architecture of the fusion model

    Figure 4. Network architecture of the weighting model

    Figure 5. Framework of the network training strategy

    Figure 6. Qualitative comparison of fusion algorithms on MSRS test cases

    Figure 7. Qualitative comparison of fusion algorithms on TNO test cases

    Table 1. Results of the statistical tests in the validation experiment

    Condition | EN | SF | SD | VIF | AG
    Daytime test set: the daytime model outperforms the hybrid model | $p = 1.36\times10^{-8}$ | $p = 2.59\times10^{-16}$ | $p = 8.90\times10^{-15}$ | $p = 2.24\times10^{-5}$ | $p = 5.18\times10^{-11}$
    Hybrid test set: the daytime or nighttime model outperforms the hybrid model | $p = 2.19\times10^{-30}$ | $p = 5.08\times10^{-16}$ | $p = 1.34\times10^{-23}$ | $p = 6.32\times10^{-17}$ | $p = 4.17\times10^{-15}$
    Nighttime test set: the nighttime model outperforms the hybrid model | $p = 2.57\times10^{-27}$ | $p = 1.21\times10^{-2}$ | $p = 7.38\times10^{-24}$ | $p = 1.81\times10^{-20}$ | $p = 1.45\times10^{-10}$
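Table 1 reports one-sided p-values per metric. The page does not state which test was used; a paired test over per-image metric scores, such as the Wilcoxon signed-rank test, is a common choice for this kind of model comparison. A sketch under that assumption, with placeholder scores rather than the paper's data:

```python
import numpy as np
from scipy import stats

# Per-image scores of one metric (e.g. EN) for two models on the same test set.
# These values are placeholders, not results from the paper.
day_scores = np.array([7.01, 6.95, 7.10, 6.88, 7.03])
hybrid_scores = np.array([6.80, 6.85, 6.90, 6.70, 6.95])

# One-sided paired test of "daytime model outperforms hybrid model".
stat, p = stats.wilcoxon(day_scores, hybrid_scores, alternative="greater")
print(f"p = {p:.3g}")
```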

    Table 2. Quantitative evaluation on the MSRS dataset

    Method | EN | SF | SD | VIF | AG | MI | Q^{AB/F} | SSIM | MS-SSIM | FMI_pixel | FMI_w
    CSR | 5.9478 | 0.0345 | 7.3181 | 0.7069 | 2.7030 | 2.3414 | 0.5776 | 0.9653 | 0.9433 | 0.9264 | 0.3167
    GTF | 5.4618 | 0.0314 | 6.3479 | 0.5936 | 2.3857 | 1.7041 | 0.3939 | 0.9085 | 0.8542 | 0.9119 | 0.3543
    DenseFuse | 6.6146 | 0.0246 | 8.4964 | 0.7482 | 2.3777 | 2.6214 | 0.3006 | 0.8931 | 0.9116 | 0.8881 | 0.2075
    FusionGAN | 5.6367 | 0.0192 | 6.3723 | 0.5908 | 1.7005 | 1.9360 | 0.1476 | 0.7984 | 0.6711 | 0.8914 | 0.2990
    PMGI | 6.4399 | 0.0350 | 8.1380 | 0.7187 | 3.2519 | 2.1371 | 0.4327 | 0.9259 | 0.8657 | 0.8867 | 0.3624
    GANMcC | 6.2789 | 0.0235 | 8.6547 | 0.6760 | 2.1591 | 2.5863 | 0.2825 | 0.8843 | 0.8525 | 0.8966 | 0.3402
    RFN-Nest | 6.6113 | 0.0275 | 8.4071 | 0.7692 | 2.5701 | 2.5292 | 0.4351 | 0.9254 | 0.9226 | 0.9048 | 0.2745
    Proposed | 7.0326 | 0.0480 | 9.2206 | 1.0310 | 4.0374 | 5.1835 | 0.6625 | 0.9490 | 0.9486 | 0.9202 | 0.3655
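For reference, the simpler columns of Tables 2-4 have widely used definitions: EN is the Shannon entropy of the intensity histogram, SD the standard deviation of pixel intensities, and AG the average gradient magnitude. A sketch of these three under one common convention (exact formulas, especially for AG, vary slightly across papers):

```python
import numpy as np

def entropy(img: np.ndarray) -> float:
    # EN: Shannon entropy of the 8-bit intensity histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def std_dev(img: np.ndarray) -> float:
    # SD: standard deviation of pixel intensities.
    return float(np.std(img))

def avg_gradient(img: np.ndarray) -> float:
    # AG: mean magnitude of the horizontal/vertical intensity gradients.
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))
```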

    Table 3. Quantitative evaluation on the TNO dataset

    Method | EN | SF | SD | VIF | AG | MI | Q^{AB/F} | SSIM | MS-SSIM | FMI_pixel | FMI_w
    CSR | 6.4881 | 0.0344 | 8.7811 | 0.6928 | 3.2025 | 2.0349 | 0.5284 | 0.9428 | 0.9037 | 0.9144 | 0.3837
    GTF | 6.8816 | 0.0354 | 9.5738 | 0.6228 | 3.2516 | 2.7606 | 0.4031 | 0.8766 | 0.8164 | 0.9042 | 0.4408
    DenseFuse | 6.9883 | 0.0222 | 9.4056 | 0.7895 | 2.5622 | 2.0975 | 0.2745 | 0.8432 | 0.8965 | 0.8928 | 0.1998
    FusionGAN | 6.6321 | 0.0244 | 0.8378 | 0.6583 | 2.3133 | 2.3870 | 0.2328 | 0.8106 | 0.7474 | 0.8855 | 0.3907
    PMGI | 7.0744 | 0.0323 | 9.6515 | 0.8759 | 3.3519 | 2.3885 | 0.4108 | 0.9305 | 0.9030 | 0.9009 | 0.3992
    GANMcC | 6.7865 | 0.0231 | 9.1537 | 0.7147 | 2.4184 | 2.3224 | 0.2795 | 0.8803 | 0.8623 | 0.8983 | 0.3885
    RFN-Nest | 7.0418 | 0.0218 | 9.4329 | 0.8349 | 2.5176 | 2.1621 | 0.3326 | 0.8757 | 0.9091 | 0.9021 | 0.3003
    Proposed | 6.8975 | 0.0402 | 9.3660 | 0.9146 | 3.9126 | 3.6862 | 0.5627 | 0.8994 | 0.8479 | 0.9110 | 0.3936

    Table 4. Ablation study results

    Method | EN | SF | SD | VIF | AG | MI | Q^{AB/F} | SSIM | MS-SSIM | FMI_pixel | FMI_w
    Without CS Conv | 6.6527 | 0.0414 | 8.4249 | 1.0352 | 3.3969 | 4.3652 | 0.6180 | 0.9409 | 0.9451 | 0.9305 | 0.3542
    Daytime model only | 7.0237 | 0.0474 | 9.2902 | 1.0369 | 4.0746 | 4.9832 | 0.6624 | 0.9681 | 0.9607 | 0.9193 | 0.3626
    Nighttime model only | 6.9472 | 0.0479 | 9.0583 | 0.9733 | 4.1572 | 4.7346 | 0.6895 | 0.9112 | 0.9221 | 0.9173 | 0.3631
    Proposed | 7.0326 | 0.0480 | 9.2206 | 1.0310 | 4.0374 | 5.1835 | 0.6625 | 0.9490 | 0.9486 | 0.9202 | 0.3655

    Table 5. Results of the weight-map analysis experiment

    Condition | Min per-image proportion (%) | Max per-image proportion (%) | Mean per-image proportion (%) | Statistical test p-value
    Daytime test images: daytime-result weight map ≥ nighttime-result weight map | 94.54 | 98.52 | 96.07 | $4.16\times10^{-101}$
    Nighttime test images: nighttime-result weight map ≥ daytime-result weight map | 40.84 | 84.17 | 55.37 | $7.74\times10^{-8}$
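The per-image proportion in Table 5 can be read as the fraction of pixels in one fused image where one pathway's weight map dominates the other. A tiny NumPy rendering of that reading (an assumption, since the page does not show the exact computation):

```python
import numpy as np

def dominance_percent(w_day: np.ndarray, w_night: np.ndarray) -> float:
    # Percentage of pixels in one image where the daytime weight map
    # is greater than or equal to the nighttime weight map.
    return 100.0 * float(np.mean(w_day >= w_night))
```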
  • [1] MA Jiayi, MA Yong, and LI Chang. Infrared and visible image fusion methods and applications: A survey[J]. Information Fusion, 2019, 45: 153–178. doi: 10.1016/j.inffus.2018.02.004
    [2] ZHANG Hao, XU Han, TIAN Xin, et al. Image fusion meets deep learning: A survey and perspective[J]. Information Fusion, 2021, 76: 323–336. doi: 10.1016/j.inffus.2021.06.008
    [3] ZHU Haoran, LIU Yunqing, and ZHANG Wenying. Infrared and visible image fusion based on contrast enhancement and multi-scale edge-preserving decomposition[J]. Journal of Electronics & Information Technology, 2018, 40(6): 1294–1300. doi: 10.11999/JEIT170956
    [4] LIU Yu, CHEN Xun, WARD R K, et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 2016, 23(12): 1882–1886. doi: 10.1109/LSP.2016.2618776
    [5] FU Zhizhong, WANG Xue, XU Jin, et al. Infrared and visible images fusion based on RPCA and NSCT[J]. Infrared Physics & Technology, 2016, 77: 114–123. doi: 10.1016/j.infrared.2016.05.012
    [6] MA Jinlei, ZHOU Zhiqiang, WANG Bo, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82: 8–17. doi: 10.1016/j.infrared.2017.02.005
    [7] LI Hui, WU Xiaojun, and KITTLER J. Infrared and visible image fusion using a deep learning framework[C]. Proceedings of the 24th International Conference on Pattern Recognition, Beijing, China, 2018: 2705–2710.
    [8] ZHANG Hao, XU Han, XIAO Yang, et al. Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity[C]. Proceedings of the 34th AAAI Conference on Artificial Intelligence, New York, USA, 2020: 12797–12804.
    [9] LI Hui and WU Xiaojun. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614–2623. doi: 10.1109/TIP.2018.2887342
    [10] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common objects in context[C]. Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 740–755.
    [11] LI Hui, WU Xiaojun, and KITTLER J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72–86. doi: 10.1016/j.inffus.2021.02.023
    [12] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11–26. doi: 10.1016/j.inffus.2018.09.004
    [13] MA Jiayi, ZHANG Hao, SHAO Zhenfeng, et al. GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 5005014. doi: 10.1109/TIM.2020.3038013
    [14] TAN Minjie, GAO Shaobing, XU Wenzheng, et al. Visible-infrared image fusion based on early visual information processing mechanisms[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(11): 4357–4369. doi: 10.1109/TCSVT.2020.3047935
    [15] WAXMAN A M, GOVE A N, FAY D A, et al. Color night vision: Opponent processing in the fusion of visible and IR imagery[J]. Neural Networks, 1997, 10(1): 1–6. doi: 10.1016/S0893-6080(96)00057-3
    [16] GOODALE M A and MILNER D A. Separate visual pathways for perception and action[J]. Trends in Neurosciences, 1992, 15(1): 20–25. doi: 10.1016/0166-2236(92)90344-8
    [17] CHEN Ke, SONG Xuemei, and LI Chaoyi. Contrast-dependent variations in the excitatory classical receptive field and suppressive nonclassical receptive field of cat primary visual cortex[J]. Cerebral Cortex, 2013, 23(2): 283–292. doi: 10.1093/cercor/bhs012
    [18] TANG Linfeng, YUAN Jiteng, ZHANG Hao, et al. PIAFusion: A progressive infrared and visible image fusion network based on illumination aware[J]. Information Fusion, 2022, 83/84: 79–92. doi: 10.1016/j.inffus.2022.03.007
    [19] ANGELUCCI A and SHUSHRUTH S. Beyond the classical receptive field: Surround modulation in primary visual cortex[M]. WERNER J S and CHALUPA L M. The New Visual Neurosciences. Cambridge: MIT Press, 2013: 425–444.
    [20] GAO Shaobing, YANG Kaifu, LI Chaoyi, et al. Color constancy using double-opponency[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(10): 1973–1985. doi: 10.1109/TPAMI.2015.2396053
    [21] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. Proceedings of 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241.
    [22] VINKER Y, HUBERMAN-SPIEGELGLAS I, and FATTAL R. Unpaired learning for high dynamic range image tone mapping[C]. Proceedings of 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 14637–14646.
    [23] TOET A. The TNO multiband image data collection[J]. Data in Brief, 2017, 15: 249–251. doi: 10.1016/j.dib.2017.09.038
    [24] MA Jiayi, CHEN Chen, LI Chang, et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 2016, 31: 100–109. doi: 10.1016/j.inffus.2016.02.001
    [25] WANG Di, LIU Jinyuan, FAN Xin, et al. Unsupervised misaligned infrared and visible image fusion via cross-modality image generation and registration[C]. Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, Vienna, Austria, 2022: 3508–3515.
Publication History
  • Received: 2022-10-31
  • Revised: 2023-05-06
  • Available online: 2023-05-10
  • Published: 2023-08-21
