Volume 44, Issue 12, Dec. 2022
JIANG Yichun, LIU Yunqing, ZHAN Weida, ZHU Depeng. Infrared and Visible Image Fusion Method Based on Degradation Model[J]. Journal of Electronics & Information Technology, 2022, 44(12): 4405-4415. doi: 10.11999/JEIT211112

Infrared and Visible Image Fusion Method Based on Degradation Model

doi: 10.11999/JEIT211112
Funds:  The Special Project for Innovation Capacity Building of the Jilin Province Development and Reform Commission (2021C045-5)
  • Received Date: 2021-10-11
  • Rev Recd Date: 2022-04-29
  • Available Online: 2022-05-08
  • Publish Date: 2022-12-16
Abstract: Deep-learning-based infrared and visible image fusion algorithms rely on hand-designed similarity functions to measure the similarity between inputs and outputs. Such unsupervised learning cannot fully exploit the ability of neural networks to extract deep features, which leads to unsatisfactory fusion results. To address this problem, a new fusion degradation model for infrared and visible images is proposed, which regards the infrared and visible images as degraded versions of an ideal fused image, produced by mixed degradation processes. A data-augmentation scheme that simulates image degradation is then proposed, and a large number of simulated degraded images are generated from high-definition datasets to train the network. Finally, a simple and efficient end-to-end network model and its training framework are designed on the basis of the proposed degradation model. Experimental results show that the proposed method not only achieves good visual quality and objective metrics, but also effectively suppresses interference such as illumination changes, smoke, and noise.
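The data-augmentation idea in the abstract, treating each source image as a degraded version of an ideal fused image, can be sketched in code. The paper's exact degradation operators are not reproduced here; this is a minimal illustrative sketch in which the pseudo-infrared branch is modeled as Gaussian blur plus sensor noise and the pseudo-visible branch as illumination attenuation plus mild noise. All operator choices, function names, and parameter values below are assumptions for illustration only.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    """Normalized 2-D Gaussian kernel (assumed blur operator)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 'same'-size convolution with edge padding."""
    ph, pw = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(
                padded[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel
            )
    return out

def degrade_pair(clean, rng):
    """Treat `clean` (H x W array in [0, 1]) as the ideal fused image and
    synthesize a pseudo-infrared / pseudo-visible training pair from it."""
    # Pseudo-infrared: reduced spatial detail (blur) plus sensor noise.
    ir = convolve2d(clean, gaussian_kernel(7, 2.0))
    ir = np.clip(ir + rng.normal(0.0, 0.02, clean.shape), 0.0, 1.0)
    # Pseudo-visible: global illumination attenuation plus mild noise.
    vis = np.clip(
        clean * rng.uniform(0.4, 0.9) + rng.normal(0.0, 0.01, clean.shape),
        0.0, 1.0,
    )
    return ir, vis
```

A network trained on many such (ir, vis) → clean triples learns to invert the mixed degradations, which is the supervision signal the degradation model provides without hand-designed similarity functions.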
References

[1] LI Shutao, KANG Xudong, and HU Jianwen. Image fusion with guided filtering[J]. IEEE Transactions on Image Processing, 2013, 22(7): 2864–2875. doi: 10.1109/TIP.2013.2244222
[2] LIU Chunhui and DING Wenrui. Variational model for infrared and visible light image fusion with saliency preservation[J]. Journal of Electronic Imaging, 2019, 28(2): 023023. doi: 10.1117/1.JEI.28.2.023023
[3] ZHAO Jufeng, ZHOU Qiang, CHEN Yueting, et al. Fusion of visible and infrared images using saliency analysis and detail preserving based image decomposition[J]. Infrared Physics & Technology, 2013, 56: 93–99. doi: 10.1016/j.infrared.2012.11.003
[4] HE Guiqing, XING Siyuan, HE Xingjian, et al. Image fusion method based on simultaneous sparse representation with non-subsampled contourlet transform[J]. IET Computer Vision, 2019, 13(2): 240–248. doi: 10.1049/iet-cvi.2018.5496
[5] YANG Bin and LI Shutao. Visual attention guided image fusion with sparse representation[J]. Optik, 2014, 125(17): 4881–4888. doi: 10.1016/j.ijleo.2014.04.036
[6] WANG Jun, PENG Jinye, FENG Xiaoyi, et al. Fusion method for infrared and visible images by using non-negative sparse representation[J]. Infrared Physics & Technology, 2014, 67: 477–489. doi: 10.1016/j.infrared.2014.09.019
[7] LEI Dajiang, DU Jiahao, ZHANG Liping, et al. Multi-stream architecture and multi-scale convolutional neural network for remote sensing image fusion[J]. Journal of Electronics & Information Technology, 2022, 44(1): 237–244. doi: 10.11999/JEIT200792
[8] CHEN Shuzhen, CAO Shipeng, CUI Meiyue, et al. Image blind deblurring algorithm based on deep multi-level wavelet transform[J]. Journal of Electronics & Information Technology, 2021, 43(1): 154–161. doi: 10.11999/JEIT190947
[9] LI Minghong, CHANG Kan, LI Hengxin, et al. Lightweight image super-resolution network via two-stage information distillation[J]. Journal of Image and Graphics, 2021, 26(5): 991–1005. doi: 10.11834/jig.200265
[10] PRABHAKAR K R, SRIKAR V S, and BABU R V. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]. The 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 4714–4722.
[11] LI Hui and WU Xiaojun. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614–2623. doi: 10.1109/TIP.2018.2887342
[12] LIU Yu, CHEN Xun, PENG Hu, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 2017, 36: 191–207. doi: 10.1016/j.inffus.2016.12.001
[13] XU Han, MA Jiayi, JIANG Junjun, et al. U2Fusion: A unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502–518. doi: 10.1109/TPAMI.2020.3012548
[14] MA Jiayi, TANG Linfeng, XU Meilong, et al. STDFusionNet: An infrared and visible image fusion network based on salient target detection[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 5009513. doi: 10.1109/TIM.2021.3075747
[15] ZHU Depeng, ZHAN Weida, JIANG Yichun, et al. MIFFuse: A multi-level feature fusion network for infrared and visible images[J]. IEEE Access, 2021, 9: 130778–130792. doi: 10.1109/ACCESS.2021.3111905
[16] JIANG Yichun, LIU Yunqing, ZHAN Weida, et al. Lightweight dual-stream residual network for single image super-resolution[J]. IEEE Access, 2021, 9: 129890–129901. doi: 10.1109/ACCESS.2021.3112002
[17] JIANG Haijun, CHEN Fei, LIU Xining, et al. Thermal wave image deblurring based on depth residual network[J]. Infrared Physics & Technology, 2021, 117: 103847. doi: 10.1016/j.infrared.2021.103847
[18] HAN J, LEE H, and KANG M G. Thermal image restoration based on LWIR sensor statistics[J]. Sensors, 2021, 21(16): 5443. doi: 10.3390/s21165443
[19] ZHANG Kai, ZUO Wangmeng, and ZHANG Lei. Learning a single convolutional super-resolution network for multiple degradations[C]. The 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 3262–3271.
[20] IGNATOV A, TIMOFTE R, VAN VU T, et al. PIRM challenge on perceptual image enhancement on smartphones: Report[C]. The European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 2018: 315–333.
[21] WANG Zhou, BOVIK A C, SHEIKH H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600–612. doi: 10.1109/TIP.2003.819861
[22] ZHANG Xingchen, YE Ping, and XIAO Gang. VIFB: A visible and infrared image fusion benchmark[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, USA, 2020: 104–105.
[23] LIU Yu, CHEN Xun, WARD R K, et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 2016, 23(12): 1882–1886. doi: 10.1109/LSP.2016.2618776
[24] BAVIRISETTI D P and DHULI R. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform[J]. IEEE Sensors Journal, 2016, 16(1): 203–209. doi: 10.1109/JSEN.2015.2478655
[25] SHREYAMSHA KUMAR B K. Image fusion based on pixel significance using cross bilateral filter[J]. Signal, Image and Video Processing, 2015, 9(5): 1193–1204. doi: 10.1007/s11760-013-0556-9
[26] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11–26. doi: 10.1016/j.inffus.2018.09.004
[27] MA Jiayi, ZHANG Hao, SHAO Zhenfeng, et al. GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 5005014. doi: 10.1109/TIM.2020.3038013
[28] ZHANG Yu, LIU Yu, SUN Peng, et al. IFCNN: A general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99–118. doi: 10.1016/j.inffus.2019.07.011
[29] JIAN Lihua, YANG Xiaomin, LIU Zheng, et al. SEDRFuse: A symmetric encoder–decoder with residual block network for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 5002215. doi: 10.1109/TIM.2020.3022438

    Figures(11)  / Tables(4)
