Frequency Separation Generative Adversarial Super-resolution Network Based on Dense Residual and Quality Assessment

HAN Yulan, CUI Yujie, LUO Yihong, LAN Chaofeng

Citation: HAN Yulan, CUI Yujie, LUO Yihong, LAN Chaofeng. Frequency Separation Generative Adversarial Super-resolution Network Based on Dense Residual and Quality Assessment[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT240388

doi: 10.11999/JEIT240388
Funds: The National Natural Science Foundation of China (11804068); the Fundamental Research Funds for the Provincial Universities of Heilongjiang (2020-KYYWF-0342)
Article information
    About the authors:

    HAN Yulan: Female, Lecturer. Her research interests include artificial intelligence and computer vision, and big data analysis and prediction

    CUI Yujie: Female, Master's student. Her research interest is image reconstruction

    LUO Yihong: Female, Master's student. Her research interest is image reconstruction

    LAN Chaofeng: Female, Associate Professor. Her research interests include speech signal processing and analysis, and underwater signal analysis and processing

    Corresponding author:

    HAN Yulan, hanyulan@hrbust.edu.cn

  • CLC number: TN911.73; TP391

  • Abstract: Generative Adversarial Networks (GANs) have attracted considerable attention for offering a new approach to blind super-resolution reconstruction. However, existing methods do not fully account for the preservation of low-frequency content during image degradation: they process high- and low-frequency components in the same way, fail to exploit frequency details effectively, and therefore struggle to achieve good reconstruction quality. To address this, a frequency separation generative adversarial super-resolution network guided by dense residuals and quality assessment is proposed. Adopting the idea of frequency separation, the network processes the high- and low-frequency information of an image separately, which improves its ability to capture high-frequency details while simplifying the processing of low-frequency features. The basic block of the generator is redesigned by integrating Spatial Feature Transform (SFT) layers into dense wide-activation residual blocks, strengthening deep feature representation while treating local information differentially. In addition, a no-reference quality assessment network dedicated to super-resolved images is designed on the basis of the Visual Geometry Group (VGG) network, providing the reconstruction network with a new quality assessment loss that further improves the visual quality of reconstructed images. Experimental results show that, compared with state-of-the-art methods of the same type, the proposed method achieves better reconstruction on multiple datasets. This indicates that GAN-based super-resolution with frequency separation can exploit image frequency components effectively and improve reconstruction quality.
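
    The page gives no implementation, but two core ideas in the abstract, the frequency separation step and the SFT-modulated wide-activation residual block, can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical rendering: the difference-of-Gaussians split mirrors the filter named in Table 3, while the names (gaussian_kernel, split_frequencies, SFTLayer, WideActResidualBlock), the kernel size, the σ values, and the expansion factor are illustrative assumptions rather than the authors' implementation.

    # Minimal sketch (not the authors' code): difference-of-Gaussians frequency
    # separation plus an SFT-modulated wide-activation residual block.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    def gaussian_kernel(size: int = 11, sigma: float = 1.5) -> torch.Tensor:
        """Normalized 2-D Gaussian kernel of shape (size, size)."""
        coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        kernel = g[:, None] * g[None, :]
        return kernel / kernel.sum()


    def split_frequencies(img: torch.Tensor, sigma_low: float = 2.0, sigma_high: float = 0.8):
        """Split a batch (N, C, H, W) into low- and high-frequency parts.
        The wider Gaussian blur is taken as the low band; the difference of the
        two blurred copies (a difference-of-Gaussians response) approximates the
        high-frequency band."""
        c = img.shape[1]
        k_low = gaussian_kernel(sigma=sigma_low).repeat(c, 1, 1, 1).to(img)   # (C, 1, k, k)
        k_high = gaussian_kernel(sigma=sigma_high).repeat(c, 1, 1, 1).to(img)
        pad = k_low.shape[-1] // 2
        blur_low = F.conv2d(img, k_low, padding=pad, groups=c)    # depthwise low-pass
        blur_high = F.conv2d(img, k_high, padding=pad, groups=c)
        return blur_low, blur_high - blur_low  # (low-frequency, high-frequency)


    class SFTLayer(nn.Module):
        """Spatial feature transform: predict per-pixel scale and shift from a
        condition map and modulate the features as x * scale + shift."""
        def __init__(self, channels: int, cond_channels: int):
            super().__init__()
            self.scale = nn.Conv2d(cond_channels, channels, kernel_size=1)
            self.shift = nn.Conv2d(cond_channels, channels, kernel_size=1)

        def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
            return x * self.scale(cond) + self.shift(cond)


    class WideActResidualBlock(nn.Module):
        """Wide-activation residual block with an SFT layer, loosely in the
        spirit of the DWRB in Figure 3; the exact layout here is an assumption."""
        def __init__(self, channels: int = 64, expansion: int = 4, cond_channels: int = 32):
            super().__init__()
            self.sft = SFTLayer(channels, cond_channels)
            self.expand = nn.Conv2d(channels, channels * expansion, kernel_size=3, padding=1)
            self.reduce = nn.Conv2d(channels * expansion, channels, kernel_size=3, padding=1)

        def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
            out = self.sft(x, cond)                      # spatially modulate features
            out = self.reduce(F.relu(self.expand(out)))  # wide activation, then reduce
            return x + out                               # residual connection


    if __name__ == "__main__":
        lr = torch.rand(1, 3, 64, 64)
        low, high = split_frequencies(lr)
        print(low.shape, high.shape)  # both torch.Size([1, 3, 64, 64])

    In this sketch only the high-frequency branch would need the full generative machinery, while the low band can be handled with lighter processing, which is the motivation stated in the abstract for separating the two.
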
  • Figure 1  Overall architecture of the DR-QA-FSGAN network

    Figure 2  The generator

    Figure 3  Dense wide-activation residual block (DWRB) with SFT layers

    Figure 4  The quality assessment network

    Figure 5  Comparison of 4× super-resolution reconstructions of image “69015” from the BSDS100 dataset by different methods

    Figure 6  Comparison of 4× super-resolution reconstructions of image “02” from the self-built dataset by different methods

    Figure 7  Comparison of 4× super-resolution reconstructions of image “baby” from the Set5 dataset with different filters

    Figure 8  Comparison of 4× super-resolution reconstructions of image “butterfly” from the Set5 dataset with different modules

    Figure 9  Comparison of 4× super-resolution reconstructions of image “bird” from the Set5 dataset with different loss functions

    Table 1  Mean PSNR (dB) and SSIM of different methods on each dataset (×4)

    Method        Set5             Set14            BSDS100          Manga109
                  PSNR↑    SSIM↑   PSNR↑    SSIM↑   PSNR↑    SSIM↑   PSNR↑    SSIM↑
    SRGAN[11]     28.574   0.818   25.674   0.692   25.156   0.654   26.488   0.828
    ESRGAN[12]    30.438   0.852   26.278   0.699   25.323   0.651   28.245   0.859
    SFTGAN[14]    27.578   0.809   26.968   0.729   25.501   0.653   28.182   0.858
    DSGAN[17]     30.392   0.854   26.644   0.714   25.447   0.655   27.965   0.853
    SRCGAN[13]    28.068   0.789   26.071   0.696   25.659   0.657   25.295   0.796
    FxSR[15]      30.637   0.849   26.708   0.719   26.144   0.684   27.647   0.844
    SROOE[16]     30.862   0.866   27.231   0.731   26.195   0.687   27.852   0.849
    WGSR[19]      30.373   0.851   27.023   0.727   26.372   0.684   28.287   0.861
    Proposed      30.904   0.872   27.715   0.749   26.838   0.701   28.312   0.867

    Table 2  Mean NIQE and FVSD of different methods on the self-built dataset (×4)

    Method        NIQE↓    FVSD↑
    SRGAN[11]     12.84    3.84
    ESRGAN[12]    8.62     6.46
    SFTGAN[14]    8.46     6.35
    DSGAN[17]     8.41     6.51
    SRCGAN[13]    10.21    4.25
    FxSR[15]      8.37     6.48
    SROOE[16]     8.19     6.49
    WGSR[19]      8.14     6.52
    Proposed      8.11     6.54

    Table 3  Effect of different filters on reconstruction quality

    Filter                     PSNR (dB)↑   SSIM↑
                               28.831       0.835
    Neighborhood averaging     28.941       0.833
    Difference of Gaussians    29.015       0.837

    Table 4  Mean PSNR (dB) and SSIM with different modules

    Branch structure   SFT layer   Quality assessment network   PSNR↑    SSIM↑
    √                  ×           ×                            28.772   0.828
    ×                  √           ×                            28.402   0.821
    ×                  ×           √                            28.642   0.823
    √                  √           √                            29.015   0.837

    Table 5  Effect of different loss functions

    Loss            Color loss        Multi-layer        Adversarial loss   FVSD     PSNR↑    SSIM↑
    combination     Lcol    Lcol-1    perceptual loss    Ladv    Ladv-1     loss
    Combination 1   ×       √         √                  ×       √          ×        28.352   0.818
    Combination 2   ×       √         √                  ×       √          √        28.831   0.835
    Combination 3   √       ×         √                  √       ×          ×        28.437   0.821
    Proposed        √       ×         √                  √       ×          √        29.015   0.837

    Table 6  Comparison of reconstruction time and parameter size

    Method        Reconstruction time (ms)   Parameters (MB)
    SRGAN[11]     0.0401                     1.51
    ESRGAN[12]    0.1603                     16.69
    SFTGAN[14]    0.0664                     1.83
    DSGAN[17]     0.1723                     16.69
    SRCGAN[13]    0.0096                     0.38
    FxSR[15]      0.3541                     18.30
    SROOE[16]     0.3880                     70.20
    WGSR[19]      0.1806                     16.69
    Proposed      0.1568                     9.62
  • [1] CAI Wenyu, ZHANG Meiyan, WU Yan, et al. Research on cyclic generation countermeasure network based super-resolution image reconstruction algorithm[J]. Journal of Electronics & Information Technology, 2022, 44(1): 178–186. doi: 10.11999/JEIT201046.
    [2] ZHOU Chaowei and XIONG Aimin. Fast image super-resolution using particle swarm optimization-based convolutional neural networks[J]. Sensors, 2023, 23(4): 1923. doi: 10.3390/s23041923.
    [3] WU Zhijian, LIU Wenhui, LI Jun, et al. SFHN: Spatial-frequency domain hybrid network for image super-resolution[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(11): 6459–6473. doi: 10.1109/TCSVT.2023.3271131.
    [4] CHENG Deqiang, YUAN Hang, QIAN Jiansheng, et al. Image super-resolution algorithms based on deep feature differentiation network[J]. Journal of Electronics & Information Technology, 2024, 46(3): 1033–1042. doi: 10.11999/JEIT230179.
    [5] SAHARIA C, HO J, CHAN W, et al. Image super-resolution via iterative refinement[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(4): 4713–4726. doi: 10.1109/TPAMI.2022.3204461.
    [6] DONG Chao, LOY C C, HE Kaiming, et al. Learning a deep convolutional network for image super-resolution[C]. Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 184–199. doi: 10.1007/978-3-319-10593-2_13.
    [7] KIM J, LEE J K, and LEE K M. Accurate image super-resolution using very deep convolutional networks[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016. doi: 10.1109/CVPR.2016.182.
    [8] TONG Tong, LI Gen, LIU Xiejie, et al. Image super-resolution using dense skip connections[C]. Proceedings of 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 4809–4817. doi: 10.1109/ICCV.2017.514.
    [9] LAN Rushi, SUN Long, LIU Zhenbing, et al. MADNet: A fast and lightweight network for single-image super resolution[J]. IEEE Transactions on Cybernetics, 2021, 51(3): 1443–1453. doi: 10.1109/TCYB.2020.2970104.
    [10] WEI Pengxu, XIE Ziwei, LU Hannan, et al. Component divide-and-conquer for real-world image super-resolution[C]. Proceedings of the 16th Europe Conference on Computer Vision, Glasgow, UK, 2020: 101–117. doi: 10.1007/978-3-030-58598-3_7.
    [11] LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]. Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 105–114. doi: 10.1109/CVPR.2017.19.
    [12] WANG Xintao, YU Ke, WU Shixiang, et al. ESRGAN: Enhanced super-resolution generative adversarial networks[C]. Proceedings of the European Conference on Computer Vision, Munich, Germany, 2019: 63–79. doi: 10.1007/978-3-030-11021-5_5.
    [13] UMER R M, FORESTI G L, and MICHELONI C. Deep generative adversarial residual convolutional networks for real-world super-resolution[C]. Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, USA, 2020: 1769–1777. doi: 10.1109/CVPRW50498.2020.00227.
    [14] WANG Xintao, YU Ke, DONG Chao, et al. Recovering realistic texture in image super-resolution by deep spatial feature transform[C]. Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 606–615. doi: 10.1109/CVPR.2018.00070.
    [15] PARK S H, MOON Y S, and CHO N I. Flexible style image super-resolution using conditional objective[J]. IEEE Access, 2022, 10: 9774–9792. doi: 10.1109/ACCESS.2022.3144406.
    [16] PARK S H, MOON Y S, and CHO N I. Perception-oriented single image super-resolution using optimal objective estimation[C]. Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 1725–1735. doi: 10.1109/CVPR52729.2023.00172.
    [17] FRITSCHE M, GU Shuhang, and TIMOFTE R. Frequency separation for real-world super-resolution[C]. Proceedings of 2019 IEEE/CVF International Conference on Computer Vision Workshop, Seoul, Korea (South), 2019: 3599–3608. doi: 10.1109/ICCVW.2019.00445.
    [18] PRAJAPATI K, CHUDASAMA V, PATEL H, et al. Direct unsupervised super-resolution using generative adversarial network (DUS-GAN) for real-world data[J]. IEEE Transactions on Image Processing, 2021, 30: 8251–8264. doi: 10.1109/TIP.2021.3113783.
    [19] KORKMAZ C, TEKALP A M, and DOGAN Z. Training generative image super-resolution models by wavelet-domain losses enables better control of artifacts[C]. Proceedings of 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2024: 5926–5936. doi: 10.1109/CVPR52733.2024.00566.
    [20] MA Chao, YANG C Y, YANG Xiaokang, et al. Learning a no-reference quality metric for single-image super-resolution[J]. Computer Vision and Image Understanding, 2017, 158: 1–16. doi: 10.1016/j.cviu.2016.12.009.
    [21] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241. doi: 10.1007/978-3-319-24574-4_28.
    [22] YANG Jianchao, WRIGHT J, HUANG T S, et al. Image super-resolution via sparse representation[J]. IEEE Transactions on Image Processing, 2010, 19(11): 2861–2873. doi: 10.1109/TIP.2010.2050625.
    [23] ZHANG Kai, ZUO Wangmeng, and ZHANG Lei. Deep plug-and-play super-resolution for arbitrary blur kernels[C]. Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019. doi: 10.1109/CVPR.2019.00177.
    [24] TIMOFTE R, AGUSTSSON E, VAN GOOL L, et al. NTIRE 2017 challenge on single image super-resolution: Methods and results[C]. Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, USA, 2017: 114–125. doi: 10.1109/CVPRW.2017.149.
    [25] BEVILACQUA M, ROUMY A, GUILLEMOT C, et al. Low-complexity single image super-resolution based on nonnegative neighbor embedding[C]. Proceedings of the British Machine Vision Conference, 2012. doi: 10.5244/C.26.135.
    [26] ZEYDE R, ELAD M, and PROTTER M. On single image scale-up using sparse-representations[C]. Proceedings of the 7th International Conference on Curves and Surfaces, Avignon, France, 2012: 711–730. doi: 10.1007/978-3-642-27413-8_47.
    [27] ARBELÁEZ P, MAIRE M, FOWLKES C, et al. Contour detection and hierarchical image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(5): 898–916. doi: 10.1109/TPAMI.2010.161.
    [28] MATSUI Y, ITO K, ARAMAKI Y, et al. Sketch-based manga retrieval using manga109 dataset[J]. Multimedia Tools and Applications, 2017, 76(20): 21811–21838. doi: 10.1007/s11042-016-4020-z.
Publication history
  • Received: 2024-05-16
  • Revised: 2024-11-11
  • Published online: 2024-11-18
