Citation: SHI Yonggang, ZHANG Yue, ZHOU Zhiguo, LI Yi, XIA Zhuoyan. Deblurring and Restoration of Gastroscopy Image Based on Gradient-guidance Generative Adversarial Networks[J]. Journal of Electronics & Information Technology, 2022, 44(1): 70-77. doi: 10.11999/JEIT210920
[1] All cancers source: Globocan 2020[EB/OL]. https://gco.iarc.fr/today/data/factsheets/cancers/39-All-cancers-fact-sheet.pdf, 2020.
[2] MAHESH M M R, RAJAGOPALAN A N, and SEETHARAMAN G. Going unconstrained with rolling shutter deblurring[C]. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 4030–4038.
[3] TAO Xin, GAO Hongyun, SHEN Xiaoyong, et al. Scale-recurrent network for deep image deblurring[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 8174–8182.
[4] KUPYN O, BUDZAN V, MYKHAILYCH M, et al. DeblurGAN: Blind motion deblurring using conditional adversarial networks[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 8183–8192.
[5] KUPYN O, MARTYNIUK T, WU Junru, et al. DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better[C]. The IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), 2019: 8877–8886.
[6] YAN Qing, XU Yi, YANG Xiaokang, et al. Single image superresolution based on gradient profile sharpness[J]. IEEE Transactions on Image Processing, 2015, 24(10): 3187–3202. doi: 10.1109/TIP.2015.2414877
[7] ZHU Yu, ZHANG Yanning, BONEV B, et al. Modeling deformable gradient compositions for single-image super-resolution[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 5417–5425.
[8] MA Cheng, RAO Yongming, CHENG Ye'an, et al. Structure-preserving super resolution with gradient guidance[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 7766–7775.
[9] TRAN P, TRAN A T, PHUNG Q, et al. Explore image deblurring via encoded blur kernel space[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 11951–11960.
[10] CHI Zhixiang, WANG Yang, YU Yuanhao, et al. Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 9133–9142.
[11] CHEN Liang, ZHANG Jiawei, PAN Jinshan, et al. Learning a non-blind deblurring network for night blurry images[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 10537–10545.
[12] DONG Jiangxin, ROTH S, and SCHIELE B. Learning spatially-variant MAP models for non-blind image deblurring[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 4884–4893.
[13] GAO Shanghua, CHENG Mingming, ZHAO Kai, et al. Res2Net: A new multi-scale backbone architecture[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(2): 652–662. doi: 10.1109/TPAMI.2019.2938758
[14] WANG Xintao, YU Ke, WU Shixiang, et al. ESRGAN: Enhanced super-resolution generative adversarial networks[C]. The European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 2019: 63–79.
[15] HUANG Gao, LIU Zhuang, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2261–2269.
[16] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. 3rd International Conference on Learning Representations, San Diego, USA, 2015: 1–14.
[17] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]. The 27th International Conference on Neural Information Processing Systems, Montreal, Canada, 2014: 2672–2680.
[18] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241.
[19] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
[20] ALI S, ZHOU F, DAUL C, et al. Endoscopy artifact detection (EAD 2019) challenge dataset[EB/OL]. https://arxiv.org/abs/1905.03209, 2019.
[21] KOULAOUZIDIS A, IAKOVIDIS D K, YUNG D E, et al. KID Project: An internet-based digital video atlas of capsule endoscopy for research purposes[J]. Endoscopy International Open, 2017, 5(6): E477–E483. doi: 10.1055/s-0043-105488
[22] HORÉ A and ZIOU D. Image quality metrics: PSNR vs. SSIM[C]. 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 2010: 2366–2369.
[23] WANG Zhou, BOVIK A C, SHEIKH H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600–612. doi: 10.1109/TIP.2003.819861
[24] KINGMA D P and BA J. Adam: A method for stochastic optimization[C]. 3rd International Conference on Learning Representations, San Diego, USA, 2015: 1–15.