Data Generation Based on Generative Adversarial Network with Spatial Features

SUN Lei, YANG Yu, MAO Xiuqing, WANG Xiaoqin, LI Jiaxin

Citation: SUN Lei, YANG Yu, MAO Xiuqing, WANG Xiaoqin, LI Jiaxin. Data Generation Based on Generative Adversarial Network with Spatial Features[J]. Journal of Electronics & Information Technology, 2023, 45(6): 1959-1969. doi: 10.11999/JEIT211285

doi: 10.11999/JEIT211285
Details
    Author biographies:

    SUN Lei: Male, Professor. Research interests: cryptography and system security, machine learning security

    YANG Yu: Female, Master's student. Research interests: computer vision, generative adversarial networks

    MAO Xiuqing: Male, Associate Professor. Research interests: intelligent information system security

    WANG Xiaoqin: Female, Master's student. Research interests: computer vision

    Corresponding author:

    YANG Yu, yuyoung0107@163.com

  • CLC number: TP183; TN03


  • Abstract: When feature maps are large, traditional Generative Adversarial Networks (GANs) ignore the representation and structural information of the original features, and the pixels of the generated images lack long-range correlations, so the generated images are of low quality. To further improve the quality of generated images, this paper proposes a data generation method based on a generative adversarial network with spatial features (SF-GAN). The method first adds a spatial pyramid network to the generator and the discriminator to better capture important descriptive information such as image edges; it then applies feature enhancement to the generator and the discriminator to model long-range correlations between pixels. Experiments on small-scale datasets such as CelebA, SVHN, and CIFAR-10, evaluated qualitatively and quantitatively with the Inception Score (IS) and the Fréchet Inception Distance (FID), show that the proposed method generates higher-quality images than the Wasserstein GAN with gradient penalty (WGAN-GP) and the Self-Attention GAN (SAGAN). Further experiments show that the generated data can improve the training of classification models.
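    This page carries no code, so the following PyTorch sketch is only a loose illustration of the two ingredients the abstract names: a spatial pyramid block that pools the feature map at several scales to retain edge and structure information, and a SAGAN-style self-attention block that models long-range correlations between pixels. All module names, channel sizes, and wiring here are hypothetical; this is not the authors' SF-GAN implementation.

```python
# Illustrative sketch only. The modules mirror the two ideas in the
# abstract (spatial pyramid + attention-based feature enhancement);
# names and hyperparameters are hypothetical, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidBlock(nn.Module):
    """Pool the feature map at several scales and fuse the results, so
    coarse structural cues (e.g. edges) survive alongside fine detail."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.fuse = nn.Conv2d(channels * (len(scales) + 1), channels, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]
        for s in self.scales:
            pooled = F.adaptive_avg_pool2d(x, (s, s))   # coarse summary
            feats.append(F.interpolate(pooled, size=(h, w),
                                       mode='bilinear', align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))

class SelfAttention2d(nn.Module):
    """SAGAN-style attention: each pixel attends to every other pixel,
    which is one way to model long-range pixel correlations."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))       # learned blend weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)        # (b, hw, c//8)
        k = self.k(x).flatten(2)                        # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)             # (b, hw, hw)
        v = self.v(x).flatten(2)                        # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection
```

    In a DCGAN-style generator, blocks like these would typically sit between the transposed-convolution stages; where SF-GAN actually places them is specified in the paper's Figure 3, not here.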
  • Figure 1  Schematic of the GAN architecture

    Figure 2  Schematic of the DCGAN generator architecture

    Figure 3  Model framework of SF-GAN

    Figure 4  Training flow of the SF-GAN network

    Figure 5  Spatial pyramid structure

    Figure 6  Comparison of samples generated by different models

    Figure 7  Generation results of different models on the CelebA dataset

    Figure 8  Generation results of different models on the digit 8

    Figure 9  Generation results of different models on CIFAR-10

    Figure 10  Change of training-set loss after SVHN augmentation

    Figure 11  Training- and test-set classification accuracy after SVHN augmentation

    Figure 12  Change of training-set loss after CIFAR-10 augmentation

    Figure 13  Training- and test-set classification accuracy after CIFAR-10 augmentation

    Table 1  Comparison results of different models on the CelebA dataset

    Model     IS↑    FID↓
    WGAN-GP   2.189  55.324
    SAGAN     2.238  47.624
    SF-GAN    2.468  47.064
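    In these tables, IS↑ marks a metric where higher is better and FID↓ one where lower is better. As a reminder of what the FID columns measure, here is a small sketch of the Fréchet distance between Gaussians fitted to two feature sets, following the closed form of Dowson and Landau [28]; the Inception-v3 feature-extraction step is omitted, and the arrays below are random placeholders.

```python
# Sketch of the FID computation given two sets of image features.
# Extracting Inception features is omitted; real_feats / fake_feats are
# placeholder arrays of shape (num_samples, feature_dim).
import numpy as np
from scipy import linalg

def frechet_distance(real_feats, fake_feats):
    mu_r, mu_g = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_g = np.cov(fake_feats, rowvar=False)
    # FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2(Sigma_r Sigma_g)^(1/2))
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    diff = mu_r - mu_g
    return diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean.real)

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(1000, 64))            # stand-in for real features
fake_feats = rng.normal(loc=0.1, size=(1000, 64))   # stand-in for generated ones
print(frechet_distance(real_feats, fake_feats))     # lower = closer distributions
```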

    Table 2  IS comparison results (per digit) of different models on the SVHN dataset

    Model     0      1      2      3      4      5      6      7      8      9
    WGAN-GP   2.653  2.223  2.484  2.507  2.328  2.507  2.709  2.423  2.810  2.689
    SAGAN     2.603  2.291  2.514  2.446  2.460  2.493  2.599  2.559  2.681  2.704
    SF-GAN    2.967  2.584  2.893  2.826  2.683  2.803  2.970  2.916  3.063  3.005

    Table 3  FID comparison results (per digit) of different models on the SVHN dataset

    Model     0        1        2        3        4        5        6        7        8        9
    WGAN-GP   106.800  101.017  89.446   100.502  96.387   96.058   101.426  124.202  111.576  129.814
    SAGAN     113.999  95.865   101.994  98.829   87.574   99.081   109.677  103.439  111.394  108.792
    SF-GAN    82.167   74.754   77.222   75.660   75.008   71.066   72.051   82.132   80.748   83.660

    Table 4  IS comparison results (per class) of different models on the CIFAR-10 dataset

    Model     Airplane  Automobile  Bird   Cat    Deer   Dog    Frog   Horse  Ship   Truck
    WGAN-GP   3.738     3.156       3.018  2.990  2.491  3.354  2.496  3.426  3.206  2.853
    SAGAN     3.756     3.273       3.042  2.971  2.627  3.523  2.506  3.619  3.073  3.099
    SF-GAN    4.090     3.803       3.551  3.295  3.038  4.176  2.912  3.820  3.495  3.281

    Table 5  FID comparison results (per class) of different models on the CIFAR-10 dataset

    Model     Airplane  Automobile  Bird     Cat      Deer     Dog      Frog    Horse    Ship     Truck
    WGAN-GP   150.220   117.988     138.229  135.831  107.385  125.356  109.235 104.255  101.760  110.521
    SAGAN     144.611   164.207     131.030  162.071  102.087  134.925  106.684 112.162  120.274  150.129
    SF-GAN    124.756   144.981     107.929  128.807  81.551   106.698  87.428  95.52    92.263   108.855

    Table 6  Classification accuracy (%) on the SVHN test set after augmentation by different methods

    Model                 0       1       2       3       4       5       6       7       8       9       Average
    No augmentation       58.33   75.00   90.00   80.00   100.00  66.67   92.86   96.88   70.00   96.67   87.65
    Real-image augmented  100.00  87.50   90.00   90.00   100.00  83.33   100.00  90.62   70.00   96.67   90.51
    WGAN-GP augmented     91.67   75.00   100.00  90.00   87.50   100.00  96.43   96.88   80.00   93.33   92.39
    SAGAN augmented       100.00  87.50   100.00  90.00   100.00  100.00  92.86   96.88   90.00   96.67   95.29
    SF-GAN augmented      100.00  100.00  100.00  100.00  100.00  91.67   100.00  100.00  90.00   100.00  98.31

    Table 7  Classification accuracy (%) on the CIFAR-10 test set after augmentation by different methods

    Model                 Airplane  Automobile  Bird   Cat    Deer   Dog    Frog    Horse   Ship   Truck  Average
    No augmentation       50.00     50.50       50.00  40.00  25.00  33.33  87.50   75.00   83.33  75.00  52.47
    Real-image augmented  66.67     87.50       60.00  40.00  62.50  50.00  87.50   50.00   83.33  87.50  66.67
    WGAN-GP augmented     66.67     75.00       80.00  80.00  87.50  91.67  87.50   62.50   83.33  75.00  76.87
    SAGAN augmented       83.33     100.00      60.00  80.00  62.50  75.00  75.00   87.50   83.33  87.50  78.47
    SF-GAN augmented      91.67     75.00       70.00  80.00  62.50  91.67  100.00  100.00  91.67  87.50  86.33
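    Tables 6 and 7 measure how much a classifier improves when the training set is enlarged with generated samples. The sketch below shows one plausible shape of that augmentation step, assuming an already trained class-conditional generator G; the function name, latent size, and per-class loop are illustrative rather than the paper's procedure (the per-digit and per-class tables above suggest samples were generated class by class).

```python
# Minimal sketch of GAN-based training-set augmentation. It assumes a
# trained conditional generator G(z, y) -> images; every name and size
# here is illustrative, not taken from the paper.
import torch
from torch.utils.data import ConcatDataset, TensorDataset

@torch.no_grad()
def augment_with_gan(G, base_dataset, n_per_class=500, n_classes=10, z_dim=100):
    images, labels = [], []
    for c in range(n_classes):
        z = torch.randn(n_per_class, z_dim)                  # latent noise
        y = torch.full((n_per_class,), c, dtype=torch.long)  # class labels
        images.append(G(z, y).cpu())                         # generated batch
        labels.append(y)
    fake_set = TensorDataset(torch.cat(images), torch.cat(labels))
    # Train the classifier on the union of real and generated samples.
    return ConcatDataset([base_dataset, fake_set])
```

    With separately trained per-class generators, the call `G(z, y)` would be replaced by something like `generators[c](z)`; the rest of the recipe is unchanged.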
  • [1] TAN Mingxing and LE Q V. EfficientNetV2: Smaller models and faster training[C]. The 38th International Conference on Machine Learning, San Diego, USA, 2021: 10096–10106.
    [2] XIAO Zihao, GAO Xianfeng, FU Chilin, et al. Improving transferability of adversarial patches on face recognition with generative models[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 11840–11849.
    [3] CHEN Xiangning, XIE Cihang, TAN Mingxing, et al. Robust and accurate object detection via adversarial learning[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 16617–16626.
    [4] CHEN Pinchun, KUNG B H, and CHEN Juncheng. Class-aware robust adversarial training for object detection[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 10415–10424.
    [5] ZHANG Chunxia, JI Nannan, and WANG Guanwei. Restricted Boltzmann machines[J]. Chinese Journal of Engineering Mathematics, 2015, 32(2): 159–173. doi: 10.3969/j.issn.1005-3085.2015.02.001
    [6] LOPES N and RIBEIRO B. Deep belief networks (DBNs)[M]//Machine Learning for Adaptive Many-Core Machines: A Practical Approach. Cham: Springer, 2015: 155–186.
    [7] KINGMA D P and WELLING M. Auto-encoding variational Bayes[C]. The 2nd International Conference on Learning Representations, Banff, Canada, 2014.
    [8] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]. The 27th International Conference on Neural Information Processing Systems, Montreal, Canada, 2014: 2672–2680.
    [9] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278–2324. doi: 10.1109/5.726791
    [10] RADFORD A, METZ L, and CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[C]. The 4th International Conference on Learning Representations, San Juan, Puerto Rico, 2016.
    [11] ARJOVSKY M, CHINTALA S, and BOTTOU L. Wasserstein generative adversarial networks[C]. The 34th International Conference on Machine Learning, Sydney, Australia, 2017: 214–223.
    [12] BLEI D M, KUCUKELBIR A, and MCAULIFFE J D. Variational inference: A review for statisticians[J]. Journal of the American Statistical Association, 2017, 112(518): 859–877. doi: 10.1080/01621459.2017.1285773
    [13] WEAVER N. Lipschitz Algebras[M]. Singapore: World Scientific, 1999.
    [14] GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of Wasserstein GANs[C]. The 31st International Conference on Neural Information Processing Systems, Long Beach, USA, 2017: 5769–5779.
    [15] ZHANG Han, GOODFELLOW I, METAXAS D, et al. Self-attention generative adversarial networks[C]. The 36th International Conference on Machine Learning, Long Beach, USA, 2019.
    [16] GUO Jingda, MA Xu, SANSOM A, et al. SPANet: Spatial pyramid attention network for enhanced image recognition[C]. 2020 IEEE International Conference on Multimedia and Expo, London, UK, 2020: 1–6.
    [17] DING Bin, XIA Xue, and LIANG Xuefeng. Sea clutter data augmentation method based on deep generative adversarial network[J]. Journal of Electronics & Information Technology, 2021, 43(7): 1985–1991. doi: 10.11999/JEIT200447
    [18] CAO Zhiyi, NIU Shaozhang, and ZHANG Jiwei. Research on face reduction algorithm based on generative adversarial nets with semi-supervised learning[J]. Journal of Electronics & Information Technology, 2018, 40(2): 323–330. doi: 10.11999/JEIT170357
    [19] ZEILER M D, KRISHNAN D, TAYLOR G W, et al. Deconvolutional networks[C]. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 2528–2535.
    [20] IOFFE S and SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]. The 32nd International Conference on International Conference on Machine Learning, Lille, France, 2015: 448–456.
    [21] DAHL G E, SAINATH T N, and HINTON G E. Improving deep neural networks for LVCSR using rectified linear units and dropout[C]. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, Canada, 2013: 8609–8613.
    [22] XIAO F, HONMA Y, and KONO T. A simple algebraic interface capturing scheme using hyperbolic tangent function[J]. International Journal for Numerical Methods in Fluids, 2005, 48(9): 1023–1040. doi: 10.1002/fld.975
    [23] XU Bing, WANG Naiyan, CHEN Tianqi, et al. Empirical evaluation of rectified activations in convolutional network[J]. arXiv preprint arXiv: 1505.00853, 2015.
    [24] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [25] LIU Ziwei, LUO Ping, WANG Xiaogang, et al. Large-scale CelebFaces attributes (CelebA) dataset[Z]. Retrieved August, 2018.
    [26] KRIZHEVSKY A. Learning multiple layers of features from tiny images[D]. [Master dissertation], University of Toronto, 2009.
    [27] SALIMANS T, GOODFELLOW I, ZAREMBA W, et al. Improved techniques for training GANs[C]. The 30th Conference on Neural Information Processing Systems, Barcelona, Spain, 2016: 2234–2242.
    [28] DOWSON D C and LANDAU B V. The Fréchet distance between multivariate normal distributions[J]. Journal of Multivariate Analysis, 1982, 12(3): 450–455. doi: 10.1016/0047-259X(82)90077-X
    [29] KINGMA D P and BA J. Adam: A method for stochastic optimization[C]. The 3rd International Conference on Learning Representations, San Diego, USA, 2015.
Publication history
  • Received: 2021-11-17
  • Revised: 2022-01-10
  • Accepted: 2022-01-20
  • Available online: 2022-02-03
  • Issue published: 2023-06-10
