Image Hiding Method Based on Two-Channel Deep Convolutional Neural Network

DUAN Xintao, WANG Wenxin, LI Lei, SHAO Zhiqiang, WANG Xianfang, QIN Chuan

Citation: DUAN Xintao, WANG Wenxin, LI Lei, SHAO Zhiqiang, WANG Xianfang, QIN Chuan. Image Hiding Method Based on Two-Channel Deep Convolutional Neural Network[J]. Journal of Electronics & Information Technology, 2022, 44(5): 1782-1791. doi: 10.11999/JEIT210280


doi: 10.11999/JEIT210280
Funds: The National Natural Science Foundation of China (U1904123, 61672354, 62072157), The Key Laboratory Foundation of Artificial Intelligence and Personalized Learning in Education of Henan Province
Article information
    About the authors:

    DUAN Xintao: Male, born in 1972, Associate Professor. His research interests include image information hiding, blind image forensics, deep learning, and blind source separation.

    WANG Wenxin: Male, born in 1994, master's student. His research interests include image information hiding and deep learning.

    LI Lei: Male, born in 1995, master's student. His research interests include image information hiding and deep learning.

    SHAO Zhiqiang: Male, born in 1996, master's student. His research interests include image information hiding and deep learning.

    WANG Xianfang: Female, born in 1969, Professor. Her research interests include artificial intelligence and pattern recognition, data mining, and machine learning and its applications.

    QIN Chuan: Male, born in 1980, Professor. His research interests include multimedia information security, digital image processing, information hiding, AI security, deep learning, signal processing in the encrypted domain, and digital forensics.

    Corresponding author:

    DUAN Xintao, duanxintao@htu.edu.cn

  • CLC number: TN911.73; TP309.2

  • Abstract: Existing image hiding methods based on Deep Convolutional Neural Networks (DCNN) suffer from poor visual quality and low hiding capacity. To address these problems, this paper proposes an image hiding method based on a two-channel deep convolutional neural network. First, unlike previous hiding frameworks, the proposed method consists of one hiding network and two structurally identical extraction networks, so that two full-size secret images can be effectively hidden in, and extracted from, a single cover image. Second, to improve visual quality, an improved pyramid pooling module and a preprocessing module are added to the hiding and extraction networks. Tests on multiple datasets show that the proposed method significantly improves visual quality over existing image hiding methods: the PSNR and SSIM of the cover image are increased by 3.75 dB and 3.61%, respectively, the achieved relative capacity is 2, and the method exhibits good generalization ability.
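
    As a reading aid, the framework summarized in the abstract can be pictured as one hiding network that maps a cover image plus two secret images to a stego image, and two structurally identical extraction networks that each recover one secret from that stego image. The PyTorch sketch below only illustrates this two-channel wiring on random 3×256×256 tensors; the placeholder ConvBlock and all layer sizes are assumptions, not the authors' implementation, which additionally uses a preprocessing module and an improved pyramid pooling module.

        import torch
        import torch.nn as nn

        class ConvBlock(nn.Module):
            # Hypothetical placeholder body; the paper's networks additionally contain
            # a preprocessing module and an improved pyramid pooling module.
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, out_ch, 3, padding=1), nn.Sigmoid())

            def forward(self, x):
                return self.body(x)

        class TwoChannelHiding(nn.Module):
            # One hiding network and two structurally identical extraction networks.
            def __init__(self):
                super().__init__()
                self.hide = ConvBlock(9, 3)      # cover(3ch) + secret1(3ch) + secret2(3ch) -> stego(3ch)
                self.extract1 = ConvBlock(3, 3)  # stego -> recovered secret 1
                self.extract2 = ConvBlock(3, 3)  # stego -> recovered secret 2

            def forward(self, cover, secret1, secret2):
                stego = self.hide(torch.cat([cover, secret1, secret2], dim=1))
                return stego, self.extract1(stego), self.extract2(stego)

        cover, s1, s2 = (torch.rand(1, 3, 256, 256) for _ in range(3))
        stego, r1, r2 = TwoChannelHiding()(cover, s1, s2)
        print(stego.shape, r1.shape, r2.shape)   # each: torch.Size([1, 3, 256, 256])
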
  • Figure 1  Hiding framework

    Figure 2  Improved modules

    Figure 3  Hiding network and extraction network

    Figure 4  Subjective visual comparison

    Figure 5  Comparison of magnified details

    Figure 6  StegExpose steganalysis results

    Figure 7  Test results on different datasets

    Table 1  Comparison with Ref. [9]

                                  Cover-Stego           Secret 1-Extracted 1   Secret 2-Extracted 2
    Image        Method           PSNR(dB)   SSIM(%)    PSNR(dB)   SSIM(%)     PSNR(dB)   SSIM(%)
    Figure 4(a)  Ref. [9]         31.95      94.44      29.68      83.90       26.93      78.43
                 Proposed method  34.48      99.28      40.13      97.91       32.60      98.22
    Figure 4(b)  Ref. [9]         31.08      95.19      28.72      93.01       33.22      88.67
                 Proposed method  38.17      98.47      37.16      98.11       34.35      97.28
    Average      Ref. [9]         32.32      94.81      30.40      90.70       30.55      90.29
                 Proposed method  36.07      98.42      34.97      96.56       35.11      96.48
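
    The PSNR (dB) and SSIM (shown here as percentages) values in Tables 1, 2 and 6 follow the standard definitions of these metrics [18,19]. Below is a minimal sketch of how they are computed, using random stand-in images rather than the paper's data, and assuming scikit-image >= 0.19 for the channel_axis argument:

        import numpy as np
        from skimage.metrics import structural_similarity

        def psnr(a, b, peak=255.0):
            # Peak signal-to-noise ratio in dB between two same-shape images.
            mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        cover = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in cover image
        noise = np.random.randint(-3, 4, cover.shape)                     # stand-in embedding distortion
        stego = np.clip(cover.astype(int) + noise, 0, 255).astype(np.uint8)

        print("PSNR: %.2f dB" % psnr(cover, stego))
        # channel_axis requires scikit-image >= 0.19; older releases use multichannel=True.
        print("SSIM: %.2f %%" % (100 * structural_similarity(cover, stego, channel_axis=2, data_range=255)))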

    Table 2  PSNR and SSIM comparison in the ablation study

              Cover-Stego           Secret 1-Extracted 1   Secret 2-Extracted 2
    Model     PSNR(dB)   SSIM(%)    PSNR(dB)   SSIM(%)     PSNR(dB)   SSIM(%)
    ImageNet  36.07      98.42      34.97      96.56       35.11      96.48
    *Prep     35.66      96.89      34.23      95.12       34.44      95.17
    *Pyramid  34.86      95.65      33.85      94.93       34.10      96.27

    Table 3  Steganalysis results

    Hiding model     StegExpose AUC   SRNet steganalysis accuracy
    Proposed method  0.5533           0.6844
    Ref. [9]         –                0.6975
    *Prep            0.6223           0.7195
    *Pyramid         0.5698           0.6994
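
    The StegExpose AUC column is the area under the ROC curve of a detector asked to separate stego images from cover images; a value near 0.5 means the detector does no better than random guessing, so lower is better for the hiding method. A minimal sketch of how such an AUC is computed from detector scores with scikit-learn (the labels and scores below are random stand-ins, not StegExpose or SRNet output):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        labels = np.array([0] * 100 + [1] * 100)  # 0 = cover image, 1 = stego image
        scores = rng.random(200)                  # hypothetical detector scores, not real StegExpose output
        print("AUC: %.4f" % roc_auc_score(labels, scores))  # ~0.5 when stego images are indistinguishable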

    Table 4  Comparison of embedding capacity

    Method           Absolute capacity (Byte)   Stego image size (Byte)   Relative capacity
    Ref. [21]        18.3~135.4                 64×64                     1.49×10⁻³~1.10×10⁻²
    Ref. [22]        1535~4300                  1024×1024                 1.46×10⁻³~4.10×10⁻³
    Ref. [23]        26214~104857               512×512                   1×10⁻¹~4×10⁻¹
    Ref. [10]        3×224×224                  3×224×224                 1
    Ref. [12]        3×256×256~3×512×512        3×512×512                 2.5×10⁻¹~1
    Proposed method  6×256×256                  3×256×256                 2
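
    Relative capacity in Table 4 is the ratio of the hidden payload to the stego-image size. Hiding two full-size 3×256×256 secret images in one 3×256×256 stego image therefore yields a relative capacity of 2, as the short worked check below (not code from the paper) confirms:

        # Two full-size 3x256x256 secret images hidden in one 3x256x256 stego image.
        payload = 2 * 3 * 256 * 256   # absolute capacity, 6x256x256
        stego_size = 3 * 256 * 256    # stego image size, 3x256x256
        print(payload / stego_size)   # 2.0 -> relative capacity of 2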

    Table 5  Comparison of modification rate and extraction rate (%)

    Image             Method           Cover modification rate   Secret 1 extraction rate   Secret 2 extraction rate
    Figure 4(a)       Ref. [9]         1.35                      98.65                      97.44
                      Proposed method  1.68                      99.30                      98.18
    Figure 4(b)       Ref. [9]         1.20                      98.47                      98.86
                      Proposed method  1.09                      98.97                      98.47
    ImageNet average  Ref. [9]         1.80                      98.02                      97.62
                      Proposed method  1.61                      98.99                      98.92
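
    Table 5 reports how strongly embedding alters the cover image and how completely each secret image is recovered. The exact definitions of modification rate and extraction rate are not reproduced on this page; the sketch below assumes a straightforward reading, namely the fraction of pixel values that differ within each image pair, and runs on toy data only:

        import numpy as np

        def differing_fraction(a, b):
            # Fraction of pixel values that differ between two same-shape images.
            return float(np.mean(a != b))

        rng = np.random.default_rng(0)
        cover = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
        stego = cover.copy(); stego[::60, ::60] ^= 1           # toy embedding perturbation
        secret = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
        extracted = secret.copy(); extracted[::40, ::40] ^= 1  # toy extraction error

        print("Modification rate: %.2f%%" % (100 * differing_fraction(cover, stego)))
        print("Extraction rate:   %.2f%%" % (100 * (1 - differing_fraction(secret, extracted))))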

    Table 6  Test results on five datasets

                       Cover-Stego                       Secret 1-Extracted 1               Secret 2-Extracted 2
    Dataset            PSNR(dB)  SSIM(%)  Mod. rate(%)   PSNR(dB)  SSIM(%)  Extr. rate(%)   PSNR(dB)  SSIM(%)  Extr. rate(%)
    CelebA             35.81     96.40    1.83           36.18     96.90    98.22           36.24     97.50    98.24
    COCO               34.03     96.41    2.22           33.29     93.79    97.62           33.82     94.98    97.76
    VOC2012            34.06     96.48    2.23           33.45     93.54    97.66           34.10     94.80    97.80
    AID                34.83     97.29    2.07           32.35     93.28    96.75           34.12     95.22    97.65
    UCMerced Land Use  34.53     96.88    2.16           31.25     92.52    96.44           32.12     94.76    96.85
  • [1] ZHANG Chaoning, LIN Chenguo, BENZ P, et al. A brief survey on deep learning based data hiding, steganography and watermarking[EB/OL]. https://arxiv.org/abs/2103.01607, 2021.
    [2] KER A D. Improved detection of LSB steganography in grayscale images[C]. The 6th International Workshop on Information Hiding (IH), Toronto, Canada, 2004: 97–115.
    [3] FILLER T, JUDAS J, and FRIDRICH J. Minimizing additive distortion in steganography using Syndrome-Trellis Codes[J]. IEEE Transactions on Information Forensics and Security, 2011, 6(3): 920–935. doi: 10.1109/TIFS.2011.2134094
    [4] FRIDRICH J and FILLER T. Practical methods for minimizing embedding impact in steganography[C]. SPIE 6505, Security, Steganography, and Watermarking of Multimedia Contents IX, San Jose, USA, 2007: 13–27.
    [5] FRIDRICH J, GOLJAN M, LISONEK P, et al. Writing on wet paper[C]. SPIE 5681, Security, Steganography, and Watermarking of Multimedia Contents VII, San Jose, USA, 2005: 328–340.
    [6] TANG Weixuan, LI Bin, BARNI M, et al. An automatic cost learning framework for image steganography using deep reinforcement learning[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 952–967. doi: 10.1109/TIFS.2020.3025438
    [7] ZHANG Chaoning, BENZ P, KARJAUV A, et al. UDH: Universal Deep Hiding for steganography, watermarking, and light field messaging[J]. Advances in Neural Information Processing Systems, 2020, 33: 10223–10234.
    [8] LUO Xiyang, ZHAN Ruohan, CHANG Huiwen, et al. Distortion agnostic deep watermarking[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 13545–13554.
    [9] BALUJA S. Hiding images within images[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(7): 1685–1697. doi: 10.1109/TPAMI.2019.2901877
    [10] CHEN Feng, XING Qinghua, and LIU Fuxian. Technology of hiding and protecting the secret image based on two-channel deep hiding network[J]. IEEE Access, 2020, 8: 21966–21979. doi: 10.1109/ACCESS.2020.2969524
    [11] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 2818–2826.
    [12] YU Chong. Attention based data hiding with generative adversarial networks[C]. The 34th AAAI Conference on Artificial Intelligence (AAAI), New York, USA, 2020: 1120–1128.
    [13] ZHU Junyan, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 2242–2251.
    [14] ZHOU Bolei, KHOSLA A, LAPEDRIZA À, et al. Learning deep features for discriminative localization[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 2921–2929.
    [15] ZHOU Bolei, KHOSLA A, LAPEDRIZA À, et al. Object detectors emerge in deep scene CNNs[EB/OL]. http://arxiv.org/abs/1412.6856, 2015.
    [16] ZHAO Hengshuang, SHI Jianping, QI Xiaojuan, et al. Pyramid scene parsing network[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 2017: 6230–6239.
    [17] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. The 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241.
    [18] HORÉ A and ZIOU D. Image quality metrics: PSNR vs. SSIM[C]. The 20th International Conference on Pattern Recognition, Istanbul, Turkey, 2010: 2366–2369.
    [19] YE Yuanxin, SHAN Jie, BRUZZONE L, et al. Robust registration of multimodal remote sensing images based on structural similarity[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(5): 2941–2958. doi: 10.1109/TGRS.2017.2656380
    [20] BOEHM B. StegExpose - A tool for detecting LSB steganography[EB/OL]. http://arxiv.org/abs/1410.6656, 2014.
    [21] BOROUMAND M, CHEN Mo, and FRIDRICH J. Deep residual network for steganalysis of digital images[J]. IEEE Transactions on Information Forensics and Security, 2019, 14(5): 1181–1193. doi: 10.1109/TIFS.2018.2871749
    [22] WU Kuochen and WANG C. Steganography using reversible texture synthesis[J]. IEEE Transactions on Image Processing, 2015, 24(1): 130–139. doi: 10.1109/TIP.2014.2371246
    [23] YANG Jianhua, LIU Kai, KANG Xiangui, et al. Spatial image steganography based on generative adversarial network[EB/OL]. http://arxiv.org/abs/1804.07939, 2018.
Publication history
  • Received: 2021-04-06
  • Revised: 2021-09-13
  • Accepted: 2021-09-13
  • Published online: 2021-12-22
  • Issue published: 2022-05-25
