A Method to Visualize Deep Convolutional Networks Based on Model Reconstruction

Jiaming LIU, Mengdao XING, Jixiang FU, Dan XU

Citation: Jiaming LIU, Mengdao XING, Jixiang FU, Dan XU. A Method to Visualize Deep Convolutional Networks Based on Model Reconstruction[J]. Journal of Electronics & Information Technology, 2019, 41(9): 2194-2200. doi: 10.11999/JEIT180916


doi: 10.11999/JEIT180916
Funds: The National Defense Science and Technology Excellent Youth Talent Foundation of China (2017-JCJQ-ZQ-061)
Details
    Author information:

    Jiaming LIU: male, born in 1994, Ph.D. candidate. His research interest is target recognition.

    Mengdao XING: male, born in 1975, professor. His research interests include SAR/ISAR imaging and moving target detection.

    Jixiang FU: male, born in 1992, Ph.D. candidate. His research interest is ISAR imaging.

    Dan XU: female, born in 1992, Ph.D. candidate. Her research interest is electromagnetic feature extraction.

    Corresponding author:

    Jiaming LIU, liujiaming@stu.xidian.edu.cn

  • CLC number: TN957.51

  • Abstract: To analyze how deep convolutional networks work, a weight visualization method based on model reconstruction is proposed. First, a test sample is propagated forward through the original neural network to obtain the prior information needed for reconstruction. Then, parts of the original network structure are modified to simplify the subsequent parameter computation. Next, the parameters of the reconstructed model are computed one by one using groups of orthogonal vectors. Finally, the computed parameters are rearranged in a specific order to visualize the weights. Experimental results show that, for deep convolutional networks satisfying certain conditions, the model reconstructed by the proposed method is exactly equivalent to the original model in the forward propagation of classification, and distinctive features can be clearly observed in the reconstructed model's weights, which makes it possible to analyze how the neural network classifies images.
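The reconstruction rests on the fact that once a specific test sample has fixed the states of the network's piecewise-linear units (e.g. ReLU gates and max-pooling selections) during forward propagation, the network acts as an affine map, whose equivalent weights can be recovered by probing it with an orthogonal vector group such as the standard basis. A minimal sketch of this idea on a toy two-layer ReLU network (all names and shapes are illustrative, not the paper's code):

```python
import numpy as np

# Toy two-layer ReLU network. For a FIXED input, the ReLU gates are
# fixed, so the network computes an affine map y = W_eq @ x + b_eq.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 6)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

x0 = rng.standard_normal(6)                 # test sample
gates = (W1 @ x0 + b1 > 0).astype(float)    # prior info from one forward pass

def forward_fixed(x):
    """Network with the gate pattern frozen to the one induced by x0."""
    return W2 @ (gates * (W1 @ x + b1)) + b2

# Probe the frozen network with the standard basis (an orthogonal
# vector group) to recover the equivalent weights column by column.
b_eq = forward_fixed(np.zeros(6))
W_eq = np.column_stack([forward_fixed(e) - b_eq for e in np.eye(6)])

def forward(x):
    """Original network with live ReLU nonlinearities."""
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

# On the test sample itself the reconstruction is exact.
print(np.allclose(W_eq @ x0 + b_eq, forward(x0)))  # True
```

Probing an n-dimensional input space costs n forward passes; each row of the recovered `W_eq` can then be reshaped to the input layout and displayed as an image, which is the kind of rearrangement the weight-visualization step describes.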
  • Figure 1. Orthogonal vector group

    Figure 2. Max pooling

    Figure 3. Visualization arrangement of the weights

    Figure 4. Training samples

    Figure 5. Learning curve

    Figure 6. Weight visualization with an An-26 image as the input sample

    Figure 7. Weight visualization with a Yak-42 image as the input sample

    Figure 8. Deconvolution results for the An-26 sample

    Figure 9. Deconvolution results for the Yak-42 sample

    Figure 10. Weight visualization of the reconstructed models at intervals of 100 iterations

    Figure 11. Correlation coefficients between the weights of reconstructed models 100 iterations apart
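Figure 11 tracks training stability through the correlation coefficient between the weights of models reconstructed 100 iterations apart. A minimal sketch of that metric on flattened weight arrays (the function name is mine, not the paper's):

```python
import numpy as np

def weight_correlation(w_prev, w_next):
    """Pearson correlation coefficient between two flattened weight arrays."""
    return np.corrcoef(np.ravel(w_prev), np.ravel(w_next))[0, 1]

# Identical weights correlate perfectly; sign-flipped weights anti-correlate.
w = np.arange(12.0).reshape(3, 4)
print(weight_correlation(w, w))    # 1.0
print(weight_correlation(w, -w))   # -1.0
```

A correlation near 1.0 between successive reconstructions indicates that the equivalent weights have stopped changing, i.e. the features the network relies on have stabilized.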

    Table 1. Comparison of the classification outputs of the two models

    Model                 | An-26 aircraft images               | Yak-42 aircraft images
    --------------------- | ----------------------------------- | -------------------------------------
    Original model        | –13.45  –9.22  –8.14  –6.75  –5.63  |  10.65   11.01   5.93   9.12   7.31
                          |  13.42   9.30   8.23   6.83   5.69  | –10.81  –11.15  –6.01  –9.22  –7.38
    Reconstructed model   | –13.45  –9.22  –8.14  –6.75  –5.63  |  10.65   11.01   5.93   9.12   7.31
                          |  13.42   9.30   8.23   6.83   5.69  | –10.81  –11.15  –6.01  –9.22  –7.38
Publication history
  • Received: 2018-09-21
  • Revised: 2019-02-19
  • Published online: 2019-03-21
  • Issue date: 2019-09-10

目录

    /

    返回文章
    返回