Volume 45, Issue 6, Jun. 2023
Citation: DONG Qingkuan, HE Junlin. Robustness Enhancement Method of Deep Learning Model Based on Information Bottleneck[J]. Journal of Electronics & Information Technology, 2023, 45(6): 2197-2204. doi: 10.11999/JEIT220603

Robustness Enhancement Method of Deep Learning Model Based on Information Bottleneck

doi: 10.11999/JEIT220603
Funds: The Science Basic Research Plan in Shaanxi Province of China (2020JM-184)
  • Received Date: 2022-05-12
  • Rev Recd Date: 2022-10-13
  • Available Online: 2022-10-20
  • Publish Date: 2023-06-10
Abstract: As the core algorithm of deep learning, deep neural networks are easily misled into wrong judgments by adversarial examples carrying imperceptible perturbations, which poses new challenges to the security of deep learning models. A model's resistance to adversarial examples is called robustness. To improve the robustness of models obtained through adversarial training, an adversarial training algorithm for deep learning models based on the information bottleneck is proposed. The information bottleneck describes the deep learning process from an information-theoretic perspective, allowing the model to converge faster. The proposed algorithm uses conclusions derived from an optimization objective formulated under information bottleneck theory: it adds the tensor fed into the model's linear classification layer to the loss function, and aligns the high-level features obtained when clean samples and adversarial samples are input to the model by means of cross-training. The model thereby better learns the relationship between input samples and their true labels during training, and ultimately attains good robustness against adversarial examples. Experimental results show that the proposed algorithm is robust to a variety of adversarial attacks and generalizes across different datasets and models.
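
As background, the classical information bottleneck objective, on which the abstract's optimization objective is based, seeks a compressed stochastic representation $T$ of the input $X$ that remains informative about the label $Y$:
$$\min_{p(t\mid x)} \; I(X;T) - \beta\, I(T;Y)$$
where $I(\cdot\,;\cdot)$ denotes mutual information and $\beta > 0$ trades compression against predictive power.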
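
The abstract describes the training procedure only at a high level: a penalty on the tensor entering the linear classification layer, plus alignment of the high-level features of clean and adversarial inputs through cross-training. Below is a minimal PyTorch-style sketch of one training step under that reading; the attribute names model.features and model.classifier, the attack helper, and the coefficients alpha and beta are illustrative assumptions, not the authors' implementation.

import torch.nn.functional as F

def train_step(model, optimizer, x_clean, y, attack, alpha=0.01, beta=1.0):
    # Craft adversarial counterparts of the clean batch (e.g. with PGD);
    # `attack` is an assumed helper, not an API from the paper.
    x_adv = attack(model, x_clean, y)

    # High-level features: the tensors fed into the linear classification layer.
    z_clean = model.features(x_clean)
    z_adv = model.features(x_adv)

    # Cross-training: classify both clean and adversarial features
    # against the same true labels.
    loss_cls = (F.cross_entropy(model.classifier(z_clean), y)
                + F.cross_entropy(model.classifier(z_adv), y))

    # Penalty on the pre-classifier tensor, one plausible reading of
    # "adds the tensor input to the linear classification layer to the loss".
    loss_ib = z_adv.pow(2).mean()

    # Align adversarial features with (detached) clean features.
    loss_align = F.mse_loss(z_adv, z_clean.detach())

    loss = loss_cls + alpha * loss_ib + beta * loss_align
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Detaching the clean features in the alignment term is one plausible design choice here: gradients then push the adversarial representation toward the clean one rather than the reverse.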
  • [1]
    SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]. The 2nd International Conference on Learning Representations (ICLR), Banff, Canada, 2014: 1–10.
    [2]
    GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. The 3rd International Conference on Learning Representations (ICLR), San Diego, USA, 2015: 1–11.
    [3]
    MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]. 6th International Conference on Learning Representations (ICLR), Vancouver, Canada, 2018: 1–28.
    [4]
    MOOSAVI-DEZFOOLI S M, FAWZI A, and FROSSARD P. DeepFool: A simple and accurate method to fool deep neural networks[C]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 2574–2582.
    [5]
    CARLINI N and WAGNER D. Towards evaluating the robustness of neural networks[C]. IEEE Symposium on Security and Privacy (SP), San Jose, USA, 2017: 39–57.
    [6]
    WONG E, RICE L, and KOLTER J Z. Fast is better than free: Revisiting adversarial training[C]. The 8th International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 2020: 1–17.
    [7]
    ZHENG Haizhong, ZHANG Ziqi, GU Juncheng, et al. Efficient adversarial training with transferable adversarial examples[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 1178–1187.
    [8]
    DONG Yinpeng, DENG Zhijie, PANG Tianyu, et al. Adversarial distributional training for robust deep learning[C]. The 34th International Conference on Neural Information Processing Systems (NeurIPS), Vancouver, Canada, 2020: 693.
    [9]
    WANG Hongjun, LI Guanbin, LIU Xiaobai, et al. A Hamiltonian Monte Carlo method for probabilistic adversarial attack and learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(4): 1725–1737. doi: 10.1109/TPAMI.2020.3032061
    [10]
    CHEN Sizhe, HE Zhengbao, SUN Chengjin, et al. Universal adversarial attack on attention and the resulting dataset DAmageNet[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(4): 2188–2197. doi: 10.1109/TPAMI.2020.3033291
    [11]
    FAN Jiameng and LI Wenchao. Adversarial training and provable robustness: A tale of two objectives[C/OL]. The 35th AAAI Conference on Artificial Intelligence, 2021: 7367–7376.
    [12]
    GOKHALE T, ANIRUDH R, KAILKHURA B, et al. Attribute-guided adversarial training for robustness to natural perturbations[C/OL]. The 35th AAAI Conference on Artificial Intelligence, 2021: 7574–7582.
    [13]
    LI Xiaoyu, ZHU Qinsheng, HUANG Yiming, et al. Research on the freezing phenomenon of quantum correlation by machine learning[J]. Computers, Materials & Continua, 2020, 65(3): 2143–2151. doi: 10.32604/cmc.2020.010865
    [14]
    SALMAN H, SUN Mingjie, YANG G, et al. Denoised smoothing: A provable defense for pretrained classifiers[C]. The 34th International Conference on Neural Information Processing Systems (NeurIPS), Vancouver, Canada, 2020: 1841.
    [15]
    SHAO Rui, PERERA P, YUEN P C, et al. Open-set adversarial defense with clean-adversarial mutual learning[J]. International Journal of Computer Vision, 2022, 130(4): 1070–1087. doi: 10.1007/s11263-022-01581-0
    [16]
    MUSTAFA A, KHAN S H, HAYAT M, et al. Image super-resolution as a defense against adversarial attacks[J]. IEEE Transactions on Image Processing, 2020, 29: 1711–1724. doi: 10.1109/TIP.2019.2940533
    [17]
    GU Shuangchi, YI Ping, ZHU Ting, et al. Detecting adversarial examples in deep neural networks using normalizing filters[C]. The 11th International Conference on Agents and Artificial Intelligence (ICAART), Prague, Czech Republic, 2019: 164–173.
    [18]
    TISHBY N, PEREIRA F C, and BIALEK W. The information bottleneck method[EB/OL]. https://arxiv.org/pdf/physics/0004057.pdf, 2000.
    [19]
    TISHBY N and ZASLAVSKY N. Deep learning and the information bottleneck principle[C]. IEEE Information Theory Workshop (ITW), Jerusalem, Israel, 2015: 1–5.
    [20]
    SHWARTZ-ZIV R and TISHBY N. Opening the black box of deep neural networks via information[EB/OL]. https://arXiv.org/abs/1703.00810, 2017.
    [21]
    KOLCHINSKY A, TRACEY B D, and WOLPERT D H. Nonlinear information bottleneck[J]. Entropy, 2019, 21(12): 1181. doi: 10.3390/e21121181
    [22]
    ALEMI A A, FISCHER I, DILLON J V, et al. Deep variational information bottleneck[C]. The 5th International Conference on Learning Representations (ICLR), Toulon, France, 2017: 1–19.
    [23]
    SHAMIR O, SABATO S, and TISHBY N. Learning and generalization with the information bottleneck[J]. Theoretical Computer Science, 2010, 411(29/30): 2696–2711. doi: 10.1016/j.tcs.2010.04.006
    [24]
    STILL S and BIALEK W. How many clusters? An information-theoretic perspective[J]. Neural Computation, 2004, 16(12): 2483–2506. doi: 10.1162/0899766042321751
    [25]
    KINGMA D P and BA J. Adam: A method for stochastic optimization[C]. 3rd International Conference on Learning Representations (ICLR), San Diego, USA, 2015: 1–15.
    [26]
    ZHANG Hongyang, YU Yaodong, JIAO Jiantao, et al. Theoretically principled trade-off between robustness and accuracy[C]. The 36th International Conference on Machine Learning (ICML), Long Beach, USA, 2019: 7472–7482.
    [27]
    ZHANG Haichao and WANG Jianyu. Defense against adversarial attacks using feature scattering-based adversarial training[C]. The 33rd International Conference on Neural Information Processing Systems (NeurIPS), Vancouver, Canada, 2019: 164.
    [28]
    HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 770–778.
    [29]
    LIU Shuying and DENG Weihong. Very deep convolutional neural network based image classification using small training sample size[C]. The 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 2015: 730–734.