Volume 45 Issue 9
Sep.  2023
LI Ying, LI Yanjie, CUI Xiaoxin, NI Qinglong, ZHOU Yinhao. Weight Quantization Method for Spiking Neural Networks and Analysis of Adversarial Robustness[J]. Journal of Electronics & Information Technology, 2023, 45(9): 3218-3227. doi: 10.11999/JEIT230300

Weight Quantization Method for Spiking Neural Networks and Analysis of Adversarial Robustness

doi: 10.11999/JEIT230300
Funds:  STI 2030-Major Projects (2022ZD0208700)
  • Received Date: 2023-04-19
  • Rev Recd Date: 2023-08-17
  • Available Online: 2023-08-23
  • Publish Date: 2023-09-27
  • Spiking Neural Networks (SNNs) on neuromorphic chips offer high sparsity and low power consumption, making them well suited to visual classification tasks; however, they remain vulnerable to adversarial attacks, and existing studies lack robustness metrics for the quantization step of hardware deployment. This paper studies the weight quantization of SNNs during hardware mapping and analyzes the resulting adversarial robustness. A supervised training algorithm based on backpropagation with surrogate gradients is proposed, and adversarial attack samples are generated on the CIFAR-10 dataset with the Fast Gradient Sign Method (FGSM). A perception quantization method and an evaluation framework that integrates adversarial training and inference are also proposed. Experimental results show that direct encoding yields the worst adversarial robustness in the VGG9 network: across four encoding schemes and four structural-parameter combinations, the accuracy loss and the change in inter-layer spike activity before and after weight quantization increase by up to 73.23% and 51.5%, respectively. The sparsity factors affect robustness in the order: threshold increase > bit-width reduction in weight quantization > sparse coding. The proposed analysis framework and weight quantization method are validated on the PIcore neuromorphic chip.
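  • The two core operations named in the abstract — low-bit weight quantization and FGSM sample generation — can be sketched as follows. This is a minimal NumPy illustration on a toy logistic model, not the authors' implementation: the symmetric per-tensor quantization scheme and all function names are assumptions.

    ```python
    import numpy as np

    def quantize_weights(w, bits=4):
        """Symmetric uniform quantization of a weight tensor to `bits` bits (assumed scheme)."""
        qmax = 2 ** (bits - 1) - 1            # e.g. 7 positive levels for 4 bits
        scale = np.max(np.abs(w)) / qmax      # one scale per tensor
        return np.round(w / scale) * scale    # de-quantized values, for robustness analysis

    def fgsm(x, y, w, eps=0.05):
        """FGSM on a logistic model p = sigmoid(w.x): x_adv = x + eps * sign(dL/dx)."""
        p = 1.0 / (1.0 + np.exp(-x @ w))      # sigmoid output
        grad_x = (p - y) * w                  # gradient of cross-entropy loss w.r.t. input
        return x + eps * np.sign(grad_x)      # perturbation bounded by eps in L-infinity

    rng = np.random.default_rng(0)
    w = rng.normal(size=8)
    x = rng.normal(size=8)
    # Attack the quantized model, as in a post-deployment robustness evaluation.
    x_adv = fgsm(x, y=1.0, w=quantize_weights(w, bits=4), eps=0.05)
    print(np.max(np.abs(x_adv - x)))          # never exceeds eps
    ```

  In a real SNN evaluation the gradient through the spiking non-linearity would come from the surrogate-gradient backward pass rather than this closed-form logistic gradient.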
  • [1]
    谭铁牛: 人工智能的历史、现状和未来[EB/OL]. https://www.cas.cn/zjs/201902/t20190218_4679625.shtml, 2019.

    Tan Tieniu. The historyk, present and future of artificial intelligence. Chinese Academy of Sciences[EB/OL]. https://www.cas.cn/zjs/201902/t20190218_4679625.shtml, 2019.
    [2]
    LIU Aishan, LIU Xianglong, FAN Jiaxin, et al. Perceptual-sensitive GAN for generating adversarial patches[C]. The 33rd AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, Honolulu, USA, 2019: 127.
    [3]
    ZHANG Guoming, YAN Chen, JI Xiaoyu, et al. DolphinAttack: Inaudible voice commands[C]. The 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, USA, 2017: 103–117.
    [4]
    WARREN T. Microsoft’s Outlook spam email filters are broken for many right now[EB/OL]. https://www.theverge.com/2023/2/20/23607056/microsoft-outlook-spam-email-filters-not-working-broken, 2023.
    [5]
    董庆宽, 何浚霖. 基于信息瓶颈的深度学习模型鲁棒性增强方法[J]. 电子与信息学报, 2023, 45(6): 2197–2204. doi: 10.11999/JEIT220603

    DONG Qingkuan and HE Junlin. Robustness enhancement method of deep learning model based on information bottleneck[J]. Journal of Electronics &Information Technology, 2023, 45(6): 2197–2204. doi: 10.11999/JEIT220603
    [6]
    WEI Mingliang, YAYLA M, HO S Y, et al. Binarized SNNs: Efficient and error-resilient spiking neural networks through binarization[C]. 2021 IEEE/ACM International Conference on Computer Aided Design, Munich, Germany, 2021: 1–9.
    [7]
    EL-ALLAMI R, MARCHISIO A, SHAFIQUE M, et al. Securing deep spiking neural networks against adversarial attacks through inherent structural parameters[C]. 2021 Design, Automation & Test in Europe Conference & Exhibition, Grenoble, France, 2021: 774–779.
    [8]
    SHARMIN S, RATHI N, PANDA P, et al. Inherent adversarial robustness of deep spiking neural networks: Effects of discrete input encoding and non-linear activations[C]. The 16th European Conference, Glasgow, UK, 2020: 399–414.
    [9]
    KUNDU S, PEDRAM M, and BEEREL P A. HIRE-SNN: Harnessing the inherent robustness of energy-efficient deep spiking neural networks by training with crafted input noise[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 5209–5218.
    [10]
    KIM Y, PARK H, MOITRA A, et al. Rate coding or direct coding: Which one is better for accurate, robust, and energy-efficient spiking neural networks?[C]. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing, Singapore, 2022: 71–75.
    [11]
    O'CONNOR P and WELLING M. Deep spiking networks[J]. arXiv preprint arXiv: 1602.08323, 2016.
    [12]
    RATHI N, SRINIVASAN G, PANDA P, et al. Enabling deep spiking neural networks with hybrid conversion and spike timing dependent backpropagation[C]. The 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.
    [13]
    TAVANAEI A and MAIDA A. BP-STDP: Approximating backpropagation using spike timing dependent plasticity[J]. Neurocomputing, 2019, 330: 39–47. doi: 10.1016/j.neucom.2018.11.014
    [14]
    SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]. The 2nd International Conference on Learning Representations, Banff, Canada, 2014.
    [15]
    GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. The 3rd International Conference on Learning Representations, San Diego, USA, 2015.
    [16]
    SHAFAHI A, NAJIBI M, GHIASI A, et al. Adversarial training for free![C]. The 32nd International Conference on Neural Information Processing Systems, Vancouver, Canada, 2019.
    [17]
    MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]. The 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
    [18]
    LI Yanjie, CUI Xiaoxin, ZHOU Yihao, et al. A comparative study on the performance and security evaluation of spiking neural networks[J]. IEEE Access, 2022, 10: 117572–117581. doi: 10.1109/ACCESS.2022.3220367
    [19]
    KUANG Yisong, CUI Xiaoxin, ZHONG Yi, et al. A 64K-neuron 64M-1b-synapse 2.64 pJ/SOP neuromorphic chip with all memory on chip for spike-based models in 65nm CMOS[J]. IEEE Transactions on Circuits and Systems II:Express Briefs, 2021, 68(7): 2655–2659. doi: 10.1109/TCSII.2021.3052172
  • 加载中

    Figures(5)  / Tables(7)
