Spiking Sequence Label-Based Spatio-Temporal Back-Propagation Algorithm for Training Deep Spiking Neural Networks

WANG Zihua, YE Ying, LIU Hongyun, XU Yan, FAN Yubo, WANG Weidong

Citation: WANG Zihua, YE Ying, LIU Hongyun, XU Yan, FAN Yubo, WANG Weidong. Spiking Sequence Label-Based Spatio-Temporal Back-Propagation Algorithm for Training Deep Spiking Neural Networks[J]. Journal of Electronics & Information Technology, 2024, 46(6): 2596-2604. doi: 10.11999/JEIT230705


doi: 10.11999/JEIT230705
Funds: The Scientific and Technological Innovation 2030 "New Generation Artificial Intelligence" Major Project (2020AAA0105800)
Details
    About the authors:

    WANG Zihua: male, Ph.D., research interests: applications of computer vision in medical imaging

    YE Ying: female, master's student, research interests: spiking neural network algorithms

    LIU Hongyun: male, associate research fellow, research interests: neuromodulation, artificial intelligence algorithms, and intelligent hardware

    XU Yan: female, professor, research interests: medical artificial intelligence

    FAN Yubo: male, professor, research interests: biomechanics and mechanobiology, medical devices, intelligent rehabilitation engineering, and aerospace medical engineering

    WANG Weidong: male, research fellow, research interests: medical imaging physics and engineering, signal and information processing, biological computing theory, and spiking neural network algorithms

    Corresponding author:

    WANG Weidong, wangwd301@126.com

  • CLC number: TN911.7; TP18

  • Abstract: Spiking Neural Networks (SNNs) process signals in a manner close to that of the cerebral cortex and are regarded as an important route toward brain-inspired computing. However, effective supervised learning algorithms for deep spiking neural networks are still lacking. Inspired by the spatio-temporal back-propagation algorithm based on spike firing-rate labels, this paper proposes a supervised learning algorithm for training deep spiking neural networks based on temporal spike sequence labels. Back-propagation iteration factors are defined for the postsynaptic potential and the membrane potential to analyze the spatial and temporal dependencies of spiking neurons, respectively, and a surrogate gradient is used to handle the non-differentiability encountered during back-propagation. Unlike existing learning algorithms based on firing-rate labels, the proposed algorithm fully reflects the dynamics of the temporal spike sequences produced by the network. It is therefore well suited to computational tasks that require long temporal sequence labels, such as spike-sequence control of behavior. The effectiveness of the algorithm is verified on the static image dataset CIFAR10 and the neuromorphic dataset NMNIST, where it shows good performance on all datasets, supporting further research on brain-inspired computing based on temporal spike sequences.
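    The surrogate-gradient step mentioned in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch-style example, not the paper's exact formulation; the rectangular surrogate window, the threshold value, and the class name SurrogateSpike are assumptions for illustration only.

        import torch

        class SurrogateSpike(torch.autograd.Function):
            # Heaviside spike in the forward pass; surrogate derivative in the backward pass.
            @staticmethod
            def forward(ctx, membrane, v_th=1.0, width=1.0):
                ctx.save_for_backward(membrane)
                ctx.v_th, ctx.width = v_th, width
                return (membrane >= v_th).float()

            @staticmethod
            def backward(ctx, grad_output):
                membrane, = ctx.saved_tensors
                # Rectangular surrogate: gradient flows only near the firing threshold.
                surrogate = (torch.abs(membrane - ctx.v_th) < ctx.width / 2).float() / ctx.width
                return grad_output * surrogate, None, None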
  • Figure 1  Spiking neuron model and the corresponding multilayer feedforward spiking neural network architecture

    Figure 2  Information flow in the feedforward structure

    1  Pseudocode

     input : network inputs X, class label Y, initial weight vectors $\{ {{\boldsymbol{W}}^l}\} _{l = 1}^L$, membrane decay factor ${\tau _{\mathrm{m}}}$, synaptic decay factor ${\tau _{\mathrm{s}}}$, threshold potential ${V_{{\mathrm{th}}}}$, simulation window $T$
     output : output spike sequence ${O^L}$
     Forward pipeline:
      $v_{0:T - 1}^1 \leftarrow {\mathrm{repeat}}\;{\mathrm{input}}\;X\;{\mathrm{for}}\;T\;{\mathrm{time}}\;{\mathrm{steps}}$
      ${\text{target}} \leftarrow {\mathrm{encode}}\_{\mathrm{repeat}}(Y)$
      for $l \leftarrow 2$ to $L$ do
       for $t \leftarrow 0$ to $T - 1$ do
        $u_t^l,\;o_t^l \leftarrow {\mathrm{Update}}\_{\mathrm{neuron}}\_{\mathrm{state}}({w^l},v_t^{l - 1},u_{t - 1}^l,{\tau _{\mathrm{m}}},{V_{{\mathrm{th}}}})$ /Eq.(4)–(5)
        $v_t^l \leftarrow {\mathrm{Compute}}\_{\mathrm{psp}}(o_t^l)$ /Eq.(3a)
       end
      end
      ${\mathrm{Loss}} \leftarrow {\mathrm{loss}}\_{\mathrm{function}}({O^L},{\text{target}})$
     Backward pipeline:
      for $l \leftarrow L$ to 1 do
       for $t \leftarrow T - 1$ to 0 do
        if $t = T - 1$ and $l = L$ then
         $ \hat \delta _{T - 1}^L \leftarrow {\mathrm{Initial}}\_{\mathrm{psp}}\_{\mathrm{iteration}}\_{\mathrm{factor}} \left(\dfrac{{\partial L}}{{\partial a_{T - 1}^L}}\right) $ /Eq.(13)
         $ \tilde \delta _{T - 1}^L \leftarrow {\mathrm{Initial}}\_{\mathrm{mem}}\_{\mathrm{iteration}}\_{\mathrm{factor}} \left(\hat \delta _{T - 1}^L,\dfrac{{\partial a_{T - 1}^L}}{{\partial u_{T - 1}^L}}\right) $ /Eq.(12)
        else if $t = T - 1$ and $l \ne L$ then
         $ \hat \delta _{T - 1}^l \leftarrow {\mathrm{Update}}\_{\mathrm{psp}}\_{\mathrm{iteration}}\_{\mathrm{factor}}(\hat \delta _{T - 1}^{l + 1},{w^{l + 1}},u_{T - 1}^{l + 1},o_{T - 2}^{l + 1}) $ /Eq.(15)
         $ \tilde \delta _{T - 1}^l \leftarrow {\mathrm{Update}}\_{\mathrm{mem}}\_{\mathrm{iteration}}\_{\mathrm{factor}}(\hat \delta _{T - 1}^l,u_{T - 1}^l) $ /Eq.(14)
        else if $t \ne T - 1$ and $l = L$ then
         $ \hat \delta _t^L \leftarrow {\mathrm{Update}}\_{\mathrm{psp}}\_{\mathrm{iteration}}\_{\mathrm{factor}}(\hat \delta _{t + 1}^L,o_{t + 1}^L,u_{t + 1}^L,u_t^L) $ /Eq.(17)
         $ \tilde \delta _t^L \leftarrow {\mathrm{Update}}\_{\mathrm{mem}}\_{\mathrm{iteration}}\_{\mathrm{factor}}(\hat \delta _t^L,u_t^L,\tilde \delta _{t + 1}^L,o_t^L) $ /Eq.(16)
        else
         $ \hat \delta _t^l \leftarrow {\mathrm{Update}}\_{\mathrm{psp}}\_{\mathrm{iteration}}\_{\mathrm{factor}}(\hat \delta _t^{l + 1},o_{t - 1}^{l + 1},u_t^{l + 1},\hat \delta _{t + 1}^l,u_t^l,u_{t + 1}^l,o_t^l) $ /Eq.(11)
         $ \tilde \delta _t^l \leftarrow {\mathrm{Update}}\_{\mathrm{mem}}\_{\mathrm{iteration}}\_{\mathrm{factor}}(\hat \delta _t^l,u_t^l,\tilde \delta _{t + 1}^l,o_t^l) $ /Eq.(10)
        end
       end
      end
      ${\mathrm{Update}}\;{\boldsymbol{W}}{\text{ based on: }}\Delta {{\boldsymbol{W}}^l} = \displaystyle\sum\nolimits_{t = 0}^{T - 1} {a_t^{l - 1}\tilde \delta _t^l} $ /Eq.(18)
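    The forward pipeline of the listing above can be made concrete with a short sketch of one fully connected LIF layer unrolled over T time steps (leaky integration, spiking, and an exponential PSP trace, in the spirit of Eq.(3a) to Eq.(5)). The function name lif_forward, the tensor shapes, the decay constants, and the reuse of the SurrogateSpike sketch given after the abstract are illustrative assumptions rather than the paper's implementation.

        import torch

        def lif_forward(weights, psp_in, tau_m=0.8, tau_s=0.8, v_th=1.0):
            # psp_in: postsynaptic potentials from the previous layer, shape (T, batch, n_in)
            # weights: connection matrix of this layer, shape (n_out, n_in)
            T, batch, _ = psp_in.shape
            n_out = weights.shape[0]
            u = torch.zeros(batch, n_out)       # membrane potential u_t
            psp = torch.zeros(batch, n_out)     # postsynaptic potential trace v_t
            o_prev = torch.zeros(batch, n_out)  # spikes at the previous step o_{t-1}
            spikes, psps = [], []
            for t in range(T):
                current = psp_in[t] @ weights.t()         # synaptic input at step t
                u = tau_m * u * (1.0 - o_prev) + current  # leaky integration with reset after a spike
                o = SurrogateSpike.apply(u, v_th)         # spike with surrogate gradient
                psp = tau_s * psp + o                     # exponential PSP trace
                spikes.append(o)
                psps.append(psp)
                o_prev = o
            return torch.stack(spikes), torch.stack(psps)  # each of shape (T, batch, n_out)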

    Table 1  Test results on CIFAR10 (van Rossum distance)

    Method                Network    Time steps    Epochs    Accuracy (%)
    Converted SNN[24]     AlexNet    80            –         83.52
    STBP[12]              AlexNet    8             150       85.24
    Proposed algorithm    AlexNet    5             100       88.47
    ① : 96C3-256C3-P2-384C3-P2-384C3-256C3-1024-1024
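    Tables 1 and 3 report results obtained with the van Rossum distance as the spike-train dissimilarity measure. A minimal discrete-time sketch of this distance is given below; the kernel time constant, the normalization, and the function name are illustrative assumptions and not necessarily the paper's exact loss.

        import numpy as np

        def van_rossum_distance(spikes_a, spikes_b, tau=10.0, dt=1.0):
            # Filter both binary spike trains with a causal exponential kernel exp(-t/tau),
            # then take the L2 norm of the difference of the filtered traces.
            decay = np.exp(-dt / tau)
            trace_a = trace_b = 0.0
            squared_diff = 0.0
            for sa, sb in zip(spikes_a, spikes_b):
                trace_a = trace_a * decay + sa
                trace_b = trace_b * decay + sb
                squared_diff += (trace_a - trace_b) ** 2
            return np.sqrt(squared_diff * dt / tau)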

    Table 2  Test results on CIFAR10 (Hamming distance)

    Method                Network    Time steps    Epochs    Accuracy (%)
    Converted SNN[24]     AlexNet    80            –         83.52
    STBP[12]              AlexNet    8             150       85.24
    Proposed algorithm    AlexNet    10            100       89.10
    ① : 96C3-256C3-P2-384C3-P2-384C3-256C3-1024-1024
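    Tables 2 and 4 use the Hamming distance between the output and target spike sequences; for equal-length binary spike trains this reduces to counting the time steps at which the two trains disagree, as in the short sketch below (the function name is illustrative).

        import numpy as np

        def hamming_distance(spikes_a, spikes_b):
            # Number of time steps at which two equal-length binary spike trains differ.
            return int(np.sum(np.asarray(spikes_a) != np.asarray(spikes_b)))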

    Table 3  Test results on NMNIST (van Rossum distance)

    Method                Network            Time steps    Epochs    Accuracy (%)
    STBP[12]              12C5-P2-64C5-P2    50            150       98.19
    Proposed algorithm    12C5-P2-64C5-P2    25            100       98.61

    Table 4  Test results on NMNIST (Hamming distance)

    Method                Network            Time steps    Epochs    Accuracy (%)
    STBP[12]              12C5-P2-64C5-P2    50            150       98.19
    Proposed algorithm    12C5-P2-64C5-P2    25            100       98.67
  • [1] YAMAZAKI K, VO-HO V K, BULSARA D, et al. Spiking neural networks and their applications: A review[J]. Brain Sciences, 2022, 12(7): 863. doi: 10.3390/brainsci12070863.
    [2] MAKAROV V A, LOBOV S A, SHCHANIKOV S, et al. Toward reflective spiking neural networks exploiting memristive devices[J]. Frontiers in Computational Neuroscience, 2022, 16: 859874. doi: 10.3389/fncom.2022.859874.
    [3] BÜCHEL J, ZENDRIKOV D, SOLINAS S, et al. Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors[J]. Scientific Reports, 2021, 11(1): 23376. doi: 10.1038/s41598-021-02779-x.
    [4] ASGHAR M S, ARSLAN S, and KIM H. A low-power spiking neural network chip based on a compact LIF neuron and binary exponential charge injector synapse circuits[J]. Sensors, 2021, 21(13): 4462. doi: 10.3390/s21134462.
    [5] ROY K, JAISWAL A, and PANDA P. Towards spike-based machine intelligence with neuromorphic computing[J]. Nature, 2019, 575(7784): 607–617. doi: 10.1038/s41586-019-1677-2.
    [6] HODGKIN A L and HUXLEY A F. A quantitative description of membrane current and its application to conduction and excitation in nerve[J]. The Journal of Physiology, 1952, 117(4): 500–544. doi: 10.1113/jphysiol.1952.sp004764.
    [7] ZHAN Qiugang, LIU Guisong, XIE Xiurui, et al. Effective transfer learning algorithm in spiking neural networks[J]. IEEE Transactions on Cybernetics, 2022, 52(12): 13323–13335. doi: 10.1109/TCYB.2021.3079097.
    [8] LUO Xiaoling, QU Hong, WANG Yuchen, et al. Supervised learning in multilayer spiking neural networks with spike temporal error backpropagation[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(12): 10141–10153. doi: 10.1109/TNNLS.2022.3164930.
    [9] JIANG Runhao, ZHANG Jie, YAN Rui, et al. Few-shot learning in spiking neural networks by multi-timescale optimization[J]. Neural Computation, 2021, 33(9): 2439–2472. doi: 10.1162/neco_a_01423.
    [10] XIE Xiurui, YU Bei, LIU Guisong, et al. Effective active learning method for spiking neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024: 1–10. doi: 10.1109/TNNLS.2023.3257333.
    [11] SENGUPTA A, YE Yuting, WANG R, et al. Going deeper in spiking neural networks: VGG and residual architectures[J]. Frontiers in Neuroscience, 2019, 13: 95. doi: 10.3389/fnins.2019.00095.
    [12] WU Yujie, DENG Lei, LI Guoqi, et al. Spatio-temporal backpropagation for training high-performance spiking neural networks[J]. Frontiers in Neuroscience, 2018, 12: 331. doi: 10.3389/fnins.2018.00331.
    [13] WU Yujie, DENG Lei, LI Guoqi, et al. Direct training for spiking neural networks: Faster, larger, better[C]. The 33rd AAAI Conference on Artificial Intelligence, California, USA, 2019: 1311–1318. doi: 10.1609/aaai.v33i01.33011311.
    [14] VAN ROSSUM M C. A novel spike distance[J]. Neural Computation, 2001, 13(4): 751–763. doi: 10.1162/089976601300014321.
    [15] BOHTE S M, KOK J N, and LA POUTRÉ H. Error-backpropagation in temporally encoded networks of spiking neurons[J]. Neurocomputing, 2002, 48(1/4): 17–37. doi: 10.1016/S0925-2312(01)00658-0.
    [16] XIAO Mingqing, MENG Qingyan, ZHANG Zongpeng, et al. SPIDE: A purely spike-based method for training feedback spiking neural networks[J]. Neural Networks, 2023, 161: 9–24. doi: 10.1016/j.neunet.2023.01.026.
    [17] GUO Yufei, HUANG Xuhui, and MA Zhe. Direct learning-based deep spiking neural networks: A review[J]. Frontiers in Neuroscience, 2023, 17: 1209795. doi: 10.3389/fnins.2023.1209795.
    [18] ZENKE F and GANGULI S. SuperSpike: Supervised learning in multilayer spiking neural networks[J]. Neural Computation, 2018, 30(6): 1514–1541. doi: 10.1162/neco_a_01086.
    [19] ZHANG Wenrui and LI Peng. Temporal spike sequence learning via backpropagation for deep spiking neural networks[C]. The 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, 2020: 1008.
    [20] KRIZHEVSKY A, NAIR V, and HINTON G. The CIFAR-10 dataset[EB/OL]. https://www.cs.toronto.edu/~kriz/cifar.html, 2009.
    [21] ORCHARD G, JAYAWANT A, COHEN G K, et al. Converting static image datasets to spiking neuromorphic datasets using saccades[J]. Frontiers in Neuroscience, 2015, 9: 437. doi: 10.3389/fnins.2015.00437.
    [22] ZENKE F and VOGELS T P. The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks[J]. Neural Computation, 2021, 33(4): 899–925. doi: 10.1162/neco_a_01367.
    [23] TAVANAEI A, GHODRATI M, KHERADPISHEH S R, et al. Deep learning in spiking neural networks[J]. Neural Networks, 2019, 111: 47–63. doi: 10.1016/j.neunet.2018.12.002.
    [24] HUNSBERGER E and ELIASMITH C. Training spiking deep networks for neuromorphic hardware[EB/OL]. https://arxiv.org/abs/1611.05141, 2016.
Publication history
  • Received: 2023-07-15
  • Revised: 2024-03-12
  • Published online: 2024-04-09
  • Issue date: 2024-06-30
