Volume 45, Issue 9, September 2023
Citation: GU Xiaofeng, GUAN Qidong, YU Zhiguo. Absolute Value Circuit for Tanh Activation Function in Computing in Memory[J]. Journal of Electronics & Information Technology, 2023, 45(9): 3350-3358. doi: 10.11999/JEIT221257

Absolute Value Circuit for Tanh Activation Function in Computing in Memory

doi: 10.11999/JEIT221257
Funds: The Joint Project of the Yangtze River Delta Community of Sci-Tech Innovation (2022CSJGG0400); The Fundamental Research Funds for the Central Universities (JUSRP51510); The Key R&D Program of Jiangsu Province (BE2019003-2)
  • Received Date: 2022-09-28
  • Revised Date: 2023-02-10
  • Available Online: 2023-02-16
  • Publish Date: 2023-09-27
Abstract: With Computing In Memory (CIM), analog implementation of the activation function allows neural networks to come closer to the ideal nonlinear model. However, the negative values of the Tanh function are difficult for CIM to process. A high-speed, high-precision absolute value circuit is proposed to solve this problem. The input voltage first passes through a comparator; a negative input is converted into a positive voltage by a proportional inverting amplifier and then delivered through a switch, realizing the absolute value operation on the discrete output of the function. Compared with traditional absolute value circuits based on diode full-wave rectification, the proposed circuit effectively avoids introducing diodes and offers higher speed, lower power consumption, and a smaller overall area. The circuit is designed in a 55 nm CMOS process. Simulation results show that, under a 50 ns operating clock period, the error between the output voltage of the absolute value circuit and the converted input voltage is kept within 1%, the comparator output delay is 5 ns, and the amplified voltage error near the zero point is less than 400 µV. At a 1.2 V supply voltage, the power consumption is 670 µW and the layout area is 4447 µm².
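
The signal path described in the abstract (a comparator for sign detection, a proportional inverting amplifier for the negative branch, and a switch for path selection) can be summarized with a short behavioral model. The Python sketch below is not part of the paper; it assumes an ideal comparator and switch and a unity-gain inverting path (the `inv_gain` parameter is hypothetical, since the actual amplifier ratio is set by the circuit's resistor network), purely to illustrate how the stage recovers |v_in| and how the odd symmetry of Tanh lets a positive-only activation branch reproduce the full function.

```python
import numpy as np

def abs_value_stage(v_in, comparator_threshold=0.0, inv_gain=1.0):
    """Behavioral model of the comparator + inverting-amplifier + switch stage.

    comparator_threshold and inv_gain are illustrative assumptions, not
    values taken from the paper.
    """
    v = np.asarray(v_in, dtype=float)
    is_negative = v < comparator_threshold   # comparator decides the sign
    v_inverted = -inv_gain * v               # proportional inverting-amplifier path
    # Switch: pass the input directly when non-negative, the inverted copy otherwise
    return np.where(is_negative, v_inverted, v)

def tanh_activation(v_in):
    """tanh is odd, so tanh(v) = sign(v) * tanh(|v|); after the absolute-value
    stage, only the non-negative branch of tanh has to be evaluated."""
    v = np.asarray(v_in, dtype=float)
    return np.sign(v) * np.tanh(abs_value_stage(v))

if __name__ == "__main__":
    samples = np.array([-0.6, -0.1, 0.0, 0.3, 1.0])
    print(abs_value_stage(samples))                 # [0.6 0.1 0.  0.3 1. ]
    print(np.allclose(tanh_activation(samples),
                      np.tanh(samples)))            # True
```

The check at the end simply confirms the identity tanh(v) = sign(v)·tanh(|v|) that motivates performing the absolute value operation before the activation.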
  • [1]
    SZE V, CHEN Y H, EMER J, et al. Hardware for machine learning: Challenges and opportunities[C]. 2017 IEEE Custom Integrated Circuits Conference (CICC), Austin, USA, 2017: 1–8.
    [2]
    KIM H, CHUNG J, SHIN K, et al. Live demonstration: A neural processor for AI acceleration[C]. 2021 IEEE International Symposium on Circuits and Systems (ISCAS), Daegu, Korea, 2021: 1.
    [3]
    ZHOU Yushan and LI Wenxin. Discovering of game AIs’ characters using a neural network based AI imitator for AI clustering[C]. 2020 IEEE Conference on Games (CoG), Osaka, Japan, 2020: 198–205.
    [4]
    顾晓峰, 刘彦航, 虞致国, 等. 一种面向基于闪存的脉冲卷积神经网络的模拟神经元电路[J]. 电子与信息学报, 2023, 45(1): 116–124. doi: 10.11999/JEIT211249

    GU Xiaofeng, LIU Yanhang, YU Zhiguo, et al. An analog neuron circuit for spiking convolutional neural networks based on flash array[J]. Journal of Electronics &Information Technology, 2023, 45(1): 116–124. doi: 10.11999/JEIT211249
    [5]
    MOONS B and VERHELST M. A 0.3–2.6 TOPS/W precision-scalable processor for real-time large-scale ConvNets[C]. 2016 IEEE Symposium on VLSI Circuits (VLSI-Circuits), Honolulu, USA, 2016: 1–2.
    [6]
    CHIU C C, SAINATH T N, WU Yonghui, et al. State-of-the-art speech recognition with sequence-to-sequence models[C]. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, Canada, 2018: 4774–4778.
    [7]
    TANG Tianqi, XIA Lixue, LI Boxun, et al. Binary convolutional neural network on RRAM[C]. 2017 22nd Asia and South Pacific Design Automation Conference (ASP-DAC), Chiba, Japan, 2017: 782–787.
    [8]
    ZHANG Jintao, WANG Zhuo, and VERMA N. In-memory computation of a machine-learning classifier in a standard 6T SRAM array[J]. IEEE Journal of Solid-State Circuits, 2017, 52(4): 915–924. doi: 10.1109/JSSC.2016.2642198
    [9]
    BISWAS A and CHANDRAKASAN A P. Conv-RAM: An energy-efficient SRAM with embedded convolution computation for low-power CNN-based machine learning applications[C]. 2018 IEEE International Solid - State Circuits Conference-(ISSCC), San Francisco, USA, 2018: 488–490.
    [10]
    XUE Chengxin, CHEN Weihao, LIU J Y, et al. 24.1 A 1Mb multibit ReRAM computing-in-memory macro with 14.6ns parallel MAC computing time for CNN based AI edge processors[C]. 2019 IEEE International Solid- State Circuits Conference (ISSCC), San Francisco, USA, 2019: 388–390.
    [11]
    WU T F, LI Haitong, HUANG Pingchen, et al. Brain-inspired computing exploiting carbon nanotube FETs and resistive RAM: Hyperdimensional computing case study[C]. 2018 IEEE International Solid - State Circuits Conference (ISSCC), San Francisco, USA, 2019: 492–494.
    [12]
    GUO Xinjie, BAYAT F M, BAVANDPOUR M, et al. Fast, energy-efficient, robust, and reproducible mixed-signal neuromorphic classifier based on embedded NOR flash memory technology[C]. 2017 IEEE International Electron Devices Meeting (IEDM), San Francisco, USA, 2017: 6.5. 1–6.5. 4.
    [13]
    YAN Bonan, YANG Qing, CHEN Weihao, et al. RRAM-based spiking nonvolatile computing-in-memory processing engine with precision-configurable in situ nonlinear activation[C]. 2019 Symposium on VLSI Technology, Kyoto, Japan, 2019: T86–T87.
    [14]
    VEIRE L V, DE BOOM C, and DE BIE T. Sigmoidal NMFD: Convolutional NMF with saturating activations for drum mixture decomposition[J]. Electronics, 2021, 10(3): 284. doi: 10.3390/electronics10030284
    [15]
    SHAMSI J, AMIRSOLEIMANI A, MIRZAKUCHAKI S, et al. Hyperbolic tangent passive resistive-type neuron[C]. 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, Portugal, 2015: 581–584.
    [16]
    SHAKIBA F M and ZHOU Mengchu. Novel analog implementation of a hyperbolic tangent neuron in artificial neural networks[J]. IEEE Transactions on Industrial Electronics, 2021, 68(11): 10856–10867. doi: 10.1109/TIE.2020.3034856
    [17]
    LIU Feng, ZHANG Bowen, CHEN Gang, et al. A novel configurable high-precision and low-cost circuit design of sigmoid and tanh activation function[C]. 2021 IEEE International Conference on Integrated Circuits, Technologies and Applications (ICTA), Zhuhai, China, 2021: 222–223.
    [18]
    HAENSCH W, GOKMEN T, and PURI R. The next generation of deep learning hardware: Analog computing[J]. Proceedings of the IEEE, 2019, 107(1): 108–122. doi: 10.1109/JPROC.2018.2871057
    [19]
    JOUBERT A, BELHADJ B, TEMAM O, et al. Hardware spiking neurons design: Analog or digital[C]. The 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, Australia, 2012: 1–5.
    [20]
    KUMNGERN M. Absolute value circuit for biological signal processing applications[C]. 2013 4th International Conference on Intelligent Systems, Modelling and Simulation, Bangkok, Thailand, 2013: 601–604.
    [21]
    IRANMANESH S, RAIKOS G, ZHOU Jiang, et al. CMOS implementation of a low power absolute value comparator circuit[C]. 2016 14th IEEE International New Circuits and Systems Conference (NEWCAS), Vancouver, Canada, 2016: 1–4.