Citation: LU Di, WANG Zhen Fa. Design of a CNN Accelerator Based on Systolic Array Collaboration with Inter-Layer Fusion[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250867

Design of a CNN Accelerator Based on Systolic Array Collaboration with Inter-Layer Fusion

doi: 10.11999/JEIT250867 cstr: 32379.14.JEIT250867
  • Accepted Date: 2025-12-29
  • Rev Recd Date: 2025-12-29
  • Available Online: 2026-01-05
Objective  With the rapid deployment of deep learning in edge computing, the demand for efficient Convolutional Neural Network (CNN) accelerators has become increasingly urgent. Although traditional CPUs and GPUs provide strong computational power, they suffer from high power consumption, large latency, and limited scalability in real-time embedded scenarios. FPGA-based accelerators, owing to their reconfigurability and parallelism, present a promising alternative. However, existing implementations often face challenges such as low resource utilization, memory access bottlenecks, and difficulty in balancing throughput with energy efficiency. To address these issues, this paper proposes a systolic array–based CNN accelerator with layer-fusion optimization, combined with an enhanced memory hierarchy and a computation scheduling strategy. By designing hardware-oriented convolution mapping methods and employing lightweight quantization schemes, the proposed accelerator improves computational efficiency and reduces resource consumption while meeting real-time inference requirements, making it suitable for complex application scenarios such as intelligent surveillance and autonomous driving.

Methods  This paper addresses critical challenges commonly observed in FPGA-based CNN accelerators, including data transfer bottlenecks, insufficient resource utilization, and low processing-element efficiency. We propose a hybrid CNN accelerator architecture based on systolic array–assisted layer fusion, in which computation-intensive adjacent layers are deeply bound and executed consecutively within the same systolic array. This design reduces off-chip memory accesses for intermediate results, lowers data transfer overhead and power consumption, and improves both computation speed and overall energy efficiency. A dynamically reconfigurable systolic array method is further developed to provide hardware-level adaptability for matrix multiplications of varying dimensions, avoiding the resource waste of deploying dedicated hardware for each computation scale, reducing overall FPGA logic resource consumption, and enhancing the adaptability and flexibility of hardware resources. In addition, a streaming systolic array computation scheme is introduced through carefully orchestrated computation flow and control logic, ensuring that processing elements within the systolic array remain in a high-efficiency working state. Data flows continuously through the computation engine in a highly pipelined and parallel manner, improving the utilization of internal processing units, reducing idle cycles, and ultimately enhancing overall throughput.

Results and Discussions  To explore the optimal quantization precision of neural network models, experiments were conducted on the MNIST dataset using two representative architectures, VGG16 and ResNet50, under fixed-point quantization with 12-bit, 10-bit, 8-bit, and 6-bit precision. The results (Table 1) indicate that when the quantization bit width falls below 8 bits, inference accuracy drops significantly, suggesting that excessively low precision severely compromises the representational capacity of the model. On the proposed accelerator architecture, VGG16, ResNet50, and YOLOv8n achieved peak computational performance of 390.25 GOPS, 360.27 GOPS, and 348.08 GOPS, respectively.
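As a concrete reference point for the fixed-point study, the sketch below simulates symmetric per-tensor quantization at the bit widths tested (12, 10, 8, and 6). The rounding and scale-selection choices are illustrative assumptions; the paper's exact quantization scheme is not specified in the abstract.

```python
import numpy as np

def quantize_fixed_point(x: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric per-tensor fixed-point quantization (illustrative only;
    the paper's actual scheme is not detailed in the abstract)."""
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for 8-bit
    scale = np.max(np.abs(x)) / qmax     # map the largest magnitude to qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)  # integer codes
    return q * scale                     # dequantize for error comparison

# Compare representational error at the bit widths studied in the paper.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=10_000)  # toy weight distribution
for bits in (12, 10, 8, 6):
    err = np.mean(np.abs(weights - quantize_fixed_point(weights, bits)))
    print(f"{bits:2d}-bit mean abs error: {err:.6f}")
```

The error grows roughly fourfold per two bits removed, which is consistent with the observation that accuracy degrades sharply once the bit width falls below 8.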
To comprehensively evaluate the performance advantages of the proposed accelerator, comparisons were made with FPGA accelerator designs reported in the existing literature, as summarized in Table 4. Table 5 further compares the proposed accelerator with conventional CPU and GPU platforms in terms of performance and energy efficiency. When accelerating VGG16, ResNet50, and YOLOv8n, the proposed accelerator achieved computational throughput 1.76×, 3.99×, and 2.61× higher than that of the corresponding CPU platforms, demonstrating performance improvements unattainable by general-purpose processors. In terms of energy efficiency, it achieved improvements of 3.1× (VGG16), 2.64× (ResNet50), and 2.96× (YOLOv8n) over GPU platforms, highlighting its superior energy utilization.

Conclusions  This paper proposes a systolic array–assisted layer-fusion CNN accelerator architecture. First, a theoretical analysis of the accelerator's computational density is conducted, demonstrating the performance advantages of the proposed design. Second, to address the design challenge arising from the variability of local convolution window sizes in the second fused layer, a novel dynamically reconfigurable systolic array method is introduced. Furthermore, to enhance overall computational efficiency, a streaming systolic array scheme is developed in which data flows continuously through the computation engine in a highly pipelined and parallel manner; this reduces idle cycles within the systolic array and improves the overall throughput of the accelerator. Experimental results show that the proposed accelerator achieves high throughput with minimal loss in inference accuracy: peak performance of 390.25 GOPS, 360.27 GOPS, and 348.08 GOPS for VGG16, ResNet50, and YOLOv8n, respectively. Compared with traditional CPU and GPU platforms, the proposed design exhibits superior energy efficiency, demonstrating that the architecture is well suited to resource-constrained, energy-sensitive scenarios such as edge computing.
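The central dataflow idea behind the layer fusion described above, executing two adjacent convolution layers tile by tile so the intermediate feature map never leaves on-chip buffers, can be illustrated with a minimal sketch. The single-channel 3×3 valid convolutions, the tile size, and the function names are assumptions for illustration, not the paper's actual hardware design.

```python
import numpy as np

def conv3x3(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Valid 3x3 convolution, single channel (illustrative)."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)
    return out

def fused_two_layer(x, w1, w2, tile=8):
    """Run two 3x3 conv layers per output tile, keeping the intermediate
    tile in a local ('on-chip') buffer instead of materializing the full
    layer-1 feature map in external ('off-chip') memory."""
    H, W = x.shape
    H2, W2 = H - 4, W - 4          # output size after two valid 3x3 convs
    out = np.zeros((H2, W2))
    for ti in range(0, H2, tile):
        for tj in range(0, W2, tile):
            th = min(tile, H2 - ti)
            tw = min(tile, W2 - tj)
            # Layer-1 input halo needed for this output tile: (th+4) x (tw+4).
            x_tile = x[ti:ti + th + 4, tj:tj + tw + 4]
            mid_tile = conv3x3(x_tile, w1)        # stays in the local buffer
            out[ti:ti + th, tj:tj + tw] = conv3x3(mid_tile, w2)
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 32))
w1, w2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
# Fused tile-wise execution matches the unfused layer-by-layer result.
assert np.allclose(fused_two_layer(x, w1, w2), conv3x3(conv3x3(x, w1), w2))
```

In hardware terms, mid_tile corresponds to the on-chip buffer between the two fused layers: only input tiles are read from and output tiles written to external memory, which is the reduction in intermediate-result traffic that the Methods section attributes the energy and throughput gains to.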