A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware (2021)

Chenglong Zou, Xiaoxin Cui, Yisong Kuang, Kefei Liu, Yuan Wang, Xinan Wang, and Ru Huang
Institute of Microelectronics, Peking University, Beijing, China.

Artificial neural networks (ANNs), such as convolutional neural networks (CNNs), have achieved state-of-the-art results on many machine learning tasks. However, inference with large-scale full-precision CNNs incurs substantial energy consumption and memory occupation, which seriously hinders their deployment on mobile and embedded systems. Inspired by the biological brain, spiking neural networks (SNNs) are emerging as an alternative because of their natural aptitude for brain-like learning and their high energy efficiency through event-driven communication and computation. Nevertheless, training a deep SNN remains a major challenge, and there is usually a large accuracy gap between ANNs and SNNs. In this paper, we introduce a hardware-friendly conversion algorithm called "scatter-and-gather" to convert quantized ANNs into lossless SNNs in which neurons are connected with ternary {-1, 0, 1} synaptic weights. Each spiking neuron is stateless and closer to the original McCulloch-Pitts model: it fires at most one spike and is reset at every time step. Furthermore, we develop an incremental mapping framework for efficient network deployment on a reconfigurable neuromorphic chip. Experimental results show that our spiking LeNet on MNIST and VGG-Net on CIFAR-10 obtain 99.37% and 91.91% classification accuracy, respectively. In addition, the presented mapping algorithm manages network deployment on our neuromorphic chip with maximum resource efficiency and excellent flexibility. Our four-spike LeNet and VGG-Net on chip achieve real-time inference speeds of 0.38 ms/image and 3.24 ms/image, with average energy consumption of 0.28 mJ/image and 2.3 mJ/image, respectively, at 0.9 V and 252 MHz, which is nearly two orders of magnitude more efficient than traditional GPUs.
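To illustrate the kind of neuron the abstract describes, below is a minimal sketch of a stateless spiking layer with ternary weights: each neuron integrates the current step's input spikes, fires at most one spike if a threshold is reached, and discards (resets) its membrane state before the next step. This is only an illustrative reading of the abstract, not the paper's actual algorithm; the function name, the threshold value, and the random spike encoding are all hypothetical.

```python
import numpy as np

def stateless_spiking_layer(spikes_in, w_ternary, threshold=1.0):
    """One time step of a stateless spiking layer (illustrative sketch).

    spikes_in: binary vector of input spikes at this time step
    w_ternary: weight matrix with entries in {-1, 0, +1}
    Each neuron fires at most one spike per step; its membrane state is
    discarded after the step, as in the original McCulloch-Pitts model.
    """
    membrane = w_ternary @ spikes_in                 # integrate this step's inputs only
    return (membrane >= threshold).astype(np.int8)   # fire at most one spike, then reset

# Hypothetical example: present input spike trains over T = 4 time steps
# and gather the per-neuron output spike counts.
rng = np.random.default_rng(0)
w = rng.integers(-1, 2, size=(3, 5)).astype(np.int8)   # ternary {-1,0,1} weights
x = rng.integers(0, 2, size=(4, 5)).astype(np.int8)    # T x N binary input spikes
counts = sum(stateless_spiking_layer(x[t], w) for t in range(4))
```

Because the neuron holds no state across steps, a quantized activation can only be represented by the number of spikes gathered over the T steps, which is why the time window (four spikes per image in the reported results) bounds the achievable activation precision.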
