Accelerating Spiking Neural Networks with Parallelizable Leaky Integrate-and-Fire Neurons
  • Sidi Yaya Arnaud Yarga, Université de Sherbrooke
  • Sean U N Wood, Université de Sherbrooke

Corresponding Author: [email protected]
Abstract

Spiking Neural Networks (SNNs) exhibit greater biological plausibility and excel at learning spatiotemporal features while consuming less energy than conventional Artificial Neural Networks (ANNs), particularly on neuromorphic hardware. The Leaky Integrate-and-Fire (LIF) neuron stands out as one of the most widely used spiking neurons in deep learning. However, its sequential information processing leads to slow training on lengthy sequences, presenting a critical challenge for real-world applications that rely on extensive datasets. This paper introduces the Parallelizable Leaky Integrate-and-Fire (ParaLIF) neuron, which accelerates SNNs by parallelizing their simulation over time, for both feedforward and recurrent architectures. Compared to LIF on neuromorphic speech, image and gesture classification tasks, ParaLIF runs up to 200 times faster and, on average, achieves greater accuracy with similar sparsity. Integrated into a state-of-the-art architecture, ParaLIF matches the highest accuracy reported in the literature on the Spiking Heidelberg Digits (SHD) dataset. These findings highlight ParaLIF as a promising approach for the development of rapid, accurate and energy-efficient SNNs, particularly well-suited for handling massive datasets containing long sequences.
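The key to this parallelization is that, once the hard reset coupling consecutive time steps is removed, leaky integration becomes a linear recurrence that can be evaluated for all time steps at once. Below is a minimal NumPy sketch of that idea, not the authors' implementation: the function names, the reset-free formulation and the convolution-based evaluation are illustrative assumptions.

    import numpy as np

    def lif_sequential(current, alpha=0.9, v_th=1.0):
        # Standard LIF: one time step at a time, with a hard reset
        # that makes step t depend on the spike emitted at step t-1.
        v, spikes = 0.0, np.zeros(len(current))
        for t, i_t in enumerate(current):
            v = alpha * v + i_t      # leaky integration
            if v >= v_th:            # fire, then reset the membrane
                spikes[t] = 1.0
                v = 0.0
        return spikes

    def lif_parallel(current, alpha=0.9, v_th=1.0):
        # Reset-free leaky integration: v[t] = sum_{k<=t} alpha^(t-k) * I[k],
        # a causal convolution evaluated for every time step simultaneously.
        T = len(current)
        kernel = alpha ** np.arange(T)
        v = np.convolve(current, kernel)[:T]
        return (v >= v_th).astype(float)   # threshold all steps in parallel

Because the convolution (or, equivalently, a parallel prefix scan) has no step-to-step data dependency, the whole sequence can be dispatched to the hardware at once, which is what makes the large speedups over sequential LIF simulation possible.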
Submitted to TechRxiv: 24 Feb 2024
Published in TechRxiv: 27 Feb 2024