Training biologically inspired spiking neural networks (SNNs) is challenging compared to training artificial neural networks (ANNs). This paper analyzes the learning dynamics of time-to-first-spike (TTFS) networks and identifies a vanishing-or-exploding gradient problem. A specific parameterization of TTFS networks, the identity mapping, guarantees that an SNN follows the same training trajectory as an ANN with rectified linear units (ReLUs). Deep SNN models are trained on MNIST and Fashion-MNIST and fine-tuned on CIFAR10, CIFAR100, and PLACES365, achieving performance comparable to ANNs with fewer than 0.3 spikes per neuron. Fine-tuning with a robust gradient descent algorithm further optimizes the networks for low latency and resilience to hardware constraints.
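The core idea behind the identity mapping can be illustrated with a small numerical sketch. The Python example below is an assumption-laden illustration, not the paper's exact construction: the timing constants, the linear spike-time encoding, and the layer parameterization (`T1`, `T2`, `encode`, `decode`, `ttfs_layer`) are all hypothetical choices made for this sketch. It builds one TTFS layer whose closed-form spike times reproduce the outputs of a ReLU layer, with "no spike within the readout window" playing the role of ReLU's zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative timing constants (assumptions of this sketch, not the paper's
# exact values): input spikes arrive in [0, T1], output spikes in [T1, T2],
# and an earlier spike encodes a larger value.
T1, T2 = 1.0, 2.0

def encode(a):
    """Map activations in [0, 1] to input spike times in [0, T1]."""
    return T1 * (1.0 - a)

def decode(t):
    """Map output spike times back to activations (no spike by T2 -> 0)."""
    return np.maximum(T2 - t, 0.0)

def ttfs_layer(t_in, W, b):
    """Closed-form spike times of one TTFS layer under an
    identity-mapping-style parameterization (an assumption of this sketch).

    Each input spike i contributes a linear ramp W[j, i] * (t - t_in[i]) to
    neuron j's membrane potential once it arrives. Adding a bias ramp of
    slope (1 - sum_i W[j, i]) that starts at T1 makes the total slope after
    T1 exactly 1, and setting the threshold to (T2 - T1) - b[j] makes the
    threshold-crossing time t_j = T2 - (W @ a + b): the spike time is an
    affine function of the ReLU pre-activation, and firing later than T2
    (i.e., staying silent in the readout window) corresponds to ReLU's zero.
    """
    z = W @ ((T1 - t_in) / T1) + b        # pre-activation recovered from spike times
    t_out = T2 - z                        # closed-form threshold crossing
    return np.clip(t_out, T1, T2)         # silence by T2 encodes activation 0

# Sanity check: the TTFS layer matches a ReLU layer (up to clipping at 1,
# which the finite readout window imposes).
a_in = rng.uniform(0.0, 1.0, size=4)
W = rng.normal(0.0, 0.4, size=(3, 4))
b = rng.normal(0.0, 0.1, size=3)

relu_out = np.minimum(np.maximum(W @ a_in + b, 0.0), 1.0)
ttfs_out = decode(ttfs_layer(encode(a_in), W, b))
print(np.allclose(relu_out, ttfs_out))    # True
```

Because the spike time is an affine function of the pre-activation, gradients with respect to the weights are the same in both networks, which is the sense in which such a parameterization lets SNN and ReLU training trajectories coincide.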
Publisher: Nature Communications
Published On: Aug 09, 2024
Authors: Ana Stanojevic, Stanisław Woźniak, Guillaume Bellec, Giovanni Cherubini, Angeliki Pantazi, Wulfram Gerstner
Tags: spiking neural networks, time-to-first-spike, learning dynamics, deep learning, low latency, gradient descent, hardware constraints