Abstract
Training biologically inspired spiking neural networks (SNNs) is challenging compared to training artificial neural networks (ANNs). This paper analyzes the learning dynamics of time-to-first-spike (TTFS) networks and identifies a vanishing-or-exploding gradient problem. A specific TTFS network parameterization (identity mapping) ensures equivalent training trajectories between SNNs and ANNs with rectified linear units (ReLUs). Deep SNN models are trained on MNIST and Fashion-MNIST and fine-tuned on CIFAR10, CIFAR100, and PLACES365, achieving performance comparable to that of ANNs while emitting fewer than 0.3 spikes per neuron. Fine-tuning with a robust gradient descent algorithm further optimizes for low latency and resilience to hardware constraints.
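To make the ReLU-to-TTFS correspondence concrete, below is a minimal illustrative sketch, not the paper's exact parameterization or notation: it uses the closed-form first-spike time of an integrate-and-fire neuron with constant-current synapses, assumes unit-length non-overlapping time windows per layer, and the function and variable names (`ttfs_spike_time`, `a_in`, `w`, `b`) are hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def ttfs_spike_time(a_in, w, b):
    """Closed-form first-spike time of a TTFS neuron with constant-current
    (linearly integrating) synapses -- an illustrative simplification.

    Activations a_in in [0, 1] are encoded as spike times t_j = 1 - a_j,
    so larger values spike earlier within the input window [0, 1]. An
    auxiliary input arriving at t = 1 with weight 1 - sum(w) fixes the
    membrane slope to 1 for t >= 1, and a threshold of 1 - b then yields
    the spike time t_out = 2 - (w . a_in + b). The closed form assumes
    the threshold is crossed only after all inputs have arrived, which a
    layered, non-overlapping time-window scheme can enforce.
    """
    z = w @ a_in + b
    t_out = 2.0 - z
    # A neuron that would fire after its output window [1, 2] closes is
    # clamped to the window boundary (equivalently, it stays silent);
    # this clamping is what realizes the ReLU nonlinearity.
    return min(t_out, 2.0)

# Numerical check: decoding the spike time recovers the ReLU activation.
rng = np.random.default_rng(0)
for _ in range(5):
    a = rng.uniform(0.0, 1.0, size=4)   # input activations
    w = rng.normal(0.0, 1.0, size=4)    # synaptic weights
    b = rng.normal(0.0, 0.5)            # bias
    a_out = 2.0 - ttfs_spike_time(a, w, b)
    assert np.isclose(a_out, relu(w @ a + b))
```

Because each layer's output spike times fall in a fixed window, they serve directly as the temporal code for the next layer, which is what lets the SNN's forward pass, and hence its gradients, track those of the equivalent ReLU network.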
Publisher
Nature Communications
Published On
Aug 09, 2024
Authors
Ana Stanojevic, Stanisław Woźniak, Guillaume Bellec, Giovanni Cherubini, Angeliki Pantazi, Wulfram Gerstner
Tags
spiking neural networks
time-to-first-spike
learning dynamics
deep learning
low latency
gradient descent
hardware constraints