Biology
Introducing the Dendrify framework for incorporating dendrites to spiking neural networks
M. Pagkalos, S. Chavlis, P. Poirazi
Discover how Dendrify, an innovative Python package developed by Michalis Pagkalos, Spyridon Chavlis, and Panayiota Poirazi, enhances our understanding of spiking neural networks by incorporating the vital role of dendrites. This exciting research makes it possible to explore dendritic contributions efficiently while advancing neuromorphic systems.
~3 min • Beginner • English
Introduction
The study addresses how to incorporate biologically realistic dendritic computations into efficient spiking neural network (SNN) models. While SNNs are widely used for neuroscience and neuromorphic computing, they typically employ point integrate-and-fire neurons that omit dendrites, missing key nonlinear, location-dependent, and multi-timescale processing capabilities provided by dendritic mechanisms. Conversely, morphologically detailed neuron models capture dendritic computations but are computationally prohibitive for large networks. The paper introduces Dendrify, a framework to generate reduced, compartmental neuron models that retain essential dendritic and synaptic properties, thereby enabling investigation of dendritic contributions to network-level functions with manageable computational cost.
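For readers unfamiliar with the baseline the paper improves on, a point leaky integrate-and-fire neuron can be sketched in a few lines of Python. This is a generic textbook model with illustrative parameters, not code from Dendrify:

```python
def simulate_lif(i_ext, dt=0.1, t_max=100.0, tau=10.0, v_rest=-70.0,
                 v_th=-50.0, v_reset=-65.0, r_m=10.0):
    """Forward-Euler simulation of a point leaky integrate-and-fire neuron.

    Membrane equation: dv/dt = (-(v - v_rest) + r_m * i_ext) / tau,
    with spike-and-reset at v_th. Units are illustrative (mV, ms, MOhm, nA).
    """
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau
        if v >= v_th:            # threshold crossing: record spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Subthreshold drive (steady state -60 mV) yields no spikes;
# stronger drive (steady state -40 mV) yields repetitive firing.
quiet = simulate_lif(i_ext=1.0)
active = simulate_lif(i_ext=3.0)
```

The whole neuron is a single voltage variable: there is no notion of dendritic location, attenuation, or local nonlinearity, which is exactly the gap Dendrify targets.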
Literature Review
Prior work shows dendrites support local regenerative events (dendritic spikes via Na+/Ca2+ channels and NMDA receptors) and complex computations such as coincidence detection, filtering, segregation/amplification, and nonlinear operations. Synaptic arrangement along dendrites critically shapes responses, with location-dependent effects of inhibition and clustering of excitatory synapses facilitating dendritic spikes and increasing computational capacity. Morphology and passive properties determine electrotonic structure and filtering. Simplified models capturing essential electrophysiology have demonstrated network-level benefits from dendrites: improved associative learning, enhanced pattern separation, binding/linking of information, and increased memory capacity. In ANNs, adding dendritic nodes can reduce trainable parameters and improve continual learning in neuro-inspired systems. Traditional frameworks for dendritic modeling often rely on complex Hodgkin–Huxley (HH) equations with many parameters, limiting adoption in SNNs. Existing simulators (NEURON, GENESIS, NEST, Brian 2, Arbor) typically implement active dendrites via HH/NMODL/NESTML, posing usability barriers. Brian 2 includes multicompartment models but lacks straightforward network-level implementation. This context motivates a phenomenological, Python-centric approach that balances realism and efficiency.
Methodology
The authors develop Dendrify, a Python package built on Brian 2 that auto-generates reduced compartmental neuron models via simple commands. It includes a library of premade mechanisms and supports user-defined equations. Key elements: (1) phenomenological, event-driven models of dendritic spikes (dSpikes), e.g., Na+ and partially Ca2+, that avoid the HH formalism and instead use threshold-triggered conductances with exponential dynamics and refractory periods; (2) synaptic mechanisms (AMPA and NMDA) placed on specified dendritic compartments; (3) cable coupling via coupling conductances computed from morphological parameters using absolute or half-cylinder formulas; (4) a guide for constructing reduced morphologies that preserve electrotonic properties and pathway segregation.
The methodology is demonstrated through four modeling paradigms:
- Example 1: a 3-compartment neuron (soma: leaky I&F; two passive dendrites) receiving AMPA/NMDA synapses and background noise.
- Example 2: a 4-compartment neuron (soma plus apical trunk, proximal, and distal segments) with Na+-type VGICs for local dSpikes and AMPA/NMDA synapses on specified branches.
- Example 3: a biologically constrained CA1 pyramidal cell (PC) with 9 segments reflecting anatomical layers and EC/CA3 pathway segregation, calibrated to match electrophysiological metrics (τm, Rin, sag, F–I), somatodendritic attenuation, synaptic attenuation vs. distance, BPAPs, and nonlinear dendritic integration.
- Example 4: a pool of 10,000 CA1 PCs receiving independent Poisson EC and CA3 inputs mapped to distinct dendritic segments, probing coincidence detection with dSpikes ON/OFF across a grid of input intensities.
Scalability analysis: three test cases (passive dendrites; active dendrites; active dendrites with ~50 recurrent synapses per neuron) were run on a laptop and an iPad, measuring combined build and 1 s simulation times for N ranging from 10^2 to 10^5.
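The event-driven dSpike idea can be caricatured in plain Python: a threshold crossing outside a refractory window instantly turns on Na+-like and K+-like conductances that then decay exponentially. This is a hypothetical sketch with made-up parameters (and a passive soma, for brevity), not Dendrify's calibrated implementation, which runs inside Brian 2:

```python
import math

def simulate_two_compartment(i_dend, dt=0.1, t_max=50.0):
    """Passive soma coupled to one dendrite with an event-driven dSpike:
    crossing the dendritic threshold outside the refractory window triggers
    an instantaneous Na+-like conductance plus a K+-like conductance, both
    decaying exponentially (phenomenological, no HH gating variables).
    All parameters are illustrative, not Dendrify's calibrated values.
    """
    e_l, e_na, e_k = -70.0, 70.0, -89.0   # reversal potentials (mV)
    g_l, g_c = 1.0, 0.5                   # leak and axial coupling conductance
    tau_na, tau_k = 1.0, 3.0              # conductance decay constants (ms)
    v_th_d, refractory = -40.0, 5.0       # dSpike threshold (mV), refractory (ms)
    v_s = v_d = e_l
    g_na = g_k = 0.0
    last_dspike = -1e9
    dspike_times = []
    for step in range(int(t_max / dt)):
        t = step * dt
        g_na *= math.exp(-dt / tau_na)    # exponential decay of triggered
        g_k *= math.exp(-dt / tau_k)      # conductances between events
        dv_d = (g_l * (e_l - v_d) + g_na * (e_na - v_d) + g_k * (e_k - v_d)
                + g_c * (v_s - v_d) + i_dend)
        dv_s = g_l * (e_l - v_s) + g_c * (v_d - v_s)
        v_d += dt * dv_d
        v_s += dt * dv_s
        if v_d >= v_th_d and (t - last_dspike) >= refractory:
            g_na += 2.0                   # threshold-triggered depolarizing rise
            g_k += 1.0                    # delayed-rectifier caricature
            last_dspike = t
            dspike_times.append(t)
    return dspike_times, (v_s, v_d)
```

Strong dendritic drive produces repeated dSpikes separated by at least the refractory period; weak drive produces none, so the compartment behaves passively.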
The Methods section further details the somatic leaky I&F model with conductance-based adaptation and a modified spike/reset rule that mimics a realistic AP shape, the event-based dendritic Na+ and delayed K+ currents, the axial coupling computations, and the validation steps for reduced models (passive/active parameter fitting and dendritic integration tests).
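For the axial coupling computation, the half-cylinder approach mentioned above treats the resistance between two compartment centers as the two half-cylinder axial resistances in series. A sketch of that standard compartmental-modeling formula (the function name and unit choices are ours, not Dendrify's API):

```python
import math

def half_cylinder_coupling(r_a, length1_um, diam1_um, length2_um, diam2_um):
    """Coupling conductance between two cylindrical compartments via the
    half-cylinder approximation: series resistance of each half-cylinder.

    r_a: specific axial resistivity in ohm*cm; lengths/diameters in um.
    Returns the coupling conductance in siemens.
    """
    r_a_ohm_um = r_a * 1e4  # ohm*cm -> ohm*um (1 cm = 1e4 um)

    def half_axial_resistance(length_um, diam_um):
        area = math.pi * (diam_um / 2.0) ** 2          # cross-section, um^2
        return r_a_ohm_um * (length_um / 2.0) / area   # ohms

    r_total = (half_axial_resistance(length1_um, diam1_um)
               + half_axial_resistance(length2_um, diam2_um))
    return 1.0 / r_total

# e.g., 150 ohm*cm axial resistivity, two 100 um long x 2 um thick segments
g = half_cylinder_coupling(150.0, 100.0, 2.0, 100.0, 2.0)
```

With these illustrative dimensions the result lands in the tens-of-nanosiemens range, the typical magnitude for coupling conductances in reduced models.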
Key Findings
- Reduced compartmental models reproduce essential dendritic properties absent in point neurons. Example 1 shows electrical segregation and attenuation across compartments and location-dependent synaptic integration; NMDA receptors induce supralinear summation, and blocking NMDA switches integration to sublinear in a dendrite-specific manner.
- Example 2 (active dendrites) demonstrates local Na+ dSpikes with location-dependent thresholds and propagation: distal dSpike generation requires ~2.5× less current than proximal generation because of the distal branch's higher input resistance, and dSpikes attenuate and broaden as they propagate to the soma. Input-output curves on distal/proximal branches are sigmoidal with supralinear regions; turning off Na+ dSpikes yields sublinear integration. The model also generates backpropagating action potentials (BPAPs) from somatic spiking, producing distal spikelets, consistent with experiments.
- Example 3 (CA1 PC) reproduces passive properties (τm, Rin, sag ratio, somatodendritic attenuation), F–I curves aligning with superficial/deep CA1b data, distance-dependent synaptic attenuation consistent with experiments and optimized biophysical models, BPAPs during current steps, and nonlinear dendritic integration on oblique branches (sigmoidal I–O curves and somatic dV/dt signatures).
- Example 4 (CA1 pool, coincidence detection): with EC+CA3 coactivation and dSpikes ON, ~80% of neurons fire at least one somatic spike; with dSpikes OFF, the active fraction drops to ~10% (a decrease of ~70 percentage points), and all remaining active neurons fire only a single spike. Across 121 EC/CA3 input combinations (rates 50–150% of baseline), dSpike deactivation reduces mean firing rates by ~40–100%, often silencing low-activity cases. dSpikes significantly shorten inter-spike intervals, increasing temporal precision without inducing somatic bursting (no Ca2+ plateaus are modeled).
- Scalability: for 1 s simulations, networks of N ≤ 10^3 complete in ≤4 s even with recurrence; N = 10^4 takes ~11 s and N = 10^5 ~101 s on a laptop. On an iPad, runtimes were faster than on a Linux laptop for N < 10^4. Performance degrades with added mechanisms and recurrence but remains practical, indicating suitability for sizable networks.
Overall, Dendrify balances biological realism, flexibility, and computational efficiency, enabling dendrite-aware SNNs and neuromorphic exploration.
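The supralinear/sublinear labels used throughout these findings come from the standard expected-vs-measured analysis: each compound response is compared with the arithmetic sum of unitary responses. A minimal illustration with made-up EPSP amplitudes (not data from the paper):

```python
def integration_mode(measured_epsps, unitary_epsp, tolerance=0.05):
    """Classify dendritic integration by comparing measured compound EPSPs
    with the expected linear sum of unitary EPSPs.

    measured_epsps[i] is the response to i+1 simultaneous inputs.
    Returns one label per point: 'supralinear', 'sublinear', or 'linear'.
    """
    labels = []
    for n_inputs, measured in enumerate(measured_epsps, start=1):
        expected = n_inputs * unitary_epsp       # arithmetic (linear) sum
        ratio = measured / expected
        if ratio > 1.0 + tolerance:
            labels.append('supralinear')
        elif ratio < 1.0 - tolerance:
            labels.append('sublinear')
        else:
            labels.append('linear')
    return labels

# NMDA-like boost at higher input counts vs. passive saturation
nmda_like = integration_mode([1.0, 2.6, 4.8, 7.0], unitary_epsp=1.0)
passive = integration_mode([1.0, 1.7, 2.2, 2.6], unitary_epsp=1.0)
```

The first series crosses into the supralinear regime as inputs accumulate (the NMDA/dSpike signature), while the second saturates sublinearly, as passive dendrites do.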
Discussion
The results demonstrate that Dendrify enables inclusion of key dendritic computations in efficient SNN models, directly addressing the gap between biologically detailed but costly morphologies and simplistic point-neuron SNNs. By employing event-driven phenomenological dSpike mechanisms and reduced morphologies that preserve pathway segregation and electrotonic structure, the framework reproduces: electrical compartmentalization, supralinear/sublinear integration modes, branch-specific nonlinear thresholds shaped by local geometry, and BPAPs. In a biologically grounded CA1 model and network, dendritic Na+ spikes critically govern coincidence detection of EC and CA3 inputs, control mean firing rates and temporal output precision, and expand computational capabilities beyond point neurons. Scalability tests indicate feasibility for large simulations on commodity hardware, suggesting applicability to research and education. The findings support the hypothesis that incorporating dendritic heterogeneity in SNNs can enhance learning, memory capacity, and credit assignment, bridging biological mechanisms and artificial computation while remaining compatible with neuromorphic implementations.
Conclusion
The paper introduces Dendrify, a theoretical and software framework that automates construction of reduced compartmental neuron models with biologically meaningful dendritic and synaptic properties in Brian 2. Through four exemplar models and a scalability study, the authors show that Dendrify reproduces hallmark dendritic phenomena (supralinear integration, BPAPs, branch-specific thresholds) and complex pathway interactions in CA1, while remaining computationally efficient up to networks of 10^5 neurons. Contributions include an event-driven, phenomenological dSpike mechanism, a practical guide to reduced-model design/validation, and demonstrations of network-level consequences of dendritic processing. Future work could extend supported ion channel types (e.g., fuller Ca2+ dynamics and other VGICs), incorporate built-in synaptic plasticity rules, improve numerical methods and accuracy for larger compartment counts, and explore deployment on neuromorphic hardware to exploit dendritic computing for low-power AI.
Limitations
- Spatial resolution: reduced compartmental models cannot match detailed morphologies for fine-grained dendritic structure or synaptic placement; they approximate key regions with a few large compartments.
- Numerical integration: the current reliance on Brian's explicit methods limits the number of compartments before numerical inaccuracies arise; small time steps (e.g., ≤0.1 ms) mitigate the issue for few-compartment models.
- Phenomenological spikes: event-based models do not capture some HH-dependent phenomena (e.g., depolarization block under strong currents, or the reduced BPAP efficiency during prolonged activity).
- Mechanism coverage: the current version supports Na+ and, partially, Ca2+ VGICs; other channel types are not yet included.
- Plasticity: synaptic plasticity rules must be implemented manually with Brian 2 objects; they are not built into Dendrify.
- Biological variability: a single reduced model cannot capture the full range of CA1 PC morphological and electrophysiological diversity; multiple specialized reduced models may be needed for different feature sets.
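The numerical-integration caveat can be made concrete. For a passive membrane dv/dt = -v/τ, forward (explicit) Euler is stable only when dt < 2τ; adding compartments introduces much faster effective time constants, which is why small time steps are needed. A toy demonstration (generic numerics, unrelated to Dendrify's code):

```python
def euler_decay(dt, tau=1.0, v0=1.0, steps=200):
    """Forward-Euler integration of dv/dt = -v/tau.
    Each step multiplies v by (1 - dt/tau), which is stable only when
    |1 - dt/tau| < 1, i.e., dt < 2*tau; beyond that the numerical
    solution oscillates with growing amplitude instead of decaying.
    """
    v = v0
    for _ in range(steps):
        v += dt * (-v / tau)
    return v

stable = euler_decay(dt=0.1)    # |1 - 0.1| = 0.9 per step: decays to ~0
unstable = euler_decay(dt=2.5)  # |1 - 2.5| = 1.5 per step: blows up
```

The same mechanism, applied to the stiff coupled equations of many-compartment models, is what forces the small (≤0.1 ms) time steps noted above.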