Neuromorphic intermediate representation: A unified instruction set for interoperable brain-inspired computing

Computer Science

J. E. Pedersen, S. Abreu, et al.

This research was conducted by Jens E. Pedersen, Steven Abreu, Matthias Jobst, Gregor Lenz, Vittorio Fra, Felix Christian Bauer, Dylan Richard Muir, Peng Zhou, Bernhard Vogginger, Kade Heckel, Gianvito Urgese, Sadasivan Shankar, Terrence C. Stewart, Sadique Sheik, and Jason K. Eshraghian. It introduces the Neuromorphic Intermediate Representation (NIR), a common reference frame that captures hybrid continuous-time and event-driven computations, enabling reproducible, interoperable spiking neural network models across simulators and digital neuromorphic platforms.
Introduction

The paper addresses the challenge of interoperability and reproducibility in neuromorphic computing, where diverse simulators, software stacks, and hardware platforms implement neuron dynamics differently. Inspired by successes of intermediate representations (IRs) in deep learning (e.g., ONNX, MLIR, XLA, TVM), the authors propose a neuromorphic-specific IR that captures continuous-time dynamics natural to spiking systems. The core research aim is to define a unified, model-centric abstraction—Neuromorphic Intermediate Representation (NIR)—that standardizes computational primitives as hybrid systems (continuous dynamics with discrete events), enabling faithful, platform-agnostic model descriptions. The motivation is to decouple algorithm/model development from platform-specific constraints, facilitate cross-platform execution, lower entry barriers, and support comparative analysis across neuromorphic simulators and hardware.

Literature Review

The paper surveys prior efforts toward common interfaces and tooling in neuromorphic computing: PyNN provides simulator-independent SNN descriptions and connects to hardware like Spikey, BrainScaleS, and SpiNNaker; NeuroML offers serializable biological models emphasizing correctness. Frameworks such as Brian2 have bridges to GeNN and Loihi emulation. Higher-level neuromorphic libraries include Fugu (SNN graph IR), Lava (CSP-based processes mapped to Loihi 2), and Nengo (broad neuromorphic support including BrainDrop, Loihi, SpiNNaker). Additional conversion or interface toolchains include SNN-Toolbox, NxTF, and hxtorch.snn; abstractions spanning von Neumann and neuromorphic systems have been proposed, as have interfaces for crossbar arrays and specific neuron types. However, these standards remain confined to subdomains and typically assume discrete-time or runtime specifics, limiting their suitability for continuous-time neuromorphic dynamics. Compiler infrastructures like LLVM/MLIR target discrete machine-level instructions and do not directly model hybrid continuous-time systems. The authors thus position NIR as a modular, declarative, continuous-time IR, complementary to and interoperable with existing tools but tailored to neuromorphic computation.

Methodology

NIR design and specification: NIR represents computations as graphs of composable, model-centric computational primitives defined as hybrid continuous-time systems with discrete events. The authors formalize NIR via axioms: (1) computational primitives C are parameterized transformations from continuous-time inputs u(t) to outputs y(t), modeled as hybrid systems (differential equations and discrete transitions), each with named input/output ports and well-defined signal shapes; primitives can be composed into higher-order primitives. (2) Computations are NIR graphs G=(V,E) whose nodes are instantiated primitives with parameters, and edges connect ports; edges are identity maps, and multiple inputs to a port are summed element-wise by convention.
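The graph convention above — nodes as parameterized primitives, edges as identity maps, multiple inputs to a port summed element-wise — can be sketched in a few lines. This is an illustrative data structure, not the actual NIR library API; the class and node names (`Graph`, `Affine`) are placeholders:

```python
import numpy as np

class Affine:
    """An illustrative primitive: y = W u + b (one of NIR's base primitives)."""
    def __init__(self, weight, bias):
        self.weight, self.bias = np.asarray(weight), np.asarray(bias)
    def __call__(self, u):
        return self.weight @ u + self.bias

class Graph:
    """NIR-style graph: nodes keyed by name; edges are identity maps.
    Multiple edges into one node are summed element-wise, per the
    convention described in the paper."""
    def __init__(self, nodes, edges):
        self.nodes = nodes   # dict: name -> primitive instance
        self.edges = edges   # list of (src, dst) pairs

    def forward(self, inputs):
        # Evaluate nodes once all their predecessors are known
        # (assumes an acyclic graph for this sketch).
        values = dict(inputs)
        pending = [n for n in self.nodes if n not in values]
        while pending:
            for name in list(pending):
                srcs = [s for s, d in self.edges if d == name]
                if all(s in values for s in srcs):
                    fan_in = sum(values[s] for s in srcs)  # element-wise sum
                    values[name] = self.nodes[name](fan_in)
                    pending.remove(name)
        return values
```

A node with two incoming edges thus receives the element-wise sum of both source outputs before its own transformation is applied.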

Computational primitives: NIR defines 11 base primitives and 3 higher-order composites. Base primitives include Input, Output, Affine, Convolution, Delay, Flatten, Integrator, Leaky Integrator (LI), Linear, Scale, and Spike. Higher-order primitives are defined compositionally: IF (Integrator → Spike with reset), LIF (Linear → Leaky Integrator → Spike with reset), and CuBa-LIF (composition of LIF with additional Linear and LI). NIR remains declarative about idealized dynamics and does not mandate discretization schemes, allowing platforms to approximate computations within their constraints.
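Because NIR is declarative about idealized continuous-time dynamics, each backend chooses its own discretization. A minimal sketch of one such choice — a forward-Euler step of the LIF equation with subtractive reset — is shown below; the parameter names and the scheme itself are illustrative, not mandated by NIR:

```python
import numpy as np

def lif_step(v, u, dt, tau, r, v_leak, v_threshold):
    """One forward-Euler step of an idealized LIF ODE of the form
       tau * dv/dt = (v_leak - v) + r * u,
    followed by a spike and subtractive reset at threshold.
    This is one of many valid discretizations."""
    v = v + (dt / tau) * ((v_leak - v) + r * u)
    spike = v >= v_threshold
    v = np.where(spike, v - v_threshold, v)  # subtractive reset
    return v, spike

def simulate(us, dt=1e-3, tau=1e-2, r=1.0, v_leak=0.0, v_threshold=1.0):
    """Run a single neuron over an input sequence; return its spike train."""
    v, spikes = 0.0, []
    for u in us:
        v, s = lif_step(v, u, dt, tau, r, v_leak, v_threshold)
        spikes.append(bool(s))
    return spikes
```

A different platform might swap the order of leak and input integration, or use a zero reset instead — exactly the kind of variation the paper later identifies as a source of cross-platform mismatch.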

Platform interoperability and constraint modeling: Executability on hardware is expressed as constraints over NIR graphs (e.g., neuron counts, fan-in). Mapping involves subgraph isomorphism and pattern matching (e.g., recognizing LIF ⊳ Linear motifs for backend-specific constructs like snnTorch’s RLeaky). Approximations may substitute primitives when acceptable (e.g., replacing LIF with CuBa-LIF under limiting parameters). The paper discusses the theoretical complexity (NP-complete) of general subgraph matching and provides a naive example constraint checker for Xylo.
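A naive constraint checker in the spirit of the paper's Xylo example can be sketched as a pass over the graph; the specific limits (`max_neurons`, `max_fan_in`) below are illustrative placeholders, not Xylo's actual figures:

```python
def check_constraints(nodes, edges, max_neurons=1000, max_fan_in=64):
    """Naive hardware-constraint check over a NIR-style graph.
    nodes: dict name -> primitive-type string; edges: (src, dst) pairs.
    Returns a list of human-readable violations (empty means deployable)."""
    violations = []
    neuron_types = {"IF", "LIF", "CubaLIF"}
    # Constraint 1: total neuron count.
    n_neurons = sum(1 for t in nodes.values() if t in neuron_types)
    if n_neurons > max_neurons:
        violations.append(f"{n_neurons} neurons exceeds limit of {max_neurons}")
    # Constraint 2: per-node fan-in.
    for name in nodes:
        fan_in = sum(1 for _, dst in edges if dst == name)
        if fan_in > max_fan_in:
            violations.append(f"node '{name}' fan-in {fan_in} exceeds {max_fan_in}")
    return violations
```

Full mapping is harder than checking: recognizing motifs such as a recurrent LIF ⊳ Linear loop requires subgraph matching, which is NP-complete in general, so practical backends rely on pattern matching against a small set of known motifs.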

Experimental evaluation across platforms: NIR integrates 7 simulators (Lava/Lava-DL, Nengo, Norse, Rockpool, Sinabs, snnTorch, Spyx) and 4 digital hardware platforms (Loihi 2 via Lava, SynSense Speck via Sinabs, SpiNNaker2, SynSense Xylo via Rockpool). Three benchmark tasks are used: (1) Single LIF neuron under identical input; (2) 9-layer SCNN (2D convolutions, sum pooling, I&F neurons) trained on N-MNIST via ANN→SNN conversion; (3) SRNN with one recurrent CuBa-LIF layer trained on Braille letter recognition via BPTT with surrogate gradients. The LIF model was authored in Norse, exported to NIR, and imported by all platforms. For SCNN, the ANN was trained (Adam, lr=0.001, 4 epochs, 98% validation accuracy) on N-MNIST frames, then converted to an SNN with Sinabs and exported to NIR for cross-platform execution. For SRNN, training in snnTorch included regularization on spike counts, surrogate gradients (fast sigmoid, k=5), and two reset variants (bias+zero reset vs. no-bias+subtractive reset), then export to NIR for evaluation on other platforms. Metrics include end-task accuracy and activity similarity (cosine similarity of spike rates) for hidden layers. The study analyzes discretization choices, quantization, and determinism effects (notably for asynchronous hardware) as sources of mismatch.
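The activity-similarity metric — cosine similarity of per-neuron spike rates between two platforms' runs — can be sketched directly; the function name and array layout are assumptions for illustration:

```python
import numpy as np

def activity_similarity(spikes_a, spikes_b):
    """Cosine similarity between per-neuron spike rates of two runs.
    spikes_*: arrays of shape (timesteps, neurons) with 0/1 entries."""
    ra = np.asarray(spikes_a).mean(axis=0)  # mean firing rate per neuron
    rb = np.asarray(spikes_b).mean(axis=0)
    denom = np.linalg.norm(ra) * np.linalg.norm(rb)
    return float(ra @ rb / denom) if denom else 1.0
```

Identical spike trains yield a similarity of 1.0; rate vectors with no overlapping active neurons yield 0.0, so the metric captures rate-level agreement while ignoring exact spike timing.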

Key Findings
  • NIR interoperability: Implemented across 7 simulators and 4 digital neuromorphic hardware platforms, enabling model-centric exchange and execution without platform-specific rewrites.
  • LIF neuron dynamics: Output firing rates and spike timing align closely across platforms; systematic differences observed:
    • Rockpool and Sinabs integrate inputs before leak update, shifting membrane potential closer to threshold earlier; Lava/Loihi 2 compute spikes at t from membrane at t−1, producing one-timestep spike delays when absolute timing matters.
    • Xylo and Speck do not support the LIF primitive and are omitted for this task.
  • SCNN on N-MNIST: High cross-platform agreement in end-task accuracy, with mean 97.7% and SD 0.9%. Representative accuracies: Nengo 98.1%, Norse 98.1%, Sinabs 98.5%, snnTorch 97.9%, Spyx 97.1%, Lava/Loihi 98.2%, Speck 95.4%, SpiNNaker2 98.2% (Rockpool and Xylo N/A for this model due to missing convolution or neuron-model support). Cosine similarity of the first spiking layer shows most platforms nearly identical; Rockpool, Sinabs, snnTorch, and Speck deviate more due to discretization choices.
  • SRNN on Braille: Cross-platform performance is less consistent; recurrent dynamics amplify small discrepancies. Example accuracies (two variants):
    • SRNN (subtractive reset, no bias): Norse 93.57%, snnTorch 92.1%, Spyx 92.12%, SpiNNaker2 93.57%, Xylo 85.71%, Rockpool 71.4% (Nengo 78.6%, Lava N/A).
    • SRNN (zero reset, with bias): Norse 94.29%, snnTorch 95.0%, SpiNNaker2 85.0%, Spyx 84.29%, Nengo 55.7%, Lava 48.6% (Rockpool N/R, Xylo N/A).
  • Sources of mismatch identified and characterized: (i) neuron model implementation differences (reset/spike timing, discretization), (ii) quantization on hardware (post-training quantization used; QAT not studied in depth), and (iii) non-determinism in asynchronous hardware (e.g., Speck, SpiNNaker2).
  • NIR as a measurement tool: Provides a standardized idealized model to quantify and analyze simulator and device mismatch impacts on dynamics and accuracy.
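The update-ordering mismatch noted above (integrating input before versus after the leak) can be made concrete with a one-step numeric sketch; the values below are illustrative and not tied to any platform's actual parameters:

```python
# Membrane value seen by the threshold comparison under two orderings.
alpha, v, u = 0.9, 0.5, 0.4  # leak factor, prior membrane, input (illustrative)

# Ordering A: integrate input first, then compare to threshold
# (input contributes at full strength, so the check sits closer to threshold).
v_check_a = v + u

# Ordering B: apply leak first, then integrate and compare
# (the prior membrane is decayed before the check).
v_check_b = alpha * v + u

assert v_check_a > v_check_b  # ordering A crosses threshold earlier
```

With identical parameters and inputs, the two orderings can therefore produce spikes on different timesteps — negligible for rate-coded feed-forward models, but compounding in recurrent dynamics.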

Discussion

The findings show that NIR achieves its goal of providing a platform-independent, continuous-time model representation that preserves computational intent across heterogeneous backends. For feed-forward or predominantly rate-coded computations (e.g., single LIF, SCNN), NIR-based deployments reproduce both activations and accuracies with small variation, supporting the hypothesis that an IR for neuromorphic systems can bridge software and hardware. For time-sensitive, recurrent models, small implementation differences (discretization, reset timing, quantization, determinism) can lead to divergent dynamics and end-task performance, highlighting the need for platform-aware training (e.g., quantization-aware training) and standardization efforts around neuron models. NIR’s declarative, hybrid-system formalism serves as a common reference to compare and attribute discrepancies, encouraging co-evolution of hardware and software while enabling broader accessibility and reproducibility. The work positions NIR alongside existing ecosystems (PyNN, NeuroML, Lava, etc.) as a complementary, serializable exchange format focused on continuous-time neuromorphic computation.

Conclusion

The paper introduces NIR, a neuromorphic intermediate representation that specifies computations as graphs of hybrid continuous-time primitives. NIR decouples model development from platform-specific constraints and supports deployment across 7 simulators and 4 digital neuromorphic hardware platforms. Experiments demonstrate strong cross-platform alignment for feed-forward rate-based models and reveal challenges for recurrent dynamics due to discretization and hardware-specific behaviors. NIR also functions as a tool to measure and analyze mismatches across platforms. Future work includes expanding the primitive set (e.g., adaptive thresholds, gating, resonate-and-fire, multicompartment models), refining versioning (opset-like), integrating multi-level abstractions akin to MLIR dialects, and exploring mappings to larger analog/mixed-signal and multi-chip systems to scale complex continuous-time models efficiently.

Limitations
  • Current NIR primitive set excludes mechanisms such as adaptive thresholds, gating, resonate-and-fire, and multicompartmental neurons.
  • Experiments focus on digital simulators and hardware; analog/mixed-signal platforms are discussed conceptually but not evaluated.
  • Cross-platform discrepancies persist for recurrent models due to differences in discretization schemes, reset timing, quantization precision, and hardware determinism; platform-dependent optimization (e.g., QAT) was not systematically explored.
  • Some hardware lacks support for required primitives (e.g., convolution or LIF), limiting participation in certain benchmarks.
  • Constraint checking and optimal mapping require non-trivial subgraph isomorphism and approximation strategies, which remain open engineering challenges.