

Behavioral decomposition reveals rich encoding structure employed across neocortex in rats

B. Mimica, T. Tombaz, et al.

This study reveals how brain regions in freely foraging rats encode their natural behaviors, highlighting region-specific processing of visual and auditory information. Conducted by Bartul Mimica and colleagues, it uncovers a complex interplay of neural activity across the dorsal cortex that informs our understanding of animal behavior and its neural mechanisms.

Introduction
The study addresses how neocortical activity patterns related to movement map onto natural behavior and how such signals might support processing in sensory cortices. Traditional approaches examine neural systems in isolation and under constrained behaviors, limiting understanding of how sensory and motor cortices respond during naturalistic actions. Prior observations show that self-generated movements strongly modulate cortical activity, even in visual cortex in darkness, yet sensory systems are rarely studied together, and almost always in head-fixed preparations. The authors aim to determine the extent and nature of momentary behavior representation across visual, auditory, somatosensory, and motor cortices in freely moving rats, and whether region-specific coding of posture and movement supports locally relevant computations during active sensing and movement.
Literature Review
The authors review advances that enable studying naturalistic behavior, including quantitative 3D pose estimation and unsupervised machine learning methods for action classification, which have led to insights into subcortical control of behavior, escape behaviors, and effects of pharmacological agents on behavioral landscapes. They summarize extensive evidence that self-generated movements modulate sensory cortices (visual and auditory), including gain modulation and predictive processing, typically in head-fixed contexts. Prior work shows head-motion signals and locomotion effects in visual cortex, as well as corollary discharge mechanisms in auditory cortex, but cross-modal comparisons in freely moving conditions and mapping of behavioral feature encoding across cortical regions remain limited.
Methodology
- Subjects: 7 male Long-Evans rats (3–4 months). Neuropixels 1.0 probes were implanted to record from primary motor (M1), primary somatosensory (S1HL, S1Tr), visual (V1, V2L/V2M), and auditory (A1, A2D) cortices. Probe placements were verified by MRI and histology; laminar sampling varied by area.
- Behavior: Free foraging in a 2×2 m open-field arena under light and dark conditions; additional sessions with a 15 g head weight; auditory white-noise stimuli (5 s, pseudo-random ISI) during auditory recordings.
- Tracking: 3D motion capture at 120 Hz with head and trunk markers, plus a head-mounted IMU (100 Hz). Postural variables were extracted for the head (pitch, azimuth, roll) in allocentric and egocentric frames, along with back pitch/azimuth, neck elevation, and whole-body variables (speed, self-motion, turning). Movement derivatives were computed and variables binned for tuning analyses.
- Ethogram: Time series of 6 postural parameters plus speed were detrended, wavelet-decomposed (0.5–20 Hz), normalized, reduced via PCA (22 PCs), embedded with t-SNE, and segmented by watershed to produce 44 discrete actions, labeled via post hoc inspection (a code sketch of this pipeline follows the list).
- Neural recordings: High-density extracellular recordings; spike sorting with Kilosort 2.0 and curation in Phy; units classified as regular-spiking (RS) or fast-spiking (FS) by waveform features.
- Action encoding/decoding: Cells were classified as action-encoding based on firing-rate significance against shuffled data and stability across session halves. Naive Bayes decoders were trained to predict actions from ensemble spike counts (a decoder sketch appears below).
- Feature encoding: Generalized linear models (Bernoulli with logit link, L1-regularized) with forward selection and 10-fold cross-validation. Covariates comprised 23 behavioral features (postures, movements, position, speed, self-motion). Feature contributions were quantified by the relative log-likelihood ratio (rLLR) and model performance by McFadden pseudo-R² (see the GLM sketch below).
- Topography: Spatial gradients of tuning were quantified along mediolateral (visual/auditory) and rostrocaudal (S1/M1) axes using χ² tests against uniform distributions.
- Sensory modulation: Auditory responsiveness was quantified via a Sound Modulation Index (SMI) to white noise and visual responsiveness via a Luminance Modulation Index (LMI) across light/dark sessions; decoders assessed sensory-state decoding from ensembles.
- Functional connectivity: Putative synapses were identified via cross-correlograms with stringent Poisson thresholds and latency constraints, classifying excitatory (RS-driven) and inhibitory (FS-driven) connections. Functional categories were defined by best single-covariate tuning (posture vs movement) and sensory modulation (luminance/sound). Relationships among synaptic strength, anatomical distance, and functional similarity were analyzed.
- Weight manipulation: Behavior and tuning metrics were compared across weight-free and weighted sessions to assess the robustness of kinematic encoding.
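To make the ethogram step concrete, below is a minimal Python sketch of an action-segmentation pipeline of this kind: Morlet wavelet decomposition of the postural time series, PCA to 22 components, t-SNE embedding, and watershed segmentation of the embedding's density map. The libraries (NumPy, SciPy, scikit-learn, scikit-image), the hand-rolled wavelet filter bank, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def morlet_power(x, fs, freqs, w=5.0):
    """Morlet wavelet magnitude of a 1-D signal at each frequency (Hz)."""
    out = np.empty((x.size, freqs.size))
    for j, f in enumerate(freqs):
        s = w / (2 * np.pi * f)                   # Gaussian envelope width (s)
        t = np.arange(-4 * s, 4 * s, 1 / fs)
        psi = np.exp(2j * np.pi * f * t - t**2 / (2 * s**2))
        out[:, j] = np.abs(np.convolve(x, psi / psi.size, mode="same"))
    return out

def segment_actions(features, fs=120.0, n_pcs=22,
                    freqs=np.geomspace(0.5, 20.0, 18), bins=200):
    """features: (n_samples, n_vars) detrended postural variables + speed.
    Returns one discrete action label per sample."""
    # Wavelet-decompose each variable across 0.5-20 Hz, then normalize.
    X = np.hstack([morlet_power(v, fs, freqs) for v in features.T])
    X = (X - X.mean(0)) / (X.std(0) + 1e-12)

    # Reduce to 22 PCs (as in the paper) and embed in 2-D with t-SNE.
    pcs = PCA(n_components=n_pcs).fit_transform(X)
    emb = TSNE(n_components=2, perplexity=30.0).fit_transform(pcs)

    # Watershed on the inverted, smoothed occupancy map of the embedding;
    # each basin becomes one discrete action.
    hist, xe, ye = np.histogram2d(emb[:, 0], emb[:, 1], bins=bins)
    density = ndimage.gaussian_filter(hist, sigma=2.0)
    peaks = peak_local_max(density, min_distance=5)
    markers = np.zeros_like(density, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, peaks.shape[0] + 1)
    basins = watershed(-density, markers, mask=density > 0)

    # Map every sample back to its basin.
    ix = np.clip(np.digitize(emb[:, 0], xe) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(emb[:, 1], ye) - 1, 0, bins - 1)
    return basins[ix, iy]
```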
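The action-decoding step can be sketched in a few lines. Here sklearn's MultinomialNB serves as a stand-in for the paper's naive Bayes decoder over ensemble spike counts, with chance level estimated by shuffling action labels; the epoch-based data layout is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

def decode_actions(counts, actions, cv=10, seed=0):
    """counts: (n_epochs, n_cells) spike counts per behavioral epoch;
    actions: (n_epochs,) discrete action labels from the ethogram.
    Returns cross-validated accuracy and a label-shuffle chance estimate."""
    acc = cross_val_score(MultinomialNB(), counts, actions, cv=cv).mean()
    shuffled = np.random.default_rng(seed).permutation(actions)
    chance = cross_val_score(MultinomialNB(), counts, shuffled, cv=cv).mean()
    return acc, chance
```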
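For the feature-encoding models, the sketch below fits an L1-regularized Bernoulli GLM (logit link) for one cell with 10-fold cross-validation and scores it with McFadden pseudo-R² = 1 − LL_model/LL_null. sklearn's LogisticRegression stands in for the authors' fitting code, and the regularization strength is an assumption; the forward-selection loop over the 23 covariates would simply wrap fit_glm, greedily adding the covariate that most improves the held-out score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def mcfadden_r2(y, p):
    """McFadden pseudo-R^2 = 1 - LL_model / LL_null (intercept-only null)."""
    eps = 1e-12
    ll_model = np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    p0 = np.full_like(p, y.mean())
    ll_null = np.sum(y * np.log(p0 + eps) + (1 - y) * np.log(1 - p0 + eps))
    return 1.0 - ll_model / ll_null

def fit_glm(X, y, n_folds=10, C=1.0):
    """X: (n_bins, n_covariates) binned behavioral features;
    y: (n_bins,) 0/1 spike indicator per time bin.
    Returns the mean held-out McFadden pseudo-R^2."""
    scores = []
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(X):
        glm = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        glm.fit(X[train], y[train])
        p = glm.predict_proba(X[test])[:, 1]
        scores.append(mcfadden_r2(y[test], p))
    return float(np.mean(scores))
```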
Key Findings
- Discrete naturalistic actions were widely encoded: individual neurons in each region stably encoded nearly all 44 actions, with action-encoding proportions of 51% (visual), 55% (auditory), 58% (motor), and 56% (somatosensory). Ensemble decoders predicted most actions above chance in all regions; accuracy increased with the number of simultaneously recorded cells, and decoding accuracies were correlated across regions (Spearman ρ ≈ 0.39 ± 0.11, p = 0.007).
- Cell type and layer: FS and RS neurons encoded similar ranges of actions. Action encoding in visual and auditory cortices was more common in deep layers (L5/6), whereas motor cortex showed comparable encoding in superficial and deep layers.
- Fine-grained features (GLM):
• Visual cortex: 62% of cells encoded at least one covariate, strongest for allocentric head movement in the horizontal plane (azimuthal head movement) and planar body motion, followed by egocentric head posture. Tuned fractions were larger in L5/6, and FS neurons were slightly more often tuned than RS (72% vs 60%).
• Auditory cortex: 63% tuned; principal features were gravity-relative (allocentric) head roll and pitch, followed by egocentric head posture; proportions were larger in deep layers.
• Motor cortex: 79% tuned; principal features included planar body motion, back movement, and egocentric head posture, with dense head-kinematic encoding in deep layers. Models were more complex and had higher explanatory power (median pseudo-R² ≈ 0.03 vs 0.02 auditory, 0.01 visual, 0.02 somatosensory).
• Somatosensory cortex: Primarily encoded planar body motion, back movement, and back posture, with the largest proportion of unclassified units (51%). FS neurons were more often tuned than RS (69% vs 46%), and models were sparser (1–2 covariates).
• Accelerometer vs optical tracking: Both captured allocentric head features in sensory cortices, but only optical tracking (which included the back) revealed the dominant egocentric head- and back-related encoding in motor/somatosensory areas.
- Topographic gradients:
• Visual–auditory axis: Allocentric head-posture encoding increased laterally from V1 to V2L, peaking in A2D (χ²(7)=29.5, p=4.8e-5); allocentric head movement peaked near V2L (χ²(7)=13.09, p=0.04). Planar body-motion representation increased laterally, with a maximum at the V1–V2L border in deep layers (χ²(7)=18.2, p=0.006).
• Somatosensory–motor axis: Egocentric head posture and movement were more frequent anteriorly (posture χ²(7)=37.7, p=1.3e-6; movement χ²(7)=106.9, p=8.8e-21), while back posture and movement dominated posteriorly (posture χ²(7)=18.8, p=0.004; movement χ²(7)=41.02, p=2.9e-7). The total fraction of classified cells peaked in M1.
- Sensory–behavioral overlap: In auditory cortex, 75% of neurons were modulated by behavior or white noise (40% behavior-only, 12% sound-only, 23% both). In visual cortex (using luminance), 42% were behavior-only, 12% luminance-only, and 20% both. Visual behavioral tuning was less stable across light vs dark sessions than across repeated light sessions, whereas auditory tuning was stable across light/dark (a sketch of the modulation index follows this list).
- Functional connectivity: Putative synapses were identified in visual (n=247), auditory (n=107), motor (n=181), and somatosensory (n=35) cortices (see the CCG sketch after this list). Visual cortex, especially V2L, showed extensive heterogeneous connectivity (86.8% of heterogeneous pairs were in V2L), including excitatory posture→luminance/movement and inhibitory movement→posture motifs; homogeneous movement↔movement and posture↔posture connections were also present. Auditory cortex exhibited inhibitory movement→posture motifs (more frequent in A2D; p=0.015) and inhibitory movement→sound-modulated motifs (more frequent in A1), consistent with corollary discharge/gain control. Synaptic strength did not correlate with functional distance in any area.
- Head-weight manipulation: Adding 15 g to the implant slightly shifted head roll but had minimal impact on behavior or tuning in motor or visual cortices; there was some increase in firing rates and in the stability of egocentric head-azimuth tuning relative to weight-free sessions. GLM covariate selection was similar with and without the weight.
- Layer-specific trends: Dynamic low-level behavioral features were preferentially represented in L5/6 of sensory cortices, supporting deep-layer integration of movement and vestibular cues.
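The SMI and LMI values above are contrast-style indices. A common form, given here as a hedged sketch (whether the paper uses exactly this normalization is an assumption), is (a − b)/(a + b), bounded in [−1, 1] with 0 meaning no modulation:

```python
import numpy as np

def modulation_index(rate_a, rate_b):
    """E.g., SMI ~ modulation_index(rate_during_noise, rate_baseline);
    LMI ~ modulation_index(rate_light, rate_dark)."""
    rate_a = np.asarray(rate_a, dtype=float)
    rate_b = np.asarray(rate_b, dtype=float)
    return (rate_a - rate_b) / (rate_a + rate_b + 1e-12)
```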
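The putative-synapse counts come from cross-correlogram (CCG) analysis. The sketch below illustrates the general approach: histogram spike-time lags, estimate a slow CCG baseline by smoothing, and flag short-latency peaks (putative excitation) or troughs (putative inhibition) against a Poisson threshold. The bin size, window, latency range, and significance level are illustrative assumptions rather than the paper's exact criteria.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import poisson

def ccg(pre, post, bin_s=0.0005, window_s=0.01):
    """Cross-correlogram of two spike-time arrays (seconds). The all-pairs
    lag matrix is fine for a sketch but should be restricted to nearby
    spikes for long recordings."""
    edges = np.arange(-window_s, window_s + bin_s, bin_s)
    lags = (post[None, :] - pre[:, None]).ravel()
    counts, _ = np.histogram(lags[np.abs(lags) <= window_s], edges)
    return counts, 0.5 * (edges[:-1] + edges[1:])

def classify_pair(pre, post, alpha=1e-5):
    """Flag a putative excitatory/inhibitory connection from pre to post."""
    counts, centers = ccg(pre, post)
    baseline = gaussian_filter1d(counts.astype(float), sigma=10)  # slow trend
    # Test bins at plausible monosynaptic latencies (~0.5-4 ms after pre).
    mask = (centers > 0.0005) & (centers < 0.004)
    p_peak = poisson.sf(counts[mask] - 1, baseline[mask])    # P(X >= count)
    p_trough = poisson.cdf(counts[mask], baseline[mask])     # P(X <= count)
    if p_peak.min() < alpha:
        return "excitatory"
    if p_trough.min() < alpha:
        return "inhibitory"
    return None
```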
Discussion
The findings show that ongoing behavior is robustly represented across dorsal sensorimotor cortices: nearly any ethogram-defined action can be decoded from ensembles in visual, auditory, somatosensory, and motor areas. However, when decomposed to elemental kinematic features, encoding is region-specific and organized topographically. Visual and auditory cortices preferentially encode head kinematics in allocentric (world-referenced) coordinates, consistent with supporting sensory processing of external stimuli (e.g., stabilizing/predicting visual flow; facilitating 3D sound localization via gravity-relative head orientation). Motor and somatosensory cortices primarily encode egocentric head and trunk features, aligned with generating and monitoring body kinematics. Functional coupling patterns suggest distinct regional uses of behavioral signals: V2L networks integrate posture and movement via excitatory and inhibitory motifs that could disambiguate self-generated optic flow; A1 and A2D inhibitory motifs indicate movement-dependent gain control for self-generated sounds and posture-dependent enhancement of spatial hearing, respectively. The lack of a relationship between synaptic strength and functional similarity indicates that both homogeneous and heterogeneous interactions contribute to information flow. Overall, the study illuminates how behavior-related signals are differentially utilized to support locally relevant computations across cortical areas during natural behavior.
Conclusion
The study demonstrates that dorsal cortical regions in freely moving rats ubiquitously encode discrete naturalistic actions, while elementary pose and movement features show region-specific, topographically organized encoding. Visual and auditory cortices favor allocentric head features; motor and somatosensory cortices favor egocentric head and trunk features. Functional connectivity reveals area-specific motifs suggesting distinct uses of behavioral signals in visual motion processing and auditory gain/localization. These results underscore the importance of behavioral state and kinematics in cortical computation during natural behavior. Future work should directly probe how behavioral and sensory signals are integrated at circuit and cell-type resolution, potentially using miniaturized two-photon imaging and holographic perturbations, and examine the influence of internal states and hierarchical behavioral organization over longer timescales.
Limitations
- Somatosensory cortex showed a high fraction of unclassified units (51%), likely reflecting untracked features (e.g., whiskers, limbs, tail) and the specific S1 subregions sampled (hindlimb/trunk).
- Unequal laminar sampling across areas (e.g., visual/auditory recordings biased toward deep layers) may influence cross-regional comparisons.
- The behavioral paradigm was limited to free foraging; the lack of task structure limits functional interpretation in motor cortex, and no additional sensory manipulations were applied in somatosensory cortex.
- The feature set derived primarily from head and trunk tracking; accelerometer-only features could not capture back-related encoding as effectively as optical tracking.
- The head-weight manipulation was modest; stronger perturbations could further test encoding robustness.
- The connectivity analysis is correlational (putative synapses inferred from CCGs), cannot establish causality, and is limited by sampling constraints.