The computational role of structure in neural activity and connectivity


S. Ostojic and S. Fusi

Dive into the mysteries of the brain with groundbreaking research by Srdjan Ostojic and Stefano Fusi. They unveil a unified approach to decoding the hidden structures within neural activity, offering insights that bridge geometry, modularity, and functional computation. Discover how this framework links structure in connectivity to computation and deepens our understanding of brain function.

Introduction
The paper addresses how to identify and interpret structure within complex, mixed-selective neural activity and how such structure underpins computation. Classical views that treat single neurons as functionally interpretable building blocks are challenged by widespread mixed selectivity across the brain, motivating population-level analyses. The authors propose focusing on two complementary types of structure: geometry of population representations and modularity of neuron response patterns. They argue that trained neural network models, which expose both activity and full connectivity, offer a way to relate task structure to representational geometry and modularity in both activity and connectivity. The goal is to set up a framework connecting task demands, representational structure, and underlying connectivity, and to use it to review recent modeling work across three computational classes.
Literature Review
The paper synthesizes strands of research on population geometry, mixed selectivity, and functional cell classes, alongside modeling studies employing trained artificial networks. It reviews psychophysics-inspired task designs with latent task variables, analyses of neural representations through topology, distances, dimensionality, and linear readout separability, and clustering-based approaches to identify functional modules in selectivity space. It integrates results from studies on random pattern classification, manifold-structured inputs, and context-dependent decision-making. The authors also connect low-rank recurrent network theory that links connectivity, dynamics, and computation, and survey how learning regimes (lazy/NTK versus rich) in trained networks shape observable modularity in activity versus connectivity.
Methodology
As a conceptual and analytical framework rather than an empirical study, the methodology formalizes tasks as mappings from latent task-variable spaces to actions, acknowledging nonlinear embeddings from sensory inputs. Neural activity across C conditions and N neurons is represented by a C×N activity matrix. Two complementary analyses follow: (1) geometry via columns (population states) in the N-dimensional activity space, characterized by dimensionality, distances, topology, and linear readout hyperplanes to assess separability, flexibility (number of implementable dichotomies), and generalization/abstraction; (2) modularity via rows (single-neuron response profiles), representing neurons in condition or selectivity space (e.g., regression coefficients for K task variables) and testing for clustered structure versus isotropic mixed selectivity using clustering and comparison to null models.

Connectivity is analyzed analogously via a weight matrix for an intermediate layer with inputs and readouts, where columns define vectors (input embeddings and readout directions) in activity space and rows define points in connectivity space, whose axes are synaptic weights. Structure is assessed both geometrically (arrangements of input and output vectors) and modularly (clusters reflecting correlations across input and output weights). For recurrent networks with low-rank structure, recurrent connectivity is decomposed into unit-rank terms with effective input (m) and output (n) vectors; unrolling the dynamics embeds these into the same weight-matrix framework.

The approach is then applied to three computation classes: (i) flexible classification of random patterns, (ii) manifold-structured inputs and abstract/disentangled representations for generalization, and (iii) context-dependent readouts, highlighting how learning regimes shape activity versus connectivity modularity.
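To make the two activity-matrix analyses concrete, here is a minimal sketch, not taken from the paper: the simulated C×N matrix, the choice of three clusters, and the covariance-matched null model are illustrative assumptions. It quantifies geometry (dimensionality and linear separability of one dichotomy) and modularity (clustering of single-neuron selectivity profiles against an isotropic null) with standard Python tooling.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import silhouette_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Illustrative C x N activity matrix: C task conditions, N neurons, K latent task variables.
C, N, K = 40, 200, 3
task_vars = rng.standard_normal((C, K))          # latent task-variable values per condition
mixing = rng.standard_normal((K, N))             # how each neuron mixes the task variables
activity = task_vars @ mixing + 0.5 * rng.standard_normal((C, N))

# --- Geometry: columns are population states in N-dimensional activity space ---
pca = PCA().fit(activity)
dim90 = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90)) + 1
dichotomy = (task_vars[:, 0] > 0).astype(int)    # one example dichotomy over conditions
separability = cross_val_score(LogisticRegression(max_iter=1000),
                               activity, dichotomy, cv=5).mean()
print(f"dimensionality (90% variance): {dim90}, linear separability: {separability:.2f}")

# --- Modularity: rows are single-neuron profiles, embedded in K-dim selectivity space ---
betas = LinearRegression().fit(task_vars, activity).coef_     # (N, K) selectivity vectors

def cluster_score(vectors, n_clusters=3):
    """Cluster selectivity directions and score how well separated the clusters are."""
    directions = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(directions)
    return silhouette_score(directions, labels)

observed = cluster_score(betas)
# Null model: isotropic mixed selectivity with matched covariance but no cluster structure.
cov = np.cov(betas.T)
null = [cluster_score(rng.multivariate_normal(np.zeros(K), cov, size=N)) for _ in range(100)]
print(f"clustering score: {observed:.2f} vs null {np.mean(null):.2f} ± {np.std(null):.2f}")
```

Because the simulated neurons mix task variables randomly, the clustering score should fall within the null distribution; a genuinely modular population would exceed it.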
Key Findings
- Geometry and modularity are complementary structures in neural activity and connectivity; geometry is rotation-invariant and links directly to linear readout performance, while modularity captures clustered organization of neuron selectivities or weight patterns.
- Flexible classification of random inputs is supported by high-dimensional embeddings and heterogeneous mixed selectivity, achievable with random unstructured input connectivity; this maximizes flexibility but limits generalization to novel inputs.
- When inputs lie on low-dimensional manifolds embedded in high-dimensional sensory spaces, generalization is optimized when intermediate-layer activity forms abstract (disentangled/factorized) representations that minimize embedding dimensionality and align coding directions for latent variables, enabling linear readouts to generalize across irrelevant factors (a toy comparison of random versus abstract embeddings follows this list).
- Training networks on multiple tasks with manifold-structured inputs naturally yields abstract/disentangled intermediate representations; similar low-dimensional manifolds arise in RNNs for working memory and flexible timing, with converging evidence from multiple brain areas.
- In context-dependent decision-making, a core modular connectivity structure emerges as correlations between input and output weights that route distinct stimulus features via context-gated subpopulations. Whether modularity is also visible in neural selectivity depends on learning regime: the lazy/NTK regime preserves random input weights (activity appears non-modular), whereas rich training shapes input weights and can produce modular selectivity alongside connectivity modularity.
- Low-rank recurrent networks provide an interpretable link between connectivity geometry (effective input/output vectors), dynamics, and computation; modular structure in effective weights can implement context-dependent integration without necessarily producing modular selectivity.
- Overall, task demands, data structure, and learning regime together determine whether structure is expressed geometrically, modularly, and/or in connectivity versus activity.
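As a toy illustration of the flexibility-versus-generalization trade-off summarized above (an assumption-laden sketch, not the paper's actual simulations), the code below compares a random nonlinear expansion with a hand-built abstract (disentangled) representation of two binary latent variables. The random embedding can implement an arbitrary XOR dichotomy but typically generalizes less well to untrained values of the irrelevant variable, whereas the abstract embedding shows the opposite pattern.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Two binary latent task variables -> 4 conditions; many noisy trials per condition.
latents = np.array(list(product([0, 1], repeat=2)), dtype=float)   # (4, 2)
trials = np.repeat(latents, 200, axis=0)                            # (800, 2)
n_in, n_hidden = 2, 300

def random_embedding(x):
    """High-dimensional nonlinear mixed-selective expansion with random weights."""
    w = rng.standard_normal((n_in, n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(x @ w + b)

def abstract_embedding(x):
    """Disentangled representation: each latent variable has its own coding direction."""
    w = np.zeros((n_in, n_hidden))
    w[0, :n_hidden // 2] = 1.0
    w[1, n_hidden // 2:] = 1.0
    return x @ w

noise = 0.3
for name, embed in [("random mixed", random_embedding), ("abstract", abstract_embedding)]:
    z = embed(trials) + noise * rng.standard_normal((len(trials), n_hidden))
    y = trials[:, 0]                                   # decode latent variable 1

    # Flexibility proxy: decode an arbitrary (XOR) dichotomy of the four conditions.
    xor = (trials[:, 0] != trials[:, 1]).astype(float)
    flex = cross_val_score(LogisticRegression(max_iter=1000), z, xor, cv=5).mean()

    # Generalization (CCGP-style): train with latent variable 2 = 0, test with it = 1.
    train, test = trials[:, 1] == 0, trials[:, 1] == 1
    clf = LogisticRegression(max_iter=1000).fit(z[train], y[train])
    gen = clf.score(z[test], y[test])
    print(f"{name:>12}: XOR dichotomy {flex:.2f}, cross-condition generalization {gen:.2f}")
```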
Discussion
The framework links task structure to representational geometry and modularity at both activity and connectivity levels, clarifying when each type of structure supports computation. Geometry directly constrains linear readout capabilities such as separability, flexibility, and generalization; modularity reveals subgroup organization of neurons that may reflect anatomical, developmental, or learned structure. In unstructured tasks, random high-dimensional embeddings support flexibility but hinder generalization. With manifold-structured inputs, abstract/disentangled geometry supports generalization by aligning coding directions for latent variables. For context-dependent readouts, modular connectivity (input-output weight correlations) is a fundamental mechanism for routing information, while the expression of modularity in activity depends on training dynamics. The analysis emphasizes that structure may be latent in connectivity even when activity appears category-free mixed selective. The authors note that while geometric structure can often be tied to behavior via linear readouts, the behavioral implications of modularity are less direct and require further theory. Advances in multimodal recordings linking functional responses with biological labels and connectivity promise to deepen understanding of how different structures arise and are used computationally.
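As a hypothetical, stripped-down illustration of routing through input-output weight correlations (not a reproduction of the trained networks discussed in the paper), the sketch below gates two subpopulations with a context signal; because the readout weights are correlated with each subpopulation's feature input weights, the active context determines which stimulus feature reaches the output, even though single-unit responses remain mixed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400                                              # intermediate-layer neurons
half = n // 2

# Two subpopulations: population A receives feature 1, population B receives feature 2.
w_feat1 = np.concatenate([rng.standard_normal(half), np.zeros(half)])
w_feat2 = np.concatenate([np.zeros(half), rng.standard_normal(half)])

# Context inputs gate one subpopulation on and silence the other with a suppressive bias.
w_ctx1 = np.concatenate([np.zeros(half), -3.0 * np.ones(half)])   # context 1: keep A, silence B
w_ctx2 = np.concatenate([-3.0 * np.ones(half), np.zeros(half)])   # context 2: keep B, silence A

# Readout weights are correlated with both feature input weights, so whichever
# subpopulation the context leaves active determines which feature is read out.
w_out = w_feat1 + w_feat2

def readout(feat1, feat2, context):
    w_ctx = w_ctx1 if context == 1 else w_ctx2
    h = np.maximum(0.0, feat1 * w_feat1 + feat2 * w_feat2 + w_ctx)   # ReLU units
    return w_out @ h / half

for context in (1, 2):
    for f1, f2 in [(+1, -1), (-1, +1)]:
        print(f"context {context}, feature1={f1:+d}, feature2={f2:+d} -> "
              f"readout {readout(f1, f2, context):+.2f}")
```

In context 1 the readout sign follows feature 1 and ignores feature 2, and vice versa in context 2, showing how modular structure in connectivity can route information without any modular labeling being visible in the activity alone.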
Conclusion
The paper proposes a unified roadmap to study structure in neural systems by jointly analyzing geometry and modularity in both activity and connectivity and relating them to task demands and computations. It demonstrates, across three computation classes, how data structure and learning regimes shape whether structure manifests as high-dimensional mixed selectivity, abstract/disentangled manifolds, or modular connectivity/selectivity. The framework extends naturally to low-rank recurrent networks, making explicit the link between connectivity, dynamics, and computation. Future work should map functional structure to biological properties, determine the generality of structures discovered in trained artificial networks across learning regimes, broaden the taxonomy of computations toward naturalistic behavior, and integrate computational with developmental and other biological constraints.
Limitations
The work is a conceptual review and does not present new empirical datasets or quantitative benchmarks. The taxonomy of computations is simplified and task-centric, potentially insufficient for naturalistic behaviors. Learning regimes in artificial networks are not necessarily biologically plausible, and it remains unclear which resulting structures reflect universal computational constraints versus training specifics. The relationship between modularity and representational geometry, and their respective behavioral consequences, is not fully resolved. Developmental and anatomical constraints shaping structure are acknowledged but not modeled within the framework.