
Biology
Visuo-frontal interactions during social learning in freely moving macaques
M. Franch, S. Yellapantula, et al.
This fascinating study by Melissa Franch and colleagues reveals how visual information guides cooperative decisions in macaques. By analyzing simultaneous neural activity and eye-tracking data, the work shows how viewing social cues shapes cooperation learning, highlighting the vital role of the visuo-frontal network in social interactions.
~3 min • Beginner • English
Introduction
The study addresses how visual information from conspecifics is used to guide cooperative decisions in primates. Prior work has identified neural encoding of specific social variables (for example, reward value, actions, agent identity, and social rank), but most studies used stationary animals and synthetic stimuli, limiting insights into real-time, visually guided social decision-making. Macaques, which naturally rely on social gaze and facial cues, provide an ideal model. The authors developed a naturalistic experimental paradigm combining wireless neural recordings, wireless eye tracking, and markerless behavioral tracking in freely moving macaque pairs learning to cooperate for food reward. The central hypothesis is that visual cues (reward-related objects and partner viewing) drive social decisions and that learning to cooperate is supported by refined encoding and increased visuo-frontal coordination.
Literature Review
Previous studies have shown neural representations of social variables, including value and actions, agent identity, social rank, and prediction of others' behavior, primarily in frontal and limbic circuits. Visual processing areas and prefrontal cortex are known to encode complex features and support executive control, but how these signals are integrated during naturalistic, visually guided social interactions remained unexplored. Imaging and electrophysiology studies have identified networks activated by viewing other agents and social interactions, yet lacked simultaneous measurement of eye-gaze-defined visual input and neural activity during live interactions. This work builds on these literatures by examining real-time processing of visual social cues during cooperative behavior in freely moving macaques.
Methodology
Subjects and task: Two pairs of familiar macaques (each pair comprising a subordinate ‘self’ monkey and a dominant ‘partner’) were separated by a transparent divider that allowed visual but not physical interaction. Each animal had a push button controlling its own reward tray. Trials began when pellets were dispensed; cooperation required both animals to press and hold their buttons simultaneously, which moved the trays and delivered the reward. Each session contained 100–130 trials; pair 1 completed 18 learning sessions and pair 2 completed 17.
Neural and eye tracking: Spiking activity was recorded wirelessly and simultaneously from mid-level visual cortex (area V4) and dorsolateral prefrontal cortex (dlPFC) of the subordinate (‘self’) monkey using dual chronically implanted Utah arrays, enabling stable unit recordings across sessions (average units per session: monkey 1, 136 total, with 34 in V4 and 102 in dlPFC; monkey 2, 150 total, with 104 in V4 and 46 in dlPFC). Wireless eye tracking captured oculomotor events, and a scene camera recorded the field of view; DeepLabCut was used to label objects in the scene video (partner, reward pellets, dispenser, buttons) so that fixations could be assigned to objects.
Behavioral analyses: Button presses were represented as binary time series; cross-correlograms (CCGs) quantified coordination and lead–lag relationships across learning sessions. Conditional probability of cooperation and response delays (reaction times) were computed per monkey. Control experiments included an opaque divider to occlude visual access and separate ‘solo’ versus ‘social’ trials to assess social context effects.
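To illustrate this coordination measure, below is a minimal Python sketch (not the authors' code) of a cross-correlogram between two binary button-press time series; the 1 ms binning, lag window, and variable names are assumptions for the example.

import numpy as np

def button_press_ccg(self_press, partner_press, max_lag_ms=2000, bin_ms=1):
    """Cross-correlogram between two binary press series (1 = button held).

    Positive lags mean the partner's press follows the self press.
    Both inputs are assumed to be binned at bin_ms resolution.
    """
    self_press = np.asarray(self_press, dtype=float)
    partner_press = np.asarray(partner_press, dtype=float)
    max_lag = int(max_lag_ms / bin_ms)
    lags = np.arange(-max_lag, max_lag + 1)
    ccg = np.empty(lags.size)
    for i, lag in enumerate(lags):
        shifted = np.roll(partner_press, -lag)
        # np.roll wraps around; zero the wrapped edge so it does not bias the CCG
        if lag > 0:
            shifted[-lag:] = 0
        elif lag < 0:
            shifted[:abs(lag)] = 0
        # fraction of time bins in which both buttons are pressed at this lag
        ccg[i] = np.mean(self_press * shifted)
    return lags * bin_ms, ccg

# Example: the lag of the CCG peak gives the lead-lag between the two monkeys' presses.
# lags_ms, ccg = button_press_ccg(self_series, partner_series)
# peak_lag_ms = lags_ms[np.argmax(ccg)]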
Viewing events: Fixations were defined as periods in which eye speed remained below a threshold for ≥100 ms. Fixation rates were computed during trials (pre-cooperation) versus intertrial intervals for each object (pellets in tray, dispenser, partner, self button, floor). ‘View reward’ comprised fixations on the pellets and dispenser; ‘view partner’ comprised fixations on the partner’s head or body.
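A minimal sketch of how such fixations might be detected from gaze samples; the sampling rate and speed threshold used here are placeholder assumptions, not the paper's values.

import numpy as np

def detect_fixations(gaze_xy, fs_hz=500.0, speed_thresh_deg_s=30.0, min_dur_ms=100.0):
    """Return (start, end) sample indices of fixations: runs where eye speed
    stays below speed_thresh_deg_s for at least min_dur_ms.

    gaze_xy: (n_samples, 2) gaze position in degrees of visual angle.
    """
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    # instantaneous eye speed (deg/s) from sample-to-sample displacement
    speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * fs_hz
    slow = np.concatenate([[False], speed < speed_thresh_deg_s])
    min_len = int(round(min_dur_ms * fs_hz / 1000.0))
    fixations, start = [], None
    for i, is_slow in enumerate(slow):
        if is_slow and start is None:
            start = i
        elif not is_slow and start is not None:
            if i - start >= min_len:
                fixations.append((start, i))
            start = None
    if start is not None and len(slow) - start >= min_len:
        fixations.append((start, len(slow)))
    return fixations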
State-transition analyses: Markov-chain and hidden Markov models quantified transition probabilities between viewing and action events (for example, view partner → self-push), and these probabilities were tracked across learning sessions.
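A minimal sketch of first-order transition probabilities between labeled events, assuming each session has already been reduced to an ordered sequence of hypothetical event labels such as 'view_partner' and 'self_push'.

from collections import Counter, defaultdict

def transition_probabilities(event_sequence):
    """First-order Markov transition probabilities between labeled events.

    event_sequence: ordered list of event labels within a session,
    e.g. ['view_reward', 'view_partner', 'self_push', 'partner_push', ...].
    Returns {from_event: {to_event: P(to | from)}}.
    """
    counts = defaultdict(Counter)
    for a, b in zip(event_sequence[:-1], event_sequence[1:]):
        counts[a][b] += 1
    probs = {}
    for a, to_counts in counts.items():
        total = sum(to_counts.values())
        probs[a] = {b: n / total for b, n in to_counts.items()}
    return probs

# Example: P('self_push' | 'view_partner') can then be tracked across sessions.
# p = transition_probabilities(session_events)
# p_view_partner_to_push = p.get('view_partner', {}).get('self_push', 0.0)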
Single-neuron analyses: Peri-event time histograms were constructed, and firing rates around each event were compared with baseline (intertrial) rates. Events included fixations on social cues (reward, partner) and actions (self-push, partner-push), analyzed only when temporally separated from other events (>1 s). Within-cell effects were tested with Wilcoxon signed-rank tests, FDR-corrected.
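A minimal sketch of this kind of per-cell analysis using SciPy and statsmodels; the window lengths and variable names are illustrative assumptions rather than the authors' exact parameters.

import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def peri_event_rates(spike_times, event_times, window=(-1.0, 1.0)):
    """Firing rate (spikes/s) of one neuron in a window around each event."""
    spike_times = np.asarray(spike_times)
    width = window[1] - window[0]
    return np.array([
        np.sum((spike_times >= t + window[0]) & (spike_times < t + window[1])) / width
        for t in event_times
    ])

def event_responsive_cells(event_rates, baseline_rates, alpha=0.05):
    """Wilcoxon signed-rank per cell (event vs. baseline rates, paired across
    trials), with FDR correction across cells. Both inputs are lists of
    per-cell arrays with matched trial counts."""
    pvals = np.array([wilcoxon(ev, bl).pvalue
                      for ev, bl in zip(event_rates, baseline_rates)])
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method='fdr_bh')
    return reject  # boolean mask of significantly responsive cells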
Decoding analyses: Linear SVM classifiers with 10-fold cross-validation decoded (a) fixations on reward versus partner; (b) fixations on non-social objects (self button versus random floor locations); (c) social versus non-social categories; and (d) self-choice versus partner-choice. Shuffle-corrected accuracies and chance levels were computed. Choice decoding was evaluated with and without preceding fixations on social cues (within 1,000 ms), with event counts balanced. Per-neuron decoder weights were extracted, and the variance, kurtosis, and skewness of the weight distributions were tracked across sessions.
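A minimal sketch of the decoding scheme using scikit-learn; the pipeline choices (standardization, LinearSVC settings) and shuffle count are assumptions, not the authors' exact setup.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def decode_events(X, y, n_shuffles=100, seed=0):
    """Shuffle-corrected decoding from population activity.

    X: (n_events, n_neurons) firing rates around each event.
    y: (n_events,) labels, e.g. 0 = 'view reward', 1 = 'view partner'.
    Returns (accuracy, mean label-shuffled accuracy ~ chance level).
    """
    rng = np.random.default_rng(seed)
    clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
    acc = cross_val_score(clf, X, y, cv=10).mean()
    chance = np.mean([
        cross_val_score(clf, X, rng.permutation(y), cv=10).mean()
        for _ in range(n_shuffles)
    ])
    return acc, chance

# Per-neuron weights for the weight-distribution analyses could be read from a
# model fit on all events, e.g.:
# clf = make_pipeline(StandardScaler(), LinearSVC(dual=False)).fit(X, y)
# weights = clf.named_steps['linearsvc'].coef_.ravel()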
Spike-timing coordination: Shuffle-corrected CCGs quantified pairwise synchrony within V4, within dlPFC, and between V4 and dlPFC. CCGs were considered significant when their peaks exceeded 4.5 s.d. of the CCG tails. Within-area peaks were analyzed at ±0–6 ms; inter-areal peaks at ±15–60 ms, to account for transmission delays. Mean coordination (peak coincident spikes) was tracked across sessions for social events (view reward, view partner, self-choice, partner-choice) and for non-social/random controls. Time-locking controls assessed whether coordination increases could be explained by event-locked rate changes. Relations between correlated neuron pairs and decoder weights were tested with Wilcoxon signed-rank tests.
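A minimal sketch of a shuffle-corrected spike-time CCG and the peak-versus-tail significance test; the shift-predictor correction and the bin/lag parameters here are illustrative assumptions rather than the paper's exact procedure.

import numpy as np

def trial_averaged_ccg(spikes_a, spikes_b, n_lag):
    """Trial-averaged cross-correlogram of coincident spike counts.
    spikes_a, spikes_b: (n_trials, n_bins) binary spike-train arrays, aligned
    to the same events; n_bins must exceed n_lag. Positive lags mean spikes in
    the first train follow spikes in the second."""
    mid = spikes_a.shape[1] - 1
    ccg = np.zeros(2 * n_lag + 1)
    for a, b in zip(spikes_a, spikes_b):
        ccg += np.correlate(a, b, mode='full')[mid - n_lag: mid + n_lag + 1]
    return ccg / len(spikes_a)

def shuffle_corrected_ccg(spikes_a, spikes_b, bin_ms=1, max_lag_ms=60):
    """Raw CCG minus a shift predictor (each trial of A paired with a different
    trial of B), removing coordination explained by event-locked rate changes."""
    n_lag = int(max_lag_ms / bin_ms)
    lags_ms = np.arange(-n_lag, n_lag + 1) * bin_ms
    raw = trial_averaged_ccg(spikes_a, spikes_b, n_lag)
    shift = trial_averaged_ccg(spikes_a, np.roll(spikes_b, 1, axis=0), n_lag)
    return lags_ms, raw - shift

def has_significant_peak(ccg, lags_ms, peak_window_ms=(0, 6), tail_ms=40, n_sd=4.5):
    """A pair counts as coordinated if the CCG peak inside peak_window_ms
    exceeds n_sd standard deviations of the CCG tails (|lag| >= tail_ms)."""
    in_window = (np.abs(lags_ms) >= peak_window_ms[0]) & (np.abs(lags_ms) <= peak_window_ms[1])
    peak = ccg[in_window].max()
    tails = ccg[np.abs(lags_ms) >= tail_ms]
    return peak > tails.mean() + n_sd * tails.std()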
Key Findings
- Behavior: Cross-correlograms of button presses showed coincident pushing increasing from ~60% in the first sessions to 80–90% in the last sessions. The time lag between the two animals' presses decreased significantly across sessions (P<0.05), indicating improved coordination. The conditional probability of cooperating increased by a mean of 54% across sessions (P<0.001), and response delay decreased by 93% (P<0.05).
- Viewing as a driver: Fixation rates on reward-related objects and on the partner were higher during trials (pre-cooperation) than during intertrial intervals (P<0.01). Transition probabilities for event pairs beginning with fixations on social cues increased significantly across sessions, by a mean of 220%. The low transition probability between actions alone (‘self-push’ → ‘partner-push’, ~0.1) indicated that fixations typically occurred between the two animals' pushes.
- Single units: In dlPFC, ~70% of cells increased firing around pushes (responses beginning ~1,000 ms before the push), and most responded to both self and partner choices; 55% showed mixed selectivity (responding to both fixations and choices). In V4, most cells responded to fixations on social cues (36% to reward, 52% to partner) and fewer to choice; 22% showed mixed selectivity.
- Decoding social cues and choice: Decoding of fixations on reward versus partner improved on average by 328% across learning in both V4 and dlPFC (P<0.01), whereas decoding of non-social objects did not improve. Category decoding (social versus non-social) in dlPFC improved by 228% (P<0.01). Choice decoding in dlPFC improved dramatically, by 5,481% across sessions (P<0.01); V4 decoded choice only near chance unless the push was preceded by fixations on social cues. Removing pushes preceded by social-cue fixations reduced choice decoding by 65% in V4 and 24% in dlPFC (P<0.001), yet dlPFC choice decoding still correlated with learning (P<0.05), indicating genuine decision encoding.
- Population code distribution: Decoder weight distributions for social-cue models exhibited decreasing variance, kurtosis, and skewness across sessions, indicating a shift from a few highly informative neurons early to a more evenly distributed representation later.
- Spike-timing coordination: Within V4, synchrony increased by 114% during fixations on social cues but not before choices. Within dlPFC, synchrony increased by ~137% during fixations and before self-choice, but not before partner-choice. Between V4 and dlPFC, coordination increased by 160% for social events except partner-choice. No changes were seen during fixations on random objects or during random time periods. V4–dlPFC CCG peaks occurred at both positive and negative lags, consistent with bidirectional (feedforward and feedback) interactions. Correlated V4–dlPFC neuron pairs carried higher decoder weights than uncorrelated neurons for social events, linking coordination to improved encoding.
Discussion
The findings show that in freely moving macaques, viewing social cues (partner and reward) becomes increasingly likely to precede cooperative actions as learning progresses. This behavioral change parallels improved neural encoding of social variables and elevated spike-timing coordination within and between visual (V4) and prefrontal (dlPFC) areas. Notably, dlPFC neurons not only discriminate social visual cues better than V4 but also robustly encode egocentric and allocentric choices, suggesting integrative, multimodal processing that supports decision-making. Increased interareal coordination preferentially occurs during task-relevant social events and is strongest among neurons that contribute most to decoding, consistent with strengthened functional coupling and potentially with synaptic plasticity or alignment of inter-areal communication subspaces. The lack of increased spike-timing coordination in dlPFC before the partner’s choice suggests that prediction of others’ behavior may be represented in mean rates rather than synchrony, or may rely on other circuits. Overall, the results support a mechanism in which visuo-frontal networks prioritize and distribute socially relevant visual information to guide cooperative decisions.
Conclusion
This study provides, to the authors’ knowledge, the first evidence in freely moving primates that visual cortex contributes to encoding socially relevant cues which, together with prefrontal decision signals, support cooperative behavior. As animals learn, population codes for social cues become more accurate and more evenly distributed, and visuo-frontal spike-timing coordination increases, especially among neurons most informative about social events. The work advances naturalistic social neuroscience by combining wireless neural and eye-tracking recordings, revealing vision as a key ‘social language’ in primates. Future research should examine how additional sensory modalities (odors, vocalizations, touch) integrate with visual cues and explore broader networks beyond V4 and dIPFC that may support allocentric prediction and social decision-making.
Limitations
- Sample size and generalizability: Only two macaque pairs were studied, and neural recordings were obtained from the subordinate (‘self’) animal, which may limit generalization across social roles and individuals.
- Brain coverage: Recordings were restricted to V4 and dlPFC; other areas (for example, amygdala, STS, parietal, motor) likely contribute to social decisions but were not recorded.
- Task context: Animals were visually separated by a clear divider and cooperated via button presses for food, which may not capture all aspects of natural social cooperation and physical interactions.
- Spike-timing specificity: Increased spike-timing coordination was not observed before partner-choice, leaving open questions about the temporal dynamics and circuit loci of allocentric prediction.
- Sensory modality focus: The study centers on visual cues; contributions of other modalities were not measured and may influence social decisions.