Synchronized affect in shared experiences strengthens social connection

Psychology


J. H. Cheong, Z. Molani, et al.

Discover how shared experiences during TV viewing enhance social bonding! This research by Jin Hyun Cheong, Zainab Molani, Sushmita Sadhukha, and Luke J. Chang reveals that emotional and physiological synchrony can predict social connection between viewing partners.

Introduction
The study asks how and when shared experiences—beyond merely co-attending to the same stimulus—foster social connection. While people often assume that co-experiencing events (e.g., films, concerts, ceremonies) elicits similar feelings, prior work shows substantial individual variability in thoughts and emotions even when exposed to identical stimuli. Such variability can lead to dynamic convergence and divergence in affect, physiology, and behavior over time. The authors posit that synchronized affective processes—captured in the temporal dynamics of facial expressions, physiological arousal, and aligned cognitive impressions—are key mechanisms that signal shared understanding and thereby strengthen interpersonal bonds. They test whether moment-to-moment temporal synchrony of facial affect, spatial alignment of how emotions are expressed, physiological synchrony (EDA), and similarity in character impressions during naturalistic co-viewing predict increases in self-reported social connection.
Literature Review
The paper situates its work within research on shared experiences and social bonding in groups, where synchrony (e.g., singing, coordinated movement) is linked to cohesion and prosociality. Prior studies have reported convergence in emotion expression frequencies among close others and physiological synchrony (e.g., EDA) across various social contexts, including rituals, conversations, and psychotherapy. However, much prior work emphasizes mean levels or frequencies of expressions rather than precise temporal alignment, leaving open whether co-experiencers “click” at the same moments versus influence one another via sequential contagion. The authors also cite research showing substantial variability in facial expression configurations across individuals and cultures, and chameleon/mimicry effects wherein behavioral alignment facilitates smoother, more affiliative interactions. They note challenges in interpreting facial displays due to strategic control (suppression/feigning), suggesting physiological measures like EDA—more difficult to volitionally control—offer complementary evidence of shared affect. This literature motivates a multimodal, temporally precise approach under naturalistic conditions to assess how synchronous affective processes support social connection.
Methodology
Design and participants: 86 participants viewed four 45-min episodes (Friday Night Lights) either alone (N=22) or as dyads (N=64; 32 dyads). Five dyads knew each other (two were close friends); excluding them did not change the results. Two dyads did not return for the second session and were excluded. Most dyads engaged in minimal conversation during viewing. An additional online sample (Amazon Mechanical Turk; approximately 192 recruited, N=188 after exclusions) provided time-stamped crowd-sourced emotion ratings (joy, surprise, sadness, fear, anger, disgust) during episode 1.

Stimuli and procedure: Participants completed two lab sessions (on average 2.65 days apart), watching two episodes per session either alone or side-by-side with a partner in the dyad condition. Dyad partners could see each other's faces naturally. After each episode, dyad members reported social connection to their partner and their enjoyment. Participants also rated 13 characters on eight impression dimensions (e.g., liking, wanting to be friends, attractiveness, relating to, caring about outcomes).

Data acquisition:
- Facial expressions: Head-mounted cameras (GoPro Hero4 Black; 1280×720) recorded faces. Twenty facial action units (AUs) and six emotion likelihoods were extracted with FACET (iMotions 6.0) after temporal alignment using FaceSync. The predicted emotion-evidence time series were aggregated into positive (primarily joy) and negative (maximum across anger, fear, disgust, sadness, and contempt) valence.
- Electrodermal activity (EDA): Recorded at 250 Hz (BIOPAC), band-pass filtered (0.005–15 Hz), and downsampled to 1 Hz. Three dyads lacking usable EDA (non-responders or acquisition error) were removed.
- Crowd-sourced emotions: Online participants watched episode 1 and reported their current emotion intensities at random intervals. Ratings were averaged, interpolated, and smoothed with a 30-s sliding window for alignment with the lab data.

Analyses:
- Temporal facial expression synchrony: Pearson correlations between dyad partners' positive and negative emotion trajectories per episode, compared against the alone group (all pairwise combinations of alone viewers) and against pseudo-dyads (random non-partner pairings). Significance was assessed via subject-wise bootstrapping and circularly shifted surrogate time series (see the code sketch after this list).
- Dynamic synchrony: A 30-s sliding window with dynamic time warping produced second-by-second synchrony trajectories, which were related to the crowd-sourced emotion ratings; cluster-based permutation tests identified scenes where dyads exceeded alone viewers in synchrony.
- Linking synchrony to connection: Linear mixed-effects models predicted post-episode connection ratings from positive and negative synchrony, with episode number as a covariate; penalized regression with leave-one-day-out cross-validation assessed the contributions of the six canonical expressions' synchrony.
- Spatial alignment of expressions: A shared response model (SRM; BrainIAK) reduced the multivariate AU time series to k=2 shared temporal components across participants, with subject-specific spatial weights (AU configurations). The first shared response aligned with joy and the second with fear and sadness. Spatial similarity of subject-specific weights within dyads indexed spatial alignment and was correlated with connection.
- EDA synchrony: Pearson correlation between partners' log-transformed EDA time series per episode; dynamic EDA synchrony was related to crowd-sourced emotions, and global EDA synchrony to connection, via mixed-effects regression.
- Impressions similarity: PCA on the eight character-impression questions retained five components explaining ~90% of the variance. Dyad similarity was computed in the reduced space and related to connection via mixed-effects models.
- Structural equation modeling (SEM): A latent "shared experience" factor was modeled with measured indicators (positive temporal facial synchrony, positive spatial facial synchrony, EDA synchrony, impression similarity) predicting connection, controlling for time spent together and for correlations among the facial measures.
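To make the temporal synchrony test concrete, here is a minimal Python sketch, assuming only NumPy, of pairwise Pearson synchrony with a circular-shift surrogate test of the kind described above; the variable names, series length, and surrogate count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def temporal_synchrony(ts_a, ts_b):
    """Pearson correlation between two viewers' emotion time series."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

def circular_shift_null(ts_a, ts_b, n_surrogates=1000, seed=0):
    """Null synchrony distribution from circularly shifted surrogates.

    Shifting one series by a random offset preserves its autocorrelation
    while destroying its moment-to-moment alignment with the partner.
    """
    rng = np.random.default_rng(seed)
    shifts = rng.integers(1, len(ts_b), size=n_surrogates)
    return np.array([temporal_synchrony(ts_a, np.roll(ts_b, s)) for s in shifts])

# Illustrative data: two partners' joy-evidence trajectories (~45 min at 1 Hz)
rng = np.random.default_rng(42)
scene_driven = rng.normal(size=2700)              # shared, stimulus-driven component
joy_a = scene_driven + rng.normal(scale=1.5, size=2700)
joy_b = scene_driven + rng.normal(scale=1.5, size=2700)

r_observed = temporal_synchrony(joy_a, joy_b)
null = circular_shift_null(joy_a, joy_b)
p_value = (np.sum(null >= r_observed) + 1) / (len(null) + 1)  # one-sided
print(f"observed r = {r_observed:.2f}, surrogate p = {p_value:.3f}")
```

In practice the same correlation would be computed for every dyad and episode, and the dyad averages compared against the alone-group pairings and pseudo-dyad baselines described above.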
Key Findings
- Temporal facial expression synchrony: Positive facial expression synchrony was significant in both groups for episode 1 (dyads r=0.23, SD=0.25, p<0.001; alone r=0.06, SD=0.13, p<0.001) and similarly across subsequent episodes. Average positive synchrony increased over time (dyads: β=0.05, t(28)=4.68, p<0.001; alone: β=0.01, t(14)=2.32, p=0.04). Across episodes, dyads showed greater positive synchrony than alone viewers (β=0.20, t(31)=5.01, p<0.001), with the difference increasing over episodes (β=0.03, t(34)=3.20, p=0.01). Paired dyads exceeded non-partner pseudo-dyads (β=6.81, p<0.001), and pseudo-dyads exceeded the alone group (β=0.07, t(74)=…, p<0.001). Negative expression synchrony showed minimal effects overall, with small dyad-versus-alone differences in some episodes (e.g., ep1 r=0.07, p=0.01; ep4 r=0.07, p=0.003).
- Event-locked dynamics: In episode 1, dynamic time-warped positive synchrony correlated with crowd-sourced joy (r=0.24, p<0.001) and inversely with sadness (r=−0.12, p<0.001). Cluster tests identified specific positive or humorous scenes with stronger dyad than alone synchrony.
- Synchrony predicts connection: Positive facial synchrony predicted greater connection across episodes (β=1.04, t(88)=3.36, p=0.001), connection also increased over time (β=0.25, t(81)=5.96, p<0.001), and the two interacted positively (β=0.38, t(81)=2.15, p=0.034); a minimal model sketch follows this list. Penalized regression with leave-one-day-out cross-validation predicted connection in new dyads (r=0.51), with only joy synchrony uniquely predictive (β=0.78, t(80)=2.16, p=0.034).
- Spatial alignment of expressions: SRM identified two shared temporal components; Shared Response 1 correlated with joy (r=0.42, p<0.001), and Shared Response 2 with fear (r=0.29, p<0.001) and sadness (r=0.23, p<0.001). Spatial similarity of Shared Response 1 AU configurations within dyads correlated with connection (ep1 r=0.40, p=0.036; ep2 r=0.36, p=0.062; ep3 r=0.47, p=0.011; ep4 ns), indicating that expressing positive affect in similar spatial patterns relates to connection.
- EDA synchrony: Dyads exhibited significant EDA synchrony and, dynamically, synchronized more than alone viewers during negatively valenced scenes (e.g., injury, arguments). Dynamic EDA synchrony in episode 1 was best explained by negative crowd-sourced emotions (e.g., fear r≈0.61, p<0.001; disgust β=0.12, t(623)=7.54, p<0.001; surprise β=0.05, t(623)=3.05, p<0.001). Across episodes, EDA synchrony related to facial expressions of fear, disgust, and surprise, and inversely to sadness; joy showed a weak positive association. Global EDA synchrony predicted connection (β=1.75, t(81)=3.77, p<0.001).
- Character impressions: PCA revealed five components (~90% variance), with PC1 (55%) reflecting general positive sentiment toward characters. Greater dyad similarity in impressions correlated with higher connection across episodes (Fig. 6c).
- Structural equation modeling: Temporal positive facial synchrony, spatial facial synchrony, and impression similarity loaded significantly on a latent shared-experience factor, which predicted connection (standardized estimate=0.59, p<0.001), controlling for time spent together (standardized variance=0.24, p<0.01).
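As a rough illustration of how a synchrony-to-connection model like the one reported above could be fit, the sketch below uses statsmodels' linear mixed-effects regression with a random intercept per dyad; the simulated data frame and column names (dyad, episode, pos_sync, connection) are assumptions for illustration, not the study's data or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per dyad per episode
rng = np.random.default_rng(1)
n_dyads, n_eps = 30, 4
df = pd.DataFrame({
    "dyad": np.repeat(np.arange(n_dyads), n_eps),
    "episode": np.tile(np.arange(1, n_eps + 1), n_dyads),
    "pos_sync": rng.normal(0.2, 0.1, n_dyads * n_eps),  # positive facial synchrony
})
# Simulated connection ratings with synchrony and episode effects plus noise
df["connection"] = (3 + 1.0 * df["pos_sync"] + 0.25 * df["episode"]
                    + rng.normal(scale=0.5, size=len(df)))

# Fixed effects for synchrony, episode, and their interaction; random intercept per dyad
model = smf.mixedlm("connection ~ pos_sync * episode", df, groups=df["dyad"])
result = model.fit()
print(result.summary())
```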
Discussion
Findings demonstrate that social connection during shared viewing arises from synchronized affective experiences across multiple channels. Crucially, it was not merely co-attending to the same content but phase-aligned positive emotional responses, with smiling and its specific AU configurations occurring at the same moments, that predicted stronger affiliation. EDA synchrony indicated shared sympathetic arousal, often heightened during negative or intense scenes, and also related to connection, suggesting that both positive and negative shared affect can support bonding, albeit expressed differently across modalities. Cognitive alignment—similar impressions of characters—further contributed to perceived connection. The importance of precise temporal synchrony distinguishes shared experience from sequential processes like contagion and underscores synchronous signaling as a mechanism for establishing shared understanding and affiliation. The multimodal naturalistic approach shows these processes can be jointly measured to model a latent shared experience that robustly predicts social connection.
Conclusion
The study advances a holistic, multimodal account of how shared experiences foster social bonds, showing that temporally synchronized positive facial expressions, alignment in the spatial configuration of those expressions, synchronized physiological arousal, and similarity in character impressions collectively support a latent shared experience that predicts connection. Positive affect synchrony was the strongest and most reliable behavioral predictor, while EDA synchrony highlighted contributions of shared negative arousal. Future research should test generalizability across social contexts (e.g., conversations, group interactions), explore stimuli with stronger negative valence (e.g., horror/thriller), refine computer vision models for negative expressions, integrate additional physiological and neural measures, and examine cultural norms and interpersonal dynamics that shape emotional display and synchrony.
Limitations
- Stimulus characteristics: The selected TV drama may have emphasized positive affect; negative facial synchrony effects were small and context-dependent, potentially limiting generalizability to more negatively valenced content.
- Measurement constraints: Automated facial expression models may have lower accuracy for negative emotions, and facial displays are subject to suppression or feigning, potentially decoupling behavior from internal state. Some EDA data were unusable (non-responders or acquisition errors).
- Social display norms: Participants watched with a stranger in close proximity, which may have discouraged overt negative displays, producing discrepancies between physiological and facial indices during negative events.
- Context specificity: Passive co-viewing with minimal conversation may not generalize to interactive settings (e.g., dialogues, group tasks). The findings may vary with different social structures, cultural norms, or relationship histories.
- Sample and design details: Limited information on broader demographics; potential minor typographical or reporting errors in descriptive statistics; two dyads did not complete all sessions; online emotion ratings came from a separate sample and covered only episode 1.