
The Arts
Exploring the combined impact of color and editing on emotional perception in authentic films: Insights from behavioral and neuroimaging experiments
Z. Cao, Y. Wang, et al.
This study by Zhengcao Cao, Yashu Wang, and others examines how film color and editing shape our emotional responses. Utilizing behavioral and fMRI techniques, the researchers provide fascinating insights into how different cinematic elements interact to influence our perceptions, revealing distinct brain activations. A must-listen for filmmakers and enthusiasts alike!
~3 min • Beginner • English
Introduction
The study addresses a key gap in film research: how color presentation (color vs. black-and-white) and editing context (fearful, neutral, happy) jointly influence viewers’ emotional perception. While color and editing are each known to shape affect and narrative engagement, their combined impact in authentic filmic contexts has been underexplored due to methodological challenges. The authors test whether color and editing interact to bias the perceived emotional valence of a neutral face embedded within a shot–reverse–shot sequence, and whether such interactions manifest as distinct neural activation patterns measured with fMRI. By integrating behavioral ratings and neuroimaging, the work aims to provide theory-grounded and practically relevant insights for filmmaking, consistent with Gestalt notions that viewers perceive films as integrated wholes.
Literature Review
The paper surveys historical and empirical foundations for the emotional roles of color and editing.
Color: The transition from black-and-white to color cinema, iconic uses (e.g., Schindler’s List, Hero), and psychological evidence linking color properties (hue, saturation, brightness) to affective valence (brighter, warmer tones often positive; darker tones negative). Studies indicate that color films tend to elicit higher valence ratings than black-and-white films, including in neutral contexts.
Editing: From early montage theories and the Kuleshov effect to contemporary replications using face–scene paradigms, editing context systematically alters the perceived emotion of a neutral face. Neurocinematic and cognitive work implicates regions such as the insula, ACC, amygdala, and precuneus in responses to editing and narrative continuity.
Despite these robust separate literatures, prior work has rarely examined the joint influence of color and editing; evidence on audiovisual interactions and contextual modulation suggests potential synergistic effects when the two are combined.
Methodology
Design: Two-factor mixed design with film color as a between-subject factor (color, black-and-white) and film editing as a within-subject factor (fearful, neutral, happy), yielding six conditions. Each participant completed 30 trials (3 blocks of 10).
Participants: N = 208 healthy adults (105 female), aged 17–40 (M = 22.64 ± 3.75). Participants with a film studies background were excluded and all were screened for panic disorder; all had normal or corrected-to-normal vision and gave informed consent. Allocation: materials rating (n = 24; 12 color, 12 black-and-white); Experiment 1, behavioral (n = 117; 58 color, 59 black-and-white); Experiment 2, fMRI (n = 67; 36 color, 31 black-and-white).
Stimuli and production: Professional filmmaker-led production using a shot–reverse–shot structure. Cam1 recorded 2-s neutral-face clips (against a blue screen) from gender-balanced actors; Cam2 recorded 4-s emotional scenes (fearful, neutral, happy) with an average shot length typical of Hollywood films. Color sequences retained full color; black-and-white sequences were grayscale conversions. Blue-screen faces were keyed and composited into the corresponding scenes; color temperature, brightness, and contrast were matched across conditions. Audio was omitted to isolate visual effects.
Materials rating: To confirm baseline equivalence of the neutral faces across conditions, 24 participants rated valence (-4 to +4) for neutral faces and emotional scenes (3.5-s response window; 2-s ITI). Neutral faces averaged 0.02 ± 0.09 with no differences across conditions. Emotional scenes differed significantly within color (F(2,8) = 119.650, p < 0.001, η² = 0.968) and within black-and-white (F(2,8) = 107.922, p < 0.001, η² = 0.964). Means (± SE): color fearful -2.46 ± 0.21, neutral 0.30 ± 0.18, happy 2.11 ± 0.14; black-and-white fearful -2.19 ± 0.24, neutral 0.48 ± 0.08, happy 1.98 ± 0.17.
Procedure, Experiment 1 (behavioral): Two practice trials; each trial presented a 2-s neutral face, a 4-s emotional scene, and a 2-s repetition of the same neutral face (Face_2). Post-sequence ratings: valence (-1 to 1) and emotional intensity (1–5), 5 s each; 1.5-s ITI. Conditions were counterbalanced across runs; 30 trials total.
Procedure, Experiment 2 (fMRI): Event-related design with temporal jitter to separate BOLD responses to faces and scenes. Each trial: 10-s instruction, 0.5-s fixation, 2-s neutral face, 4–6-s jitter, 4-s emotional scene, 4–6-s jitter, 2-s neutral face (Face_2), 0.65-s ISI, then 5-s ratings of valence (-4 to 4) and arousal (1–9); ITI 1–1.5 s; 30 trials total.
MRI acquisition: Siemens 3T Prisma. Structural T1 MPRAGE (TR/TI/TE = 2530/1100/2.27 ms, FA 7°, FOV 256 × 256 mm², 208 slices, 1-mm isotropic). Task fMRI T2*-weighted EPI (TR/TE = 2000/34 ms, FA 70°, FOV 200 × 200 mm², matrix 100 × 100, 72 slices, 2-mm isotropic, 480 volumes). Field map: TR/TE1/TE2 = 720/4.92/7.38 ms.
Preprocessing: Slice-timing correction; field-map distortion correction; motion correction (quality 0.9, 4-mm separation); Gaussian smoothing (kernel varied across steps); coregistration (normalized mutual information cost); tissue segmentation; normalization to MNI space (2-mm isotropic); group-level smoothing (FWHM 4 mm).
fMRI first-level GLM: BOLD responses to Face_2 events were modeled with HRF convolution, contrasting Face_2 versus baseline; the 10 trials per condition were averaged to produce one activation map per condition.
Second-level analysis: One-sample t-tests per condition; FWE-corrected p < 0.05, cluster size > 5 voxels; xjView used for region identification.
Similarity analysis: Whole-brain activation maps were correlated across conditions to compute Scolor (similarity across colors within the same editing condition) and Sediting (similarity across editing conditions within the same color); the ratio Scolor/Sediting was then used to compare the relative influence of the two factors.
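For readers who want to see the similarity step concretely, the following is a minimal Python sketch of the correlation and ratio computation described above. It is not the authors' pipeline (which is SPM-based and uses xjView for labeling): the NIfTI file names are hypothetical, and averaging Sediting over the two remaining editing conditions is an assumption about how that aggregate is formed.

```python
# Minimal sketch of the correlation-based similarity analysis and the
# S_color / S_editing ratio described in the Methods. Assumes six group-level
# activation maps exported as NIfTI files with a hypothetical naming scheme
# (map_<color>_<edit>.nii.gz); this is not the authors' SPM pipeline.
import numpy as np
import nibabel as nib

COLORS = ["color", "bw"]
EDITS = ["fearful", "neutral", "happy"]

# Load each condition's whole-brain map as a flat vector.
maps = {}
for c in COLORS:
    for e in EDITS:
        img = nib.load(f"map_{c}_{e}.nii.gz")          # hypothetical file names
        maps[(c, e)] = np.asarray(img.get_fdata(), dtype=float).ravel()

# Restrict all comparisons to voxels that are finite in every map.
mask = np.all([np.isfinite(v) for v in maps.values()], axis=0)

def similarity(a, b):
    """Pearson correlation between two activation maps within the shared mask."""
    return np.corrcoef(a[mask], b[mask])[0, 1]

for c in COLORS:
    other_c = "bw" if c == "color" else "color"
    for e in EDITS:
        # S_color: similarity across colors, same editing condition.
        s_color = similarity(maps[(c, e)], maps[(other_c, e)])
        # S_editing: mean similarity across editing conditions, same color
        # (averaging over the two remaining edits is an assumption).
        s_editing = np.mean([similarity(maps[(c, e)], maps[(c, e2)])
                             for e2 in EDITS if e2 != e])
        ratio = s_color / s_editing
        print(f"{c:5s} & {e:7s}: ratio = {ratio:.3f} "
              f"({'color' if ratio < 1 else 'editing'} more influential)")
```

A ratio below 1 means that switching color changes the activation map more than switching the editing context, which is how the paper interprets color as the more influential factor in a given condition.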
ROI analysis: Spherical ROIs (4-mm radius) were centered on literature-based coordinates: left ACC (-4, 26, 34), right ACC (6, 24, 30), left insula (-38, -1, -2), right insula (40, -1, -1), and right MFG (42, 36, 33). Beta values for Face_2 were extracted from each ROI and entered into 2 × 3 repeated-measures ANOVAs (color × editing), with Greenhouse-Geisser correction applied as needed (a minimal extraction sketch appears after this subsection).
Behavioral statistics: Repeated-measures ANOVAs for valence (primary), with emotional intensity (Experiment 1) and arousal (Experiment 2) as supplementary measures; Bonferroni-corrected post hoc comparisons and simple-effects tests.
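The ROI step reduces each Face_2 image to one mean beta value per sphere before the 2 × 3 ANOVA. Below is a minimal nilearn-based sketch of that extraction; the sphere coordinates and 4-mm radius come from the paper, while the Python toolchain, file layout, and use of per-condition beta images as inputs are illustrative assumptions rather than the authors' SPM workflow.

```python
# Illustrative sketch of the spherical-ROI beta extraction described above.
# Coordinates and the 4-mm radius come from the paper; file names and the
# nilearn-based workflow are assumptions (the original analysis is SPM-based).
from glob import glob

import pandas as pd
from nilearn.image import concat_imgs
from nilearn.maskers import NiftiSpheresMasker

ROI_COORDS = {
    "ACC_L":    (-4, 26, 34),
    "ACC_R":    (6, 24, 30),
    "Insula_L": (-38, -1, -2),
    "Insula_R": (40, -1, -1),
    "MFG_R":    (42, 36, 33),
}

paths = sorted(glob("sub-*/face2_*_beta.nii.gz"))        # hypothetical layout
img_4d = concat_imgs(paths)                              # stack subject-level maps
masker = NiftiSpheresMasker(seeds=list(ROI_COORDS.values()), radius=4.0)
betas = masker.fit_transform(img_4d)                     # shape: (n_images, n_rois)

df = pd.DataFrame(betas, columns=list(ROI_COORDS), index=paths)
print(df.head())
# Reshaped to long format, df can feed a 2 (color, between-subject) x
# 3 (editing, within-subject) ANOVA, e.g. pingouin.mixed_anova with
# Greenhouse-Geisser correction when sphericity is violated.
```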
Key Findings
Materials check: Neutral faces showed no valence differences across conditions (mean 0.02 ± 0.09); emotional scenes differed strongly within each color set (ps < 0.001).
Behavioral Experiment 1 (n = 117):
- Main effect of editing on valence (Greenhouse-Geisser corrected; very large effect size).
- Significant color × editing interaction on valence (p = 0.007, η² ≈ 0.048); no main effect of color (p = 0.733).
- Simple effects: under fearful editing, color sequences yielded lower valence than black-and-white (p = 0.024); under happy editing, color sequences yielded higher valence than black-and-white (p = 0.043). Editing significantly modulated valence relative to neutral editing in both color groups (both ps < 0.001).
- Emotional intensity showed no main effects or interactions.
Behavioral Experiment 2 (n = 67):
- Main effect of editing on valence (F(1.4, 91.2) = 168.000, p < 0.001, η² = 0.721); post hoc tests showed all three editing conditions differed (ps < 0.001).
- The color × editing interaction was not significant (F(1.4, 91.2) = 2.228, p = 0.129).
- Arousal: main effect of editing (F(1.7, 107.5) = 29.899, p < 0.001, η² = 0.315).
fMRI whole-brain results (Face_2):
- All six conditions elicited robust but distinct activation patterns (FWE-corrected p < 0.05). Key regions across conditions included the insula, ACC/MCC, IPG/AG/SPG, right MFG, and precuneus.
- Correlation-based similarity analysis indicated condition-specific activation patterns (all similarities < 1). The ratio analysis (Scolor/Sediting) showed color had the greater influence (ratio < 1) in four of six conditions: color & neutral (0.873), color & happy (0.512), black-and-white & fearful (0.934), and black-and-white & happy (0.654); editing was more influential (ratio > 1) in color & fearful (1.133) and black-and-white & neutral (1.694).
ROI results:
- Significant color × editing interaction in left ACC (p = 0.027, η² = 0.060).
- Editing main effects in left ACC (p = 0.026) and right ACC (p = 0.010); marginal in right MFG (p = 0.082).
- Simple effects: right ACC beta during neutral editing exceeded fearful editing (p = 0.019); left ACC beta in color & neutral exceeded color & fearful (p = 0.013) and color & happy (p < 0.001). The right insula showed a marginal main effect of color (p ≈ 0.069) and higher beta for black-and-white & happy than for color & happy (p = 0.035).
Overall: Behavioral data show that color modulates editing-driven valence in opposite directions for fearful versus happy contexts; fMRI reveals condition-specific neural patterns, with color often exerting the stronger influence on whole-brain differentiation and an interaction localized to the left ACC.
Discussion
The findings show that film color and editing jointly shape emotional perception, supporting a Gestalt perspective that audiences integrate filmic elements into a coherent whole. Behaviorally, editing reliably shifted perceived valence of a neutral face, and color amplified context-driven affect: in fearful contexts, color intensified negativity relative to black-and-white, while in happy contexts, color enhanced positivity. Neurally, distinct whole-brain activation patterns emerged for each color–editing combination, and similarity analyses indicated that color more often organized neural differentiation than editing. The left ACC, implicated in detecting and regulating emotional salience, exhibited an interaction between color and editing, aligning neural indices with behavioral interactions. Differences between behavioral valence and neural recruitment (e.g., stronger right insula/ACC activation in some black-and-white conditions despite higher valence for color) suggest that black-and-white may sometimes engage broader or different emotional-processing circuitry, pointing to complex affective computations not fully captured by valence ratings alone. Practically, these results argue for integrating color palette decisions with editing strategies during pre-production to optimize emotional impact; theoretically, they extend Kuleshov-style context effects by demonstrating their modulation through color and delineating associated neural mechanisms (insula, ACC, MFG).
Conclusion
The study demonstrates that color and editing interact to influence viewers’ emotional perception of neutral faces in authentic film sequences. Behaviorally, a significant interaction (Experiment 1) showed color amplifies editing-induced valence shifts; neurally (Experiment 2), each color–editing combination produced distinct activation patterns with color often exerting stronger differential effects, and an interaction was localized in left ACC. These convergent behavioral and neural data bridge neurocinematics and practice, offering guidance to filmmakers to co-design color palettes and editing approaches to enhance emotional resonance. The work lays groundwork for future investigations into multi-element cinematic interactions (including sound) and for translational testing of these insights within real-world production workflows.
Limitations
- Limited fMRI sample size (~30 per color group) reduced power to detect the valence interaction in Experiment 2, limiting generalizability.
- The absence of significant color effects on arousal (Experiment 2) or emotional intensity (Experiment 1) suggests the materials may not have captured the full complexity of color-driven physiological responses; alternative designs and materials are warranted.
- Sound was omitted to isolate visual effects; given audio’s major role in emotion, future studies should integrate and model interactions among color, editing, and sound.
- The insula’s specific role in color–editing interactions remains unclear and requires further clarification.
- Prospective validation in real-world filmmaking (e.g., A/B test screenings, industry workflows) is needed to confirm applicability and refine practical guidance.