Decoding individual identity from brain activity elicited in imagining common experiences

Psychology


A. J. Anderson, K. McDermott, et al.

This study examines whether individual differences in imagining common experiences are reflected in brain activity. Using fMRI scans of 26 participants vividly imagining various scenarios, the researchers show that person-specific models built from participants' own descriptions and ratings can identify individuals from their neural activity. This research was conducted by Andrew James Anderson, Kelsey McDermott, Brian Rooks, Kathi L. Heffner, David Dodell-Feder, and Feng V. Lin.

Introduction
The study investigates whether person-specific information present in mental imagery of common scenarios is reflected in fMRI activity and can be decoded to identify individuals. Prior work has shown overlapping brain networks for episodic recollection and imagination, and machine-learning approaches have decoded event categories or group-level differences using generic semantic models. However, it remained unclear whether between-subject differences in imagining the same type of event reflect meaningful, idiosyncratic representational structure rather than noise. The authors hypothesize that personalized models derived from each participant's own verbal descriptions and ratings of experiential attributes will predict that participant's fMRI representational structure better than group-average models, demonstrating readable neural signatures of individual identity during imagined experiences. This line of inquiry is motivated by advances in modeling semantic representations from language and by findings that fMRI can capture individual differences across cognitive domains, but such decoding had not previously been tested for first-person imagination of varied everyday scenarios.
Literature Review
The paper situates its contribution within several literatures:
(1) Episodic memory and imagination recruit overlapping networks, including medial parietal cortex, medial prefrontal cortex, inferior prefrontal cortex, and medial/lateral temporal lobes; prior fMRI studies can distinguish different event types and components but have not established whether between-individual differences during similar imagined events are meaningful.
(2) Individual-differences research shows that fMRI can predict personal traits and performance (e.g., resting-state activity predicting task activation topography; decoding object similarity or STEM exam performance; characterizing ongoing thought), but these paradigms differ from first-person scenario imagination.
(3) Clinical and group-difference studies reveal altered autobiographical-memory-related activation in disorders (Alzheimer's disease, semantic dementia, epilepsy) and classification between groups (e.g., suicidal ideation), yet such averages cannot explain detailed differences between individuals within the same group.
(4) Computational semantic modeling has decoded brain activity during reading or listening to sentences using group-level distributional semantics; however, generic models may miss idiosyncratic, subjectively salient content.
These gaps motivate a personalized, multimodal modeling approach to capture and interpret person-specific neural representations of imagined scenarios.
Methodology
Design: Participants imagined 20 common scenarios (e.g., reading, dancing, visiting a restaurant, attending a wedding or funeral, shopping). The study built two personalized models per participant: (a) a verbal model, mapping each participant's free-text description of each scenario into 300-dimensional GloVe word embeddings (content words extracted and their vectors summed to represent the scenario), and (b) an attribute model from participant ratings (Likert scale, 0–6) of 20 experiential attributes spanning sensory (e.g., color, motion, touch, audition, speech, music, taste), motor (upper/lower limb, body part), cognitive/spatial/landmark, social/communication, affective (pleasant, unpleasant), and temporal/spatiotemporal aspects. Ratings were z-normalized within participant. A combined personal model integrated the verbal and attribute similarity structures.

Participants: 26 participants completed scanning (17 female, 9 male; mean ± SD age 73 ± 7 years, as reported in the Results). Participants were healthy and English-speaking; five were left-handed. (Participant numbers and ages are reported inconsistently across sections of the paper, but the primary analyses use n = 26.)

Stimuli and procedure: For each scenario, participants first generated a brief description and attribute ratings. During fMRI, standardized text prompts ("A/An X scenario") cued re-imagination. The 20 scenarios were each presented five times in randomized order, and participants vividly re-imagined each scenario while the cue was on screen.

MRI acquisition and preprocessing: Standard preprocessing included slice-timing correction, motion correction, spatial normalization to MNI space, and resampling to 3×3×3 mm voxels. To accommodate variable hemodynamic responses during extended imagery, the analysis averaged BOLD residuals over a post-stimulus window centered around 15 s after prompt onset to construct per-trial activation estimates.

Feature extraction: The brain was parcellated into 90 AAL ROIs.
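The verbal-model step above (mapping a free-text scenario description into a single 300-dimensional GloVe vector by summing content-word embeddings) can be sketched as follows. This is a minimal illustration: the `embeddings` dict, the stop-word list, and the punctuation handling are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def scenario_vector(description, embeddings, dim=300):
    """Sum GloVe vectors of content words to represent one scenario.

    `embeddings` is assumed to map word -> np.ndarray of shape (dim,),
    e.g. loaded from a pre-trained GloVe file. The stop-word list is a
    crude stand-in for the paper's content-word extraction.
    """
    stopwords = {"a", "an", "the", "and", "of", "to", "in", "i", "my"}
    vec = np.zeros(dim)
    for token in description.lower().split():
        word = token.strip(".,!?;:")
        if word and word not in stopwords and word in embeddings:
            vec += embeddings[word]
    return vec
```

Scenario-to-scenario similarity in the verbal model is then just the correlation between two such summed vectors, which is what populates the verbal model's 20×20 similarity matrix.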
For each ROI and participant, the 100 most stable/informative voxels were selected (robustness checks also used 50 or 200). For each scenario, voxel activity was averaged across repetitions and vectorized to yield a 100-dimensional fMRI vector per scenario, per ROI, per participant.

Representational similarity analysis (RSA): For each participant, pairwise Pearson correlations among the 20 scenario representations were computed separately for fMRI (per ROI) and for each personal model (verbal, attribute, and their integration), yielding 20×20 similarity matrices. Lower-triangle vectors were Fisher z-transformed. Group-average (G-1) model similarity matrices were computed by averaging the other participants' model representations, excluding the test participant.

Hypothesis testing: (1) RSA tested whether model and fMRI similarity vectors were positively correlated (Spearman) across scenario pairs. (2) Partial RSA tested whether person-specific model–fMRI correlations remained positive when controlling for the corresponding group-average model, isolating idiosyncratic structure. Multiple comparisons across the 90 ROIs were FDR-corrected, and permutation tests assessed individual-level significance. (3) Searchlight RSA repeated the analyses with a moving cube (radius 3 voxels) to localize effects.

Identity decoding: A two-alternative forced-choice test asked whether a participant's personal model matched their own fMRI similarity structure better than another participant's. For each of the 325 participant pairs, the congruent sum of Fisher z-transformed RSA coefficients (P1 model–P1 fMRI plus P2 model–P2 fMRI) was compared with the incongruent sum (P1 model–P2 fMRI plus P2 model–P1 fMRI); accuracy was the proportion of correct pairwise matches. Permutation tests (10,000 shuffles) estimated chance distributions. Comparative decoding using the models alone (verbal vs. attribute, without fMRI) provided context.
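The RSA construction and two-alternative forced-choice identity decoding just described can be sketched as below. This assumes per-participant scenario-by-voxel activation matrices as input, and uses Pearson correlation throughout for brevity where the paper uses Spearman for the model–fMRI comparison.

```python
import numpy as np
from itertools import combinations

def rsa_vector(patterns):
    """Lower triangle of the scenario-by-scenario Pearson similarity
    matrix, Fisher z-transformed. `patterns` has shape (scenarios, voxels)."""
    r = np.corrcoef(patterns)
    tril = r[np.tril_indices_from(r, k=-1)]          # unique scenario pairs
    return np.arctanh(np.clip(tril, -0.999, 0.999))  # Fisher z

def pairwise_identity_accuracy(model_rsa, fmri_rsa):
    """2AFC decoding: for every participant pair, is the congruent
    (own-model-to-own-fMRI) match stronger than the swapped assignment?
    Inputs are dicts mapping participant id -> RSA vector."""
    def match(a, b):
        return np.arctanh(np.clip(np.corrcoef(a, b)[0, 1], -0.999, 0.999))
    pairs = list(combinations(model_rsa, 2))
    correct = 0
    for p1, p2 in pairs:
        congruent = match(model_rsa[p1], fmri_rsa[p1]) + match(model_rsa[p2], fmri_rsa[p2])
        incongruent = match(model_rsa[p1], fmri_rsa[p2]) + match(model_rsa[p2], fmri_rsa[p1])
        correct += congruent > incongruent
    return correct / len(pairs)
```

With 26 participants this yields the 325 pairwise comparisons reported in the paper; a permutation test would repeat the procedure with shuffled participant labels to estimate the chance distribution.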
Robustness analyses: Results were replicated across different voxel counts (50, 100, 200), alternative group-averaging methods (feature vs. similarity space), exclusion of low-correlation participants, hemisphere swaps for left-handed participants, and subgroup analyses by sex. Additional analyses showed that the verbal-only and attribute-only models each predicted person-specific fMRI structure; the combined model performed best.
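The partial RSA test described in the methodology (asking whether the personal model predicts fMRI similarity structure beyond the group-average model) amounts to a partial Spearman correlation. A minimal sketch, assuming equal-length similarity vectors and using plain ordinal ranks (ties not averaged) for brevity:

```python
import numpy as np

def _ranks(x):
    """Ordinal rank transform (a simplification: tied values are not
    assigned average ranks, as a full Spearman implementation would)."""
    return np.argsort(np.argsort(x)).astype(float)

def partial_spearman(x, y, control):
    """Spearman correlation between x and y after regressing the control
    variable (here, the group-average model's similarity vector) out of
    the ranks of both."""
    rx, ry, rc = _ranks(x), _ranks(y), _ranks(control)
    def residual(v, c):
        design = np.column_stack([np.ones_like(c), c])  # intercept + control
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    ex, ey = residual(rx, rc), residual(ry, rc)
    return float(np.corrcoef(ex, ey)[0, 1])
```

A reliably positive partial correlation between a participant's personal-model similarity vector and their fMRI similarity vector, with the group-average model controlled, is the signature of idiosyncratic representational structure the study reports.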
Key Findings
- Personalized models capture idiosyncratic neural representations: Partial RSA revealed that participants' own integrated personal models significantly predicted their fMRI representational structure over and above group-average models in multiple ROIs (reported p-values, FDR-corrected where applicable, included 0.0030, 0.0034, 0.0070, 0.0030, and 0.0400 across highlighted ROIs). Significant participant-level outcomes (p < 0.05, permutation) were observed in ROIs such as the precuneus and the left middle occipital, left middle frontal, and left angular gyri.
- Comparable magnitudes for personal and group models: RSA coefficients for person-specific and group-average models were broadly similar in magnitude across eight key ROIs (e.g., bilateral precuneus, left middle temporal, left inferior parietal/angular, left posterior cingulate, left middle occipital, left middle frontal). The two model types contribute complementary strengths (idiosyncrasies vs. higher signal-to-noise for commonalities).
- Searchlight localization: Person-specific representational structure localized primarily to medial parietal cortex (posterior/anterior cingulate and precuneus) and inferior parietal regions; person-specific models alone also highlighted dorsal/ventral medial prefrontal cortex, dorsolateral prefrontal cortex, and anterior temporal cortex.
- Identity decoding from fMRI: Pairwise identity decoding achieved around 75% accuracy across the eight ROIs (all p < 0.05, permutation), demonstrating that individual identity can be decoded from fMRI activity elicited during imagination of common scenarios. Comparative decoding using the models alone (verbal vs. attribute) achieved 83% accuracy (p = 0.0001), indicating complementary information across modalities.
- Robustness: Findings held across variations in voxel selection (50/100/200), group-model averaging strategies, exclusion criteria, handedness adjustments, and subgroup analyses. GloVe embeddings of the prompts alone correlated with some ROIs but could not account for interpersonal differences.
Discussion
The results demonstrate that individualized models built from each participant’s own verbal descriptions and experiential attribute ratings align with person-specific fMRI representational structures during imagination of everyday scenarios. This supports the view that fMRI can quantify meaningful inter-individual differences in complex imagined event representations. Medial parietal cortex (including posterior/anterior cingulate and precuneus) strongly encoded person-specific information, consistent with its role in episodic recollection/simulation, scene/space perception, and event segmentation. The left temporoparietal junction/inferior parietal/angular regions also encoded idiosyncratic information, aligning with first-person perspective taking, episodic recollection, and aspects of bodily self-consciousness. Left dorsolateral prefrontal cortex reflected individual differences, potentially indexing the control processes regulating scenario activation and suppression or representing control demands linked to content. Searchlight analyses further implicated medial prefrontal and anterior temporal regions, suggesting overlap with the semantic memory system and reinforcing the blurred boundaries between episodic and semantic representations. The study advances beyond one-size-fits-all group semantic models by showing that personalized multimodal models can predict neural representational geometry at the individual level and enable identity decoding. This unified modeling framework, related to approaches used in language comprehension, may bridge episodic and semantic memory representations within default network subsystems and has potential to relate to individual differences in ongoing thought and psychosocial traits. 
Applied implications include potential utility for characterizing disorders affecting episodic memory/imagery (e.g., Alzheimer’s disease, schizophrenia, depression), tailoring interventions, and tracking state-level dynamics of internally generated thought.
Conclusion
This work provides initial evidence that individual identity can be decoded from fMRI activity elicited during the imagination of common scenarios and that person-specific models derived from verbal descriptions and experiential attribute ratings better predict individuals’ neural representational structures than group-average models. The findings localize idiosyncratic representations to regions within the episodic simulation/recollection network, notably medial parietal and inferior parietal cortices, with additional contributions from prefrontal and anterior temporal regions. Future research should: (1) quantify the complementary contributions of personal versus group models; (2) examine the role of perspective (first- vs third-person); (3) explore dynamic information flow and HRF variability across individuals; (4) extend to broader populations (including younger adults) and clinical cohorts; (5) refine multimodal models to unify episodic and semantic features and track temporal dynamics of ongoing thought.
Limitations
- Perspective not manipulated: Only first-person imagery was modeled; effects of third-person perspective remain untested.
- Age and sample details: The text contains inconsistencies about participant ages across sections; generalizability across age groups (especially younger adults) is unresolved.
- Searchlight sensitivity: Searchlight analyses may have reduced sensitivity for patterns that do not fit within the searchlight radius; some prefrontal/inferior parietal effects were less evident than in the ROI analyses.
- HRF variability: Considerable inter-individual variation in hemodynamic responses during extended imagery motivates caution; averaging windows were used instead of a canonical HRF model.
- Attribute set abridged: The attribute model used an abridged set (20 of 63–65 attributes), potentially omitting relevant experiential dimensions.
- Multiple comparisons and regional effects: Some hippocampal/parahippocampal relations did not survive correction, limiting conclusions about these regions.
- Modality of prompts: GloVe embeddings of the prompts alone related to some ROIs but do not capture interpersonal differences, underscoring reliance on self-reports that may be subject to reporting biases.