Introduction
The neural mechanisms underlying face processing in humans remain a subject of debate, with competing modular and distributed models. Lesion studies, functional imaging, EEG, and MEG have provided insights, but the temporal dynamics across the cortex and within individuals remain largely unexplored. This study aimed to address this gap using intracranial electrocorticography (ECoG) recordings, offering high temporal resolution and precise localization of neural activity. The hypothesis was that face information is initially processed in anatomically distinct, face-selective sites before spreading to less selective sites. The study also aimed to investigate the causal role of face-responsive sites in conscious face perception through electrical stimulation.
Literature Review
Prior research using lesion studies, functional neuroimaging (fMRI), scalp EEG, and MEG has provided valuable spatial and temporal information on face processing, and studies in non-human primates have also contributed significantly. However, a long-standing controversy persists over whether face processing is modular (localized to specific brain regions) or distributed (spread across multiple regions). Some studies show face-selective responses in distinct temporal cortex (TC) regions, while others suggest that face information can be decoded from non-selective regions, implying a distributed representation. These studies are often limited by coarse temporal resolution, averaging over seconds, or by reliance on predefined regions of interest and averaging across subjects, all of which obscure the precise temporal dynamics.
Methodology
Eight neurosurgical patients with ECoG electrodes implanted as part of their presurgical evaluation for epilepsy participated. Electrodes provided coverage over the ventral and lateral TC. High-temporal-resolution (>1000 Hz) recordings were obtained while participants viewed face images from several categories (human, mammal, bird, marine) as well as non-face images. Univariate and multivariate analyses, including multiple kernel learning (MKL), were used to quantify face information across frequency bands, focusing primarily on high-frequency broadband (HFB, 70-170 Hz) activity. Electrical stimulation was used to assess the causal role of specific sites in face perception. To gauge the contribution of different brain regions to discriminating faces from non-faces, several classification models were compared: models using all sites, models excluding face-selective sites, models restricted to a random subset of sites, and sparse models. The response onset latency (ROL) was also calculated for each site and analyzed in relation to anatomical location and selectivity.
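As a rough illustration of the decoding-and-ablation logic described above (not the paper's actual MKL pipeline), the sketch below trains a plain logistic-regression classifier on synthetic "HFB power" features and compares accuracy with and without a small set of simulated face-selective channels. All channel counts, trial counts, and effect sizes are invented for the example.

```python
# Hypothetical sketch: decode faces vs. non-faces from per-site HFB power,
# then ablate the "face-selective" channels to test their contribution.
# Synthetic data only; a linear classifier stands in for the study's models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sites = 200, 40
selective = np.arange(4)                 # pretend ~10% of sites are face-selective

y = rng.integers(0, 2, n_trials)         # 1 = face trial, 0 = non-face trial
X = rng.normal(size=(n_trials, n_sites))  # baseline HFB power (z-scored units)
X[:, selective] += 2.0 * y[:, None]      # face trials boost the selective sites

clf = LogisticRegression(max_iter=1000)
acc_full = cross_val_score(clf, X, y, cv=5).mean()
acc_ablated = cross_val_score(clf, np.delete(X, selective, axis=1), y, cv=5).mean()
print(f"all sites: {acc_full:.2f}, without selective sites: {acc_ablated:.2f}")
```

With the signal concentrated in a few channels, dropping them should collapse decoding toward chance, mirroring the study's finding that removing face-selective sites degrades discrimination.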
Key Findings
Analysis of HFB activity revealed that only a minority of recording sites (10.64%) showed face-selective responses, clustered in the fusiform gyrus or lateral occipital gyrus. Human faces elicited the strongest activation in face-selective sites, followed by mammal, bird, and marine faces. A sparse model drawing only on human-face-selective sites discriminated faces from non-faces as well as or better than anatomically distributed models, and removing face-selective sites significantly decreased decoding accuracy. Electrical stimulation of the posterior fusiform site (pFUS), but not the medial fusiform site (mFUS), distorted conscious face perception. Both response time and selectivity followed a posterior-to-anterior gradient across face-selective and face-active sites. Together, these results support the hypothesis that face information is anatomically localized but temporally distributed.
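The response-time gradient reported above rests on per-site onset estimates. Below is a minimal sketch of one common ROL heuristic, the first sustained threshold crossing above baseline, applied to a simulated trial. The threshold rule, durations, and timings here are illustrative assumptions, not the study's exact procedure.

```python
# Hypothetical sketch of a response onset latency (ROL) estimate:
# the first time HFB power stays above a baseline-derived threshold.
import numpy as np

def response_onset_latency(trace, times, baseline_mask, n_sd=3.0, min_samples=10):
    """Return the first time `trace` exceeds baseline mean + n_sd * SD
    for at least `min_samples` consecutive samples, or None if never."""
    base = trace[baseline_mask]
    thresh = base.mean() + n_sd * base.std()
    run = 0
    for i, above in enumerate(trace > thresh):
        run = run + 1 if above else 0
        if run >= min_samples:
            return times[i - min_samples + 1]
    return None

# Simulated 1 kHz trial: noisy baseline, strong response starting at 150 ms.
times = np.arange(-200, 500)                 # ms relative to stimulus onset
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, times.size)
trace[times >= 150] += 8.0                   # sustained post-stimulus response
rol = response_onset_latency(trace, times, baseline_mask=times < 0)
print(rol)  # onset estimate in ms; the simulated response begins at 150 ms
```

Comparing such onset estimates across posterior and anterior sites is one simple way the kind of latency gradient described above could be quantified.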
Discussion
The findings support a model of face processing in which information is initially processed in a few anatomically discrete, face-selective sites, primarily in the posterior fusiform gyrus (pFUS), and then propagates anteriorly over time to a more distributed network of regions. The high temporal resolution of ECoG allowed these fast dynamics to be detected. The relatively small number of sites sampled by ECoG, and the inherent variability of task-active areas across individuals, may limit the ability to detect weak, distributed patterns of activity. The study highlights the crucial role of pFUS in conscious face processing, whereas task-active sites appear to play a limited role in that context. These findings reconcile the seemingly contradictory modular and distributed views of face processing.
Conclusion
This study provides compelling evidence for anatomically localized yet temporally distributed face processing in the human brain. The posterior fusiform gyrus plays a critical causal role in conscious face perception. Future research should explore the interactions between face-selective and other association areas to fully understand the complex network underlying face processing.
Limitations
The study used a relatively small sample size (eight participants). The use of ECoG limits the spatial coverage of the recordings, and the task-active areas may exhibit variability in size and composition across individuals. These factors might influence the interpretation and generalizability of the results. The study also primarily focuses on HFB activity, neglecting potentially valuable information from other frequency bands.