Selective Attention and Decision-Making Have Separable Neural Bases in Space and Time

Psychology


D. Moerel, A. N. Rich, et al.

Research conducted by Denise Moerel, Anina N. Rich, and Alexandra Woolgar separates selective attention from decision-making using a two-stage task and multimodal neuroimaging. The study reveals that attention boosts stimulus representations in early visual and frontoparietal regions before decisions begin, highlighting attention's independent role in neural coding.

Introduction
Selective attention prioritizes relevant information and improves behavioral performance for spatial locations and visual features, implying that attention modulates perception. In many cognitive neuroscience paradigms, however, the attended information is also the basis for the participant's decision, confounding selection and maintenance with decision-making processes. The adaptive coding hypothesis proposes that frontoparietal (Multiple Demand, MD) regions flexibly code task-relevant information, and prior MVPA work in humans shows stronger MD coding of cued objects and features than of distractors. M/EEG studies reveal sustained coding for cued information but not for distractors. Decisions themselves can drive decodable signals in early visual and frontoparietal areas, with time courses overlapping attention effects and complicating interpretation. There is limited evidence that attention effects can occur without decision-making, and prior work did not assess what information was coded for attended versus unattended stimuli. The present study asks: (1) using MEG, does attention affect stimulus information coding when dissociated in time from decision-making? (2) using fMRI, what information do MD regions hold (attended, unattended, decision)? (3) using model-based MEG-fMRI fusion, what is the spatiotemporal profile with which MD regions preferentially code cued over distractor information during the initial phase of the trial?
Literature Review
The authors review evidence that MD/frontoparietal regions flexibly code task-relevant information across diverse tasks (Duncan and colleagues; Fedorenko et al.) and show stronger multivariate coding for cued vs distractor objects or features. M/EEG studies show sustained coding for cued features at cued locations, with distractor coding transient. Conversely, decisions can be decoded from early visual and frontoparietal cortex and emerge around 140–180 ms in some paradigms, potentially overlapping with attention effects. Prior univariate work showed attention-related MD activity with minimal decision requirements, but without examining representational content of attended vs unattended stimuli. This motivates dissociating attention and decision signals experimentally and characterizing their spatial and temporal bases.
Methodology
Design overview: A two-stage visual task dissociated attention from decision-making by orthogonalizing the attended stimulus (color-defined orientation) and the decision (rotation direction) and separating them in time. Stage 1 presented overlaid oriented lines in two colors; Stage 2 presented a black comparison line; a subsequent response screen required a correctness judgment about the indicated rotation, orthogonal to rotation direction, to separate the decision from the motor mapping. MVPA decoded cued orientation, distractor orientation, rotation direction, and (for completeness) response button.

MEG experiment, participants and training: 31 participants were trained; 21 reached ≥90% accuracy in training; the final MEG sample was N=20 (14 female, 6 male; 18 right-handed; mean age 25.8 years, SD 5.1). A structured training session progressively introduced task elements and timing until MEG speed was reached (150 ms stimulus; 200 ms comparison). Feedback was given per trial and per block.

Task and stimuli: Participants maintained fixation; a block-wise cue indicated whether to attend blue or orange (block = 32 trials). Stage 1: overlaid blue and orange oriented gratings (within a 3° DVA window) at fixation for 150 ms; spatial frequency 2 cpd; colors approximately equiluminant (RGB blue = 72,179,217; orange = 239,159,115); phase randomized. After a 500 ms blank, Stage 2: a black comparison line (0.8° length, 0.1° width) for 200 ms. After another 500 ms blank, a response screen with arrows indicated clockwise or anticlockwise rotation; participants pressed one of two buttons to indicate whether the shown rotation was correct or incorrect. Button mapping was counterbalanced across participants. Feedback was given via fixation color (green/red) for 250 ms. The four possible orientations (22.5°, 67.5°, 112.5°, 157.5°) formed two orthogonal pairs; when the cued orientation came from one pair, the distractor and comparison came from the other, and the comparison was always 45° from the cued orientation, clockwise or anticlockwise.
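The orientation-pair logic above can be sketched in a few lines of Python (a toy illustration, not the authors' code; helper names are assumptions). It enumerates the 16 conditions and checks that the comparison line, always 45° clockwise or anticlockwise from the cued orientation, lands in the opposite orthogonal pair:

```python
# Toy sketch of the stimulus/condition logic (hypothetical helper names;
# orientation values taken from the text).
ORIENTATIONS = [22.5, 67.5, 112.5, 157.5]  # degrees
PAIR_A = {22.5, 112.5}                      # first orthogonal pair
PAIR_B = {67.5, 157.5}                      # second orthogonal pair

def other_pair(orientation):
    """Distractor and comparison orientations come from the other pair."""
    return PAIR_B if orientation in PAIR_A else PAIR_A

def comparison_orientation(cued, rotation):
    """Comparison line is 45 deg clockwise (+) or anticlockwise (-)
    from the cued orientation, wrapped into [0, 180)."""
    delta = 45.0 if rotation == "cw" else -45.0
    return (cued + delta) % 180.0

# Enumerate the 4 cued x 2 distractor x 2 rotation = 16 conditions.
conditions = [
    (cued, distractor, rotation)
    for cued in ORIENTATIONS
    for distractor in other_pair(cued)
    for rotation in ("cw", "acw")
]
assert len(conditions) == 16

# The comparison always falls in the pair opposite the cued orientation.
for cued, distractor, rotation in conditions:
    assert comparison_orientation(cued, rotation) in other_pair(cued)
```

This makes the orthogonalization concrete: cued orientation, distractor orientation, and rotation direction vary independently, so none can be predicted from another.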
Trial factors were fully counterbalanced within blocks; error trials could be re-presented (at most twice) to increase the number of correct trials, and successful retakes replaced error trials in the analysis.

MEG acquisition and preprocessing: Whole-head 160-axial-gradiometer system (KIT) sampled at 1,000 Hz; online filter 0.03–200 Hz; five head-position coils; a photodiode marked precise stimulus onset. Psychtoolbox (MATLAB) was used for presentation, with a two-button pad for responses. Data were epoched from -100 to 3,000 ms relative to Stage 1 onset and downsampled to 200 Hz. Error trials were replaced by successful retakes; unreplaced errors (~0.33%) were retained to preserve counterbalancing.

MEG decoding analysis: At each timepoint, SVM classifiers (CoSMoMVPA) were trained on all 160 sensors with leave-one-run-out cross-validation over 16 runs (one participant had 14). Decoded were: (1) cued orientation (within-pair 2-way classification; chance = 50%), (2) distractor orientation (same), (3) rotation direction (clockwise vs anticlockwise; chance = 50%), and (4) correct response button (for completeness). Bayesian statistics (Bayes factors) assessed above-chance decoding and cued > distractor differences over time, using half-Cauchy priors with null intervals; strong evidence was defined as BF > 10 at two consecutive timepoints. An exploratory channel searchlight used LDA classifiers over local sensor neighborhoods to map sensor contributions across time.

fMRI experiment, participants: 42 were trained; 27 reached ≥90% accuracy in training; the final fMRI sample was N=24 (15 female, 9 male; 23 right-handed; mean age 27.33 years, SD 5.53); one participant overlapped with the MEG cohort. Compensation was provided and ethics approval obtained. Task adjustments for fMRI: same task timing as MEG, but with sequence counterbalancing rather than full randomization, no re-presentation of errors, and no trial-wise feedback (block-level feedback was maintained). Eight runs of four blocks each, 32 trials per block (one participant completed seven runs).
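The time-resolved MEG decoding scheme described above (a per-timepoint SVM over all 160 sensors, with leave-one-run-out cross-validation across 16 runs) might be sketched as follows, using scikit-learn on simulated data in place of the authors' CoSMoMVPA/MATLAB pipeline; array shapes and variable names are illustrative assumptions:

```python
# Minimal sketch of time-resolved decoding (simulated data; not the
# authors' pipeline). One accuracy value per timepoint, chance = 0.5.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 128, 160, 20   # 160 MEG sensors
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)              # e.g. cued orientation (2-way)
runs = np.repeat(np.arange(16), n_trials // 16)  # run label per trial

logo = LeaveOneGroupOut()                     # leave-one-run-out folds
accuracy = np.empty(n_times)
for t in range(n_times):
    # Train/test a linear SVM on all sensors at this timepoint,
    # cross-validating across the 16 runs.
    scores = cross_val_score(SVC(kernel="linear"), X[:, :, t], y,
                             cv=logo, groups=runs)
    accuracy[t] = scores.mean()
```

On real data, the resulting accuracy time course is then compared against chance (and cued against distractor) with Bayes factors at each timepoint.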
Because of fMRI's lower temporal resolution, attention and decision could not be separated in time, but the design's orthogonality allowed a representational dissociation.

fMRI acquisition: Siemens 3T Prisma-Fit with a 32-channel head coil. Multiband EPI: TR = 1,208 ms, TE = 30 ms, flip angle 67°, FOV 192 mm, multiband factor 2, 38 slices of 3 mm thickness with a 10% gap, in-plane resolution 3 × 3 mm. A T1 MPRAGE (1 mm isotropic) was acquired at session start, with two practice blocks run during the structural scan.

fMRI preprocessing and GLM: SPM8 in MATLAB; EPI conversion to NIfTI, realignment, slice-time correction (SPM12 routine), coregistration, and normalization parameters for ROI definition. First-level GLMs included 16 regressors per run (4 cued orientations × 2 distractor orientations × 2 rotation directions). Whole trials were modeled from Stage 1 onset to response and convolved with the HRF.

ROIs: Thirteen MD ROIs from Fedorenko et al. (2013): six bilateral regions (aIFS, pIFS, PM, IFJ, AI/FO, IPS) plus a bilateral ACC ROI. Early visual cortex (V1) was defined as BA17 from the MRIcro Brodmann template. ROIs were transformed to native space via inverse normalization, and hemispheres were averaged for decoding.

fMRI decoding: Within each ROI, SVMs decoded cued orientation, distractor orientation, and rotation direction from beta patterns with leave-one-run-out cross-validation; accuracies were averaged over hemispheres. A Bayesian ANOVA tested attention (cued vs distractor) × MD-region effects; Bayesian t-tests assessed above-chance decoding and cued > distractor effects per ROI, including V1.

Model-based MEG-fMRI fusion: 16 × 16 RDMs were constructed (conditions = 4 cued orientations × 2 distractor orientations × 2 rotation directions) for MEG (per timepoint; 25 ms windows) and fMRI (mean MD, V1, and individual MD ROIs). Three orthogonal model RDMs encoded cued orientation (0/0.5/1 for 0°/45°/90° differences), distractor orientation (same scheme), and rotation direction (0 same, 1 different).
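The three orthogonal model RDMs can be reconstructed directly from the scheme just described (0/0.5/1 for 0°/45°/90° orientation differences; 0 same vs 1 different for rotation). A minimal sketch, with an assumed condition ordering:

```python
# Sketch of the three 16 x 16 model RDMs; the condition ordering and
# helper names are assumptions, the dissimilarity scheme is from the text.
import numpy as np

ORIENTATIONS = [22.5, 67.5, 112.5, 157.5]
# Distractor comes from the pair orthogonal to the cued orientation.
OTHER_PAIR = {22.5: (67.5, 157.5), 112.5: (67.5, 157.5),
              67.5: (22.5, 112.5), 157.5: (22.5, 112.5)}

def angdiff(a, b):
    """Smallest difference between two orientations (period 180 deg)."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

# Assumed ordering of the 16 conditions: cued x distractor x rotation.
conds = [(cued, dist, rot)
         for cued in ORIENTATIONS
         for dist in OTHER_PAIR[cued]
         for rot in (0, 1)]          # 0 = clockwise, 1 = anticlockwise

n = len(conds)                        # 16
cued_rdm = np.zeros((n, n))
dist_rdm = np.zeros((n, n))
rot_rdm = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        cued_rdm[i, j] = angdiff(conds[i][0], conds[j][0]) / 90.0  # 0/0.5/1
        dist_rdm[i, j] = angdiff(conds[i][1], conds[j][1]) / 90.0
        rot_rdm[i, j] = float(conds[i][2] != conds[j][2])          # 0 or 1
```

Because the three factors vary independently across the 16 conditions, the three model RDMs capture distinct, non-redundant structure, which is what lets the fusion analysis attribute shared variance to each model separately.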
Commonality analysis quantified the variance jointly shared by the MEG and fMRI RDMs that was uniquely explained by each model, yielding a time course per ROI. Permutation testing (10,000 shuffles of MEG RDM rows and columns) established cluster-corrected significance across ROIs and timepoints (one-tailed, p < 0.05, corrected for multiple comparisons).
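A toy sketch of the commonality computation and the row/column permutation used to build the null distribution, following the standard commonality-coefficient identity used in model-based fusion; this simplification runs plain OLS on raw RDM vectors with illustrative names, not the authors' exact pipeline:

```python
# Toy commonality analysis on RDM vectors (illustrative; not the
# authors' implementation).
import numpy as np

def r_squared(y, predictors):
    """R^2 of an OLS fit of y on a list of predictor vectors."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def commonality(fmri, meg, model, other_models):
    """Variance in the fMRI RDM shared with the MEG RDM that is uniquely
    attributable to `model`, via the commonality identity:
    C = [R2(fMRI ~ MEG + others) - R2(fMRI ~ others)]
      - [R2(fMRI ~ MEG + others + model) - R2(fMRI ~ others + model)]"""
    return ((r_squared(fmri, [meg] + other_models)
             - r_squared(fmri, other_models))
            - (r_squared(fmri, [meg] + other_models + [model])
               - r_squared(fmri, other_models + [model])))

def permute_rdm(rdm, rng):
    """Shuffle RDM rows and columns jointly (one null sample)."""
    perm = rng.permutation(rdm.shape[0])
    return rdm[np.ix_(perm, perm)]

# Toy demonstration: when MEG and fMRI RDMs carry the same model signal,
# that model's commonality is high; an unrelated model's is near zero.
rng = np.random.default_rng(1)
signal = rng.standard_normal(120)      # e.g. lower triangle of a 16x16 RDM
unrelated = rng.standard_normal(120)
c_signal = commonality(signal, signal, signal, [unrelated])
c_unrelated = commonality(signal, signal, unrelated, [signal])

meg_rdm = rng.standard_normal((16, 16))
null_rdm = permute_rdm(meg_rdm, rng)   # same values, scrambled structure
```

Repeating the permutation many times (10,000 in the study) and recomputing the commonality yields a null distribution from which cluster-corrected significance can be derived.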
Key Findings
Behavior and task performance:
- MEG session: High accuracy; mean 94.23% before error replacement and 99.67% after (remaining errors ~0.33%); mean RT 667 ms (SD 104 ms).
- fMRI session: Mean accuracy 94.45% (SD 4.91); mean RT 658 ms (SD 140 ms) from response screen onset.

MEG decoding over time:
- Cued orientation: Strong above-chance decoding from ~85 ms after Stage 1 stimulus onset, sustained until after the mean response time (BF > 10 criterion).
- Distractor orientation: Strong above-chance decoding from ~90 ms but not sustained; returned to chance by ~420 ms post-stimulus.
- Attention effect (cued > distractor): Emerged at ~215 ms after stimulus onset and persisted until after the mean RT, indicating selective maintenance of task-relevant information independent of decision processing.
- Rotation direction (decision): Above-chance decoding from ~170 ms after comparison line onset (i.e., ~820 ms after initial stimulus onset), sustained until after the response; decision-related coding emerged only once decision-relevant information became available.
- Response button (correct mapping): Decodable from ~1,580 ms (about 230 ms after the response screen), consistent with motor preparation/execution and orthogonal to rotation direction.

fMRI decoding by ROI:
- Attention effect across the MD network: A Bayesian ANOVA showed a main effect of attention (cued > distractor; BF > 100), no main effect of region (BF = 0.44), and an attention × region interaction (BF = 9.60).
- Mean MD decoding: Cued orientation mean accuracy 55.62% (BF > 100); distractor orientation at chance, 49.56% (BF < 0.01).
- Individual MD ROIs: Above-chance cued orientation decoding in all MD regions; distractor orientation at chance in aIFS, pIFS, PM, IFJ, AI/FO, and IPS (BF < 0.01); ACC showed evidence consistent with chance (BF = 0.20; CIs overlapped chance). The attention effect (cued > distractor) was strongly supported in aIFS, pIFS, PM, IFJ, and IPS, with some evidence in AI/FO and ACC.
- V1: Strong attention effect (BF > 100). Cued orientation mean accuracy 66.34% (BF > 100); distractor orientation 51.29%, with evidence for chance (BF = 0.28; CIs overlapped chance).
- Rotation direction (decision): Mean MD above chance at 52.43% (BF = 45.89), with significant contributions from PM and IPS individually; aIFS, pIFS, and IFJ showed substantial-to-inconclusive evidence for chance; V1 mean 52.67% with an inconclusive BF = 1.85 owing to greater variance.

Model-based MEG-fMRI fusion:
- Cued orientation commonality: Mean MD showed a significant cluster at 555–640 ms after stimulus onset, before comparison onset (i.e., before a decision was possible), and a later cluster from ~1,140 ms (≈490 ms after comparison onset) extending past the mean RT. Individual MD ROIs mirrored this pattern; the early pre-comparison cluster was significant in PM, IFJ, and AI/FO. V1 showed significant cued commonality at 225–505 ms after stimulus onset and again from ~1,210 ms until after the mean RT.
- Distractor orientation commonality: No significant clusters in mean MD or V1; a brief late cluster in aIFS (1,180–1,270 ms).
- Rotation direction commonality: Significant clusters in mean MD from ~1,140 ms (≈490 ms after comparison onset) until after the response; individual MD regions showed similar onsets (~1,160 ms).
Discussion
The two-stage, orthogonalized design dissociated attention from decision-making. MEG demonstrated that attention enhances neural coding of the attended stimulus well before decision formation is possible (attention effect from ~215 ms after stimulus onset), while decision-related coding (rotation direction) emerged only ~170 ms after the comparison line appeared, consistent with evidence-accumulation accounts. fMRI showed that MD regions preferentially represent attended visual information (and not distractor information) and also encode decision-related information, with PM and IPS contributing most strongly. V1 similarly exhibited stronger coding for cued than for distractor orientations. Model-based MEG-fMRI fusion linked the spatial and temporal dynamics, revealing that MD regions carry attended-stimulus information prior to decision onset and later represent decision signals; V1 coding of attended information arose earlier than in MD, with both regions maintaining attended information into the decision period. Together, these findings indicate that selective attention and decision-making have separable neural bases in both space and time, with MD cortex supporting selective coding of relevant sensory information independently of decision computations and later integrating decision information. The results reinforce the adaptive, mixed-selectivity nature of MD regions, which flexibly encode multiple task parameters, and underscore the value of combining MEG and fMRI to resolve spatiotemporal dynamics.
Conclusion
This study demonstrates that selective attention and decision-making are separable in neural coding: attention enhances stimulus representations early and independently of decision processes, while decision-related coding arises later once decision-relevant information is available. MEG pinpointed the temporal dissociation, fMRI localized effects to MD regions and V1 for attention and to PM/IPS for decision, and fusion analyses linked these in space-time, showing attended information in MD prior to decision. These findings verify that attention modulates information processing beyond decision demands and highlight MD cortex as a key substrate for selective attention and integration for decision-making. Potential future directions, motivated by noted limitations, include leveraging concurrent eye-tracking to exclude ocular confounds, applying fusion methods with improved sensitivity or participant-level fusion statistics, testing whether attentional advantages reflect enhancement of cued, suppression of distractor, or both, and probing whether the same neural populations within MD encode both attended sensory and decision information.
Limitations
- Eye movements were not recorded; although fixation was instructed and stimuli were foveal and brief (150 ms), residual eye-movement-related signals cannot be fully ruled out (the channel searchlight suggested posterior, not frontal, drivers of orientation decoding).
- fMRI's lower temporal resolution prevents temporal separation of attention and decision effects; only a representational dissociation via the orthogonal design was possible.
- Model-based MEG-fMRI fusion required group-averaged RDMs, precluding random-effects statistics across participants; permutation-based cluster testing was used instead. Fusion is also sensitive only to effects present in both modalities, potentially missing the transient coding of unattended information seen in MEG.
- The delayed-response design prevents precise characterization of the lag between the emergence of decision coding and the motor response.
- A small fraction of error trials remained (to preserve orthogonality), and error re-presentations were used; however, behavioral accuracy was very high, minimizing the impact.