A desire for distraction: uncovering the rates of media multitasking during online research studies

Psychology


A. C. Drody, E. J. Pereira, et al.

A secondary analysis of nearly 3,000 online participants reveals high rates of media multitasking during studies, averaging 38% and ranging from 9% to 85% across datasets, raising concerns about the interpretability of online research. Research conducted by Allison C. Drody, Effie J. Pereira, and Daniel Smilek.

Introduction
In recent years, and particularly during the pandemic, studies examining aspects of human cognition have increasingly been conducted online. Findings from these online settings generally replicate those obtained in in-person laboratory studies, and online testing is now used widely across cognitive disciplines. Interpretations of task performance from online studies often rest on the assumption that participants are fully attentive to the cognitive task they are completing: response time and accuracy data from various cognitive tasks are commonly taken as indices of the time course of information processing and the limits of the cognitive system. If individuals engage in considerable off-task behaviour while participating in online studies, these common assumptions could be called into question. Studies of off-task behaviours such as media multitasking (multitasking in which at least one of the tasks is media-based) show that this behaviour is pervasive across numerous daily activities. Prior work indicates that people spend about 17–31% of their media time multitasking, and that two-thirds of participants in one laboratory study volitionally chose to media multitask even when aware of its negative impact on performance. Despite this apparent preference to media multitask, a clear understanding of the degree to which the behaviour occurs within online studies is lacking. A few smaller studies have suggested prevalence of roughly 8–20% (surveys) and 14–19% (compliance checks), but a comprehensive analysis across larger datasets is needed to establish prevalence on a broader scale. The present study therefore investigated the prevalence of media multitasking within online studies via a secondary data analysis across multiple datasets that queried instances of this behaviour.
Literature Review
Prior research has documented widespread media multitasking across daily activities, with diary and lab-based studies estimating that individuals spend 17–31% of their media time multitasking and that two-thirds may choose to multitask even when it harms performance. The limited work examining online research contexts specifically has reported media multitasking prevalence of 8–20% (self-report surveys) and 14–19% (compliance checks). These studies suggest substantial media multitasking in online testing environments, but they lacked breadth across diverse tasks and samples, motivating a larger-scale secondary analysis.
Methodology
Study design and participants: The authors collated all existing datasets from online studies conducted by the Vision and Attention Laboratory at the University of Waterloo that included at least one direct measure of media multitasking during the study and used diverse experimental paradigms. Participants were healthy adults (18+), compensated with either course credit (SONA virtual participant pool) or approximately USD $7.25/hour (Amazon Mechanical Turk). Informed consent was obtained; ethics approval was granted by the University of Waterloo Research Ethics Board, and all studies complied with the Tri-Council Policy Statement. Datasets were accessed in April 2022.

Sample: The final sample comprised 2,972 participants from 16 separate online studies collected between May 2016 and April 2022. Based on available demographics: 1,760 women, 988 men, 11 non-binary, 1 two-spirit, and 30 not specified (Mage = 25.08, SDage = 9.80). Tasks included seven video viewing tasks, eight sustained attention tasks (including n-back and SART), and one collaborative group project task. Eleven studies used SONA; five used Amazon Mechanical Turk.

Primary outcome variable: Media multitasking prevalence was assessed via post-task self-report questions directly probing whether participants engaged in other activities during the task. Question wording and response options varied across datasets. Participants were scored as having engaged in media multitasking if they (1) answered “Yes” to multitasking during the task (datasets 1–14 and 16), or (2) reported any non-zero percentage of time engaged in media-related activities on a 0–100% slider (dataset 15).

Analytic approach: A meta-analysis of single-proportion data was conducted across all datasets using a random-effects model, given expected heterogeneity across studies in samples, designs, and outcome definitions. Heterogeneity statistics supported this choice (Cochran’s Q(15) = 1027.70, Higgins I² = 98.54%). Population proportion estimates and study-specific standard errors were adjusted using a modified sampling weight W* = 1/(v + τ²), where v is the within-study variance and τ² is the between-study variance (see OSF: https://osf.io/bd94j/ for calculations). Bias was assessed via Begg’s rank correlation, Egger’s regression, and the LFK index.
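A minimal sketch may help make the weighting concrete. The Python snippet below is not the authors' code (their calculations are available on OSF); it uses hypothetical proportions and sample sizes to illustrate how a DerSimonian–Laird random-effects meta-analysis of single proportions pools estimates with modified weights W* = 1/(v + τ²).

```python
# Illustrative sketch of a random-effects meta-analysis of single proportions.
# Proportions and sample sizes below are hypothetical, not the study's data.
import math

# (proportion of multitaskers, sample size) for each hypothetical dataset
studies = [(0.12, 180), (0.45, 220), (0.62, 150), (0.30, 310)]

p = [s[0] for s in studies]
v = [s[0] * (1 - s[0]) / s[1] for s in studies]   # within-study variance of a proportion
w_fixed = [1 / vi for vi in v]                     # fixed-effect (inverse-variance) weights

# Cochran's Q and DerSimonian-Laird estimate of between-study variance tau^2
p_fixed = sum(wi * pi for wi, pi in zip(w_fixed, p)) / sum(w_fixed)
Q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w_fixed, p))
df = len(studies) - 1
C = sum(w_fixed) - sum(wi ** 2 for wi in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

# Random-effects pooling with the modified weights W* = 1 / (v + tau^2)
w_star = [1 / (vi + tau2) for vi in v]
p_pooled = sum(wi * pi for wi, pi in zip(w_star, p)) / sum(w_star)
se_pooled = math.sqrt(1 / sum(w_star))
ci_low, ci_high = p_pooled - 1.96 * se_pooled, p_pooled + 1.96 * se_pooled

i2 = max(0.0, (Q - df) / Q) * 100                  # Higgins I^2 (%)
print(f"pooled = {p_pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], "
      f"Q({df}) = {Q:.2f}, I^2 = {i2:.1f}%, tau^2 = {tau2:.3f}")
```

The larger τ² is relative to the within-study variances, the more the weights equalize across studies, which is why a random-effects pooled estimate is less dominated by the largest samples than a fixed-effect one.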
Key Findings
- Overall prevalence: Media multitasking proportions ranged from 0.09 to 0.85 across the 16 studies (τ² = 0.06). The random-effects model yielded an overall point estimate of 0.38 (95% CI [0.25, 0.50], p < 0.001).
- Heterogeneity: Cochran’s Q(15) = 1027.70; Higgins I² = 98.54%.
- Subgroup analyses (no significant differences):
  • Recruitment method: SONA = 0.44 (95% CI [0.30, 0.57]) vs MTurk = 0.25 (95% CI [0.06, 0.45]); p = 0.12.
  • Task type: Video Viewing = 0.38 (95% CI [0.15, 0.61]) vs N-back = 0.37 (95% CI [0.14, 0.60]); p = 0.90.
  • Task length: 0–30 min = 0.47 (95% CI [0.39, 0.54]); 30–45 min = 0.29 (95% CI [0.20, 0.36]); 45–60 min = 0.38 (95% CI [0.28, 0.48]); p = 0.35.
  • Gender: Women = 0.39 (95% CI [0.24, 0.53]); Men = 0.41 (95% CI [0.30, 0.52]); p = 0.75.
  • Question focus: Media-specific (datasets 8, 9, 15, 16) = 0.27 (95% CI [0.06, 0.48]) vs General off-task (datasets 1–7, 10–14) = 0.41 (95% CI [0.26, 0.57]); p = 0.26.
  • Question descriptiveness: Descriptive (datasets 1–5, 12–16) = 0.35 (95% CI [0.23, 0.46]) vs Non-descriptive (datasets 6–11) = 0.14 (95% CI [0.16, 0.70]); p = 0.46.
- Bias assessments: No evidence of bias (Begg’s p = 0.63; Egger’s p = 0.36; LFK index = 0.76).
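As a quick consistency check using the standard Higgins formula I² = (Q − df)/Q (an illustration, not a reanalysis of the paper's data), the reported Q with its 15 degrees of freedom reproduces the reported I²:

```python
# Standard Higgins I^2 computed from Cochran's Q and its degrees of freedom.
Q, df = 1027.70, 15
print(f"I^2 = {(Q - df) / Q * 100:.2f}%")  # prints "I^2 = 98.54%", matching the reported value
```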
Discussion
The study addressed whether participants in online cognitive research frequently engage in off-task behaviours, specifically media multitasking, during experimental tasks. The meta-analytic results show that media multitasking is common (average 38%; range 9–85%), challenging the assumption that online participants devote full attention to tasks. This raises concerns about the interpretation, reliability, and generalizability of results from online cognitive studies, which may reflect cognitive processing under disengaged conditions. Individual differences in performance on tasks that index trait-like abilities (e.g., n-back for working memory) may be confounded by tendencies to multitask. The authors situate these findings within broader evidence that inattention also occurs in laboratory settings (e.g., mind-wandering estimates of 30–70%). Media multitasking online may partially replace the mind-wandering observed in lab settings, suggesting disengagement is a general concern across testing environments. Online contexts may exacerbate the issue by allowing a wider array of off-task activities (e.g., smartphone use, TV), potentially introducing more varied and uncontrolled noise depending on the overlap of sensory and cognitive resources between the task and the concurrent media activity. The authors emphasize the importance of measuring off-task behaviours in online studies and considering their influence on observed effects and individual differences.
Conclusion
Across 16 online studies and nearly 3,000 participants, media multitasking was frequent, averaging 38% and ranging from 9% to 85%, indicating that many participants do not devote full attention during online tasks. These findings call for caution when interpreting cognitive performance from online studies and underscore the need to routinely assess and account for off-task behaviours. Future research directions include: (1) characterizing the temporal dynamics of media multitasking with moment-to-moment measures (e.g., trial-level options to multitask, passive tracking of window focus or screen changes); (2) clarifying dose–response relations by quantifying the amount of multitasking rather than relying solely on binary self-reports; (3) exploring motivational and opportunity cost factors that drive multitasking, including interventions that enhance task value or motivation to reduce off-task behaviour; and (4) broadening analyses across diverse labs, tasks, and populations to assess generalizability and boundary conditions.
Limitations
- Measurement limitations: Media multitasking was assessed via self-report, often with binary yes/no responses, which may conflate brief or infrequent bouts (e.g., checking a phone) with sustained multitasking (e.g., listening to music throughout). Variability in question phrasing and in whether media examples were provided could contribute to variance, although subgroup analyses did not show significant differences.
- Selection scope: All datasets came from a single research group (Vision and Attention Laboratory, University of Waterloo). While the sample was large and diverse (task types, recruitment sources, demographics, pre- and during-pandemic periods), generalizability beyond these studies should be confirmed.
- Temporal granularity: Multitasking was measured holistically at task end rather than continuously, limiting insight into when multitasking occurred and its dynamic relation with performance.