COVID-19 and science advice on the ‘Grand Stage’: the metadata and linguistic choices in a scientific advisory group’s meeting minutes

Interdisciplinary Studies

H. Baker, S. Concannon, et al.

This research examines the communication strategies of the Scientific Advisory Group for Emergencies (SAGE) during the COVID-19 pandemic, offering insights into transparency, plurality of expertise, and the nuances of scientific communication. Conducted by a team from the University of Cambridge and the Technical University of Munich, the analysis traces key linguistic trends in SAGE's meeting minutes.

Introduction
Science advice for governments attracted great scrutiny during the COVID-19 pandemic, with the public spotlight on institutions and individual experts. SAGE, the UK’s primary provider of coordinated scientific and technical advice during emergencies since 2009, had never convened for such a prolonged period and faced critical attention over transparency, its influence on unprecedented policy decisions (e.g., national lockdowns), and its communication of uncertainty. For the first time during an ongoing emergency, the UK Government released SAGE's advice, in the form of meeting minutes, into the public domain. This study uses SAGE's minutes (22 January 2020–13 May 2021) to explore how they construct SAGE's role and communicate SAGE's protocols. A literature review identified four key themes: transparency, plurality of expertise, the science–policy boundary, and consensus/uncertainty. The research questions are:
RQ1: Did SAGE’s approach towards transparency and plurality of expertise change throughout the study period?
RQ2: How is SAGE’s role constructed within their meeting minutes?
RQ3: How is consensus and uncertainty communicated within SAGE’s meeting minutes?
The study investigates the minutes’ metadata (release lags, attendance) and linguistic choices to understand authority, consensus, and uncertainty, focusing on how publication and language practices may affect public understanding of the advisory system and of scientific ambiguity. Empirically, the study finds increased transparency but scope for clearer depiction of specific expertise; evidence that SAGE sometimes takes stronger stances and discusses policy choices; presentation of a consensus view despite an increasing number of voices; and increased marking of uncertainty over time. The paper presents the theoretical framing (Hilgartner’s ‘Science on Stage’ and stance frameworks), methods, analysis aligned to the key themes, and discussion and conclusions.
Literature Review
The study is framed by Science and Technology Studies scholarship on credibility and the dramaturgical metaphor of ‘Science on Stage’ (Hilgartner), emphasising stage management between front-stage visibility and backstage processes. Transparency is crucial for trust but has limits and involves value-laden choices. Plurality and diversity of expertise are advocated to avoid dominance by narrow perspectives, and openness about experts’ disciplines is encouraged. The science–policy boundary is complex; neutrality can conflict with usefulness. Advisory bodies may function as deliberative sites that describe implications and policy choices, ranging from ‘normatively light’ to ‘normatively heavy’ advice. Consensus carries social authority but can push disagreement and uncertainty backstage; best practice often includes publishing dissenting views. The paper also draws on linguistic research on certainty/uncertainty markers (hedges, boosters, attitude markers, and self-reference), using Hyland’s stance framework. In COVID-19 contexts, stance markers and promotional language patterns have been studied in academic and news genres. The authors argue that SAGE minutes are historical records of an evolving ‘front stage’ performance, and they review SAGE guidance and the Code of Practice for Scientific Advisory Committees (CoPSAC) to situate expectations about transparency, plurality, boundary work, and the communication of consensus and uncertainty, including guidance to highlight uncertainty, levels of consensus, and differences of opinion.
Methodology
Design: Mixed-methods analysis of publicly available SAGE meeting minutes from 22 January 2020 to 13 May 2021 (meetings 1–89).
Data source: SAGE minutes published on gov.uk, which serve both as records and as communicative instruments of advice to ministers, supporting transparency during COVID-19. Minutes are anonymised, unattributable, and intended to be understandable by the public.
Time segmentation: The corpus was divided into five timeframes (TF1–TF5) anchored to key public/media moments: TF1 (meetings 1–16; 22 Jan–16 Mar 2020), TF2 (17–39; 18 Mar–28 May 2020), TF3 (40–58; 4 Jun–1 Sep 2020), TF4 (59–74; 24 Sep–22 Dec 2020), TF5 (75–89; 7 Jan–13 May 2021).
Method 1 (metadata analysis): To address RQ1 (transparency and plurality), the authors recorded meeting and publication dates to compute release delays, and compiled an expert attendance database from the minutes, classifying each attendee’s most frequent role (expert, observer, secretariat) and primary institutional affiliation. Counts of meetings attended per person were computed to test for a ‘core group’ and to quantify plurality over time. Average, minimum, and maximum numbers of named attendees and redactions were tracked by timeframe.
Method 2 (linguistic analysis): To address RQ2–RQ3 (role construction; consensus/uncertainty), the authors examined explicit self-references to SAGE (e.g., “SAGE advised…”) and stance markers following Hyland (2005). First-person pronouns were rare, so the analysis focused on explicit SAGE self-references (n=764). A stance marker list derived from prior work (e.g., Shen & Tao, 2021) was refined via manual inspection, classifying instances as hedges, boosters, or attitude markers; ambiguous cases were double-annotated with agreement thresholds (e.g., “will”: 99.3% agreement). Frequencies were normalised per 1000 words. The emergence of explicit confidence labels (e.g., “low/medium/high confidence”) was also tracked.
Statistical analysis: Non-parametric Kruskal–Wallis tests were run across timeframes; significant results were followed by Bonferroni-corrected pairwise comparisons. Descriptive statistics summarised variation.
Ethics: Publicly available data under the Open Government Licence; names were anonymised in analysis outputs. The institution confirmed that formal ethical approval beyond local review was not required.
Illustrative sketches of the metadata, stance-marker, and statistical steps follow below.
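A minimal sketch of Method 1's metadata computations (not the authors' code), assuming two hypothetical CSV files: meetings.csv with columns meeting_no, meeting_date, publication_date, timeframe, and attendance.csv with columns meeting_no, person_id, role:

```python
import pandas as pd

# Hypothetical inputs: one row per meeting, and one row per person per meeting.
meetings = pd.read_csv("meetings.csv", parse_dates=["meeting_date", "publication_date"])
attendance = pd.read_csv("attendance.csv")

# Release delay in days, then the average delay per timeframe (TF1-TF5).
meetings["release_delay_days"] = (
    meetings["publication_date"] - meetings["meeting_date"]
).dt.days
print(meetings.groupby("timeframe")["release_delay_days"].mean())

# Meetings attended per expert, to probe for a 'core group' that attended
# more than 50% of all meetings.
experts = attendance[attendance["role"] == "expert"]
meetings_per_expert = experts.groupby("person_id")["meeting_no"].nunique()
n_meetings = meetings["meeting_no"].nunique()
core_group = meetings_per_expert[meetings_per_expert > 0.5 * n_meetings]
print(f"{len(core_group)} of {meetings_per_expert.size} experts attended "
      f"more than half of all {n_meetings} meetings")
```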
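Similarly, a sketch of the normalisation step in Method 2. The marker lists below are illustrative stand-ins rather than the refined list used in the paper, and the paper's classification additionally involved manual inspection and double-annotation of ambiguous cases:

```python
import re

# Illustrative (not exhaustive) stance marker lists in the spirit of Hyland (2005).
STANCE_MARKERS = {
    "hedges": ["may", "might", "could", "possibly", "likely", "suggests"],
    "boosters": ["clearly", "must", "certainly", "demonstrates"],
    "attitude": ["should", "important", "agreed"],
}

def stance_rates_per_1000_words(minute_text: str) -> dict:
    """Count stance markers in one set of minutes, normalised per 1000 words."""
    words = re.findall(r"[a-z']+", minute_text.lower())
    counts = {
        category: sum(words.count(marker) for marker in markers)
        for category, markers in STANCE_MARKERS.items()
    }
    return {category: 1000 * n / len(words) for category, n in counts.items()}

# Example sentence written in the style of the minutes.
print(stance_rates_per_1000_words(
    "SAGE agreed that transmission may increase and advised that early action is important."
))
```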
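Finally, a sketch of the statistical step. The per-meeting rates are invented placeholder values, and the pairwise post-hoc test shown (Mann-Whitney U with a Bonferroni correction) is one common choice; the summary does not specify which pairwise test the authors used:

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Placeholder per-meeting hedge rates (per 1000 words) grouped by timeframe.
rates = {
    "TF1": [2.1, 2.4, 1.9, 2.6],
    "TF2": [2.3, 2.0, 2.5, 2.2],
    "TF3": [2.4, 2.7, 2.1, 2.5],
    "TF4": [3.4, 3.1, 3.6, 3.0],
    "TF5": [3.5, 3.2, 3.8, 3.3],
}

# Omnibus test across the five timeframes.
h, p = kruskal(*rates.values())
print(f"Kruskal-Wallis: H = {h:.3f}, p = {p:.4f}")

# Follow up a significant omnibus result with Bonferroni-corrected pairwise tests.
if p < 0.05:
    pairs = list(combinations(rates, 2))
    alpha = 0.05 / len(pairs)  # Bonferroni-corrected threshold
    for a, b in pairs:
        _, p_pair = mannwhitneyu(rates[a], rates[b])
        if p_pair < alpha:
            print(f"{a} vs {b}: p = {p_pair:.4f} (significant after correction)")
```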
Key Findings
Transparency and publication lags:
- The average delay between meeting and publication decreased over time: TF1 99.8 days; TF2 41.5; TF3 40.9; TF4 33.2; TF5 17.1 days. TF4–TF5 generally met SAGE’s 30-day target, though some outliers exceeded it. Minutes became available in HTML as well as PDF, improving accessibility.
- The minutes evidence SAGE’s support for transparency (e.g., endorsing release of participants’ names; stressing putting maximum information into the public domain). It remains unclear from the minutes who ultimately decides the content and timing of releases.
Plurality and ‘core group’:
- Across the study period, 142 individuals were classified as experts; only 32 attended more than 50% of all meetings. Nine experts attended more than 50% of meetings in every timeframe; 16 did so in four timeframes. Sixty-three experts attended only one meeting. Among observers, 39 of 85 attended only one meeting.
- The average number of named experts per meeting rose from 15.1 (TF1) to 38.3 (TF5), indicating increasing plurality. Redactions were fewer for experts than for observers and secretariat.
Role construction and stance:
- Self-references (e.g., “SAGE advised/recognised/recommended…”) explicitly position SAGE as scientific advisors, often delimiting the responsibilities of government departments and operational bodies. The minutes sometimes discuss policy options and implications (e.g., frameworks for policy choices), indicating a tension between neutrality and usefulness.
- Attitude markers such as “should”, “important”, “will”, and “SAGE agreed” highlight evaluative judgements and strong stances (e.g., “SAGE advised strongly…”).
Consensus portrayal:
- The minutes present a central/consensus view. Explicit signposting of disagreement was rare; unanimity is occasionally emphasised (e.g., “SAGE was unanimous…”). Wording such as “consensus view” is used to support recommendations.
Uncertainty communication and stance markers over time:
- Hedges were more frequent overall than boosters and attitude markers. Confidence labels (e.g., “high/medium/low confidence”) were introduced sporadically early on, with more regular use from Meeting 31 onward; of 405 total instances, 52.59% were “high”, 33.58% “medium/moderate”, and 13.83% “low/very low” (an illustrative tallying sketch follows this list).
- Kruskal–Wallis results:
  - Self-references vary significantly across timeframes (H(4)=48.419, p<0.001); fewer in TF4 vs TF2 (p=0.025) and TF3 (p<0.011), and in TF5 vs TF1–TF3 (all p<0.001).
  - Hedges vary significantly (H(4)=29.001, p<0.001); more in TF4 and TF5 vs TF1–TF3 (p-values 0.029 to <0.001); no difference between TF4 and TF5.
  - Boosters vary significantly (H(4)=15.845, p=0.003); TF2 has fewer than TF4 (p=0.011) and TF5 (p=0.004).
  - Attitude markers trend downward, but the variation is not significant (H(4)=9.069, p=0.059).
Interpretation of temporal trends:
- The early period (TF2), aligned with acute uncertainty, showed fewer boosters; later periods (TF4–TF5) combined increased boosters with significantly more hedges, indicating efforts to communicate both confidence and caveats precisely. Self-references decreased over time, with later minutes more often signposting prior positions (e.g., “has previously advised”).
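The confidence-label breakdown above could, in principle, be reproduced by a simple pattern count over the minutes. A minimal sketch follows, under the assumption that `minute_texts` is a hypothetical list of minute texts; the authors' actual extraction and label categories may differ:

```python
import re
from collections import Counter

# Matches explicit confidence labels such as "high confidence" or "very low confidence".
LABEL_PATTERN = re.compile(r"\b(very low|low|medium|moderate|high) confidence\b", re.IGNORECASE)

def tally_confidence_labels(minute_texts):
    """Count explicit confidence labels and report each label's share of the total."""
    counts = Counter()
    for text in minute_texts:
        counts.update(match.lower() for match in LABEL_PATTERN.findall(text))
    total = sum(counts.values())
    if total == 0:
        return {}
    return {label: (n, round(100 * n / total, 2)) for label, n in counts.items()}

print(tally_confidence_labels([
    "SAGE has high confidence in this estimate but low confidence in the second.",
]))
```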
Discussion
Findings address RQ1 by showing increased transparency via reduced publication lags and name disclosures, and a growing plurality of attending experts alongside a persistent ‘core group’. A clearer depiction of specific expert disciplines and roles would further enhance transparency. For RQ2, minutes construct SAGE’s role as scientific advisors who coordinate input across government science networks while demarcating operational responsibilities to departments and agencies. Nonetheless, SAGE at times discusses policy choices and expresses strong evaluative judgements, reflecting a balance between neutrality and usefulness consistent with their guidance and broader debates on the science–policy boundary. For RQ3, SAGE minutes present a consensus view with little explicit disagreement recorded, despite many voices feeding into discussions. The linguistic analysis shows increased use of hedges and explicit confidence labels over time, coupled with selective boosting when needed, suggesting a growing emphasis on precise expression of scientific ambiguity and confidence levels. The decline in self-references and attitude markers later may indicate established positions being reiterated, or a deliberate distancing from policy decisions while foregrounding uncertainty management. These patterns illuminate how SAGE’s ‘front stage’ performance evolved under intense public scrutiny, shaping perceptions of authority, consensus, and uncertainty.
Conclusion
The study demonstrates that SAGE’s publication practices became more transparent during the pandemic, and that attendance broadened even as a core group persisted. Linguistically, SAGE’s minutes increasingly marked uncertainty (hedges, confidence labels) while selectively employing boosters, reflecting an evolving commitment to precise communication of what is known and unknown. Self-references decreased over time, consistent with established positions and/or cautious distancing from policy decisions. Contributions include: (1) a quantitative extension of prior transparency analyses (publication lag trends; attendance plurality and core group quantification); (2) a stance-based linguistic analysis of SAGE’s role construction, consensus portrayal, and uncertainty communication; and (3) temporal dynamics demonstrating changing practices across key phases of the pandemic. Future directions suggested by the authors include: refining guidance for minutes to systematically record differences in opinion (majority/minority positions), clarifying who determines release content and timing, providing clearer depictions of specific expertise and disciplines feeding into meetings, and extending analyses beyond minutes to other ‘front stage’ venues (e.g., press briefings) and linked documents (e.g., subcommittee consensus statements).
Limitations
- Temporal scope: The analysis covers meetings 1–89 (to 13 May 2021); subsequent developments (e.g., Omicron) fall outside the study period.
- Data type: Minutes are crafted documents, not verbatim transcripts; they reflect editorial choices by the secretariat and may vary by minute-taker, potentially affecting linguistic patterns and the inclusion of dissent.
- Backstage processes: Informal conversations and other channels feeding into decisions are not captured; the extent of iteration from notes to published minutes is unknown.
- Transparency governance: The minutes do not make clear who decides the content and timing of releases; leaks highlight ongoing public interest and process ambiguities.
- Scope: The study focuses on the minutes and not other outputs (press briefings, linked papers, subcommittee consensus statements). Expert disciplines were not systematically available; thus, disciplinary diversity was not analysed.
- Methodological: While non-parametric statistics address non-normal distributions, results rely on accurate classification of stance markers; some functions are context-dependent despite double-coding.
- Generalisability: Findings concern SAGE’s published minutes and may not generalise to other advisory bodies or to unpublished internal deliberations.