Talk with ChatGPT About the Outbreak of Mpox in 2022: Reflections and Suggestions from AI Dimensions

Medicine and Health

K. Cheng, Y. He, et al.

This study investigates the innovative use of ChatGPT in analyzing the 2022 mpox outbreak, highlighting its role in generating research insights while discussing the ethical considerations surrounding AI in medical research. Conducted by a collaborative team of experts, the research underscores the importance of responsible AI usage in advancing medical knowledge.

Introduction
Beginning in May 2022, monkeypox (Mpox) emerged and spread in non-endemic regions, prompting the WHO to declare a Public Health Emergency of International Concern on July 23, 2022. By March 17, 2023, 86,601 confirmed cases and 112 deaths had been reported across 110 countries and regions. Prior bibliometric analyses showed that Mpox was historically neglected compared with other orthopoxviruses, though publications on its epidemiology, surveillance, treatment, prevention, and vaccines have recently surged, creating potential information overload. Concurrently, generative AI systems such as ChatGPT have advanced, demonstrating capabilities in natural language understanding and problem-solving, including notable performance on medical examinations and open-ended queries. This letter explores how ChatGPT reflects on and suggests approaches to the 2022 Mpox outbreak, assessing its potential role alongside humans in the prevention and containment of future epidemics or pandemics.
Literature Review
The authors reference bibliometric studies indicating Mpox’s prior neglect within orthopoxvirus research and the recent rapid growth of Mpox-related literature. They cite reports of ChatGPT’s performance on the USMLE and discussions of its applications and limitations in medical research and clinical advice, including concerns about deficits in situational awareness, inference, consistency, and risks of unsafe antimicrobial recommendations. Additional works discuss AI’s impact on research practices, ethical considerations, and the broader question of AI’s role relative to human experts. These sources frame the promise and pitfalls of deploying LLMs in health-related domains during emerging infectious disease events.
Methodology
On March 17, 2023, the authors conducted a dialogue with ChatGPT, posing four questions: causes of Mpox emergence in non-endemic regions; analysis of future trends in confirmed Mpox cases; implications for the future; and five novel systematic review ideas related to Mpox. They documented ChatGPT’s responses verbatim. The authors then evaluated the responses qualitatively and, for the systematic review suggestions, checked PubMed to assess whether the proposed topics appeared novel and to judge their importance for further summarization.
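The letter does not describe the authors' exact search procedure; as an illustration only, a PubMed novelty check like the one described can be sketched with the public NCBI E-utilities `esearch` endpoint, which returns a hit count for a query. The topic strings below are hypothetical paraphrases, not the authors' queries.

```python
# Hedged sketch of a PubMed coverage check via NCBI E-utilities (esearch).
# Endpoint and parameters (db, term, retmode, retmax) are standard E-utilities;
# the example topics are illustrative, not the authors' actual queries.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_search_url(term: str) -> str:
    """Build an esearch URL that returns the PubMed hit count as JSON."""
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
    return EUTILS + "?" + urllib.parse.urlencode(params)

def pubmed_hit_count(term: str) -> int:
    """Fetch the number of PubMed records matching the query (needs network)."""
    with urllib.request.urlopen(build_search_url(term), timeout=10) as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    # Illustrative topics paraphrasing the review ideas discussed in the letter.
    for topic in ("mpox immunocompromised systematic review",
                  "mpox zoonotic transmission risk factors"):
        print(topic, "->", build_search_url(topic))
```

A low hit count for a well-formed query would suggest a topic is under-synthesized, which is the kind of judgment the authors report making manually.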
Key Findings
- ChatGPT provided high-level, multi-faceted responses regarding potential causes of Mpox emergence (environmental changes, human behavior, pathogen evolution) but declined to offer definitive causal attribution.
- For future case trends, ChatGPT emphasized dependence on control measures, surveillance, and public adherence, avoiding specific predictions.
- It outlined broad implications spanning public health, economic impact, global health security, vaccine development, and sociocultural effects.
- ChatGPT proposed five potential systematic review topics (e.g., Mpox in immunocompromised populations; public health responses; zoonotic transmission dynamics and risk factors; economic impacts; diagnostics and treatments). The authors' PubMed checks suggested these were important areas warranting further synthesis.
- The letter highlights concerns: ChatGPT's answers were generic, lacked critical comparative analysis across studies, and may include inaccuracies due to unreliable online information. Prior evidence shows ChatGPT can provide unsafe antimicrobial advice and miss safety cues.
- The authors note the potential future utility of multimodal GPT-4 for assisting differential diagnosis of Mpox rash, given that rash is common in cases.
- Contextual data noted: as of March 17, 2023, 86,601 confirmed Mpox cases and 112 deaths in 110 countries/regions had been reported to WHO.
Discussion
The dialogue suggests that while ChatGPT can rapidly synthesize and communicate general considerations about an emerging outbreak, it does not reliably provide definitive causal insights or forecasts and may lack depth in critical appraisal. Its capacity to generate plausible but potentially inaccurate content underscores the need for human oversight, especially in clinical or public health decision-making. Nonetheless, ChatGPT’s brainstorming of systematic review topics aligned with recognized gaps, indicating value in idea generation, knowledge summarization, and drafting. The potential of multimodal models (e.g., GPT-4) to assist with image-informed differential diagnosis, such as Mpox rash, could augment clinical workflows. Balancing these opportunities with ethical, safety, and accuracy concerns is essential for integrating AI tools into outbreak response and medical scholarship.
Conclusion
ChatGPT demonstrates promise as a complementary tool for synthesizing information, ideation, and potentially assisting diagnostics in the context of Mpox, but its outputs are generic and can be inaccurate, necessitating expert verification. The authors advocate for developing practice guidelines and consensus statements to govern ChatGPT’s use in scientific activities, emphasizing responsible adoption, human oversight, and further technological advances to mitigate risks while leveraging benefits.
Limitations
The work is a qualitative letter based on a single ChatGPT interaction at a specific time point, without systematic evaluation or benchmarking. ChatGPT’s responses were generic, lacked critical comparative analysis, and may contain inaccuracies or fabricated details. The model cannot provide definitive causation or reliable forecasts, and prior evidence indicates potential for unsafe clinical advice. Conclusions are therefore limited and require human expert verification and caution in application.