Should We Acknowledge ChatGPT as an Author?

Medicine and Health

A. Goto and K. Katanoda

This editorial by Atsushi Goto and Kota Katanoda explores the profound implications of AI chatbots in academic writing, particularly highlighting authorship issues. The findings reveal that while AI tools like ChatGPT offer potential, they should not be credited as authors due to their risk of factual inaccuracies and lack of accountability.

Introduction
The emergence of powerful language models such as ChatGPT has had a significant impact on publishing, education, and science. ChatGPT, developed by OpenAI, generates text by identifying patterns in a massive text corpus, which raises important questions about its role in academic writing, particularly regarding authorship. This editorial examines the issue by reviewing existing guidelines, surveying editorial board members, and analyzing examples of ChatGPT's inaccuracies. The central question is whether ChatGPT meets the criteria for authorship in scientific publications, given its limitations and its potential to generate incorrect information. With AI advancing rapidly and becoming integrated into scholarly work, clear guidelines for its proper use and attribution are needed. The editorial contributes to the ongoing discussion of authorship in the age of AI, with implications for academic integrity and the reliability of published research, and its purpose is to offer recommendations on how researchers and publishers can use AI tools ethically and accurately in their writing.
Literature Review
The editorial references a previous letter to the editor in the Journal of Epidemiology which argued, based on the International Committee of Medical Journal Editors (ICMJE) guidelines, that ChatGPT cannot be considered an author because it cannot approve the final manuscript or take responsibility for its content. The authors also cite policies from different publishers, such as Science, Elsevier, and Cambridge University Press, illustrating the varied approaches to AI usage in scientific publications: some journals ban AI use without explicit permission, while others permit it as a tool with proper disclosure. The existing literature highlights the need for clear guidelines and a consensus on how to handle AI-generated content in academic publishing.
Methodology
The authors employed a two-pronged approach to address the research question. First, they conducted a survey among their Editorial Board members, asking about the potential role of ChatGPT and authors' responsibilities in using it. The results indicated that none of the respondents considered ChatGPT an author, while the majority (74%) viewed it as a tool; of those, 63% advocated for disclosing its use during submission. Second, the authors performed several tests to assess ChatGPT's accuracy in generating epidemiological information. They requested citations relevant to coffee intake and liver cancer risk in Japan and asked about the leading causes of death in Japan. In both cases, ChatGPT produced responses containing factual inaccuracies, underlining the risk of using it without careful verification. The findings from both the survey and the accuracy tests informed the editorial's recommendations.
Key Findings
The key findings of the editorial include:

1. The survey of editorial board members revealed a consensus against considering ChatGPT an author. The majority of respondents (74%) recognized its potential as a tool, and 63% of those advocated for transparency in its use.

2. ChatGPT produced demonstrably incorrect information in several instances: it provided an inaccurate citation for a study on coffee intake and liver cancer risk in Japan and incorrectly listed the leading causes of death in Japan for 2019.

3. While newer versions such as GPT-4 show improvements, the persistent risk of factual errors places responsibility on authors to verify any AI-generated content meticulously.

4. Using ChatGPT does not raise an ethical concern, provided the generated text is verified and its use is disclosed.

5. Ongoing critical evaluation of AI-generated content is needed, given that much of the data used to train these models has not undergone rigorous scrutiny.

6. The examples of inaccurate output demonstrate the potential for serious errors in scientific papers if these tools are used without proper verification.

These findings underscore the critical importance of human oversight and verification when using AI tools in academic writing.
Discussion
The findings address the research question by demonstrating that ChatGPT, in its current state, does not meet the authorship criteria outlined by the ICMJE. The inaccuracies revealed in the examples highlight the crucial role of human verification in ensuring the accuracy and reliability of scientific publications. The results reinforce the need for transparent disclosure of AI tool usage in research, promoting research integrity and allowing readers to critically assess the source of information. The significance of the results lies in establishing a cautious yet forward-looking approach to the integration of AI in academic publishing. By acknowledging the limitations of current AI tools, the editorial encourages responsible use and prevents the dissemination of unreliable information. This contributes to the field by establishing guidelines and promoting best practices.
Conclusion
This editorial concludes that ChatGPT, despite its potential usefulness as a writing tool, should not be acknowledged as an author in scientific publications. The potential for factual errors necessitates rigorous human verification. Transparent disclosure of AI tool usage is recommended to maintain research integrity. Future research should investigate how best to leverage AI tools responsibly in scholarly writing while maintaining accuracy and accountability.
Limitations
The survey was limited to the editorial board members of the Journal of Epidemiology and may not generalize to other fields or journals. The accuracy tests involved only a limited number of examples. Furthermore, given the rapidly evolving nature of AI technology, these observations may not hold for future versions of ChatGPT or similar tools.