Use of Generative Artificial Intelligence Tools by Authors and Reviewers of the Journal Eurosurveillance
Eva Sarachaga,1 Ines Steffens1
Abstract
Objective
In September 2023, Eurosurveillance adopted an artificial intelligence (AI) policy encouraging authors and reviewers to use AI tools responsibly. The policy states that authors should disclose tools such as large language models (LLMs), chatbots, and image-generating algorithms used in the production and writing of the manuscript. It further outlines that reviewers should declare whether they have used a chatbot or LLM tool in the generation of their review or in their correspondence to authors or editors.1 Fifteen months after launching the policy, we investigated how many authors and reviewers had declared using AI tools and how. Additionally, we examined whether possible AI use was flagged by the plagiarism detection system, iThenticate, which checks whether incoming submissions overlap with published material.
Design
Data were extracted from the article submission system, Editorial Manager. In a cross-sectional design, we included all articles and reviews entered into the system from the publication of the policy in September 2023 through November 2024. Results were analyzed by domain, AI tool, and tool version. To investigate whether iThenticate results differed between articles in which AI use was declared and those in which it was not, we compared the scores of a convenience sample of 43 articles in each group and then checked the articles manually.
Results
Only 7% of authors (70 of 961) and 2% of reviewers (11 of 707) declared using AI tools. Language was the most frequent domain in both groups: 61 of 70 authors and 10 of 11 reviewers. Authors also declared other domains, such as coding, data analysis, prediction identification, correspondence, and figure creation. ChatGPT was the most used AI tool in both groups: 54 of 70 authors and 6 of 11 reviewers. The second most used tools were DeepL for authors (2 of 70) and Paperpal for reviewers (2 of 11). These results are in line with those of other studies.2,3 Declarations often did not follow the policy: only 20% of authors (13 of 70) and 20% of reviewers (2 of 11) declared the tool version as required. The plagiarism score was lower in articles where AI use was declared (35 of 43) than in those where it was not declared (39 of 43).
Conclusions
In line with findings by others, only a limited proportion of authors and reviewers declared using AI tools after implementation of an AI policy. The proportion was similar for authors and reviewers, with language the most frequent domain and ChatGPT the most used tool in both groups. Authors and reviewers should be reminded to comply with the AI policy and to declare the tool version used.
References
1. Eurosurveillance. Editorial policy: Responsible use of artificial intelligence (AI) tools. Accessed July 10, 2025. https://www.eurosurveillance.org/editorial-policy#AI%20policy
2. Salvagno M, De Cassai A, Zorzi S, et al. The state of artificial intelligence in medical research: a survey of corresponding authors from top medical journals. PLOS ONE. 2024;19(8):e0309208. doi:10.1371/journal.pone.0309208
3. Else H. Should researchers use AI to write papers? Group aims for community-driven standards. Science. 2024;384(6693). doi:10.1126/science.z9gp5zo
1European Centre for Disease Prevention and Control (ECDC), Stockholm, Sweden, evasarachaga01@gmail.com.
Conflict of Interest Disclosures
None reported.
Acknowledgment
We would like to thank the Eurosurveillance editorial team at the ECDC (Alina Buzdugan, Anirban Dey, Kathrin Hagmaier, Megan Osler, Elina Tast-Lahti, and Karen Wilson) for their feedback on this work, with special thanks to Kathrin Hagmaier.