Abstract

Testing Computational Reproducibility Review in Editorial Workflows of Academic Journals: A Randomized Controlled Trial From the European iRISE Project

Laura Caquelin,1 Rachel Heyard,2 Stephanie Zellers,3 Hanno Würbel,4 Gustav Nilsonne1

Objective

Computational reproducibility is vital to research quality: it enables others to rerun the reported analyses on the same data and obtain the same results. Despite its importance, it is rarely assessed systematically. This study aims to evaluate whether computational reproducibility review during the publication process improves reproducibility compared with standard peer review. We hypothesize that this intervention will improve reproducibility, encourage code sharing, and reduce errors.

Design

This randomized controlled trial enrolls manuscripts submitted to partnering journals that meet the inclusion criteria, including open data availability and inferential statistical analysis. Manuscripts are randomized 1:1 to peer review with computational reproducibility review (intervention group) or standard peer review without intervention (control group). The computational reproducibility review involves reproducing the essential statistical results from the shared data (with or without the authors' code). Authors in the intervention group receive feedback during the peer review process, whereas the reproducibility of control-group manuscripts is assessed only after publication. The primary outcome is the proportion of manuscripts whose essential statistical results are successfully reproduced after publication. Secondary outcomes include rates of code sharing, overt errors identified, time required for the reproducibility review, publication timelines, and categorized reproducibility issues.
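To make the design concrete, the sketch below illustrates a 1:1 permuted-block allocation and a simple two-proportion comparison of the primary outcome. The block size, test, and counts are hypothetical illustrations, not taken from the registered analysis plan.1

    # Illustrative sketch only: permuted-block 1:1 randomization and a
    # two-proportion comparison of the primary outcome. Block size and
    # counts are hypothetical, not from the registered protocol.
    import math
    import random

    def permuted_block_allocation(n_manuscripts, block_size=4, seed=42):
        """Assign manuscripts 1:1 to intervention/control in permuted blocks."""
        assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
        rng = random.Random(seed)
        allocations = []
        while len(allocations) < n_manuscripts:
            block = (["intervention"] * (block_size // 2)
                     + ["control"] * (block_size // 2))
            rng.shuffle(block)
            allocations.extend(block)
        return allocations[:n_manuscripts]

    def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
        """Two-sided z-test for the difference between two proportions."""
        p_pool = (successes_a + successes_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (successes_a / n_a - successes_b / n_b) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical example: 100 manuscripts per arm, 70 vs 50 reproduced.
    groups = permuted_block_allocation(200)
    z, p = two_proportion_ztest(70, 100, 50, 100)
    print(f"z = {z:.2f}, p = {p:.4f}")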

Results

This study began in January 2025, with a preregistered protocol openly available on the Open Science Framework.1 Recruitment is ongoing; as of April 2025, 28 manuscripts submitted to the partnering journal GigaScience had been screened, of which 17 were deemed eligible. The project is scheduled to run through September 2026, with the goal of including 200 manuscripts. This presentation will outline key insights from the study's implementation as well as preliminary results.

Conclusions

This study addresses an important gap in the assessment of research quality by testing a practical way to improve computational reproducibility. If successful, the results could support wider adoption of reproducibility reviews in academic journals, leading to more reliable and trustworthy science.

Reference

1. Caquelin L, Heyard R, Zellers S, et al. A study protocol for an intervention study using computational reproducibility testing to improve reproducibility. Open Science Framework. Updated January 29, 2025. doi:10.17605/OSF.IO/3Y8FP

1Karolinska Institutet, Stockholm, Sweden, laura.caquelin@ki.se; 2Center for Reproducible Science at the Epidemiology, Biostatistics and Prevention Institute, University of Zurich, Zurich, Switzerland; 3Institute for Molecular Medicine Finland, University of Helsinki, Helsinki, Finland; 4Division of Animal Welfare, University of Bern, Bern, Switzerland.

Conflict of Interest Disclosures

None reported.

Funding/Support

This study is part of the iRISE (Improving Reproducibility in Science) project, which has received funding from the European Union under the Horizon Europe program (grant agreement 101094853).

Role of the Funder/Sponsor

The funder had no role in the data collection, analysis, or conclusions of the abstract.