Association of Peer Review With Completeness of Reporting, Transparency for Risk of Bias, and Spin in Diagnostic Test Accuracy Studies Published in Imaging Journals
Sakib Kazi,1 Robert A. Frank,1 Jean-Paul Salameh,3,4 Nicholas Fabiano,1 Marissa Absi,1 Alex Pozdnyakov,5 Nayaar Islam,4,6 Daniël A. Korevaar,7 Jérémie F. Cohen,8 Patrick M. Bossuyt,9 Mariska M. G. Leeflang,9 Kelly D. Cobey,4,10 David Moher,4,10 Mark Schweitzer,11 Yves Menu,12 Michael Patlas,5 Matthew D. F. McInnes2,4
To evaluate whether peer review of diagnostic test accuracy (DTA) studies published in imaging journals is associated with changes in completeness of reporting, transparency for risk of bias, and spin, given that limited evidence supports the concept that peer review improves the completeness of research reporting.1,2
This retrospective cross-sectional study evaluated articles published in the Journal of Magnetic Resonance Imaging (JMRI; 2019 impact factor [IF], 4.0), the Canadian Association of Radiologists Journal (CARJ; IF, 1.7), and European Radiology (EuRad; IF, 4.1) before March 31, 2020.3 Initial submitted and final accepted versions of manuscripts were screened consecutively in reverse chronological order to include a minimum of 23 articles per journal (based on a power calculation). At least 30 eligible articles per journal were collected when available to account for potential exclusions. Primary studies evaluating the diagnostic accuracy of an imaging test in humans were included; studies exclusively reporting on prognostic or predictive tests were excluded. Two reviewers, blinded to manuscript version, independently evaluated each study for completeness of reporting using the Standards for Reporting Diagnostic Accuracy Studies (STARD) 2015 and STARD for Abstracts guidelines, for transparency of reporting for risk of bias assessment based on the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool, and for actual and potential spin using modified published criteria. Two-tailed paired t tests and paired Wilcoxon signed-rank tests were used for comparisons; P < .05 was considered statistically significant.
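The comparisons above are paired: each article contributes a score for its initial submission and for its final accepted version. As an illustration only (the data and the `paired_t` helper below are hypothetical and not from the study), a two-tailed paired t statistic on per-article STARD item counts can be sketched with the Python standard library:

```python
import math
import statistics


def paired_t(before, after):
    """Paired t statistic for matched per-article scores.

    Returns (t, degrees_of_freedom). Each list holds one score per
    article, with positions matched between manuscript versions.
    """
    if len(before) != len(after) or len(before) < 2:
        raise ValueError("need two equal-length samples with n >= 2")
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD of the differences
    # t = mean difference / standard error of the mean difference
    return mean_d / (sd_d / math.sqrt(n)), n - 1


# Hypothetical STARD item counts for 5 articles (initial vs final).
initial = [16, 17, 15, 18, 16]
final = [17, 17, 16, 19, 17]
t, df = paired_t(initial, final)
print(round(t, 2), df)  # → 4.0 4
```

In practice a statistics package would also supply the P value and confidence interval from the t distribution with n − 1 degrees of freedom; the sketch shows only how the paired design reduces each article to a single difference score.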
Of 692 diagnostic accuracy studies screened, 84 articles published from 2014 to 2020 in the 3 journals were included: JMRI, 30 articles; CARJ, 23; and EuRad, 31. Completeness of reporting by STARD 2015 increased between initial submissions and final accepted versions (mean reported items, 16.67 vs 17.47; change, 0.80; 95% CI, 0.25 to 1.17; P = .002). Among STARD items, sources of funding and other support (item 30.1) and role of funders (item 30.2) had the largest change, 0.32 (P < .001). No difference was found for reporting by STARD for Abstracts (5.28 vs 5.25; change, −0.03; 95% CI, −0.15 to 0.11; P = .74), QUADAS-2 (6.08 vs 6.11; change, 0.03; 95% CI, −1.00 to 0.50; P = .92), actual spin (2.36 vs 2.40; change, 0.04; 95% CI, 0.00 to 1.00; P = .39), or potential spin practices (2.93 vs 2.81; change, −0.12; 95% CI, −1.00 to 0.00; P = .23) (Figure).
This retrospective cross-sectional study found that peer review was associated with a marginal improvement in completeness of reporting in the full text of published imaging DTA studies; it was not associated with improved abstract reporting, greater transparency for risk of bias assessment, or reduced spin. Because this study included articles from only 3 radiology journals, the findings may not be generalizable to other journals, other fields of DTA research, or non–DTA study designs. Interventions such as reviewer training and use of checklists should be evaluated.
1. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Library. 2020. doi:10.1002/14651858.MR000016.pub3
2. Bruce R, Chauvin A, Trinquart L, Ravaud P, Boutron I. Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis. BMC Med. 2016;14(1):1-16. doi:10.1186/s12916-016-0631-5
3. Clarivate Analytics. Journal Citation Reports. 2019 Journal Impact Factor. Accessed July 11, 2020. https://clarivate.com/blog/announcing-the-2019-journal-citation-reports/
1Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; 2Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; 3Faculty of Health Sciences, Queen’s University, Ottawa, Ontario, Canada; 4Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada, email@example.com; 5Department of Radiology, McMaster University, Hamilton, Ontario, Canada; 6School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada; 7Department of Respiratory Medicine, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands; 8Department of Pediatrics, Inserm UMR 1153, Centre of Research in Epidemiology and Statistics, Necker−Enfants Malades Hospital, Assistance Publique−Hôpitaux de Paris, Université de Paris, Paris, France; 9Epidemiology and Data Science, Amsterdam Public Health Research Institute, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands; 10Centre for Journalology, Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Ontario, Canada; 11Department of Radiology, Wayne State University School of Medicine, Detroit, MI, USA; 12Department of Radiology, Sorbonne Université−AP-HP, Paris, France
Conflict of Interest Disclosures
Mark Schweitzer, Yves Menu, Michael Patlas, and Kelly D. Cobey have active affiliations with the 3 journals used as data sources; they had no role in data extraction, analysis, or interpretation but reviewed and approved the work. Michael Patlas reported receiving an editorial honorarium from Springer outside the submitted work. No other disclosures were reported.
Funding/Support
Funding support was received from the Philips−Radiological Society of North America research seed grant (RSNA Research & Education Foundation), a Mitacs Research Training Award, and the Department of Radiology MD Summer Student Fund at the University of Ottawa. The conduct of the study and the content of the manuscript were the sole responsibility of the investigators and do not necessarily represent the official views of the funders.
Role of the Funder/Sponsor
The funders had no role in data collection, analysis, interpretation, or manuscript composition.
Sakib Kazi and Robert A. Frank contributed equally to this work.