Abstract

Assessment of Agreement Between Reviewers in the Open Postpublication Peer Review Process of F1000Research

Tiago Barros,1 Liz Allen1

Objective

F1000Research operates an author-driven, open, and postpublication peer review model. The reviewer's identity and the peer review report, including its recommendation, are made public as soon as the reviewer submits them. This study aimed to identify any potential influence of the first published peer review on the recommendation of the second reviewer, as measured by the agreement between the 2 recommendations and the time elapsed between them.

Design

We assembled a dataset of articles published between July 2012 and February 2017, together with their associated open peer review reports, and analyzed agreement between reviewers as a function of the time between reports. Only articles presenting original research or methods were included. Articles for which the gap between the 2 reviewer reports was longer than 1 year (365 days) were excluded. The recommendations ("approved," "approved with reservations," or "not approved") of the first 2 reviewers were recorded, along with the publication dates of the reports. Cohen κ was used to measure interrater reliability, and its change with the time between reports was used to assess potential bias. In the absence of survey data on whether the second reviewer had read a previous report before submitting their recommendation, articles whose 2 reports were published on the same day were treated as the control group.
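
As a rough illustration of this analysis (a minimal sketch in Python, not the authors' code; the file name, column names, and time bins below are hypothetical), Cohen κ can be computed overall, for the same-day control group, and within bins of the time gap between reports:

import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical input: one row per article, with the first 2 recommendations
# ("approved," "approved with reservations," "not approved") and the number of
# days between the publication dates of the 2 reports.
df = pd.read_csv("f1000_reviews.csv")

# Exclusion criterion from the Design: drop articles with a gap longer than 365 days.
df = df[df["days_between"] <= 365]

# Overall interrater reliability across all included articles.
overall_kappa = cohen_kappa_score(df["reviewer1"], df["reviewer2"])
print(f"Overall Cohen kappa: {overall_kappa:.3f}")

# Control group: both reports published on the same day, so the second reviewer
# could not have read the first report before submitting.
control = df[df["days_between"] == 0]
control_kappa = cohen_kappa_score(control["reviewer1"], control["reviewer2"])
print(f"Control-group (same-day) kappa: {control_kappa:.3f}")

# Check whether kappa drifts as the time between reports grows (bins are illustrative).
df["gap_bin"] = pd.cut(df["days_between"], bins=[0, 7, 30, 90, 365], include_lowest=True)
for gap, group in df.groupby("gap_bin", observed=True):
    print(gap, round(cohen_kappa_score(group["reviewer1"], group["reviewer2"]), 3))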

Results

The analyzed dataset contained 1,133 articles and 2,266 reviewer reports, ie, the first 2 reviewer reports of each article. The median (interquartile range) time between the first 2 peer reviews was 18 (6-52) days. In aggregate, the breakdown of the peer review decisions ("approved," "approved with reservations," or "not approved") across the dataset was virtually identical between the 2 reviewers (724 [63.9%], 355 [31.3%], and 54 [4.8%] decisions vs 705 [62.2%], 372 [32.8%], and 56 [4.9%] decisions, respectively). However, when the recommendations made for each article were compared individually, the Cohen κ was 0.330 (compared with 0.282 for the control group), indicating only fair agreement between the reviewers. Moreover, the Cohen κ changed minimally with the length of time between the peer review publication dates (Table).
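
As an illustrative back-calculation (ours, not reported in the study, and assuming an unweighted κ over the same 1,133 pairs): the chance-expected agreement implied by the marginal proportions above is p_e ≈ 0.639 × 0.622 + 0.313 × 0.328 + 0.048 × 0.049 ≈ 0.50, and because κ = (p_o - p_e) / (1 - p_e), a κ of 0.330 corresponds to an observed article-level agreement of only p_o ≈ 0.67. This is why near-identical aggregate distributions can coexist with only fair agreement on individual articles.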

Conclusions

Our analysis of the F1000Research open peer reviews found that agreement between reviewers did not change substantially with the time gap between peer reviews. The second reviewer does not appear to be systematically influenced by the ability to see the recommendation of an earlier reviewer. This is an important finding and something to continue to monitor as the momentum and acceptance of open peer review models, and of open science more broadly, continue to grow.

1F1000, London, UK, tiago.barros@f1000.com

Conflict of Interest Disclosures:

Dr Barros is the Product Strategy Manager of F1000, and Dr Allen is the Director of Strategic Initiatives of F1000.