Abstract

Utility of Machine Learning in Predicting Success of a Peer-Reviewed Paper From Peer Reviewer Scores

Ernest Kimani,1 James Kigera,2 Vincent Kipkorir1

Objective

To investigate the utility of machine learning algorithms in predicting the likelihood of publication of a manuscript from peer reviewer scores.

Design

Using a cross-sectional study design, 263 manuscripts that had undergone peer review between 2017 and 2021 and for which a final accept-or-reject decision had been made were selected; manuscripts with incomplete peer reviewer score data were excluded. Data were collected on each manuscript's peer reviewer scores and the journal's final decision. Peer reviewer scores comprised ratings by 2 peer reviewers per manuscript on originality, quality, interest, overall rating, and priority for publishing. Two-thirds of the data (174 manuscripts) were used for training the algorithms and one-third (89 manuscripts) for testing them. Microsoft Excel 2019 was used to preprocess the data, and Weka version 3.9.5 was used for model assessment. Models were trained and tested using various machine learning algorithms. The algorithm with the highest accuracy in predicting the likelihood of publication would be further improved and deployed for application.
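As an illustration of this workflow, the sketch below uses the Weka 3.9.5 Java API to load the scored manuscripts, hold out one-third for testing, and evaluate a single candidate classifier. The file name reviews.arff, the attribute layout, and the choice of J48 (Weka's C4.5 decision tree) are illustrative assumptions, not details reported in the abstract.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class PublicationModel {
    public static void main(String[] args) throws Exception {
        // Load the scored manuscripts; reviews.arff is a hypothetical file holding
        // the 10 reviewer ratings (2 reviewers x 5 criteria) per manuscript, with
        // the journal's accept/reject decision as the final (class) attribute.
        Instances data = DataSource.read("reviews.arff");
        data.setClassIndex(data.numAttributes() - 1);
        data.randomize(new Random(1)); // shuffle before splitting

        // Two-thirds (174 of the 263 manuscripts) for training, one-third (89) for testing.
        int trainSize = 174;
        Instances train = new Instances(data, 0, trainSize);
        Instances test = new Instances(data, trainSize, data.numInstances() - trainSize);

        // J48 stands in here for any one of the algorithms assessed.
        Classifier model = new J48();
        model.buildClassifier(train);

        // Evaluate on the held-out third and report percentage accuracy.
        Evaluation eval = new Evaluation(train);
        eval.evaluateModel(model, test);
        System.out.printf("Accuracy: %.1f%%%n", eval.pctCorrect());
    }
}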

Results

Of the 263 manuscripts in the final analysis, 134 were accepted for publication and 129 were rejected. The accuracy of the machine learning algorithms in predicting the likelihood of publication ranged from 58.4% to 65.2% (Table 27).
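The abstract does not name the algorithms compared. Assuming a typical Weka lineup, the reported accuracy range could be produced by a loop such as the following over the same train/test split; this is a sketch under those assumptions, not the study's actual code.

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.Logistic;
import weka.classifiers.functions.SMO;
import weka.classifiers.lazy.IBk;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;

public class AlgorithmComparison {
    // train and test are the 174/89 splits built as in the Design sketch.
    static void compare(Instances train, Instances test) throws Exception {
        Classifier[] candidates = {
            new NaiveBayes(), new Logistic(), new SMO(),
            new IBk(), new J48(), new RandomForest()
        };
        for (Classifier c : candidates) {
            c.buildClassifier(train);
            Evaluation eval = new Evaluation(train);
            eval.evaluateModel(c, test);
            // Accuracies in this range (58.4%-65.2% in the study) would be
            // compared to select the best-performing model.
            System.out.printf("%s: %.1f%%%n",
                c.getClass().getSimpleName(), eval.pctCorrect());
        }
    }
}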

Conclusions

A machine learning model that optimizes desk rejections would reduce the peer review workload and ensure that scarce peer review resources are used efficiently. Such a model would make the publishing process more efficient and improve overall journal output and author satisfaction. To implement such a model, in-house reviewers and the editorial team could score manuscripts and assess their predicted performance before advancing them for peer review. To improve the model's performance and reduce bias, the selection of data variables used to score manuscripts would need to be refined, with a greater focus on objective variables. Other limitations included the small sample size and possible interrater variability in scoring individual manuscripts.
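A minimal sketch of how such desk triage might look, again with the Weka Java API: the saved model path (best-model.model), the rating vector, and the class labels are all hypothetical. In-house scores for a new submission are assembled into a Weka instance and classified before the manuscript is sent out for external review.

import weka.classifiers.Classifier;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.SerializationHelper;

public class DeskTriage {
    // header: an (empty) Instances object with the same attributes as the
    // training data, so the new scores are interpreted against the right schema.
    static String triage(Classifier model, Instances header, double[] ratings)
            throws Exception {
        double[] vals = new double[header.numAttributes()]; // one slot per attribute
        System.arraycopy(ratings, 0, vals, 0, ratings.length);
        Instance manuscript = new DenseInstance(1.0, vals);
        manuscript.setDataset(header);
        manuscript.setClassMissing(); // the decision is what we want predicted
        double predicted = model.classifyInstance(manuscript);
        return header.classAttribute().value((int) predicted); // e.g. "accept" or "reject"
    }

    public static void main(String[] args) throws Exception {
        // Load the best-performing model saved after training (path is hypothetical).
        Classifier model = (Classifier) SerializationHelper.read("best-model.model");
        // ... build 'header' from the training ARFF and call triage() with in-house scores.
    }
}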

1Department of Surgery, University of Nairobi, Nairobi, Kenya, drenk10@gmail.com; 2Annals of African Surgery, Nairobi, Kenya

Conflict of Interest Disclosures

Dr Kigera is a member of the Peer Review Congress Advisory Board but was not involved in the review or decision for this abstract.
