Abstract
Enhancing Research Integrity in Abstract Submissions With a Hybrid AI-Human Review Process
Heather Goodell,1 Christine Beaty,1 Jonathan Schultz,1 Shilpi Mehra,2 Chirag Jay Patel3
Objective
The growing volume of abstract submissions to major scientific conferences requires efficient and reliable methods to ensure compliance with research integrity standards, which are essential for maintaining the credibility of science. Traditional review methods may not fully capture key integrity issues, so a structured approach to evaluating submissions before final acceptance is needed. This study presents a review process that uses artificial intelligence (AI) with human validation to assess the integrity of abstracts submitted to the American Heart Association (AHA) Scientific Sessions.
Design
An integrity evaluation process combining AI-driven analysis with human review was applied to abstracts submitted for the 2024 AHA Scientific Sessions. An AI tool (Paperpal Preflight for Editorial Desk) was used to identify research integrity issues, including AI-generated text, author expertise misalignment, and reference accuracy. The AI checks categorized each abstract as passed, warning, or critical; any abstract not marked as passed was routed to subject matter experts for further review.
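The triage step described above can be illustrated with a minimal sketch. The check names, result format, and routing function below are hypothetical illustrations of the passed/warning/critical workflow, not the Paperpal Preflight implementation, whose internals are not described here.

```python
from enum import Enum

class CheckLevel(Enum):
    PASSED = "passed"
    WARNING = "warning"
    CRITICAL = "critical"

def route_abstract(check_results: dict) -> str:
    """Route an abstract based on its AI integrity-check results.

    Mirrors the triage described above: an abstract advances only if
    every check passes; otherwise it goes to human review.
    """
    if all(level == CheckLevel.PASSED for level in check_results.values()):
        return "passed: no further review required"
    return "escalated: subject matter expert review"

# Hypothetical example: one check raises a warning, so the abstract
# is escalated to human reviewers.
results = {
    "ai_generated_text": CheckLevel.WARNING,
    "author_expertise_alignment": CheckLevel.PASSED,
    "reference_accuracy": CheckLevel.PASSED,
}
print(route_abstract(results))  # escalated: subject matter expert review
```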
Results
Of the 8477 submitted abstracts analyzed, 42 were flagged with a warning for integrity concerns (AI-generated text, author expertise misalignment, or reference accuracy); human reviewers cleared 3 of these. Further analysis revealed that 167 authors submitted 15 or more abstracts, 63 submitted 20 or more, and 13 submitted more than 30. A total of 2624 abstracts had at least 1 author with 10 or more submissions, and 1438 abstracts had at least 1 author with 15 or more submissions. Submission volume was factored into the assessment of research integrity concerns: analysis of author submission counts helped flag questionable cases for closer review and identify potentially suspect activity.
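The submission-count analysis amounts to tallying authorships across all abstracts and flagging abstracts whose author lists cross a threshold. The sketch below, with hypothetical abstract IDs, author names, and a flag_high_volume helper, illustrates one way to compute such counts; it is not the actual analysis code used in the study.

```python
from collections import Counter

# Hypothetical input: each abstract ID mapped to its author list.
abstracts = {
    "A001": ["Author X", "Author Y"],
    "A002": ["Author X"],
    "A003": ["Author Z"],
}

# Count submissions per author across all abstracts.
submission_counts = Counter(
    author for authors in abstracts.values() for author in authors
)

def flag_high_volume(abstracts, counts, threshold=15):
    """Return IDs of abstracts with at least one author whose total
    submission count meets or exceeds the threshold."""
    return [
        abstract_id
        for abstract_id, authors in abstracts.items()
        if any(counts[a] >= threshold for a in authors)
    ]

# Authors at or above a given submission count (e.g., 15 in the study).
prolific = [a for a, n in submission_counts.items() if n >= 15]
flagged = flag_high_volume(abstracts, submission_counts, threshold=15)
```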
Conclusions
The hybrid AI-human review process for abstract submissions proved to be an efficient and accurate evaluation method that should scale with submission volume. By addressing concerns related to author submission volume and content errors, this approach can strengthen conference proceedings and support broader efforts to enhance research quality and trust in scientific dissemination. Future efforts will focus on refining and customizing AI models and optimizing reviewer workflows to further improve the process.
1American Heart Association, Dallas, TX, US; 2Cactus Communications Pvt Ltd, Mumbai, India; 3Cactus Communications Inc, Princeton, NJ, US; chirag.patel@cactusglobal.com.
Conflict of Interest Disclosures
Heather Goodell, Christine Beaty, and Jonathan Schultz are employed by the American Heart Association. Shilpi Mehra and Chirag Jay Patel are employed by Cactus Communications, which owns Paperpal Preflight for Editorial Desk. No other disclosures were reported.