Abstract

Study Hypotheses and Results From Superiority and Noninferiority Randomized Clinical Trials

Yuanxi Jia,1 Yiwen Jiang,2 Karen A. Robinson,3 Jinling Tang2

Objective

In embarking on randomized clinical trials (RCTs), researchers might hypothesize that a more intensive treatment will be better than a less intensive treatment (positive hypothesis) or that a more intensive treatment will be similar to, or noninferior to, a less intensive treatment (negative hypothesis). Researchers might design noninferiority RCTs (NI-RCTs) to support negative hypotheses and superiority RCTs (S-RCTs) to support positive or negative hypotheses. Regardless of hypothesis, S-RCTs and NI-RCTs should produce consistent results when assessing similar participants, interventions, controls, and outcomes (PICO); systematic discrepancies in effect estimates between S-RCTs and NI-RCTs assessing similar PICO might therefore indicate bias. This study aimed to compare effect estimates between S-RCTs and NI-RCTs assessing similar PICO. We hypothesized that (1) S-RCTs with positive hypotheses would produce larger effect estimates than NI-RCTs with negative hypotheses and (2) S-RCTs with negative hypotheses would produce effect estimates similar to those of NI-RCTs with negative hypotheses.

Design

This was a meta-research study of 101 meta-analyses comparing a more intensive treatment with a less intensive treatment (placebo, sham treatment, no treatment, or a lower dose of the same treatment), identified from the Web of Science in July 2024. In total, 494 RCTs were analyzed, including 157 NI-RCTs and 337 S-RCTs (169 with positive hypotheses and 168 with negative hypotheses). In each meta-analysis, S-RCTs were treated as the exposure group and NI-RCTs as the control group. Risk of bias related to blinding was assessed with the Cochrane Risk of Bias tool. Within each meta-analysis, the ratio of effect estimates (risk ratio, mean difference [transformed to odds ratio], or hazard ratio) between S-RCTs and NI-RCTs was calculated; these ratios were then combined across meta-analyses into a single ratio of effect size (RES), the main outcome. The RES was estimated among all RCTs and among prospectively registered RCTs, for which the initial hypotheses could be verified.
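As a minimal sketch of how such a pooled ratio might be formed (assuming a standard two-step meta-epidemiological approach with inverse-variance weighting and a between-meta-analysis heterogeneity term; the abstract does not specify the exact model), let $\hat\theta_i^{S}$ and $\hat\theta_i^{NI}$ denote the pooled log effect estimates of S-RCTs and NI-RCTs in meta-analysis $i$, with standard errors $s_i^{S}$ and $s_i^{NI}$:

\[
\log \mathrm{RES}_i = \hat\theta_i^{S} - \hat\theta_i^{NI}, \qquad
\mathrm{SE}(\log \mathrm{RES}_i) = \sqrt{(s_i^{S})^{2} + (s_i^{NI})^{2}},
\]
\[
\log \widehat{\mathrm{RES}} = \frac{\sum_i w_i \log \mathrm{RES}_i}{\sum_i w_i}, \qquad
w_i = \frac{1}{\mathrm{SE}(\log \mathrm{RES}_i)^{2} + \hat\tau^{2}},
\]

where $\hat\tau^{2}$ is the estimated between-meta-analysis heterogeneity variance and the final RES is obtained by exponentiating $\log \widehat{\mathrm{RES}}$, so that an RES above 1 indicates larger effect estimates in S-RCTs than in NI-RCTs.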

Results

S-RCTs with positive hypotheses had higher effect size estimates than NI-RCTs (RES, 1.47; 95% CI, 1.28-1.70). When restricted to prospectively registered RCTs, the RES was 1.47 (95% CI, 1.20-1.82). Among RCTs rated as low risk of bias for blinding, the RES was 1.03 (95% CI, 0.74-1.44), while among those rated as high or unclear risk of bias for blinding, the RES was 1.83 (95% CI, 1.45-2.32). S-RCTs with negative hypotheses produced effect estimates similar to those of NI-RCTs (RES, 0.94; 95% CI, 0.85-1.04). When restricted to prospectively registered RCTs, the RES was 1.00 (95% CI, 0.80-1.25). Among RCTs rated as low risk of bias for blinding, the RES was 0.93 (95% CI, 0.71-1.22), while among those rated as high or unclear risk of bias for blinding, the RES was 1.14 (95% CI, 0.90-1.44).

Conclusions

Researchers’ hypotheses may bias the results of RCTs. Blinding should be emphasized to reduce bias arising from researchers’ hypotheses. Systematic reviews and clinical practice guidelines should routinely assess the impact of researchers’ hypotheses on clinical evidence.

1Yong Loo Lin School of Medicine, National University of Singapore, Singapore, yx.jia@nus.edu.sg; 2Shenzhen Institute of Advanced Technology, Shenzhen, China; 3School of Medicine, Johns Hopkins University, Baltimore, MD, USA.

Conflict of Interest Disclosures

None reported.

Funding/Support

This study was supported by the Shenzhen Science and Technology Program (grant No. KQTD20190929172835662) from the Shenzhen Municipal Government, Guangdong Province, China, and the Outstanding Youth Innovation Fund from the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (grant No. E2G019).

Role of Funder/Sponsor

The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the abstract; and decision to submit the abstract for presentation.