2013 Abstracts

Sunday, September 8

Authorship

Too Much of a Good Thing? A Study of Prolific Authors

Elizabeth Wager,1 Sanjay Singhvi,2 Sabine Kleinert3

Objective

Authorship of unfeasibly large numbers of publications may indicate guest authorship, plagiarism, or fabrication (eg, the discredited anesthetist Fujii published 30 trials in 1 year). However, it is difficult to accurately assess an individual’s true publication history in databases such as MEDLINE using searches for author name alone. We therefore used a bespoke, semiautomated tool, which considers additional author characteristics, to identify authorship patterns for a descriptive study of prolific authors.

Design

Publications from a 5-year period (2008-2012) across 4 topics were selected from MEDLINE to provide a varied sample. The bespoke tool was used to disambiguate individual authors by analyzing characteristics such as affiliation, past publication history, and coauthorships, as well as author name. Focusing on 4 discrete topics also reduced the chance of double-counting publications from authors with similar names. Type of publication and authorship position were assessed for the most prolific authors in each topic.

Results

The number of publications per topic is shown in Table 1. Distinct publication patterns could be identified (eg, individuals who were often first author [max 56%] or last author [max 89%]). The maximum number of publications per year was 43 (for any type) and 15 (for trials). Of the 10 most prolific authors for each topic, 24/40 were listed on ≥1 publication per 10 working days in a single year.

Table 1. Total Number of MEDLINE Publications per Individual for 2008-2012 for Selected Topics

Conclusions

Analytical software may be useful to identify prolific authors from public databases with greater accuracy than simple name searches. Although such findings always need careful interpretation, these techniques might be useful to journal editors and research institutions in cases of suspected misconduct or to screen for potential problems (eg, prolific last authors might be guest authors). When measuring productivity, institutions and funders should be alert not only to unproductive researchers but also to unfeasibly prolific ones.

1Sideview, Princes Risborough, UK, liz@sideview.demon.co.uk; 2System Analytic, London, UK; 3Lancet, London, UK

Conflict of Interest Disclosures

Sanjay Singhvi is a director of System Analytic Ltd, which provides expert identification/mapping services, the tools from which were used for this study. Elizabeth Wager has acted as a consultant to System Analytic—this work represents less than 1% of her total income. Sabine Kleinert reports no conflicts of interest.

Funding/Support

No external funding was obtained for this project. System Analytic provided the tools, analysis, and staff time. Elizabeth Wager is self-employed and received no payment for this work.

Deciding Authorship: Survey Findings From Clinical Investigators, Journal Editors, Publication Planners, and Medical Writers

Ana Marušić,1 Darko Hren,2 Ananya Bhattacharya,3 Matthew Cahill,4 Juli Clark,5 Maureen Garrity,6 Thomas Gesell,7 Susan Glasser,8 John Gonzalez,9 Samantha Gothelf,10 Carolyn Hustad,4 Mary-Margaret Lannon,11 Neil Lineberry,12 Bernadette Mansi,13 LaVerne Mooney,14 Teresa Pena15

Objective

Low awareness, variable interpretation, and inconsistent application of guidelines can lead to a lack of transparency when recognizing contributors in industry-sponsored clinical trial publications. We sought to identify how different groups who participate in the publication process determine authorship.

Design

Interviews with clinical investigators, journal editors, publication planners, and medical writers identified difficult-to-resolve authorship scenarios when applying ICMJE guidelines, such as authorship for significant patient recruitment or for medical writing contributions. Seven scenarios were converted into a case-based online survey to identify how these groups determine appropriate recognition and to capture the rationale for, and confidence in, their decisions. Respondents also indicated their awareness and use of authorship guidelines. A sample of at least 96 participants per group enabled estimates with a 10% margin of error for a population of 100,000. The online survey remained open until all groups surpassed this sample size by at least 10%.
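For context, the target of at least 96 participants per group is consistent with the standard margin-of-error calculation for a proportion. The short sketch below (Python) reproduces that figure; the 95% confidence level, maximal variance (p = 0.5), and finite-population correction are our assumptions, not details stated in the abstract.

    import math

    def required_sample(margin=0.10, z=1.96, p=0.5, population=100_000):
        # Sample size for estimating a proportion with a given margin of error,
        # using the normal approximation and a finite-population correction.
        n0 = (z ** 2) * p * (1 - p) / margin ** 2
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    print(required_sample())  # 96, matching the per-group target stated above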

Results

We analyzed 498 responses from a global audience of 145 clinical investigators, 132 publication planners, 113 medical writers, and 108 journal editors. Overall, types of recognition chosen for each scenario varied both within and across respondent groups (see Table 2 for example case results). Despite acknowledged awareness and use of authorship criteria, respondents often adjudicated cases inconsistently with ICMJE guidelines. Clinical investigators provided the most variable responses and had the lowest level of ICMJE awareness (49% [95% CI=42.9-59.4] vs 92% [95% CI=88.5-94.2] for other groups) and use (28% [95% CI=20.5-34.6] vs 61% [95% CI=55.7-65.5] for other groups). Respondents were confident in their answers (mean score, 2.0 [95% CI=1.5-2.5] on a relative scale from 1: extremely confident to 6: not at all confident), regardless of their adjudication. Based on roundtable discussions with 15 editors and qualitative analysis of respondents’ answers, the Medical Publishing Insights and Practices (MPIP) Initiative developed supplemental guidance aimed at helping authors to set common rules for authorship early in a trial and document all trial contributions to increase transparency.

Table 2. Opinions of Respondents About an Example Authorship Scenario

aStatistically significant difference among 4 groups in frequencies of answers (χ²₉=64.28, P<.001).

b1 = extremely confident/frequent/important, 6 = not at all confident/frequent/important, respectively.

cNo statistically significant difference among the groups (one-way ANOVA, F₃,₄₉₄=0.39, P=.760).

dStatistically significant difference among the groups (one-way ANOVA, F₃,₄₉₄=15.05, P<.001); Tukey post hoc test: all pair-wise comparisons are statistically significant (P≤.046) except for clinical investigators vs journal editors (P=.956).

eStatistically significant difference among the groups (1-way ANOVA, F₃,₄₉₄=4.54, P=.004). Tukey post hoc test: publication planners vs other 3 groups (P≤.038).

Conclusions

Groups that participate in the publishing process had differing opinions on adjudication of challenging real-world authorship scenarios. Our proposed supplemental guidance is designed to provide a framework to improve transparency when recognizing contributors to all clinical trial publications.

1Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia, ana.marusic@mefst.hr; 2University of Split Faculty of Philosophy, Split, Croatia; 3Bristol-Myers Squibb, Princeton, NJ, USA; 4Merck, North Wales, PA, USA; 5Amgen, Thousand Oaks, CA, USA; 6Astellas, Northbrook, IL, USA; 7Envision Pharma and International Society for Medical Publication Professionals, Briarcliff Manor, NY, USA; 8Janssen R&D, LLC, Raritan, NJ, USA; 9AstraZeneca, Alderley Park, UK; 10Bristol-Myers Squibb, Princeton, NJ, USA; 11Takeda, Deerfield, IL, USA; 12Leerink Swann Consulting, Boston, MA, USA; 13GlaxoSmithKline, King of Prussia, PA, USA; 14Pfizer, New York, NY, USA; 15AstraZeneca, Wilmington, DE, USA

Conflict of Interest Disclosures

Ana Marušić and Darko Hren were supported by a grant from Medical Publishing Insights and Practices (MPIP) Initiative. All others are members of MPIP. Neil Lineberry is an external consultant employed by Leerink Swann Consulting and was paid by the MPIP Initiative for his work. Thomas Gesell serves on the MPIP Initiative Steering Committee as a representative of the International Society for Medical Publication Professionals, neither of which provides funding support to the MPIP Initiative. The other members of the MPIP Initiative Steering Committee are employees of the companies sponsoring the MPIP Initiative, as shown by their individual affiliations.

Funding/Support

Ana Marušić and Darko Hren received funding for the research project from the MPIP Initiative.

Multiauthorship Articles and Subsequent Citations: Does More Yield More?

Joseph Wislar,1 Marie McVeigh,2 Annette Flanagin,1 Mary Lange,2 Howard Bauchner1

Objective

To assess if articles published with large numbers of authors are more likely to be cited in subsequent works than articles with fewer authors.

Design

Research and review articles published in 2010 in the top 3 general medical journals (JAMA, Lancet, New England Journal of Medicine) and the top 3 general science journals (Nature, Proceedings of the National Academy of Sciences, Science) according to their rank by Impact Factor were extracted. The number of authors and other article characteristics (eg, article type and topic for the medical journal articles) were recorded. Citations per article were recorded through March 1, 2013.

Results

A total of 6,337 research and review articles were published in the 6 journals in 2010 (848 in medical journals and 5,489 in science journals); the number of authors per article ranged from 1 to 659. Articles were divided into quartiles by number of authors: 1-3, 4-6, 7-10, and 11+ (Table 3). There was a median of 8 (IQR: 4-14) authors per medical journal article and 6 (IQR: 4-9) per science journal article. More medical journal articles than science journal articles had 11+ authors (39% vs 18%). Medical journal articles were cited a median of 50 (IQR: 25-100) times and science journal articles a median of 22 (IQR: 12-42) times during the follow-up period. The median number of citations was highest for articles in the highest quartile of number of authors: 80 vs 29 (P<.001) for medical journals and 35 vs 18 (P<.001) for science journals. Article types among the medical journals included 290 observational studies, 259 randomized trials, 176 reviews, 89 case reports, and 34 meta-analyses. The top topics among the medical journal articles were infectious disease (14.7%), cardiovascular disease (12.5%), and oncology (11.3%). Preliminary analyses of article type and topic in the medical journal articles did not reveal a consistent pattern for article type but did show a linear increase in citations with number of authors for articles on cardiovascular disease, critical care medicine, obstetrics/gynecology, and oncology, and inconsistent patterns for other topics.

Table 3. Number of Authors by Quartile and Median Number of Citations, by Journal

Conclusion

Articles with 11 or more authors receive more citations than articles with fewer authors; this does not appear to be affected by article type, but there may be an association with article topic.

1JAMA, Chicago, IL, USA, joseph.wislar@jamanetwork.org; 2Thomson Reuters, Philadelphia, PA, USA

Conflict of Interest Disclosures

Joseph Wislar, Annette Flanagin, and Howard Bauchner are employed as editors with JAMA, one of the journals included in this study. Annette Flanagin, coordinator of the Peer Review Congress, had no role in the review of or decision to accept this abstract for presentation.

Funding/Support

There was no external funding for this study.

Citations

Coercive Citation and the Impact Factor in the Business Category of the Web of Science

Tobias Opthof,1,2 Loet Leydesdorff,3 Ruben Coronel1

Objective

Coercive citation is not unusual in journals within the business category of the Web of Science. Coercive citation is defined as pressure by the editor of a journal on an author to include references to the editor’s journal either before the start of the review process or after acceptance of the manuscript. The quantitative effects of this practice on Impact Factors and on the degree of journal self-citation have thus far not been assessed.

Design

We have quantified bibliographic parameters of the top 50 journals in the category Business of the Web of Science. Of these, 26 had previously been shown to exert citation coercion by Wilhite and Fong. We have compared these 26 journals with the other 24 journals.

Results

The averaged Impact Factors in 2010 of coercive journals and noncoercive journals were 2.665 ± 0.265 (mean ± SEM) vs 2.227 ± 0.134 (P=.141) including journal self-citations and 2.110 ± 0.279 vs 1.635 ± 0.131 (P=.123) without journal self-citations. Compared to noncoercive journals, coercive journals had a higher percentage of journal self-citations to all years (9.5% vs 6.7%, P<.25), a higher percentage of journal self-citations to the years relevant for the Impact Factor (based on the 2 preceding years; 19.5% vs 13.5%, P<.05), and a higher percentage of journal self-citations to the earlier years (8.0% vs 5.2%, P<.01). Within the group of coercive journals, the degree of coercive citation was positively and significantly correlated with the percentage of journal self-citations to the preceding 2 years relevant for the Impact Factor (r=.513, n=26, P<.005), but not to other years.

Conclusion

In business journals, coercive citation is effective for the coercing editors in the sense that their journals show a higher percentage of journal self-citations to the years relevant for the conventional Impact Factor.

1Department of Experimental and Clinical Cardiology, Academic Medical Center, Amsterdam, the Netherlands, t.opthof@inter.nl.net; 2Department of Medical Physiology, University Medical Center Utrecht, the Netherlands; 3Amsterdam School of Communication Research (ASCoR), University of Amsterdam, the Netherlands

Conflict of Interest Disclosures

None reported.

Do Views Online Drive Citations?

Sushrut Jangi,1,2 Jeffrey A. Eddowes,1 Jennifer M. Zeis,1 Jessica R. Ippolito,1 Edward W. Campion1

Objective

Biomedical journals have a 2-fold aim—to contribute literature that is highly cited and to present articles that are read widely. The first aim is measured by Impact Factor. The second aim is measurable using parameters such as Most Viewed online. We examined whether these publication aims are related: are the most-viewed articles subsequently the most cited?

Design

Citations for all 226 original research articles published in 2010 in the New England Journal of Medicine were analyzed through the end of 2012. Online usage data were analyzed for 2 time periods: (1) during the first year after publication and (2) from publication to the end of 2012. A linear regression was generated between number of views and citations during these 2 periods.

Results

When the number of online views at the end of 2010 was compared with citations between 2010 and 2012, the correlation between online views and citations was near 0 (linear regression slope near 0 with an R2 of .0015). The inability to create a robust linear regression occurred because these research articles segregated into 4 unique clusters with distinct characteristics (Figure 1). When the number of online views over the first 2 years after publication was compared with citations over the same period, the association between views and citations was stronger (linear regression slope of 3 with an R2 of .3). Clusters were determined as follows: articles within 1 standard deviation for both citations and views were assigned to cluster 1; articles beyond 1 standard deviation for citations but normally viewed were assigned to cluster 2; articles beyond 1 standard deviation for online views but normally cited were assigned to cluster 3; and articles beyond 1 standard deviation for both views and citations were assigned to cluster 4. Of the 226 original research articles published in 2010, 178 fell into the core cluster 1, 20 into cluster 2, 21 into cluster 3, and 5 into cluster 4. This clustering prevents a robust linear regression (slope of .27 and R2 of .0015). Each cluster has distinct traits: cluster 2 papers are primarily randomized controlled trials in oncology or other specialized fields that are rarely reported by the media; cluster 3 papers concern epidemiology, public health, and general medical practice and are moderately reported by the media; and cluster 4 papers were the most highly reported by the media.
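To make the cluster-assignment rule above concrete, the following illustrative sketch (Python with NumPy) applies 1-standard-deviation thresholds to hypothetical per-article view and citation counts. The data are simulated, and the reading of “beyond one standard deviation” as above the mean is our assumption.

    import numpy as np

    def assign_clusters(views, citations):
        # Cluster rule as described above: cluster 1 = within 1 SD on both measures;
        # cluster 2 = high citations only; cluster 3 = high views only; cluster 4 = high on both.
        views, citations = np.asarray(views, float), np.asarray(citations, float)
        high_views = views > views.mean() + views.std()
        high_cites = citations > citations.mean() + citations.std()
        clusters = np.ones(views.size, dtype=int)
        clusters[high_cites & ~high_views] = 2
        clusters[high_views & ~high_cites] = 3
        clusters[high_views & high_cites] = 4
        slope, intercept = np.polyfit(views, citations, 1)  # simple linear fit of citations on views
        return clusters, slope

    rng = np.random.default_rng(0)
    views = rng.lognormal(10, 1, 226)       # hypothetical view counts for 226 articles
    citations = rng.lognormal(3, 1, 226)    # hypothetical citation counts
    clusters, slope = assign_clusters(views, citations)
    print(np.bincount(clusters, minlength=5)[1:])  # number of articles in each cluster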

Figure 1. Clustering of Original Research Articles

Conclusions

Highly viewed papers within the first year of publication of original research do not robustly predict citations over the following 2 years. Instead, papers that are highly cited become increasingly viewed over time. Furthermore, following the first year of publication, most original research falls into 4 distinct clusters of views and citations, reflecting the varying aims of biomedical publication.

1New England Journal of Medicine, Boston, MA, USA, sjangi@nejm.org; 2Beth Israel Deaconess Medical Center, Boston, MA, USA

Conflict of Interest Disclosures

None reported.

Uncited or Poorly Cited Articles in the Cardiovascular Literature

Ruizhi Shi,1 Isuru Ranasinghe,1 Aakriti Gupta,1 Behnood Bikdeli,1 Ruijun Chen,1 Natdanai Tee Punnanithinont,1 Julianna F. Lampropulos,1 Joseph S. Ross,1,2 Harlan M. Krumholz1,2

Objective

In an efficient system, virtually all studies worthy of publication would be cited in subsequent papers. We sought to determine the percentage of uncited or poorly cited articles in the cardiovascular literature, the factors associated with citations, and the trend in citations.

Design

We identified cardiovascular journals indexed in Scopus with 20 or more publications. We determined 5-year citations of each original article published in 2006 and the association of article and journal characteristics with the likelihood of citation using multivariable logistic regression. To evaluate trends, we obtained similar cohorts for 2004, 2006, and 2008 with 4-year citations of each original article and compared the percentages of uncited articles in these years.

Results

Among 144 cardiovascular journals, we identified a total of 18,411 original articles published in 2006. In the following 5 years, the median number of citations was 6 (IQR: 2-16). Of all articles, 2,756 (15.0%) were uncited and 6,122 (33.3%) had 1 to 5 citations. English language (OR 4.3, 95% CI 3.6-5.0), journal Impact Factor >4 (OR 5.6, 95% CI 4.6-6.9, compared with journal Impact Factor <1.0), each additional author (OR 1.1, 95% CI 1.1-1.1), and 3 or more author key words (OR 1.4, 95% CI 1.2-1.6) increased the likelihood of being cited. Among the 144 journals, the interquartile range for the percentage of uncited papers was 7% to 40% (Figure 2). From 2004 to 2008, the overall volume of published articles increased (2004, n=13,880; 2006, n=18,411; 2008, n=19,184), as did the volume of publications uncited at 4 years after publication (2004, n=2,393; 2006, n=3,134; 2008, n=3,337); however, the percentages of uncited papers were not statistically different (17.2% vs 17.0% vs 17.4%, P value for trend =.4).
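A multivariable logistic regression of the sort described above can be sketched as follows (Python with statsmodels, on simulated data). The predictor coding here is a simplification of the published model, and the output numbers carry no meaning beyond illustrating how odds ratios and 95% CIs are obtained.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "english": rng.binomial(1, 0.9, n),             # hypothetical predictors mirroring those above
        "impact_factor_gt4": rng.binomial(1, 0.3, n),
        "n_authors": rng.poisson(6, n),
        "keywords_ge3": rng.binomial(1, 0.6, n),
    })
    df["cited"] = rng.binomial(1, 0.85, n)              # 1 = cited within 5 years (simulated)

    X = sm.add_constant(df[["english", "impact_factor_gt4", "n_authors", "keywords_ge3"]])
    fit = sm.Logit(df["cited"], X).fit(disp=0)

    summary = pd.DataFrame({
        "OR": np.exp(fit.params),                       # odds ratios, as reported in the abstract
        "CI_low": np.exp(fit.conf_int()[0]),
        "CI_high": np.exp(fit.conf_int()[1]),
    })
    print(summary)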

Figure 2. Distribution of Percents of Uncited Original Articles

Conclusion

Nearly half of the cardiovascular literature remains uncited or poorly cited after 5 years, and such articles are particularly concentrated in certain journals, suggesting substantial waste in some combination of the funding, pursuit, publication, or dissemination of cardiovascular science.

1Yale University Center for Outcomes Research and Evaluation, New Haven, CT, USA, harlan.krumholz@yale.edu; 2Yale University School of Medicine, New Haven, CT, USA

Conflict of Interest Disclosures

Joseph Ross reports that he is a member of a scientific advisory board for FAIR Health, Inc.

Funding/Support

Harlan Krumholz and Joseph Ross receive support from the Centers of Medicare & Medicaid Services to develop and maintain performance measures that are used for public reporting. Harlan Krumholz is supported by a National Heart, Lung, and Blood Institute Cardiovascular Outcomes Center Award (1U01HL105270-03). Joseph Ross is supported by the National Institute on Aging (K08 AG032886) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program.

Peer Review

Engaging Patients and Stakeholders in Scientific Peer Review of Research Applications: Lessons From PCORI’s First Cycle of Funding

Rachael Fleurence,1 Joe Selby,1 Laura P. Forsythe,1 Anne Beal,1 Martin Duenas,1 Lori Frank,1 John P. A. Ioannidis,2 Michael S. Lauer3

Objective

The mission of the Patient-Centered Outcomes Research Institute (PCORI) is to fund research that helps people make informed health care decisions. Engagement of patients and stakeholders is essential in all aspects of our work, including research application review. PCORI’s first round of peer review was evaluated to measure the impact of different reviewer perspectives on merit review scores.

Design

In 2012 PCORI initiated a 2-phase review (phase 1: online review by 3 scientists using 8 criteria [Table 4]; phase 2: review of the top one-third of applications by 2 additional scientists, 1 patient, and 1 stakeholder [clinicians, providers, manufacturers, etc]). Scientists provided overall scores based on the 8 criteria; patients/stakeholders focused on criteria 2, 4, and 7. Proposals were scored prior to an in-person discussion, and final scores were assigned after discussion. We conducted (1) correlations and Bland-Altman tests to examine agreement between scientific and patient/stakeholder reviewers, (2) random forest analyses to identify unique contributors to final scores (lower depth implies higher correlation), and (3) postreview focus groups.
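As an illustration of the agreement analyses named above (correlation and Bland-Altman), a minimal sketch follows; the score arrays are hypothetical, and the random forest and focus group components are not shown.

    import numpy as np
    from scipy import stats

    def agreement(scientist_scores, stakeholder_scores):
        # Pearson correlation plus Bland-Altman bias and 95% limits of agreement
        # between scientific and patient/stakeholder overall scores.
        a = np.asarray(scientist_scores, float)
        b = np.asarray(stakeholder_scores, float)
        r, p = stats.pearsonr(a, b)
        diff = a - b
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return {"r": r, "p": p, "bias": bias,
                "limits_of_agreement": (bias - half_width, bias + half_width)}

    rng = np.random.default_rng(2)
    sci = rng.uniform(1, 9, 20)              # hypothetical prediscussion scores for 20 applications
    ps = sci + rng.normal(0, 2, 20)          # hypothetical patient/stakeholder scores
    print(agreement(sci, ps))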

Results

In phase 2, there was limited agreement on overall scores between scientists and patients/stakeholders prediscussion (r=.18, P=.02), but no observable systematic bias; agreement was stronger postdiscussion (r=.90, P<.001). Random forest analyses indicate the strongest predictors of scientists’ final scores were patient/stakeholder final scores and scientist prediscussion scores (depth=1.033 and 1.861, respectively). The strongest predictors of patient/stakeholder final scores were scientific final scores, scientific prediscussion scores, and patient/stakeholder prediscussion scores (depth=1.084, 2.423, and 3.080, respectively). Twenty-five contracts were awarded after the 2-phase review; only 13 of these scored among the top 25 in phase 1. Postreview focus group themes included a collegial learning experience, a steep learning curve around PCORI’s new criteria, scientists’ appreciation of perspectives offered by patients/stakeholders, scientists’ concern about nonscientists’ level of technical expertise, and patients’/stakeholders’ experiences of being considered less authoritative than scientists.

Table 4. PCORI Merit Review Criteria for the Inaugural Funding Cycle

Conclusions

The projects funded differed after the 2-phase merit review compared to 1 phase of scientific review. Patient/stakeholder reviewers may have been more likely to incorporate insights from scientists than vice versa. Further research is being conducted on the available data to deepen our understanding of the process.

1Patient-Centered Outcomes Research Institute (PCORI), Washington, DC, USA, rfleurence@pcori.org; 2Stanford University School of Medicine, Stanford, CA, USA; 3National Heart, Lung, and Blood Institute, Bethesda, MD, USA

Conflict of Interest Disclosures

None reported.

Editorial Triage: Potential Impact

Deborah Levine,1 Alexander Bankier,1 Mark Schweitzer,2 Albert de Roos,3 David C. Madoff,4 David Kallmes,5 Douglas S. Katz,6 Elkan Halpern,7 Herbert Y. Kressel8

Objective

Increasing manuscript submissions threaten to overwhelm a biomedical journal’s ability to process manuscripts and overburden reviewers with manuscripts that have little chance of acceptance. Our purpose was to evaluate editorial triage.

Design

In a prospective study of original research manuscripts submitted to a single biomedical journal for an 8-week period beginning July 2012, 329 articles were processed with our normal procedures as well as with a parallel “background triage mode.” The editor in chief/deputy editor (EIC/DE) rated on a 5-point scale the likelihood of an article being accepted for publication (with scores of 1 “definitely reject” and 2 “almost certainly reject” considered “low priority” for publication). Editors noted reasons for low priority ratings. Manuscripts were sent for peer review in the typical fashion, with reviewers chosen by noneditor office staff. There typically were 4 to 8 weeks between initial triage and final decisions (based on standard peer review). The EIC who made the final decision was unaware of triage scores given by DEs; however, there were articles where the final and triage decisions were made by the EIC. Spearman correlation was used to correlate final decisions with triage scores and with reviewer mean scores.
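The correlation step described above can be illustrated with a short sketch (Python with SciPy) on made-up paired data; the numeric coding of final decisions is our assumption, not the journal's actual scheme.

    from scipy import stats

    # Hypothetical paired data: triage score (1-5; higher = more likely to be accepted)
    # and the final decision coded numerically (1 = accept, 0 = reject).
    triage = [1, 2, 5, 3, 4, 2, 5, 1, 3, 4]
    final_decision = [0, 0, 1, 0, 1, 0, 1, 0, 0, 1]

    rho, p = stats.spearmanr(triage, final_decision)
    print(f"Spearman r = {rho:.2f}, P = {p:.3f}")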

Results

Triage scores, reviewer scores, and final outcomes are detailed in Table 5. Of 124 manuscripts scored as low priority, 6 (4.8%, CI 1.8%-10.2%) were ultimately accepted for publication (P<.0001, correlation .26). “Limited new information” was the primary reason for a low priority score for 57/124 (46%) manuscripts and was the reason given for 5 of the low priority manuscripts that were ultimately accepted. Individual EIC/DE triage scores were weakly to moderately correlated with the final decision (r ranging from -.1 to .45, with an overall EIC/DE group correlation of .24). Reviewer scores were moderately correlated with the final decision (r=.62).

Table 5. Triage Scores and Final Decisions

R-R indicates reject with resubmission allowed.

a1 ultimately accepted.

b4 ultimately accepted.

Conclusions

Editorial peer review triage identified 38% (124/329) of submitted manuscripts as low priority, with lack of new information representing the most common reason for such scoring. Of submitted papers, 1.8% (6/329) would have been “erroneously” triaged, that is, manuscripts potentially worthy of acceptance but triaged as low priority. In our journal, editorial triage represents an efficient method of diminishing reviewer burden without a substantial loss of quality papers.

1Beth Israel Deaconess Medical Center, Department of Radiology, Boston, MA, USA, dlevine@rsna.org; 2The Ottawa Hospital, Department of Radiology, Ottawa, ON, Canada; 3Leiden University Medical Center, Department of Radiology, Leiden, South-Holland, the Netherlands; 4New York-Presbyterian Hospital/Weill Cornell Medical Center, Division of Interventional Radiology, New York, NY, USA; 5Mayo Clinic, Department of Radiology, Rochester, MN, USA; 6Winthrop University Hospital, Department of Radiology, Mineola, NY, USA; 7Massachusetts General Hospital, Institute for Technology Assessment, Boston, MA, USA; 8Radiological Society of North America, Radiology Editorial Office, Boston, MA, USA

Conflict of Interest Disclosures

None of the authors report any disclosures that pertain to the content of this abstract. Other disclosures that do not pertain to this research include royalties from Medrad/Bayer-Shering for endorectal coil (Herbert Kressel); consultant for Spiration, Olympus (Alexander Bankier); consultant for Hologic, expert testimony for Ameritox (Elkan Halpern).

Authors’ Assessment of the Impact and Value of Statistical Review in a General Medical Journal

Catharine Stack,1,2 John Cornell,3 Steven Goodman,4 Michael Griswold,5 Eliseo Guallar,6 Christine Laine,1,2 Russell Localio,7 Alicia Ludwig,2 Anne Meibohm,1,2 Cynthia Mulrow,1,2 Mary Beth Schaeffer,1,2 Darren Taichman,1,2 Arlene Weissman2

Objective

Statistical methods for clinical research are complex, and statistical review procedures vary across journals. We sought authors’ views about the impact of statistical review on the quality of their articles at one general medical journal.

Design

Corresponding authors of all articles published in Annals of Internal Medicine in 2012 that underwent statistical review received an online survey. Surveys were anonymous and authors were informed that individual responses would not be linked to their papers. Per standard procedures, all provisional acceptances and revisions received statistical review. We asked authors about the amount of effort needed to respond to the statistical review, the difficulty in securing the necessary statistical resources, and the degree to which statistical review had an impact on the quality of specific sections of and the overall published article. Authors of rejected articles were not surveyed because rejected papers rarely receive full statistical review.

Results

The online survey was completed by 74 of 94 (79%) corresponding authors. Response rates varied by study design (90% for randomized trials, 83% for cohort studies, 74% for systematic reviews) and by number of revisions (90% for 3 revisions, 80% for 2 revisions, 65% for 1 revision). Published studies included reports of original research (73%), systematic reviews/meta-analyses (23%), and decision analyses (4%). Of the papers, 21%, 49%, and 29% required 1, 2, and 3 or more revisions, respectively. Of the authors, 61% reported a moderate or large increase in the overall quality of the paper as a result of the statistical review process; 61% and 59% noted improvements to the statistical methods and results sections, respectively; and 19% reported improvements to the conclusions section. Sixty-four percent of authors indicated considerable effort was required to respond to the statistical editor’s comments. A similar proportion (65%) reported that the effort required was worth the improved quality. Thirty-two percent of authors reported having some difficulty in securing the statistical support needed to respond to the statistical editor’s comments, and 5% reported having a lot of difficulty.

Conclusions

The majority of authors whose papers received statistical review reported that the statistical review process improved their articles. Most authors reported that the effort required to respond to the statistical reviewer’s comments was considerable but worthwhile.

1Annals of Internal Medicine, Philadelphia, PA, USA, cstack@acponline.org; 2American College of Physicians, Philadelphia, PA, USA; 3University of Texas Health Science Center, San Antonio, TX, USA; 4Stanford University, Stanford, CA, USA; 5University of Mississippi, Jackson, MS, USA; 6Johns Hopkins University, Baltimore, MD, USA; 7University of Pennsylvania, Philadelphia, PA, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

No external funding was provided for this study. Contributions of staff time and resources came from Annals of Internal Medicine.

Mentored Peer Review of Standardized Manuscripts as an Educational Tool

Victoria S. S. Wong,1 Roy E. Strowd III,2 Rebeca Aragón-García,3 Mitchell S. V. Elkind3

Objective

To determine whether mentored peer review of standardized scientific manuscripts with introduced errors is a feasible and effective educational tool for teaching neurology residents the fundamental principles of research methodology and peer review.

Design

A partially blinded, randomized, controlled multicenter pilot study was designed. Neurology residents (PGY-3 and PGY-4) were recruited from 9 sites. Standardized manuscripts with introduced errors were created and distributed at 2-month intervals: a baseline manuscript, 3 formative manuscripts, and a final (postintervention) manuscript. All residents were asked to peer review these manuscripts. Residents were randomized at enrollment to receive or not receive faculty mentoring on appropriate peer review technique and manuscript assessment after each review. Pretests and posttests were administered to determine improvement in knowledge of research methodology before and after peer reviews. The Review Quality Instrument (RQI), a validated, objective measure to assess quality of peer reviews, was used to evaluate baseline and final reviews blinded to assigned group.

Results

Seventy-eight neurology residents were enrolled (mean age of 30.8 ± 2.6 years; 39 [50%] male; mean duration of 3.4 years [range 2-10] since medical school graduation). Mean pretest score was 13.2 ± 2.7 correct of 20 questions. Sixty-four residents (82% of enrolled; 30 nonmentored, 34 mentored) returned a review of the first manuscript, 49 (77% of active participants) returned the second, 35 (55%) returned the third, 28 (47%) returned the fourth, and 45 (71%; 24 nonmentored, 21 mentored) returned a review of the final manuscript. Ten residents were withdrawn due to lack of participation, and 5 asked to be withdrawn. Preliminary RQI evaluation of the first manuscript reviews by a single reviewer revealed an average score of 26.4 ± 6.7 out of 40 points. The RQI evaluation of the reviews from the first and final manuscripts by 2 independent reviewers is pending.

Conclusion

This multicenter pilot study will determine whether peer review of standardized manuscripts with faculty mentoring is a feasible and potentially effective method for teaching neurology residents about peer review and research methodology.

1Oregon Health & Science University, Department of Neurology, Portland, OR, USA, vwongmd@gmail.com; 2Wake Forest School of Medicine, Department of Neurology, Winston-Salem, NC, USA; 3Columbia University, Department of Neurology, New York, NY, USA

Conflict of Interest Disclosures

Mitchell Elkind receives compensation from the American Academy of Neurology for serving as the associate editor for the Resident and Fellow Section.

Funding/Support

This project is funded by an Education Research Grant from the American Academy of Neurology Institute (AANI, the American Academy of Neurology’s education affiliate). The AANI is aware of this abstract submission, but otherwise has no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the abstract.

The Reporting of Randomized TrIals: Changes After Peer RevIew (CAPRI)

Sally Hopewell,1,2 Gary Collins,1 Ly-Mee Yu,1 Jonathan Cook,1,3 Isabelle Boutron,2 Larissa Shamseer,4 Milensu Shanyinde,1 Rose Wharton,1 Douglas G. Altman1

Objective

Despite the wide use of peer review, little is known about its impact on the quality of reporting of published research. The numerous reviews showing poor reporting in published research suggest that peer reviewers frequently fail to detect important deficiencies. The aims of our study are to examine (1) the nature and extent of changes made to manuscripts after peer review, in relation to the reporting of methodological aspects of randomized trials, and (2) the type of methodological changes requested by peer reviewers and the extent to which authors adhere to these requests.

Design

This is a retrospective, before-and-after study. We included all primary reports of randomized trials published in BMC Medical Series journals in 2012. We chose these journals because they publish all submitted versions of a manuscript together with the corresponding peer review comments and author responses. By accessing the prepublication history for each published trial report, we examined, first, any differences (ie, additions, changes, or subtractions) in reporting between the original and final submitted versions of the manuscript and, second, whether or not specific CONSORT items were reported. From the prepublication history for each report, we also assessed peer reviewers’ comments with regard to the reporting of methodological issues. Our main outcome is the percentage improvement in reporting following peer review, measured as the number of CONSORT checklist items reported.

Results

We identified 86 primary reports of randomized trials. The median interval between the original and final submitted versions of a manuscript was 146 days (range, 29-333), with 36 days (range, 7-127) from final submission to online publication. The number of submitted versions of a manuscript varied (median, 3; range, 2-7), with a median of 2 peer reviewers and 2 review rounds per manuscript. Changes between the original and final submitted versions were common; overall, the median proportion of words deleted from the original manuscript was 11% (range, 1%-59%), and the median proportion of words added was 20% (range, 4%-68%).

Conclusions

There was substantial variation in both the time elapsed and the scale of revisions between the original and final submitted versions. Further data extraction and an in-depth analysis of the nature and extent of these changes are under way.

1Centre for Statistics in Medicine, University of Oxford, Oxford, UK, sally.hopewell@csm.ox.ac.uk; 2Centre d’Epidémiologie Clinique, Université Paris Descartes, Paris, France; 3Health Services Research Unit, University of Aberdeen, Aberdeen, UK; 4Ottawa Hospital Research Institute, Ottawa, ON, Canada

Conflict of Interest Disclosures

None reported.

Ethical Issues and Misconduct

Identical or Nearly So: Duplicate Publication as a Separate Publication Type in PubMed

Mario Malički,1 Ana Utrobičić,2 Ana Marušić1

Objective

“Duplicate Publication” was introduced into Medical Subject Headings (MeSH) of the National Library of Medicine (NLM) in 1991 as a separate publication type and is defined as “work consisting of an article or book of identical or nearly identical material published simultaneously or successively to material previously published elsewhere, without acknowledgment of the prior publication.” Our aim was to assess how journals corrected duplicate publications indexed by NLM and how these corrections were visible in PubMed.

Design

The data set included 1,011 articles listed as “duplicate publication [pt]” in PubMed on January 16, 2013. We checked PubMed to identify if duplicate articles were linked with a Correction/Comment notice. We also checked the journals’ websites for published notices and identified the reasons provided for the duplications in those notices. The time from the duplicate article publication to the notice of duplication/retraction in PubMed and/or journals was also recorded.

Results

A total of 624 duplicate publications (61.7%) identified in PubMed lacked any notice in respective journals. There were 152 notices of 342 duplicate publications (ie, published twice or more times) found in journals and marked as Comments/Corrections in PubMed. The reasons for duplications are presented in Table 6. Median time from duplicate publication to notice of duplication was 8 months (95% CI, 6-10). Of articles with notices, 130 of 152 were available online, but only 34 (26.1%) had links to the published notices of duplication. Of indexed duplicate publications, 24 (2.4%) articles were retracted: 10 due to publishers’ errors, 11 due to authors’ errors (3 notices could not be accessed). Of these retractions, 14 were marked as “retracted publication [pt]” in PubMed, and 10 more retractions were found only at journal websites.

Table 6. Reasons for Duplication of Articles With Published Notices of Duplicate Publication (n=152)

Percentages for statements within individual categories are calculated relative to the total for the category.

Conclusions

More than half of duplicate publications identified in PubMed have not been corrected by journals. All stakeholders in research publishing should take seriously the integrity of the published record and take a proactive role in alerting the publishing community to redundant publications.

1University of Split School of Medicine, Department of Research in Biomedicine and Health, Split, Croatia, ana.marusic@mefst.hr; 2Central Medical Library, University of Split School of Medicine, Split, Croatia

Conflict of Interest Disclosures

None reported.

Fate of Articles That Warranted Retraction Due to Ethical Concerns: A Descriptive Cross-sectional Study

Nadia Elia,1 Elizabeth Wager,2 Martin R. Tramèr3

Objective

Guidelines on how to retract articles exist. Our objective was to verify whether articles that warranted retraction due to ethical concerns have been retracted, and whether this had been done according to published guidelines.

Design

Descriptive cross-sectional study, as of January 2013, of 88 articles by Joachim Boldt, published in 18 journals, which warranted retraction since the State Medical Association of Rheinland-Pfalz (Germany) was unable to confirm approval by an ethics committee. According to the recommendations of the Committee on Publication Ethics, we regarded a retraction as adequate when a retraction notice was published, linked to the retracted article, identified title and authors of the retracted article in its heading, explained the reason and who took responsibility for the retraction, and was freely accessible. Additionally, we expected the full text of retracted articles to be freely accessible and marked using a transparent watermark that preserved original content. Two authors extracted the data independently and contacted editors in chief for clarification in cases of inadequate retraction.

Results

Five articles (5.7%), from 1 journal, fulfilled all criteria for adequate retraction. Nine articles (10.2%) were not retracted (no retraction notice published, full-text article not marked). A total of 79 (90%) retraction notices were published, 76 (86%) were freely accessible, but only 15 (17%) were complete. Seventy-three (83%) full-text articles were marked as retracted, of which 14 (15.9%) had an opaque watermark hiding parts of the original content, and 11 (12.5%) had all original content deleted. Fifty-nine (67%) retracted articles were freely accessible. One editor in chief claimed personal problems to explain incomplete retractions; 8 blamed their publishers. One publisher regretted that no mechanism existed for previous publishers to undertake retractions that were required after the publisher had changed. Two publishers mentioned legal threats from Boldt’s coauthors that prevented them from retracting 6 articles.

Conclusions

Guidelines for retraction of articles are incompletely followed. The role of publishers in the retraction process needs to be clarified and standards are needed on how to mark the text of retracted articles. Legal safeguards are required to allow the retraction of articles against the wishes of authors.

1Division of Anaesthesiology, Geneva University Hospitals, Geneva, Switzerland, nadia.elia@hcuge.ch; 2Sideview, Princes Risborough, UK; 3Division of Anaesthesiology, Geneva University Hospitals, and Medical Faculty, University of Geneva, Geneva, Switzerland

Conflict of Interest Disclosures

Nadia Elia is associate editor, European Journal of Anaesthesiology. Elizabeth Wager is a coauthor of COPE retraction guidelines. Martin Tramèr is editor in chief, European Journal of Anaesthesiology. No other disclosures were reported.

Value of Plagiarism Detection for Manuscripts Submitted to a Medical Specialty Journal

Heidi L. Vermette,1 Rebecca S. Benner,1 James R. Scott1,2

Objective

To assess the value of a plagiarism detection system for manuscripts submitted to Obstetrics & Gynecology.

Design

All revised manuscripts identified as candidates for publication between January 24, 2011, and December 26, 2012, were evaluated using the CrossCheck plagiarism detection software. Outcomes were (1) number and percentage of manuscripts with clear plagiarism, minor copying of short phrases only, redundancy, or no problem as defined by the Committee on Publication Ethics (COPE); (2) time needed to check each manuscript; and (3) actions taken for manuscripts with violations.

Results

Clear plagiarism was detected in 1 (0.3%) of 312 manuscripts in 2011 and 0 of 368 manuscripts in 2012 (Table 7). Forty (12.8%) manuscripts in 2011 and 21 (5.7%) manuscripts in 2012 contained minor copying of short phrases only or redundancy. Our staff spent a mean time of 19.5 minutes per manuscript checking for plagiarism in 2011, and this decreased to 10.2 minutes per manuscript in 2012 (P<.001). The plagiarized manuscript, a case report, was rejected. A detailed description of the plagiarized content was included in the letter to the author. The authors of manuscripts with minor problems and redundancy were asked to rewrite or properly attribute the reused or redundant passages, and all of these manuscripts were eventually published.

Table 7. Detection Rate and Time Spent on Plagiarism Checking

aDefinitions from COPE available at http://publicationethics.org/files/u2/02A_plagiarism_submitted.pdf.

bPer COPE: “unattributed use of large portions of text and/or data, presented as if they were by the plagiarist.”

cPer COPE: for example, “in discussion of research paper from non-native language speaker; No misattribution of data.”

dPer COPE: “copying from author’s own work.”

Conclusions

CrossCheck is a practical tool for detecting duplication of exact phrases, but it is limited to searching the English-language literature accessible via various repositories and the Internet and does not replace careful peer review and editor assessment. Checking manuscripts for plagiarism represents a substantial time investment for staff. Two years of data from our journal indicate that the number of problems detected was low, but systematic assessment of manuscripts uncovered problems that otherwise would have appeared in print.

1Obstetrics & Gynecology, American College of Obstetricians and Gynecologists, Washington, DC, USA, rbenner@greenjournal.org; 2University of Utah School of Medicine, Department of Obstetrics and Gynecology, Salt Lake City, UT, USA

Conflict of Interest Disclosures

None reported.

Implementation of Plagiarism Screening for the PLOS Journals

Elizabeth Flavall,1 Virginia Barbour,2 Rachel Bernstein,1 Katie Hickling,2 Michael Morris1

Objective

PLOS publishes a range of journals, from highly selective to 1 that assesses submissions based only on objective criteria. Overall the journals receive more than 5,000 submissions per month. We aimed to assess the quantity of submitted manuscripts with potential plagiarism issues along with the staffing requirements and optimal procedures for implementing plagiarism screening using CrossCheck/iThenticate software across the PLOS journals.

Design

Consecutively submitted research articles were screened until the following numbers were reached: PLOS Medicine (n=203), PLOS Pathogens (n=250), and PLOS ONE (n=241). Articles were screened at initial submission prior to any revision requests. Initial screening was performed by junior staff members, and potential issues were elevated in house for further screening. Manuscripts were classified as major problem, minor problem, and no problem. Extent, originality, and context were assessed using guidance from the Committee on Publication Ethics (COPE) Discussion Document “How should editors respond to plagiarism?” Manuscripts with major problems were flagged but otherwise continued through the review process as normal. Any flagged manuscripts that were not rejected in the first round of review were elevated to journal editors and managers for follow-up.

Results

The percentages of manuscripts classified as having any problem (major or minor) are shown in Figure 3. Of the manuscripts classified as major problem, almost all were rejected in the first round of review for reasons unrelated to the screening results. Only 1 manuscript with major problems, submitted to PLOS Pathogens, was elevated to the editors for follow-up as it was not rejected following peer review. Minor problems consisted primarily of small instances of self-plagiarism and attributed copying; follow-up was deemed unnecessary for the purposes of this study. The time per manuscript for screening was on average 7 minutes (range, 2-60 minutes). More time was spent on articles with problems. The total screening time per journal over the 3-month pilot was 16 hours for PLOS Medicine, 29 hours for PLOS Pathogens, and 42 hours for PLOS ONE.

Figure 3. Percent of Manuscripts Classified as Having a Problem When Screened for Plagiarism

Conclusion

Based on the results of this study, we have determined that plagiarism screening is feasible at PLOS and have made recommendations for the optimal screening procedures on the different journals.

1PLOS, San Francisco, CA, USA, eflavall@plos.org; 2PLOS, Cambridge, UK

Conflict of Interest Disclosures

Virginia Barbour spoke on behalf of the Committee on Publication Ethics, of which she is Chair, at the 5th International Plagiarism Conference in 2012, which was organized by Plagiarism Today, and which is part of the company that makes iThenticate software. The authors report no other conflicts of interest.

Funding/Support

The authors are employees of PLOS; this work was conducted during their salaried time.

Publication Ethics: 16 Years of COPE

Irene Hames,1 Charon A. Pierson,2 Natalie E. Ridgeway,3 Virginia Barbour4

Objective

The Committee on Publication Ethics (COPE) holds a quarterly forum in which journal editors from its 8,500-strong membership can bring publication-ethics cases for discussion and advice. Since it was established in 1997, COPE has amassed a collection of almost 500 cases. We set out to develop a more comprehensive classification scheme and to classify all the cases, providing a finer level of detail for analysis. We wanted to determine trends and establish whether the analysis could be used to guide the development of new ethical guidelines.

Design

A new taxonomy was developed, comprising 18 main classification categories and 100 key words. Cases were assigned up to 2 classification categories to denote the main topics and up to 10 key words to cover all the issues. Classification and key word coding denotes that a topic was raised and discussed, not that a particular form of publication misconduct had occurred.

Results

Between 1997 and the end of 2012, 485 cases were brought to COPE. These received 730 classification categories. The number of cases presented annually varied (from 16 to 42), but with no clear pattern, and there was no increase in cases when COPE’s membership increased from 350 to about 3,500 in 2007-2008. The number of classifications per case has, however, increased gradually, from a mean of 1.3 per case in 1997-2000 to 1.7 per case in 2009-2012. Authorship and plagiarism have been and remain major topics (Figure 4). Categories of cases that have increased most noticeably are correction of the literature, data, misconduct/questionable behavior, conflicts of interest, and peer review. Cases involving questionable/unethical research and redundant/duplicate publication remain prominent but have been decreasing in recent years.

Figure 4. Classification of COPE Cases, 1997-2012 (for Categories With More Than 7 Classifications in a 4-Year Period)

Conclusions

The increasing number of case classification categories over time suggests an increasing complexity in the cases being brought to COPE. Cases in a number of categories are increasing. A key word analysis is being undertaken to provide a greater level of detail and to determine whether new guidelines could effectively be developed in any of those areas. The study is also being used to identify cases for inclusion in educational packages of publication-ethics cases for use by beginners through to experienced groups.

1Irene Hames Consulting, York, UK, irene.hames@gmail.com; 2Journal of the American Association of Nurse Practitioners; Charon A. Pierson Consulting, El Paso, TX, USA; 3Committee on Publication Ethics (COPE), UK; 4PLOS Medicine, Cambridge, UK

Conflict of Interest Disclosures

Irene Hames reports being a COPE Council member, director, and trustee since November 2010 (unpaid); and owner of Irene Hames Consulting (an editorial consultancy advising and informing the publishing, higher education, and research sectors); Charon Pierson reports being a COPE Council member since March 2011 (unpaid); and owner of Charon A. Pierson Consulting (paid consultant for geriatric education grants and online course development in gerontology at various universities). Natalie Ridgeway reports being the COPE Operations Manager since March 2010 (paid employee). Virginia Barbour reports being the chair of COPE since March 2012 and was previously COPE Secretary and Council Member (all unpaid).

Funding/Support

There is no funding support for this project; it is being carried out on a voluntary basis; all expenses necessary to carry out the project are being covered by COPE, as are the costs involved in attending the Congress for presentation of the study.

Monday, September 9

Bias

Underreporting of Conflicts of Interest Among Trialists: A Cross-sectional Study

Kristine Rasmussen,1 Jeppe Schroll,1,2 Peter C. Gøtzsche,1,2 Andreas Lundh1

Objective

To determine the prevalence of conflicts of interest (COI) among non–industry-employed Danish physicians who are authors of clinical trials and to determine the number of undisclosed conflicts of interest in trial publications.

Design

We searched EMBASE for papers with at least 1 Danish author. Two assessors included the 100 most recent papers of drug trials published in international journals that adhere to the ICMJE’s manuscript guidelines. For each paper, 2 assessors independently extracted data on trial characteristics and author COI. We calculated the prevalence of disclosed COI among non–industry-employed Danish physician authors and described the type of COI. We compared the COI reported in the papers to those reported on the publicly available Danish Health and Medicines Authority’s disclosure list to identify undisclosed COI.

Results

Preliminary analysis of the first 50 included papers found 27 papers with industry sponsorship, 14 with mixed sponsorship, and 9 with nonindustry sponsorship. Of a total of 563 authors, 171 (30%) were non–industry-employed Danish physicians. Forty-four (26%) of these authors disclosed 1 or more COI in the journal. Among the 171 authors, 19 (11%) had undisclosed COI related to the trial sponsor or manufacturer of the drug being studied, and 45 (26%) had undisclosed COI related to competing companies manufacturing drugs for the same indication as the trial drug. Full analysis of all 100 trials and further exploration of data will be presented at the conference.

Conclusions

Our preliminary results suggest that there is substantial underreporting of COI in clinical trials. Publicly available disclosure lists may assist journal editors in ensuring that all relevant COI are disclosed.

1The Nordic Cochrane Centre, Rigshospitalet, Copenhagen, Denmark, al@cochrane.dk; 2Faculty of Health and Medical Sciences, University of Copenhagen, Denmark

Conflict of Interest Disclosures

None reported.

Funding/Support

This study received no external funding. The Nordic Cochrane Centre provided in-house resources.

Outcome Reporting Bias in Trials (ORBIT II): An Assessment of Harm Outcomes

Jamie Kirkham,1 Pooja Saini,1 Yoon Loke,2 Douglas G. Altman,3 Carrol Gamble,1 Paula Williamson1

Objective

The prevalence and impact of outcome reporting bias (ORB), whereby outcomes are selected for publication on the basis of the result, have previously been quantified for benefit outcomes in randomized controlled trials (RCTs) on a cohort of systematic reviews. Important harm outcomes may also be subject to ORB where trialists prefer to focus on the positive benefits of an intervention. The objectives of this study were to (1) estimate the prevalence of selective outcome reporting of harm outcomes in a cohort of both Cochrane reviews and non-Cochrane reviews and (2) understand the mechanisms that may lead to incomplete reporting of harms data.

Design

A classification system for detecting ORB for harm outcomes in RCTs and nonrandomized studies was developed and applied to both a cohort of Cochrane systematic reviews and non-Cochrane reviews that considered the synthesis of specific harms data as their main objective. An e-mail survey of trialists from the included trials in the cohort of reviews was also undertaken to examine how harms data are collected and reported in clinical studies.

Results

A total of 234 reviews were identified for the non-Cochrane review cohort and 244 new reviews for the Cochrane review cohort. In 77% (180/234) of the non-Cochrane reviews, there was suspicion of ORB in at least 1 trial. Forty-nine percent (89/180) could not be fully assessed for ORB due to shortcomings in the review reporting standards. In the Cochrane review cohort, many reviews also were not assessable as harm outcomes were poorly specified. Study findings from the reviews in which a full assessment for ORB could be carried out for both the cohorts will be presented. Responses from the trialist survey and an example of how ORB can influence the benefit-harm ratio will also be presented.

Conclusions

The trade-off between benefits and harms is central to clinical decision making. Considering both the benefits and harms of an intervention in an unbiased way is essential for making reliable benefit-harm assessments.

1Department of Biostatistics, University of Liverpool, Liverpool, UK, jjk@liv.ac.uk; 2School of Medicine, University of East Anglia, Norwich, UK; 3Centre for Statistics in Medicine, University of Oxford, Oxford, UK

Conflict of Interest Disclosures

Yoon Loke is a co-convenor of the Cochrane Adverse Effects Methods Group. No other disclosures were reported.

Funding/Support

The Outcome Reporting Bias In Trials (ORBIT II) project is funded by the Medical Research Council (MRC research grant MR/J004855/1). The funders had no role in the study design, data collection, and analysis.


Systematic Review of Evidence for Selective Reporting of Analyses

Kerry Dwan,1 Paula R. Williamson,1 Carrol Gamble,1 Julian P. T. Higgins,2 Jonathan A. C. Sterne,2 Douglas G. Altman,3 Mike Clarke,4 Jamie J. Kirkham1

Objective

Selective reporting of information or discrepancies may occur for many aspects of a trial. Examples include the selective reporting of outcomes and the selective reporting of analyses (eg, subgroup analyses or per protocol rather than intention-to-treat analyses). Selective reporting bias occurs when the inclusion of analyses in the report is based on the results of those analyses. Discrepancies occur when there are changes between protocol and publication. The objectives of this study were (1) to review and summarize the evidence from studies that have assessed discrepancies or the selective reporting of analyses in randomized controlled trials and (2) to compare current reporting guidelines to identify where improvement is needed.

Design

Systematic review of studies that have assessed discrepancies or the selective reporting of analyses in randomized controlled trials. The Cochrane Methodology Register, MEDLINE, and PsycINFO were searched in May 2013. Cohorts containing randomized controlled trials (RCTs) were eligible. This review provides a descriptive summary of the included empirical studies. In collaboration with experts in this area, current guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) statement and International Conference on Harmonisation (ICH) guidelines, were compared to identify the specific points that address the appropriate reporting of a clinical trial with respect to outcomes, outcome measures, subgroups, and analyses and to assess whether improvements are needed.

Results

Eighteen studies were included in this review. Ten compared details within published reports, 4 compared protocols to publications, and 4 compared company documents or documents submitted to regulatory agencies with publications. The studies considered discrepancies in statistical analyses (7), subgroup analyses (9), and composite outcomes (2). No studies assessed whether the reporting of analyses was selective (ie, based on the results). There were discrepancies in statistical analyses in 22% to 88% of RCTs, in unadjusted vs adjusted analyses (46% to 82%), and in subgroup analyses (31% to 100%). Composite outcomes were inadequately reported.

Conclusion

This work highlights the evidence of selective reporting and discrepancies and demonstrates the importance of prespecifying analysis and reporting strategies during the planning and design of a clinical trial in order to minimize bias when the findings are reported.

1University of Liverpool, Department of Biostatistics, Liverpool, UK, kdwan@liverpool.ac.uk; 2University of Bristol, Bristol, UK; 3University of Oxford, Oxford, UK; 4Queen’s University, Belfast, UK

Conflict of Interest Disclosures

Kerry Dwan is a coauthor of the Outcome Reporting Bias In Trials (ORBIT) study. Paula Williamson is a coauthor of one of the included studies in the review and coauthor of the Outcome Reporting Bias In Trials (ORBIT) study. Carrol Gamble is a coauthor of the Outcome Reporting Bias In Trials (ORBIT) study. Douglas G. Altman is a coauthor of one of the included studies in the review, coauthor of the Outcome Reporting Bias In Trials (ORBIT) study, and a coauthor of the CONSORT statement. Jamie Kirkham is a coauthor of the Outcome Reporting Bias In Trials (ORBIT) study. Julian Higgins, Jonathan Sterne, and Mike Clarke report no conflicts of interest.

Funding/Support

This work was supported by the MRC Network of Hubs for Trial Methodology Research. The sponsor had no role in this work.


Impact of Spin in the Abstract on the Interpretation of Randomized Controlled Trials in the Field of Cancer: A Randomized Controlled Trial

Isabelle Boutron,1-4 Douglas G. Altman,5 Sally Hopewell,1,4,5 Francisco Vera-Badillo,6 Ian Tannock,6 Philippe Ravaud1-4,7

Objective

Spin is defined as a specific way of reporting that aims to convince readers that the beneficial effect of the experimental treatment is greater than that shown by the results. The aim of this study was to assess the impact of spin in abstracts of randomized controlled trials (RCTs) in the field of cancer with non–statistically significant results on readers’ interpretation of the findings.

Design

A 2-arm parallel-group RCT comparing the interpretation of results in abstracts with or without spin. From a collection of articles identified in previous work, we selected a sample of reports describing negative (ie, statistically nonsignificant primary outcome) RCTs with 2 parallel arms evaluating treatments in the field of cancer and having spin in the abstract conclusion. Selected abstracts were rewritten by 2 researchers according to specific guidelines to remove spin. All abstracts were presented in the same format without identifying authors or journal names. The names of treatments were masked by using generic terms (eg, experimental treatment A). Corresponding authors (n=300) of clinical trials indexed in PubMed, blinded to the objectives of our study, were randomized using centralized computer-generated randomization to evaluate 1 abstract with spin or 1 abstract without spin. The primary endpoint was the interpretation of abstract results by the participants. After reading each abstract, participants answered the following question: “Based on this abstract, do you think treatment A would be beneficial to patients?” (answered on a numerical scale from 0-10).

Results

Three hundred participants were randomized; 150 assessed an abstract with spin and 150 an abstract without spin. Based on abstracts with spin, the experimental treatment was rated as more beneficial (scale 0-10, mean [SD], 3.6 [2.5] vs 2.9 [2.6]; P=.02), the trial was rated as less rigorous (4.5 [2.4] vs 5.1 [2.5]; P=.04), and participants were more interested in reading the full-text article (5.1 [3.2] vs 4.3 [3.0]; P=.03). There was no statistically significant difference for the importance of the study (4.6 [2.4] vs 4.9 [2.4]; P=.17) or the need to run another trial (4.8 [2.9] vs 4.2 [2.9]; P=.06).

Conclusion

Spin in abstracts of RCTs in the field of cancer may have an impact on the interpretation of these trials.

1INSERM U738, Paris, France; 2Centre d’Épidémiologie Clinique, AP-HP (Assistance Publique des Hôpitaux de Paris), Hôpital Hôtel Dieu, Paris, France, isabelle.boutron@htd.aphp.fr; 3Paris Descartes University, Sorbonne Paris Cité, Faculté de Médecine, Paris, France; 4French Cochrane Center, Paris, France; 5Centre for Statistics in Medicine, Oxford University, Oxford, UK; 6University of Toronto, Toronto, ON, Canada; 7Department of Epidemiology, Columbia University Mailman School of Public Health, New York, NY, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

The study received a grant from the French Ministry of Health (Programme Hospitalier de Recherche Clinique cancer) and the Fondation pour la Recherche Médicale (FRM, Equipe Espoir de la Recherche 2010). The funder had no role in the design and the conduct of this study.

Publication Bias


Authors’ Reasons for Unpublished Research Presented at Biomedical Conferences: A Systematic Review

Roberta W. Scherer, Cesar Ugarte-Gil

Objective

Only about half of studies presented in conference abstracts are subsequently published in full. Reasons for not publishing abstract results in full are often attributed to the expectation of journal rejection. We aimed to systematically review studies that asked abstract authors for reasons for failing to publish abstract results in full.

Design

We searched MEDLINE, EMBASE, the Cochrane Library, Web of Science, and references cited in eligible studies in November 2012 for studies examining full publication of results at least 2 years after presentation at a conference. We included studies if investigators contacted abstract authors for reasons for nonpublication. We independently extracted information on methods used to contact abstract authors, study design, and reasons for nonpublication. We calculated a weighted mean average of the proportion of each type of reason, weighted by the total number of responses per study.
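For readers who want to see the weighting spelled out, a minimal sketch of such a weighted mean average is shown below; the per-study counts and proportions are hypothetical placeholders, not data from this review.

```python
# Illustrative only: hypothetical per-study data, not figures from the review.
# Each tuple is (number of responses in the study, proportion citing "lack of time").
studies = [(40, 0.35), (25, 0.20), (60, 0.33)]

def weighted_mean_average(studies):
    """Weight each study's proportion by its total number of responses."""
    total_responses = sum(n for n, _ in studies)
    return sum(n * p for n, p in studies) / total_responses

print(f"WMA = {weighted_mean_average(studies):.3f}")  # 0.310 for these placeholder values
```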

Results

We identified 27 (of 367) studies published between 1992 and 2011 that were eligible for this study. The mean full publication rate was 56% (95% CI, 55%-57%; n=24); 7 studies reported on abstracts describing clinical trials. Investigators typically sent a closed-ended questionnaire with free-text options to the lead and then successive authors until receiving a response. Of 24 studies that itemized reported reasons, 6 collected information on the most important reason. Lack of time comprised 31.5% of reasons in studies that had included this as a reason and 45.4% of the most important reasons (Table 8). Other commonly stated reasons were lack of resources, publication not an aim, low priority, incomplete study, and trouble with coauthors. Limitations of these results include heterogeneity across studies and self-report of reasons by authors.

Table 8. Proportion of Reasons for Nonpublication by Total Number of Reasons Reported


Columns include studies reporting multiple reasons, the most important reason, or the reasons reported by authors of clinical trial abstracts. WMA indicates weighted mean average.

Conclusion

Across medical specialties, the main reasons for not subsequently publishing an abstract in full lie with factors related to the abstract author rather than with the journal.

Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA, rscherer@jhsph.edu

Conflict of Interest Disclosures

None reported.

Funding/Support

Support was provided by the National Eye Institute, National Institutes of Health (U01EY020522-02). The sponsor had no input in the design or conduct of this study.


Role of Editorial and Peer Review Processes in Publication Bias: Analysis of Drug Trials Submitted to 8 Medical Journals

Marlies van Lent,1 John Overbeke,2 Henk Jan Out1

Objective

Positive publication bias has been widely addressed. It has mainly been ascribed to authors and sponsors failing to submit negative studies, but may also result from lack of interest from editors. In this study, we evaluated whether submitted manuscripts with negative outcomes were less likely to be published than studies with positive outcomes.

Design

A retrospective study of manuscripts reporting results of randomized controlled trials (RCTs) submitted to 8 medical journals between January 1, 2010, and April 30, 2012, was done. We included 1 general medical journal (BMJ) and 7 specialty journals (Annals of the Rheumatic Diseases, British Journal of Ophthalmology, Diabetologia, Gut, Heart, Journal of Hepatology, and Thorax). We selected journals with the highest Impact Factors within their subject categories, according to the Institute for Scientific Information Journal Citation Reports 2011, that had published a substantial number of drug RCTs in 2010-2011. Original research manuscripts were screened, and those reporting results of RCTs were included if at least 1 study arm assessed the efficacy or safety of a drug intervention and a statistical test was used to evaluate treatment effects. Manuscripts were either outright rejected, rejected after external peer review, or accepted for publication. Trials were classified as nonindustry, industry-supported, or industry-sponsored, and outcomes as positive or negative, based on predefined criteria.

Results

Of 15,972 manuscripts submitted, we identified 472 drug RCTs (3.0%), of which 98 (20.8%) were accepted for publication. Among submitted drug RCTs, 287 (60.8%) had positive and 185 (39.2%) had negative results. Of these, 135 (47.0%) and 86 (46.5%), respectively, were rejected immediately, and 91 (31.7%) and 61 (33.0%) were rejected after peer review. In total, 60 of the submitted positive studies (20.9%) and 38 of the submitted negative studies (20.5%) were published. One positive study was withdrawn by the authors before an editorial decision was made. Nonindustry trials (n=213) had positive outcomes in 138 manuscripts (64.8%), compared to 78 (70.9%) of industry-sponsored studies (n=110). Industry-supported trials (n=149) were positive in 71 manuscripts (47.7%) and negative in 78 (52.3%).

Conclusion

Submitted manuscripts on drug RCTs with negative outcomes are not less likely to be published than those with positive outcomes.

1Clinical Research Centre Nijmegen, Department of Pharmacology–Toxicology Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands, M.vanLent@pharmtox.umcn.nl; 2Medical Scientific Publishing, Department of Primary and Community Care, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands

Conflict of Interest Disclosures

Henk Jan Out is an employee of Teva Pharmaceuticals in addition to his professorship at the university. John Overbeke is the immediate past president of the World Association of Medical Editors (WAME). Marlies van Lent reports no conflicts of interest.

Funding/Support

This research was supported by an unrestricted educational grant from MSD. MSD (Merck Sharp & Dohme) B.V. is a Dutch subsidiary of Merck & Co, Inc, located in Oss, the Netherlands. The funder had no role in the study’s design, data collection and analysis, or the preparation of this abstract.


Accessing Internal Company Documents for Research: Where Are They?

L. Susan Wieland,1 Lainie Rutkow,2 S. Swaroop Vedula,3 Christopher N. Kaufmann,2 Lori Rosman,4 Claire Twose,3,4 Nirosha Mahendraratnam,2 Kay Dickersin2

Objective

Internal pharmaceutical company data have attracted interest because they have been shown to sometimes differ from what is publicly reported and because they can make unpublished data available to researchers. However, internal company documents are not readily available, and researchers have obtained them through litigation and requests to regulatory authorities. In contrast, repositories of internal tobacco industry documents, created through massive litigation, have supported diverse and informative research. Our objective was to describe the sources of internal corporate documents used in health research, in order to document where these data, which could be useful to researchers, are located.

Design

Although our main interest was pharmaceutical industry documents, our initial search strategy was designed to identify research articles that used internal company documents from any industry. We searched PubMed and EMBASE, and 2 authors independently reviewed retrieved records for eligibility. We checked our findings against the Tobacco Documents Bibliography (http://www.library.ucsf.edu/tobacco/docsbiblio), citations to included articles, and lists from colleagues. When we discovered that we had missed many articles, informationists redesigned and ran an additional search to identify articles using pharmaceutical documents.

Results

Our initial electronic searches retrieved 9,305 records, of which 357 were eligible for our study. Ninety-one percent (325/357) used tobacco documents, 5% (17/357) pharmaceutical documents, and 4% (15/357) other industry documents. Most articles (325/357) posed research questions about the strategic behavior of the company. Despite extensive testing, our search did not retrieve all known studies: we missed 41% of articles listed in the Tobacco Documents Bibliography, and reference lists led to 4 additional eligible pharmaceutical studies. Our redesigned search yielded 26,605 citations not identified by the initial search, which we decided was an impractical number to screen.

Conclusions

Searching for articles using internal company documents is difficult and resource-intensive. We suggest that indexed and curated repositories of internal company documents relevant to health research would facilitate locating and using these important documents.

1Brown University, Providence, RI, USA; 2Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA, kdickers@jhsph.edu; 3Johns Hopkins University, Baltimore, MD, USA; 4William H. Welch Medical Library, Baltimore, MD, USA

Conflict of Interest Disclosures

None reported.

Trial Registration


Publication Agreements or “Gag Orders”? Compliance of Publication Agreements With Good Publication Practice 2 for Trials on ClinicalTrials.gov

Serina Stretton,1 Rebecca A. Lew,1 Luke C. Carey,1 Julie A. Ely,1 Cassandra Haley,1 Janelle R. Keys,1 Julie A. Monk,1 Mark Snape,1 Mark J. Woolley,1 Karen L. Woolley1-3

Objective

Good Publication Practice 2 (GPP2) recognizes the shared responsibility of authors and industry sponsors to publish clinical trial data and confirms authors’ freedom to publish. We quantified the extent and type of publication agreements between industry sponsors and investigators for phase 2-4 interventional clinical trials on ClinicalTrials.gov and determined whether these agreements were GPP2 compliant.

Design

Trial record data were electronically imported on October 7, 2012, and trials were screened for eligibility (phase 2-4, interventional, recruitment closed, results available, first received after November 10, 2009, any sponsor type, investigators not sponsor employees). Publication agreement information was manually imported from the Certain Agreements field. Two authors independently categorized agreement information for GPP2 compliance, resolving discrepancies by consensus. An independent academic statistician conducted all analyses.

Results

Of the 484 trials retrieved, 388 were eligible for inclusion and 96 were excluded (12 trials that were still active and 84 trials with investigators who were sponsor employees). Of the eligible trials, 81% (313/388) reported publication agreement information in the Certain Agreements field. Significantly more publication agreements reported on ClinicalTrials.gov were GPP2 compliant than noncompliant (74% [232/313] vs 26% [81/313], χ2 P<.001). Reasons for GPP2 noncompliance were insufficient, unclear, or ambiguous information reported (48%, 39/81), sponsor-required approval for publication (36%, 29/81), sponsor-required text changes (9%, 7/81), and sponsor bans on publication (7%, 6/81). Drug trials (180/255) were significantly less likely to have GPP2-compliant agreements than other trials (52/58; relative risk 0.79, 95% CI 0.70-0.89, P=.003). Publication agreement compliance varied among affiliates of the same sponsor. Follow-up of agreements with insufficient information and a contact e-mail (response rate, 12.5% [4/32]) revealed 2 additional agreements banning publication, 1 requiring approval, and 1 GPP2-compliant agreement.
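As a check on the arithmetic, the relative risk and confidence interval reported above can be reproduced from the stated counts; the sketch below uses the standard log-transform method for the CI of a risk ratio, which is an assumption, since the abstract does not state the exact method used.

```python
import math

# Counts reported above: GPP2-compliant agreements among drug trials vs other trials.
a, n1 = 180, 255   # drug trials: compliant / total
b, n2 = 52, 58     # other trials: compliant / total

rr = (a / n1) / (b / n2)                           # relative risk
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)     # standard error of ln(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.79 (95% CI 0.70-0.89)
```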

Conclusions

This study investigated publication agreements using the largest, international, public-access database of publication agreements. Most, but not all, publication agreements for clinical trials were consistent with GPP2. Although “gag orders” forbidding publication were infrequent, any such ban is unacceptable. Sponsors and their affiliates must ensure that publication agreements confirm authors’ freedom to publish data. Sponsors should also audit agreement information reported on ClinicalTrials.gov for compliance with GPP2 and for consistency with other publication agreement information.

1ProScribe Medical Communications, Noosaville, Queensland, Australia, ss@proscribe.com.au; 2University of the Sunshine Coast, Maroochydore DC, Queensland, Australia; 3University of Queensland, Brisbane, Queensland, Australia

Conflict of Interest Disclosures

None reported.


Reporting of Results in ClinicalTrials.Gov and Published Articles: A Cross-sectional Study

Jessica E. Becker,1 Harlan M. Krumholz,2 Gal Ben-Josef,1 Joseph S. Ross2

Objective

In 2007, the US Food and Drug Administration (FDA) Amendments Act expanded requirements for ClinicalTrials.gov, a public clinical trial registry maintained by the US National Library of Medicine, mandating results reporting within 12 months of trial completion for all FDA-regulated drugs. We compared clinical trial results reported on ClinicalTrials.gov with corresponding published articles.

Design

We conducted a cross-sectional analysis of clinical trials published from July 1, 2010, through June 30, 2011, in high-impact journals (Impact Factor ≥10) that were registered and reported results on ClinicalTrials.gov. We compared trial results reported on ClinicalTrials.gov and within published articles for the following: cohort characteristics, trial intervention, primary and secondary efficacy endpoint definition(s) and results, and adverse events.

Results

Of 95 included clinical trials registered and reporting results on ClinicalTrials.gov, there were 96 corresponding publications, among which 95 (99%) had at least 1 discrepancy in reporting of trial details, efficacy results, or adverse events between the 2 sources. When comparing reporting of primary endpoints, 132 (85%) were described in both sources, 14 (9%) only on ClinicalTrials.gov, and 10 (6%) only within articles. Results for 30 of 132 (23%) primary endpoints could not be compared because of reporting differences between the 2 sources (eg, tabular vs graphics); among the remaining 102 endpoints, reported results were discordant for 21 (21%), altering interpretations for 6 (6%). When comparing reporting of secondary endpoints, 619 (30%) were described in both sources, 421 (20%) only on ClinicalTrials.gov, and 1,049 (50%) only within articles. Results for 228 of 619 (37%) secondary endpoints could not be compared; among the remaining 391, reported results were discordant for 53 (14%).

Conclusion

Among published clinical trials that were registered and reported results on ClinicalTrials.gov, nearly all had at least 1 discrepancy in reported results, questioning the accuracy of both sources and raising concerns about the usefulness of results reporting to inform clinical practice and future research efforts.

1Yale University School of Medicine, New Haven, CT, USA, joseph.ross@yale.edu; 2Center for Outcomes Research and Evaluation, Yale–New Haven Hospital, New Haven, CT, USA

Conflict of Interest Disclosures

Harlan Krumholz reports that he chairs a scientific advisory board for UnitedHealthcare. Joseph Ross reports that he is a member of a scientific advisory board for FAIR Health, Inc. Harlan Krumholz and Joseph Ross receive support from Medtronic, Inc to develop methods of clinical trial data sharing, from the Centers for Medicare & Medicaid Services to develop and maintain performance measures that are used for public reporting, and from the Food and Drug Administration to develop methods for postmarket surveillance of medical devices. Jessica Becker and Gal Ben-Josef report no conflicts of interest.

Funding/Support

This project was not supported by any external funds.


Beyond Feasibility: Assessing the ClinicalTrials.gov Results Database

Deborah A. Zarin, Tony Tse, Heather D. Dobbins

Objective

Prompted in part by ongoing evidence of selective publication and outcome reporting, the US Congress mandated the first public, government-operated results database for the disclosure of clinical trial results, whether published or not. Following implementation of the ClinicalTrials.gov results database in September 2008, the European Medicines Agency (EMA) began developing a similar system. This de facto standard will affect results reporting for thousands of trials around the world annually. It is therefore imperative to evaluate and continuously improve the 1 system that is already operational—ClinicalTrials.gov—both to strengthen it and to inform ongoing and future efforts elsewhere. We describe a framework for guiding the evaluation of the results database.

Design

A 3-domain conceptual framework was adapted from the Fryback/Thornbury Hierarchical Model of Efficacy of diagnostic tests, based on our experience in designing and operating the database: (1) feasibility, (2) usability and utility, and (3) potential impact. Each domain consists of operationally defined questions for assessing the degree to which the results database is able to meet its intended purposes.

Results

Operationally defined questions have been identified for each of the 3 domains. We provide supporting data and case studies based on our experience over the past 5 years, point out areas that require further research, and provide explanatory comments. The following have been identified as needing research: Do data providers enter accurate data? Do data tables provide necessary and sufficient information? How are submissions for individual studies used? Are the aggregated data useful? What is the relationship to scientific abstracts, press releases, and other gray literature? And what is the relationship to individual participant-level data?

Conclusions

This framework can guide evaluative work by the research community with the goal of improving current and future trial disclosure efforts. The areas identified as needing research should be considered high priority, especially before large additional sums of money and human capital are expended internationally to replicate or modify the current system.

Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA, dzarin@mail.nih.gov

Conflict of Interest Disclosures

Deborah Zarin is director, Tony Tse is program analyst, and Heather Dobbins is lead results analyst for ClinicalTrials.gov.

Funding/Support

There was no external funding for this study.

Data/Content Sharing, Availability, And Access


How Does the Availability of Research Data Change With Time Since Publication?

Timothy H. Vines,1,2 Arianne Y. K. Albert,3 Rose L. Andrew,1 Florence Débarre,1 Dan G. Bock,1 Michelle T. Franklin,1,4 Kimberly J. Gilbert,1 Jean-Sébastien Moore,1,5 Sébastien Renaut,1 Diana J. Rennison1

Objective

To quantify how fast the availability of research data decreases with time since publication and to identify the main causes.

Design

As part of a parallel study on how the reproducibility of data sets changes through time, we identified 516 papers that conducted a discriminant function analysis (DFA) on morphological data from plants, animals, or other organisms. These papers were published in odd-numbered years between 1991 and 2011. We obtained e-mail addresses for the first, last, and corresponding authors from the papers and by searching online. We then requested the morphological data used in the DFA by e-mail. For papers where the data were not available, we also asked the authors to give a reason, such as “the data are stored on inaccessible hardware” or “the data are currently in use.”

Results

We received 101 data sets, and another 20 were reported extant but could not be shared. We found that 37% of the data from papers published in 2011 still existed, but this fell to 18% for 2001 and 7% for 1991 (Figure 5). The odds of receiving the data decreased by about 7% per year. The proportion of papers with no functioning e-mails rose from 12 of 80 papers (15%) in 2011 to 10 of 26 (38%) in 1991. Furthermore, for papers where we heard about the status of the data, the proportion of authors reporting it lost or on inaccessible hardware rose gradually from 0 of 30 in 2011 to 2 of 9 (22%) in 1997 and then increased to 7 of 8 (87%) in 1993 and 4 of 6 (66%) in 1991. Other variables, such as the proportion of nonresponders or the proportion of data sets that could be shared, showed no relationship with time.
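To illustrate what a roughly 7% yearly decline in odds means, the sketch below converts a constant per-year odds ratio into projected availability probabilities, starting from the 37% reported for 2011; this constant-rate projection is purely illustrative and is not the authors' fitted model, so it will not exactly reproduce the observed 18% and 7% figures.

```python
# Illustrative projection only: odds of obtaining the data decline by ~7% per year.
BASELINE_P = 0.37      # availability reported for papers published in 2011
YEARLY_OR = 0.93       # ~7% decrease in odds per year since publication

def availability_after(years, p0=BASELINE_P, or_per_year=YEARLY_OR):
    """Apply a constant per-year odds ratio to a baseline probability."""
    odds = (p0 / (1 - p0)) * or_per_year ** years
    return odds / (1 + odds)

for years in (0, 10, 20):
    print(f"{years:2d} years after publication: {availability_after(years):.0%}")
```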

Figure 5. Proportion of Papers With Data Available, 1991-2011


Conclusions

Researchers should be able to obtain published data from the authors long after a study is complete, but we found that almost all research data were lost 10 to 15 years after publication. The main causes of data loss appeared to be a lack of working e-mail addresses for the authors and the data being stored on outdated hardware.

1Biodiversity Department, University of British Columbia, Vancouver, BC, Canada, vines@zoology.ubc.ca; 2Molecular Ecology Editorial Office, Vancouver, BC, Canada; 3Women’s Health Research Institute, Vancouver, BC, Canada; 4Department of Biological Sciences, Simon Fraser University, Burnaby, BC, Canada; 5Department of Biology, Université Laval, Laval, QC, Canada

Conflict of Interest Disclosures

None reported.


Reproducible Research: Biomedical Researchers’ Willingness to Share Information to Enable Others to Reproduce Their Results

Michael Griswold,1 Christine Laine,2 Cynthia Mulrow,2 Mary Beth Schaeffer,2 Catherine Stack2

Objective

“Reproducible research” is a model for communicating research that promotes transparency of the methods used to collect, analyze, and present data. It allows independent scientists to reproduce results using the original investigators’ procedures and data. Reproducible research requires a level of transparency seldom sought or achieved in biomedical research. Since 2008, Annals of Internal Medicine has required authors of research articles accepted for publication to state whether and under what conditions they would make their protocol, statistical code, and data available to others. The published article includes this information. This report describes trends and patterns in the willingness of biomedical researchers to share their study materials with others over the period 2008-2012.

Design

We collected data for original research articles published in our journal from 2008 to 2012 on authors’ reported willingness to share study materials and examined willingness to share by study characteristics and over time.

Results

Of 389 articles, 17% stated that protocol was available without conditions, 54% with conditions, and 29% not available. Statistical code was available without conditions for 6%, with conditions for 66%, and unavailable for 28%. Data were available without conditions for 7%, with conditions for 47%, and unavailable for 46%. Most authors who said materials were available required interested parties to contact them first, and many stated specific conditions for sharing these materials. Willingness to share varied little by the study characteristics examined (Figure 6). Over the years since the onset of this policy, there has been a decrease in authors’ willingness to share protocol and data.

Figure 6. Trends in Reported Availability Over Time


Conclusions

While the majority of authors stated that they would make study materials available to others, most would do so only if others contacted them and attached requirements to the sharing of this information. Researchers were most willing to fully share their protocols and least willing to share data.

1University of Mississippi, Jackson, MS, USA; 2Annals of Internal Medicine, American College of Physicians, Philadelphia, PA, USA, claine@acponline.org

Conflict of Interest Disclosures

None reported.

Funding/Support

No external funding was provided for this study. Contributions of staff time and resources came from Annals of Internal Medicine and University of Mississippi.


Clinical Research Data Repositories and the Public Disclosure of Trial Data

Karmela Krleža-Jerić,1,2 Lee-Anne Ufholz3

Objective

To explore essential features and practices of repositories that accept clinical trial data, including individual participant data (IPD), and facilitate their sharing and public disclosure.

Design

This was an environmental scan. Inclusion criteria were repositories that harvest raw clinical trial data with the goal of enabling sharing and public disclosure. A list of headings was developed to capture features of selected repositories. We reviewed the literature and initiatives in this area, searched catalogues of data repositories such as Databib (http://databib.org/), and interviewed repository managers.

Results

A keyword search of the 588 repositories listed in Databib identified 60 repositories (38 human science, 14 clinical research, and 3 clinical trials). We identified 3 more repositories through personal contacts. After exclusion of duplicates and repositories that did not meet our criteria, we selected 4 repositories that accept and enable sharing and public disclosure of raw clinical trial data: Figshare (http://figshare.com/), Dryad (http://datadryad.org), the Inter-university Consortium for Political and Social Research (ICPSR) (www.icpsr.umich.edu), and Edinburgh DataShare (http://datashare.is.ed.ac.uk/). We also identified 2 large initiatives: P3G (http://p3g.org/) and the Global Alliance (http://www.sanger.ac.uk/about/press/assets/130605-white-paper.pdf). The methods and features of these repositories and initiatives will be further explored in interviews with repository managers. These include unique and persistent identification systems for datasets; licenses; sustainability (business) models; information on how repositories define and describe raw data; data formats and standards; methodology of data preparation; standards of quality control; policies of data inclusion and access to data for reuse; system architecture; features that encourage data sharing across geographical and domain boundaries (such as flexibility, type of access, and curation); and collaborations with other constituencies, including journals and publishers.

Conclusions

These results will inform the development of methodologies of public data disclosure, as well as standards and guidelines for data repositories involved in the public disclosure of participant-level datasets of clinical trials. This may further encourage collaboration and methodological consensus between repositories. The results have the potential to foster collaboration between researchers, journals, publishers, and data repositories to help enhance the reliability and connectedness of the scientific literature.

1Department of Epidemiology and Community Medicine, Medical Faculty, University of Ottawa, Ottawa, ON, Canada, krlezajk@hotmail.com; 2Croatian Medical Journal; 3Health Sciences Library, University of Ottawa, Ottawa, ON, Canada

Conflict of Interest Disclosures

Karmela Krleža-Jerić is a recipient of a COPE grant and is also the founding leader of the Ottawa Group. Lee-Anne Ufholz declares no conflict of interest.

Funding/Support

This study is funded by a grant from the Committee on Publication Ethics (COPE). COPE had no role in defining the study or its implementation.


The Democratization of Knowledge: A Supplement to Open Access

Hans Petter Fosseng, Hege Underdal, Magne Nylenna

Objective

Traditionally, only university hospitals and academic institutions have access to a wide range of journals and databases. In Norway, the majority of hospitals used to have limited library services, and primary care hardly any resources at all. We have established a publicly funded and freely available digital health library and describe our experiences.

Design

The Norwegian Electronic Health Library (NEHL) was established in 2006 based on 3 concepts: equality, quality, and efficiency. NEHL provides anyone with a Norwegian IP address free access to point-of-care tools, guidelines, systematic reviews, scientific journals, and a wide range of other full-text sources (eg, BMJ Best Practice, UpToDate, Cochrane Library, BMJ, New England Journal of Medicine, Lancet, and JAMA). Relevant sources freely available to the public such as open-access journals and health websites are also included. In addition, selected databases and almost 3,000 journals are available to health care personnel and students.

Results

Statistics from Google Analytics, available from 2008 to 2012, show an increase in visits and page views of 114% and 89%, respectively. The NEHL website had 1.6 million visits and 4.3 million page views in 2012. Major journals can be accessed directly, and their usage is not included in these figures. More than 1.5 million journal articles were downloaded, and approximately 3 million searches of bibliographic databases were performed. Peak usage comes from workplace networks on weekdays, indicating that a majority of users are professionals, as intended. From 2008 to 2012 there was an increase in traffic from search engines (from 20% to 58%). The number of first-time visitors and the usage of patient information indicate that the public represents an increasing proportion of users.

Conclusions

We suggest that making medical knowledge sources nationally available is an important supplement to open access. By providing both the public and professionals access to the same quality content, we believe that the basis for making health decisions becomes more transparent and verifiable. Providing free access to scientific literature by public funding can be perceived as a means for the democratization of knowledge.

Norwegian Knowledge Centre for the Health Services, Oslo, Norway, hpf@nokc.no

Conflict of Interest Disclosures

None reported.


Whither Peer Review Research? Analysis of Study Design, Publication Output, and Funding of Research Presented at Peer Review Congresses

Mario Malički,1 Erik von Elm,2 Ana Marušić1

Objective

As the history of peer review research in biomedicine is the history of Peer Review Congresses, we analyzed study designs, publication outputs, and sources of funding of research presented at 6 previous Congresses (1989-2009).

Design

Retrospective cohort study. We classified study design of all abstracts presented, searched MEDLINE, Web of Science, and the Peer Review Congress website for corresponding full articles, and collected data on authorship, time to publication, article availability, and declared funding sources.

Results

Research presented (n=504) was mostly observational (Table 9). Over time, the number of discussion papers decreased (χ²₁ for trend=47.422, P<.001) and the number of cohort studies increased (χ²₁=10.744, P=.001). A total of 305 (60.5%) presentations were later published in journals (in 10 instances, 2 abstracts were later published as a single paper). Many articles from the first 4 Congresses were published in JAMA special issues (120; 39.3%); most (63.4%) are currently freely available. The median time to publication in journals other than JAMA was 14.0 months (95% CI, 12.0-16.0). Funding was analyzed in 292 publications available in full text: 54.8% did not mention funding, 8.6% declared no funding, 16.1% had governmental funding, 7.2% private funding, 3.8% university funding, 3.1% publishers’ funding, 3.8% declared their salary sources, 0.7% pharmaceutical funding, and 2.0% other sources. The proportion of funded studies increased over time from 20.6% in 1989 to 43.9% in 2009, with a peak of 55.9% in 2005 (χ²₁=15.490, P<.001). The mean number of authors increased from 2.1 (95% CI, 1.3-2.2) in 1989 to 3.9 (95% CI, 3.5-4.4) in 2009 (P<.001, ANOVA). There were no changes to the author byline between the abstract and published article for 165 (56.5%) papers, 82 (28.1%) had changes in the number of authors, and 45 (15.4%) had changes in byline order.
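For readers unfamiliar with the trend test cited above, the sketch below implements one common form of the 1-df chi-square (Cochran-Armitage) test for a linear trend in proportions across ordered congresses; the example counts are hypothetical, not the study's data.

```python
def chi2_trend(successes, totals, scores=None):
    """1-df chi-square (Cochran-Armitage) test statistic for a linear trend in proportions."""
    k = len(successes)
    scores = scores or list(range(k))
    N, R = sum(totals), sum(successes)
    p_bar = R / N
    x_bar = sum(n * x for n, x in zip(totals, scores)) / N
    num = sum(r * (x - x_bar) for r, x in zip(successes, scores)) ** 2
    den = p_bar * (1 - p_bar) * sum(n * (x - x_bar) ** 2 for n, x in zip(totals, scores))
    return num / den

# Hypothetical counts: cohort studies (successes) among all abstracts (totals) per congress.
stat = chi2_trend(successes=[5, 9, 14, 20, 26, 31], totals=[70, 75, 80, 85, 90, 100])
print(f"chi-square for trend (1 df) = {stat:.2f}")
```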

Table 9. Distribution of Study Designs and Publication Rate of Abstracts Presented at Peer Review Congresses, 1989-2009


Conclusions

Underreporting is common in research conducted by a community aware of research underreporting; the causes for not publishing are not clear. There is a need for better and more systematic funding of peer review research.

1Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia, ana.marusic@mefst.hr; 2Institute for Social and Preventive Medicine, University Hospital Lausanne, Switzerland

Conflict of Interest Disclosures

None reported.

Tuesday, September 10


Quality of Trials


Impact of a Systematic Review on Subsequent Clinical Research: The Case of the Prevention of Propofol Injection Pain

Céline Habre,1 Nadia Elia,1,2,4 Daniel M. Pöpping,3 Martin R. Tramèr1,4

Objective

In 2000, a systematic review identified intravenous lignocaine, administered with venous occlusion, as the most efficacious intervention for the prevention of propofol injection pain. We set out to determine whether, after the publication of the review, the number of trials on this issue had decreased over time and whether authors of subsequently published trials referred to that review and used it to design their study (ie, to justify the choice of a comparator intervention or to estimate study size).

Design

We systematically searched MEDLINE, the Cochrane Central Register of Controlled Trials, EMBASE, and related bibliographies for all randomized trials testing interventions to prevent propofol injection pain published since 2002 (ie, 2 years after the publication of the review). We extracted information on the year of publication, experimental and control interventions, whether the review was cited, and whether authors explicitly declared having used it to design the study. Lignocaine injection with venous occlusion was regarded as the gold standard. Study designs comparing any experimental intervention with the gold standard were regarded as appropriate. Main outcomes were the number of published trials over time and the number (percentage) of trials citing the review, using it to design the study, and with appropriate study designs.

Results

Between January 2002 and 2013, 136 trials (19,778 patients) were published, without a clear decreasing trend over time. Ninety-nine (72.8%) authors cited the review, but only 21 (15.4%) declared using it to design the study. Designs were appropriate in 34 (25%) trials and inappropriate in 102 (75%). Of the 21 trials in which authors declared using the review to design their study, 18 (86%) had appropriate designs. Of the 115 trials in which authors did not use the review to design their study, only 16 (14%) had appropriate designs.

Conclusions

A large number of trials have been published since the publication of the systematic review. Most authors cited the systematic review, but only a minority used it as a rational basis for the design of their study. Trials designed on the basis of the review were more likely to have an appropriate design, suggesting that the use of existing systematic reviews when designing new studies should be encouraged.

1Division of Anaesthesiology, Geneva University Hospitals, Geneva, Switzerland, celine.habre@gmail.com; 2Institute of Social and Preventive Medicine, University of Geneva, Geneva, Switzerland; 3Department of Anesthesiology and Intensive Care, University Hospital Münster, Münster, Germany; 4Medical Faculty, University of Geneva, Geneva, Switzerland

Conflict of Interest Disclosures

Nadia Elia is associate editor and Martin Tramèr is editor in chief, European Journal of Anaesthesiology. No other disclosures reported.


Publication of Randomized Controlled Trials That Were Discontinued: An International Multicenter Cohort Study

Benjamin Kasenda,1 Erik von Elm,2 Anette Blümle,3 Yuki Tomonaga,4 John You,5 Mihaela Stegert,1 Theresa Bengough,2 Kelechi Kalu Olu,1 Matthias Briel,1,5 for the DISCO Study Group

Objective

Our aim was to determine the prevalence of discontinuation of randomized controlled trials (RCTs) for different reasons, the publication history of discontinued RCTs, and differences between industry- and investigator-initiated RCTs with respect to discontinuation and publication.

Design

We established a multicenter cohort of RCTs based on protocols approved by 6 research ethics committees (RECs) from 2000 to 2003 in Switzerland, Germany, and Canada. We extracted data on RCT characteristics and planned recruitment. We determined completion status of RCTs by using information from REC files, publications identified by literature search, and by surveying investigators. We used multivariable logistic regression to investigate the following risk factors for nonpublication of RCTs: trial discontinuation (vs completion), trial initiation by industry (vs investigators), national setting (vs international), sample size below median (vs above), and single-center study (vs multicenter).
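A minimal sketch of the kind of multivariable logistic regression described above is shown below, using simulated data via the statsmodels formula interface; the data frame and variable names (nonpublication, discontinued, industry, national, small_sample, single_center) are hypothetical stand-ins for the cohort's variables, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800  # hypothetical cohort, roughly the size of the one analyzed

# Simulated binary risk factors and outcome; a real analysis would use the REC cohort data.
df = pd.DataFrame({
    "discontinued": rng.integers(0, 2, n),
    "industry": rng.integers(0, 2, n),
    "national": rng.integers(0, 2, n),
    "small_sample": rng.integers(0, 2, n),
    "single_center": rng.integers(0, 2, n),
})
linear_predictor = -1.0 + 1.2 * df["discontinued"] + 0.9 * df["single_center"]
df["nonpublication"] = rng.binomial(1, 1 / (1 + np.exp(-linear_predictor)))

model = smf.logit(
    "nonpublication ~ discontinued + industry + national + small_sample + single_center",
    data=df,
).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```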

Results

We included 894 protocols of RCTs involving patients. Of those, 574 (64.2%) were completed (ie, attained >90% of target sample size), 250 (28.0%) were discontinued for any reason, and for 70 (7.8%) the status remained unclear. Reasons for discontinuation were poor recruitment (100/250, 40.0%), futility (37/250, 14.8%), administrative reasons (36/250, 14.4%), harm (25/250, 10.0%), benefit (9/250, 3.4%), and other (43/250, 17.2%). Industry-initiated RCTs (n=538 [60.2%]) were completed in 71.9% of cases, whereas investigator-initiated RCTs (n=356 [39.8%]) were completed in 52.5% of cases. Funding sources of discontinued investigator-initiated RCTs (n=136) were public (n=32 [23.5%]), industry (n=26 [19.1%]), charity (n=15 [11.0%]), public and industry (n=11 [8.1%]), and public and charity (n=2 [1.5%]); 35 (25.7%) discontinued investigator-initiated RCTs had no external funding, and for 15 (11.0%) the funding source remained unclear. Of all discontinued and completed RCTs, 114 (45.6%) and 416 (72.5%), respectively, were published as full journal articles. Discontinued industry-initiated RCTs (n=114) were published in 43.9% of cases and discontinued investigator-initiated RCTs (n=136) in 47.1% of cases. Independent risk factors for nonpublication were trial discontinuation and single-center study (Table 10).

Table 10. Factors Associated With Nonpublication of Randomized Controlled Trials (RCTs) Based on 815 RCTs With Complete Data


aThe median study size was 250.

Conclusions

Discontinued RCTs are common, in particular when they are investigator-initiated, and often not published. Our results may raise journal editors’ and researchers’ awareness of existing determinants of RCT nonpublication and the prevalence of unpublished discontinued RCTs.

1Basel Institute for Clinical Epidemiology and Biostatistics, University Hospital Basel, Basel, Switzerland, matthias.briel@usb.ch; 2Cochrane Switzerland, IUMSP, Lausanne University Hospital, Lausanne, Switzerland; 3German Cochrane Centre, Institute of Medical Biometry and Medical Informatics, Freiburg University Medical Centre, Freiburg, Germany; 4Institute for Social and Preventive Medicine, Zurich, Switzerland; 5Departments of Medicine, and Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada

Conflict of Interest Disclosures

None reported.

Funding/Support

This study was supported by the Swiss National Science Foundation (grant 320030_133540/1) and the German Research Foundation (grant EL 544/1-2). Erik von Elm received support from the Brocher Foundation, Hermance, Switzerland. The funding sources had no role in the design and conduct of this study or the writing of this abstract.


Terminated Trials in ClinicalTrials.gov: Characteristics and Evaluation of Reasons for Termination

Katelyn DiPiazza,1 Rebecca J. Williams,2 Deborah A. Zarin,2 Tony Tse2

Objective

Early termination of clinical trials raises a broad range of scientific, ethical, and resource issues. Prior research on the topic has focused on specific therapeutic areas and problems related to participant recruitment, but little is known about the reasons for termination across the clinical research enterprise. This study aims to determine the number and characteristics of trials that terminated within a cohort of trials registered at ClinicalTrials.gov. It also examines reasons for termination and the amount and type of results data available from such trials.

Design

In February 2013, we determined the status of all registered interventional studies initiated in 2006 and summarized the characteristics of terminated trials using the registration data elements. We also examined terminated trials with results posted on ClinicalTrials.gov and categorized the explanations for why the study stopped by whether the reason was based on scientific data accumulated from the trial (eg, interim efficacy data) or not (eg, low enrollment). We also summarized the publication status and the type and amount of results data available for a subset of these trials.

Results

Of the 7,852 registered trials initiated in 2006 and verified within the previous 2 years (if ongoing), 84% (n=6,622) had ended 6 years later and, of these, 12% (n=789) were terminated. In the sample of 917 terminated trials with results posted on ClinicalTrials.gov, 21% (n=193) were categorized as ending prematurely based on scientific data accumulated from the trial (Table 11) and, as of April 2013, 21% (n=193) were published in journals indexed by PubMed. In a subset of terminated trials with posted results (n=861), approximately 71% (n=612) of the trials had summary results data for at least 1 participant for the primary outcome measure.

Table 11. Categorization of Reasons for Termination for 917 Trials With Results Posted to the ClinicalTrials.gov Results Database as of February 2013


Conclusions

Terminated studies frequently end prematurely for reasons other than scientific data accumulated from the trial. Because many terminated studies are not published, ClinicalTrials.gov is a unique resource for data from such trials.

1Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA; 2National Library of Medicine, National Institutes of Health, Bethesda, MD, USA, rebecca.williams@nih.gov

Conflict of Interest Disclosures

Rebecca Williams is assistant director, Deborah Zarin is director, and Tony Tse is program analyst for ClinicalTrials.gov.

Funding/Support

No external funding was received other than from the 3 authors’ employer (NLM/NIH).

Quality of Reporting Trials


A Review of Registration and Reporting of “Continuish” Outcomes in Randomized Trials

Steven Woloshin,1 Lisa M. Schwartz,1 Allison Hirst,2 Ly-Mee Yu,3 Alice Andrews,1 Gary Collins,3 Rose Wharton,3 Milensu Shanyinde,3 Susan J. Dutton,3 Omar Omar,3 Sally Hopewell,3 Joshua Wallace,3 Jackie Birks,3 Nicola Williams,3 Eric Ohuma,4 Douglas G. Altman3

Objective

True continuous and ordinal measures, visual analog scales, scores, and counts—“continuish” measures—can be analyzed in many ways: as means, medians, or percent above a cutoff. Because of this flexibility, investigators can select an approach based on statistical significance. We examined how continuish primary outcome measures are reported in randomized controlled trials (RCTs) and compared them with trial registry entries meant to avoid data-driven analyses.
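To illustrate the analytic flexibility at issue, the sketch below takes one simulated continuish outcome and summarizes it three ways (difference in means, a rank test, and percent above a cutoff); the data and the cutoff are hypothetical and serve only to show how the choice of summary statistic can change the apparent result.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated pain scores (0-10 visual analog scale) for two trial arms.
control = rng.normal(5.0, 2.0, 120).clip(0, 10)
treated = rng.normal(4.5, 2.0, 120).clip(0, 10)

# 1) Compare means (t test).
t_p = stats.ttest_ind(treated, control).pvalue

# 2) Compare distributions/medians (Mann-Whitney U test).
u_p = stats.mannwhitneyu(treated, control).pvalue

# 3) Compare percent above a cutoff of 7 (responder-style dichotomization, chi-square).
table = [[(treated > 7).sum(), (treated <= 7).sum()],
         [(control > 7).sum(), (control <= 7).sum()]]
chi2_stat, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"means P={t_p:.3f}, rank test P={u_p:.3f}, % above cutoff P={chi_p:.3f}")
```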

Design

Review of 2-arm parallel-group RCTs of treatment published in the PubMed Core Clinical Journals in 2010 (n=568) for explicit continuish primary outcomes in the abstract. Pairs of reviewers extracted data using a standardized form.

Results

Of the 337 trials with a continuish outcome in the abstract, 99 (30%) never specified a primary outcome. We analyzed a random sample of 180 trials from the remainder. Most measures were true continuous measures, such as weight (64%); scores (21%); or pseudocontinuous measures, such as visual analog scales (9%). Continuish primary outcomes were reported as mean (69%), percent above a cutoff (11%), median (8%), relative change (7%), and other (6%). In 76 (43%) articles, the primary measure was analyzed in multiple ways, with consistent statistical significance in 8 (11%). The clinical importance of the difference in the primary continuous outcome was discussed for only 59% of the 90 positive trials. Of eligible articles, 134 of 180 (74%) were registered; in the current registry records, primary outcomes were missing for 6 (4%), and 83 (62%) mentioned only the domain or unit without specifying the metric or summary statistic (Table 12). All 5 primary outcome specifications were identical in the current trial registry record and the published journal article for only 9 (7%) of 134 trials.

Table 12. Missing and Changed Information Within the Registry and Between the Registry and Journal Article


aOnly includes registries that archive changes.

Conclusions

Most journal articles of RCTs with continuish outcomes inadequately specify the primary outcome and analyze it in multiple ways with inconsistent statistical significance. Trial registries need to enforce stricter requirements to ensure that analyses of treatment effects are truly prespecified.

1Dartmouth Institute for Health Policy, Dartmouth Medical School, Hanover, NH, USA, steven.woloshin@dartmouth.edu; 2Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK; 3Centre for Statistics in Medicine, University of Oxford, Oxford UK; 4Nuffield Department of Obstetrics & Gynaecology, and Oxford Maternal & Perinatal Health Institute, University of Oxford, Oxford, UK

Conflict of Interest Disclosures

Steven Woloshin and Lisa Schwartz are cofounders and shareholders of Informulary, Inc.


Reporting of Crossover Trials on Medical Interventions for Glaucoma

Tsung Yu,1 Tianjing Li,1 Barbara Hawkins,1,2 Kay Dickersin1

Objective

Crossover trials are clinical experiments in which participants are randomly assigned to receive sequential treatments with the intent of estimating differences within individuals rather than at the group level. We aimed to describe the methodological and reporting issues in 83 crossover trials testing medical interventions for glaucoma and to assess their usefulness in meta-analysis.

Design

As part of a large network meta-analysis, we identified 526 eligible randomized controlled trials testing medical interventions for glaucoma through our comprehensive literature search (searched in November 2009). We abstracted data on the design, analysis, and reporting of 83/526 (15.8%) that used a crossover design.

Results

Seventy-two trials (72/83; 86.7%) studied 2 interventions altogether, and the others studied 3 or more. Only 33/83 trials (39.8%) reported that there was a washout period before a participant crossed over to the next intervention. The description of statistical methods was variable and unclear in most cases. In the trial reports, only 19/83 (22.9%) mentioned the concept of a period effect and 25/83 (30.1%) mentioned a carryover effect. Eighty-two trials (82/83; 98.8%) used data from more than 1 period for analysis, but 53/82 (64.6%) did not report whether and how they accounted for the paired nature of the crossover data. Seventy-one trials (71/83; 85.5%) presented the results as if the data arose from a parallel-group trial. Only 25/83 trials (30.1%) reported an estimate of treatment effect and associated variability based on within-participant differences, and 14/83 (16.9%) reported results at the end of the first period, which can also be meta-analyzed. Altogether, 36/83 trials (43.4%) reported quantitative data with sufficient detail to be included in a meta-analysis.
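The difference between a within-participant (paired) analysis and treating crossover data as if they came from a parallel-group trial can be illustrated with a short simulation; the values below are hypothetical and serve only to show why ignoring the pairing typically inflates the variance of the estimated treatment effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 30
# Simulated crossover data: each participant provides an outcome on both treatments.
participant_effect = rng.normal(0, 3.0, n)            # between-participant variation
on_drug_a = 20 + participant_effect + rng.normal(0, 1.0, n)
on_drug_b = 21 + participant_effect + rng.normal(0, 1.0, n)

# Crossover-style analysis: within-participant differences (paired t test).
paired = stats.ttest_rel(on_drug_a, on_drug_b)

# Analysis as if parallel-group: pairing ignored (unpaired t test).
unpaired = stats.ttest_ind(on_drug_a, on_drug_b)

print(f"paired:   P = {paired.pvalue:.4f}")
print(f"unpaired: P = {unpaired.pvalue:.4f}  (pairing ignored, larger variance)")
```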

Conclusions

In our sample, most crossover trials did not adequately report important methodological considerations and had few useful data, making meta-analyses difficult. Inability to integrate trial data into systematic reviews wastes resources and the time of study participants. Peer reviewers should seek advice from those who understand the methods when evaluating manuscripts for publication. We urge the CONSORT group to develop and publish an extension for crossover design to guide and improve the reporting of such trials.

1Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA, kdickers@jhsph.edu; 2Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

This work was supported by grant U01-EY020522-01, National Eye Institute, National Institutes of Health, USA. The funding organization had no role in the design or conduct of this research.

Back to Top

Characterization of Trials Designed Primarily for Marketing Purposes Rather Than Addressing Genuine Clinical Questions: A Descriptive Study

Virginia Barbour,1 Druin Burch,2 Fiona Godlee,3 Carl Heneghan,4 Richard Lehman,4 Joseph Ross,5 Sara Schroter3

Objective

Clinical trials designed to promote drugs, as opposed to addressing scientific objectives, have been identified only infrequently, through access to internal industry documents, but are frequently suspected. Such trials can be hard to identify and have the potential to distort the medical literature by misleading readers. Our objective is to define characteristics of trials that appear to be primarily marketing driven and to estimate their prevalence.

Design

We are conducting a 3-phase study examining drug trials published in 6 general medical journals in 2011. In phase 1, 6 investigators independently reviewed all trial publications to reach consensus on likely marketing trials. We did not have fixed criteria but used expert consensus, based on our understanding of previously described seeding trials. In phase 2, we are identifying predictors of categorization, using blinded researchers to extract trial information (eg, role of manufacturer in design, data analysis, and reporting, average number of patients recruited per center in relation to rarity of the disease, clinical relevance of findings, use of surrogate and composite outcomes, and extent to which conclusions focused on secondary outcomes). To develop a model of independent predictors, we will use multivariate logistic regression to estimate the adjusted odds ratios (and 95% CIs) for studies deemed to be marketing trials. A sensitivity analysis will be performed for the manufacturer-funded trials only. Phase 3 will involve in-depth descriptive research around a subgroup to determine the context in which they appear—within the journal, and in relation to information on the drug’s licensing and marketing.
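
A minimal sketch of the adjusted-odds-ratio modeling planned for phase 2 is shown below; the predictor names, data, and model specification are hypothetical placeholders rather than the study's actual variables, and statsmodels is used only as one convenient way to fit such a model.

```python
# Hypothetical sketch of the phase 2 modeling approach (adjusted odds ratios with
# 95% CIs from logistic regression). Variable names, data, and model are placeholders,
# not the study's actual predictors or results.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 207  # number of trials reviewed to date
df = pd.DataFrame({
    "manufacturer_analyzed_data": rng.integers(0, 2, n),   # hypothetical predictor
    "surrogate_primary_outcome": rng.integers(0, 2, n),    # hypothetical predictor
    "patients_per_center": rng.poisson(8, n),               # hypothetical predictor
    "marketing_trial": rng.integers(0, 2, n),                # hypothetical consensus rating
})

X = sm.add_constant(df[["manufacturer_analyzed_data",
                        "surrogate_primary_outcome",
                        "patients_per_center"]])
fit = sm.Logit(df["marketing_trial"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```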

Results

To date, 25/207 trials (12%) were rated by at least 4 independent investigators as very likely marketing trials and 121 (58%) as very unlikely. After consensus discussion, 41 (20%) trials were considered very likely marketing trials and 14 (7%) as possibly so. Phase 2 and 3 analyses are under way.

Conclusions

Our findings suggest that a fifth of all drug trials published in the highest impact general medical journals in 2011 were designed primarily for marketing purposes. This study will highlight characteristics for editors, reviewers, and readers to be aware of when assessing published trials.

1PLOS Medicine, Cambridge, UK; 2John Radcliffe Hospital, Oxford, UK; 3BMJ, London, UK, sschroter@bmj.com; 4Centre for Evidence Based Medicine, Department of Primary Care Health Science, Oxford, UK; 5Yale University School of Medicine, New Haven, CT, USA

Conflict of Interest Disclosures

Fiona Godlee is editor of the BMJ and Virginia Barbour is chief editor of PLOS Medicine, but they were excluded from evaluations of papers published in these journals. Virginia Barbour is chair of COPE, which contributed a research grant to the study. She was not involved in the committee that awarded the grant. Sara Schroter is an employee of the BMJ and regularly undertakes research into the publishing process. The other authors reported no conflicts of interest.

Funding/Support

This study was funded by a research grant from the Committee on Publication Ethics (COPE). The funder played no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Reporting Guidelines

Back to Top

Consensus-Based Case Report Guidelines Development: CARE Guidelines

Joel J. Gagnier,1,2 Gunver Kienle,3 Douglas G. Altman,4 David Moher,5,6 Harold Sox,7 David Riley,8 and the CARE Group

Objective

Case reports have helped identify effects from interventions and recognize new or rare diseases. Data from case reports—increasingly published in indexed medical journals—are beginning to be systematically collected and reported. However, the quality of published case reports is uneven. One study evaluated 1,316 case reports from 4 emergency medicine journals and found that more than half failed to provide information related to the primary treatment. Case reports, written without reporting guidelines (with the exception of harms), are insufficiently rigorous to guide clinical practice, inform research design, or be aggregated for data analysis. This analysis was conducted to develop and implement systematic reporting guidelines for case reports.

Design

We followed published recommendations for guideline development using a modified Delphi process with (1) a literature review and interviews generating guideline items, (2) an October 2012 face-to-face consensus meeting to draft reporting guidelines, and (3) postmeeting feedback and guideline finalization.

Results

Recommendations for the reporting of case reports are listed in Table 13.

Table 13. Items to Be Included in Case Reports

Conclusions

The CARE guidelines have been developed in a consensus-based process and represent essential information necessary to improve the quality of case reports. These guidelines are generic and will need extensions for specific specialties and purposes. Feedback from use of the guidelines in 2013, though positive, is limited. The analysis of systematically aggregated information from patient encounters may provide scalable, data-driven insights into what works for which patients, transforming how we think about “evidence” and its creation, diffusion, and use.

1Department of Orthopaedic Surgery, University of Michigan, Ann Arbor, MI, USA; 2Department of Epidemiology, School of Public Health, University of Michigan, Ann Arbor, MI, USA; 3Institute for Applied Epistemology and Medical Methodology, University of Witten/Herdecke, Freiburg, Germany; 4Centre for Statistics in Medicine, University of Oxford, Oxford, UK; 5Ottawa Hospital Research Institute, Ottawa, ON, Canada; 6Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, ON, Canada; 7The Dartmouth Institute and Geisel School of Medicine at Dartmouth, Hanover, NH, USA; 8Global Advances in Health and Medicine, Portland, OR, USA, driley@gahmllc.com

The Care Group: Alyshia Allaire, Douglas G. Altman, Jeffrey Aronson, James Carpenter, Joel Gagnier, Patrick Hanaway, Carolyn Hayes, David Jones, Marietta Kaszkin-Bettag, Michael Kidd, Helmut Kiene, Gunver Kienle, Ben Kligler, Lori Knutson, Christian Koch, Karen Milgate, Michele Mittelman, David Moher, Hanna Oltean, Greg Plotnikoff, David Riley, Richard Alan Rison, Anil Sethi, Larissa Shamseer, Richard Smith, Harold Sox, and Peter Tugwell

Conflict of Interest Disclosures

Joel Gagnier, University of Michigan, and David Riley, Global Advances in Health and Medicine, organized this consensus-based guideline development project. No honoraria were paid; however, conference expenses were reimbursed. The volunteer steering committee consisted of Joel J. Gagnier, Gunver Kienle, David Moher, and David Riley.

Funding/Support

The Department of Orthopaedic Surgery, the Office of the Vice-President of Research at the University of Michigan, and Global Advances in Health and Medicine provided funding for this project. David Moher is funded through a University of Ottawa Research Chair. Funding support was used to reimburse the expenses of conference attendees.

Back to Top

Poor Description of Nonpharmacological Interventions: A Remediable Barrier to Use in Practice?

Tammy Hoffmann, Chrissy Erueti, Paul Glasziou

Objective

To evaluate the completeness of intervention descriptions in randomized trials of nondrug interventions, identify the most frequently missing elements, and assess whether missing details can be obtained from trial report authors.

Design

We assessed all reports of randomized trials of nonpharmacological interventions published in 2009 in 6 leading general medical journals; 133 reports met inclusion criteria. As 4 had evaluated 2 nonpharmacological interventions, we evaluated descriptions of 137 interventions. Based on the primary report and its references, and any appendices or websites, 2 independent raters assessed whether the intervention description had sufficient detail to allow replication (CONSORT item 5) for each element of an 8-item checklist. Differences between assessments were resolved through discussion. For reports with missing details, questions were e-mailed to corresponding authors and, if authors replied, the raters reassessed relevant items.

Results

Of 137 interventions, 53 (39%) were adequately described. Using the 63 responses from 88 contacted authors (71% response rate), the number of interventions described adequately increased to 81 (59%) (Figure 7). The checklist item that scored worst in primary reports was “intervention materials” (47% complete), but it also improved the most after author responses (92%). Some authors (27/70) were able to send materials or provide further information; others (21/70) could not, for reasons including copyright or intellectual property concerns, not having the materials or intervention details, or not being aware of the importance of providing such information. Although 46 interventions (34%) had a relevant website containing further information or materials, many websites were not mentioned in the report, not freely accessible, or no longer functioning.

Figure 7. Interventions Checklist Items Rating as Adequately Described, Initially and After Authors’ Reply

Conclusions

The omission of essential information about interventions is a substantial, yet remediable, obstacle to the replication and use of treatments evaluated in clinical trials. Reducing this loss will require action by funders, researchers, and editors at multiple stages, and long-term repositories of materials linked to publications.

Centre for Research in Evidence-Based Practice, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Queensland, Australia, paul_glasziou@bond.edu.au

Conflict of Interest Disclosures

None reported.

Funding/Support

The authors were supported during this work by NH&MRC (grant 0527500). The funder had no role in the conduct of the study.

Back to Top

Impact of Adding a Limitations Section in Abstracts of Systematic Reviews on Reader Interpretation: A Randomized Controlled Trial

Amélie Yavchitz,1,2 Philippe Ravaud,1,3-6 Sally Hopewell,1,5,7 Isabelle Boutron1,3-5

Objective

We aimed to assess the impact of a Limitations section in abstracts of systematic reviews on reader interpretation.

Design

In a 2-arm parallel-group randomized controlled trial, we compared abstracts with and without a Limitations section. We selected a sample of abstracts of systematic reviews evaluating the effects of health care interventions with conclusions favoring the beneficial effect of the experimental treatment. We modified the selected abstracts by (1) removing the original Limitations section (when it existed) and (2) adding a Limitations section written according to specific guidance. The Limitations section was written by 1 researcher and evaluated independently by another. The created Limitations section focused on the limitations of evidence, as recommended in the PRISMA for Abstracts checklist (item 9). All abstracts were standardized, with the treatment name, authors, and journal masked; study acronyms were also deleted. The same abstract, with or without the Limitations section, was randomly assigned to 300 corresponding authors of clinical trials published between 2010 and 2012 and indexed in PubMed. Participants were invited by e-mail to connect to a secure website to complete the survey and were blinded to the study hypothesis. The primary outcome was the participants’ confidence in the results of the study based on the information reported in the abstract. Secondary outcomes were the reader’s perception of the quality and the validity of the systematic review.

Results

Three hundred participants were randomized; 150 assessed an abstract with a Limitations section and 150 an abstract without one. There was no statistically significant difference between abstracts with and without a Limitations section in confidence in the results (scale 0-10, mean [SD], 4.4 [2.3] vs 4.6 [2.5]; P=.5), confidence in the validity of the conclusion (4.0 [2.3] vs 4.1 [2.5]; P=.8), or the perceived benefit of the experimental intervention to patients (4.3 [2.3] vs 4.4 [2.6]; P=.6).

Conclusion

Adding a Limitations section on the quality of evidence to abstracts of systematic reviews did not affect readers’ interpretation.

1INSERM U738, Paris, France; 2Department of Anesthesiology and Critical Care, Beaujon University Hospital, Clichy, France, amelie.yavchitz@bjn.aphp.fr; 3Centre d’Épidémiologie Clinique, AP-HP (Assistance Publique des Hôpitaux de Paris), Hôpital Hôtel Dieu, Paris, France; 4Paris Descartes University, Sorbonne Paris Cité, Faculté de Médecine, Paris, France; 5French Cochrane Centre, Paris, France; 6Department of Epidemiology, Columbia University Mailman School of Public Health, New York, NY, USA; 7Centre for Statistics in Medicine, Oxford University, UK

Conflict of Interest Disclosures

None reported.

Funding/Support

This study received funding from the Fondation pour la Recherche Médicale (FRM, Équipe Espoir de la Recherche 2010). The funder had no role in the design and the conduct of this study.

Back to Top

Beyond STARD: Characterizing the Presence of Important Elements in Diagnostic Test Accuracy Reports

Natasha Cuk,1 Carmen C. Wolfe,2 Douglas G. Altman,3 David L. Schriger,1 Richelle J. Cooper1

Objectives

STARD, by specifying a minimum content for diagnostic test accuracy papers, is aimed at improving reporting. Our experience as editors leads us to believe that there are several topics for which STARD is insufficiently demanding: clarity of hypothesis statements (what test characteristics are considered acceptable performance); sample size considerations; determination of cut points for continuous tests; sensible use of receiver operating characteristic (ROC) curves; and handling of clustered data. We sought to describe the current reporting of diagnostic accuracy studies with respect to these issues.

Design

We identified 20 journals (6 general medicine, 7 major specialties, and 7 randomly selected subspecialties) that publish original clinical research and were ranked high in Impact Factor. For each journal, we randomly sampled up to 10 articles published in 2008-2012 that assessed diagnostic tests (identified by title, abstract, or main analysis reporting test characteristics or ROC curve). We developed, piloted, and revised a standardized form used by trained independent raters to capture the elements needed to assess the aforementioned items.

Results

We identified 186 articles in 20 journals (1 journal had no diagnostic papers, and 1 journal had only 6 in the 5-year period). In only 40% of cases was the title sufficient to identify the article. Only 10 of these 20 journals’ Instructions for Authors refer to STARD. Data for selected key findings are shown in Table 14.

Table 14. Characteristics of 186 Diagnostic Accuracy Studies

ROC indicates receiver operating characteristic.

aItem is in STARD checklist.

Conclusions

While only 9% of articles stated an a priori success threshold, 98% made claims about the utility of the test. This is akin to stating that a randomized trial was positive or demonstrated efficacy without specifying a clinically important difference. We have identified areas for which the conduct and reporting of studies of diagnostic test performance could be improved. These data could be used by the STARD group when revising the guideline. Reporting could also be improved by convincing journals to endorse and follow STARD, since half of these high-impact journals have not yet done so.

1David Geffen School of Medicine at UCLA, Los Angeles, CA, USA, richelle@ucla.edu; 2UCLA Department of Emergency Medicine, University of California Los Angeles, Los Angeles, CA, USA; 3Centre for Statistics in Medicine, University of Oxford, Oxford, UK

Conflict of Interest Disclosures

None reported.

Funding/Support

David Schriger’s time is supported in part by an unrestricted grant from the Korein Foundation, and Douglas Altman’s time is supported by Cancer Research UK; neither funder had any influence on the decision to do this research project or its execution.

Back to Top

Identifying Barriers to Uptake and Implementation of CONSORT

Larissa Shamseer,1,2 Laura Weeks,3 Lucy Turner,1 Sharon Straus,4 Jeremy Grimshaw,1,2 David Moher1,2

Objective

To describe the development of a behavior-change intervention to improve implementation of the Consolidated Standards of Reporting Trials (CONSORT) guidelines.

Design

A systematic approach to intervention development, accounting for theory, evidence, and practical issues, was employed. Development consisted of the following steps: (1) identify the problem and the stakeholders to be targeted; (2) determine which barriers and facilitators need to be addressed, through a series of semistructured interviews with trial authors and journal editors and a survey of journal editors (thematic content analysis was used to group interview data into the 12 domains of the theoretical domains framework [TDF], and survey data were summarized using descriptive statistics); and (3) identify intervention components by mapping relevant theoretical domains to established behavior-change techniques, guided by evidence and experts.

Results

Six editors of journals that endorse CONSORT, 1 editor of a nonendorsing journal, and 10 authors of trials submitted to Implementation Science and the Canadian Medical Association Journal were interviewed. Seventy-eight journal editors (27.6% response rate) completed the survey. Only 13% of CONSORT-endorsing journals (n=56) require that peer reviewers check for CONSORT adherence, and only 35.3% indicate using CONSORT to determine whether a trial should be published. Eighty-one percent of editors expressed support for an electronic CONSORT tool, and 59% wanted educational tutorials about CONSORT. Based on our findings, the following TDF domains and behavior change strategies have been identified as key target areas moving forward: knowledge: provision of CONSORT documents and evidence of CONSORT impact; skills: development of training materials and webinars about how to use CONSORT; beliefs about consequences and environmental context and resources: development of an electronic tool to facilitate compliance by authors and for editors/reviewers; motivations and goals: providing assessments on the completeness of trial reporting at individual journals to demonstrate need for improvement; social influence: use of social media to connect with and recruit key stakeholders to disseminate CONSORT information.

Conclusions

A more active approach than previously used is needed to ensure CONSORT implementation by authors, journal editors, and peer reviewers. The identified interventions should be developed, implemented, and evaluated.

1Ottawa Hospital Research Institute, Ottawa, ON, Canada; 2University of Ottawa, Ottawa, ON, Canada, lshamseer@ohri.ca; 3Ottawa Integrative Cancer Centre, Ottawa, ON, Canada; 4Li Ka Shing Knowledge Institute, Toronto, ON, Canada

Conflict of Interest Disclosures

None reported.

Funding/Support

This project was funded by a team grant jointly awarded by the Canadian Institutes of Health Research and the Canadian Foundation for Innovation. The funders had no role in the design, collection, analysis, and interpretation of the data or in the writing of the manuscript or decision and approval to submit the abstract to the Peer Review Congress.

Back to Top

WebCONSORT: Impact of Using a Web-Based Tool to Improve the Reporting of Randomized Trials: A Randomized Controlled Trial

Sally Hopewell,1,2 Isabelle Boutron,2 Douglas G. Altman,1 David Moher,3 Victor Montori,4 Virginia Barbour,5 David Schriger,6 Philippe Ravaud2

Objective

The CONSORT statement is an evidence-based guideline for reporting clinical trials. In addition, a number of extensions have been developed that specify additional information for more complex trials. The aim of this study is to evaluate if a simple web-based tool (WebCONSORT, which incorporates a number of these different extensions) improves the completeness of reporting of randomized trials published in biomedical publications.

Design

We are conducting a multicenter randomized trial. Journals (n=435) that endorse the CONSORT statement (ie, refer to it in their Instructions to Authors) but do not actively implement it (ie, do not require authors to submit a completed CONSORT checklist) have been invited to participate. Authors submitting to participating journals are requested, at the manuscript revision stage, to use the web-based tool to improve the reporting of their randomized trial. Authors (n=302) registering to use the tool are randomized (using centralized, computer-generated randomization) to intervention or control. Authors and journal editors are blinded to the allocation. In the intervention group, authors are directed to the WebCONSORT tool. The tool allows authors to obtain a customized CONSORT checklist and flow diagram specific to their trial design (eg, noninferiority trial, pragmatic trial, cluster trial) and type of intervention (eg, pharmacological or nonpharmacological). The checklist items and flow diagram should then be reported in the manuscript, and the completed checklist submitted to the journal along with the revision. In the control group, authors are directed to a different version of the WebCONSORT tool, which includes the flow diagram but not the main checklist or elements relating to CONSORT extensions; the flow diagram should then be reported in the manuscript and submitted to the journal along with the revision. The main outcome measure is the proportion of poorly reported CONSORT items (initial and extensions) reported in each article.
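
For readers unfamiliar with centralized, computer-generated allocation, the sketch below shows one generic way to produce a 1:1 allocation sequence in permuted blocks; it is not the WebCONSORT implementation, and the block size and seed are arbitrary.

```python
# Generic sketch of centralized, computer-generated 1:1 allocation in permuted
# blocks; not the WebCONSORT implementation. Block size and seed are arbitrary.
import random

def permuted_block_allocation(n_participants: int, block_size: int = 4, seed: int = 42) -> list:
    """Return an allocation sequence of 'intervention'/'control' labels in permuted blocks."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

allocations = permuted_block_allocation(302)  # eg, 302 registering authors
print(allocations[:8])
print(allocations.count("intervention"), allocations.count("control"))
```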

Results

Randomization commenced on March 25, 2013; as of June 13, 2013, 59 journals had agreed to participate.

Conclusion

This randomized trial is still open to recruitment, and preliminary findings will be presented.

1Centre for Statistics in Medicine, Oxford University, Oxford, UK, sally.hopewell@csm.ox.ac.uk; 2Paris Descartes University, France; 3Ottawa Hospital Research Institute, Ottawa, ON, Canada; 4Mayo Clinic, Rochester, MN, USA; 5PLOS Medicine, Cambridge, UK; 6Annals of Emergency Medicine, Los Angeles, CA, USA

Conflict of Interest Disclosures

The authors are members of the CONSORT Group.

Funding/Support

This study received funding from the French Ministry of Health. The funder had no role in the design or conduct of this study.

Postpublication Access, Dissemination, and Exchange

Back to Top

Stability of Internet References in General Medical Journals

Paula A. Rochon,1,2 Wei Wu,2 Jerry H. Gurwitz,3 Sunila R. Kalkar,4 Joel Thomson,5 Sudeep S. Gill2,6

Objective

To evaluate the stability over time of Internet references used in leading general medical journals.

Design

We identified all original contributions published at 2 time points (January 2005 and January 2008) in 5 leading peer-reviewed general medical journals published in print and online (Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine) and in a leading online-only general medical journal (PLOS Medicine). We followed the sample prospectively and determined the number and percentage of Internet references that remained accessible after 5 years (from November 2008 to March 2013).

Results

We identified 68 Internet references in the 2005 publications (n=89) and 86 Internet references in the 2008 publications (n=76) (Table 15). Over a 5-year period, the rate of functional Internet references decreased from 51% to 37% in articles published in 2005 and from 78% to 44% in articles published in 2008. We also evaluated the overall sample (2005 and 2008 articles) in 2013; the rate of functional Internet references was 37% for the 5 journals published in print and online and 59% for the online-only journal (P=.03). Among the Internet references cited in the Methods section, only 30% (95% CI, 20%-43%) remained accessible. Internet references in other sections (Introduction, Results, or Discussion/Comment) had a significantly higher accessibility rate (47%; 95% CI, 37%-57%; P=.04). Commercial Internet references also had a higher accessibility rate (61%; 95% CI, 41%-78%) compared with government Internet references (39%; 95% CI, 27%-52%) and noncommercial organization Internet references (36%; 95% CI, 27%-48%).
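
The accessibility rates reported above rest on repeatedly testing whether cited URLs still resolve. A generic sketch of such a check is shown below; the abstract does not describe the authors' actual checking procedure, and the URLs are placeholders.

```python
# Generic sketch of an automated accessibility check for cited URLs; the study's
# own checking procedure is not described in this abstract, and the URLs below
# are placeholders.
import requests

def is_accessible(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL still resolves to a non-error HTTP response."""
    try:
        response = requests.get(url, timeout=timeout, allow_redirects=True)
        return response.status_code < 400
    except requests.RequestException:
        return False

cited_urls = ["https://example.org/guideline.pdf", "https://example.com/dataset"]
accessible = [u for u in cited_urls if is_accessible(u)]
print(f"{len(accessible)}/{len(cited_urls)} Internet references still accessible")
```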

Table 15. Description of Internet References

Conclusions

The use of Internet references in medical journals has increased, while the stability of Internet references has decreased substantially over time. This decline was most pronounced in the Methods section of articles, where retention of the exact information on study methodology as originally cited may be most crucial to permit subsequent confirmation.

1Women’s College Research Institute, Women’s College Hospital, and Departments of Medicine and Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada, paula.rochon@wchospital.ca; 2Institute for Clinical Evaluative Sciences, Toronto, ON, Canada; 3Division of Geriatric Medicine, Department of Medicine, University of Massachusetts Medical School, Worcester, MA, USA; 4SRK Informatics, Toronto, ON, Canada; 5Nanotechnology Engineering, University of Waterloo, ON, Canada; 6St Mary’s of the Lake Hospital, and Department of Medicine, Queen’s University, Kingston, ON, Canada

Conflict of Interest Disclosures

None reported.

Funding/Support

This work was supported by team grant OTG-88591 from the Canadian Institutes of Health Research (CIHR) Institute of Nutrition, Metabolism, and Diabetes. The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the abstract.

Back to Top

Electronic Culling of the Clinical Research Literature: Filters to Reduce the Burden of Hand Searching

Nancy L. Wilczynski, K. Ann McKibbon, R. Brian Haynes

Objective

To facilitate the transfer of new, valid, relevant knowledge into clinical practice, research staff in the Health Information Research Unit (HiRU) at McMaster University have created a health knowledge refinery (HKR). The HKR begins with critical appraisal of original and review studies in 122 top clinical journals and leads to the creation of the McMaster PLUS (MacPLUS) database. The time and resources to critically appraise the literature are substantial. We determined if Clinical Queries search filters for large bibliographic databases could be modified to electronically cull the clinical research literature to reduce the burden of hand searching.

Design

The Clinical Queries (search filters available for use in PubMed) were modified to include only text words and a NOT string to exclude irrelevant content. A retrospective database of all content indexed in the 122 journals was created by searching MEDLINE via PubMed for a 17-month period. We tested the modified Clinical Queries in this retrospective database to determine whether articles contained in the MacPLUS database were retrieved by the modified filters.
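
The kind of modified query described above can be approximated in PubMed search syntax as sketched below; the filter terms, journal names, and date range are illustrative stand-ins, not the actual HiRU-modified Clinical Queries, and Biopython's Entrez module is used only as one way to run such a search programmatically.

```python
# Rough sketch of a text-word filter combined with a NOT string and a journal
# restriction, in the spirit of the modified Clinical Queries described above.
# Terms, journals, and dates are illustrative, not the actual HiRU filters.
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

text_word_filter = "(randomized controlled trial[tw] OR randomised[tw] OR placebo[tw])"
not_string = "NOT (editorial[tw] OR letter[tw] OR comment[tw])"
journals = '("Lancet"[Journal] OR "BMJ"[Journal])'  # stand-in for the 122-journal set
dates = '("2010/05/01"[PDAT] : "2011/09/30"[PDAT])'

query = f"{journals} AND {dates} AND {text_word_filter} {not_string}"
handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
print(record["Count"], record["IdList"][:5])
```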

Results

A total of 66,939 articles were downloaded from PubMed for the 122 journals over 17 months of publishing, May 1, 2010, to September 30, 2011. This is the number of articles that HiRU staff would need to review over 17 months (an average of 3,938 articles per month, at a time estimate of 92 hours per month). Of these 66,939 articles, 3,701 (5.5%) met our criteria for the MacPLUS database; 53 articles were missed. Review of the content of the 53 missed articles showed that the research evidence was redundant and/or of limited relevance for clinical application. Given prior validation of the search filters, results are shown in Table 16 using all articles rather than separately for the development and validation data sets. Use of the new filters reduced manual processing time by 55%.

Table 16. Results of Filtering the Content of 122 Top Clinical Journals

Conclusion

Search filters can be used to electronically cull the clinical research literature to reduce the burden of hand searching.

Health Information Research Unit and Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada, wilczyn@mcmaster.ca

Conflict of Interest Disclosures

None reported.

Back to Top

Letters and Comments Published in Response to Research: Whither Postpublication Peer Review?

Margaret A. Winker

Objective

To describe features of postpublication peer review published in journals, comparing frequency, features, and accessibility of letters and comments.

Design

The first 20 research articles published in 2012 in each of the 8 ICMJE member journals published in English, plus PLOS Medicine, were evaluated to determine whether letters or comments had been published; their access, characteristics, and interval to publication; and whether links to other types of postpublication peer review were provided.

Results

Eight journals permitted letters and 4 permitted comments on all article types (Table 17). Five journals published any letters and 3 published any comments in response to the articles. Three journals published no letters or comments in response to any of the articles. Of the 8 journals that permitted letters, 31 (19%) of 160 articles had any letters published. Of the 5 journals that published any letters, 31 (31%) of 100 articles had any letters published. Of the 4 journals that permitted comments, 23 (29%) of 80 articles had any comments. Of the 3 journals that published any comments, 23 (38%) of 60 research articles had comments. Eighty percent (144 of 180) of articles had no letters or comments posted. The mean time from publication of the article to letters was 15 weeks; comments were published from 1 day to 6 months after the article was published. All journals publishing letters or comments included conflict of interest disclosures. Letters were more likely than comments to include author responses. All letters and some comments included references. Four journals linked articles to journal-based postpublication peer review; for 3 journals that did not link to related letters, letters could be identified only via journal or PubMed search. Three journals had different access rules for articles and letters. One journal linked articles to nonjournal postpublication peer review.

Table 17. Postpublication Peer Review Frequency, Features, and Accessibility for 8 Journals

NA indicates not applicable; PPPR, postpublication peer review.

Conclusions

Most research articles had no postpublication peer review letters or comments published in the journal. Some journals did not link to postpublication peer review letters or comments from the article; only 1 journal linked to nonjournal postpublication peer review. Some journals used different access rules for articles and postpublication peer review. Postpublication peer review needs to be substantially improved to live up to its potential for helping readers assess study quality and impact.

PLOS Medicine, San Francisco, CA, USA, mwinker@plos.org

Conflict of Interest Disclosures

Margaret Winker reports no conflicts of interest, other than working for PLOS Medicine, which uses comments rather than letters.

Back to Top

Likes, Shares, and Tweets: The Growing Role of Social Media at a General Medical Journal

James A. Colbert,1,2 Jennifer M. Zeis,1 Pamela W. Miller,1 Ruth Y. Lewis,3 Jonathan N. Adler,1 Edward W. Campion1

Objective

Many medical journals now have a presence on Facebook and Twitter, but the experience of a general medical journal with these social media platforms has not yet been described in the literature.

Design

We sought to characterize the interactions of Facebook and Twitter users with a large, weekly general medical journal, the New England Journal of Medicine (NEJM). We obtained data from NEJM.org, the NEJM Facebook webpage, and the NEJM Twitter feed for usage between January 1, 2012, and December 31, 2012.

Results

Facebook has become the sixth-ranked source of referrals to NEJM.org among all websites, with a total of 206,191 visits referred during 2012. As of December 31, 2012, 359,006 unique Facebook users had “Liked” the NEJM Facebook page, representing a combined network of 83 million friends. During 2012, NEJM posted 727 times on its Facebook wall, and these posts were seen by 32,660,674 users. Medical quizzes received the most comments (mean, 130 vs 55 for all other posts; P<.001), but journal content posts received higher numbers of shares and likes. Posts containing images received more comments than those without images (65 vs 27; P<.001), as well as more shares (68 vs 31; P<.001) and more likes (242 vs 123; P<.001). However, posts without images directed more traffic to NEJM.org than those with images (142 vs 23 click-throughs; P<.001). The number of NEJM Twitter followers doubled from 47,500 to 93,000 over the course of 2012. By December 2012, Twitter ranked tenth overall among sources of web traffic to NEJM.org, bringing more than 16,000 visitors that month. Tweets about research study results were retweeted more often than other types of tweets (29.9 vs 19.3; P<.0001).

Conclusions

Facebook and Twitter have proven to be important outlets for dissemination of journal content to a large, worldwide audience. The reach of NEJM through these outlets has grown substantially over the past year, and both are driving additional traffic to the journal’s website. Further research will explore how medical journals can use Facebook, Twitter, and other sources of social media to connect more effectively with readers in the digital age.

1New England Journal of Medicine, Boston, MA, USA, jcolbert@partners.org; 2Division of Medical Communications, Brigham and Women’s Hospital, Boston, MA, USA; 3Earlham College, Richmond, IN, USA

Conflict of Interest Disclosures

None reported.

Back to Top

POSTER SESSION ABSTRACTS

Note: Abstracts reflect the status of the research at the time the abstracts were accepted for presentation.

Open Access and Dissemination

Back to Top

Authors’ Views on Online-Only Publication

Frauke Becher,1 Are Brean,1 Erlend Hem,1 Christine Laine,2,3 Alicia Ludwig,3 Mary Beth Schaeffer,2,3 Catharine Stack,2,3 Arlene Weissman3

Objective

Many biomedical journals have both print and electronic versions, providing the opportunity for some articles to appear online only. We wanted to determine authors’ views on online-only publication in such settings.

Design

Using Survey Monkey, we surveyed individuals who served as corresponding author on at least one article published in 2012 in the Annals of Internal Medicine (US-based journal) and the Journal of the Norwegian Medical Association (Scandinavia-based journal). In addition to asking about authors’ reading habits, we asked them to suppose that the journal they had published in began to publish some articles online only with no print publication, specifying that online-only articles would be indexed in PubMed as were those that appeared in both electronic and print versions. We then asked whether and how the possibility of online-only publication would influence their likelihood of submitting future work to the journal. Both journals publish new issues twice a month. The US-based and Scandinavia-based journals have circulations of approximately 85,000 and 28,650, respectively, and receive about 3,000 and 1,500 manuscripts per year, respectively.

Results

Seventy percent (237/335) of authors at the Scandinavia-based journal and 59% (161/274) of authors at the US-based journal completed the survey. Forty-five percent of Scandinavia-journal authors and 26% of US-journal authors reported being less likely to submit articles if they knew their article would be published online only, with no subsequent print publication (Table 18). A smaller proportion of authors (5% Scandinavia, 11% US) reported being more likely to submit if their article would be available online only. Reasons for being less likely to submit varied between the Scandinavia-based and US-based journals. Most surveyed authors (78% Scandinavia, 75% US) believed that the decision of whether to publish an article in print or online only should either be determined by editors with a right of refusal by the authors or be determined jointly by authors and editors.

Table 18. Results of Survey About Authors’ Views on Online-Only Publications

Conclusions

As electronic formats become an increasingly common way that readers access biomedical journals, journals may choose to cease print publication. In the interim, most journals have print and electronic versions. The possibility that an article might be published online only in such settings could influence authors’ decisions about where to submit their work.

1Journal of the Norwegian Medical Association, Oslo, Norway; 2Annals of Internal Medicine, Philadelphia, PA, USA, claine@acponline.org; 3American College of Physicians, Philadelphia, PA, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

No external funding was provided for this study. Contributions of staff time and resources came from Annals of Internal Medicine and Journal of the Norwegian Medical Association.

Back to Top

Open-Access Publication of Research Papers: Who Can Pay?

Hajira Dambha,1 Roger Jones2

Objective

To examine the implications of open-access publication for a UK primary care research journal, the British Journal of General Practice (BJGP).

Design

We retrospectively surveyed and analyzed the funding sources of all original research papers published in the BJGP in 2011-2012 (before open access was introduced). We identified research funding sources from the papers themselves and by asking the authors, and we determined whether these funders supported the provision of article publishing charges (APCs) and whether the funding sources were in the United Kingdom.

Results

Of the 216 papers that were analyzed, 49% had potential open-access funding. Outside the UK, only 6% of papers had this funding available. In the UK, 60% of papers had potential open-access funding. The remaining 40% of UK papers did not have access to funding that could support open-access publication in the BJGP.

Conclusions

Current research funding does not adequately support open-access publication of primary care research papers. Many primary care authors, within and outside the UK, may wish to make their papers available through open access but do not have access to APC funding, raising funding issues for them and their target journals. This funding gap may have implications for authors’ choice of journals and possibly, because of the potential disincentivizing effect of APCs, for submission rates.

1University of Cambridge, Department of Primary Care, Institute of Public Health, Forvie Site, Cambridge, UK, hajiradambha@doctors.org.uk; 2British Journal of General Practice, London, UK

Conflict of Interest Disclosures

None reported.

Funding/Support

No external funding received.

Back to Top

Benchmarking Prominent Medical and Scientific Journal Facebook and Twitter Popularity

Robert P. Dellavalle,1,2 Natalia S. Dellavalle,3 Lisa M. Schilling1

Objective

To describe medical and scientific journal social media popularity.

Design

We used computer-assisted character searches and independent author data entry between January 9 and June 3, 2013, to examine the public Facebook and Twitter metrics of the 10 medical and science journals with the highest Impact Factors in 2012 that also maintained unique Facebook and Twitter pages: New England Journal of Medicine (NEJM), Lancet, Nature, Science, Cell, JAMA, Nature Medicine, Annals of Internal Medicine, BMJ, and PNAS.

Results

On June 3, 2013, the mean age of the 10 journal Facebook pages was 4.1 years and ranged from 2.7 (Cell) to 5.8 (BMJ) years; the mean Twitter account age was 3.7 years and ranged from 1.3 (Nature) to 4.6 (BMJ) years. The sum of Facebook “likes” for all 10 journals increased 18%, from 723,197 to 854,659, between January 9, 2013, and June 3, 2013; the sum of Twitter followers increased 32%, from 400,642 to 528,911. The mean number of Facebook “likes” was 85,466 and ranged from 399,955 (NEJM) to 6,737 (Annals of Internal Medicine) (Table 19). The mean number of new “likes” per day during the study period was 91 and ranged from 269 (NEJM) to 5 (Annals of Internal Medicine). The mean number of Facebook postings between May 28, 2013, and June 3, 2013, was 14 and ranged from 49 (BMJ, including 1 video, 1 quiz post, 1 Pinterest post, and 1 podcast) to 3 (Lancet). JAMA and NEJM had the most quiz-related posts (6 and 5, respectively). The mean number of Twitter followers was 52,891 and ranged from 121,539 (Science) to 4,817 (PNAS). The mean number of new Twitter followers per day was 88 and ranged from 219 (Science) to 10 (Annals of Internal Medicine). The mean number of tweets between May 28, 2013, and June 3, 2013, was 21 and ranged from 83 (BMJ) to 4 (Nature Medicine and Annals of Internal Medicine). Science tweets used the most hashtags (37), and BMJ tweets had the most “@”-directed comments (13).

Table 19. Facebook and Twitter Activity for 10 High-Impact Medical and Science Journals

Conclusions

We benchmarked prominent medical and scientific journal social media popularity and provide relative journal rankings using Facebook and Twitter metrics. The data demonstrate a robustly increasing social media audience for these journals.

1University of Colorado School of Medicine, Aurora, CO, USA, robert.dellavalle@ucdenver.edu; 2Department of Veterans Affairs Medical Center, Denver, CO, USA; 3East High School, Denver Public Schools, Denver, CO, USA

Conflict of Interest Disclosures

None reported.

Back to Top

Investigating the Use and Users of the Norwegian Electronic Health Library by Indirect Indicators

Runar Eggen, Magne Nylenna

Objective

The Norwegian Electronic Health Library (NEHL), launched in 2006, is a publicly funded website for health care professionals and students that provides free access to important sources of knowledge intended for health care personnel. Most content is available to anyone with a Norwegian IP address. From 2011 to 2012, the library saw a sharp increase in the number of page views, visits, and unique visitors, much sharper than in previous years. We aimed to find out whether this rise was due to higher usage by health care personnel or by the general public.

Design

Because detailed user data were not available, we used proxy variables based on data from Google Analytics. The following dimensions were analyzed: the kinds of networks users originated from; the times of day and week at which usage peaked; the type of content accessed; first-time vs returning visitors; time spent and page views per visit; and arrival via search engine vs direct access, as well as device type.

Results

A majority of users came via network companies supplying organizations such as hospitals. Visits from workplace networks did not increase much from 2011 to 2012. The usage peaked during office hours. From 2011 to 2012, evening use increased slightly. There was an 8-fold increase in page views of patient information. First-time visitors increased considerably, from 40% to 48%. The percentage of visitors coming from search engines like Google increased from 39% to 58%. Page views increased 26% (from 3.4 million to 4.3 million), visits 43% (from 1.1 million to 1.6 million), and unique visitors 66% (from 496,000 to 827,000). Time spent per visit decreased by 23%, and pages per visit decreased by 12%. Mobile/tablet usage increased 6-fold.

Conclusions

The pattern of user networks and the time distribution of use point toward a majority of users being at work and most likely being health professionals. The development from 2011 to 2012 indicates that more laypeople have discovered the library. When detailed information about the users of a digital knowledge resource is missing, indirect indicators of use and users may be helpful.

Norwegian Knowledge Centre for the Health Services (NOKC), Oslo, Norway, rue@nokc.no

Conflict of Interest Disclosures

The authors are both employees of the Norwegian Knowledge Centre for the Health Services (NOKC) and this study was done as part of their work. The authors declare no financial interest in the findings of this study, nor receipt of any external funding for this project.

Back to Top

Do Readers Read What Peer Reviewers Selected?

Marian M. Tolksdorf,1 Markus K. Heinemann1,2

Objective

To determine whether the manuscripts selected by peer review in various subject categories of a cardiothoracic surgical journal are read in proportion to their overall representation.

Design

Subject categories of all cardiac articles published in the journal between 2004 and 2012 were determined by 2 independent investigators. For the same 9-year period, full-text PDF downloads, as tracked by the publisher, were analyzed in the same way and compared with the overall group. All downloads of an article over the years were summed, with the sum determining the article’s final rank within its year of publication. The 90 most-cited papers were also analyzed for comparison.
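
The summing-and-ranking step can be sketched in a few lines; the column names and download counts below are hypothetical, not the publisher's tracking data.

```python
# Sketch of the ranking step: sum each article's downloads across tracking years,
# then rank articles within their year of publication. Data and column names are
# hypothetical, not the publisher's tracking data.
import pandas as pd

downloads = pd.DataFrame({
    "article_id": ["A", "A", "B", "B", "C"],
    "publication_year": [2004, 2004, 2004, 2004, 2005],
    "downloads": [120, 310, 95, 60, 540],  # one row per article per tracking year
})

totals = (downloads
          .groupby(["publication_year", "article_id"], as_index=False)["downloads"]
          .sum())
totals["rank_within_year"] = (totals
                              .groupby("publication_year")["downloads"]
                              .rank(ascending=False, method="min"))
print(totals.sort_values(["publication_year", "rank_within_year"]))
```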

Results

The journal main group comprised 1,079 manuscripts; 624 dealt with cardiac surgery and were assigned to 1 of 9 subject categories. The top 10 subset articles of each year were downloaded between 75 and 1,048 times (mean, 216; total, 19,406). Among the top 90 articles, cardiac surgery was overrepresented compared with thoracic surgery (cardiac, 61/90 vs 624/1,079; thoracic, 27/90 vs 397/1,079). Among the subjects, readers preferred general cardiac (18/61 vs 116/624), which includes the annual statistics of the journal’s scientific mother society (7/90 articles, 1,591 downloads). The 2 articles achieving more than 1,000 downloads dealt with heart failure treatment (2,077 of 19,406 downloads). With the exception of “tumor” and “congenital,” the subject categories comprised more original articles than case reports; however, the prevailing article type did not permit a conclusion about article quality. Analysis of the 90 most-cited manuscripts during the same period showed an overlap of 34 papers (34/90; 38%) that were among both the most downloaded and the most cited. The subject categories were similar, with 2 exceptions: cardiopulmonary bypass had more citations, and transplantation and heart failure had more downloads.

Conclusions

Whereas a scientific journal may have a dedicated mission to publish a representative selection of all its relevant subject categories, readers show particular interests. These may be spawned by unique characteristics (society journal, exclusive information) or ongoing controversial debates (heart failure treatment). Editors can use bibliometric analyses to balance the content of their journals if needed.

1Cardiac, Thoracic and Vascular Surgery, Universitätsmedizin Mainz, Mainz, Germany; 2Thoracic and Cardiovascular Surgeon, Mainz, Germany, heinemann@uni-mainz.de

Conflict of Interest Disclosures

None reported.

Back to Top

MacPLUS Federated Search: A Centralized Internet-Based Resource to Bring Best Evidence to Health Care Professionals

Emma C. Iserman, Alfonso Iorio, R. Brian Haynes

Objective

Evidence from health care research is only helpful for patients if clinicians are aware it exists. As new evidence is published, clinicians risk falling ever further behind what is the most current best practice. MacPLUS Federated Search (MPFS) is an online service designed to help clinicians stay up to date and access current, relevant, peer-reviewed published evidence quickly and easily from multiple sources, recognizing that no one such source is comprehensive or ever fully current.

Design

MPFS provides e-mail alerts to newly published, high-quality, clinically relevant evidence and a search engine that simultaneously searches multiple evidence-based resources. Search results are ordered according to a hierarchy of evidence. This study analyzed the possible impact of MPFS. Beginning in June 2011, after randomly selected logins, medical students, residents, and staff physicians received an online assessment of the impact of that login on their practice, including the relevance of the information, its use for a specific patient, and whether a specific patient was expected to receive a health benefit.

Results

For users who received e-mail alerts, 56% (502/900) of responses indicated that the information received was totally relevant to at least 1 patient, and of those who indicated they would use the information with a specific patient, 92% (236/255) expected or possibly expected a health benefit for that patient. For users who performed searches, among those who indicated the search was for a clinical query, 81% (126/155) found information relevant to their objectives. Of those who indicated they would use the information for a specific patient, 93% (41/44) thought there would be or possibly would be a health benefit for the patient.

Conclusions

Results of this survey suggest that using MPFS can have an impact on the information used by health care professionals, which might then improve patient care. Providing organized access to current, high-quality, relevant, and useful published research to professionals could be an important part of improving health care.

McMaster University, Hamilton, ON, Canada, iserman@mcmaster.ca

Conflict of Interest Disclosures

None reported.

Funding/Support

Funding for this research was provided by Ontario Ministry of Health and Long-term Care Academic Funding Program. The funder provided monetary support only.

Authorship

Back to Top

Honorary and Ghost Authorship in Selected Nursing Journals

Shawn Kennedy,1 Jane Barnsteiner,2 John Daly3

Objective

To assess the prevalence of honorary and ghost authors in leading nursing journals in 2010-2012 from the perspective of both authors and editors; and to compare this with the prevalence previously reported for a group of high-impact biomedical journals.

Design

We selected journals to include a representation of generalist (n=6) and specialist (n=4) journals that followed publishing authorship standards as determined by ICMJE or COPE guideline recommendations and that were indexed in the Thomson Reuters Journal Citation Reports. Using an adaptation of the survey of Wislar et al (a 35-item questionnaire assessing the contributions and participation of all authors and any assistance or contributions from unnamed authors), an online survey of 1,350 authors of articles published in 2010-2012 in the 10 peer-reviewed nursing journals and of 203 editors was undertaken in February and March 2013. Targeted articles included research, reviews, and quality improvement (QI) reports. Editorials were excluded on the assumption that most are not data based and many are by single authors. All authors and editors in the sampling frame were surveyed by accessing the e-mail addresses of corresponding authors retrieved from publication data or an online search. Two authors evaluated each article type based on criteria established a priori. Interrater checks were applied, and differences were resolved by consensus.

Results

Response rates were 556/1,350 (41%) for the authors and 60/203 (30%) for the editors. After exclusion of incomplete data, article-based data from 423 authors were analyzed. The majority (72%) of articles were research articles, 5.2% were QI reports, and 22.7% were reviews. Using prespecified criteria, honorary authorship was identified in 68% of articles and ghost authorship in 27%. Editors reported adhering to guidelines, although 37% had discovered honorary authors in their own journal, and 89% were aware of ghost authorship in other journals over the previous 3 years. No differences were identified between journals or by publisher.

Conclusion

Despite authorship guidelines, honorary and ghost authorship appear to be widespread among nursing journals.

1American Journal of Nursing, Wolters Kluwer Health, New York, NY, USA; 2University of Pennsylvania School of Nursing, Philadelphia, PA, USA; 3Australian Journal of Nursing Practice, Scholarship & Research and World Health Organization Collaborating Centre for Nursing, Midwifery & Health Development, University of Technology, Sydney, Lindfield, Australia, john.daly@uts.edu.au

Conflict of Interest Disclosures

Each author reports no financial conflict of interest. Shawn Kennedy is a full-time employee of Wolters Kluwer Health, the publisher of several of the journals included in the study, but the journals were chosen solely by the authors of the study; her time and administrative support were provided in kind. Jane Barnsteiner is a consulting editor for 1 of the journals and receives an honorarium for editorial work; she did not receive payment for this study, but travel for presenting this work will be partially underwritten by the American Journal of Nursing. John Daly received no funding and has no disclosures.

Back to Top

Recommendations to ICMJE for Improving the Guidelines for Authorship Disclosures

Fay Ling,1 John Kerpan2

Objective

The guidelines for authorship criteria developed by the International Committee of Medical Journal Editors (ICMJE) have been widely endorsed by biomedical journals. The American Journal of Respiratory and Critical Care Medicine (AJRCCM) is a leading medical journal that not only endorses the guidelines, but also requests authors to include statements of each author’s contribution(s) and publishes the statements with the articles. This study investigates authors’ contributions described in articles published in AJRCCM and makes recommendations for improving the ICMJE guidelines.

Design

Authors’ contribution statements published in AJRCCM in 2012 were manually transferred into datasheets. The analyzed data pool included 199 research articles with a total of 2,041 author contributions. The datasheets list the contribution descriptions in rows and the authors’ abbreviated names in columns. Here, we refer to the author contributions simply as author roles.
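
The datasheet layout described above (contribution descriptions in rows, author initials in columns) lends itself to simple matrix-style summaries; the sketch below uses invented roles, initials, and values to show how per-article and per-author role counts of the kind reported in the Results could be derived.

```python
# Sketch of the datasheet layout (contribution descriptions in rows, author
# initials in columns) and two summaries of the kind reported in the Results.
# Roles, initials, and values are invented for illustration.
import pandas as pd

contributions = pd.DataFrame(
    {"JS": [1, 1, 0, 1], "MK": [1, 0, 1, 1], "LP": [0, 0, 1, 0]},
    index=["conception and design", "data analysis", "drafting", "final approval"],
)

roles_in_article = int((contributions.sum(axis=1) > 0).sum())  # distinct roles mentioned
mean_roles_per_author = contributions.sum(axis=0).mean()       # average roles per author
print(f"author roles mentioned in this article: {roles_in_article}")
print(f"average roles per author: {mean_roles_per_author:.1f}")
```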

Results

We identified the 9 most common and the 24 least common author roles in terms of percentage of mentions.

We then conducted a data analysis and found an average of 7.5 total author roles per article and an average of close to 4 roles per author per article. Finally, we identified authorship descriptions that need clarification.

Conclusions

Because the nature of author collaboration differs substantially from one study to another, authorship criteria will always be a topic of debate. Current common practice for declaring authorship is that authors describe their contributions either by using the terms adopted from the ICMJE, which are too broad, or by using their own descriptions, which are often unclear or obscure. Although our study was performed on only one journal, we evaluated some of our competitors’ journals and found that authors’ contributions were published in similar formats. To make authorship information more transparent and consistent, we suggest that the ICMJE improve its guidelines to help authors report their contributions by providing recommended descriptors that can be used for the different stages of research and the variety of supporting roles.

1American Thoracic Society, Publications, New York, NY, USA, fling@thoracic.org; 2Northwestern University, Feinberg School of Medicine, Department of Pulmonary and Critical Care Medicine, Chicago, IL, USA

Conflict of Interest Disclosures

None reported.

Back to Top

Authorship: Attitudes and Practice Among Norwegian Researchers

Magne Nylenna,1,2 Frode Vartdal,2 Erlend Bremertun Smeland,2,3 Frode Fagerbakk,2 Peter Kierulf2

Objective

Attitudes to and practice of scientific authorship vary. We have studied this variation among researchers in a university hospital and medical school in Norway.

Design

All faculty, researchers, and PhD students at Oslo University Hospital and the Medical Faculty, University of Oslo (approximately 2,700), were invited by e-mail to answer a web-based questionnaire in January 2013. The researchers were asked to report their authorship experiences and to score their (1) agreement with and (2) ability to practice according to 13 principal statements on authorship qualifications and criteria on a 5-level Likert scale (1=completely agree, 5=completely disagree). The statements were taken from the International Committee of Medical Journal Editors (ICMJE) and other recommendations on authorship.

Results

A total of 654 questionnaires were returned (response rate 24%, in line with previously reported web-based surveys). Of these respondents, 25% had published fewer than 5 scientific articles, 43% between 5 and 49, and 32% more than 50. Ninety-seven percent reported knowledge of defined authorship criteria, and 68% regarded breaches of these as scientific misconduct. Thirty-six percent had experienced pressure to include undeserved authors in their papers, more in basic science (46%) than in community medicine (25%). Researchers with less than 6 years of research experience found decisions on authorship in general more difficult than more experienced researchers did (48% vs 30%). The respondents’ agreement with the statements on authorship was higher than their ability to practice according to them for all statements. For all statements combined, the average score was 1.4 (agreement) vs 2.3 (ability to practice). The discrepancy between agreement with and ability to practice was highest for the ICMJE requirement of “substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data.” Attitudes to and practice of authorship criteria varied among different groups of researchers.

Conclusions

Almost all the responding researchers had knowledge of formal authorship criteria. Most of them agreed with the criteria, but found it harder to practice according to them. Traditions and culture may explain differences between different groups of researchers.

1Norwegian Knowledge Centre for the Health Services, and Department of Health and Society, Faculty of Medicine, University of Oslo, Oslo, Norway, magne.nylenna@kunnskapssenteret.no; 2Faculty of Medicine, University of Oslo, Oslo, Norway; 3Institute for Cancer Research, Oslo University Hospital, Oslo, Norway

Conflict of Interest Disclosures

None reported.

Funding/Support

The survey was funded by the Faculty of Medicine, University of Oslo.

Back to Top

Honorary Authorship and Coercive Citation in Medical Research

Allen Wilhite, Eric A. Fong

Objective

This project measures the extent of, incentives for, and reactions to citation and authorship manipulation in medical grant proposals and publication. We looked into 3 types of manipulation: honorary authorship (adding authors who do not contribute), gratuitous citation (adding citations that are not pertinent), and coercive citation (editors directing authors to add citations to articles from their journal with no indication the manuscript was lacking in attribution or missing content, and no direction to specific articles).

Design

We studied these issues by e-mailing a survey to 37,500 medical researchers and nursing professors. We received 3,485 responses for a response rate of 9.3%. In addition to their knowledge of and personal experience with citation and authorship manipulation, we asked about their motives and opinions. We also gathered information on their academic rank, sex, publication success, and grant experience.

Results

Of our respondents, 3,054 said they were aware of authorship manipulation. Honorary authorship was common: 1,101 respondents (31%) said they had felt “obligated” to add an honorary author, even though 80% viewed the practice as inappropriate. Physicians reported that they most commonly added the directors of their laboratories, while nurses added authors who were “in a position of authority and could affect their career.” Similarly, 936 respondents said they had added honorary authors to grant proposals, and of those, 681 (72.7%) said they did so to increase their chances of being funded. Finally, 996 respondents said they were aware of coercive citation, and 258 reported that they had personally been directed by an editor to add citations to the editor’s journal even if the citations were not material to the research. Response bias is a serious limitation in these survey data, and one must be careful about generalizing, but the demographics of the respondents and our target population match up well.

Conclusions

Our study adds further evidence on the existence and extent of authorship and citation manipulation in medical research. We continue to suggest that blind review of grant proposals will reduce the incentive to add honorary authors to funding proposals.

College of Business Administration, University of Alabama in Huntsville, Huntsville, AL, USA, wilhitea@uah.edu

Conflict of Interest Disclosures

None reported.

Funding/Support

This project is partially supported by a summer research grant from the College of Business Administration, University of Alabama in Huntsville.

Back to Top

Use of the “Antighostwriting Checklist” at a General Medical Journal: Results of a Pilot Study

Elizabeth Wager,1 Iain Hrynaszkiewicz,2 Jigisha Patel2

Objective

The World Association of Medical Editors (WAME) defines ghostwriting as “when someone has made substantial contributions to writing a manuscript and this role is not mentioned in the manuscript” and considers it “dishonest and unacceptable.” We tested the feasibility and effects of asking authors to complete an antighostwriting checklist, which asks about the involvement of professional writers, and to determine the frequency of use of medical writers, translators, and authors’ editors in submitted articles.

Design

All manuscripts submitted to BMC Medicine from October 2010 to April 2011 were included in the study. Alternate weeks were allocated to checklist or control. During checklist weeks, all corresponding authors were asked to complete the online checklist (supplied as published) and an additional short survey (developed for this study). If the Acknowledgment, Competing Interests, or Author Contribution sections were incomplete, authors were also asked to supply missing details. During the control weeks, if the required sections were missing, authors were sent the journal’s standard letter requesting this information but did not receive the checklist or survey.

Results

A total of 180 submissions were included in the study, for which 91 corresponding authors received the checklist and survey, and 51 (56%) completed it. Nearly all manuscripts had complete contributor statements by the second submission, and there was no difference between the groups in the completeness of this information (35/36 for the control and 44/46 for the checklist group). The checklist identified 1 case in which a professional writer had been involved. The survey indicated that professional translators had been used in 5 cases (10%), authors’ editors (from within the authors’ institution and not funded by the research sponsor) in 11 cases (22%), and that authors obtained informal, unpaid editorial help (eg, from a native English-speaking colleague or friend) in 10 cases (20%).

Conclusions

We could not determine whether the antighostwriting checklist is effective for identifying unacknowledged writing assistance. However, this study revealed the frequency of use of translators, authors’ editors, and informal editorial assistance and the feasibility of using the checklist.

1Sideview, Princes Risborough, UK, liz@sideview.demon.co.uk; 2BioMed Central, London, UK

Conflicts of Interest Disclosures

None reported.

Funding/Support

No external funding was obtained for this project. BioMed Central provided staff time. Elizabeth Wager is self-employed and received no payment for this work. All other authors were employees of BioMed Central at the time of the study.

Back to Top

Chinese Authors, Reviewers, and Editors’ Attitudes Toward ResearcherID

SUN Jing, LIU Huan, QIAN Shou-chu

Objective

Many authors share the same full names worldwide (eg, WANG Chen, LI Jian-jun), and these names are presented differently by different journals and databases, which makes it difficult to disambiguate author names in the published literature and assign accurate attribution/credit. We preliminarily investigated the attitudes of Chinese authors, reviewers, and editors toward ResearcherID, an author identification system produced by Thomson Reuters.

Design

We distributed a questionnaire about ResearcherID to 30 authors (20 by telephone and 10 by e-mail), 30 reviewers of the Chinese Medical Journal (20 face-to-face and 10 by telephone), and 40 editors from 25 Chinese core medical journals (29 face-to-face and 11 by telephone). The questionnaire contained 4 questions: (1) Have you ever heard of ResearcherID? (2) Do you think this system is useful and important for literature retrieval? (3) What problems can be solved when you use this system? (4) Do you have any questions about this system?

Results

We received responses from all 40 editors and 30 reviewers and from 27 of 30 authors. Five editors, 3 reviewers, and no authors reported understanding the full capability of ResearcherID and the problems it can solve. All the responders reported that ResearcherID may be a useful tool for literature retrieval and that it can partly solve the problem of duplication of names. Some of the participants responded that this tool can partly handle the following issues: problems posed by nonunified name formats of researchers in medical journals; accurate tracking of the research papers of academic leaders by using this identifier; and convenience for editors in judging researchers’ academic achievements and selecting proper reviewers for journals. Participants also raised questions, such as the following: How are authors’ articles that are published in journals not indexed in Web of Science tracked? Is ResearcherID universal to literature searching in different biomedical databases? Does ResearcherID address the accuracy of self-identified contributions of an author? How is incomplete or inaccurate information handled?

Conclusion

ResearcherID and its complete functionality are not well known to researchers and editors in China, but it may be helpful to them.

Chinese Medical Journal, Chinese Medical Association, Beijing, China, scqian@126.com

Conflicts of Interest Disclosures

None reported.

Bias

Back to Top

How Are Gender and Geography Associated With Sponsorship of Collaborative Cancer Research?

Gordon H. Sun,1,2,3 Nicholas M. Moloci,1 Kelsey Schmidt,1 Mark P. MacEachern,4 Reshma Jagsi1,5

Objective

Funding and publication of collaborative cancer research may be consciously or unconsciously influenced by researcher characteristics such as gender and geographic location. We examined whether these characteristics were associated with sponsorship of oncologic clinical trials authored by formally named collaborative groups.

Design

We conducted a bibliometric analysis of PubMed from 2001 to 2011, screening 5,015 randomized and nonrandomized clinical trials using a protocol developed by a medical librarian and piloting the search string to maximize eligible article yield. To ensure reliability of data collection, 3 coauthors conducted 3 sequential 100-article evaluations, obtaining error rates of ≤5% for all variables by the final evaluation. We collected information on first and corresponding author gender and nationality, study sponsorship, and other variables such as total number of collaborators. Associations between authorship and sponsorship were analyzed with chi-squared tests, while time-based trends were analyzed using the Cochran-Armitage test.
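A minimal sketch (not the study’s code) of the two tests named above, using hypothetical counts; scipy provides the chi-squared test, and the Cochran-Armitage trend statistic is written out from its standard formula.

```python
import numpy as np
from scipy.stats import chi2_contingency, norm

# Chi-squared test of association, eg, first-author gender (rows) by sponsor type (columns).
# The counts are hypothetical placeholders, not the study data.
table = np.array([[203, 195],    # female first authors: US government, industry
                  [439, 771]])   # male first authors:   US government, industry
chi2, p_assoc, dof, expected = chi2_contingency(table)

def cochran_armitage(successes, totals, scores=None):
    """Two-sided Cochran-Armitage test for a linear trend in proportions across
    ordered groups (eg, proportion of multinational trials by publication year)."""
    successes = np.asarray(successes, float)
    totals = np.asarray(totals, float)
    scores = np.arange(len(totals), dtype=float) if scores is None else np.asarray(scores, float)
    p_bar = successes.sum() / totals.sum()
    t_stat = np.sum(scores * (successes - totals * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(totals * scores ** 2)
                                 - np.sum(totals * scores) ** 2 / totals.sum())
    z = t_stat / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Hypothetical yearly counts: multinational trials / all eligible trials, 2001-2011.
z, p_trend = cochran_armitage([30, 33, 38, 41, 47, 50, 55, 58, 63, 66, 70],
                              [79, 80, 86, 88, 95, 97, 103, 105, 108, 107, 104])
```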

Results

Within 2,424 eligible articles, first (1,848, 76.5%) and corresponding (1,944, 80.5%) authors were predominantly male. Gender distribution did not change significantly over the study time frame. Authors most commonly hailed from the United States by country and Europe by continent. Among first authors, 55% were from Europe, 30% North America, 12% Asia, 2% Australia, 1% Africa, 1% South America, and 1% other. Collaborative trials with authors from more than 1 country increased from 38.0% in 2001 to 46.9% in 2011 (P for trend=.001). Industry (966, 39.9%) and the US government (642, 26.5%) were the 2 largest research sponsors. Industry sponsorship increased significantly from 36.7% in 2001 to 63.0% in 2011 (P for trend<.001), while no significant change in US government sponsorship over time was observed. Among first authors, 35.8% of women were funded by the US government vs 23.9% of men (P<.001); among corresponding authors, 40.8% of women vs 22.9% of men (P<.001). In contrast, 41.0% of male first authors were funded by industry vs 34.4% of women (P=.008), and 41.3% of male corresponding authors vs 33.1% of women (P=.001). Industry-funded studies were not significantly more likely to originate from either Europe or the United States.

Conclusions

Key authors of collaborative cancer research are predominantly male and from either the United States or Europe. Women were particularly underrepresented among authors of industry-sponsored oncologic clinical trials.

1Robert Wood Johnson Foundation Clinical Scholars, University of Michigan, Ann Arbor, MI, USA; 2Department of Otolaryngology-Head and Neck Surgery, University of Michigan, Ann Arbor, MI, USA, gordonsu@med.umich.edu; 3VA Center for Clinical Management Research, Ann Arbor VA Healthcare System, Ann Arbor, MI, USA; 4A. Alfred Taubman Health Sciences Library, University of Michigan, Ann Arbor, MI, USA; 5Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA

Conflict of Interest Disclosures

Gordon Sun is a Robert Wood Johnson Foundation Clinical Scholar supported by the US Department of Veterans Affairs and has received grants or honoraria from the Blue Cross Blue Shield of Michigan Foundation and the BMJ Publishing Group. Reshma Jagsi is on the medical advisory board for Eviti and is supported by grants from the National Institutes of Health, American Cancer Society, National Comprehensive Cancer Network, and Burroughs-Wellcome/Alliance for Academic Internal Medicine, and she is conducting a trial supported by the Breast Cancer Research Foundation, for which the study drug is provided by Abbott Pharmaceuticals. The other coauthors report no disclosures.

Back to Top

Association Between Personal Financial Conflicts of Interest and Recommendation of Medical Interventions: A Systematic Review

Andreas Lundh,1,2 Anders W. Jørgensen,1,3 Lisa Bero4

Objective

Financial conflicts of interest may influence scientific data presentation and therefore influence which treatments are recommended in review articles and clinical guidelines. We conducted a systematic review investigating whether authors of scientific opinion pieces or clinical guidelines with personal financial conflicts of interest related to drug, device, or medical imaging companies were more likely to recommend the companies’ products.

Design

We searched the Cochrane Methodology Register, MEDLINE, and EMBASE for eligible studies. In addition, we searched similar systematic reviews, the reference lists of included studies, and Web of Science for studies citing the included studies, and we contacted experts for additional relevant studies. We included research studies comparing the association between conflicts of interest and recommendations in individual journal articles and guidelines. Two assessors independently included studies, extracted data, and assessed studies for risk of bias. We calculated pooled risk ratios (RRs) for dichotomous data (with 95% confidence intervals).
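A minimal sketch, not from the review itself, of one standard way to pool risk ratios under a random-effects (DerSimonian-Laird) model and to quantify heterogeneity with I²; the 2x2 counts below are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-study counts: recommendations favoring the company's drug
# among authors with vs without conflicts of interest.
studies = [  # (favorable_coi, n_coi, favorable_no_coi, n_no_coi)
    (40, 60, 10, 55),
    (25, 48, 8, 50),
    (33, 70, 12, 66),
]

log_rr, var = [], []
for a, n1, c, n2 in studies:
    log_rr.append(np.log((a / n1) / (c / n2)))
    var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)   # variance of log risk ratio
log_rr, var = np.array(log_rr), np.array(var)

w = 1 / var                                        # fixed-effect (inverse-variance) weights
q = np.sum(w * (log_rr - np.average(log_rr, weights=w)) ** 2)
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100           # heterogeneity, %
tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))

w_re = 1 / (var + tau2)                            # random-effects weights
pooled_log_rr = np.average(log_rr, weights=w_re)
se = np.sqrt(1 / w_re.sum())
ci = np.exp(pooled_log_rr + np.array([-1, 1]) * norm.ppf(0.975) * se)
print(np.exp(pooled_log_rr), ci, i_squared)
```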

Results

Based on a preliminary search, we included 3 studies (303 journal articles about drug treatments). Articles written by authors with any financial conflicts of interest were more likely to recommend a company’s drug than articles by authors without conflicts of interest (risk ratio: 6.31 [95% confidence interval: 1.66 to 23.92]). Despite the inclusion of only 3 studies, the heterogeneity was substantial (I2: 66%). Full data analysis and further exploration of data are under way.

Conclusion

Our preliminary findings suggest that recommendations to use a particular drug are associated with financial conflicts of interest of the authors of the recommendation.

1Nordic Cochrane Centre, Rigshospitalet, Copenhagen, Denmark, al@cochrane.dk; 2Department of Infectious Diseases, Hvidovre University Hospital, Copenhagen, Denmark; 3ENT Department, Aarhus University Hospital, Copenhagen, Denmark; 4Department of Clinical Pharmacy, University of California San Francisco, San Francisco, CA, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

This study received no external funding. The Nordic Cochrane Centre provided in-house resources.

Back to Top

Risk Factors for Noncompletion of Cochrane Reviews

Richard G. McGee,1 Patrick J. Kelly,1 Jonathan C. Craig,1,2 Navelle S. Willis,2 Ann E. Jones,2 Gai Y. Higgins,2 Ruth L. Mitchell,2 Angela C. Webster1,2

Objective

To determine those factors associated with the timely completion of a Cochrane review.

Design

We collected the time from title registration to publication or withdrawal for all titles registered with the Cochrane Renal Group for the period January 1, 1997, to April 1, 2012. We conducted a competing risks analysis, with publication as the event of interest, withdrawal as the competing risk, and active registrations censored at June 30, 2012. We fitted competing risk proportional hazard models and calculated cumulative incidences. We investigated the following potential predictive factors: contact author’s first language was English, contact author had previously published a Cochrane review (any group), the continent of origin for the contact author (Africa, Asia-Pacific, Europe, North America, South America), number of listed authors, whether the review had received any internal or external funding, and the number of studies in the review.
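A minimal sketch, assuming hypothetical registration data, of the nonparametric cumulative incidence estimate that underlies a competing-risks analysis of this kind (publication as the event of interest, withdrawal as the competing risk, active titles censored).

```python
import numpy as np

def cumulative_incidence(times, events, event_of_interest=1):
    """Aalen-Johansen-style cumulative incidence for one event type in the presence
    of a competing risk; events coded 0 = censored (still active), 1 and 2 = event types."""
    times = np.asarray(times, float)
    events = np.asarray(events)
    cif, surv, out = 0.0, 1.0, []
    for t in np.unique(times):
        at_risk = np.sum(times >= t)
        d_any = np.sum((times == t) & (events != 0))
        d_interest = np.sum((times == t) & (events == event_of_interest))
        cif += surv * d_interest / at_risk      # increment uses event-free survival just before t
        surv *= 1 - d_any / at_risk             # update overall event-free survival
        out.append((float(t), cif))
    return out

# Hypothetical times (years from title registration) and outcomes:
# 1 = review published, 2 = title withdrawn, 0 = still active at June 30, 2012.
times = [1.2, 2.5, 3.0, 3.0, 4.1, 5.0, 5.5, 6.0, 7.2, 8.0]
events = [1, 2, 1, 0, 1, 2, 0, 1, 0, 0]
print(cumulative_incidence(times, events, event_of_interest=1))
```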

Results

We identified a total of 296 registered titles: 101 published, 131 active registrations, and 64 withdrawals (21 withdrawals occurred prior to protocol submission, 12 after protocol submission, 25 after protocol publication, and 6 after review submission). Figure 8 shows the probability of publication, withdrawal, or being an active registration over time since title registration, with approximately 50% of registrations still active at 5 years, a third published, and the remainder withdrawn. The univariate analysis indicated that English as a first language, a previous Cochrane publication, year of registration, and external funding were associated with publication (P<.05), with all but year increasing the chance of publication. In the multivariate analysis, the statistically significant factors associated with increased likelihood of publication were English as first language (hazard ratio [HR]=1.74, 95% CI 1.13-2.68; P=.01) and a previous Cochrane publication for the contact author (HR=2.80, 95% CI 1.02-7.66; P=.05), while later years of registration were associated with reduced likelihood of publication (2002-2006 HR=0.64, 95% CI 0.44-0.95; 2007-2012 HR=0.18, 95% CI 0.09-0.37; P<.001).

Figure 8. Probability of Being Active, Withdrawn, or Published Since Time From Registration

Conclusions

Despite a Cochrane model providing author support and training and a commitment to publish, our analysis shows that the successful publication of a review still depends on author- and review-related factors. This has implications for the evidence base and may be magnified for regular journals.

1Sydney School of Public Health, University of Sydney, Sydney, Australia, angela.webster@sydney.edu.au; 2Cochrane Renal Group, Sydney, Australia

Conflict of Interest Disclosures

None reported.

Back to Top

Do Declarative Titles Affect Readers’ Perceptions of Research Findings? A Randomized Controlled Trial

Elizabeth Wager,1 Douglas G. Altman,2 Iveta Simera,2 Tudor P. Toma3

Objective

Many journals prohibit the use of declarative titles that state study findings, yet a few journals encourage or even require them. We compared the effects of a declarative vs a descriptive title on readers’ perceptions about the strength of evidence in a research abstract describing a randomized trial.

Design

Study participants (medical or dental students or physicians) were presented with 2 abstracts describing studies of a fictitious treatment (Anticox) for a fictitious condition (Green’s syndrome). The first abstract (A1) described an uncontrolled, 10-patient case series; the second (A2) described a randomized placebo-controlled trial involving 48 patients. All participants rated identical A1 abstracts (with a descriptive title). Participants were randomized so that half rated a version of A2 with a descriptive title and half with a declarative title. For each abstract, participants indicated their agreement with the statement “Anticox is an effective treatment for pain in Green’s syndrome” using 100-mm visual analog scales (VAS) ranging from “disagree completely” to “agree completely.” The hypothesis was that the difference in perceptions of efficacy between A1 and A2 would be greater when A2 had a declarative title. The VAS scores were measured by an investigator unaware of group allocation.
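As an illustration only (not the study’s actual analysis code), the prespecified comparison could be run by computing each participant’s A2 minus A1 difference in VAS agreement and comparing the two randomized arms; the arrays below are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical VAS agreement scores (0-100 mm) for abstracts A1 and A2, per participant.
a1_descriptive_arm = np.array([35, 42, 28, 50, 39, 44])
a2_descriptive_arm = np.array([70, 65, 60, 75, 68, 71])
a1_declarative_arm = np.array([33, 45, 30, 48, 41, 37])
a2_declarative_arm = np.array([66, 70, 58, 72, 64, 69])

# Hypothesis: the A2 - A1 increase is larger when A2 carries a declarative title.
diff_descriptive = a2_descriptive_arm - a1_descriptive_arm
diff_declarative = a2_declarative_arm - a1_declarative_arm
t, p = ttest_ind(diff_declarative, diff_descriptive)   # 2-sided P value from t test
print(diff_declarative.mean() - diff_descriptive.mean(), p)
```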

Results

Data are available from 144 participants from 4 centers. There was no significant difference between the declarative and descriptive title groups’ views about study conclusions as expressed on VAS scales; in fact, the mean difference between A1 and A2 was smaller when A2 had a declarative title (Table 20).

Table 20. Visual Analog Scale (VAS) Scores for Descriptive and Declarative Titles

a2-sided P value from t-test.

Conclusions

This study found no evidence that use of a declarative title affected readers’ perceptions about study conclusions. Contrary to our hypothesis, there was no significant difference in the expressed levels of confidence in the conclusions of the randomized trial between participants who received the abstract with a declarative title and those who received the abstract with a descriptive title. This suggests that editors’ fears that declarative titles might influence readers’ judgments about study conclusions may be unfounded, at least in relation to reports of randomized trials; however, our study design may not have been sensitive to small effects.

1Sideview, Princes Risborough, UK; liz@sideview.demon.co.uk; 2EQUATOR Network Centre for Statistics in Medicine, Oxford, UK; 3University Hospital Lewisham, London, UK

Conflict of Interest Disclosures

None reported.

Funding/Support

This project was funded by Sideview.

Citations

Back to Top

Standardizing Citation of Bioresources in Scientific Publications

Anne Cambon-Thomsen,1 Paola De Castro,2 Federica Napolitani,2 Anna Maria Rossi,2 Alessia Calzolari,2 Laurence Mabile,1 Elena Bravo,2 as Members of the Bioresources Research Impact Factor (BRIF) Working Group3

Objective

An increasing portion of biomedical research relies on the use of bioresources (physical resources such as biobanks, databases, and bioinformatic tools). Most scientific publications cite not only textual or documentary resources but also bioresources. While citation of textual resources relies on clear editorial guidelines, citation rules for bioresources are still to be defined. This situation prevents their traceability, which would facilitate their use and foster their sharing. A key element for assessing their use and impact on research production is their systematic and standardized citation in journal articles. We report ongoing activities with biomedical editors toward standardizing the bioresource citation process, in the context of work on tools for recognizing and measuring the impact of bioresource use on knowledge production.

Design

The concept of a Bioresource Research Impact Factor (BRIF) was introduced in 2003, and an international working group was created (http://www.gen2phen.org/groups/brif-bioresource-impact-factor). Key elements are a scheme of unique identifiers for bioresources; promotion and adoption of standardized rules for bioresource citation within editorial guidelines and instructions to authors; and creation of a specific bioresource field in the publication metadata scheme. The present work relates to a pilot project developed by the BRIF journal editors subgroup on the last 2 aspects. The pilot consisted of contacting a sample of 90 biomedical journal editors (in 2 rounds: 50, then 40) with 4 questions to test their awareness of standardized citation for bioresources. The sample of editors was balanced between open-access and non-open-access journals indexed in the Web of Science and chosen to cover different Impact Factor levels.

Results

As of June 10, 2013, 22 of the contacted editors answered (12 from open-access journals and 10 from non-open-access journals). Thirteen editors have agreed to participate in further steps.

Conclusions

Editors’ attention to the standardization of bioresource citation and to implementing a relevant unique identifier is slowly growing. To pursue this goal, a workshop is planned to discuss practical proposals, highlight the various dimensions at stake, and try to reach consensus on key elements.

1INSERM and Université de Toulouse III, Toulouse, Cedex, France, anne.cambon-thomsen@univ-tlse3.fr; 2Istituto Superiore di Sanità, Rome, Italy; 3The BRIF (Bioresource Research Impact Factor) working group is an international collaboration led by Anne Cambon-Thomsen and managed by Laurence Mabile with an online forum (http://www.gen2phen.org/groups/brif-bio-resource-impact-factor).

Conflict of Interest Disclosures

None reported.

Funding/Support

The work presented in the abstract is part of the BRIF initiative. The BRIF initiative was supported by funds from the European Union’s Seventh Framework programme (FP7/2007-2013) through project “Biobanking and Biomolecular Resources Research Infrastructure” (BBMRI), grant agreement 212111, collaborative projects GEN2PHEN (Genotype-to-Phenotype Databases: A Holistic Solution), grant agreement 200754, and BioSHaRE-EU (Biobank Standardisation and Harmonisation for Research Excellence in the European Union), grant agreement 261433. The funders had no role in any step of this work.

Back to Top

Analyzing the Impact of Case Reports in a Medical Genetics Journal: Citations for Case Reports in the Impact Factor Window Compared to 3 to 5 Years Later

John C. Carey,1,2 Steve D. Weist,2 Margaret S. Weist1,2

Objective

We investigated the question of the citation history (a proxy for impact) of case reports in a medical genetics journal; we hypothesized that case reports were cited more often in the years after the Impact Factor (IF) window than during the 2-year period used to calculate IFs.

Design

Using Scopus (a bibliographic database containing article citations) as a source, we compiled the number of citations received in 2008-2009 for all 62 “Clinical Reports” published in the American Journal of Medical Genetics in the 12 issues from January through June 2007, and compared this figure with citations received in 2010-2012 (plus early 2013). We also assessed the types and abstracts of papers that cited each report, noting confirmatory observations and ensuing papers that identified a causative gene for the condition described in the report.

Results

The mean number of total citations for all 62 papers over almost 6 years (2007-2013) was 8.3 (515 cites) with a range of 0 (1 paper) to 49 (1). Ten of the 62 reports were followed by a confirmatory observation, and 5 preceded gene identification. The mean number of cites in the 2 years of the IF window was 2.8 (174 cites) and 5.14 (319) for 2010 to the present. Of the 62 papers, 57 were cited more often in the post-IF window period than in 2008-2009.
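For reference (simple arithmetic on the totals above, not text from the abstract), the per-window means and their ratio, which supports the “almost twice” comparison drawn in the conclusion, are:

\[
\frac{174}{62}\approx 2.8\ \text{(IF window, 2008-2009)},\qquad
\frac{319}{62}\approx 5.1\ \text{(2010 onward)},\qquad
\frac{5.1}{2.8}\approx 1.8.
\]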

Conclusions

Case reports were cited almost twice as much in the 3- to 5-year period after publication compared with the 2-year IF window after publication. This may seem obvious because of the additional year (2008-2009 vs 2010-2012), but it is the 2-year window that is searched to calculate the IF. Use of the IF underestimates the long-term impact of case reports in terms of citation history. These data may not be generalizable to all disciplines but do suggest that, for genetics reports, the 5-year IF may be a better indicator of citation history than the more widely used 2-year IF.

1University of Utah School of Medicine, Department of Pediatrics, Division of Medical Genetics, Salt Lake City, Utah, USA; 2American Journal of Medical Genetics, Salt Lake City, Utah, USA, john.carey@hsc.utah.edu

Conflict of Interest Disclosures

None reported.

Funding/Support

The journal office and staff are funded from a contract with the University of Utah and Wiley-Blackwell, publisher of the American Journal of Medical Genetics.

Back to Top

Are Online Article Views in the First Year After Publication Correlated to Citations?

Deborah Levine,1 David F. Kallmes,2 Elkan Halpern,3 Herbert Y. Kressel4

Objective

To assess if our biomedical imaging journal citations were correlated with online views in the first year after article publication.

Design

We retrospectively assessed manuscripts published in Radiology from February 1, 2008, to July 1, 2011. Monthly online article view data (HTML and PDF, excluding abstract views) were obtained from the journal online host. Citation data were obtained from the Science Citation Index. Monthly data (normalized for the first 12 months after print publication) were assessed. Subanalysis included (1) limiting studies to those published in the first 24 months of the study period (to allow more time for citations to accumulate) and (2) section analysis for those sections with >20 manuscripts. An additional descriptive analysis of the top 3% of citations and online views was performed.
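A minimal sketch (not the journal’s actual analysis) of the monthly view-citation correlation described above, using a hypothetical article-level table.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical article-level data: online views in selected months after print
# publication, plus total citations from the citation index.
articles = pd.DataFrame({
    "views_month_1":  [310, 150, 820, 95, 410, 230, 505],
    "views_month_12": [40, 22, 130, 10, 55, 31, 60],
    "citations":      [12, 3, 41, 1, 18, 7, 22],
})

# Correlation between views in a given month and eventual citations,
# repeated for each of the 12 normalized months (2 shown here).
for month_col in ["views_month_1", "views_month_12"]:
    r, p = pearsonr(articles[month_col], articles["citations"])
    print(month_col, round(r, 2), p)
```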

Results

A total of 1,389 articles were included. Article online views were highest in the first month, ranging from 0 to 3,604 (mean, 244±298), with cumulative 12-month totals from 0 to 22,030 (mean, 802±1,153). Citations ranged from 0 to 296 (mean, 14±2). Monthly correlation coefficients for views vs citations for the entire study group ranged from .15 to .19 (P<.0001). Correlations were slightly higher but still weak for the subanalysis of the first 24 months (.16-.21, n=813 articles). There was no obvious time trend for correlations. The correlation coefficient for the first month of downloads was a reasonable metric for the range of correlations within subject area. The relationship of total citations to total online views depended on the section (P<.0001, Table 21). Relatively high, positive correlations between downloads and citations were noted for Reviews and for Molecular Imaging, while low correlations were noted for Controversies. State-of-the-art and review articles (28/49, 57%) were the most likely categories of articles to have online views in the top 3%. H1N1 influenza, pulmonary embolism, calcific tendonitis, liver lesions, and limiting radiation dose were original report topics that were highly viewed. The top 3% of citations were predominantly from original reports, with cardiac imaging being relatively overrepresented and radiation dose being a frequent topic.

Table 21. Citations and Article Online View Data by Section of Journal

aTwo articles in the top 3% of online views were from sections with fewer than 20 articles and thus are not in the table: head and neck (n=1) and medical physics (n=1).

Conclusions

Overall, downloads significantly correlate with citations, but the association is weak. Because correlations vary by subject area, subanalysis by content area is important to better understand these potential correlations.

1Radiology and Department of Radiology, Beth Israel Deaconess Medical Center, Boston, MA, USA, dlevine@rsna.org; 2Mayo Clinic, Department of Radiology, Rochester, MN, USA; 3Massachusetts General Hospital, Institute for Technology Assessment, Boston, MA, USA; 4Radiology Editorial Office, Radiological Society of North America, Boston, MA, USA

Conflict of Interest Disclosures

None reported. Other disclosures that do not pertain to this research include the following: Herbert Kressel reports receiving royalties from Medrad/Bayer-Shering for Endorectal Coil and Elkan Halpern reports being a consultant for Hologic and providing expert testimony for Ameritox.

Back to Top

Is a Uniform Name Format Necessary for Accurate Search Results of Chinese Authors?

SUN Jing, LIU Huan, GAO Jian, QIAN Shou-chu

Objective

Duplicate names are common for Chinese authors because of the use of 2- and 3-word elements in names (eg, WANG Chen and LI Jian-jun), which are expressed as 2 and 3 Chinese characters in the Chinese language. However, there are no uniform requirements for Chinese name format in different medical journals, which makes searching for articles by Chinese authors confusing.

Design

We selected 3 authors with 2-word names (2 common and 1 rare) and 4 authors with 3-word names (2 common and 2 rare). These 7 authors provided lists of their articles published in 2012 (87 articles published in 51 different journals). We searched PubMed Single Citation Matcher using the date 2012 and each author’s name in different formats: author’s complete surname and given name in full spelling with a hyphen in the given name for the 3-element names, complete surname and given name in full spelling without hyphen, surname followed by all initials for the given name, and surname followed by the first initial for those with hyphenated given names. We compared the search results with the lists of publications provided by the authors.
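A minimal sketch of this kind of comparison using Biopython’s Entrez utilities; the e-mail address, name variants, and reference PMID set are hypothetical, and because PubMed Single Citation Matcher has no programmatic interface, an equivalent author-field search is used instead.

```python
from Bio import Entrez

Entrez.email = "someone@example.org"   # placeholder; NCBI requires a contact address

# Hypothetical reference set of 2012 PMIDs supplied by one author.
author_supplied_pmids = {"22222221", "22222222", "22222223"}

name_formats = [
    "Li Jian-jun[Author]",   # full spelling, hyphenated given name
    "Li Jianjun[Author]",    # full spelling, no hyphen
    "Li JJ[Author]",         # surname plus all initials
    "Li J[Author]",          # surname plus first initial only
]

for name in name_formats:
    handle = Entrez.esearch(db="pubmed", term=f"{name} AND 2012[dp]", retmax=500)
    record = Entrez.read(handle)
    handle.close()
    retrieved = set(record["IdList"])
    matched = retrieved & author_supplied_pmids    # true positives
    missed = author_supplied_pmids - retrieved     # false negatives
    print(name, len(retrieved), len(matched), len(missed))
```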

Results

Generally, the PubMed search results differed from the lists provided by the authors (Table 22). The retrieval results using full spelling with hyphen (eg, LI Jian-jun) were more likely to match the lists of publications provided by the authors, whereas those using surname followed by the first initial (eg, WANG C) were most likely to be inaccurate. Some searches returned far too many publications, from 2- to 400-fold more, with many false-positives. Analysis of all the false-positives and false-negatives revealed further problems. Preliminary results show that retrieval accuracy was associated with the format of the author’s full name as published in the journals.

Table 22. Search Retrieval Results Using Different Name Formats

Conclusions

Search results in PubMed for authors with Chinese names are inaccurate because of different author name formats used by journals and common Chinese surnames. For Chinese names with 3 elements, full spelling of the surname and given name with a hyphen for the given name is more suitable for Chinese authors, and journal editors should consider following a standard format for manuscripts from Chinese authors.

Chinese Medical Journal, Chinese Medical Association, Beijing, China, sunj@cma.org

Conflict of Interest Disclosures

None reported.

Back to Top

Citable and Noncitable Items and the Impact Factor

Tobias Opthof,1,2 Loet Leydesdorff,3 Ruben Coronel1

Objective

The Impact Factor is a mathematical monstrum because citations to all items in the numerator are related to a limited set of “citable items” (the number of reviews and original articles only) in the denominator. Thus, Impact Factors are artificially higher than if they were restricted to citations to citable items only. Evidently, noncitable items (abstracts, letters, editorials) can also be cited, and the term is a misnomer. The relation between citation of citable vs noncitable items has been studied only from the perspective of the cited items, not from the citing items. This distinction is relevant because an important quality indicator (the Impact Factor) can thus be manipulated.
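For clarity (a standard formulation, not text from the abstract), the published Impact Factor for year y and the all-items variant the authors recalculate can be written as:

\[
\mathrm{IF}_y=\frac{C_y(\text{all items published in } y-1,\,y-2)}{N_{\text{citable}}(y-1,\,y-2)},
\qquad
\mathrm{IF}_y^{\text{all}}=\frac{C_y(\text{all items published in } y-1,\,y-2)}{N_{\text{all}}(y-1,\,y-2)},
\]

where \(C_y(\cdot)\) counts citations received in year \(y\), \(N_{\text{citable}}\) counts only original articles and reviews, and \(N_{\text{all}}\) also includes abstracts, letters, and editorials.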

Design

We recalculated the Impact Factor of 6 leading cardiovascular journals in 2009 for separate categories of cited papers and then analyzed the flow of citations during 2011 to Circulation Research papers published in 2009, taking into account the contribution of citable and noncitable items for both the citing and the cited papers. All data were obtained in March 2012 from the Web of Science.

Results

The ranking of the 6 cardiovascular journals based on the standard, published Impact Factor changed from 1-2-3-4-5-6 to 5-4-2-3-1-6 when based on citations to all items, with noncitable items added to the denominator. The 2,820 citations in 2011 of Circulation Research papers published in 2009 were broken down as follows: 90.6% from citable to citable items, 3.3% from noncitable to citable, 5.8% from citable to noncitable, and 0.3% from noncitable to noncitable.

Conclusions

Ranking of journals based on the Impact Factor is sensitive to the categories of papers taken into account. For the top-ranking journal in this category (Circulation Research), citations other than those from citable to citable items amount to approximately 10%.

1Department of Experimental and Clinical Cardiology, Academic Medical Center, Amsterdam, the Netherlands, t.opthof@inter.nl.net; 2Department of Medical Physiology, University Medical Center Utrecht, the Netherlands; 3Amsterdam School of Communication Research (ASCoR), University of Amsterdam, the Netherlands

Conflict of Interest Disclosures

None reported.

Dissemination

Back to Top

Attitude Toward Publications in Electronic Journals: A Glimpse Into the Minds of the Decision Makers in Universities

Muhammad Aslam

Objectives

The growth of electronic journals (EJs) is reflected by the increasing number of EJ articles being presented during appointments, tenure tracking, and promotions. Some decision makers have an attitude of suspicion toward articles published even in high-profile EJs, leading to unnecessary delays in decision making. This study was carried out to evaluate the attitude of decision makers in universities of the Eastern Mediterranean region toward articles published in EJs.

Design

The survey was carried out in December 2012. The decision makers were defined as the persons directly involved in decisions about appointments, tenure tracking, and promotions of faculty in their respective universities. A questionnaire was circulated via e-mail among 290 decision makers from 11 universities in the Eastern Mediterranean region through their vice rectors. The questions probed the knowledge, attitude, and practice of the participants toward articles published in EJs.

Results

There were 205 responders. Of them, 184 (89.7%) were older than 55 years. Forty-two (20.48%) were associated with a journal (print or electronic), while 7 (3.41%) were on editorial boards of EJs. A total of 164 (80%) responders were “cautious about articles in EJs” while making selection, promotion, or tenure tracking decisions. Reasons given for caution were “No particular reason” 27 (13.17%), “EJs do not observe standard peer review protocols” 101 (49.26%), “Getting an article accepted in EJs is easy” 117 (57.07%), “No quality control system” 79 (38.5%), “No recognition/indexation” 56 (27.31%), “Short life of journal” 132 (64.39%), “Less visibility (circulation)” 114 (55.60%), “Editorial boards of EJs comprise unrecognized people” 99 (48.29%), “Authority of reviewers and editorial board cannot be verified” 117 (57.07%), and “EJs are not associated with any famous organization/society” 124 (60.48%). None of the respondents wanted to reject articles published in EJs; however, all wanted to keep the final decision pending subject to verification of the credentials of the EJs. Of the respondents, 132 (64.39%) agreed “to accept articles printed in EJs with Impact factor without further scrutiny,” and 101 (53.65%) felt that “EJs indexed with recognized indices may be accepted without further scrutiny.”

Conclusions

There is a significant prejudice among decision makers of the universities of the Eastern Mediterranean region about articles published in EJs. This is mainly due to false beliefs and lack of knowledge about quality indicators of electronic publications.

University of Health Sciences, Khayaban-e-Jamia Punjab, Lahore, Pakistan, professormaslam@yahoo.com

Conflict of Interest Disclosures

None reported.

Back to Top

Publishing a Bilingual Journal: The Example of Deutsches Ärzteblatt International

Christopher Baethge

Objective

Most important medical journals publish in English, but the majority of all periodicals are published in languages other than English. While those journals serve an important purpose for regional communities, they risk scientific marginalization. An increasingly adopted solution to this dilemma is to publish bilingually, in the regional language and in English. Since 2008, at the official periodical of the German Medical Association, Deutsches Ärzteblatt, all scientific articles (original and review papers, editorials, letters) have appeared in English as well as German and are published via Deutsches Ärzteblatt International, an open-access online journal. We compare key journal characteristics before and after the launch of the international version.

Design

The data presented are part of the routine benchmarking process at Deutsches Ärzteblatt International.

Results

Bilingual publication of Deutsches Ärzteblatt International is cost-intensive (translation cost per page: 65.8 USD) and fraught with challenges (eg, translation quality control and citation rules for both article versions). However, it is an opportunity to improve: Deutsches Ärzteblatt International became indexed in all major databases and received an Impact Factor. Submissions of original and review articles rose from 17.3 per month before to 23.6 after bilingual publication started (P<.0001) and averaged 28 per month in 2012 (published reviews and original articles per year: approximately 100; Table 23). Scientific citations increased, and of all external citations in 2012, 65.7% (318/484) originated from English articles and 57.6% (279/484) from authors in non-German-speaking countries. Citations in the lay press, as monitored in 5 leading German newspapers and magazines, increased from 12.7 per year before to 33.0 after (P=.0058). Currently, the level of German national press citations, probably owing to the regional importance of some of the articles, is similar to that of leading international medical journals (New England Journal of Medicine: 52.2/year since 2008; Lancet: 37.8; JAMA: 29.0; BMJ: 35). In an anonymous survey (return rate: 66%, 233/353), almost all authors were satisfied or very satisfied with the translation (98.6%, 230/233), emphasizing increased international visibility as the most important advantage of publishing in a bilingual journal (63%, 147/233).

Table 23. Deutsches Ärzteblatt International: Parameters of Journal Success Before and After Initiation of Bilingual Publication (2008)

at-test for difference between number of manuscripts per month submitted before and after bilingual publication initiation in January 2008 (2004-2007 vs 2008-2012): P<.0001; t=5.92, df=100.18.

bMann-Whitney U test for difference between lay press citations per year before and after bilingual publication initiation in January 2008 (2001-2007 vs 2008-2012): P=.0058, z=2.76.
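A minimal sketch (with hypothetical counts, not the journal’s data) of the two tests described in the footnotes above: Welch’s t test on monthly submission counts and a Mann-Whitney U test on yearly lay-press citation counts, before vs after 2008.

```python
from scipy.stats import ttest_ind, mannwhitneyu

# Hypothetical monthly submission counts before (2004-2007) and after (2008-2012)
# the start of bilingual publication.
submissions_before = [15, 18, 16, 19, 17, 20, 14, 18, 16, 17]
submissions_after = [22, 25, 21, 27, 24, 23, 26, 28, 30, 25]
t, p_t = ttest_ind(submissions_after, submissions_before, equal_var=False)  # Welch's t test

# Hypothetical yearly lay-press citation counts before (2001-2007) and after (2008-2012).
lay_before = [10, 14, 9, 13, 15, 12, 16]
lay_after = [28, 35, 31, 40, 30]
u, p_u = mannwhitneyu(lay_after, lay_before, alternative="two-sided")
print(p_t, p_u)
```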

Conclusion

Bilingual publication was decisive in improving international recognition and, in turn, the number and quality of submitted manuscripts.

Deutsches Ärzteblatt International and Department of Psychiatry and Psychotherapy, University of Cologne Medical School, Cologne, Germany, baethge@aerzteblatt.de

Conflict of Interest Disclosures

The author is the editor of Deutsches Ärzteblatt International and has initiated bilingual publication.

Back to Top

Ten Years’ Experience Teaching Health Professionals to Write and to Publish Articles

Esteve Fernández,1 Ana M. García,2 Elisabet Serés,3 Fèlix Bosch3,4

Objective

Journal articles play a key role in disseminating biomedical information. Nevertheless, many professionals lack training in scientific writing. To counteract this shortcoming, we organize face-to-face training seminars on scientific writing for health professionals. This study describes our 10-year experience (2004-2013) in these 2-day seminars and includes data about participants’ satisfaction and their views on their learning experience.

Design

We obtained the data using 2 questionnaires completed anonymously and voluntarily by trainees (undergraduate, graduate, and postgraduate, mainly in medicine, biology, and pharmacy). The first questionnaire was completed immediately after each seminar and reflected participants’ satisfaction with the classroom sessions. The second, a follow-up questionnaire, was sent by e-mail a mean of 2 years later to obtain views regarding the training seminars and their applicability. This questionnaire included questions about knowledge, skills, and attitudes on scientific writing before and after attending the seminar. Responses in both questionnaires were based on a Likert-type scale from 0 to 4.

Results

A total of 683 students (72% women) participated in the 27 seminars; 80% completed the first questionnaire and considered the experience very positive (median score 4). Thirty percent of the trainees responded to the follow-up questionnaire. Most of them (74%) reported they had participated in drafting a scientific paper after the training seminar. They also reported an improvement in their knowledge (mean 1.5; 95% CI: 1.3-1.7), attitudes (mean 0.8; 95% CI: 0.7-0.9), and skills (mean 0.8; 95% CI: 0.7-0.9) related to writing and publishing scientific papers. Likewise, they indicated the need for training in these issues at both the undergraduate (mean 3.1; 95% CI: 2.9-3.3) and graduate levels (mean 3.9; 95% CI: 3.8-4.0).

Conclusions

The format of the training seminars satisfied the needs of the trainees and facilitated improvement in their scientific writing and publishing skills. Respondents unanimously agreed there is a need for training on scientific writing for health professionals. Interventions like these could be especially helpful in countries with less tradition in this field.

1Universitat de Barcelona and Institut Català d’Oncologia, L’Hospitalet de Llobregat, Spain, efernandez@iconcologia.net; 2Universitat de València and ISTAS, València, Spain; 3Esteve Foundation, Barcelona, Spain; 4Department of Experimental and Health Sciences, Universitat Pompeu Fabra, Barcelona, Spain

Conflict of Interest Disclosures

Esteve Fernández and Ana García were the teachers of the training seminars and they received an honorarium for them. Félix Bosch and Elisabet Serés work at the Esteve Foundation, a nonprofit scientific and educational organization, which organized and sponsored these training seminars.

Back to Top

Google Scholar, Ovid, and PubMed Underestimate Orthodontic RCT Numbers Compared to Hand Searching

Rhian C. Fitzgerald,1 David T. Sawbridge,2 Jayne E. Harrison1

Objectives

To compare the accuracy of hand searching, as per Cochrane Collaboration criteria, vs electronic searching for identifying orthodontic randomized controlled trials (RCTs) published in 4 key orthodontic journals between January 1, 2001, and December 31, 2010.

Design

One author (R.C.F.) satisfactorily completed the Cochrane Oral Health Group’s hand searching test search. All issues of the American Journal of Orthodontics and Dentofacial Orthopaedics (AJODO), Angle Orthodontist (AO), European Journal of Orthodontics (EJO), and Journal of Orthodontics (JO) published between January 1, 2001, and December 31, 2010, were hand searched to identify all RCTs published in this time frame. The various electronic search strategies assessed the same journals over the same time period. Searches were performed using text words. MEDLINE was searched via Ovid and via PubMed using the term “orthodontic” limited to the publication type RCT. A free-text PubMed search using the terms “orthodontic” AND “random*” was also performed. Google Scholar was searched using “orthodontic” AND “random*.” The numbers of RCTs identified using each method were compared. Intraexaminer and interexaminer reliability was assessed.

Results

Hand searching identified 218 RCTs, fulfilling Cochrane criteria, published in these journals between 2001 and 2010. A total of 52 RCTs (23.9%) were found by all 3 MEDLINE searches; however, 38 RCTs (17%) were not identified by any of them. The free-text PubMed search using the terms orthodontic AND random* was the most sensitive, missing 45 RCTs (20.6%). Ovid was significantly less sensitive than PubMed (OR 8.43, 95% CI 5.48, 12.97), missing 157 RCTs (72.0%), while PubMed missed 51 (23.4%). Google Scholar located 99 RCTs (45.4%); of these, 8 were not located by the 3 MEDLINE search methods. However, Google Scholar retrieved a very large number of records overall (approximately 7,141 hits). The lowest kappa score was 0.98, indicating excellent intraexaminer and interexaminer reliability.
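The reported odds ratio comparing Ovid’s and PubMed’s miss rates can be reproduced from the counts above (157 of 218 hand-searched RCTs missed by Ovid vs 51 of 218 missed by PubMed):

\[
\mathrm{OR}=\frac{157/61}{51/167}=\frac{157\times 167}{61\times 51}\approx 8.43,
\qquad
95\%\ \mathrm{CI}:\ \exp\!\left(\ln 8.43 \pm 1.96\sqrt{\tfrac{1}{157}+\tfrac{1}{61}+\tfrac{1}{51}+\tfrac{1}{167}}\right)\approx (5.48,\ 12.97).
\]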

Conclusion

Hand searching was much more sensitive than electronic searching at identifying RCTs published in key orthodontic journals. There was a significant underestimation of RCTs using Ovid, PubMed, and Google Scholar. Combining the 3 methods of searching MEDLINE and Google Scholar via text words underestimated the number of RCTs by 30 (13.8%). This may bias systematic reviews relying on simple electronic search methods to identify eligible papers.

1Orthodontic Department, University of Liverpool, Liverpool, UK, rhianfitz@yahoo.co.uk; 2University of Liverpool, Liverpool, UK

Conflict of Interest Disclosures

None reported.

Ethical Issues and Misconduct

Back to Top

Ethics in Practice: Improvements in Ethical Policies and Practices in Wiley Health Science Journals Following a 2-Stage Audit Cycle

Chris Graf,1 Alice Meadows,2 Allen Stevens,3 Elizabeth Wager4

Objective

We undertook a descriptive study of publication ethics policies and processes at European health science journals published by Wiley. We used standards identified in the Committee on Publication Ethics (COPE) audit tool (http://publicationethics.org/files/audityourjournal.pdf) and a cycle of audit, feedback, and re-audit to initiate and measure change.

Design

In phase 1, October 2009, journal publishing managers (JPMs) liaised with editors from 115 journals to answer 21 questions about their journals’ policies. The JPMs reported the data. In late 2010, we compared responses with the best practice standards identified by COPE and provided individual feedback for each JPM to share with his or her editors. We congratulated journals that followed best practice. We provided advice where improvements were indicated. We illustrated how each journal’s responses compared with aggregate responses from all journals. In phase 2, starting mid-2011, we repeated the process.

Results

Eighty-two journals completed phase 1 and phase 2 (Table 24). The 33 journals that did not complete phase 2 do not share common characteristics. In phase 1, more than 60% of journals reported best practice in response to 3/21 questions related to redundant publications (74%), correcting errors (68%), and authors’ conflicts of interest (81%). In phase 2, more than 60% reported best practice in response to 7/21 questions: raising concerns with authors (66%), asking reviewers to comment on research ethics (60%), checking for redundancy (88%), correcting errors (85%), issuing retractions (68%), labeling and linking retractions (67%), and authors’ conflicts of interest (87%). Between phases 1 and 2, changes of less than 5% were recorded in 8/21 questions: concerns raised with institutions, COPE flowcharts used, cases referred to publishers or COPE, corrections published about authors’ conflicts, systems for editors’ papers, author conflicts disclosed, and appeal mechanism.

Table 24. Results From Phase 1 (115 Journals) and Phase 2 (82 Journals)

Conclusions

Our 2-stage audit cycle delivered modest but real improvements in journal policies and practices in a number of important areas. It enabled us to identify where further attention is required (eg, describing funding sources, prompting reviewers to comment on research ethics). It systematically examined the effectiveness of audit as a tool to initiate change. Some editors appear to be unaware of COPE despite being members. A next step might be to address this.

1John Wiley & Sons, Richmond, Victoria, Australia, cgraf@wiley.com; 2Wiley, Malden, MA, USA; 3Wiley, Oxford, UK; 4Sideview, Princes Risborough, Buckinghamshire, UK

Conflict of Interest Disclosures

Chris Graf, Alice Meadows, and Allen Stevens work for Wiley and as such benefit from the company’s performance. Chris Graf holds the unpaid position of treasurer for COPE. Elizabeth Wager is a freelance trainer and publications consultant and received consultancy fees from Wiley for this project; she also created the COPE audit tool as part of her unpaid work for COPE.

Funding/Support

Wiley provided time and encouragement for Chris Graf, Alice Meadows, and Allen Stevens to undertake this work and provided consultancy fees for Elizabeth Wager.

Back to Top

Scientific Misconduct in South Asian Medical Journals: Results of an Online Survey of Editors

Sandhya Srinivasan,1 Sanjay A. Pai,1,2 Rakesh Aggarwal,3 Peush Sahni1,2,4

Objectives

Data on scientific misconduct in South Asian medical journals are limited. We conducted a survey of medical journal editors in the region to determine the frequency and types of scientific misconduct and preventive and remedial steps being taken to address the issue.

Design

We invited, by e-mail, the editors of English-language medical journals published in South Asia (Afghanistan, Bangladesh, Bhutan, India, Maldives, Nepal, Pakistan, and Sri Lanka), identified using the NCBI and EMBASE journal databases (n=123), to complete an online survey.

Results

Of 46 editors who responded, 43 reported encountering scientific misconduct, most often 1 to 3 times annually. Table 25 shows the common types of misconduct reported. Forty editors reported detecting misconduct before publication; 30 also reported detecting it after publication. Of the 40 editors who wrote to authors about the allegations, 33 received replies; 11 found these to be unsatisfactory. Twenty-eight editors informed authors’ institutions of the allegations, and 12 at least sometimes received satisfactory replies. Of the 25 editors who identified scientific misconduct after publication, 18 retracted the papers. When authors withdrew their paper following allegations of misconduct, most editors allowed this and took no further action. Twenty-two editors indicated that their journals lacked a defined mechanism to deal with misconduct before publication; all 22 felt that such a policy may be useful.

Table 25. Editors Reporting Each Type of Scientific Misconduct (n=46)

Conclusions

Our survey indicates that medical journal editors in South Asia frequently encounter scientific misconduct of several different types. Journal editors did often write to the authors or their institutions about the misconduct, though this did not always elicit satisfactory responses. They often let authors off lightly by not retracting the paper or by allowing it to be withdrawn without any other action. Increasing awareness among South Asian journal editors about dealing with scientific misconduct, and forming a collaborative network, may allow a concerted approach to the problem.

1Indian Journal of Medical Ethics, Mumbai, India; 2National Medical Journal of India, Bangalore, Karnataka, India; 3Department of Gastroenterology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India; 4Department of Gastrointestinal Surgery, All India Institute of Medical Sciences, New Delhi, India, peush_sahni@hotmail.com

Conflict of Interest Disclosures

None reported.

Back to Top

Authors’ Awareness of Publication Ethics: An International Survey

Sara Schroter,1 Jason Roberts,2 Elizabeth Loder,1,3,4 Donald B. Penzien,5 Sarah Davies,1 Timothy T. Houle6

Objective

Knowledge of publication ethics is an integral element of research and clinical training. However, formal training in publication ethics often appears to be an afterthought. We conducted a survey to gauge perceptions of authors of research submissions to medical journals regarding the ethical aspects of selected medical publishing scenarios.

Design

Corresponding authors of research submissions to 20 biomedical journals were invited to take part. Respondents were asked to rate the perceived level of unethical behavior (0 to 10) presented in 5 vignettes containing key variables that were experimentally manipulated on entry to the survey; 4 of the 5 vignettes described unethical behavior. Respondents were also asked to rate their perceived level of knowledge of 7 ethical topics related to publishing (including prior publication, exclusion of an author, inappropriate inclusion of an author, self-plagiarism, and undeclared conflicts of interest).

Results

A total of 3,668/10,582 (35%) responded. Having an article triaged vs reviewed, the time to decision, and the type of submitted article were not associated with response rate. We observed differences in response rates by country: the top 3 were New Zealand (52%), Norway (46%), and Sweden (45%), and the lowest 3 were Korea (18%), unreported country (26%), and Finland (27%). Respondents worked in 100 countries and reported varying levels of publishing experience. Seventy-four percent (n=2,700) had received ethical training from a mentor, 46% (n=1,677) a partial course, 31% (n=1,130) a full course, 60% (n=2,206) an online course, and 6% (n=221) none; only a small proportion rated their training as excellent. There was a full 0- to 10-point range in ratings of the extent of unethical behavior within each vignette. Among respondents, 10% to 24% rated the behavior in the vignettes as entirely ethical. Ratings were statistically affected by the experimental manipulations in all vignettes, suggesting respondents made judgments based on the context of behaviors. Differences in perceived ethical knowledge were observed across countries.

Conclusions

There was great variation in responses to all vignettes and levels of perceived knowledge about publication ethics, implying diversity in the teaching of publication ethics. If efforts to introduce uniformly applicable ethical standards are to succeed, formal instruction and provision of universally recognized training resources are required.

1BMJ, London, UK, sschroter@bmj.com; 2Headache: The Journal of Head and Face Pain, Plymouth, MA, USA; 3Division of Headache and Pain, Department of Neurology, Brigham and Women’s/Faulkner Hospitals, Boston, MA, USA; 4Harvard Medical School, Boston, MA, USA; 5Head Pain Center, Department of Psychiatry and Human Behavior, University of Mississippi Medical Center, Jackson, MS, USA; 6Departments of Anesthesiology and Neurology, Wake Forest University School of Medicine, Winston-Salem, NC, USA

Conflict of Interest Disclosures

Sara Schroter is an employee of the BMJ but regularly undertakes research into the publishing process. Elizabeth Loder receives salary support from the BMJ to her institution (Brigham and Women’s Hospital) for services as a research editor. Sarah Davies was an employee of the BMJ but now works for Synthes. Donald Penzien and Timothy Houle receive research support from Merck & Co Pharmaceuticals. Jason Roberts reported no conflicts of interest.

Funding/Support

This study was funded by a research grant from the Committee on Publication Ethics (COPE). The funder played no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Editorial and Peer Review Process

Back to Top

Facilitating Rapid Peer Review: A Study of PLOS ONE Reviewer Timing

Rachel Bernstein,1 Krista Hoff,1 Emma Dupin,1 Damian Pattinson2

Objective

Rapid review and publication are crucial for accelerating scientific progress, and the time required to solicit reviews and wait for their return can cause significant delays. We postulate that the processes of finding reviewers and the subsequent time for the individual reviews to be completed may not be independent, with potential implications for the overall editorial time. Reviewers may be more likely to agree to an assignment if offered a long deadline, speeding the first step, but this long deadline may delay the second step. Alternatively, a short deadline could accelerate the second step, but reviewers may be less likely to agree, lengthening the first. Our study investigates this relationship using a robust data set from PLOS ONE, the world’s largest peer-reviewed journal, to determine the ideal deadline to balance these competing needs.

Design

We collected data for 223,266 reviewer invitations sent by PLOS ONE academic editors from 2010 to 2012. The default deadline was 10 or 14 days (it changed during the test period) and was subject to adjustment by academic editors. In all cases, reviewer reminders were sent before and after the deadline. We measured the correlation between the reviewer deadline and 3 related metrics: reviewer invitation success rate (reviewers agreed per invitations sent); individual review submission time; and overall editorial decision time, which includes both the time to find reviewers and the time to wait for review completion.
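
As a rough illustration of the 3 metrics described above, the sketch below (a hypothetical per-invitation table, not PLOS ONE’s actual data or pipeline; all column names are invented) groups invitations by deadline:

```python
# Hypothetical sketch: deriving the 3 deadline-related metrics from a per-invitation table.
import pandas as pd

invitations = pd.DataFrame({
    "deadline_days": [10, 10, 14, 14, 21, 21],      # deadline offered to the reviewer
    "agreed":        [1, 0, 1, 1, 0, 1],            # 1 = reviewer accepted the invitation
    "review_days":   [10, None, 15, 13, None, 22],  # days until the completed review arrived
    "decision_days": [35, 35, 41, 41, 50, 50],      # overall editorial decision time (manuscript level)
})

by_deadline = invitations.groupby("deadline_days").agg(
    success_rate=("agreed", "mean"),          # reviewers agreed per invitations sent
    review_time=("review_days", "mean"),      # mean individual review submission time
    decision_time=("decision_days", "mean"),  # mean overall editorial decision time
)
print(by_deadline)
```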

Results

The most common reviewer deadlines were 10, 14, 15, 20, and 21 days. On average, reviewers returned their completed review on the deadline day, regardless of its length. Increasing the deadline slightly increased the likelihood of an individual reviewer agreeing (42% at 10 days to 46% at 21 days), but total editorial time increased in parallel with individual review submission time (Figure 9).

Figure 9. Reviewer Deadlines, Task Completion Time, and Percentage of Successful Review Invitations at PLOS ONE

Conclusions

The small increase in successful reviewer invitations achieved by a longer deadline does not accelerate the review process, as shown by the direct correlation between individual review submission time and overall editorial decision time. Therefore, a shorter deadline accelerates the review process, and the PLOS ONE default deadline is now 10 days.

1Public Library of Science, San Francisco, CA, USA, rbernstein@plos.org; 2Public Library of Science, Cambridge, UK

Conflict of Interest Disclosures

None reported.

Funding/Support

The authors are PLOS employees. Work was conducted during salaried time. PLOS supported this study to inform decisions about reviewer deadlines to optimize the peer review process.

Back to Top

Assessment of the Quality of Open Peer Review Using the Draft COPE Guidance

Hajira Dambha,1 Roger Jones2

Background

Biomedical journals have a responsibility to evaluate and maintain the quality of the peer review process and to provide feedback and guidance to their reviewers. The Committee on Publication Ethics (COPE) has recently produced draft guidance on the content, standards, and principles of peer review.

Objective

To determine how the COPE guidance may be used in the assessment of the ethical components of peer review quality.

Design

We devised an abbreviated checklist from the COPE guidance. This included 14 items, some weighted, with a maximum possible total score of 34. Two raters independently assessed the ethical quality of peer reviews using the checklist, without reference to the accompanying manuscripts. One hundred twenty peer reviews were selected from original papers submitted to the British Journal of General Practice (BJGP), which operates an open peer review system. The reviews covered qualitative studies, trials, and other quantitative designs and were drawn from manuscripts that were accepted as well as manuscripts that were declined publication.

Results

All peer reviews scored well on the abbreviated COPE checklist, with a mean total score of 24 (95% confidence interval, 26.20-26.97) with good interrater concordance and with no difference between accepted and rejected papers. Maximum scores were achieved on items related to the absence of critical, derogatory, or personal comments. Many items on the checklist were not particularly discriminatory: the full scoring ranges were not used by the raters, and we were unable to use the scores to distinguish between the quality of reviews.

Conclusion

Within the BJGP, most reviews scored well on our checklist, and a more sensitive measure may be needed to provide feedback to reviewers. The draft COPE guidance alone may not be sufficient to determine the overall quality of the peer review, or its usefulness to the editor, but has the potential to identify shortcomings in the ethical aspects of peer review. Further use and assessment of the COPE guidance within other biomedical journals is required to determine the generalizability of these results.

1University of Cambridge, Department of Primary Care, Institute of Public Health, Forvie Site, UK, hajiradambha@doctors.org.uk; 2British Journal of General Practice, London, UK

Conflict of Interest Disclosures

None reported.

Back to Top

Effect of Training Workshop on Quality of Students’ Peer Reviewing of Research Abstracts: A One-Arm Self-Controlled Educational Intervention

Seyed-Mohammad Fereshtehnejad,1,2 Hamid Reza Baradaran,2 Maziar Moradi Lakeh2

Objective

We aimed to assess the effects of training workshops on the quality of student peer reviewers’ appraisals of original abstracts, using standardized checklists.

Design

Nine 2-day training workshops were held, involving 330 biomedical students (ie, medicine, nursing, pharmacy, and allied medicine) with a median study year of 4. The workshop curriculum covered the 31-item checklist (adapted from CONSORT and STROBE) for peer reviewing abstracts, as well as several tips about each item, delivered over 20 hours using lectures, simulations, and group discussions. Data from the first workshop were used to calculate the reliability of the peer reviewing checklist, and the validity of the scoring weights for the items was confirmed using the Delphi method. To evaluate the effect of the workshop, students reviewed 3 sample abstracts twice, at the beginning (pretest) and the end (posttest) of the workshop, using the standardized checklist. Both total and section-specific scores were compared between the pretest and posttest and with those of an epidemiology expert.
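
For readers unfamiliar with the reliability and pretest-posttest calculations mentioned above, the sketch below shows how Cronbach’s alpha can be computed for a set of checklist items and how pretest and posttest totals might be compared. All scores are invented, and the paired t test is shown only as one plausible choice; the abstract does not state which test the authors used.

```python
# Illustrative sketch only: Cronbach's alpha for a set of checklist items and a
# paired pretest/posttest comparison; all scores below are invented.
import numpy as np
from scipy import stats

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = raters (students), columns = checklist items."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

methods_items = np.array([[2, 3, 2], [3, 3, 3], [1, 2, 1], [2, 2, 2]])  # hypothetical item scores
print(f"alpha = {cronbach_alpha(methods_items):.3f}")

pretest = np.array([14, 18, 12, 20, 16])   # hypothetical total scores before the workshop
posttest = np.array([19, 22, 15, 24, 21])  # hypothetical total scores after the workshop
t, p = stats.ttest_rel(posttest, pretest)  # paired comparison of overall scores
print(f"paired t = {t:.2f}, P = {p:.3f}")
```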

Results

During the pilot workshop, Cronbach’s alpha was excellent for the “Methods” domain (0.926) and acceptable for the “Results” (0.650) and “Conclusion” (0.739) domains. In the main study, the highest coefficient of variation (CV) was observed in the “Statistical Methods” (CV=120%) and “Study Design” (CV=115%) sections of the pretest. The overall scores changed significantly after participation in the peer review workshop (P<.05). Regression analysis showed that the standardized changes in the “Methods” section had the largest effect on the changes in total abstract score (beta=0.342, P<.001). In comparison with the reference values, the largest improvement was seen in the scores of the “Methods” section.

Conclusions

The current study is one of the few to evaluate the effects of training workshops on the quality of appraisals by student peer reviewers. Our findings showed that the most important gap in students’ competence in using the appraisal checklists was in research methodology. However, familiarity with the abstracts could also have contributed to the improvement in scores at posttest. In addition, more studies are needed to assess whether peer reviewers generally score higher in the first round of assessment.

1Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences & Society (NVS), Karolinska Institutet, Stockholm, Sweden, sm.fereshtehnejad@ki.se; 2Medical Education Research Center, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran

Conflict of Interest Disclosures

None reported.

Back to Top

The Fate of the Rapidly Rejected Manuscript: Where Does It Go Next?

Olivia Wilkins,1,2 Virginia A. Moyer,3 Lewis R. First4

Objective

To determine whether manuscripts that are “rapid rejected” and thus not sent out for further peer review are subsequently published elsewhere, and in which types of journals they are published.

Design

A manuscript was defined as meeting “rapid reject” criteria if the editor in chief or associate editors judged it to be of poor quality on the basis of methodology and analysis of results, an already known and published contribution, or inappropriate content for the readership of general practicing pediatricians (eg, an animal study or too subspecialized for a general pediatric clinical journal). All manuscripts submitted to Pediatrics in 2009 and 2010 that were rapidly rejected were tracked from June 2009 to June 2012 using author and/or title searches in PubMed to determine if they were published in any MEDLINE-indexed journal in the 2 to 3 years following rejection from our journal.

Results

Of 1,268 (17%) manuscripts rapidly rejected out of 7,597 submitted over the 2 years, 302 (24%) were published in 169 journals indexed in MEDLINE. Of these, 57 (19%) appeared in 7 other mainstream general pediatric journals with lower Impact Factors than that of Pediatrics; 80 (26%) appeared in 38 pediatric subspecialty journals, and the remaining 165 (55%) appeared in general or subspecialty adult journals. The average time from rejection to publication elsewhere was 392 days for the period studied from 2009 to 2012, using the first day of the month as the date of publication for the journal issue in which an article appeared.

Conclusions

The majority of manuscripts rapidly rejected by Pediatrics were not published elsewhere within 2 years. Of the 24% that were published, most appeared in lower impact nonpediatric-focused specialty journals.

1Pediatrics, Department of Pediatrics, University of Vermont College of Medicine, Burlington, VT, USA; 2Fordham University, New York, NY, USA; 3American Board of Pediatrics, USA; 4University of Vermont College of Medicine, Burlington, VT, USA, lewis.first@uvm.edu

Conflict of Interest Disclosures

Olivia Wilkins has no conflicts of interest or financial disclosures. Virginia Moyer is vice president for maintenance of certification and quality at the American Board of Pediatrics and chair of the US Preventive Services Task Force. Prior to March 31, 2013, she served as deputy editor of Pediatrics, stepping down from that role when she assumed her new role for the American Board of Pediatrics. She has no financial disclosures. Lewis First is currently also chair of the National Board of Medical Examiners and professor and chair of the Department of Pediatrics at the University of Vermont College of Medicine. He is also chief of pediatrics at the Vermont Children’s Hospital at Fletcher Allen Health Care. He has no financial disclosures.

Back to Top

Systematic Review of the Effectiveness of Training Programs in Writing for Scholarly Publication, Journal Editing, and Manuscript Peer Review

James Galipeau,1 David Moher,1,2 D. William Cameron,1,2 Craig Campbell,3 Paul Hébert,1,2 Paul Hendry,2 Anita Palepu,4 Becky Skidmore5

Objective

To systematically review, evaluate, and synthesize information on whether training in writing for scholarly publication, journal editing, and manuscript peer review (ie, journalology) effectively improves educational outcomes, such as measures of knowledge, intention to change behavior, and measures of excellence in training domains.

Design

A systematic review was conducted involving forward-searching using the SCOPUS citation database and searches of the pre-MEDLINE, MEDLINE, EMBASE, ERIC, and PsycINFO databases. Comparative studies evaluating at least 1 training program, course, or class of interest are included. The review includes studies with populations of those centrally involved in writing for scholarly publication, journal editing, and manuscript peer review, or any other group peripherally involved; interventions or evaluations of training in any specialty or subspecialty of medical writing and publishing targeted at the designated population(s); and comparators: (1) before-and-after administration of a training class/course/program of interest, (2) between 2 or more training classes/courses/programs of interest, or (3) between a training class/course/program and any other intervention(s) (including no intervention). Outcomes include any reported measure of effectiveness of training, including but not limited to measures of knowledge, intention to change behavior, and measures of excellence in training domains (ie, writing, peer review, editing).

Results

This review is ongoing. Preliminary findings indicate the existence of a sufficient number of relevant studies to yield meaningful data. The likely included studies appear to vary in research design (including randomized controlled trials), outcome measures (including quantitative and qualitative measures), and quality.

Conclusions

The results of this systematic review will provide authors, editors, peer reviewers, and other potential trainees with evidence on the effectiveness of training in journalology. The knowledge gained will also provide a solid evidence base from which to develop new training courses and programs, ultimately improving the quality of research practices both within Canada and abroad.

1Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa Hospital-General Campus, Ottawa, ON, Canada, jgalipeau@ohri.ca; 2University of Ottawa, Faculty of Medicine, Ottawa, ON, Canada; 3Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada; 4Centre for Health Evaluation and Outcome Sciences, University of British Columbia, Department of Medicine, Vancouver, BC, Canada; 5Independent consultant, Ottawa, ON, Canada

Conflict of Interest Disclosures

None reported.

Funding/Support and Role of the Funder

This research project is funded by the Canadian Institutes of Health Research. The funder has no role in the design, collection, analysis, and interpretation of the data; in the writing of the manuscript; or in the decision to submit the manuscript for publication.

Back to Top

Poor Manuscript Title as a Predictor for Manuscript Rejection in a General Medical Journal: A Cohort Study

Petter Gjersvik,1 Pål Gulbrandsen,1 Erlend T. Aasheim,2 Magne Nylenna1,3,4

Objective

An editor’s first assessment of a manuscript is to a large degree based on its title and abstract. No study has so far addressed the importance of the manuscript title before editorial assessment and peer review. We hypothesized that submitted manuscripts with a poor title (ie, titles that need major changes) would be more likely to be rejected than manuscripts with titles that need minor or no changes.

Design

Criteria for poor, fair, and good titles (ie, major, minor, and no changes needed) were defined, tested, and refined by all 4 investigators, based on the editorial policy and practice of the Journal of the Norwegian Medical Association, on which all have served as editors for 3 to 19 years. A satisfactory interrater agreement was achieved (Spearman’s rho=0.71). All manuscripts submitted from September 2009 to August 2011 for publication as original articles (n=211) or review articles (n=110) were included. Title quality was rated by consensus by 2 former editors of the journal (P.Gu., M.N.) who were blinded to whether the manuscripts had been accepted. Main outcome measures were rejection rate and odds ratio for rejection.
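
As a worked illustration of the odds ratio outcome measure named above, the short sketch below computes an odds ratio for rejection (poor vs good title) with a Wald-type 95% CI from a 2x2 table; the counts are entirely hypothetical, not the study’s data.

```python
# Hedged illustration: odds ratio for rejection with a Wald 95% CI from a 2x2 table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = rejected/accepted with poor titles; c, d = rejected/accepted with good titles."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

print(odds_ratio_ci(30, 4, 40, 25))  # hypothetical counts -> (OR, lower bound, upper bound)
```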

Results

For original articles, rejection rates for manuscripts with poor, fair, and good titles were 88%, 73%, and 61%, respectively (P=.002). For review articles, the corresponding rejection rates were 83%, 56%, and 38% (P<.001). Odds ratio for rejection of manuscripts with poor titles vs good titles was 4.6 (95% CI, 1.7-12.3) for original articles and 8.2 (2.6-26.4) for review articles. In a logistic regression model with manuscripts with poor or good titles, title quality explained 14% and 27% of the variance of the outcome (rejection or acceptance) for original and review articles, respectively.

Conclusions

Although other and more important factors contribute to whether a manuscript is rejected or accepted, in this study a poor title was significantly associated with manuscript rejection. Criteria for title quality can be subjective and may vary between journals but are probably more consistent within a single journal. Authors should know and comply with the rules, policy, and tradition for article titles in the journal to which they submit a manuscript.

1University of Oslo, Oslo, Norway, petter.gjersvik@gmail.com, petter.gjersvik@medisin.uio.no; 2Journal of the Norwegian Medical Association, Oslo, Norway; 3Norwegian Knowledge Centre for the Health Services, Oslo, Norway; 4Norwegian University for Science and Technology, Trondheim, Norway

Conflict of Interest Disclosures

Petter Gjersvik and Erlend Aasheim work part-time as editors at the Journal of the Norwegian Medical Association.

Back to Top

Attitudes Toward Blinding of Peer Review and Perceptions of Efficacy Within a Small Biomedical Specialty

Reshma Jagsi,1,2 Katherine Egan Bennett,3 Kent Griffith,4 Rochelle DeCastro,5 Calley Grace,3 Anthony Zietman2,6

Objective

Peer reviewers’ knowledge of author identity may influence review content, quality, and recommendations. Therefore, the International Journal of Radiation Oncology, Biology, Physics (Red Journal), the official journal of the American Society for Radiation Oncology, implemented double-blind peer review in 2011. Because previous studies of blinding efficacy have tended to consider larger disciplines than radiation oncology, in which preliminary research findings are often presented at conferences and a small group of investigators serves as the pool of authors and reviewers on any given topic, we sought to evaluate the efficacy of blinding, as well as attitudes toward the new policy.

Design

Between May and August 2012, all Red Journal corresponding authors and reviewers completed a questionnaire regarding demographics, attitudes, and perceptions of success of blinding.

Results

Authors (n=408) were more likely to be female than reviewers (n=519): 32% vs 20% (P<.001). Authors were less likely to hold senior academic positions (18% full professors, 16% associate professors) than reviewers (32% full professors, 25% associate professors; P<.001). Many reviewers (43%) had been reviewing for more than 10 years. Respondent attitudes are detailed in Table 26: only 13% of authors believed the reviewers should be informed of their identities, and 21% believed they should know the identities of their reviewers. In 603 reviews, reviewers believed they could identify the author in 19% and suspected the author’s identity in 31%; they believed they knew the institution(s) from which the paper originated in 23% and suspected in 34%. In the 301 cases when a reviewer believed he or she knew or suspected author identity, 42% indicated that prior presentations served as a clue, and 57% indicated that the literature referenced did so. Of those who believed they knew or suspected the paper’s origin and provided details (n=133), 13% were entirely incorrect (ie, provided details but correctly identified neither author nor institution).

Table 26. Attitudes Toward Blinding

Conclusions

In a small specialty where preliminary research presentations are common and occur in a limited number of venues, reviewers are often familiar with research findings and suspect author identity even when manuscript review is blinded. Nevertheless, blinding appears to be effective in many cases, and support for continuing blinding was strong from both authors and reviewers.

1Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA, rjagsi@med.umich.edu; 2International Journal of Radiation Oncology, Biology, Physics; 3Scientific Publications, American Society for Radiation Oncology (ASTRO); 4University of Michigan Comprehensive Cancer Center, Ann Arbor, MI, USA; 5Center for Bioethics and Social Science in Medicine, University of Michigan, Ann Arbor, MI, USA; 6Radiation Oncology, Massachusetts General Hospital, Boston, MA, USA

Conflict of Interest Disclosures

Reshma Jagsi is a consultant for Eviti and receives grant funding from the National Institutes of Health, American Cancer Society, and National Comprehensive Cancer Network for unrelated work. No other disclosures reported.

Back to Top

A Comparison of the Quality of Reviewer Reports From Author-Suggested Reviewers and Non-Author-Suggested Reviewers in Journals Operating on Open or Closed Peer Review Models

Maria K. Kowalczuk,1 Frank Dudbridge,2 Shreeya Nanda,1 Stephanie Harriman,1 Elizabeth C. Moylan1

Objective

BioMed Central journals allow authors to suggest reviewers, whom the editors may or may not invite to peer review manuscripts. The objective of this study was to assess (1) whether reports from reviewers recommended by authors show a bias in quality and recommendation for final decision, compared with reviewers suggested by other parties, and (2) whether reviewer reports for journals operating on open or closed peer review models differ.

Design

We compared 2 journals of similar sizes and rejection rates that publish in the field of microbiology and infectious diseases. One journal focuses on research in biology and uses single-blind peer review; the other journal is medical with open peer review. All the procedures for handling manuscripts and peer review are identical in the 2 journals, except that the referees are anonymous in the biology journal and are named in the medical journal. In each journal we analyzed 100 manuscripts that had a final decision (accept or reject). Each manuscript had 2 reviewers, 1 suggested by the authors and 1 by another party. Each reviewer report was rated using an established review quality instrument (RQI).

Results

In both journals, the reviewers suggested by the authors were slightly faster to return their reports and much more likely to recommend acceptance. Reviewers not suggested by the authors were significantly less likely to recommend acceptance, and there was a stronger correlation between their recommendation and the final decision. There was no difference in the quality of reports between author-suggested and non-author-suggested reviewers. There was, however, a difference in the overall quality of reports between the open and closed peer review journals. The overall RQI score was 5% higher under the open model (Table 27), owing mainly to higher scores on questions relating to feedback on the methods (11% higher), constructiveness (5% higher), and the amount of evidence substantiating reviewers’ comments (9% higher).

Table 27. Comparison of Reviewer Quality Instrument (RQI) Scores Between Journals Operating on Open and Closed Peer Review Models

Abbreviation: NS, nonsignificant.

aVan Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ. 1999;318(7175):23-27.

Conclusions

Although reviewers suggested by authors are more likely to recommend acceptance, editors appear to recognize this potential bias and put more weight on the reports from the other reviewer. The quality of reports was higher under the open peer review model.

1BioMed Central, London, UK, Maria.Kowalczuk@biomedcentral.com; 2Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK

Conflict of Interest Disclosures

The authors are employed by BioMed Central and work closely with the editors of the journals that are the object of this study. Frank Dudbridge is an associate editor for BMC Bioinformatics, a BioMed Central journal that was not an object of this study.

Back to Top

Correlation of the Final and the Preliminary Evaluation Scores of the Academy of Finland’s Peer Review Panels on Biomedical Engineering, 2006-2012

Juha Latikka

Objective

The objective of this study was to analyze how the final peer review panel scores correlate with the preliminary evaluations given by panel members before the panel meeting. This was done to estimate whether, and to what extent, the actual panel meeting changes the ranking of the evaluated project plans.

Design

Preliminary peer review evaluations from biomedical engineering panels from the 2006-2012 calls were compared with the final panel scores. The forms of funding studied were research project grants and postdoctoral projects. The total number of research plans was 189. Each research proposal was given a preliminary evaluation by 2 or more panel members before the panel meeting. For each project plan, an average of the preliminary scores was calculated. Then, each reviewer’s scores were normalized by setting their average to 0 and standard deviation to 1, and another average of the preliminary scores was calculated for each proposal. Both of these averages were compared with the final panel scores by calculating Pearson correlation coefficients. In addition, Spearman’s rank correlation was calculated because ranking is important when making funding decisions.
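
A minimal sketch of the score handling described above, with invented numbers: each reviewer’s preliminary scores are z-normalized (mean 0, SD 1), averaged per proposal, and then correlated with the final panel scores.

```python
# Sketch with invented data: per-reviewer normalization, then Pearson and Spearman
# correlations between (normalized) preliminary averages and final panel scores.
import pandas as pd
from scipy import stats

prelim = pd.DataFrame({
    "proposal": ["P1", "P1", "P2", "P2", "P3", "P3"],
    "reviewer": ["R1", "R2", "R1", "R3", "R2", "R3"],
    "score":    [4, 5, 2, 3, 4, 4],
})
final = pd.Series({"P1": 5, "P2": 2, "P3": 4})  # hypothetical final panel scores

# Normalize each reviewer's scores to mean 0, SD 1, then average per proposal.
prelim["z"] = prelim.groupby("reviewer")["score"].transform(lambda s: (s - s.mean()) / s.std(ddof=1))
avg_raw = prelim.groupby("proposal")["score"].mean()
avg_norm = prelim.groupby("proposal")["z"].mean()

print(stats.pearsonr(avg_raw[final.index], final))    # raw preliminary average vs final score
print(stats.pearsonr(avg_norm[final.index], final))   # normalized preliminary average vs final score
print(stats.spearmanr(avg_norm[final.index], final))  # rank correlation
```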

Results

The correlation between the final and preliminary scores for each panel was between 0.781 (2007 panel, n=27) and 0.936 (2006 panel, n=37). The correlation between the final and normalized preliminary scores was between 0.540 (2007 panel, n=27) and 0.903 (2012 panel, n=27). The Spearman’s rank correlation was between 0.779 (2011 panel, n=32) and 0.936 (2006 panel, n=37). Spearman’s rank correlation was also calculated for the 2 funding instruments separately, giving values between 0.839 (2008 panel, n=17) and 0.940 (2011 panel, n=18) for research project grants. For postdoctoral projects, the Spearman’s rank correlation values ranged from 0.720 (2007 panel, n=12) to 0.963 (2006 panel, n=13).

Conclusion

In the biomedical engineering panels for the 2006-2012 calls, the panel meeting added information to the peer review evaluation of research projects and affected the scores and ranking of the proposals compared with the separate preliminary evaluations.

Academy of Finland, Helsinki, Finland, juha.latikka@aka.fi

Conflict of Interest Disclosures

None reported.

Back to Top

Are Reviewers’ Scores Influenced by Citations to Their Own Work? An Analysis of Submitted Manuscripts and Review Reports

David L. Schriger,1 Samantha P. Kadera,1 Mickey M. Murano,1 Erik von Elm2

Objective

Academic medical researchers are frequently judged by the number of citations that their work receives in the literature and derived Impact Factors. These same academics serve as reviewers for journals and might be more likely to endorse papers citing their own work. We investigated whether manuscripts that contain a citation to the reviewer’s work received higher evaluations than those that do not.

Design

We checked every research paper submitted to the Annals of Emergency Medicine in 2011-2012 that was sent out for peer review and determined whether it contained citations to each reviewer’s work. We obtained each reviewer’s score for overall manuscript quality (1=worst to 5=best) and compared the ratings of cited and noncited reviewers, using descriptive statistics and ordinal and standard logistic regression models accounting for clustering by manuscript, to determine whether citation affected reviewers’ ratings.
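
The abstract does not provide the authors’ analysis code. As one hedged illustration of a logistic model that accounts for clustering by manuscript, the sketch below fits a GEE logistic regression to a fabricated data set; all names and numbers are invented, and GEE is only one of several ways to handle clustered reports.

```python
# Rough sketch, not the authors' code: modeling a dichotomized quality score (high = 4-5)
# on cited-reviewer status while accounting for clustering by manuscript via GEE.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
reports = pd.DataFrame({
    "manuscript_id": rng.integers(0, 80, n),  # review reports cluster within manuscripts
    "cited": rng.integers(0, 2, n),           # 1 = manuscript cites this reviewer's work
})
reports["high_score"] = rng.binomial(1, 0.3 + 0.1 * reports["cited"])  # fabricated outcome

model = smf.gee("high_score ~ cited", groups="manuscript_id",
                data=reports, family=sm.families.Binomial())
result = model.fit()
print(np.exp(result.params["cited"]))  # odds ratio for cited vs noncited reviewers
print(result.summary())
```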

Results

The 395 manuscripts had 999 review reports. Of those, 66 manuscripts (83 reports) had 1 or more cited reviewers. Nine of the 66 manuscripts had cited reviewers only, and 57 had both cited and noncited reviewers (Figure 10). In the 329 manuscripts without a cited reviewer, the mean score was 2.48 (SD 1.19; 799 reports). In the 66 manuscripts with 1 or more cited reviewers, it was 2.86 (SD 1.25; 200 reports); when excluding the 83 reports by cited reviewers, it was 2.86 (SD 1.17; 117 reports). The unadjusted odds ratio for an increase of 1 score point was 1.60 (95% CI, 0.96-2.67) in reports by cited vs noncited reviewers. When adjusting for each manuscript’s mean score, this odds ratio was 0.92 (95% CI, 0.63-1.34), demonstrating that manuscript quality was a confounder. This was confirmed when dichotomizing scores into low (1-3) and high (4-5) quality and using standard logistic regression.

Figure 10. Ratings by Noncited and Cited Reviewers for 66 Papers With ≥1 Cited Reviewer

Conclusions

In a leading specialty journal, editors assigned cited reviewers to better manuscripts (ie, those with higher mean quality scores). Ratings by cited reviewers were generally higher than those by noncited reviewers, but this was largely due to differences in the quality of the reviewed manuscripts. We found no evidence of an independent effect of citation of reviewers’ own work on their ratings.

1University of California Los Angeles, Los Angeles, CA, USA, schriger@ucla.edu; 2Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland

Conflict of Interest Disclosures

None reported.

Funding/Support

David Schriger’s time is supported in part by an unrestricted grant from the Korein Foundation, which has no influence on the decision to conduct this research project or its execution.

Back to Top

Laboratory Investigation Editorial Internships: Peer Review Opportunities for Young Investigators

Catherine M. Ketcham,1 Robert W. Hardy,2 Brian P. Rubin,3 Gene P. Siegal2

Objective

Reviewing manuscripts is an important activity for academic scientists, yet there are no proven programs for training young scientists in peer review. Therefore, Laboratory Investigation (LI), an investigative pathology journal, elected to offer promising trainees an opportunity to review manuscripts for LI as editorial interns (EIs). The purpose of this study is to evaluate the quality of the reviews provided by the EIs.

Design

EIs (98) were recruited based on recommendations of department chairs (68 EIs, 69.4%), editors of the journal (11 EIs, 11.2%), and self-referral (19 EIs, 19.4%). Manuscripts were sent to EIs based on their areas of expertise. The EIs submitted reviews under the same guidelines as the 2 senior reviewers assigned to the manuscript. Only EI reviews that were of high quality were included in decision letters to avoid burdening authors with inexpert criticism. The EIs were expected to judge their own performance by comparing their comments with those of senior reviewers.

Results

Seventy-six EIs were assigned to 18% of peer-reviewed manuscripts (290/1,643) from July 2009 through December 2012. Editorial interns returned 221 reviews and accepted assignments 77% of the time (222/290), which compares favorably with senior reviewers (67%; 3,618/5,432). Editorial interns were reliable about returning reviews; more than 99% of reviews promised were provided (221/222), and review time was 11 days on average. Senior reviewers provided promised reviews 92% of the time (3,317/3,618) and took an average of 13 days. Ninety percent of the EI reviews were included in decision letters (200/221). Overall, 41% of interns were responsive and on-time and provided useful reviews (40/98), 12% were not responsive or provided poor-quality reviews (11/98), and there was not enough information to reach a conclusion on the remainder (47%, 46/98).

Conclusion

The EIs agreed to review at a high rate and quickly turned in quality reviews; therefore, we judge this to be a successful program.

1United States and Canadian Academy of Pathology, Augusta, GA, USA; 2Department of Pathology, University of Alabama at Birmingham, Birmingham, AL, USA, gsiegal@uab.edu; 3Cleveland Clinic, Department of Anatomic Pathology, Cleveland, OH, USA

Conflict of Interest Disclosures

Catherine M. Ketcham is the managing editor of Laboratory Investigation and an employee of Ketcham Solutions, Inc, which receives payment from the United States and Canadian Academy of Pathology for journal management. Robert W. Hardy and Brian P. Rubin are senior associate editors of Laboratory Investigation and receive a stipend from the United States and Canadian Academy of Pathology. Gene P. Siegal is the editor in chief of Laboratory Investigation and receives a stipend from the United States and Canadian Academy of Pathology.

Back to Top

Crowd-Sourcing in Medical Publishing: American Journal of Preventive Medicine’s Childhood Obesity Challenge

Jill Waalen,1 Brian Saelens,2 Elsie Taveras,3 Beverly Lytton,1 Bill Silberg,1 Kevin Patrick1

Objective

Crowd-sourcing is increasingly being used to solve a variety of problems. In July 2012, the American Journal of Preventive Medicine (AJPM), with support from the Robert Wood Johnson Foundation, launched the online Childhood Obesity Challenge, an experiment in using crowd-sourcing to expand the journal’s publishing model by encouraging submissions from authors both inside and outside academia; accommodating formats beyond the traditional manuscript; and speeding publication of innovative solutions that may not have undergone extensive study.

Design

The Challenge was publicized using Facebook and Google ads, Twitter, blog posts, press releases, and e-mail blasts to relevant professional organizations as well as to AJPM’s database of authors and reviewers. Entrants uploaded a 3-page summary of their project or idea, along with supporting videos, apps, and website links. At the end of the 6-week submission period, 3 academic editors chose entries for peer review based on innovation, feasibility, and potential reach. The top 10 peer-reviewed entries were then evaluated by an 8-member judging panel, representing expertise in business, public policy, and health care systems as well as in childhood obesity research, for selection of the 1st- through 3rd-prize winners. Total review time was 6 weeks.

Results

Three rounds of the Challenge were completed during July 2012-May 2013, with 107 submissions received in round 1, 34 submissions in round 2 (policy focus), and 25 submissions in round 3 (clinical solutions). Of the 166 total submissions, 134 (81%) were from nonacademic authors who had never submitted a manuscript to AJPM, most representing programs being implemented by nonprofit organizations. The website attracted 68,454 visits and 7,898 registered users. Articles describing the 1st-prize winners were published in AJPM, and winning teams received nominal cash prizes (Table 28). Winners report that participation in the Childhood Obesity Challenge has resulted in new linkages with researchers and programs with complementary goals.

Table 28. 1st-Place Winners of AJPM Childhood Obesity Challenge

Conclusions

Crowd-sourcing combined with peer review extended the reach of AJPM to new authors outside of academia and sped publication of innovative ideas not likely to have been captured by the traditional publishing model. Future Challenges (to include other topics) will focus on broadening publicity to attract ideas from even more diverse sources.

1American Journal of Preventive Medicine, University of California, San Diego, San Diego, CA, USA, jwaalen@scripps.edu; 2University of Washington and Seattle Children’s Research Institute, Seattle, WA, USA; 3Harvard Pilgrim Health Care Institute and Harvard Medical School, Boston, MA, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

Funding for this project was provided by the Pioneer Portfolio of the Robert Wood Johnson Foundation. The funder had no role in the conduct of the Challenge or the preparation of this abstract.

Postpublication Peer Review

Back to Top

What Do Authors Really Want to Know Postpublication?

Mary Beth Schaeffer,1,2 Christine Laine,1,2 Darren Taichman,1,2 Jill Jackson,1,2 Parsh Mehta,2 Catharine Stack1,2

Objective

Annals of Internal Medicine has created a manuscript data center where all authors of a paper can see information about their manuscript from submission through publication, as well as information about the attention the article receives after publication: web hits, media citations, scientific citations, and reader comments. Providing postpublication information is resource intensive, so we evaluated authors’ use of the information to determine whether it provides sufficient value to justify the resources devoted to making postpublication data available to authors.

Design

We sent an e-mail to authors between 6 and 10 weeks after publication with a link to the manuscript data center, which displays the web hits on the first page. Each author received the e-mail once. Authors may also view any correspondence we sent during the review of their manuscript and general information about Annals. In fall 2012 we added the ability to track the number of authors who open these pages after receiving the e-mail. We compared the open rates of 4 e-mail blasts sent over the past several months.

Results

The 4 e-mail blasts that were sent between fall 2012 and spring 2013 had a 48% to 69% open rate. Authors viewed the information shown in Table 29.

Table 29. Areas Visited by People Who Opened E-mail

Conclusions

A large proportion of the authors access the information in the manuscript data center. Open rates were highest for media citations, followed by scientific citations and correspondence. Authors appear interested in the information we collect regarding the attention their articles receive postpublication. We should continue the service and publicize it to increase the attractiveness of publication in Annals of Internal Medicine.

1Annals of Internal Medicine, Philadelphia, PA, USA, mschaeffer@acponline.org; 2American College of Physicians, Philadelphia, PA, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

No external funding was provided for this study. Contributions of staff time and resources came from American College of Physicians and Annals of Internal Medicine.

Quality of Reporting

Back to Top

Temporal Change in the Use of P Values and Confidence Intervals for Reporting Intergroup Comparisons in Abstracts in Major General Medical Journals

Rakesh Aggarwal,1 Shambhavi Srivastava,1 Aditya N. Sarangi,1 Peush Sahni2

Objective

Two approaches for reporting intergroup comparisons, namely, P values and confidence intervals (CIs), provide different information to readers. In general, it is believed that providing both these measures makes the scientific papers and their abstracts more informative. In this study, we assessed changes in the use of P values, CIs, or both in abstracts of papers reporting intergroup comparisons that were published in 5 general medical journals over a 20-year period.

Design

Abstracts of papers published in 5 general medical journals (Annals of Internal Medicine [AIM], BMJ, Lancet, New England Journal of Medicine [NEJM], and JAMA) during 1991, 1996, 2001, 2006, and 2011 were extracted from PubMed to identify papers that contained intergroup comparisons. From each abstract, the number of outcomes for which intergroup comparisons were reported and the type of information provided for each comparison (P value, CI, or both) were extracted. The proportions of intergroup comparisons for which a P value, CI, or both were reported were calculated for each journal-year pair.
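
A small sketch of the tabulation described above, using a handful of invented rows rather than the extracted data: a normalized cross-tabulation gives the proportion of comparisons reported as P value, CI, or both for each journal-year pair.

```python
# Illustrative sketch (invented rows): percent of comparisons per journal-year pair
# reported as P value only, CI only, or both.
import pandas as pd

comparisons = pd.DataFrame({
    "journal":  ["BMJ", "BMJ", "JAMA", "JAMA", "Lancet", "Lancet"],
    "year":     [1991, 2011, 1991, 2011, 1991, 2011],
    "reported": ["P value", "Both", "P value", "CI", "CI", "Both"],
})

proportions = (
    pd.crosstab([comparisons["journal"], comparisons["year"]],
                comparisons["reported"], normalize="index") * 100
)
print(proportions.round(1))
```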

Results

Data were available on 3,525 outcome comparisons (AIM 424, BMJ 655, JAMA 788, Lancet 840, NEJM 818) during the 5 years. In 1991, the 5 journals taken together reported 340 (56.8%), 121 (20.2%), and 138 (23.0%) comparisons as P values, CIs, and both, respectively. In 2011, these figures had changed to 104 (14.4%), 245 (33.9%), and 374 (51.7%), respectively. The data on the frequency of use of CIs and P values for individual journals are shown in Table 30. Over the study period, each of the 5 journals showed a progressive decline in the frequency of use of P values alone and an increase in that of CIs (with or without P values). However, even in 2011, 3% to 37% of abstracts in the 5 journals continued to provide data as P values alone and 14% to 49% as CIs alone.

Table 30. Proportion of Outcomes Reported as P Value, Confidence Interval, or Both

All data are shown as percent for each journal-year combination. The length of horizontal color bar in each cell is proportional to the value in that cell. Thus, bars in each column combine to form a bar diagram showing change over time.

Conclusions

Although the use of CIs has increased in major medical journals over the past 20 years, the abstracts of many articles still do not provide results using CIs. Journals need to promote the reporting in abstracts of CIs, which make effect sizes easier to understand, alongside P values.

1Department of Gastroenterology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India, aggarwal.ra@gmail.com; 2Department of Gastrointestinal Surgery, All India Institute of Medical Sciences, New Delhi, India

Conflict of Interest Disclosures

None reported.

Back to Top

Reporting Standards for Translational Animal Research

David Krauth,1 Calvin Gruss,2 Rose Philipps,1 Lisa Bero1

Objective

To assess the reporting of preclinical animal studies examining the effects of statins on atherosclerosis outcomes and to determine whether the journals in which the articles were published have reporting requirements for animal research. We selected atherosclerosis because it is the most common outcome addressed in preclinical statin research aside from cholesterol lowering.

Design

We searched MEDLINE (January 1966-April 2012) and identified 63 articles evaluating the effects of statins on atherosclerosis outcomes in animals. Based on our systematic review of quality assessment instruments for animal research, we assessed 8 criteria addressing the reporting of the study methods and animal details. We reviewed journal websites to determine if the 36 journals publishing the 63 articles directed authors to reporting standards for animal studies. We also assessed whether or not reporting improved following publication of the ARRIVE guidelines in 2010.

Results

Compliance with animal welfare requirements was reported in 51 of 63 articles. Test animal characteristics (ie, species, strain, substrain, genetic background, age, supplier, sex, and weight) were fully reported in 52 articles and partially reported in 11 articles. Environmental parameters (ie, housing and husbandry conditions, nutrition, water, temperature, and lighting conditions) were fully reported in 22 articles and partially reported in 39. Only 2 articles reported exclusion/inclusion criteria for the animals; 58 reported the sample size. Criteria to assess risk of bias were poorly reported, with randomization reported in 30 articles, blinding of investigators in 23, and whether all animals were accounted for in 39. Among the 36 journals in which the 63 articles were published, 9 required only that authors report compliance with animal welfare regulations, 16 had specific reporting requirements (2 referencing the ARRIVE guidelines), and 11 did not have any reporting requirements. Randomization, whether all animals were accounted for, and sample size were the only criteria reported more frequently after publication of the ARRIVE guidelines.

Conclusions

Although there are published reporting guidelines for preclinical research, reporting in our sample was poor. Reporting could be improved by requiring authors to adhere to published reporting standards.

1Department of Clinical Pharmacy and Institute for Health Policy Studies, University of California, San Francisco, San Francisco, CA, USA, berol@pharmacy.ucsf.edu; 2Vanderbilt University School of Medicine, Nashville, TN, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

We acknowledge our funding sources: the National Institute of Environmental Health Sciences (grant R21ES021028) and the Vanderbilt University School of Medicine Medical Scholars Program. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the abstract.

Back to Top

Compliance Reporting in Randomized Controlled Trials: A Literature-Based Study

Francois Cachat,1 Katharina Theodoropoulou,1 Caroline Maendly,2 Hassib Chehade,1 Erik Von Elm3

Objective

To study the reporting of compliance of trial participants in pediatric and general medical journals.

Design

We identified all randomized controlled trials published 1997-1999 and 2007-2009 in 4 pediatric and 4 general medical journals and randomly selected a maximum of 30 from each. We extracted information on trial characteristics, compliance definition, methods of assessment of compliance in the trial, and number of participants excluded for noncompliance. We defined compliance as adherence to the prescribed medication, but did not exclude articles using other or no explicit definition of compliance.

Results

We included 358 full articles reporting on trials. Of those, 152 (42%) were conducted in children, 193 (54%) in adults, and 13 (4%) in both. The experimental intervention included 1, 2, or 3 drugs in 85%, 12%, and 3% of the trials, respectively. The route of administration was oral (82%), by inhalation (6%), subcutaneous (6%), or topical (6%). Compliance information was reported in 215 (60%) papers. Of those, 155 (72%) did not include a proper definition of compliance. In 332 (93%) studies, the number or proportion of participants excluded from the study because of noncompliance could not be properly identified. Where reported, the proportion of noncompliant participants varied between 1% and 69%. Methods for compliance assessment included pill counting after package return (51%), patient diary (23%), or questionnaire (26%). More objective methods, such as measuring blood or urine drug concentrations, were mentioned only rarely. The proportion of articles reporting compliance did not differ between exclusively pediatric and exclusively adult trials (Fisher exact test: P=.911) or between the 2 publication periods (Fisher exact test: P=.829).
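
For illustration only, the snippet below shows a Fisher exact test of the kind reported above, applied to a made-up 2x2 table of articles that did or did not report compliance in the 2 publication periods.

```python
# Minimal sketch with made-up counts: Fisher exact test comparing the proportion of
# articles reporting compliance between the 2 publication periods.
from scipy.stats import fisher_exact

# rows: period 1997-1999, period 2007-2009; columns: reported, did not report
table = [[105, 70],
         [110, 73]]
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, P = {p_value:.3f}")
```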

Conclusions

Information on participants’ compliance in randomized controlled trials (RCTs), and on the methods used to assess it, is reported only rarely and often poorly. Researchers and journal editors should advocate for better compliance assessment and reporting in RCTs. Relevant reporting guidelines such as CONSORT and SPIRIT could recommend compliance reporting more explicitly.

1Department of Pediatrics, Division of Pediatric Nephrology, University Hospital, Lausanne, Switzerland, fcachat@hotmail.com; 2Samaritain Regional Hospital, Vevey, Switzerland; 3Institute for Social and Preventive Medicine, University Hospital, Lausanne, Switzerland

Conflict of Interest Disclosures

None reported.

Back to Top

The Graphical Literacy of Top Medical Journals in 2012

Jennifer C. Chen,1 Mike E. McMullen,2 Carter C. Wystrach,3 David L. Schriger,3 Richelle J. Cooper3

Objective

Graphics are a powerful way to convey complex data efficiently and effectively. Even if complete datasets were posted with each research paper, graphics can often communicate those data more succinctly and elucidate important trends. It is unfortunate that most trial reports depict only a tiny fraction of their data, because reports with complex graphics or tables provide a more comprehensive and potentially less biased account of a trial’s results. We evaluated the quantity and quality of graphs in a sample of top ISI-ranked journals.

Design

We took a random sample of 10 research articles, published in 2012, from 20 general (6), specialty (7), and randomly selected subspecialty (7) journals that publish original research and were top ranked by the 2011 ISI. We identified the number of tables and graphs in papers and their supplements and analyzed a random sample of 5 graphs for papers with 6 or more data graphs and all graphs in the others. Using methods and abstraction forms developed in previously published research, we catalogued the clarity, completeness, redundancy, special features, and efficiency of the graphs.

Results

The 200 articles had 912 data tables (mean, 4.6/article) and 564 figures, including 80 study flow diagrams, 32 study protocols, 86 images, and 366 data figures (mean, 1.7/article); 72 articles (36%) had no data figures. We analyzed 344 graphs (Table 31). Simple bar/point graphs and survival curves predominated. Although we found specific deficiencies in many graphs, 99% were deemed self-explanatory. During the review we noted considerable heterogeneity in the quantity, type, and quality of graphs among journals, which will be reported in the poster.

Table 31. Characteristics of the 344 Graphs in the 128 Papers That Had Data Graphs

Conclusions

Simple bar and point graphs constituted 54% of all graphs. In many cases these represent a missed opportunity to illustrate additional dimensions in the data. Our data suggest specific items that editors might rectify, such as undefined error bars, heavy gridlines that visually compete with the data, and failure to indicate the number of subjects represented by each graph object.

1West Los Angeles VAMC, Los Angeles, CA, USA; 2Emergency Medicine Center, University of California Los Angeles, Los Angeles, CA, USA; 3University of California Los Angeles, Los Angeles, CA, USA, richelle@ucla.edu

Conflict of Interest Disclosures

None reported.

Funding/Support

David Schriger’s time is supported in part by an unrestricted grant from the Korein Foundation, which has no influence on the decision to conduct this research project or its execution.

Back to Top

Reporting of Study Enrollment: Missed Opportunities to Clarify Selection and Suggestions for Improvement

Jessica J. Wall,1 Richelle J. Cooper,1 Brian Raffeto,2 Douglas G. Altman,3 David L. Schriger1

Objective

CONSORT, STARD, and STROBE provide guidelines for reporting sample selection from the point where patients are screened for eligibility. They do not all include items for earlier steps, when patients are being identified as candidates for eligibility screening. We sought to identify (1) the frequency with which investigators provide a numeric account of who could have been screened for eligibility; (2) an explanation of whether selection at each step was nonrandom; (3) an account of who dropped out at each step and why; (4) the varying reporting and definition of “convenience” sampling and “consecutive” sampling; and (5) reporting of CONSORT, STARD, and STROBE flow figure elements.

Design

We conducted a systematic cross-sectional review of 20 top (2011 ISI rankings) general (6), specialty (7), and subspecialty (7) journals that publish original clinical research. We identified all original research eligible for CONSORT, STARD, or STROBE consideration and randomly selected 10 randomized controlled trials (RCTs) per journal from 2011-2012, and 5 diagnostic studies and 5 observational studies per journal (2008-2012) that enrolled patients prospectively. Trained reviewers scrutinized text and flow diagrams to collect the aforementioned information on standardized abstraction forms. Analysis was descriptive with stratification on study design.

Results

Reporting of the early phases of sample recruitment was poor (Table 32). Most reports did not indicate how screening occurred. While 11% of RCTs and 37% of diagnostic studies claimed to enroll “consecutive” patients, no RCT and only 11% of these diagnostic studies described how they ensured that no patients were missed. Only 1.5% of RCTs acknowledged convenience sampling, as did 4% of diagnostic studies; the remaining studies did not describe how they identified patients to be screened for study eligibility.

Table 32. Reporting of Patient Flow in Randomized Controlled Trials (RCTs) and Diagnostic Studies That Used Prospective Enrollmenta

aObservational study data to be presented at the Congress.

Conclusions

At present, most studies do not report information about the early phases of study enrollment, phases that have a bearing on selection and spectrum bias. While our study provides evidence that compliance with CONSORT and STARD is far from perfect, inclusion of items concerning the initial steps in sample recruitment could improve reporting and increase the transparency of investigations.

1Department of Emergency Medicine, University of California Los Angeles Medical School, Los Angeles, CA, USA, richelle@ucla.edu; 2David Geffen School of Medicine, University of California, Los Angeles, USA; 3Centre for Statistics in Medicine, University of Oxford, Oxford, UK

Conflict of Interest Disclosures

None reported.

Funding/Support

David Schriger’s time is supported in part by an unrestricted grant from the Korein Foundation, and Douglas Altman’s time is supported by Cancer Research UK; neither funder had any influence on the decision to conduct this research project or on its execution.

Back to Top

Use of Clustering Analysis in Randomized Controlled Trials in Orthopedic Surgery

Hanna Oltean, Joel J. Gagnier

Objective

The effects of clustering in randomized controlled trials (RCTs) and potential violation of assumptions of independence are well recognized. Analysis of the effects of clustering on study outcomes is an important component of unbiased reporting. This study looks at the use of clustering analysis in RCTs published in the top orthopedic surgery journals.

Design

RCTs published from 2006 to 2010 in the top 5 journals of orthopedic surgery, as determined by 5-year Impact Factor, that included multiple therapists and/or centers were included. The journals were American Journal of Sports Medicine (AJSM), Journal of Bone and Joint Surgery (JBJS), Journal of Orthopaedic Research (JOR), Osteoarthritis and Cartilage (OC), and The Spine Journal (SJ). Identified articles were assessed for whether they accounted for the effects of clustering of therapists and/or centers in randomization or analysis. Using a data-driven approach, logistic regression was conducted with use of clustering analysis as the outcome, followed by a stepwise elimination procedure.

Results

A total of 186 articles were included. The prevalence of use of clustering analysis was 21.5%. In multivariable modeling, adjusting for clustering was associated with 6.7 times higher odds of inclusion of any type of specialist on the study team (P=.08). Likewise, trials that accounted for clustering had 3.3 times the odds of including an epidemiologist/clinical trials methodologist compared with trials that did not account for clustering (P=.04).
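
For readers unfamiliar with what "accounting for clustering" can look like in practice, the sketch below is a minimal, hypothetical illustration (simulated data and invented variable names, not drawn from the trials reviewed here) of one common approach: a linear mixed model with a random intercept for treatment center, fitted with statsmodels in Python.

```python
# Minimal sketch of accounting for clustering by centre via a random intercept.
# Data and variable names are simulated/hypothetical, purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_centers, n_per_center = 10, 30
center = np.repeat(np.arange(n_centers), n_per_center)
treatment = rng.integers(0, 2, size=center.size)
center_effect = rng.normal(0, 1.0, n_centers)[center]   # between-centre variation
outcome = 0.5 * treatment + center_effect + rng.normal(0, 1, center.size)

df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "center": center})

# The random intercept for centre captures within-centre correlation that a
# plain regression treating all patients as independent would ignore.
model = smf.mixedlm("outcome ~ treatment", df, groups=df["center"]).fit()
print(model.summary())
```

Fitting ordinary least squares to the same data without the center term would treat within-center observations as independent, which is the assumption violation described above.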

Conclusions

There is a low prevalence of use of clustering analysis in RCTs in the orthopedic literature. The inclusion of specialists in statistics or clinical trials methodology/epidemiology was associated with greater use of this important analysis tool. Investigators planning RCTs should carefully select their study methods and teams to ensure proper design and analysis.

Departments of Orthopedic Surgery and Epidemiology, University of Michigan, Ann Arbor, MI, USA, jgagnier@umich.edu

Conflict of Interest Disclosures

None reported.

Back to Top

Quality of Reporting Methods for Estimating Hazard Ratio From Analyzing Time-to-Event Outcomes in Four General Medical Journals

Yen-Hong Kuo

Objective

Following the principles of the International Committee of Medical Journal Editors (ICMJE), a manuscript is expected to “Describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to verify the reported results.” The hazard ratio is commonly used in clinical research to compare the hazard rates between two groups when the outcome of interest is a measure of time. It can be estimated using the Cox proportional hazards regression model (CPHRM) only when the assumption of proportional hazards is met. Several approaches are available for testing this assumption. Therefore, reporting these methods is a good indicator of compliance with the ICMJE principles. The purpose of this study was to assess the quality of reporting of methods for estimating the hazard ratio from analyses of time-to-event outcomes in 4 general medical journals.
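
As a concrete, hypothetical illustration of the kind of analysis whose reporting is assessed here, the sketch below fits a Cox proportional hazards model and applies a Schoenfeld-residual-based test of the proportional hazards assumption using the Python lifelines package and its bundled example dataset; it is not drawn from any of the articles reviewed.

```python
# Minimal sketch: estimate hazard ratios with a Cox model and test the
# proportional-hazards assumption. Uses lifelines' bundled example data.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

df = load_rossi()                                  # example recidivism dataset
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.summary[["coef", "exp(coef)"]])          # exp(coef) is the hazard ratio

# Schoenfeld-residual-based test of the proportional-hazards assumption
result = proportional_hazard_test(cph, df, time_transform="rank")
result.print_summary()
```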

Design

All original research articles published from July through December 2012 in BMJ, JAMA, Lancet, and New England Journal of Medicine (NEJM) that reported “hazard ratio” or “HR” as a result in the abstract were included in the analysis.

Results

A total of 88 articles (19 from BMJ, 26 from JAMA, 13 from Lancet, and 30 from NEJM) were included. Seventy-five articles (85%) used the CPHRM to estimate the hazard ratio (95% exact confidence interval, 76%-92%). Among them, only 36 articles (48%) described testing the proportional hazards assumption (95% exact confidence interval, 36%-60%). The percentages of reporting (from lowest to highest, 27%, 27%, 52%, and 82%) were statistically significantly different among journals (P=.003 by 2-sided Fisher exact test). The methods used to test the proportional hazards assumption were specified in only 28 articles (37%; 95% exact confidence interval, 26%-49%). Testing the interaction between time and treatment was the most commonly reported method (n=13, 17%), followed by Schoenfeld residuals (n=9, 12%) and graphical methods (n=7, 9%).

Conclusion

When the hazard ratio was the main outcome, details of its estimation were frequently lacking in articles published in these 4 large general medical journals.

Jersey Shore University Medical Center, Neptune, NJ, USA

Conflict of Interest Disclosures

None reported.

Back to Top

Improving the Completeness of Reporting of Research Articles: Implementing an Automated System

David Moher,1,2 Larissa Shamseer,1,2 James Galipeau,1 Jason Roberts,3 Tim Houle,3,4 Pierre-Olivier Charlebois,5 Chris Ivey5

Objective

Although many reporting guideline checklists exist, they are not always used by authors and journals. Formal interviews and focus groups repeatedly point to the need for an automated system to facilitate the use of the checklists by authors, peer reviewers, and editors.

Design

Using the CONSORT statement checklist as a template, we have developed a content management system (CMS) that facilitates the automated use of the checklist and allows a more interactive dialogue between authors and editors. The focus of the CMS is on the initial submission of a randomized trial to a journal for publication consideration, namely, the initial vetting of the trial. We have developed a web-based software-as-a-service application that takes the form of an interactive version of the CONSORT checklist. Through a combination of automated and manual processes, the CONSORT checklist becomes an integral part of writing the final trial report.
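
The abstract does not describe the internal logic of the system, but a toy sketch can illustrate the general idea of automatically flagging checklist items from manuscript text. The item labels, keywords, and function below are hypothetical simplifications, not the authors' software.

```python
# Hypothetical sketch: naive keyword screening of a draft manuscript against a
# few CONSORT-style items. Item wording and keyword lists are simplified
# assumptions for illustration only.
import re

CONSORT_KEYWORDS = {
    "Sequence generation": ["random sequence", "computer-generated", "randomisation list"],
    "Allocation concealment": ["allocation concealment", "sealed opaque envelope"],
    "Blinding": ["blinded", "masked", "double-blind"],
}

def screen_manuscript(text: str) -> dict:
    """Return, for each checklist item, the keywords found in the text."""
    lower = text.lower()
    return {
        item: [k for k in keywords if re.search(re.escape(k), lower)]
        for item, keywords in CONSORT_KEYWORDS.items()
    }

draft = "Participants were assigned using a computer-generated random sequence..."
for item, hits in screen_manuscript(draft).items():
    print(f"{item}: {'possibly addressed' if hits else 'not detected'} {hits}")
```

A production system would need far richer text analysis than keyword matching, plus the manual confirmation steps the authors describe.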

Results

Once the application is downloaded and activated by prospective authors, a copy of the CONSORT checklist appears integrated into the manuscript writing process. Based on the text of the drafted manuscript and through a combination of automated processes, each checklist item can be populated and highlighted in various ways. Once the checklist is populated, the completed file can be included among the submitted manuscript files. The system will be shown with several snapshots of a completed trial.

Conclusions

More complete journal adherence to and implementation of CONSORT is associated with more completely and transparently reported randomized trials. Optimizing the implementation of CONSORT, and other reporting guideline checklists, across a larger number of journals requires more automation for prospective authors and editors, as our CMS demonstrates.

1Ottawa Hospital Research Institute, Ottawa, ON, Canada, dmoher@ohri.ca; 2University of Ottawa, Ottawa, ON, Canada; 3Headache: The Journal of Head and Face Pain, Plymouth, MA, USA; 4Wake Forest School of Medicine, Winston-Salem, NC, USA; 5Koneka, Ottawa, ON, Canada

Conflict of Interest Disclosures

Pierre-Olivier Charlebois and Chris Levy are members of Koneka. David Moher has received funding from the University of Ottawa for this project in the form of a proof-of-principle grant.

Funding/Support

This project is funded by Koneka and the University of Ottawa (grant support to David Moher).

Back to Top

Characteristics of Manuscripts Rejected From a Chinese Medical Journal: A Cohort Study

HAO Xiu-yuan, QIAN Shou-chu

Objective

To investigate the characteristics and publication rates of manuscripts previously rejected by a leading Chinese general medical journal.

Design

In a cohort study, we assessed all manuscripts submitted to the Chinese Medical Journal (CMJ) from August 1 to December 31, 2009. Commentaries, letters, and manuscripts withdrawn by authors were excluded. Characteristics of submitted manuscripts included final decision (acceptance or rejection, with or without external peer review), study design, sample size, and funding source. During February 8-16, 2013, rejected manuscripts were searched for in PubMed, Google Scholar, and the Chinese Wanfang database using their titles, key words, and authors to obtain further publication information. Corresponding authors of the rejected manuscripts were also sent e-mails to confirm the search results. Logistic regression analysis was used to determine the factors associated with subsequent publication.
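
A minimal sketch of the type of logistic regression described above, using simulated data and hypothetical variable names (study design, rejection without external review, funding) rather than the actual CMJ dataset:

```python
# Hypothetical sketch: logistic regression for subsequent publication (yes/no)
# with made-up predictors and simulated data, for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 769
df = pd.DataFrame({
    "published": rng.integers(0, 2, n),                        # 1 = later published elsewhere
    "design": rng.choice(["rct", "observational", "other"], n),
    "rejected_without_review": rng.integers(0, 2, n),
    "funded": rng.integers(0, 2, n),
})

model = smf.logit("published ~ C(design) + rejected_without_review + funded", df).fit()
print(np.exp(model.params))   # exponentiated coefficients = odds ratios
```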

Results

Of 1,005 manuscripts enrolled, 236 (23.5%) were published after the initial review in CMJ, and 769 (76.5%) were rejected (216 were rejected without external review). Of the 769 authors of rejected manuscripts who were sent e-mails, 348 (45.3%) responded. Of the 769 rejected manuscripts, 308 (40.1%) were published in other journals; 461 (59.9%) had not yet been published. The median interval from the initial rejection by CMJ to publication in another journal was 307 days (interquartile range [IQR], 169.5-472.5); 199 (64.6%) were published in specialty journals and 109 (35.4%) in general medical journals (including 40 papers that were resubmitted after rejection, reconsidered, and published in CMJ). Comparisons of the factors associated with published and unpublished manuscripts are shown in Table 33. Study design was associated with publication after initial rejection, and rejection without external review was associated with a lower likelihood of subsequent publication. Sample size and funding were not associated with subsequent publication in another journal.

Table 33. Comparison of the Published and Unpublished Manuscripts After Rejection by the Chinese Medical Journal, N=769

Conclusions

Previous studies of leading international medical journals have shown that 76% to 86% of rejected manuscripts are eventually published in other journals. A smaller proportion of manuscripts rejected by CMJ may eventually be published than of those rejected by these other journals. Study design and prior rejection without peer review are associated with eventual publication among manuscripts initially rejected by CMJ.

Chinese Medical Journal, Chinese Medical Association, Beijing, China, scqian@126.com

Conflict of Interest Disclosures

None reported.

Back to Top

An Analysis of the Completeness of Claims Made to Emphasize the Importance of a Research Study

Medell K. Briggs-Malonson, Sara Crager, Richelle J. Cooper, David L. Schriger

Objective

Authors often justify the importance of their research through claims made in the Introduction. In our experience, some of these claims are hyperbolic. A claim can refer to incidence, prevalence, or cost; however, these metrics require a specific numerator and denominator to be meaningful. For example, the absence of a denominator in “there were 7,000 dog bites in California in 2010” invokes an inflated sense of importance compared with the properly structured “1 in 50,000 Californians was bitten in 2010.” In this study, we determined the number, types, and reporting completeness of claims made in the Introductions of research papers. We defined a claim as any implied fact used by the author to emphasize the significance of the study.
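
The arithmetic behind a complete claim is trivial but easy to omit; the hypothetical helper below (made-up numbers, not from the study) shows how a bare count converts into a per-population figure once a denominator is supplied.

```python
# Hypothetical illustration: the same count reads very differently once it is
# expressed per population. Numbers are invented for illustration only.
def one_in_n(count: int, population: int) -> str:
    """Express a count as '1 in N' of the population."""
    return f"1 in {round(population / count):,}"

count, population = 7_400, 38_000_000          # hypothetical numerator and denominator
print(f"{count:,} events")                      # numerator alone: sounds large
print(f"{one_in_n(count, population)} people affected")   # complete claim
```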

Design

We randomly sampled 5 original research papers from the top 6 ISI general medical journals and the top ISI journal in 14 randomly selected medical specialties and subspecialties published in 2011-2012. We recorded the number and type of claims and whether the numerator and denominator, numerator alone, or neither, was presented for each claim on a standardized abstraction form we developed, piloted, revised, and tested for interrater reliability.

Results

Fifty-seven of the 100 articles contained 106 claims based on prevalence (46%), incidence (37%), or cost (10%) (Figure 11). Reporting was complete in 20% of the claims. Examples of hyperbolic or incomplete claims include “The past decade has witnessed a remarkable global surge in the incidence and severity of Clostridium difficile infection” and “Neck pain results in millions of ambulatory health care visits each year and increasing health care costs.”

Figure 11. Articles With Claims of Importance

Conclusions

The majority of claims are incomplete and at times result in hyperbole. The absence of standardization makes it difficult for readers to judge the relative importance of research efforts. Complete reporting and standardization of denominators could be realized by adding these requirements to existing reporting guidelines, although the standardization would likely have to be discipline specific.

University of California, Los Angeles, CA, USA, schriger@ucla.edu

Conflict of Interest Disclosures

None reported.

Funding/Support

David Schriger’s time is supported in part by an unrestricted grant from the Korein Foundation, which had no influence on the decision to conduct this research project or on its execution.

Back to Top

Trends in the Quality of Data Graphs Over Time and the Role of Peer Review and Editing in Those Trends

Marco R. Fossati-Bellani,1,2 Michael E. McMullen,1,2 Brian Raffetto,2 Claire Drolen,1,3 Richelle J. Cooper,2 David L. Schriger2

Objectives

While there is evidence that prepublication peer review and editing improve scientific papers, that evidence is not overwhelming. We sought to determine whether the quality of data graphs in Annals of Emergency Medicine has improved over time and whether improvements, if any, can be linked to changes in the journal’s editorial processes and to specific reviewer and editor comments made during review.

Design

We randomly selected 60 papers each from 2006 (before there was formal graph review), 2009 (graph review after acceptance), and 2012 (graph review at “revise and resubmit”), using the journal’s electronic database, which contains submitted manuscripts and all editorial comments. We independently scored a random sample of 5 graphs from papers with 6 or more data graphs, and all graphs in the other papers, using a standardized form to record the type of graph (eg, scatterplot) and its attributes (75 items). A rater, blinded to each graph’s score, identified all editorial comments about graphs and parsed them by topic and by the role of the person who made the comment. We compared each graph’s score in submitted and published manuscripts and determined how often changes were made in response to editorial comments.

Results

There was a median of 2, 2, and 3 graphs per article in both submitted and published papers in 2006, 2009, and 2012, respectively (Table 34). In 2012, 38 of the 60 papers’ editorial packets contained comments about figures. The 266 comments were made by regular reviewers (10%), paper editors (17%), methodology reviewers (24%), and the graphics editor (50%). How these comments relate to changes in the figures will be reported in the poster.

Table 34. Number and Characteristics of Graphs in 3 Time Periods

Abbreviations: S, submitted manuscript; P, published paper.

Conclusions

There is little evidence that the type or quality of graphs in submitted manuscripts is improving over time. However, for some measures, including the data density index, it appears that the published graphs are better than those originally submitted, and the magnitude of these improvements is higher in 2009 and 2012 than in 2006. While this suggests that the dedicated graph editor has improved figure quality, improvements were not seen across all metrics, and there are still deficiencies in published graphs.

1Emergency Medicine Center, University of California, Los Angeles, Los Angeles, CA, USA; 2University of California Los Angeles, Los Angeles, CA, USA, schriger@ucla.edu; 3Amherst College, Amherst, MA, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

David Schriger’s time is supported in part by an unrestricted grant from the Korein Foundation, which had no influence on the decision to conduct this research project or on its execution.

Back to Top

Is the Relationship Among Outcome Variables Shown in Randomized Trials?

Carter C. Wystrach,1 Ana Lopez-O’Sullivan,1 Richelle J. Cooper,1 David L. Schriger,1 Douglas G. Altman2

Objective

Randomized trials often have more than 1 primary outcome and almost always have secondary outcomes. Comparison of outcomes between study arms is the primary focus of randomized controlled trials (RCTs), but there are times when the relation among outcomes is important. For example, if an asthma trial measured an intermediate outcome (FEV1) and a patient-centered outcome (days missed from school), one might want to know the relationship between them. We examined a random sample of RCTs in high-impact journals to determine how often and under what circumstances they show the relation among outcomes and how often, when using composite outcomes, they show the relations among the components.

Design

We selected 6 general and 7 major specialty journals and randomly selected 4 medical and 3 surgical subspecialty journals with the highest 2011 Impact Factors that published original clinical research. We identified every RCT in 2011 and 2012 issues by searching MEDLINE for publication type RCT or a title with “random*,” and randomly selected 10 articles per journal. For each article we recorded the number of outcomes, the number and characteristics of any analysis relating outcomes, and the use and reporting of composite outcomes. We assessed interrater reliability by having our 2 scorers independently double score 40 papers.

Results

Paired raters achieved 100% agreement on all items except the number of harms (88%). After clarifying the scoring rule, they rescored this item, achieving 100% agreement. The majority of RCTs had multiple primary and secondary outcomes (Table 35). Only 16 studies had a single primary outcome and no secondary or harm outcomes. Thus, outcomes could have been related in 92% of studies, but such relations were reported in only 2 (1%). Thirty-three (17%) investigations measured a composite outcome, 32 of which showed data for each component. None, however, showed cross-tabulation of the components.

Table 35. Outcome Reporting in 200 Randomized Trials

Conclusions

Readers are rarely shown the relation between outcomes. Mandatory posting of datasets or requirements for detailed appendices would allow readers to see these cross-tabulations, helping future investigators know which outcomes are redundant, which provide unique information, and which are most sensitive to change in certain patient subgroups. At present such information is being lost.

1University of California Los Angeles, Los Angeles, CA, USA, schriger@ucla.edu; 2Centre for Statistics in Medicine, University of Oxford, Oxford, UK

Conflict of Interest Disclosures

None reported.

Funding/Support

David Schriger’s time is supported in part by an unrestricted grant from the Korein Foundation and Douglas Altman’s time by a programme grant from Cancer Research UK; neither funder had any influence on the decision to conduct this research project or on its execution.

Reporting Guidelines

Back to Top

Development of a Quality Score for the Assessment of Nonsystematic Review Articles (SANRA)

Christopher Baethge,1,2 Stephan Mertens,1 Sandra Goldbeck-Wood3

Objective

While systematic reviews are the gold standard in summarizing research findings, the overwhelming majority of review articles in medicine are nonsystematic. This group of reviews is heterogeneous, with quality levels differing considerably. Therefore, we developed a rating instrument for the assessment of the quality of nonsystematic reviews.

Design

Development of a quality scale (Scale for the Assessment of Narrative Review Articles, SANRA) using 7 items regarding key aspects of a review: (1) importance of the topic for the readership of the journal, (2) scope and purpose of the manuscript, (3) description of the literature search, (4) presentation of evidence for statements made, (5) presentation of clinically meaningful effects, (6) open research questions, and (7) accessibility of the paper. All items are rated from 0 (failure to meet a low standard) to 2 (good or high standard). In a first study, 3 editors rated a sample of 10 randomly selected general neurology journal articles retrieved through a PubMed search. In a second study, using a revised version of SANRA, 12 consecutive review manuscripts submitted to a general medical journal were assessed by the same raters. Psychometric properties were calculated.

Results

The mean sum score was 7.6 (SD 3.3, range 1-13) in the first sample and 8.4 (3.7, 3-14) in the second. In both studies internal consistency of the scale was sufficient, as suggested by an average Cronbach’s alpha of 0.80 (first study) and 0.84 (second study; Table 36). Intraclass correlations (single measure) of the sum score were 0.68 and 0.76, indicating satisfactory agreement among raters (reliability). In the second sample, average item-total correlation ranged from 0.25 (item 7) to 0.80 (item 5), consistent with good item homogeneity. The raters reported the application of the scale to be practical, but the wording of some items was changed when suggested by psychometric measures or when the wording seemed ambiguous (eg, item 7).
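
For reference, Cronbach's alpha for a k-item scale can be computed directly from a manuscripts-by-items ratings matrix; the sketch below uses made-up ratings, not the SANRA study data.

```python
# Minimal sketch of Cronbach's alpha for a 7-item, 0-2 scale.
# The ratings matrix is invented for illustration only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = manuscripts, columns = the k scale items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

ratings = np.array([
    [2, 1, 2, 2, 1, 2, 1],
    [1, 1, 0, 1, 1, 1, 0],
    [2, 2, 2, 2, 1, 2, 2],
    [0, 1, 0, 1, 0, 1, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```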

Table 36. Reliability Results of 2 Studies Testing SANRA

Abbreviations: ICC, intraclass correlation (single measure); CI, confidence interval. In brackets: item number; both studies were carried out by the same three raters.

Conclusions

The scale’s feasibility, internal consistency, homogeneity, and interrater reliability were adequate. Further psychometric testing is necessary, in larger groups of review articles and by other raters. Validity studies are desirable. SANRA may assist authors, editors, reviewers, and researchers.

1Deutsches Arzteblatt and Deutsches Arzteblatt International, Cologne, Germany, baethge@aerzteblatt.de; 2Department of Psychiatry and Psychotherapy, University of Cologne Medical School, Cologne, Germany; 3Department of Obstetrics and Gynecology, Nordland Hospital, University of Tromso Medical School, Bodo, Norway

Conflict of Interest Disclosures

None reported.

Back to Top

SQUIRE Grows Up! Revising the SQUIRE Guidelines to Meet the Advances in the Science of Quality Improvement

Louise Davies,1,2 Greg Ogrinc,1,3 Paul B. Batalden, Frank Davidoff, David Stevens3

Objective

To begin the process of revising the SQUIRE (Standards for Quality Improvement Reporting Excellence) Guidelines by understanding authors’ and quality improvement practitioners’ experiences using the SQUIRE guidelines and writing quality improvement manuscripts. Since SQUIRE was developed, the literature of quality improvement has grown, and the field has enjoyed expanded acceptance across medicine. Our goal is to create revised guidelines that will reflect this growing influence and integration into health care.

Design

Evaluative design using focus groups and semistructured interviews with 29 end users of SQUIRE. Sampling of participants was purposive with a goal of achieving maximum variation in work setting, geographic location, health care specialty, manuscript writing experience, and quality improvement research experience.

Results

Three key themes emerged. First, SQUIRE is very useful in the planning stages of a quality improvement (QI) project. However, if the guidelines are followed meticulously during writing, they create long and unwieldy manuscripts. It is not clear to users which items of SQUIRE must be included in every manuscript and which items might be optional. Second, QI interventions are iterative; the current guidelines do not specify how or whether failed iterations—sometimes the most informative part of the work—should be reported. There was disagreement about the best answer to this question. Last, among persons less experienced in the conduct of QI work, there was frustration with what was perceived as highly complex guidelines; they requested tools or visuals to distinguish between “the work of” QI and “the study of” a QI project. Among those more experienced in writing about and doing QI work, the guidelines were seen as relatively clear but with some redundancies, which could be streamlined.

Conclusions

SQUIRE provides good guidance for planning QI, but can be complicated to use during writing, and there is a lack of explicit guidance on some key aspects of reporting. Based on the feedback from users, key goals of the SQUIRE revision will be to streamline, clarify what constitutes complete reporting of quality improvement research, and create a simple framework that beginners as well as experienced users can easily understand.

1White River Junction VA Medical Center, White River Junction, VT, USA; 2The Dartmouth Institute for Health Policy & Clinical Practice, Hanover, NH, USA, Louise.davies@dartmouth.edu; 3Geisel School of Medicine at Dartmouth, Hanover, NH, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

Both authors received funding to perform the research reported herein from the Robert Wood Johnson Foundation and The Health Foundation. The funders have no role in the research, nor have they placed restrictions on the findings or publications.

Back to Top

Consensus-Based Recommendations for Investigating Clinical Heterogeneity in Systematic Reviews

Joel J. Gagnier,1,2 Hal Morgenstern,2 Douglas G. Altman,3 Jesse Berlin,4 Stephanie Chang,5 Peter McCulloch,6 Xin Sun,7 David Moher,8 for the Ann Arbor Clinical Heterogeneity Consensus Group

Objective

Critics of systematic reviews have argued that these documents often fail to inform clinical decision making because their results are far too general, such that findings cannot be applied to individual patients. While there is some consensus on methods for investigating statistical and methodological heterogeneity, little attention has been paid to clinical aspects of heterogeneity. Clinical heterogeneity can be defined as variability among studies in the participants, the types or timing of outcome measurements, and the intervention characteristics. The objective of this project was to develop recommendations for investigating clinical heterogeneity in systematic reviews.

Design

We used a modified Delphi technique with three phases: (1) premeeting item generation; (2) face-to-face consensus meeting in the form of a modified Delphi process; and (3) postmeeting feedback. We identified and invited potential participants with expertise in systematic review methodology, systematic review reporting, or statistical aspects of meta-analyses, or those who published papers on clinical heterogeneity.

Results

Between April and June 2011, we conducted telephone calls with participants. In June 2011 we held the face-to-face focus group meeting in Ann Arbor, Michigan, with an international group of 18 participants with varying expertise, including clinical research methodology, epidemiology, statistics, surgery, and social science. First, we agreed on a definition of clinical heterogeneity: Variations in the treatment effect that are due to differences in clinically related characteristics. Next, we discussed and generated recommendations in the following categories related to investigating clinical heterogeneity: the systematic review team, planning investigations, rationale for choice of variables, types of clinical variables, the role of statistical heterogeneity, the use of plotting and visual aids, dealing with outlier studies, the number of investigations or variables, how to obtain data for these variables, the role of the best evidence synthesis, types of statistical methods, the interpretation of findings, and reporting. We will describe all recommendations in detail at the Congress.

Conclusions

Clinical heterogeneity is common in systematic reviews. Our recommendations can help guide systematic reviewers in conducting valid and reliable investigations of clinical heterogeneity. Findings of these investigations may allow for increased applicability of findings of systematic reviews to the management of individual patients.

1Department of Orthopaedic Surgery, University of Michigan, Ann Arbor, MI, USA; 2Department of Epidemiology, School of Public Health, University of Michigan, Ann Arbor, MI, USA, jgagnier@umich.edu; 3Center for Statistics in Medicine, University of Oxford, Oxford, UK; 4Research and Development, Johnson & Johnson Pharmaceutical, Philadelphia, PA, USA; 5Agency for Healthcare Research and Quality, Rockville, MD, USA; 6Centre for Evidence Based Medicine, University of Oxford, Oxford, UK; 7Kaiser Permanente Center for Health Research and Oregon Evidence-Based Practice Center, Oregon Health Sciences University, Portland, OR, USA; 8Clinical Epidemiology Program, Ottawa Hospital Research Institute; Department of Epidemiology, University of Ottawa, Ottawa, ON, Canada

Conflict of Interest Disclosures

None reported.

Funding/Support

University of Michigan, Department of Orthopaedic Surgery. The university had no role in this project.

Back to Top

Proposal for a Structured Abstract for Case Reports: An Analytical Study

Devendra Mishra,1 Piyush Gupta,1, 2 Bhawna Dhingra,3 Pooja Dewan4

Objective

To develop and test a structured abstract for reporting case reports in biomedical journals.

Design

The study was conducted from September 2011 to May 2013. After discussions among 5 editors of Indian Pediatrics, we developed a 5-point structured format for reporting abstracts of case reports (version 1). Using it, structured abstracts were prepared for 10 randomly selected case reports previously published in Indian Pediatrics and assessed independently by two internal reviewers (editorial board members of Indian Pediatrics). They commented on the ability of the abstract to bring out the salient features of each case report and suggested modifications. The comments were discussed among the 5 editors, and the format was modified (version 2). The 10 abstracts were then revised according to version 2. In addition, to test the structure more broadly and ensure that it would be effective across different types of case reports in different specialties, an additional 10 structured abstracts were prepared as per version 2 from published case reports (5 each from specialty journals of dermatology and orthopedics). External review was solicited from 1 editor each of 4 other journals (a general medical journal and journals of orthopedics, dermatology, and pediatrics). These indexed, peer-reviewed journals published from India publish case reports on a regular basis. Each of the 4 external reviewers was sent 5 case reports (related to their specialty) along with their structured abstracts. They were asked to provide feedback on important points not covered in each abstract, the need for modification of the format (open-ended), and its applicability to their journal (5-point Likert scale). Based on their responses, a final format for the structured abstract for case reports was prepared (version 3).

Results

One additional heading suggested by both internal reviewers was incorporated in version 2; one further heading suggested by each reviewer was similar to a preexisting heading. External reviewers did not suggest any additional headings; they advised reverting to the 5-point structure and rearranging the headings. The final version 3 is shown in Table 37.

Table 37. Proposed 5-Point Structured Abstract for Case Reports

Conclusions

All reviewers felt that the structured abstract could be applied to their journals with some modifications. The final version would be introduced in Indian Pediatrics with effect from 2014.

1Indian Pediatrics, Department of Pediatrics, MAM College, Delhi, India, drdmishra@gmail.com; 2University College of Medical Sciences, Delhi, India; 3Department of Pediatrics, Lady Harding Medical College, Delhi, India; 4Department of Pediatrics, University College of Medical Sciences, Delhi, India

Conflict of Interest Disclosures

None reported.

Back to Top

Implementation of Adherence to the CONSORT Guidelines by the American Journal of Orthodontics and Dentofacial Orthopedics

Nikolaos Pandis,1 Larissa Shamseer,2 Vincent G. Kokich,3 David Moher2

Objective

(1) To describe the outcomes of trial submissions to the American Journal of Orthodontics and Dentofacial Orthopedics (AJODO) under its CONSORT implementation strategy and the types of reporting deficiencies identified. (2) To compare the completeness of reporting of trials published by AJODO before and after the CONSORT implementation strategy.

Design

The AJODO CONSORT implementation strategy consists of the following: submitted trial manuscripts are checked by an associate editor to determine compliance with the CONSORT 2010 checklist. If reporting deficiencies exist, editors identify which items need improvement and ask authors to revise. Revised manuscripts are then reassessed, and additional changes are requested if necessary. Manuscripts are then sent for peer review and are accepted (with or without revisions) or rejected following review. Editorial decisions made during the postimplementation period are described, along with a summary of the items commonly missing or incomplete in submitted reports. Completeness of reporting of CONSORT checklist items in the periods before (July 2007-July 2009) and after (July 2011-July 2013) the AJODO CONSORT implementation strategy will be compared.

Results

Twenty trials were published in AJODO between July 2007 and July 2009. Of the 42 trials submitted between July 2011 and July 2013, 13 were rejected before assessment and 29 were assessed for CONSORT compliance, of which 28 continued on to peer review; 12 were rejected following review, 7 were accepted, and 9 are still undergoing assessment. All 7 accepted trials completely report all CONSORT 2010 items. Prior to implementation, less than 50% of trials completely reported the following items: background and objectives, sequence generation, allocation concealment, allocation implementation, statistical methods, recruitment end dates, baseline data, effect estimates and confidence intervals, ancillary analyses, harms, trial limitations, generalizability, and interpretation. All trials accepted for publication following the CONSORT implementation strategy completely reported all items of the CONSORT checklist.

Conclusions

Trials published using the AJODO CONSORT implementation strategy during the editorial process completely report all CONSORT 2010 items. Other biomedical journals may want to consider adopting a similar strategy to improve the reporting and usefulness of randomized controlled trials they publish.

1University of Bern, Department of Orthodontics and Dentofacial Orthopedics, Bern, Switzerland, and AJODO, npandis@yahoo.com; 2Ottawa Hospital Research Institute and University of Ottawa, Ottawa, ON, Canada; 3University of Washington, Seattle, WA, USA, and AJODO

Conflict of Interest Disclosures

None reported.

Back to Top

Update on the Endorsement of CONSORT by High Impact Factor Journals: A Survey of Journal Editors and Instructions to Authors

Larissa Shamseer,1,2 Justin Theilman,1,2 Lucy Turner,1 Sally Hopewell,3 Douglas G. Altman,3 Kenneth F. Schulz,4 David Moher1,2

Objective

This study provides an update on 2 earlier studies (Altman 2005, Hopewell 2008) assessing the extent to which high-impact medical journals incorporate the Consolidated Standards of Reporting Trials (CONSORT) guidelines into their journal and editorial processes.

Design

Two cross-sectional studies are being completed. Study 1: We accessed the online versions of the Instructions to Authors for 2011 for the 5 highest Impact Factor (IF) journals in each of 34 medical specialties and for the top 15 IF general medical journals. We assessed whether journals endorsed CONSORT and, if so, the extent of endorsement (ie, required vs recommended) of the CONSORT checklist, flow diagram, and/or other CONSORT materials (eg, website, Statement paper, Elaboration document) for authors submitting trial manuscripts for publication. Information about CONSORT extensions was also collected. Descriptive statistics were calculated, and the differences in the proportion of endorsers and the extent of endorsement compared with previous years (2003 and 2007) will be calculated. Study 2: We plan to run a previously administered survey among journal editors to gauge how CONSORT is used within the editorial process. We anticipate that the survey results will be available in September 2013.

Results

As shown in Table 38, 169 journals were included in the sample, of which 96 (57%) endorse CONSORT (compared with 38% in 2003 and 22% in 2007). While 42 journals “require” the use of CONSORT and 50 journals “recommend” its use, only 8 journals explicitly indicate that authors should “submit” a CONSORT checklist. Sixty-eight journals refer to the CONSORT checklist, 55 refer to the flow diagram, and 77 link to the CONSORT website.

Table 38. CONSORT Mentioned in Journals Online Instructions to Authors

Conclusions

Undoubtedly, journals want the research they publish to be of high quality and high impact. Following earlier recommendations, it would be beneficial if journals adopted a uniform message regarding their expectations for reporting and the use of guidelines by authors and peer reviewers. Use of CONSORT during the editorial process may help to improve the completeness of reporting of randomized trials and facilitate peer review.

1Ottawa Hospital Research Institute, Ottawa, ON, Canada, lshamseer@ohri.ca; 2University of Ottawa, Ottawa, ON, Canada; 3Center for Statistics in Medicine, University of Oxford, Oxford, UK; 4FHI 360, Durham, NC, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

This project was funded by a team grant jointly awarded by the Canadian Institutes of Health Research and the Canadian Foundation for Innovation. The funders had no role in the design, collection, analysis, and interpretation of the data; in the writing of the manuscript or decision to submit the abstract to the Peer Review Congress.

Back to Top

Does Journal Endorsement of Reporting Guidelines Impact the Completeness of Reporting of Health Research? A Systematic Review

Adrienne Stevens,1 Larissa Shamseer,1,2 Erica Weinstein,3 Fatemeh Yazdi,1 Lucy Turner,1 Justin Thielman,1,2 Douglas G. Altman,4 Allison Hirst,5 John Hoey,6 Anita Palepu,7 Iveta Simera,4 Kenneth F. Schulz,8 David Moher1,2

Objective

To assess whether endorsement of reporting guidelines (RGs) by journals impacts the completeness of reporting health research studies.

Design

We conducted a systematic review to evaluate the completeness of reporting of published studies by comparing studies published (1) before and after journal endorsement and (2) in endorsing and nonendorsing journals for a given RG. RGs providing a minimum set of items to guide authors in reporting a specific type of research, developed with explicit methodology and using a consensus process, were identified from an earlier systematic review and from the EQUATOR Network’s reporting guidelines library (to June 2011). We excluded the main CONSORT guideline from our assessment because of a recently published systematic review. MEDLINE, EMBASE, the Cochrane Methodology Register, and Scopus were searched (October 2011) for evaluations of RGs. Full-text articles were screened independently by 2 reviewers. One person extracted data, with 10% verification for study characteristics and complete verification for internal validity and outcome data. For a given RG, analyses were conducted by individual checklist items and their total sum, where applicable. Studies were analyzed (and pooled, where possible) by risk ratio (RR), mean difference (MD), or standardized mean difference (SMD) with 99% confidence intervals, using random-effects models.
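
As a rough illustration of the pooling described above, the sketch below applies a DerSimonian-Laird random-effects model to risk ratios from three hypothetical studies and reports a 99% confidence interval; the counts are invented, and the review's actual analyses may have differed in detail.

```python
# Hypothetical sketch: DerSimonian-Laird random-effects pooling of risk ratios
# with a 99% CI. Event counts are invented for illustration only.
import numpy as np
from scipy import stats

# events / totals in intervention and control arms of three hypothetical studies
ei, ni = np.array([12, 30, 8]),  np.array([100, 150, 60])
ec, nc = np.array([20, 45, 15]), np.array([100, 150, 60])

log_rr = np.log((ei / ni) / (ec / nc))
var = 1/ei - 1/ni + 1/ec - 1/nc                 # variance of each log risk ratio

w = 1 / var                                     # fixed-effect weights
q = np.sum(w * (log_rr - np.sum(w * log_rr) / w.sum()) ** 2)
tau2 = max(0.0, (q - (len(w) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1 / (var + tau2)                         # random-effects weights
pooled = np.sum(w_re * log_rr) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
z = stats.norm.ppf(0.995)                       # 99% confidence interval

print(f"RR = {np.exp(pooled):.2f} "
      f"(99% CI {np.exp(pooled - z*se):.2f} to {np.exp(pooled + z*se):.2f})")
```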

Results

One hundred one relevant RGs were identified. Of 15,240 records retrieved from the literature search for evaluations, 20 evaluations assessing 7 RGs were included. From the relevant studies assessed in those evaluations, we list the journals endorsing the 7 RGs in Table 39. Only 10 of the 20 included evaluations could be analyzed quantitatively, and they assessed 6 RGs: the BMJ economic checklist, CONSORT for harms, QUOROM, STARD, STRICTA, and STROBE. Most RG items were assessed by a single evaluation (Table 39). Across analyses, relatively few studies within evaluations were assessed. Because there were too few evaluations, we could not conduct the planned subgroup and sensitivity analyses: extent of journal endorsement; “official” vs “unofficial” guideline extensions; and timing of endorsement (ie, studies published at least 6 months after endorsement).

Table 39. Endorsing Journals and Summary Analyses for 7 Evaluated Health Research Reporting Guidelines*

Abbreviations: end, endorsing journals; n/a, not applicable; non-end, nonendorsing journals; NS, statistically nonsignificant.

*Grayed text refers to comparisons, analyses, and journals not included in the quantitative analyses.

†Across items.

‡Not statistically estimable for 1 item because of zero events in each arm (one evaluation only).

Conclusions

Only 7 of 101 RGs have been evaluated, and only by a few evaluations assessing relatively few publications. Insufficient evidence exists to determine the impact of journal endorsement on completeness of reporting.

1Ottawa Hospital Research Institute, Ottawa, ON, Canada, adstevens@ohri.ca; 2University of Ottawa, Ottawa, ON, Canada; 3Albert Einstein College of Medicine, Bronx, NY, USA; 4Centre for Statistics in Medicine, University of Oxford, Oxford, UK; 5Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK; 6Queen’s University, Kingston, ON, Canada; 7St Paul’s Hospital, Vancouver, BC, Canada; 8FHI 360, Durham, NC, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

This project was funded by a Knowledge Synthesis Grant from the Canadian Institutes of Health Research (FRN 111750). The funder had no role in the design and conduct of the review; in the collection, management, analysis, and interpretation of the data; in the writing, review, or the approval of the abstract; or in the decision to submit the abstract to the Peer Review Congress.

Back to Top

Including Equity in Systematic Reviews: Using the PRISMA-Equity Extension Reporting Guidelines for Systematic Reviews With a Focus on Equity

Vivian Welch,1 Mark Petticrew,2 Peter Tugwell,3 David A. Moher,4 Jennifer O’Neill,3 Elizabeth Waters,5 Howard White6

Objective

We will present the detailed Explanation and Elaboration methods paper for a new reporting guideline to improve the consideration of health equity in systematic reviews. Health equity refers to the absence of avoidable and unfair differences in health outcomes. This reporting guideline is of interest to peer reviewers and journal editors because it can be used to improve transparency and reporting of health equity in equity-focused systematic reviews, defined as those that may have implications for disadvantaged groups, target disadvantaged populations, or assess effects on social gradients in health. This reporting guideline was developed using an evidence-driven and consensus-based process.

Design

We identified deficiencies in reporting of health equity considerations by conducting a systematic review. We then identified additional items or modifications that would address these deficiencies in the existing guideline for Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA). We used a Delphi process to gather input from a range of users with clinical epidemiology, systematic review, and policy expertise (n=323), then held a consensus meeting of international opinion leaders representing researchers, journal editors, funders, and policy makers and used an interactive process to achieve consensus on each reporting item. The importance and relevance of each item were discussed in depth, and detailed minutes were recorded. We subsequently conducted focused literature searches of systematic reviews to identify examples of good reporting for each item.

Results

The PRISMA-Equity extension consists of 20 items and was published in PLOS Medicine in October 2012. The Explanation and Elaboration guideline has not yet been published and provides examples for each item and suggestions for methodological approaches relevant to each item. For example, the assessment of effects in different disadvantaged populations may be assessed by subgroup analyses, meta-regression, or other approaches. We will present the evidence supporting each reporting item and examples of good reporting from published systematic reviews.

Conclusions

This equity-focused reporting guideline is relevant to peer reviewers and journal editors because it aims to improve transparency and reporting. We will monitor uptake and implementation of this reporting guideline by assessing its inclusion in journal author guidelines and editorial peer review policies.

1Bruyère Research Institute, Ottawa, ON, Canada, vivian.welch@uottawa.ca; 2London School of Hygiene and Tropical Medicine, London, UK; 3University of Ottawa, Ottawa ON, Canada; 4Ottawa Hospital Research Institute, Ottawa, ON, Canada; 5University of Melbourne, Melbourne, Australia; 6International Initiative for Impact Evaluation, Washington, DC, USA

Conflict of Interest Disclosures

None reported.

Funding/Support

The development of this abstract was funded in part by the Canadian Institutes of Health Research, which funds the Campbell and Cochrane Equity Methods Group, the Canada Graduate Scholarship for Vivian Welch, and the Canada Research Chair Program for Peter Tugwell.

Back to Top

Implementation of Reporting Guidelines and the Quality of Reporting in Chinese and Non-Chinese Surgery Journals

DU Liang,1 LIU Xuemei,2 ZHU Ming,3 JIANG Qian4

Objective

To identify use of reporting guidelines and the quality of reporting in Chinese and non-Chinese surgery journals.

Design

In a cross-sectional study conducted from June 2012 to February 2013, we searched the published Instructions for Authors of 199 surgery journals listed in Web of Science in 2011 and of 40 leading Chinese surgery journals ranked by the Institute of Scientific and Technical Information of China in 2011 to assess whether they included reporting guidelines. Using Web of Science and the Chinese Wanfang database, we also assessed the reporting quality of observational studies, randomized controlled trials (RCTs), and systematic reviews (SRs) published in these journals in 2012.

Results

Of the 199 journals in Web of Science, Instructions for Authors were available for 130 (65.3%). Of these journals’ Instructions for Authors, the numbers that included information on each item were as follows: ICMJE/WAME (n=50, 38.4%); publication ethics (n=45, 34.6%); conflicts of interest (n=40, 30.8%); CONSORT (n=36, 27.7%); trial registration (n=34, 26.1%); statistical guidelines (n=18, 13.8%); PRISMA/MOOSE (n=14, 10.8%); STROBE (n=10, 7.7%); levels of evidence (n=10, 7.7%); EQUATOR (n=3, 2.3%); STARD (n=2, 1.5%); and PROSPERO (n=1, 0.8%). Instructions for Authors were available for all 40 Chinese journals, but only 1 mentioned ICMJE/WAME, CONSORT, or PRISMA/MOOSE. In a preliminary analysis of more than 1,200 observational studies, RCTs, and SRs published in these surgery journals, multiple reporting problems were found. In both Chinese and non-Chinese journals, observational studies rarely reported the study design, sample size calculations, or the number of patients included at each stage of the study. Discussion of bias, study limitations, and special characteristics of the patients was especially inadequate in the Chinese journals.

Conclusion

Compliance with reporting guidelines is inconsistent in leading surgery journals and is worse in Chinese surgery journals.

1Chinese Journal of Evidence-based Medicine, West China Periodicals Press, West China Hospital of Sichuan University, Chengdu, P.R. China; 2Chinese Journal of Clinical Thoracic and Cardiovascular Surgery, West China Periodicals Press, West China Hospital of Sichuan University, Chengdu, P.R. China, l_xm20@263.net, cjebm2002@yahoo.com.cn; 3West China Medicine, West China Periodicals Press, West China Hospital of Sichuan University, Chengdu, P.R.China; 4Chinese Journal of Evidence-based Medicine, West China Periodicals Press, West China Hospital of Sichuan University, Chengdu, P.R. China

Conflict of Interest Disclosures

None reported.

Trial Registration

Back to Top

Surgical Trials and Trial Registers: A Cross-sectional Study of Randomized Controlled Trials Published in Surgery Journals Requiring Trial Registration in Their Author Instructions

Julia L. S. Hardt,1 Maria-Inti Metzendorf,2 Joerg J. Meerpohl3

Objective

Trial registration and results reporting are essential to increase transparency in clinical research. Although they have been strongly promoted in recent years, it remains unclear whether they have successfully been implemented in surgery. In this cross-sectional study, we assessed whether randomized controlled trials (RCTs) published in surgery journals requiring trial registration in their author instructions were indeed registered and whether study results of registered RCTs were submitted to the trial register and were thus publicly available.

Design

The 10 highest ranked surgery journals (Journal Citation Reports 2011) requiring trial registration were chosen, and a MEDLINE search for RCTs published in the included journals between June 1 and December 31, 2012, was conducted. Trials recruiting participants before 2004 were excluded because the International Committee of Medical Journal Editors first proposed trial registration in 2004. The International Clinical Trials Registry Platform (ICTRP) was then searched to assess whether the identified RCTs were indeed registered and whether results for registered RCTs were available in the registry.

Results

Our search retrieved 588 citations; 460 clearly irrelevant references were excluded, and a further 25 were excluded after full-text screening. A total of 103 RCTs were finally included (Table 40). Eighty-five of these RCTs (83%) could be identified in the ICTRP. Forty-nine RCTs registered in ClinicalTrials.gov were clearly obligated to submit results (>12 months since completion date), but results had been submitted to the results database for only 7 (14%) of them.

Table 40. Registration Status of Trials Published in Top Ranked Surgery Journals

Conclusions

Although still not fully implemented, trial registration in surgery has gained momentum. Submission of study results to ClinicalTrials.gov, however, remains poor.

1Department of Surgery, University Medical Center Mannheim, University of Heidelberg, Mannheim, Germany, julia.hardt@googlemail.com; 2Library, University Medical Center Mannheim, Mannheim, Germany; 3German Cochrane Centre, Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Freiburg, Germany

Conflict of Interest Disclosures

None reported.
