Thursday, September 18
New Research on Authorship and Contributorship
Professors Responsible for Increase in Authorship
Joost PH Drenth, Department of Medicine
This survey was conducted to assess the change over a 10-year period in the number and profile of authors of original articles published in the British Medical Journal (BMJ).
A comparative descriptive analysis of the number and appointment of authors of articles published in the BMJ. The specific appointment, order, and number of authors for each
original article published in volumes 290 (1985), 300 (1990), and 310 (1995) were examined. Seven categories containing hierarchically similar appointments were distinguished: I (professor);
II (head); III (consultant); IV (senior registrar); V (lecturer/registrar); VI (medical student); and VII (house officer).
The number of original articles dropped from 238 (1985) to 125 (1995). The mean (95% confidence interval) number of authors per article was 3.92 (3.67-4.17) in 1985,
3.87 (3.62-4.13) in 1990, and 4.46 (4.10-4.82) in 1995. Most authors belonged to category III, but its proportion dropped over the decade (1985-1995) from 26.6% to 22.5%, while category I
grew from 13.1% to 20.2%. With regard to first authorship, category I almost doubled in 10 years, from 8.9% to 16.8%, whereas category V decreased from 35.3% to 25.6%. Among last authors,
category I grew from 19.7% (1985) to 29% (1995), whilst category III decreased from 39.1% (1985) to 25.8% (1995).
Over the last 10 years the number of BMJ authors has increased. A substantial fraction of the increase can be attributed to the rise of category I
authorship (authors who are professors).
Canisius Wilhelmina Hospital, 6532 SZ Nijmegen, The Netherlands
Authorship Policies in US Medical Schools: Report of a Survey
Anne Hudson Jones, Institute for the Medical Humanities
To determine how many US medical schools
a) have adopted an authorship policy; b) are now in the process of developing an authorship policy; or c) are not planning to develop
an authorship policy, and why.
A letter and an anonymous questionnaire were mailed to the deans of the 125 US medical schools identified by the Association of American Medical Colleges (AAMC).
Questionnaires were returned from 119 medical schools (95%). Twenty-five (21%) have adopted an authorship policy; 11 (9%) are in the process of developing a
policy; 77 (65%) do not have a policy and are not developing one. Six respondents (5%) did not know whether their schools had discussed developing a policy. Among the reasons given
for not developing an authorship policy were that a policy is not needed (25%); that a policy can be established only at the departmental level because of differing disciplinary
traditions (3%); and that the faculty could not agree on a policy (3%).
Editors of biomedical journals have led the way in developing specific criteria for responsible authorship. But the rewards for authorship (salary increases, promotion, tenure)
are usually conferred by institutions, and these same institutions must mediate disputes about authorship and allegations of scientific misconduct. Thus, it is encouraging that US medical schools
have begun to address issues of authorship and to establish guidelines for their faculties. The experiences of medical schools that have developed authorship policies may be of considerable benefit to schools that have not.
The University of Texas Medical Branch, 301 University Blvd, Galveston, TX 77555-1311, USA
Research on Authorship
Wendela Hoen, Henk Walvoorth, and John Overbeke
To investigate what criteria are used by authors of articles in the Dutch Journal of Medicine (Nederlands Tijdschrift voor Geneeskunde, NTvG) to define authorship and also the criteria according to which authors were listed.
Descriptive study; survey of 450 authors of original articles published in 1995 in NTvG with 3 or more authors. This survey contained questions concerning each author’s contribution to the article (eg, study design, material, practice, statistics, and writing); questions on the criteria for authorship developed by the International Committee of Medical Journal Editors (ICMJE); and open-structured questions on how they determined who was to be mentioned as an author of their article and how order of authorship was determined.
In 1995, 115 original articles were published with 3 or more authors. The 450 authors of these manuscripts were surveyed; 362 authors returned the survey (overall response rate, 80.4%); 352 of the surveys could be analyzed. These questionnaires were returned by 92 first authors, 83 last authors, and 117 authors in between. The questionnaire consisted of 23 items, of which 7 were related to the ICMJE criteria. The 5 most frequently positively answered questions all concerned ICMJE criteria. Critical reading was done by 86.1% of the authors, approval of the version to be published by 84.7%, design of the study by 74.7%, conception of the study by 64.2%, and revision by 63.4%. No contribution at all was made by 1.1%. The discrepancy between each respondent’s own answers and those of the coauthors was calculated and showed a Gaussian distribution. On average, respondents claimed only 2 points more for themselves than their coauthors rated for them on a scale from +22 to -22. Sixty-three percent of the respondents met all 3 ICMJE criteria for authorship; 93% met 2 of the 3 criteria; and 57% of the respondents did not know about the criteria. The criteria mentioned for the order of the authors were inconsistent and differed from group to group and from faculty to faculty. The majority of respondents regarded this as a major problem.
There is much inconsistency in criteria for authorship and even more in guidelines for the order of authors. The ICMJE authorship criteria are reasonably well known to and met by many authors, but they are of no help in determining order of authorship. Clearer and more practical guidelines should be developed.
Nederlands Tijdschrift voor Geneeskunde, PO Box 7591, 1070 AZ Amsterdam, The Netherlands
Honorary Authors, Ghost Authors, and Medical Writers in Peer-Reviewed Medical Journals
Annette Flanagin,1 Stephanie Phillips,2 Phil Fontanarosa,1 Lisa Carey,3 Brian P Pace,1 George D Lundberg,1 and Drummond Rennie1
To determine the prevalence of individuals named as authors of peer-reviewed medical journal articles who have not made substantial contributions to the work (honorary authors); to determine the prevalence of unidentified authors (ghost authors) and unidentified medical writers in peer-reviewed articles; and to identify characteristics of articles and corresponding authors of articles associated with honorary authors, ghost authors, and unidentified medical writers.
A self-administered survey was mailed to 1180 corresponding authors of randomly selected articles published in 1996 in 3 peer-reviewed general medical journals (Annals of Internal Medicine, JAMA, New England Journal of Medicine) and 3 peer-reviewed journals that publish supplements (American Journal of Cardiology, American Journal of Medicine, American Journal of Obstetrics and Gynecology). Data collected regarding the corresponding author included age, profession, academic degree and rank, publication experience, perceived writing skill, responsibility/participation in preparing the article, receipt of financial compensation for the article, and attitudes toward professional medical writers and the need to acknowledge writing assistance. Data collected regarding the article included type (original research, review, opinion), journal, inclusion in journal supplement, and funding source. Data collected regarding authors included number of authors, responsibilities and contributions of each author, acknowledgment of writing assistance, and contributions of unidentified authors and writers. The study was designed with a power of 80% to detect a difference of at least 10% between the 2 groups of journals, with a 2-tailed alpha of 0.05. Responses were handled and data were analyzed in a manner to preserve respondent anonymity.
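The stated design (80% power to detect a difference of at least 10% between the 2 journal groups, with a 2-tailed alpha of 0.05) corresponds to a standard normal-approximation sample-size calculation for comparing two proportions. A minimal sketch follows; the baseline prevalences are hypothetical illustrations, not figures from this abstract:

```python
import math

def n_per_group(p1, p2):
    """Sample size per group to compare two proportions p1 and p2
    (normal approximation; two-tailed alpha = .05, power = 80%)."""
    z_alpha = 1.959964  # z for two-tailed alpha = .05
    z_beta = 0.841621   # z for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: prevalence of 15% in one journal group vs 25% in the other
print(n_per_group(0.15, 0.25))  # roughly 250 corresponding authors per group
```

As the sketch shows, the required sample depends on where the 10% difference sits, which is presumably why a survey of this size mailed well over 1,000 questionnaires.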
To date, 812 of 1180 surveys have been completed and returned (overall response rate, 69%). Main outcomes to be presented will include prevalence of honorary authors, ghost authors, and unidentified medical writers, as well as characteristics of authors, articles, and journals associated with honorary authors and ghost authors.
1JAMA, 515 N State St, Chicago, IL 60610, USA; 2Project House, Hackensack, NJ, USA; 3Marymount College, Manhattan, NY, USA
Financial Interest and Disclosure in Scientific Publications
Sheldon Krimsky, Department of Urban and Environmental Policy
New conflict of interest regulations issued by the US Public Health Service and the National Science Foundation became effective October 1, 1995. These regulations do not apply to scientific publications supported by federal funds. Since the mid-1980s, an increasing number of biomedical science journals have established their own conflict of interest requirements for contributors. The International Committee of Medical Journal Editors passed a resolution in 1988 that authors should acknowledge any financial relationship that may pose a conflict of interest, but the resolution is not binding on the 12 participating journals. This presentation discusses “disclosure of financial interest” in the context of scientific journal publications. A recent study that quantifies the financial interest of authors in 14 biomedical publications provides a context for examining the variations in financial disclosure requirements among scientific journals. Factors that distinguish journal policies are: Who should disclose (editors, authors, reviewers, letter writers)? What interests should be disclosed? Under what criteria should financial disclosures be published in the journal? What constitutes privacy in biomedical publications? Finally, the presentation explores the challenges of incorporating financial disclosure requirements in the peer-review process and the obstacles to the implementation of journal policies on conflict of interest.
Tufts University, Medford, MA 02155, USA
A Survey of Journal Conflict of Interest Policies
Richard M Glass1,2 and Mindy Schneiderman2
To survey medical, biology, and (for comparison) economics journals regarding their conflict of interest policies.
Mailed questionnaire survey of 1,202 English-language journals with circulations of at least 1,000 listed as academic/scholarly publications under the headings of medical sciences (excluding dentistry and nursing), biology, and economics in the 1994 Ulrich’s International Periodicals Directory. Following the initial mailing in September 1994, nonresponders were sent 2 additional mailings (November 1994 and July 1995) and also received telephone or facsimile prompting if such numbers were available. Comparisons by journal type and circulation size were performed using the chi-square test.
The overall response rate was 54% (648/1,202), ranging from 36% (34/95) for economics journals published outside the United States to 61% (239/393) for medical journals published in the United States. One third or less of the publications in any of the 3 disciplines had written policies concerning conflicts of interest for authors, reviewers, editors, or editorial board members. Almost half (46%) of US medical journals did have written policies for authors, and medical journals were more likely to have such policies than biology or economics journals (P<.001). Among US medical journals, the proportions of journals having written conflict of interest policies and published statements regarding conflicts of interest increased significantly with circulation size, as did the proportion requiring signatures on conflict of interest statements from authors. Most medical and biology journals considered and published substantive articles written by their chief editors.
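The chi-square comparisons reported above (eg, medical vs biology journals having written policies) reduce to a Pearson chi-square statistic on a contingency table. A minimal 2x2 sketch follows; the counts are hypothetical stand-ins chosen only to illustrate the computation, not figures from this survey:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (rows = journal types, cols = policy yes/no)."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n  # expected count under independence
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: 110 of 239 medical journals vs 20 of 100 biology journals
# with written policies
stat = chi_square_2x2([[110, 129], [20, 80]])
print(round(stat, 2))  # compare against the 3.84 critical value (P=.05, df=1)
```

A statistic well above 3.84 on 1 degree of freedom corresponds to P<.05, the kind of difference the abstract reports between medical and other journals.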
Most medical, biology, and economics journals have not developed written policies to deal with conflicts of interest. Publication of substantive articles authored by the chief editor of the journal raises questions about conflicts of interest in oversight of the peer review and manuscript acceptance processes.
1JAMA, 2American Medical Association, 515 N State St, Chicago, IL 60610, USA
Evaluating the BMJ Guidelines on Economic Submissions: Prospective Audit of Economic Submissions to the BMJ and The Lancet
Tom Jefferson, Richard Smith, Mike Drummond, Rajendra Kale, and Yunni Yi (United Kingdom)
To assess whether publication (in August 1996) of the BMJ guidelines on peer review of economic submissions to the BMJ and The Lancet made any difference to (a) ease of editing and carrying out the peer-review process; (b) quality of submitted manuscripts; and (c) quality of published manuscripts.
Review of manuscripts before and after publication of the guidelines. All submissions with an economic content (those making explicit comments about resource allocation and/or costs of interventions) submitted during the periods of July 1 through September 30, 1994, to the BMJ and October 1 through December 31, 1995, to the BMJ and The Lancet were included in the “before” phase of the study. Submissions to the BMJ and The Lancet in the period of April 1 through July 1, 1997, were enrolled in the “after” phase of the study. Characteristics of the submissions will be compared for both phases.
We classified the submitted manuscripts as economic evaluations (in which analytical methods are used to analyze alternatives for resource allocation); economic studies (partially analytical reports, ie, cost-outcome studies, descriptive and methodological papers); and economic papers (reports with minimal economic input, ie, submissions containing 1-line mentions of cost, editorials, non-systematic reviews, letters, and opinion pieces). Overall, 2,975 manuscripts were submitted to the 2 journals during the “before” study phase, of which 106 were economic (3.56%; 95% CI, 2.92%-4.29%). Of these, 37 (34.9%; 95% CI, 25.0%-44.0%) were economic evaluations, 49 (46.2%; 95% CI, 36.7%-55.7%) were economic studies, and 20 (18.8%; 95% CI, 11.4%-26.3%) were economic papers. Overall, 13 (12.3%; 95% CI, 6.0%-18.5%) of the 106 were accepted. Of the 11 (29.7%; 95% CI, 15.9%-47.0%) economic evaluations that were peer reviewed, 7 (18.9%; 95% CI, 7.97%-35.2%) were accepted for publication. Of the 16 (32.7%; 95% CI, 20.0%-47.5%) economic studies that were peer reviewed, 6 (12.2%; 95% CI, 4.6%-24.8%) were accepted for publication. Of the 8 (40%; 95% CI, 19.1%-63.9%) economic papers that were peer reviewed, none were accepted for publication. There appeared to be little difference between the 2 journals in terms of numbers or editorial fate of the manuscripts.
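Confidence intervals of the kind reported above (eg, 106 of 2,975 submissions economic: 3.56%; 95% CI, 2.92%-4.29%) can be computed for a binomial proportion with the Wilson score method, a common choice for small proportions. The abstract does not state which method the authors used, so the sketch below gives bounds close to, but not necessarily identical with, the published ones:

```python
import math

def wilson_ci(successes, n, z=1.959964):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(106, 2975)
print(f"{lo:.2%} to {hi:.2%}")  # about 2.95% to 4.29%
```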
Major medical journals are sent an array of different types of submissions with an economic content, most of which do not reach publication. Publication of the guidelines with the checklist and editorial manuscript management path should help editors, peer reviewers, and authors to rationalize the process.
Army Medical Directorate, Keogh Barracks, Ash Vale, Hants GU12 5RR, UK; BMJ, London, UK; University of York, UK
Changes During Peer Review: Characterizing the Evolution of a Clinical Paper
Gretchen P Purcell and Frank Davidoff
We sought to categorize the inadequacies in reports of clinical research and to characterize quantitatively the resultant changes during the peer-review and editorial processes.
We selected original research articles from 2 issues of Annals of Internal Medicine and identified all substantive textual changes between the submitted manuscripts and the corresponding published papers.
We characterized the reasons for, and the semantic types of information involved in, each change.
In 7 articles, we identified 206 changes that occurred for 5 fundamental reasons: too much information (29.6%), not enough information (46.1%), inaccuracies (6.8%), misplaced information (14.6%), and
structural errors (2.9%). We identified several subcategories; for example, “too much information” was either too detailed, redundant, or extraneous. Changes occurred in all sections of the articles: abstract (15.0%),
introduction (4.4%), methods (20.4%), results (29.6%), discussion (26.2%), appendix (2.4%), and moves between sections (2.0%). Individual reasons for change were selectively associated with certain sections and semantic
types of information. For example, too little information accounted for 28 (50.0%) of 56 changes to the discussion section. Limitations were frequently omitted.
As a result of editorial and peer review, we observed substantial changes throughout all sections of submitted research articles. We propose an analytical framework for categorizing the reasons for these changes. This framework provides a foundation for comparing quantitatively the editorial and peer-review processes across authors, journals, and types of papers. We are confirming the findings of this pilot study in a larger sample of manuscripts.
Department of Surgery, Duke Hospital North, Durham, NC 27710, USA; Annals of Internal Medicine, Philadelphia, PA, USA
A Survey of Journal Editors Regarding the Review Process for Original Clinical Research
Dorothy L Lebeau, William C Steinmann, and Robert K Michael
We conducted a survey of clinical journal editors to evaluate aspects of the review process that may have an impact on the validity of the medical literature. Our hypothesis was that editors’ perceived standards for manuscript acceptance would not be reflected in the qualifications for board member or reviewer selection or in the objective criteria included in the review formats used for manuscript evaluation.
Descriptive mail questionnaire. We surveyed editors of the 119 clinical journals indexed in the Abridged Index Medicus to determine: specific criteria used in the selection of their editorial board members and reviewers of manuscripts; criteria to judge the acceptance of a manuscript for publication; and the standardized review formats used by reviewers to assess manuscript validity.
Seventy-three percent of the questionnaires were returned completed. Editors rated individual credentials,
such as “expertise in a given subject area,” as more important than expertise in research methods when selecting
board members and manuscript reviewers. Most (97%) rated methodologic criteria, such as data validity
and scientific merit, highly among the criteria used in the decision to accept or reject manuscripts. However, the actual formats used by the
journals’ reviewers to evaluate manuscripts included important methodologic criteria for assessing study
design in only 33% of the formats; biostatistical issues in only 42%; and assessment of measurements in only 7%. There were no
differences found between responder and nonresponder editors according to the specialty content or the prestige level of their journals.
While editors set high standards for acceptance of manuscripts that should help ensure their validity at publication,
the criteria used to select editorial board members and manuscript reviewers, and the formats used by reviewers, do not emphasize the
methodologic competencies and criteria that are most important to the critical appraisal of manuscripts.
Tulane University Medical Center, School of Medicine, 1430 Tulane Ave, SL-16, New Orleans, LA 70112-2699, USA
Consistency of Reviewer Ratings and Impact on Editor Manuscript Decisions
Michael L Callaham and Joseph Waeckerle
To determine whether editors’ quality ratings of peer reviewers are consistent and whether they can be replaced by readily available journal statistics.
Thirty-three editors rated the quality of each manuscript’s review on a subjective ordinal 1 to 5 scale. These ratings were examined
for interreviewer agreement, sources of variance, and relationship to other reviewer performance calculations.
The study included 2,220 reviews of 770 manuscripts by 526 reviewers. Forty-nine percent of reviewer recommendations matched editorial decisions.
Reviews received an average rating score of 3.8 (SD, 1.0). The weighted kappa for the acceptance decisions of reviewers vs editors was 0.30. The within-reviewer
intraclass correlation was 0.41 (P=.0001), indicating that reviewer attributes explained 41% of the variance not explained by manuscript and editor random effects.
By comparison, intraclass correlations for editor and manuscript were only 0.16 and 0.12, respectively. In a subset of 1,395 reviews by 151 reviewers with 6 or more
rated reviews, the average rating score for all reviews by each reviewer correlated only moderately with concordance with the final editorial decision (R=0.37; 95% CI, 0.22-0.49)
and with the reviewer’s average acceptance rate (R=0.41).
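The weighted kappa reported above (0.30 for reviewer vs editor acceptance decisions) measures chance-corrected agreement on an ordinal scale. The abstract does not specify the weighting scheme; a sketch using the common quadratic weights, applied to hypothetical ratings, looks like this:

```python
def weighted_kappa(a, b, k):
    """Quadratic-weighted kappa for two raters over ordinal categories 0..k-1."""
    n = len(a)
    # quadratic weights: 1 on the diagonal, falling off with squared distance
    w = [[1 - ((i - j) ** 2) / ((k - 1) ** 2) for j in range(k)] for i in range(k)]
    observed = sum(w[x][y] for x, y in zip(a, b)) / n
    # chance agreement expected from the two raters' marginal distributions
    pa = [a.count(c) / n for c in range(k)]
    pb = [b.count(c) / n for c in range(k)]
    expected = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return (observed - expected) / (1 - expected)

# Hypothetical reviewer vs editor decisions on a 3-point accept/revise/reject scale
print(weighted_kappa([0, 1, 2, 2, 1, 0, 1, 2], [0, 1, 2, 1, 1, 0, 2, 2], k=3))
```

Values near 0 indicate agreement no better than chance and values near 1 near-perfect agreement, which puts the study’s 0.30 in the “fair” range.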
Editor ratings of individual reviewers were only modestly consistent. They were not strongly related to easily calculated journal statistics such as reviewer
acceptance rates or concordance with editorial decisions (which is often determined by factors other than review quality), and thus cannot be replaced by them. Reviewer ratings
have nonetheless seemed useful in eliminating poor reviewers and in recognizing exemplary ones.
Annals of Emergency Medicine, University of California, San Francisco, Box 0208, San Francisco, CA 94143-0208, USA;
University of Missouri at Kansas City, School of Medicine, Kansas City, KS, USA
What Makes a Good Reviewer and What Makes a Good Review?
Susan van Rooyen, Fiona Godlee, Stephen Evans, Richard Smith, and Nick Black
To examine the characteristics of reviewers in relation to quality of reviews.
As part of a randomized controlled trial of blinded and “unmasked” peer review, editors and authors were asked to assess the quality of reviews. Demographic data on reviewers were collected using a postal questionnaire, and the contribution of reviewer characteristics to the quality of reviews was assessed using multiple regression.
Data were available on evaluations of 286 pairs of reviews. Increasing age was associated with lower scores (P<.0005).
Scores fell substantially (1 SD) from age 30 to 50 but did not change thereafter. Training in epidemiology or statistics was associated with a rise in score of 0.25 SD (P=.014). North American reviewers had significantly higher scores (0.4 SD, P=.013) and were more likely (87.5%) than all other reviewers (54.6%) to have had training in epidemiology or statistics. Each of these effects remained after adjusting for the others. There was no detectable effect of sex, possession of a PhD or MD, academic position, number of journals reviewed for, or number of papers refereed per year. Membership of an editorial board tended to produce lower scores. The time taken to obtain a review also showed a nonlinear effect with age, those aged <40 and those aged >50 having shorter times. Inevitably, North Americans took longer (by 6 days on average). Data collection is continuing.
Younger reviewers, North American reviewers, and those with training in epidemiology or statistics produce better reviews.
BMJ, BMA House, Tavistock Sq, London WC1H 9JR, UK; London School of Hygiene & Tropical Medicine, London, UK
The Effect on the Quality of Peer Review of Blinding Reviewers and Asking Them to Sign Their Reports: A Randomized Controlled Trial
Fiona Godlee, Catharine R Gale, and Christopher N Martyn
To evaluate the effect on the quality of peer review of (1) blinding reviewers to the identity and affiliations of the authors and
(2) requiring reviewers to sign their reports.
With the permission of the authors, we modified a scientific paper accepted for publication by a weekly medical journal, introducing 8 areas of weakness.
The modified paper was sent to 420 potential reviewers selected from the same journal’s database. Using a factorial design, reviewers were randomly allocated to 5 groups.
Groups 1 and 2 received manuscripts from which the authors’ names and affiliations had been removed, while groups 3 and 4 were aware of the authors’ identities. Groups 1 and 3
were asked to sign their reports, while groups 2 and 4 were asked to return their reports unsigned. Group 5 was sent the paper in the usual manner of the journal, with authors’
identities revealed and a request to comment anonymously. Group 5 differed from group 4 only in that they were unaware that they were taking part in a study. The main outcome
measure was the number of weaknesses in the paper that were commented on by the reviewers.
Reports were received from 221 reviewers, a response rate of 53%. Respondents did not differ significantly between the groups. The median number of weaknesses
commented on was 2 (range, 0 to 5). There were no statistically significant differences between groups in their performance.
In the context of a general medical journal, neither blinding reviewers to the authors and origin of the paper nor requiring them to sign their reports seems
likely to have a strong influence on the quality of peer review.
BMJ, BMA House, Tavistock Sq, London WC1H 9JR, UK; MRC Environmental Epidemiology Unit, University of Southampton, Southampton, UK
The Effect of Blinding and Unmasking on the Quality of Peer Review: A Randomized Controlled Trial
Susan van Rooyen, Fiona Godlee, Stephen Evans, Richard Smith, and Nick Black
To see whether concealing authors’ identities from reviewers (blinding) and/or revealing the
reviewer’s identity to a coreviewer (unmasking) affects the quality of reviews.
Randomized controlled trial. Papers were sent to 2 reviewers and randomized as to whether the
reviewers were asked to allow their signed reviews to be sent to their co-reviewer. Reviewers were then randomized
to receive either a blinded or an unblinded paper. Two editors independently evaluated the reviews using a validated instrument.
To date 487 papers have been entered into the study. Of these, 26 (5.3%) were excluded after randomization.
Preliminary analysis showed that the mean overall quality score was 20.0 (possible range 7 to 35; 95% CI, 19.5-20.5).
There was no significant difference in overall quality, recommendation regarding publication, or time taken to review between
different groups. Data collection is continuing.
Neither blinding nor unmasking made a difference to quality, reviewers’ recommendations, or time taken to review.
Decisions on whether to change established practice must be based on other considerations, and other interventions to improve
the quality of peer review need to be explored.
BMJ, BMA House, Tavistock Sq, London WC1H 9JR, UK; London School of Hygiene & Tropical Medicine, London, UK
Masking Author Identity in Peer Review: Does It Improve Peer Review Quality?
Amy C Justice,1 Mildred K Cho, Margaret A Winker, Jesse A Berlin, Drummond Rennie, Mike Berkwits, Michael Callaham, Phil Fontanarosa, Erica Frank, David Goldman, Steven Goodman, Roy Pitkin, Rohit Varma, and Joseph Waeckerle
To determine whether masking reviewers to author identity was associated with higher review quality at several biomedical journals and to determine the success of masking at these journals.
A randomized trial at: Annals of Emergency Medicine, Annals of Internal Medicine, JAMA, Obstetrics & Gynecology, and Ophthalmology. Two reviewers reviewed each manuscript, 1 randomized to masked
and the other to unmasked review. Editors (masked to reviewer randomization) scored review quality on a 5-point Likert scale. Quality differences between unmasked and masked review scores for each manuscript were
tested using paired t-tests. A difference of >0.5 was considered important.
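The paired t-test on per-manuscript quality differences described above can be sketched as follows; the score pairs are hypothetical, chosen only to show the computation, not data from the trial:

```python
import math

def paired_t(masked, unmasked):
    """Paired t statistic for per-manuscript differences in review quality."""
    diffs = [m - u for m, u in zip(masked, unmasked)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical editor quality scores (5-point Likert) for 10 manuscript pairs
masked   = [4, 3, 2, 5, 3, 4, 2, 3, 5, 3]
unmasked = [3, 3, 3, 4, 3, 3, 3, 3, 4, 3]
t = paired_t(masked, unmasked)
print(round(t, 2))  # compare against the t(9) critical value 2.262 at alpha=.05
```

A statistic below the critical value, as here, would correspond to the kind of nonsignificant difference the study reports.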
Fifty-eight percent (74/128) of manuscripts had both reviews returned. Editors perceived no significant difference in quality between masked and unmasked reviews (mean difference, 0.1; 95% CI, -0.3 to 0.4).
Differences did not vary significantly by journal (in order listed above: -0.2, 0.1, -0.2, 0.6, and 0.1). However, masking was not universally successful. Masking success varied by
journal (in order listed above): 88% (70%-98%), 59% (33%-82%), 50% (29%-71%), 71% (42%-92%), and 62% (32%-86%) (P=.05). Analyzing only pairs for which masking
was successful (59%) yielded similar results (0.0; 95% CI, -0.4 to 0.4).
This multi-journal study did not detect an improvement in peer review quality of greater than 0.4 on a 5-point Likert scale. Poor masking success at most journals may have compromised potential
beneficial effects of masking if manuscripts most likely to benefit from masking were more likely to be unmasked.
1Case Western Reserve University, School of Medicine, WB-29, 10900 Euclid Avenue, Cleveland, OH 44106-4961, USA
Masking Author Identity in Peer Review: What Factors Influence Masking Success?
Mildred K Cho,1 Amy C Justice, Margaret Winker, Jesse A Berlin, Joseph Waeckerle, Drummond Rennie, William Applegate, Ken Rothman,
Mike Berkwits, Michael Callaham, Phil Fontanarosa, Erica Frank, David Goldman, Steven Goodman, Roy Pitkin, and Rohit Varma
To confirm differences in masking success observed at 7 biomedical journals and to generate hypotheses to explain these differences.
Reviewer questionnaires at 3 journals with a long-standing policy of masking author identity (Annals of Emergency Medicine, Epidemiology, Journal of the American Geriatrics Society) and 4 without such a policy (Annals of Internal Medicine, JAMA, Obstetrics and Gynecology, Ophthalmology). Reviewers of masked manuscripts (at all journals except Journal of the American Geriatrics Society) were asked if and how they could identify author(s) and about demographics and reviewing and research experience.
Masking success was significantly higher at Annals of Emergency Medicine, 83% (95% CI, 72%-94%; n=78), than at all other
journals (n=208) (in order listed above: 41% (35%-47%), 53% (42%-59%), 59% (33%-82%), 50% (29%-71%), 71% (42%-92%), and 62% (32%-86%)) (P<.0001).
There was no significant difference in masking success between journals with a policy of masking (62%) and those without (61%) (P=.77). Masking efficacy
was not associated with any demographic characteristics analyzed, eg, age (P=.15), academic rank (P=.14). However, in general, successfully masked reviewers
had fewer years of reviewing experience (P=.001), published fewer research articles (P=.0004), and spent less time in research (P=.0001).
Masking success does not appear to be related to a journal policy of masking, but could be affected by other characteristics of journals
or specialty. Using reviewers with greater research and reviewing experience, but not necessarily greater age or rank, could decrease masking success.
1Center for Bioethics, University of Pennsylvania, 3401 Market Street, Suite 320, Philadelphia, PA 19104-3308, USA
Friday, September 19
Publication Bias in Two Spanish Medical Journals
To assess the factors associated with publication status of papers submitted to 2 major Spanish medical journals (Atencion Primaria and Medicina Clinica)
between 1994 and 1996; and to investigate the existence of publication bias in both journals. The study hypothesis was that publication bias stems from authors, not from editors.
A case-control study based on a consecutive and representative sample of 100 published papers, half taken from each journal (cases), and 100 rejected manuscripts,
half rejected by each journal (controls). Papers reporting observational and experimental studies were included. The following risk factors for publication bias were
assessed blind to journal and publication status (outcome variable: published vs rejected): type of study, objective of the study, region of Spain where the study
was conducted, sample size, complexity of the analyses, statistical significance, and methodologic quality. Quality scores were blindly assigned to each paper by means of a
previously validated questionnaire. The association of each independent variable with publication status was assessed using bivariate and multivariate analyses.
Only 4 of 200 papers were excluded because they did not report observational or experimental studies. Quality (high vs low) was the only risk factor
associated with publication status (Atencion Primaria: OR=17.3; 95% CI, 5.7-55.0; Medicina Clinica: OR=234; 95% CI, 0.6-241.6). Of the 200 papers, 146 included hypothesis testing,
of which 143 were positive studies and 3 were negative (2 were published and 1 was rejected).
Quality was the only factor associated with publication status in both journals. If publication bias exists, it stems from authors, not editors.
Gerencia de Atención Primaria de Baleares, c/ Reina Esclaramunda, 9, 07003 Palma de Mallorca, Spain
Papers Submitted From Outside and Inside the United States: An Analysis of Reviewer Bias
Ann M Link
The increasingly international character of journals introduces new issues regarding reviewer bias.
Gastroenterology receives two thirds of its submissions from outside the United States, while two thirds of its reviewers are from the United States. Moreover, the acceptance rate of non-US papers is 12% lower than that for US papers. Therefore, this study assessed whether domestic reviewers or international reviewers evaluate manuscripts differently, depending on whether the manuscripts are submitted from outside the United States or from the United States.
A retrospective analysis of all original submissions received by Gastroenterology in 1995 and 1996. Reviewers ranked manuscripts as accept, provisionally accept, reject with resubmission, or reject. Chi-square analysis was used to compare domestic and international reviewers and papers.
The percentage of international manuscripts placed in each decision category by domestic reviewers (n=2,355) and nondomestic reviewers (n=1,297) was nearly identical (P=.31). However, domestic reviewers recommended acceptance of papers submitted by US authors more often than non-US reviewers (P=.001): accept: 7% vs 4%; provisionally accept: 31% for both; reject with resubmission: 26% vs 22%; reject: 35% vs 44%. Nondomestic reviewers ranked US papers slightly more favorably than non-US papers (P=.09), while domestic reviewers ranked US papers much more favorably (P=.001).
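The 2 x 4 comparison described above can be illustrated with a hand-rolled Pearson chi-square. The cell counts below are hypothetical (the abstract reports only percentages), so this is a sketch of the test, not a reproduction of the journal's analysis.

```python
# Sketch of a Pearson chi-square comparison of decision distributions.
# Counts are HYPOTHETICAL, scaled from the reported percentages
# (accept, provisionally accept, reject with resubmission, reject).

def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical decision counts for US-authored papers:
#                  accept  prov.acc  rej/resub  reject
us_reviewers    = [70,     310,      260,       350]   # ~7%, 31%, 26%, 35%
nonus_reviewers = [40,     310,      220,       440]   # ~4%, 31%, 22%, 44%

stat = chi_square_stat([us_reviewers, nonus_reviewers])
# df = (2-1)*(4-1) = 3; the .05 critical value is 7.815
print(round(stat, 2), stat > 7.815)
```

With these illustrative counts the statistic comfortably exceeds the df=3 critical value, consistent with the significant difference the abstract reports.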
Reviewers from the United States do not evaluate papers submitted from outside the United States differently than non-US reviewers. Both US and non-US reviewers evaluate papers submitted by US authors more favorably, with domestic reviewers having a significant preference for domestic papers.
Gastroenterology, American Gastroenterology Association, 7910 Woodmont Ave, 7th Floor, Bethesda, MD 20814, USA
Assessing the Quality of Reports on Placebo-Controlled Trials Published in English or German Language
Christoph Junker1 and Matthias Egger2
To compare the quality of reports on randomized placebo-controlled trials published in German or in English.
Survey of 131 journal articles published by authors from German-speaking Europe between 1985 and 1994. All trials published in 5 German, Swiss, and Austrian general medical journals and trials published by the same authors in English were identified. Placebo-controlled trials with parallel design were assessed with 2 tools: a standardized 3-item tool described by Prendiville et al, and an 11-item tool assessing 4 different aspects of reporting quality: general description of study; reporting of design; reporting of statistics; and reporting of results.
Quality scores reached a mean (SD) of 51.7% (17.5%) of the maximum possible score on the 3-item scale and 53.3% (22.3%) on the 11-item scale. The quality of reporting of design features was lowest. There were no differences between German-language and English-language reports on the 3-item scale, but on the 11-item scale reports in English-language journals scored higher, 54.8% (22.3%), than those in German-language journals, 49.2% (22.0%).
Among reports in English language, those in journals with low-impact factors showed a somewhat lower quality than those in high-impact journals, 52.4% (21.4%) compared to 57.6% (23.3%). There was also an increase over time in quality measured with the 11-item scale; 1985-1989: 50.1% (22.0%); 1990-1994: 56.7% (22.3%).
The small quality differences do not justify the exclusion of German-language reports of randomized controlled trials in systematic reviews. Independent of language there is much room for improvement in the conduct and reporting of randomized controlled trials performed in German-speaking Europe. The 11-item scale appears to be a more sensitive instrument.
1Department of Social and Preventive Medicine, University of Berne, Finkenhubelweg 11, CH-3012 Berne, Switzerland; 2Department of Social Medicine, University of Bristol, UK
Publication Bias and Passive Smoke Research: Interviews With Investigators
Anastasia L Misakian and Lisa A Bero
To determine whether editor and researcher bias prevents the publication of data concluding that passive smoke is not harmful to health (negative data).
Semistructured telephone interviews of principal investigators (PIs) for original research projects on the health effects of passive smoke (excluding biochemical, mechanistic, genetic, and policy studies). Research was identified by contacting 12 federal and state agencies, 9 tobacco industry-affiliated organizations, and 67 private organizations. Data were collected on the extent, direction, and publication status
of any unpublished data; reasons for not publishing; and publications resulting from the project.
Sixty-five PIs (response rate = 83%) were interviewed about 84 research projects. Twenty-seven (32%) projects were published, 24 (29%) were partially published, and 33 (39%) were unpublished. The most common reasons for having unpublished data were ongoing data collection and/or analysis (38/57, 67%); ongoing manuscript preparation (9/57, 16%); and priority (passive smoke was a minor component of a larger project; 8/57, 14%). Two unpublished studies (1 with positive and 1 with mixed findings) were rejected by a journal; both have been resubmitted. The proportion of negative findings was similar in the published (28%) and unpublished (28%) data. There was no significant difference in the time to publication for negative and positive studies.
There is no publication bias against negative studies on the health effects of passive smoke. These findings do not support the tobacco industry’s claim
that risk assessments are incomplete due to an underrepresentation of negative data in the published literature.
Institute for Health Policy Studies, University of California, San Francisco, 1388 Sutter St, San Francisco, CA 94109, USA
Characteristics of Unpublished Research and Reasons for Failure to Publish
Ellen J Weber,1 Michael L Callaham,1 Robert L Wears,2 Christopher Barton,3 and Gary P Young4
We hypothesized that authors of research rejected from a scientific meeting are less likely to pursue publication than those whose studies are accepted.
Abstracts submitted to the major 1991 emergency medicine meeting and not published within 5 years were rated by 2 blinded reviewers for scientific characteristics, study quality (overall scientific solidity), and originality. Authors of the 279 unpublished papers were sent questionnaires. Data were analyzed using Fisher’s exact test and Student’s t-test with 95% CIs for differences between means.
A total of 493 abstracts were submitted to the meeting and 179 were accepted for presentation. Thirty-four percent (104) of rejected studies vs 61% (110) of those
accepted were ultimately published. Fifty-four percent (151) of investigators answered the questionnaire: 107 (35%) of those with rejected abstracts and 44 (25%) of those with accepted abstracts.
Thirty-five (23%) of the responding authors had completed a manuscript: 21% (20) of those rejected, 38% (15) of those accepted (P=.05). Study quality for abstracts with completed manuscripts
was 2.41 vs 2.50 for those without manuscripts (difference, 0.09; 95% CI, -0.27 to 0.44). Originality of studies with manuscripts was 1.58 vs 1.56 for those with no manuscript (0.02; -0.23 to 0.19).
Seventy-seven percent of abstracts with manuscripts had positive results vs 60% of those without manuscripts (P=.34). Effect size was 0.16 for abstracts with manuscripts vs 0.33 for
those without manuscripts (0.17; -0.20 to 0.54). Twenty-eight percent of authors (30) whose abstracts had been rejected from the meeting felt their results were not important or unlikely
to be published vs 5% (2) of those whose abstracts were accepted (P<.001). Logistic regression will be performed when results from a follow-up mailing to nonresponding authors are complete.
Objective differences do not explain why research is not published. Most authors never wrote a manuscript. Rejection of abstracts submitted to scientific meetings may
discourage manuscript preparation and submission to journals.
1Division of Emergency Medicine, University of California, San Francisco, Box 0208, L-138, San Francisco, CA 94143-0208, USA; 2Division of Emergency Medicine, University of Florida, Gainesville, FL, USA;
3Department of Emergency Medicine, University of North Carolina, Chapel Hill, NC, USA; 4Department of Emergency Medicine, Highland General Hospital, San Francisco, CA, USA
Data on File Cited in Pharmaceutical Advertisements: What are They?
To estimate the frequency of citations of “data on file” in drug advertisements, to survey how companies respond to
requests for them, and to ascertain the nature of the data.
All drug ads in 9 consecutive issues of Hospital Doctor and 5 issues of Prescriber (aimed at UK General Practitioners)
published in the last quarter of 1996 were surveyed. In the first week of 1997 all companies citing “data on file” or other unpublished
material in any of these ads were asked by fax for copies. Eleven further ads from recent issues of other journals were investigated likewise.
The 14 issues contained 95 different full drug ads, each repeated 4 times on average. About half cited at least 1 reference.
Twenty-five (26%) referred to unpublished material, mostly “data on file.” Thirty-eight different ads from 28 companies were investigated; 25 of
them cited 1 data on file reference, 10 ads cited 2 data on file references, and 3 ads had 3 data on file references. Eighteen of the ads
referred to unidentified documents, the others gave a code number or indicated the nature of the document. One ad cited an unpublished paper
by author and title as “submitted for publication.” Initial responses to 19 of the 38 requests were received within 10 days.
Much of the material was incomplete, consisting of abstracts or summaries not permitting critical evaluation; further correspondence was often needed.
Three companies’ material was marked “confidential.”
“Data on file” are used to justify crucial promotional claims, but the documentation is often of low quality. Many claims thus
seem inadequately supported. Standards and remedies are needed.
UK Cochrane Centre, 9 Park Crescent, London N3 2NL, UK
Is There a Sex Bias in Choosing Editors? Epidemiology Journals as an Example
Kay Dickersin,1 Lisa Fredman,1 Katherine M Flegal,2 Jane D Scott,3 and Barbara Crawley1
To test the hypothesis that women are represented at a lower proportion than men in editorial
positions in US epidemiology journals and that this proportion is less than their representation as authors and reviewers.
To test a secondary hypothesis that representation of women in editorial positions has increased over time.
The proportion of editors, authors, and reviewers of each sex of 4 major US general epidemiology journals,
American Journal of Epidemiology, Annals of Epidemiology, Epidemiology, and the Journal of Clinical
Epidemiology (formerly the Journal of Chronic Diseases), were examined for 1982, 1987, 1992, and 1994.
Included in these issues were 2,415 reports associated with 8,005 author names, 2,982 reviewer names, and 695 editor names.
Only 1 of 7 editors-in-chief was a woman, a position she shared with a man. For all journals,
the proportion of editors (covering 3 editorial tiers) who were women ranged from 6.5% in 1982 to 16.0% in 1994.
Disproportion existed at each editorial tier (eg, editor-in-chief, associate editors, and editorial boards).
Over all journals and all years, we found a higher proportion of authors who were women (28.7%) compared to reviewers (26.7%),
and editors (12.7%), and these proportions were significantly different overall for each year studied (all P<.001).
The 2 journals that started publishing in the 1990s had a higher proportion of editors that were women than the other 2 journals.
In 1994, the journal with a woman co-editor-in-chief had the highest proportion of authors, reviewers, and editors who were women.
It is not clear why there was a lower proportion of editors who were women than authors or reviewers.
The fact that the proportion of editors who were women in 1994 is still considerably lower than the proportion of women who
were authors in 1982, as well as other factors, does not support a cohort effect as the sole explanation for the phenomenon
we have observed. Another possible explanation is a selection bias favoring men for editorial positions.
1Department of Epidemiology and Preventive Medicine, University of Maryland School of Medicine,
506 W Fayette St, Baltimore, MD 21201, USA; 2National Center for Health Statistics, Centers for
Disease Control and Prevention, Hyattsville, MD, USA; 3The Charles McC Mathias Jr National Study
Center for Trauma and Emergency Medical Systems, University of Maryland School of Medicine, Baltimore, MD, USA
Reviewing the Reviews: The Case of Chronic Fatigue Syndrome
Simon Wessely, John Joyce, and Sophia Rabe-Hesketh
To test the hypothesis that the selection of literature in review articles is influenced by the authors’ disciplines and nationalities.
Both retrospective review of data and bibliometric analysis were performed. A search was made for all overviews (reviews) of
chronic fatigue syndrome (CFS) between 1980 and March 1996 in English-language journals from 3 major databases. Whether each overview recorded its data sources was noted,
as were the departmental specialty and nationality of the first author. The references cited in each index paper
were tabulated by assigning them to 6 specialty categories by article title and to 8 nationality categories by place of publication of journal.
Associations were tested for using repeated measures MANOVA and subsequently displayed using biplots.
Of 89 overviews, only 3.4% reported a literature search and described the search method. There was a significant interaction between
choice of specialty to cite and authors’ discipline (P=.01, F12,185 = 2.84) and authors’ nationality (P=.02, F4,69 = 3.25), with authors from
laboratory-based disciplines preferentially citing laboratory references and authors from psychiatry-based disciplines doing the reverse.
The interaction between nationality of reference and authors’ nationality was also significant (P<.001, F1,72 = 145.3), with both US and UK authors
preferentially citing their own nationality.
Most overviews fail to fulfill basic criteria for scientific acceptability. Citation of the literature is influenced by the author’s discipline and nationality.
King’s College School of Medicine and the Institute of Psychiatry, King’s College and Maudsley Hospitals, 103 Denmark Hill, London SE5 8AZ, UK
Peer Reviewer Bias Against Unconventional Medicine?
Edzard Ernst1 and Karl-Ludwig Resch2
To test the hypothesis that there is reviewer bias against testing of an unconventional drug.
Randomized, controlled, double-blind study of peer review. From a convenience sample of 291 medical doctors from a
wide variety of specialties (drawn from a list of participants of an interdisciplinary, international conference), study participants
were randomly assigned to receive 1 of 2 versions of a manuscript. Version M dealt with an in-vitro experiment on a mainstream drug (metoprolol),
while the otherwise identical version V used a highly unconventional yet commercially available drug (beef spleen cell extract).
All participants were asked to complete a standardized evaluation sheet to provide ratings from “poor” to “excellent” on a visual analog
scale for a set of predefined quality criteria. Participants were debriefed after completion of the study by means of a letter explaining
the experiment and asking for permission to include each participant’s data in the analysis.
Overall response rate was 63% (n=183); 127 responses were deemed suitable for evaluation. This sample size provided an 80% chance of
detecting a 10% difference in ratings between groups at the 5% significance level (2-sided). No differences in ratings between the 2 versions of the manuscript
were observed (P values for all variables >0.2). Ratings covered the entire range of the visual analogue scales.
In this study, there was no reviewer bias against testing an unconventional drug. The low interrater reliability, however,
suggested inadequate validity of peer review.
1Department of Complementary Medicine, Postgraduate Medical School, University of Exeter, 25 Victoria Park Rd, Exeter EX2 4NT, UK;
2Forschungsinstitut für Balneologie und Kurortwissenschaft, Bad Elster, Germany
Does Peer Review Favor the Conceptual Framework of Orthodox Medicine? A Randomized Controlled Study
Karl-Ludwig Resch,1 Edzard Ernst,2 and John Garrow3
To test the hypothesis that experts who review papers for publication are prejudiced against complementary medicine.
A randomized, controlled, double-blind study. Two versions of a fake short report on treatment of obesity, identical except for
the nature of the intervention (orthodox: hydroxycitrate vs homoeopathic remedy: sulphur C50) were produced. According to a randomization list,
1 of the 2 versions was sent to each of a list of experts (n=398; identified by means of a MEDLINE search) using the letterhead and evaluation
sheet of an established journal. Main outcome parameters were the reviewers’ rating of importance on a scale of 1-5 (trivial to major contribution)
and a visual analog scale (with ratings ranging from “reject outright” to “definitively accept”). Reviewers were unaware that they were participating
in a study; they are currently being debriefed.
Overall response rate was 41.7% with 141 assessment forms being suitable for statistical evaluation. After dichotomization of the
rating scale (ratings of 1 and 2 = negative; ratings of 4 and 5 = positive), a significant difference in favor of the orthodox version was observed (P=.03, chi square test)
with an odds ratio of 3.01 (95% CI, 1.03-8.25). This observation was confirmed by the visual analogue scale (P=.052, Mann-Whitney U-test). The respective medians and
interquartile ranges were 67% (51%-78.5%) for the orthodox version and 57% (29.7%-72.6%) of maximal acceptance for the unconventional version.
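The dichotomized comparison above amounts to an odds-ratio calculation with a log-based (Woolf) confidence interval. The 2 x 2 cell counts below are hypothetical, chosen only so the odds ratio lands near the reported 3.01; the abstract does not give the underlying counts.

```python
import math

# Sketch of an odds ratio with a Woolf (log-based) 95% CI.
# Cell counts are HYPOTHETICAL illustrations, not the study's data.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio (a*d)/(b*c) with a log-based 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

#                positive rating   negative rating
# orthodox:           a=30             b=10
# homeopathic:        c=20             d=20
or_, lo, hi = odds_ratio_ci(30, 10, 20, 20)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A lower CI bound above 1 is what makes such a dichotomized comparison significant in favor of the orthodox version.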
Despite a remarkably large within-group variation in both groups, there seems to be a relevant reviewer bias against papers dealing with unconventional medical concepts.
1Forschungsinstitut für Balneologie und Kurortwissenschaft, Lindenstr 5, 08645 Bad Elster, Germany; 2Department of Complementary Medicine, Postgraduate Medical School, University of Exeter, Exeter, UK;
3European Journal of Clinical Nutrition, Hertfordshire, UK
Medical Editors Trial Amnesty (META)
Ian G Roberts
Reports of properly conducted controlled trials are the foundation of safe and effective health care. However,
a substantial proportion of trials never contribute to this foundation because they are not submitted for publication.
This has important implications for patient care. First, underreporting of trials reduces the power of systematic reviews
to detect moderate but clinically important treatment effects. Second, because trials showing more promising treatment effects
are more likely to be submitted, research syntheses based on published studies can give misleading conclusions. Finally, patients
may be asked to participate in research studies designed to address questions that have already been answered. Because of the important
consequences of unreported trials, the editors of the BMJ, The Lancet, and other international medical journals are calling for an unreported
trial amnesty. Investigators with unreported trials will be invited to register these with the journals. Any unreported trial in which participants
were prospectively assigned to 1 of 2 or more alternative forms of health care using random or quasirandom allocation will be eligible for registration.
Registration information (contact details, number of randomized participants, type of participants, type of intervention) can be posted, faxed or e-mailed to the journal.
Trial information will then be listed on a dedicated Web site. If trial data are required, for example, by those conducting systematic reviews, the reviewer will be able to seek
this information from the trial list.
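A minimal sketch of the registration record described above, with fields mirroring the items listed (contact details, number of randomized participants, type of participants, type of intervention). The class layout and the eligibility check are illustrative assumptions, not the journals' actual mechanism.

```python
from dataclasses import dataclass

# Sketch of an amnesty registration record; field names follow the
# items listed in the abstract, but the structure itself is ASSUMED.

@dataclass
class UnreportedTrial:
    contact: str        # investigator contact details
    n_randomized: int   # number of randomized participants
    participants: str   # type of participants
    intervention: str   # type of intervention

    def eligible(self, n_arms: int, allocation: str) -> bool:
        """Eligible if participants were prospectively assigned to 2 or
        more alternative forms of care by random or quasirandom allocation."""
        return n_arms >= 2 and allocation in ("random", "quasirandom")

trial = UnreportedTrial("dr.x@example.org", 120,
                        "adults with asthma", "inhaled steroid vs placebo")
print(trial.eligible(2, "random"))
```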
Department of Epidemiology, Institute of Child Health, University of London, 30 Guilford St, London WC1N 1EH, UK
Quality Criteria That Support Computer-Assisted Retrieval and Review of Scientific Data
Robert S DeWoskin
Three basic steps in a quality assurance program are: 1) developing quality criteria that define excellence,
2) evaluating quality by comparing the study to the standards defined in the quality criteria, and
3) implementing quality controls that either reject or remedy studies that fail to meet the standard. Of these 3,
developing quality criteria is the most challenging because quality criteria differ depending on how the study results are used.
Quality evaluations based on different criteria produce different results. In the Regulatory Toxicology Program at Research Triangle Park,
we summarize large amounts of published data on health effects to support risk assessments. The quality of the data incorporated into the
risk assessment is of utmost importance. The first quality criterion that we use is that the study be published in a peer-reviewed journal.
Other quality criteria we favor include the US National Research Council criteria on what constitutes a good toxicology or epidemiology study,
the Good Laboratory Practice standards for non-clinical studies, and the Good Clinical Practice standards for clinical studies. Rarely, however,
do authors report compliance with these established criteria, and although the peer-review process focuses on scientific merit, the criteria used
by the peer reviewers are for the most part undocumented. This makes it extremely difficult and time consuming for those who review and summarize
the ever increasing amount of published data to evaluate the quality of the data either for scientific merit or for data integrity. This presentation makes
a case for the development of published quality criteria for data integrity (and at least for some level of scientific merit) with associated keywords that
authors could cite in their papers and that would also appear in the abstracted information of electronic databases for published literature. This would facilitate
automated review and retrieval of quality data. Nothing precludes the development of different criteria sets for different end uses, and some examples are given.
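A minimal sketch of the keyword-based retrieval being proposed: records tagged with quality-criteria keywords can be filtered automatically. The keyword vocabulary (e.g. "GLP-compliant") and record layout here are hypothetical, since no such standard set yet exists.

```python
# Sketch of filtering abstracted database records by quality-criteria
# keywords. Keywords and records are HYPOTHETICAL examples.

records = [
    {"title": "Chronic inhalation study of compound X",
     "quality_keywords": {"GLP-compliant", "NRC-tox-criteria"}},
    {"title": "Case series on compound X exposure",
     "quality_keywords": set()},
    {"title": "Clinical trial of compound X",
     "quality_keywords": {"GCP-compliant"}},
]

def retrieve(records, required_keywords):
    """Keep only records tagged with every required quality keyword."""
    required = set(required_keywords)
    return [r for r in records if required <= r["quality_keywords"]]

hits = retrieve(records, ["GLP-compliant"])
print([r["title"] for r in hits])
```

Different end uses would simply supply different required-keyword sets, as the presentation suggests.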
Research Triangle Institute, PO Box 12194, Research Triangle Park, NC, 27709-2194, USA
A Software Program to Aid in Peer Review
Alvar Loria1 and Gladys Faba,2 for the ARTEMISA Selection Committee
To characterize a personal computer-based software program developed as an aid to peer review of medical papers.
The software is a Windows-based application that automatically records a numeric score for a series of questions related to 8 sections of scientific papers (introduction, methods, results, and discussion, plus 4 other sections). The questions and sections vary according to type of paper (original reports, case reports, or reviews), and the final output is a score with a maximum of 100 for a “perfect” paper. The software was tested using a single reviewer to judge 289 papers (169 original reports, 50 case reports, and 70 reviews) from 44 Mexican medical journals. All statistical analyses of scores were done with nonparametric tests.
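The section-weighted scoring can be sketched as follows. The abstract specifies only the 8-section structure and the 100-point maximum, so the extra section names, the weights, and the question format here are all hypothetical.

```python
# Sketch of a section-weighted checklist scorer. The 4 extra section
# names and all weights are HYPOTHETICAL; only the 100-point maximum
# and the 8-section structure come from the abstract.

# Maximum points per section (weights sum to 100 for a "perfect" paper).
WEIGHTS = {
    "introduction": 10, "methods": 25, "results": 25, "discussion": 15,
    "title": 5, "abstract": 10, "references": 5, "presentation": 5,
}

def score_paper(answers):
    """answers maps section -> list of question scores in [0, 1];
    each section contributes its weight scaled by the mean answer."""
    total = 0.0
    for section, weight in WEIGHTS.items():
        qs = answers.get(section, [])
        if qs:
            total += weight * sum(qs) / len(qs)
    return round(total, 1)

perfect = {s: [1.0, 1.0, 1.0] for s in WEIGHTS}
print(score_paper(perfect))  # a flawless paper scores the full 100
```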
The paper scores ranged from 29 to 97, with a slightly higher median and less dispersion of scores for reviews as compared with original reports and case reports, but these differences did not reach significance. Two observations suggest that the software operated reasonably well: a) there were some differences in the section scores by type of paper that agreed well with differences in their complexity; b) the journal scores showed an association with their number of original papers and their percentage of original papers (Kruskal-Wallis test, P=.06 and P=.07, respectively).
The software operated reasonably well when used to compare the relative quality of 289 papers. The validity of the program is restricted
in this study to the experience of 1 reviewer. An analysis of the raw scores helped in detecting some ambiguous and redundant questions that have been modified
in an improved version. The program has potential as a training tool for inexperienced reviewers or as a scorekeeper for experienced peer reviewers.
1Instituto Nacional de Nutrición Salvador Zubiran, 14000 Mexico City, Mexico; 2Centro Nacional de Informacion y Documentacion en Salud, Mexico City, Mexico
Peer-Reviewed Articles and Public Health: The "Mad Cow" Affair in Italian Newspapers
Nicola Petrosillo, Maria Stella Aloisi, Enrico Girardi, Lucilla Rava, and Giuseppe Ippolito
The aim of this hypothesis-generating study was to analyze the relative impact on the news media of
new medical information that is not subjected to peer review vs information first published in peer-reviewed journals.
We performed an analysis of reports on Creutzfeldt-Jakob disease (CJD) in Italian newspapers.
On March 20, 1996, the British Secretary of State for Health announced that a new variant of CJD had been identified in 10 people.
On April 6, 1996, an article describing these cases of variant CJD was published in The Lancet (Will RG, Ironside JW, Zeidler M, et al.
A new variant of Creutzfeldt-Jakob disease in the UK. Lancet. 1996;347: 921-925). We reviewed the 7 newspapers with the widest circulation in Italy,
and by hand search we identified all articles related to the “mad cow” affair published between March 21 and May 10, 1996. Each item was classified according
to date of publication, page of publication, and proportion of page occupied.
We collected 535 articles, of which 62 (11.6%) appeared on the front page. Following the first news release,
the number of articles published each day increased rapidly, peaking on March 26 with 48 items and 1 article on the front page
of every newspaper considered. During the 7-week period considered, 72% of all the articles and 88.7% of those on the front page
were published in the first 2 weeks, before The Lancet publication. Median daily page proportion of articles on each newspaper
was 0.63, 0.43 and 0.10 in the first, second, and third weeks of the study, respectively.
Our analysis suggests that when peer-reviewed research is published after a health risk has been disclosed to the public,
its impact on the media is small. We think that in such cases the peer-review process should be expedited as much as possible to give the
public and health decision-makers scientifically sound and timely information on problems of great relevance to public health.
Centro di Riferimento AIDS e Servizio di Epidemiologia delle Malattie Infettive, Via Portuense, 292, 00149 Rome, Italy
Randomized Controlled Trial to Determine the Effect of Press Releases From a General Medical Journal on the Quantity and Quality of Press Coverage
Diana Bozalis,1 Vikki Entwistle,2 and Richard Smith3
To see if press releases describing scientific papers from the BMJ increase the quantity and quality of coverage in the lay international press.
The BMJ each week sends a press release that covers 3 to 5 papers to more than 200 journalists worldwide. Each week for 28 weeks the description of 1 of
the papers was randomly withdrawn from the press release. A researcher blind to what had been in the press release searched 3 electronic databases (Datastar, Profile, and Dialogue)
covering 44 newspapers for stories emanating from the BMJ. The length and position of the news stories were recorded. In addition, 2 researchers blind to what had been included in the
press releases independently scored all news stories for quality using a validated instrument. The null hypothesis was that mention in a press release would have no effect on the number,
length, position, and quality of news stories. We made a comparison between the stories produced from those papers included in the press release and those withdrawn.
The BMJ articles in the 28 weeks of the study led to 348 stories in the newspapers. Data on the length, position, and quality of the news stories are currently being collected.
The results will be available for presentation.
1Lupus Genetics Study, 9817 Ashley Place, Oklahoma City, OK 73120, USA; 2University of York, York, UK; 3BMJ, London, UK
Saturday, September 20
Are Medical Journal Editors Singing More Loudly and Preaching More?
To test the hypotheses that (a) the editors of the main general medical journals are writing more editorials at greater length and that
(b) those editorials are more likely to be preaching than other editorials.
Retrospective review of editorials appearing in the Annals of Internal Medicine, BMJ, JAMA, and the New England Journal of Medicine from
1986 through 1996 and in The Lancet from 1993 through 1996 (when editorials and commentaries were first signed). The total number of editorials (and, for The Lancet, commentaries),
the total number of column centimeters devoted to editorials, the number written by editors and other members of the editorial team, and the total number of column centimeters written
by editors and their teams will be determined to test whether the number, proportion, and length of editorials written by editors and their teams is increasing. “Preaching” will be measured
by counting the number of “musts” and “shoulds” in the editorials and, for comparison, in the first editorial immediately following each editorial written by the editors and their teams.
The proportion of “musts” and “shoulds” in editorials written by editors and their teams will be assessed, as well as whether that proportion is growing over time.
To be presented.
BMJ, BMA House, Tavistock Sq, London WC1H 9JR, UK
A Comparison of the Opinions of Recognized Experts and Ordinary Readers as to What Topics a General Medical Journal Should Address
George D Lundberg,1 Marshall Paul,2 and Helga Fritz1
To assess the extent of agreement between topics identified by experts and by JAMA readers as most important for publication.
A: Criterion standard of editorial board members and senior staff (ie, experts), using Delphi process. In 1996, 55 recognized
experts were asked to propose the topics most important for JAMA to deal with in 1997. Forty (73% response rate) proposed 178 topics.
Editing for crossovers and groupings left 73 topics. The same 55 persons were asked to stratify all 73 alphabetically arranged topics
on a scale of 1 to 5 and separately to choose the top 7 (85% response rate). The same 55 persons were given the results of this ballot
and asked to vote again (76% response rate). A discussion was held after all results were provided to the 40 of the 55 who attended the
annual editorial board meeting; 39 attendees voted on the final topics (98% response rate). B: Masked direct mail survey of a stratified
sample of JAMA readers. A single pass of the same 73 topics yielded a response rate of 45.8% (208 returns). Nonresponders were roughly equivalent to responders.
The final top 10 topics determined by the experts were (in order) managed care, death and dying, genetics, quality of care,
violence, aging, caring for the uninsured and underinsured, outcomes research, HIV/AIDS, and cancer. The final top 7 topics determined by the
experts were (in order) death and dying, aging, managed care, genetics, computers, cancer, and caring for the uninsured and underinsured. Reader
choices shared agreement with the experts on only 3 of the top 10 subjects, and on only 3 of the top 7 subjects.
Conclusion: Expert opinion and the opinion of readers as to what JAMA should emphasize differ widely.
1JAMA, 515 N State St, Chicago, IL 60610, USA; 2HCI, Inc, Princeton, NJ, USA
John Garrow,1 Michael Butterfield,2 Jacinta Marshall,2 and Alex Williamson2
To test the hypothesis that editors of medical clinical journals are active respected clinicians rather than respected editors.
We mailed 262 questionnaires to all editors of peer-reviewed clinical journals that had received at least 1,000 citations in the 1994 Science Citation Report.
The responses were to be anonymous.
Replies were received from 191 editors (73%). In 1994 the journals they edited had 6,060 citations (27,300/1,000 maximum/minimum), 234 (740/31) source items,
and an impact factor of 2.10 (18.3/0.2). Nonresponding editors’ journals had 7,910 (63,200/1,000) citations, 258 (979/21) source items, and an impact factor of 2.43 (13.8/0.3).
Of the responding editors, 95% were part-time, 69% treated patients, and 96% were male; 69% were aged 50-69 years, and 10% were older than 69 years.
The editorial office of 50% was located in the United States, 28% in the United Kingdom, 20% in other European countries, and 1.5% elsewhere. The editors were usually
recruited by election by a scientific society (30%), nomination by the previous editor (25%), or response to an advertisement (18%). Only 5% were selected by the publisher,
3% were elected by the editorial board, and 1% had started the journal. Most (65%) had been on the editorial team before becoming editor; 38% had served for more than 5 years.
Some reported formal training in editing (21%), writing (27%), statistics (25%), or reviewing (27%). There was no evident association between formal editorial training and the
status of the journal. Fifty did not favor editorial training, but 129 wanted apprenticeship to an experienced editor and/or short courses by experienced editors on all aspects
of editing, including statistics, ethics, and management.
Clinical journals are usually edited by practicing clinicians who are self-taught, part-time editors but are willing to accept further training.
1European Journal of Clinical Nutrition, Dial House, 93 Uxbridge Road, Rickmansworth, Hertfordshire WD3 2DQ, UK; 2BMJ Publishing Group, London, UK
What is the Lifespan of a Manuscript After Submission?
The time a journal takes to decide on a manuscript and to publish the accepted papers (the manuscript’s lifespan)
will vary according to the denominator chosen. The total lifespan is made up of time with journal, with reviewers,
with author, and in production. Some papers are published quickly (eg, items that are not sent out for peer review),
and inclusion of these will distort the mean in the journal’s favor. For instance, the database of The Lancet for 1996
shows that the mean time from receipt to acceptance for all items was 86 days (ranging from research articles to reviews and poems),
and that for the research manuscripts was 126 days (a 43% difference). The mean times from acceptance to publication for the 2 categories
were closer (70 and 86 days, respectively). More insidiously, journals can weight summary statistics in their favor by ignoring outliers or
using dates to their advantage (eg, the actual date of decision as opposed to the date an author is sent a letter about that decision).
Some journals publish their manuscript-handling times, but the denominators used are not always clear. Journals should agree on a standard
definition, and display handling times in a standard form. It is important to separate research manuscripts from other types of submissions.
Authors are interested in the bottom-line figure, but editors may think it fairer to weight the data for differences between journals in numbers
of submissions, printed pages, and numbers of staff.
The Lancet, 42 Bedford Sq, London WC1B 3SL, UK
Reporting of Randomized Clinical Trial Descriptors and Use of Structured Abstracts
Roberta W Scherer and Kay Dickersin
To assess whether the use of structured abstracts has changed the overall reporting of design
and operational characteristics (descriptors) of randomized clinical trials (RCTs).
We hand-searched 2 ophthalmology journals for reports of RCTs published in the year preceding,
the year of, and the year following the requirement for structured abstracts (1992, 1993, and 1994 for
Archives of Ophthalmology; 1991, 1992, and 1993 for Ophthalmology). We excluded reports stating that design
characteristics were not given because they had been published previously. We extracted RCT descriptors, as described
in the CONSORT statement (JAMA. 1996;276:637-639), from each report, and compared reporting of these descriptors in RCTs
using structured abstracts to RCTs not using structured abstracts.
We found 97 published reports of RCTs fulfilling our criteria, 46% (45/97) with a structured abstract
and 54% (52/97) with a nonstructured abstract. Those with structured abstracts appeared only in the year of (n=17)
or following (n=28) the journal requirement. Reporting of descriptors did not generally vary by abstract type,
except in a few cases. Reports with structured abstracts more frequently included either “random” or “trial” in the
title: 64% (29/45) vs 19% (10/52); described their statistical methods more often: 87% (39/45) vs 67% (35/52); and
used intention-to-treat analyses more frequently: 22% (10/45) vs 8% (4/52).
The presence of a structured abstract is associated with better reporting of a few RCT descriptors.
University of Maryland School of Medicine, 506 W Fayette St, Century Building, Baltimore, MD 21201, USA
Can the Accuracy of Abstracts Be Improved by Specific Instructions?
Roy M Pitkin and Mary Ann Branagan
To test the hypothesis that providing authors with specific instructions for preparing abstracts,
including identifying common errors, will result in improved accuracy.
Randomized controlled trial involving manuscripts reporting original research submitted to a peer-reviewed medical specialty journal and returned after review, with an invitation to revise. The intervention was an instruction sheet stressing the importance of an accurate abstract and identifying 3 common types of inaccuracies: (1) data inconsistent in abstract and body of the paper, tables, and figures; (2) data or information in the abstract that do not appear elsewhere; and (3) conclusions in the abstract not substantiated in the abstract itself. Revisions were examined specifically for accuracy of the abstract. Preliminary observations indicated an error rate of about 25%, leading to a projected sample size of 200.
One or more deficiencies were found in 28% (95% CI, 19%-37%) of manuscripts in which the author had been instructed and in 26% (95% CI, 18%-34%) of the uninstructed group. Of the 55 deficient abstracts, 28 had inconsistencies, 16 contained data not in text, 8 had both, and 3 included inappropriate conclusions. A spot check of 4 other journals (2 weekly and 2 monthly) found defective abstracts in 27%-65% of published articles.
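The reported confidence intervals are consistent with a normal-approximation (Wald) interval for a proportion. A minimal sketch, assuming roughly 100 manuscripts per arm (an even split of the projected sample of 200 is an assumption, not stated in the abstract):

```python
from math import sqrt

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    se = sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Instructed group: 28% of abstracts deficient; n = 100 per arm is an
# assumed split of the projected total sample of 200.
low, high = wald_ci(0.28, 100)
print(f"{low:.0%}-{high:.0%}")  # 19%-37%, matching the reported interval
```

The same call with p = 0.26 reproduces the uninstructed group's 18%-34% interval under the same sample-size assumption.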
Errors and inconsistencies in abstracts are common. Simply providing authors with instructions about abstract preparation is ineffective, and some other means of improving accuracy needs to be found. In the meantime, journals should pay particular attention to the accuracy of abstracts.
Obstetrics & Gynecology, 1100 Glendon Ave, Suite 1655, Los Angeles, CA 90024-3520, USA
The Perceived Value of Providing Peer Reviewers With Abstracts and Preprints of Related Published and Unpublished Papers
Christopher L Hatch1 and Steven N Goodman2
To assess the value that peer reviewers place on the provision of abstracts of related papers and of preprints of related unpublished manuscripts.
A questionnaire was designed to measure peer reviewers’ assessment of the actual or potential usefulness of provision of abstracts and preprints of related papers to their review and to the peer-review process in general. The questionnaire is being sent out to all peer reviewers as part of the normal review process for the Journal of the National Cancer Institute (JNCI). Similarly, the choice of reviewers to whom abstracts and preprints are sent is at the discretion of an editor managing a manuscript, a standard JNCI procedure.
Between February 24 and July 12, 1997, 366 questionnaires were sent to peer reviewers of submitted manuscripts. Of the 276 questionnaires distributed by June 12, 1997, 162 (59%) had been returned by July 12, 1997; about one-third of the reviewers indicated that they received abstracts and about one-eighth indicated that they received preprints of related manuscripts. More than two-thirds indicated that the provision of relevant abstracts or preprints helped (or could help) them in their evaluation of the originality of results reported in a manuscript; approximately three-quarters of all respondents felt that the provision of such materials would have a positive impact on the peer-review process in general. However, among those who said that they had actually received abstracts or preprints, only about one-third felt that their comments were directly affected.
1Journal of the National Cancer Institute, Room 557, 6011 Executive Blvd, Rockville, MD 20852, USA; 2The Johns Hopkins Oncology Center, Division of Biostatistics, Baltimore, MD, USA
External Refereeing of Protocols for Systematic Reviews
Lisa A Bero,1 Roberto Grilli,2 Jeremy Grimshaw,3 Emma Harvey,4 and Andrew D Oxman5
To determine whether external peer refereeing of protocols for systematic reviews decreases the time necessary for peer review and provides useful feedback to authors.
Randomized controlled trial. All protocols for systematic reviews submitted to the Cochrane Collaboration on effective professional practice were randomized to receive internal peer review (IPR) by the editorial team or external peer review (EPR) by 3 referees. After completion of the protocol, all reviews were externally refereed. We measured 1) time for refereeing protocol and completed review and 2) number of “major” comments (those that would be difficult or impossible to correct after the review is completed) and “minor” comments made by external referees. We also determined whether the suggestions made during protocol refereeing were followed.
External peer-review protocols were refereed in 56 ± 21 days (n=9) compared to 31 ± 37 days for IPR protocols (n=9). To date, 7 protocols have been completed as reviews. The time for refereeing of the completed reviews was 28 ± 20 days for those with EPR protocols (n=2) and 72 ± 22 days for those with IPR protocols (n=5). During protocol refereeing, 38% (10/26) of external referees contributed major comments. During review refereeing, 33% (2/6) of referees for reviews with EPR protocols made major comments, compared to 53% (8/15) of referees for reviews with IPR protocols. In 1 case, a major suggestion given by an external referee at the protocol stage was not followed.
External refereeing of protocols does not increase the time necessary for peer review and provides authors with substantive suggestions for improving their reviews.
1Institute for Health Policy Studies, University of California, San Francisco, 1338 Sutter St, 11th Floor, San Francisco, CA 94109, USA; 2Mario Negri Institute, Italy; 3University of Aberdeen, UK; 4University of York, UK; 5National Institute of Public Health, Norway
Characteristics of Meta-analyses Submitted to a Medical Journal
Donna F Stroup,1 Stephen B Thacker,1 Carin M Olson,2,3 and Richard M Glass3
To assess whether the methodologic quality of meta-analyses affects publication; and whether specific quality characteristics predict publication.
Case series of all meta-analyses submitted to JAMA, a weekly general medical journal with guidelines for submitting meta-analyses. One investigator identified meta-analyses according to predetermined criteria. Two other investigators, blinded to the authors’ identity, abstracted standard methodologic characteristics from the meta-analyses and rated their quality.
Seventy-six meta-analyses were evaluated from January 1, 1996, through June 30, 1997. The following percentages of meta-analyses demonstrated each characteristic with high or medium quality: 100% addressed appropriateness of pooling data; 99% generalized conclusions appropriately and defined the problem or stated a hypothesis;
93% considered alternative explanations for results; 92% specified the basis for selecting and coding data; 90% stated explicit inclusion and exclusion criteria; 89% used an appropriate statistical analysis; 89% showed an effort to include all available studies; 73% tested for heterogeneity; 63% provided guidelines for future research; 60% documented how data were classified and coded; 51% coded objectively; 42% assessed publication bias; 34% assessed study quality; and 32% gave enough detail for a reader to replicate results. Only 13% assessed the comparability of cases and controls. Among the meta-analyses, 18% displayed all but 1 or 2 of the characteristics; only 9% had less than half the characteristics. To date, publication decisions have been made on 55 meta-analyses. Because of the small number accepted (6/55 or 11%), preliminary results associating quality and publication would be unreliable and are not presented.
Meta-analyses submitted to a weekly general medical journal meet most methodologic standards.
1Epidemiology Program Office, Centers for Disease Control and Prevention, Atlanta, GA, 30333, USA; 2University of Washington, Seattle, WA, USA; 3JAMA, 515 N State St, Chicago, IL 60610, USA
The Cochrane Collaboration: Its Impact on the Quality of Systematic Reviews
Alejandro R. Jadad,1,2,3 Deborah J Cook,3,4 Alison Jones,5 Terry Klassen,6 Michael Moher,7 Peter Tugwell,5,8 and David Moher5
To compare the methodologic and reporting aspects of meta-analyses published by the Cochrane Collaboration with those published in paper-based journals indexed in MEDLINE.
Thirty-six completed reviews published in the Cochrane Database of Systematic Reviews (CDSR) in 1995 were compared with a random sample of 38 meta-analyses published in
1995 in 32 different journals and found by a refined MEDLINE search strategy. Using criteria defined a priori, the following information was extracted in duplicate from each report: number of authors,
trials and patients included in each review, description of the sources of trials, inclusion and exclusion criteria, language restrictions, quality assessment of primary trials, heterogeneity testing,
and quantitative effect estimates.
Meta-analyses found in MEDLINE included more authors (median: 3 vs 2, P&lt;.001), more trials (median: 13.5 vs 5, P&lt;.001), and more patients (median: 1,280 vs 528, P&lt;.001) than those in CDSR.
More reviews in CDSR than in MEDLINE reported the inclusion and exclusion criteria (35/36 vs 18/39, P&lt;.001) and assessed trial quality (36/36 vs 12/39, P&lt;.001). Fewer reviews in CDSR had language
restrictions (0/36 vs 7/39, P=.012). There were no differences in the number of sources of trials, in the frequency of heterogeneity testing, or in the description of the quantitative estimates.
MEDLINE was the source of trials in less than two thirds of the meta-analyses.
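The reported P values for the 2 × 2 comparisons can be checked, at least approximately, with a Pearson chi-square statistic. The abstract does not name the test actually used (an exact test would be preferable with such a sparse cell), so this is only an illustrative sketch using the counts as printed:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction) for a 2x2 table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Inclusion/exclusion criteria reported: 35 of 36 CDSR reviews vs 18 of 39
# MEDLINE reviews (denominators as printed in the abstract).
stat = chi2_2x2(35, 1, 18, 21)
print(stat > 10.83)  # True: exceeds the 1-df critical value for P = .001
```

The statistic comes out near 23.6, comfortably beyond the P&lt;.001 threshold the abstract reports.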
Cochrane reviews appear to have greater methodological rigor than meta-analyses published in paper-based journals. The limitations of this study (including possible confounding factors)
and future research directions will be discussed in relation to the Cochrane Collaboration’s peer review system as compared with that used by journals.
1Health Information Research Unit, McMaster University, 1200 Main St, W, Hamilton, Ontario L8N 3Z5, Canada; 2Canadian Cochrane Centre, McMaster University, Hamilton, Ontario, Canada; 3Department of Clinical Epidemiology & Biostatistics, McMaster University, Hamilton, Ontario, Canada; 4Department of Medicine, McMaster University, Hamilton, Canada; 5Clinical Epidemiology Unit, Loeb Medical Research Institute, Ottawa, Canada; 6Department of Pediatrics, University of Ottawa, Canada; 7General Practice Research Group, University of Oxford, Oxford, UK; 8Department of Medicine, University of Ottawa, Ottawa, Canada
The Quality of Discussion Sections in Reports of Controlled Trials Published in 1997 by Five Leading General Medical Journals
Michael Clarke1 and Iain Chalmers2
Several health-care journals have adopted the CONSORT recommendations to make it easier to assess the quality of controlled trials, reflecting the practical importance of these studies in guiding decisions in health care. Previous research has highlighted deficiencies in descriptions of the materials and methods used, and in the analysis and presentation of results. By contrast, the quality of discussion sections in trial reports has
received little scrutiny, yet it is impossible to judge the relevance of a particular trial unless its results have been set in the context of all other relevant research. Ideally this should involve the presentation of an up-to-date systematic review, as was done in the 1986 report of the ISIS-1 trial.
We assessed the Discussion sections in all 26 reports of randomized trials published during May 1997 in Annals of Internal Medicine, BMJ, JAMA, The Lancet, and the New England Journal of Medicine.
In only 2 reports were the results of the new trial discussed in the context of an updated systematic review of earlier trials. In a further 4 reports, reference was made to relevant systematic reviews, but no attempt was made to integrate the results of the new studies in an updated review. One report appears genuinely to have been the first published trial to address the question studied. In the remaining 19 reports, there was no evidence of a systematic attempt to set the new results in the context of previous trials.
These findings suggest that considerable scope remains for journals to help their readers to set the results of new trials in the context of preexisting evidence.
1Clinical Trial Service Unit, Harkness Building, Radcliffe Infirmary, Oxford OX2 6HE, UK; 2Cochrane Centre, Oxford, UK
Closing Some Legal Loopholes in the Peer Review Process
Philip C Swain
In a US federal court case that was scheduled to go to trial in November 1996, one of the issues that would have been presented to the lay jury was whether there were any legal restrictions preventing peer reviewers from using unpublished data from a manuscript to further their own research. Cistron Biotechnology of New Jersey alleged that a reviewer from Immunex Corporation of Seattle purloined a valuable DNA sequence from an unpublished manuscript and claimed it as Immunex’s own in a series of patent applications. The case was settled on the courthouse steps, with Immunex agreeing to pay Cistron $21 million and transfer its patents to Cistron. The legal issues raised, but not definitively resolved, by the Cistron litigation suggest that authors, journals, and reviewers may need to modify the way manuscripts are handled so that valuable intellectual property is protected. If protection of intellectual property during peer review is not ensured, the integrity of research that the peer-review process is designed to maintain may be compromised, and the communication of research may be slowed dramatically. In addition, without intellectual property protection, private industry (such as biopharmaceutical companies) will be less willing to sponsor and invest in biomedical research. As the lead attorney for Cistron Biotechnology, I will examine the facts and arguments made in the Cistron vs Immunex case, and discuss the legal aspects of the peer-review process. I will then propose a series of reforms to help ensure that the intellectual property of authors is maintained.
Foley, Hoag, & Eliot LLP, One Post Office Sq, Boston, MA 02109, USA
Phenomena of Retraction: Reasons for Retraction and Citations to the Publications
John M Budd and Mary Ellen Sievert
The study’s purpose was to examine a set of publications identified as being retracted in the biomedical literature to ascertain why the publications were retracted (and by whom) and to what extent they continue to be incorporated in subsequent work through citation. Previous work on the latter has been inconclusive. We hypothesized that the citations to retracted publications continue and include positive citations.
A search of MEDLINE identified a set of 214 retractions. The retraction notice was examined to determine the reasons for retraction, such as unavoidable error and misconduct. The retracted publications were classified according to the reasons for retraction and persons retracting. Following this classification, a sample of available citations to each retracted article was examined so as to determine how the article was cited (whether the problem leading to the retraction was noted, the article was simply mentioned in a literature review, or the article was used as substantive background in the citing paper). A chi-square test indicates whether the number of positive citations exceeds that which might otherwise be expected.
Reasons for retraction were somewhat varied. In 80 papers some error was acknowledged; in 37 results could not be replicated; in 73 misconduct was evident; and in 24 no clear reason was given. A total of 173 papers were retracted by some or all of the authors; 41 were retracted by a person or organization other than the author(s). The 214 retracted articles were cited 1,765 times after the retraction notice.
It is evident from these data that there are serious problems with the majority of retracted articles in biomedical science: error or misconduct affects the outcome of the published work. Even after retraction the articles continue to be cited in the literature. The majority of citing works give no indication of awareness of the retractions, and many of those citations indicate substantive use of the retracted work. The extent of continued citation of retracted articles is a cause for concern regarding the conduct of biomedical research.
School of Information Science and Learning Technologies, University of Missouri-Columbia, 104 Stewart Hall, Columbia, MO 65211, USA
Authorship: The Coins of the Realm, the Source of Complaints
The growing importance of authorship to funding and career advancement has led to speculation that authorship disputes are increasing. This is verified by the rising incidence of intellectual property disputes brought to the Ombuds Office over a 5-year period from faculty, trainees, and students of Harvard’s medical, dental, and public health schools. The Ombuds Office opened in 1991 and provides a confidential alternative to formal grievance procedures. Generic data are kept to provide upward feedback to the organization. Issues involving authorship, ownership, and professional misconduct are a rapidly growing portion of complaints, rising from 2% in 1991 to 11% in 1996 of the approximately 500 concerns reaching the Ombudsperson yearly. Overall, 53% of these types of complaints came from faculty, 26% came from trainees, while students and others comprised the remainder. Over the past 2 years, 100 people brought forward intellectual property issues. One fourth of these were from international visitors (faculty, fellows, and students). Their issues approximated those of the general population with the exception of concerns about ownership and plagiarism. The most often discussed issues are who owns what, what can be taken from a lab when leaving, and how to cite the works of others. Combining both US and international cases, most people (76%) are concerned about ownership, recognition for contribution of work, authorship inclusion, and authorship position. Common authorship dilemmas include “I’m leaving. How will I ensure credit for further work using my contribution?” “My paper has been sitting on my supervisor’s desk for 6 months. It needs to be read.” and “Though an author, I never inspected the article, never saw the reviews, and never signed off on it.” More serious professional misconduct concerns comprise the remaining 24% of complaints. 
Rudimentary institutional best practice would include alerting every potential author to the ethical expectations for the research institution, as well as institutional self-monitoring.
Harvard Medical School, 164 Longwood Ave, Room 304, Boston, MA 02115, USA
The Journal Ombudsman: A Precursor to a Scientific Press Council?
Altman et al were the first to call for the creation of an international medical and scientific press council (JAMA. 1994;272:166-167). The concern that editors may abuse the trust and power invested in them by authors, readers, and publishers had grown with the careful documentation of instances of unambiguous editorial misconduct. Altman et al invited the International Committee of Medical Journal Editors to “explore possible procedures for allowing authors’ grievances to be heard and for possible sanctions if complaints are upheld.” However, an obstacle seemed to be the logistic complexity of coordinating, at the international level, an appeals procedure through a single body. In an effort to open wider discussion about editorial accountability, The Lancet established an ombudsman in July 1996 (Lancet. 1996;348:6). Clear criteria, modeled on the UK Parliamentary Commissioner for Administration, were drawn up. The ombudsman could investigate delays in handling manuscripts and letters; editorial discourtesy; failure to follow stated editorial procedures; failure to take reasonable account of representations by authors and readers; and challenges to the publishing ethics of the journal. Complaints about the substance of editorial decisions were ruled out of the ombudsman’s remit. After 6 months, the ombudsman had received 8 complaints. These, and others submitted during 1997, will be reviewed. The effects of this initiative on The Lancet’s editorial process will also be discussed. Does this journal’s experience support a case for wider implementation of the ombudsman concept among scientific journals?
The Lancet, 42 Bedford Sq, London WC1B 3SL, UK
Peer Review in Bangladesh: An Analysis From a Journal Perspective
Hasan Sharef Ahmed
To assess the current status and process of editorial peer review of scientific journals published in Bangladesh.
Retrospective review of English-language journals available in 5 major libraries of Dhaka city and a face-to-face interview of 43 authors randomly selected from among research institutions and universities. Of them, 29 were university teachers and 14 were professional researchers who had published papers in journals.
Only 40 English-language journals were available. Of these journals, 10 were published irregularly, 14 had ISSNs, 7 contained inconsistencies, and only 2 were indexed. Thirty-one journals were published by institutions, and 9 were published by professional associations. Of the 43 authors interviewed, 38 believed that peer review helped improve their papers; 11 did not know if peer review was used in Bangladeshi journals. Twenty-five authors were satisfied with the editorial assessment of their papers, 16 were partially satisfied, and 2 were not satisfied. All the authors approached participated in the interview. Problems cited by the authors included poor review, harsh and difficult comments, unnecessary changes, omission of important points, failure to value authors’ opinions, noncooperation, and long review times. Authors’ suggestions to improve peer review included mandatory review by professionals from the same discipline; a panel of peer reviewers for each journal; and recognition, motivation, and incentives for reviewers.
Though the necessity of peer review is well understood by most authors, some problems still need to be addressed. Reviewers should be careful and tactful, avoid excessive editing and unnecessary changes, and respect authors and their views. At the same time, reviewers need recognition, motivation, and incentives for their work. A panel of peer reviewers for each journal is widely recommended.
Research and Evaluation Division, BRAC, 356 Mohakhali C/A, Dhaka 1212, Bangladesh
The Nature of Statistical Input in Medical Research
Douglas G Altman,1 Steven N Goodman,2 Frank Davidoff,3 and Richard Smith4
To investigate the nature of statistical input in medical research projects and its relation to the statistical methods used.
As an initial study of the nature of statistical input in medical research, we are conducting a survey of authors who submit papers to the British Medical Journal and Annals of Internal Medicine. Authors of papers that report the results of original research are being asked to complete a short questionnaire about the statistical input to their study. They are assured that completing it will not affect the editorial processing of their paper. Aspects being investigated include the availability and affiliation of a statistical consultant, the stage of the research project at which the consultant became involved, and the consultant’s contribution to the paper, or the reasons why no statistical consultant was brought in at any stage. We intend to include several hundred papers.
Answers to the questionnaire will be tabulated, and studied in relation to which statistical methods were used in the paper. Different types of study (eg, randomized trials) will be investigated separately. The data are being collected during the first part of 1997.
This study will provide important information about the nature of statistical input into medical research projects. The study is a prelude to a fuller investigation of the relation between the nature of statistical input and both the quality of the resulting papers and their chance of acceptance for publication.
1ICRF Medical Statistics Group, Centre for Statistics in Medicine, Institute of Health Sciences, Old Road, Headington, Oxford OX3 7LF, UK; 2Division of Biostatistics, Johns Hopkins University, Baltimore, USA; 3Annals of Internal Medicine, Philadelphia, USA; 4British Medical Journal, London, UK
Redundant Publication: Survey of Journal Editors and Authors
Deborah Barnes,1 Veronica Yank,1 Lisa Bero,1 and Drummond Rennie1,2
To examine the perspectives of journal editors and authors on redundant publication.
A pretested survey was mailed to the senior editor and 1 randomly selected author from clinical journals in the Abridged Index Medicus (n=99). Editors and authors were asked whether publication of manuscripts that overlap in various ways was justified and about journal review practices and policies. Consensus was defined as 67% or greater agreement within each group.
Fifty-two percent of editors and 49% of authors responded to the first mailing. There was consensus in both groups that publication of most types of overlapping articles was unacceptable; however, publication of 2 articles with overlapping data was considered more justifiable if their conclusions differed. In strong disagreement with editors, authors felt it was justifiable to publish segmented articles, and to publish 2 similar articles if 1 appeared in a non-peer-reviewed symposium proceeding. Forty-one percent of authors had notified an editor that a manuscript under their review overlapped with previously published work. On average, editors rejected 1 in 200 manuscripts due to substantial redundancy and found 1 in 500 papers to be redundant after publication. Forty-four percent of editors stated their journals did not publish a definition of redundant publication. Data related to reasons why redundant publication occurs and how it should be addressed will also be presented.
Editors and authors should work together to develop explicit guidelines that will clarify when it is or is not justified to publish manuscripts that overlap in varying ways.
1Institute for Health Policy Studies, University of California, San Francisco, 1388 Sutter St, 11th Floor, San Francisco, CA 94109, USA; 2JAMA, Chicago, IL, USA
An International Survey of Quality Control Procedures in Independent Drug Bulletins
Dominique Broclain,1 Pierre Chirac,1,2 and Ellen t’Hoen1
The International Society of Drug Bulletins (ISDB) is a membership organization of independent bulletins and journals on drugs and therapeutics. The Society was founded in 1986 to encourage development of drug bulletins free from commercial influences, promote exchange of reliable information, and facilitate international cooperation. A survey was undertaken to collect information on the quality control procedures of ISDB members.
A questionnaire was mailed to all ISDB members (n=51).
Data analysis covers 29 bulletins from 22 countries (response rate: 57%). The cumulative number of subscribers is 599,529. Readers are mostly medical doctors (72% ± 8%) or pharmacists (20% ± 7%). The bulletins are almost always written in the native language (n=27). Authors are frequently in-house (43% ± 13%). Half of the ISDB bulletins contain no spontaneously submitted articles (n=15). Most bulletins have a peer-review process (n=27). Declaration of conflicts of interest by authors (n=6), editors (n=7), and reviewers (n=8) is required by few bulletins. Authors are blinded to reviewers in 6 bulletins; reviewers are blinded to editors or authors in 4 bulletins. The average number of reviewers per article is 7.66 ± 3.5 (range, 1 to 50). Readers are sometimes used as reviewers (n=6).
Most ISDB bulletins, regardless of their size, resources, or location, have a quality control process including peer reviewing. Studies should be undertaken to explore the impact of these procedures on accuracy, validity, and clinical relevance of the information provided by ISDB bulletins.
1International Society of Drug Bulletins, BP 459, 75527 Paris cedex 11, France; 2La revue Prescrire, Paris, France
The Role of Statistical Review in a New Journal
Julia Brown,1 Janet Dunn,2 William Jones,3 David Machin,4 and Patricia Shevlin1
Clinical Oncology is a new journal (approximately 8 years old) covering all aspects of the management of patients with cancer. It is an international journal with contributions and subscriptions from many countries and an institutional subscription base of 202. It began a policy of statistical review of manuscripts in 1993. Three statisticians with experience in cancer research have led the review process. Articles that require epidemiologic input are referred to appropriate experts. All 3 statisticians are on the editorial board. All manuscripts are reviewed. There have been difficulties. One problem was deciding at what level to set the statistical standards in the early days, when the journal needed material to publish. A gradual process of raising standards was adopted. The review process has meant a large volume of work, frustration at the lack of time for detailed review, lack of recognition by employers of this contribution, and slow progress in achieving higher standards. However, over the 3 years improvements have been seen. In 1993, 35% of articles were either rejected or required revision. By 1996, this had risen to 70%, mainly as a result of the statistical input. In 1994, statisticians accepted 65% of the manuscripts without revision and returned 29% for revision and rereview. The corresponding figures in 1996 were 21% and 59%, respectively. The editor has the final say, but few statistical recommendations have been overruled. Statistical guidelines have been published.
1Yorkshire Clinical Trials and Research Unit, Arlington House, Hospital, Leeds LS16 6QB, UK; 2CRC Clinical Trials Unit, University of Birmingham, UK; 3United Leeds Teaching Hospitals, NHS Trust, UK; 4MRC Clinical Trials and Epidemiology Research Unit, Singapore
Relationship of Editorial Ratings of Reviewers to Reviewer Performance on a Standardized Test Instrument
Michael L Callaham,1,2 Joseph Waeckerle,1,3 and William Baxt1,4
To determine whether editorial ratings of peer reviewers accurately reflect a reviewer’s ability to detect manuscript flaws.
Thirty editors at Annals of Emergency Medicine routinely rate reviews of submitted manuscripts on a subjective 1 to 5 ordinal scale. All active reviewers were separately sent a fictitious test manuscript, containing 23 deliberate flaws, to review in the fall of 1994 (they were blinded to its true purpose). Reviewer ratings, reviewer performance calculations, and measures of reviewer experience were compared with reviewers’ ability to detect the test manuscript’s flaws.
Seventy-eight percent of those available to review in the fall of 1994 evaluated the fictitious manuscript; 127 reviewers detected a mean of 3.4 (SD 1.6) of the 10 major flaws and 3.1 (SD 1.7) of the 13 minor flaws. These same reviewers reviewed a mean of 7.5 genuine submitted manuscripts (SD 4.2) between January 1994 and August 1996 and were rated by 30 editors. The mean editorial rating for each reviewer was modestly correlated with the number of major or minor flaws detected (R=.44 and .43, respectively). Each rating point equated to 1 more major error detected. Individual reviewer acceptance rate and congruence with editorial decisions were not associated with detection of errors (R=.22 and .16, respectively). Years of experience reviewing, volume of reviews for the journal, number of authored manuscripts, and combinations thereof did not predict performance on the fictitious manuscript or reviewer ratings.
A subjective ordinal rating scale applied by editors to reviews of submitted manuscripts was only modestly correlated with the ability of a blinded reviewer to detect deliberate flaws in a test manuscript. Measures of individual reviewer acceptance rate, reviewer congruence with editorial decisions, and reviewer experience were not correlated with error detection.
1Annals of Emergency Medicine; 2University of California, San Francisco, PO Box 0208, San Francisco, CA 94143-0208, USA; 3University of Missouri at Kansas City, School of Medicine, Kansas City, MO, USA; 4Department of Emergency Medicine, University of Pennsylvania, Philadelphia, PA, USA
Commercial and Scientific Conflict in Biomedical Publishing: The Perspective of an International Pharmaceutical Company
Paul M Woodcock and Hamish A Cameron
The rules on publication and the process of peer review have evolved to balance the competing forces surrounding the sponsorship, conduct, and reporting of clinical trials. Company-sponsored trials have changed clinical practice, yet concern persists within the medical profession about commercial influence on the publication process. Zeneca Pharmaceuticals is aware of these concerns but, as a research-based company, has a genuine scientific and educational need to communicate findings from sponsored research involving its products. For such publications to have credibility with peer reviewers and achieve acceptance by major journals, it is essential that the authors of company-sponsored research, whether or not they are company employees, produce manuscripts that are accurate, meet best ethical and professional standards, and protect the interests of patients while not compromising the intellectual property of the company. To guide employees faced with these complex issues, Zeneca has introduced a policy for external publication of scientific information. The policy has been developed in consultation with publishers, journal editors, and investigators and includes topics such as patient safety, publication standards, authorship, trademarks, ghostwriting, conflict of interest, and copyright issues. Thus, Zeneca intends to develop publications that meet accepted ethical standards while providing effective support for the presentation of Zeneca products to the scientific community. This presentation will highlight the dilemma facing those with both commercial and scientific objectives in biomedical publishing from the perspective of a major international pharmaceutical company.
Medical Affairs Department, 11F128, Zeneca Pharmaceuticals, Mereside, Alderley Park, Macclesfield, Cheshire SK10 4TG, UK
The Effect of Informing Referees That Their Comments Would Be Exchanged on the Quality of Their Reviews
Savitri Das Sinha, Peush Sahni, and Samiran Nundy
The quality of manuscript refereeing in developing countries is thought to be poor. Because we send each original and review article submitted to The National Medical Journal of India to both an Indian and a non-Indian (usually Western) referee, we tested whether informing a pair of referees that their comments would be exchanged improved the quality of their reviews.
In a prospective, randomized, blinded study, we sent 100 manuscripts to pairs of referees; 78 pairs of replies were suitable for analysis (the others were incomplete). Thirty-eight pairs of reviews were exchanged and 40 were not. The quality of the reviews was assessed by 2 editors who were unaware of the referees’ nationality and of whether the referees had been given this information. Quality was scored out of 100 (based on a predesigned evaluation proforma) according to whether the review examined the importance of the question, targeted the key issues, assessed the validity of the methods, assessed the quality of presentation, and provided an overall assessment.
Overall, non-Indian referees scored higher than Indians (mean scores, non-Indians first, 56.7 vs 48.6, P<.001), especially in the nonexchanged group (58.4 vs 47.3, P<.001) but not the exchanged group (54.8 vs 50.0, P<.06). Being informed that reviews would be exchanged did not affect the quality of reviews by non-Indians (54.8 exchanged vs 58.4 nonexchanged) or by Indians (50.0 exchanged vs 47.3 nonexchanged). The assessments of the 2 editors matched well (r=0.59, P<.001).
In this study, non-Indian referees were better reviewers than Indian referees, and telling referees that their reviews would be exchanged did not seem to make much difference.
The National Medical Journal of India, AIIMS, Ansari Nagar, New Delhi 110 029, India
A Typology of Scientific Originality: Initial Validity Testing
A typology of scientific originality was derived from the following elements of the scientific article: Introduction, Methods, and Results, which describe, respectively, the hypothesis, methods, and results of the scientific work being reported. For any article, the content of each section is either previously published in the scientific literature and thus “established” (E) or newly published (N). Permuting the 3 elements as E or N yields 8 (2³) idealized types of originality, idealized because much, if not most, of science consists of modifications of previously published work. As a preliminary test of the validity of the typology, experienced scientists were asked to use the typology in 2 exercises.
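The enumeration behind the typology can be sketched in a few lines of Python; the element names and E/N labels come from the abstract, while the code itself is merely illustrative:

```python
from itertools import product

# The 3 elements of a scientific article considered by the typology.
ELEMENTS = ("hypothesis", "methods", "results")

# Assigning each element either E (established) or N (new)
# yields the 2^3 = 8 idealized types of originality.
types = [dict(zip(ELEMENTS, labels)) for labels in product("EN", repeat=3)]

for i, t in enumerate(types, start=1):
    print(f"Type {i}: {t}")
```

Under this ordering, the first permutation is all-E (fully established work) and the last is all-N (work new in all 3 elements); the abstract itself does not specify a numbering of the 8 types.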
By mail survey, scientists who had authored highly cited, peer-reviewed biomedical articles (Citation Classics, Institute of Scientific Information) were asked to (1) rate the 8 idealized types of originality and (2) classify the hypothesis, methods, and results reported in their highly cited articles as E or N or “modified”; if modified, the scientists were asked to further characterize the work as slightly or substantially modified. For analysis, however, “modified” was transformed to E to highlight elements classified as N.
Of 301 scientists, 205 responded to the survey and reported on 230 Classics. Eighty-one percent (n=167) of the scientists rated the 8 idealized patterns of originality, and the scientists classified 91% of the articles (n=209).
The results of this study indicate that the typology may have validity as a measure of scientific originality; more than 150 experienced scientists agreed to and were able to use the typology in 2 different exercises. The scientists’ analyses of their articles’ originality, however, may be biased; therefore, these data should not be considered an indication of the articles’ originality. Because authors of scientific articles are required to clearly state in their articles what is new and what has been previously published (through citation), using statements with citations in each section of an article to analyze its originality may provide a consistent, objective measure of scientific originality. Therefore, as a test of reliability, the analyses of articles’ originality produced by the authors could be compared with analyses of the same articles produced by nonauthors blinded to authors’ analyses. If the typology is too complex to be useful for scientific journal peer review, this technique may at least prove to be a useful heuristic in studying the process of science.
Institutional Review Board, University of Florida, Health Center, Box 100173, Gainesville, FL, 32610, USA
Time Trends and Recent Acceptance Rates for Publication of International Manuscripts in Leading US Medical Journals
Timothy C Fagan
To determine current acceptance rates and the percentage of original investigations from international (non-US) sources over a 20-year time span in 2 general medical and 2 internal medicine journals.
Time Trends: All original investigations published in 1972, 1982, and 1992 in the 2 leading US general medical journals, JAMA and New England Journal of Medicine, and the 2 leading internal medicine journals, Annals of Internal Medicine and Archives of Internal Medicine, were reviewed for country of origin. Rates were determined by country and geographic area. Statistics were by chi-square test for trend. Acceptance Rates: Total submissions, number accepted (acceptance rate), and country of origin for calendar year 1995 were obtained from the 2 general medical journals and the 2 internal medicine journals. Mean acceptance rates were calculated by journal type.
Time Trends: For general medical journals, there was an increase in articles from non-US sources; 8.6% in 1972, 13.0% in 1982, and 19.2% in 1992, P<.001. For internal medicine journals, international articles increased from 8.4% in 1972 to 10.9% in 1982, and to 17% in 1992, P<.01. Canada had the highest percentage of total articles, and Europe ranked highest of non-North American geographic areas for both types of journals. Acceptance Rates: US mean acceptance rates were 16% for general medical journals and 23% for internal medicine journals. Corresponding mean acceptance rates were 17% and 35% for Canada, and 13% and 18% for Europe. For non-US, English-speaking countries, mean acceptance rates were 16% for general medical journals and 33% for internal medicine journals; for non-English-speaking countries mean acceptance rates were 6% and 13%.
The percentage of international manuscripts published in leading US general medical and internal medicine journals increased significantly from 1972 to 1992. Europe accounts for almost all of the increase. Potential reasons include improved science in non-US countries, improved ability to communicate in English, and the globalization of medicine due to improved and increased communication and dissemination of medical information. In 1995, except for Canada and Western Europe, acceptance rates for international manuscripts were lower than US rates at both general medical and internal medicine journals. Acceptance rates for English-speaking countries were more than twice those for non-English-speaking countries. Potential reasons include scientific quality, difficulty writing in English, bias against manuscripts from certain countries, and submission of the best manuscripts to national, rather than US, journals.
Archives of Internal Medicine and Department of Medicine and Pharmacology, University of Arizona College of Medicine, Tucson, AZ, 85724, USA
Is Peer Review Technologically Obsolete?
Susan Feigenbaum1 and David M Levy2
To document trends in the statistical practices of researchers over the past 3 decades; link these trends to changes in the “price” of computing technology; and explore the implications for traditional peer-review processes as guarantors of research integrity and robustness.
A retrospective study was conducted based on all articles published in the 1960 through 1989 issues of The Review of Economics and Statistics. For each article, data were gathered on statistical technique, size of data set, number of control (explanatory) variables and results reported, number of coauthors, and research funding sources.
From 1960 through 1989, the median number of observations per article increased from 123 to 385; the median number of control (explanatory) variables increased from 1 to 11; the median number of reported results decreased from 7.5 to 6. In contrast, the maximum “frontier” values for data set size and number of reported results peaked (at 349,060 and 150, respectively) and fell during the period, while the maximum number of control variables (91) rose throughout. The percentage of articles using a linear estimation technique fell from 100% to 22%; the percentage using nonlinear maximum likelihood estimation rose from 0% to 55%.
Statistical practices in published research have become more complex, thereby compromising the ability of peer reviewers to assess the integrity of the work at low cost. At the same time, a low-cost signal of research quality, the reputation of the author, has been diluted by the increasing frequency of multiple authorship. Peer review must develop new strategies to match reviewers and submissions more finely if it is to continue as a primary mechanism for assuring journal research quality.
1Department of Economics, University of Missouri-St Louis, St Louis, MO 63121, USA; 2Center for Study of Public Choice, George Mason University, Fairfax, VA, USA
Use of Reviewers by Clinical Journals
John Garrow,1 Michael Butterfield,2 Jacinta Marshall,2 and Alex Williamson2
To investigate the use of external reviewers by editors of clinical journals.
We mailed 262 questionnaires to all editors of clinical journals that received at least 1,000 citations in 1994.
Replies were received from 191 editors (73% response). Of responding editors, 7% used 1 reviewer, 63% used 2, 25% used 3, and 4% used more than 3. Sixty-four percent used checklists to aid reviewers. About half (46%) of editors personally reviewed every paper submitted. Only 20% of editors blinded reviewers to authors, but 93% blinded authors to reviewers. Most editors (69%) did not feel bound to accept the advice of reviewers. Among journals that maintained a database from which reviewers were drawn, the number listed ranged from 30 to 12,000; some editors had no database but consulted MEDLINE to select suitable reviewers. Most (86%) restricted the number of papers sent to individual reviewers so as not to overburden them. The proportion of editors who sent at least half the papers submitted to reviewers outside their own country was 8/88 (9%) for editorial offices in North America, 23/56 (41%) in the UK, and 33/45 (73%) for other countries.
Typically, clinical editors use 2 or 3 reviewers, and 69% do not feel bound to follow the advice they receive, so editing at the “thalamic level” (Lock 1985) applies only to a minority of clinical editors. North American editors use few reviewers outside North America.
1European Journal of Clinical Nutrition, Dial House, 93 Uxbridge Road, Rickmansworth, Hertfordshire WD3 2DQ, UK; 2BMJ Publishing Group, London, UK
Reviewers’ Attitudes Toward an Enhancement of the Peer-Review Process
Robert Goldberg1 and James E Dalen2
We recently published an editorial in the Archives of Internal Medicine designed to enhance the peer review of scientific manuscripts (Arch Intern Med 157:380, 1997). To improve reviews, each of the major study designs used in clinical and epidemiologic studies was discussed, including case-control studies, cross-sectional studies, and randomized controlled trials, and a checklist of factors to consider in reviewing each design was provided. After the editorial was published in February 1997, we surveyed reviewers by mail during the spring about the usefulness of the article.
Among survey respondents, 145 (72%) said that they would use the guide presented in the published editorial for reviewing observational studies and randomized trials, while 57 said that they either would not use the guide (n=19) or were unsure (n=38). The majority (87%) of respondents said that they would use the checklists provided to critique an article. A majority also suggested that we send a copy of the editorial to all future reviewers of manuscripts for the Archives.
The results of this survey suggest a positive response to enhancing the peer review of submitted manuscripts to a major medical journal.
1Division of Cardiovascular Medicine, Department of Medicine, University of Massachusetts Medical School, 55 Lake Ave North, Worcester, MA 01655, USA; 2University of Arizona School of Health Sciences, Tucson, AZ, USA
Peer Review or Practical Reason? An Argumentative Approach
To limit the risk of unacknowledged bias in peer review, serious attempts have been made to systematize appraisal procedures. Although checklists provide a crude normative framework to regulate the conduct of peer review, they do not focus on what should be the central concern of any research report: the validity of the core argument made by authors. The structure of arguments and the processes of reasoning have been studied by Stephen Toulmin (The Uses of Argument. Cambridge: Cambridge University Press; 1958). He described the geometry of an argument in a way that is directly applicable to the scientific paper. According to Toulmin, the structure of any argument can be represented by a simple formula:
D ____________________________________ Q, I
          |                        |
       since W                unless R
          |
   on account of B
where D represents the primary data; I, the interpretation; W, a warrant; Q, a qualifier (a measure of the force of an argument); R, the conditions of rebuttal; and B, the information backing W. For Toulmin, the relation between D and I, the argument, is determined by the justification or the warrant. Symbolically, an argument proceeds thus: if D, then I, since W. The aim of any scientific investigation is to establish the warrant. The warrant authorizes the argument; it is the subject of the hypothesis being tested. In biomedicine, Q represents a probability function; B, the degree of random and systematic error; and R, the inclusion criteria for the study (generalizability). These 6 features are essential for evaluating the internal and external validity of the argument being made. This method of argument analysis, if applied to the scientific research paper, offers an opportunity for a systematic yet more logical approach to peer review.
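Toulmin’s formula can equally be read as a data structure. The sketch below takes its field meanings from the glossary above, but the class and the example values are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Argument:
    """Toulmin's geometry of an argument, per the glossary above."""
    data: str            # D: the primary data
    interpretation: str  # I: the claim drawn from D
    warrant: str         # W: the justification linking D to I
    qualifier: str       # Q: force of the argument (a probability, in biomedicine)
    rebuttal: str        # R: conditions under which the argument fails
    backing: str         # B: the information supporting W

    def render(self) -> str:
        # "If D, then I, since W (Q), unless R, on account of B."
        return (f"If {self.data}, then {self.interpretation}, "
                f"since {self.warrant} ({self.qualifier}), "
                f"unless {self.rebuttal}, on account of {self.backing}.")

# A hypothetical clinical-trial argument rendered in Toulmin form.
arg = Argument(
    data="treated patients survived longer than controls",
    interpretation="the drug prolongs survival",
    warrant="survival differences reflect a treatment effect",
    qualifier="P<.05",
    rebuttal="patients over 75 were excluded",
    backing="randomization limited systematic error",
)
print(arg.render())
```

A reviewer applying the method would ask whether each of the 6 fields is actually supported in the manuscript, rather than working down a generic checklist.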
The Lancet, 42 Bedford Sq, London WC1B 3SL, UK
Civil War and Scientific Activity in the Former Yugoslav Republics
The aim was to assess the number of scientific publications from the Yugoslav republics before, during, and after the civil war that lasted from 1991 to 1995.
The number of articles published in journals indexed by the Science Citation Index (SCI) from Serbia, Croatia, Slovenia, Bosnia-Herzegovina, Macedonia, and Montenegro was counted for each year from 1988 to 1996.
In 1988, scientists and researchers in the former Yugoslavia published 2,197 papers in journals indexed in the SCI, with an average annual increase of 18% over the next 2 years. Of these publications, Serbia produced 42%, Croatia 32%, Slovenia 20%, Bosnia-Herzegovina 3%, Macedonia 2.5%, and Montenegro 0.5%. During the years of war, the number of publications dropped significantly in 2 republics, Bosnia-Herzegovina and Serbia. The lowest production occurred in 1994, when the number of published papers dropped in 4 republics: Bosnia-Herzegovina (36% of prewar volume), Montenegro (36%), Serbia (69%), and Croatia (92%). In 1994, the volume of published papers was higher in Slovenia and Macedonia. In 1996, the first postwar year, there was slight decline or stagnation in the volume of published papers in comparison with the previous year in all republics. The majority of these papers were published in foreign (international) journals. In 1988 and 1989, 5 Yugoslav journals were indexed in the SCI (3 from Croatia, 2 from Serbia); from 1990 to 1993, 3 journals from Croatia were indexed; and from 1994 on, only 1 journal from Croatia was indexed.
(1) Scientific output was hindered by the civil war. Scientific productivity dropped sharply in Bosnia-Herzegovina and Serbia: the former was an arena of the war, while Serbia was under the sanctions of the United Nations. (2) The civil war did not influence scientific productivity in Slovenia and Macedonia. (3) Scientific publication in Croatia was not reduced during the war years because the war zones were in Serb-populated regions of Croatia, where scientific activity was practically nil before the war. (4) In the first postwar year there was stagnation of scientific publication in all former Yugoslav republics. (5) These Balkan nations will peaceably move forward, both scientifically and culturally, with the help of the international community (Andrić I. The Bridge on the Drina. Chicago, Ill: The University of Chicago Press; 1977). All scientists, as Garfield states, “are one intellectual community,” and researchers from other countries worldwide should help to restore and upgrade research and publication in the affected regions (Current Contents. 1988;16:3-7).
Department of Anesthesiology and Pain Management, Cook County Hospital, Chicago, IL 60612, USA
Statistical Inaccuracies in Peer-Reviewed Articles
Shelley Johnson and Adel Mikhail
Clinical studies are published with the assumption that the statistics are valid. However, careful dissection of articles from peer-reviewed journals frequently exposes simple statistical errors.
A MEDLINE search identified 27 articles published from 1989 through 1993 reporting clinical follow-up of patients with prosthetic heart valves. Data on morbidity and mortality were extracted from each report. This study focused on the accuracy of the statistical calculations.
Errors discovered were separated into 3 categories: addition/subtraction errors, incorrect calculation of simple percentages, and mistakes in calculating linearized rates. The majority of articles (16/27, 59%) contained no errors. However, 22% of the articles (6/27) had errors in addition/subtraction, and simple percentages were inaccurate in an equal number of articles. Miscalculation of linearized rates was detected in 15% (4/27) of the articles. Five of the publications had errors in 2 categories.
Review of prosthetic heart valve articles published during a 5-year period frequently demonstrated statistics inconsistent with the data reported. Errors are widespread, and the current peer-review system apparently does not detect them. Ironically, the errors are usually simple mistakes that could be eliminated through careful final review before publication. Unfortunately, when easily calculated rates are incorrectly reported, doubt is cast on the accuracy of the entire clinical investigation.
Medical Incorporated, 9605 West Jefferson Trail, Inver Grove Heights, MN 55077, USA
Evaluation of the Three-Step Review Process of Three Chinese Peer-Reviewed Journals
To ensure the academic quality of the peer-reviewed journals of the Chinese Medical Association, we apply a 3-step review process (first review by editors, second review by experts, and third review by the editorial board) to submitted manuscripts. The objective of this study was to evaluate the importance and reasonableness of the 3-step review process.
We collected and analyzed the reviewers’ suggestions and comments for the Chinese Journal of Internal Medicine, Chinese Journal of Obstetrics and Gynecology, and Chinese Journal of Pediatrics. For manuscripts that were not accepted, we counted how many were returned during the first, second, and third review steps and calculated manuscript return rates. For published manuscripts, we analyzed the number of review rounds, the review duration, and the number of suggestions on the review list. Coincidence rates for accepted and rejected manuscripts were calculated to evaluate review quality.
A total of 2,837 manuscripts for the Chinese Journal of Internal Medicine, 2,179 for the Chinese Journal of Obstetrics and Gynecology, and 3,090 for the Chinese Journal of Pediatrics were received. Sixty-two percent of them entered the second review step. Thirty-eight percent of the manuscripts were sent back during the first review, 24% during the second, and 9% during the third. Twelve percent of all manuscripts were published within a year. Taking original articles as an example, the first and second reviews each took 17 days. An average of 7 suggestions was raised per review. The coincidence rates for the first, second, and third revisions during the second review step were 96.8%, 99.4%, and 100%, respectively, for published papers and 43.0%, 87.7%, and 99.8%, respectively, for returned papers.
The first review provides fundamental information, the second review is the key step, and the third review makes the final decision; the 3 steps are interdependent. The coincidence rate for accepted papers suggests that a single second review is not enough. To select papers of high quality, repeated review is recommended, and the proportion of manuscripts circulating at each step should be better controlled.
Chinese Medical Association, 42 Dongsi Xidajie, Beijing 100710, China
Peer Review and the Credibility of Scientific Biomedical Journals of Developing Countries
Guillermo J Padron and Jose V Costales
To evaluate the effect of improvements of the peer-review process on the quality and credibility of Biotecnologia Aplicada.
Changes introduced in 1995 were: a) a body of referees was created on the basis of previous referee experience, professional experience, and merits in scientific publications; b) referees were fully and regularly informed about peer review and journal policy and given feedback on the results of their work; and c) a collaborative (double-blind) approach between referees and authors was encouraged. The impact of improving peer review was evaluated by comparing the numbers of submissions, rejected and published papers, subscriptions, and advertisement orders in 1996 with those in the period 1992 to 1994.
The number of rejected papers increased from 13 (1992-1994) to 28 (1996). The number of articles published in 1996 was almost double that of 1992-1994. Contributions from other countries increased more than 5-fold, while local articles remained at the same level. Accepted papers were returned to the authors for modification a mean of 1.5 times during 1992-1994 and 3.5 times in 1996. Referees felt stimulated by the editorial policy. The publication frequency of the journal increased from 3 to 4 issues per year. Subscriptions doubled, while paid advertisement orders increased 5-fold.
The upgrading of peer review increased the quality and credibility of Biotecnologia Aplicada. Improving the peer-review process should be a strategic turning point for scientific journals of developing countries in their quest for credibility and international acceptance.
Elfos Scientia, PO Box 6072, Havana 6, Cuba
A Structured Review of the Literature on Peer Review
Jean-Pierre Pierie, John H Hoogeveen, and John Overbeke
To investigate what original work is published on peer review.
We searched for articles, and the references of those articles, using the following keywords in the title for selection: adviser, peer review, decision, quality, referee, acceptance, rejection. Obvious (1-page) comments and editorials were excluded.
The search of the primary sources retrieved 117 articles, of which 47 turned out to be original articles on editorial peer review. Searches in MEDLINE, Social Sciences Citation Index, and Embase added 22, 14, and 3 original articles, respectively. Another 4 articles were retrieved from European Science Editing. Thirteen of the 90 articles were found to be reports of prospective studies on various aspects of the peer-review process.
Relatively little research has been done on the editorial peer-review process. Most articles (47/90) were retrieved from the primary sources, and only a small proportion (13/90) were prospective studies. Additional (randomized) research into the editorial peer-review process is needed.
Nederlands Tijdschrift voor Geneeskunde, PO Box 7591, 1070 AZ Amsterdam, The Netherlands
Peer Review: Recommendation Is Based on Few Aspects
Karl-Ludwig Resch,1 Annegret Franke,1 and Edzard Ernst2
To determine which of the aspects a peer reviewer is commonly asked to comment on are vital, and which are possibly redundant.
As part of a study into peer review, a bogus letter to the editor was sent to 291 medically trained participants of an interdisciplinary international conference, who were debriefed only after the study. They were asked to comment on the following quality criteria by indicating their position on visual analog scales, ie, horizontal lines between 2 given extremes (poor to excellent): relevance of subject, formulation of hypothesis, randomization, inclusion/exclusion criteria, sample size, statistical evaluation, choice of main outcome measures, follow-up, clarity of description, linguistic quality, overall quality of the study, and overall quality of the manuscript. By means of multiple regression analysis, we tried to identify the optimal set of items (best prediction with fewest items) to predict overall quality of the manuscript.
Response rate was 63% (183/291); 127 responses were suitable for evaluation. Significant but moderate bivariate correlations were observed between overall quality of manuscript and relevance of subject (r=0.51), formulation of hypothesis (r=0.71), statistical evaluation (r=0.54), choice of main outcome measures (r=0.66), clarity of description (r=0.71), and linguistic quality (r=0.52). Multiple regression analysis, however, revealed a multiple adjusted R2 of over 0.75 with a set of only 3 items (ie, clarity of description, choice of main outcome measures, and formulation of hypothesis). Stepwise inclusion of further items resulted in only a minute increase in R2.
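A stepwise selection of this kind — greedily adding the item that most improves adjusted R2 and stopping when the gain becomes negligible — can be sketched as follows. This is a minimal illustration, not the study's actual analysis; the predictor names and the stopping threshold are assumptions.

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def forward_select(X, y, names, threshold=0.005):
    """Greedy forward selection: repeatedly add the predictor that most
    improves adjusted R^2; stop when the gain falls below `threshold`."""
    chosen, best = [], -np.inf
    remaining = list(range(X.shape[1]))
    while remaining:
        score, j = max((adjusted_r2(X[:, chosen + [j]], y), j) for j in remaining)
        if score - best < threshold:
            break
        best = score
        chosen.append(j)
        remaining.remove(j)
    return [names[j] for j in chosen], best
```

On rating data shaped like the study's (127 usable responses, a handful of visual-analog items), such a procedure would return the small subset of items that predicts the overall manuscript rating almost as well as the full set.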
In this setting, the peer reviewer’s recommendation was well predicted by a limited set of criteria. In other settings this may be different. Nevertheless, it seems promising to direct more research into the development of a standardized (and validated) assessment form.
1Forschungsinstitut für Balneologie und Kurortwissenschaft, Lindenstr 5, 08645, Bad Elster, Germany; 2Dept of Complementary Medicine, University of Exeter, Exeter, UK
The Educational Role of Peer Review in a Medical Journal From a Developing Country
Humberto Reyes, Ronald Kauffmann, and Max Andresen
Ours is a medical journal published monthly in a developing country, covering papers by local authors on topics in internal medicine, its subspecialties, and related experimental research. Articles are published in a non-English language, with titles and abstracts translated into English. Research articles (50-55 per year) and clinical reports (45-50 per year) represent 50%-60% of published manuscripts. Since 1980 the journal has followed the recommendations of the International Committee of Medical Journal Editors. Every submitted manuscript is confidentially peer reviewed prior to the editor’s decision. Annually, over 150 reviewers from a wide variety of backgrounds (clinical investigators, learned specialists, biostatisticians, and basic scientists) participate in this process. About 23% of the manuscripts are accepted in their original version, 7% are rejected, and 70% require rewriting by the authors to address the editorial board’s criticisms. Of these, 90% are resubmitted in corrected form and accepted. We have assumed the role of guiding authors in improving their manuscripts, instead of merely deciding what is worth publishing. This has meant shortening of the manuscripts (in 23% of them), and a more adequate definition of objectives (16%), description of methods (17%), description of patients and selection criteria (9%), statistical analysis (17%), discussion (19%), and/or conclusions (18%). We propose that international standards of quality and the peer-review system be applied by medical journals in developing countries with an educational perspective that will stimulate the diffusion of biomedical progress.
Revista Médica de Chile, Casilla 168, Correo 55, Santiago 9, Chile
Attribution of Credit by Contribution
Timothy C Smith and Peter V Scott
To determine the relationship between author placing and status in medical papers.
Journals in our library of general or anaesthesia content, which were published in November 1996, were included. Single author papers and journals omitting author status were excluded. Status was assessed for medical and academic rank, using an ordered list.
Our library held 9 different journals of general or anaesthetic content in 1996, containing a total of 3,872 research articles. The bimonthly journal issued in October and December was excluded. Five journals did not document author status. Three journals (7 individual issues) satisfied the criteria. One non-clinical paper was excluded. Fifty-six papers were reviewed. The highest-ranked author was the first author in 5 papers (9%) and definitely the final author in 28 (50%). The final author was 1 of the highest ranked in 44 (79%).
Credit and responsibility for research are currently attributed to the authors, who should meet the requirements of the International Committee of Medical Journal Editors. Many authors do not meet these criteria, devaluing authorship in general and multiauthorship in particular. In our study, the final author held the highest rank and the first author was usually a trainee. Most work is seemingly done by the first author with only partial contributions from the others, and the departmental head or other senior figure may simply “rubber-stamp” the finished paper. If this is so, then minimal contributions are rewarded and substantial contributions by authors other than the first are undervalued. We will propose a method of appropriately rewarding partial contributions to reduce gift authorship and improve the value of authorship.
Dept of Anaesthesia and Intensive Care, Alexandra Hospital, Woodrow Dr, Redditch, Worcestershire B98 7UB, UK
Earning Authorship: A Survey of Researchers’ Views
R Stacy, P H Pearson, E F S Kaner, B G Vernon, J M Rankin, R S Bhopal, E McColl, L H Thomas, and H Rogers
To explore researchers’ perceptions of authorship issues.
Interview-based survey of a sample of academic staff in 1 British medical faculty, stratified by seniority and selected by computer-generated random numbers. Interviews were semistructured, incorporating both closed and open-ended questions. Qualitative analysis of responses was thematic using a grounded theory approach.
Sixty-six staff (70 contacted; 94% response rate) completed the interview. Respondents were both unclear about and critical of the criteria of the International Committee of Medical Journal Editors (ICMJE), but identified the need for some rules of authorship. They recognized that allocation of authorship is complex. They questioned the meanings and interpretation of words used in the criteria. Issues of entitlement, recognition, and responsibility were raised. The findings showed that authorship is used as a form of academic currency linked to power differentials and career advancement. Respondents thought authorship was related to contributing to a project, but questioned how to define a significant contribution. Researchers described this in terms of intellectual contributions, research work, practical assistance, writing, and revision. They identified the potential for confusion in ordering large teams of multi-disciplinary, multinational coauthors who have made different contributions.
Researchers’ perceptions of authorship are not congruent with editors’ criteria. Authorship is valued in 2 ways. Researchers attribute meaning to authorship in terms of academic value or currency. In contrast to the ICMJE criteria, medical researchers in this study value practical as well as intellectual research contributions. Meaningful debate on earning authorship needs to include the views of researchers.
Department of Primary Health Care, University of Newcastle School of Health Sciences, Framlington Place, Newcastle upon Tyne NE2 4HH, UK
Reference and Quotation Accuracy in the Major and Minor Infectious Disease Journals
Kelly J Warren,1 Neal Bhatia,1 Winnie Teh,2 Matthew G Fleming,1 and Michael Lange2
References and quotations in the medical literature are often inaccurate. We conducted a retrospective, comparative review of the accuracy of references in the 4 major clinical infectious disease journals and 6 minor journals.
A total of 240 references from the 4 major clinical infectious disease journals (Clinical Infectious Diseases, Journal of Infectious Diseases, Scandinavian Journal of Infectious Diseases, and Pediatric Infectious Disease Journal) and 142 references from 3 minor clinical infectious disease journals (Infections in Medicine, Infections in Urology, and the AIDS Reader) and 3 minor clinical specialty journals that publish articles on infectious diseases (Complications in Surgery, Pediatric Annals, and Complications in Orthopedics) were reviewed for citation and quotation errors. Citation errors were defined as major if they prevented the rapid identification of the original source. Minor citation errors were those that did not impede the rapid identification of the original source. Quotation errors were defined as major if the citation contradicted, was unrelated to, or did not support the authors’ assertion. Minor quotation errors were those that, although inaccurate, generally supported the authors’ claims or were those that cited a secondary source. All data were entered into a computerized database and analyzed with chi-square and nonparametric statistics. Reference accuracy rates for the major and minor journals were determined. Additionally, correlations were sought between accuracy of references and number of references cited, number of authors, country of origin of authors, language of authors, and impact factor of the journal in which the article was published.
Of the examined references, citation errors were found in 26% from the major and 32% from the minor journals; quotation errors were found in 15% from the major and 20% from the minor journals. The combined error rate of references from the minor journals was significantly greater than that of the major journals (P=.059). For individual articles, there was a significant positive correlation between the total number of authors and the likelihood of a reference error (P=.019). There was also a significant negative correlation between the impact factor of the journal in which the article was published and the likelihood of a reference error (P<.001). There was a positive correlation between the total number of references and the likelihood of a reference error (P=.032). Neither the country of origin nor language of the authors correlated with the likelihood of a reference error.
The reference accuracy rate of infectious disease journals is similar to that reported in other specialty journals. The data support the hypothesis that references in the minor journals tend to be less accurate than references in major journals. The data also show that individual references from articles containing a large number of references are less likely to be accurate than references from articles with few references. Individual references from articles by a large number of authors are more likely to be accurate than references from articles by a small number of authors. A journal’s impact factor is a reliable indicator of the overall accuracy of references within the journal.
1Dept of Dermatology, Medical College of Wisconsin, 9200 West Wisconsin Ave, Milwaukee, WI 53226, USA; 2Division of Infectious Diseases and Epidemiology, St Luke’s-Roosevelt Hospital Center, New York, NY, USA
Selective Publication of Advanced Research from Eastern Europe in International Journals
To find whether results of advanced research from Eastern European countries are selectively published in USA/UK-based journals.
MEDLINE search for human-related studies published in 1988, 1990, and 1995 originating from Eastern European countries: USSR/Russia, Poland, and Hungary. Australia and India served as controls as countries with Western-style medicine and no language barrier; France was a control for non-English-language countries. For control countries, publications were located using the address field of MEDLINE; for Eastern European countries, national languages were used. All types of advanced-design research were located using key words in MeSH: randomized controlled trials, controlled clinical trials, double blind, cohort, prospective, longitudinal. The frequency of advanced research in domestic journals was compared, for every country, with the frequency of advanced research in USA/UK journals.
In all countries, the publication frequency of advanced research increased during 1988-1995. Indian journals published 8.1%±2.4% of all Indian-based advanced research reports in the 3 study years, vs 58.5%±4.4% published in USA/UK-based journals. The frequency of advanced research was 3.2%±1.0% in Indian journals vs 5.1%±0.6% among Indian articles published in USA/UK-based journals. Australian journals published 20.3%±2.2% of advanced research of Australian origin, vs 68.9%±2.6% in USA/UK journals. The frequency of advanced research was 6.2%±0.7% in Australian journals vs 4.5%±0.3% among Australian articles published in USA/UK journals (P<.01). Both Australia and India showed a tendency toward an increasing frequency of publication of advanced research in domestic journals. Journals of the Eastern European countries published 92.8%±0.04% of all articles originating from these countries and 73.8%±3.6% of advanced research. While only 2.9%±0.09% of articles originating from these countries were published in USA/UK journals, 14.8%±2.9% of advanced research was published in these journals. The frequency of advanced research in Eastern European publications was 2.2%±0.4% in USA/UK journals vs 0.35%±0.03% in domestic journals (P<.001 in all 3 years). In 1995 the difference between international and domestic publications by Eastern European scientists increased: 4.25%±1.2% in USA/UK journals vs 0.84%±0.2% in domestic journals (P<.01). Publications of French origin showed the same trend toward publishing advanced research in USA/UK journals: while in 1990 and 1995 the frequency of advanced research in domestic publications by French scientists was stable (10.7%±1.2%), the frequency in their USA/UK publications increased from 13.5%±0.6% to 18.9%±0.7%.
The increasing tendency among authors from Eastern European countries to publish advanced research results in USA/UK-based journals reflects a reasonable choice of international journals for publication of their best research. For Eastern European countries, where international journals are not affordable, such practice means a growing segregation of Eastern European medicine not only from international medical research but also from the best domestic studies.
Saratov State Medical University, PO Box 1528, Saratov, 410601, Russia
Friday, September 19
Effect of Attendance at a Training Session on Peer Reviewer Quality and Performance
Michael L Callaham,1 Robert L Wears, and Joseph F Waeckerle
To determine whether peer reviewer attendance at a brief journal-sponsored training workshop changes review quality.
Subjects included all 340 peer reviewers for Annals of Emergency Medicine, 45 of whom decided to attend a voluntary workshop on how to perform peer review, open to all reviewers.
Reviews were routinely rated by editors on a subjective ordinal 1 to 5 global quality scale, details of which are reported in another abstract. Comparisons were made between reviewers who chose to attend a 4-hour workshop on peer review sponsored by the journal in 1995 and 2 groups of reviewers who did not attend: controls matched for average quality rating and experience, and unmatched controls. Reviewer performance calculations and quality ratings in the 20 months before and 14 months after the workshop were compared.
Of the 45 reviewers who attended the workshop, 40 had sufficient rated reviews. The 40 matched controls were optimally matched to these attendees. The remaining 260 nonattending reviewers (unmatched controls) had completed fewer reviews (5.4 vs 7.4, P=.009) and accepted more papers (23% vs 16%, P=.12) than attendees. Attending reviewers had a larger ratings improvement compared to matched controls or unmatched controls, and improved congruence with editorial decisions compared to unmatched controls, but differences were not significant.
Reviewers attending a voluntary workshop were more experienced than non-attendees, and performance improved for all reviewers during the study. Although attendees had more improvement in quality ratings than either control group, and greater improvement in congruence than unmatched controls, none of these differences was significant. It is unknown whether the rating system can detect differences produced by this sort of training, and improvement may have been better detected had the attendees been inexperienced or of lower quality. The study population was limited to reviewers with necessary data, and the power to detect small changes was low.
1Annals of Emergency Medicine; University of California, San Francisco, Box 0208, San Francisco, CA 94143-0208, USA
Positive Outcome Bias and Other Limitations in the Selection of Research Abstracts for a Scientific Meeting
Michael L Callaham,1 Robert L Wears, Ellen Weber, Christopher Barton, and Gary Young
To determine features influencing the outcome of research abstracts submitted to a national meeting.
Abstracts of all research submitted for consideration to a national research meeting in emergency medicine (n=493) were classified and rated by a minimum of 2 blinded reviewers, study-effect size was calculated for each, and a MEDLINE search with 5-year follow-up (and author questionnaire) was conducted to determine publication as a full manuscript in a peer-reviewed journal. The characteristics of the meeting and the submitted research were similar to those of 31 other US medical specialty societies.
Of the total 493 submitted abstracts, 179 (36%) were accepted for presentation, and 214 (43%) were published in 41 journals. Abstract scientific quality did not strongly predict either outcome or journal prestige. The best predictors (by logistic regression) of meeting acceptance were a subjective “excitement” factor assigned by the blinded reviewers (OR=2.8) and positive results (OR=1.9); and for publication, meeting acceptance (OR=2.6) and large sample size (OR=2.0). Despite a mandatory structured format, much key information on methodology was absent; 48% of abstracts did not report on blinding, and 23% did not report on randomization. Funnel plots of effect sizes showed the classic distribution of positive-outcome (“publication”) bias, favoring studies with positive effects regardless of methodologic rigor. Presented studies had 104% more positive effect sizes than those rejected, and those published had 113% more than those not published (P=.03). Meeting acceptance predicted publication with a sensitivity of only 51% and a specificity of 71% (positive predictive value 57%; negative predictive value 66%).
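The predictive values quoted above follow from a standard 2×2 table of meeting acceptance against eventual journal publication. A minimal sketch of that arithmetic (the counts in the example are illustrative placeholders, not the study's data):

```python
def predictive_values(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table where the
    'test' is meeting acceptance and the 'outcome' is journal publication.
    tp = accepted and published, fp = accepted but never published,
    fn = rejected but published anyway, tn = rejected and never published."""
    return {
        "sensitivity": tp / (tp + fn),  # published papers that were accepted
        "specificity": tn / (tn + fp),  # unpublished papers that were rejected
        "ppv": tp / (tp + fp),          # accepted abstracts that got published
        "npv": tn / (tn + fn),          # rejected abstracts that stayed unpublished
    }

# Illustrative counts only:
# predictive_values(tp=30, fp=10, fn=20, tn=40)
```

With such a helper, the low sensitivity reported here simply means that roughly half of the eventually published studies had been rejected by the meeting.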
Despite a mandatory structured format, submitted abstracts provided insufficient information about design and results to allow meaningful peer review. Positive-outcome bias was evident when studies were first submitted for consideration to the meeting, and was amplified both in the selection of abstracts for presentation and in the selection for publication as a full manuscript in a journal, neither of which was strongly related to study design, study quality, or the prestige of the publishing journal. Better methods for selecting the abstracts presented at meetings are needed.
1Annals of Emergency Medicine; University of California, San Francisco, Box 0208, San Francisco, CA 94143-0208, USA
Press Releases From Biomedical Publications and Their Impact on Mass Media
Vladimir De Semir, Cristina Ribas, and Gemma Revuelta
This study evaluates the influence press releases from biomedical journals have on the popularization of scientific papers reaching general publications.
Four biomedical publications (the British Medical Journal, Nature, Science, and The Lancet) and their corresponding press releases were reviewed during a 3-month period (December 1996 through February 1997). Covering the same time span, all scientific articles (except political and legal articles) appearing in the following newspapers were collected: New York Times, Herald Tribune, Le Monde, Le Figaro, El País, La Vanguardia, and La Repubblica. The following variables were used: 1) publication, does the article make reference to a biomedical publication?; 2) sample, is the biomedical publication included in the sample?; 3) press, does the article make reference to a certain abstract included in a press release?; 4) order, where is the abstract located within the press release (first, second, third, etc)?; 5) cover, does the article appear on the cover of the newspaper?; 6) cover of section, does the article appear on the cover of a section?; 7) length, what is the length of the article compared to the entire newspaper?; and 8) type of journalism, to what type of journalism does the article belong?
In total, 1,060 articles were collected. Of these, 263 quoted biomedical publications, of which 164 were included in the sample. Of these 164 articles, 119 (73%) quoted papers obtained from press releases, and only 23 (14%) quoted papers not included in press releases. In 22 articles (13%), the press release could not be identified. The quotation rate (% of articles referring to papers included in press releases) in relation to the order of a paper in the press release was as follows: first, 62; second, 17; third, 5; fourth, 9; others, 9. Article length and cover-of-section publication did not prove to be related to press releases.
The fact that a scientific paper published in a biomedical publication is included in a press release has a positive influence on its later diffusion in the general press. The order in which a paper appears in a press release also affects its future diffusion; the papers mentioned first get more diffusion. Article length and location in the newspaper did not prove to be related to the inclusion of a paper in a press release.
Observatori de la Comunicació Científica, Universitat Pompeu Fabra, La Rambla 30-32, 08002, Barcelona, Spain
Are Scientific Publishers Amending Permissions Policies for the Digital Era?
Eamon T Fennessy
To survey how scientific publishers are reacting to the digital environment by modifying rights and permissions policy.
Over 140 international scientific publishers have been contacted during this survey, which began in October 1996. These institutions (academic, private and public, for-profit and not-for-profit) published more than 1,300 scientific, technical, and medical titles. Publishers were selected whose works were subscribed to by 2 international intellectual property organizations. Contacts were made by telephone, fax, e-mail, and in person. The survey questioned current and/or future electronic availability of the works as well as policies for electronic scanning, storage, and distribution for educational purposes.
To date 65% of selected publishers have responded. Data will be reported as to publishers’ nationality, the number of titles involved, and the requirements users must meet to scan, store, and/or distribute copyrighted articles.
Scientific publishers’ policies differ widely in addressing the impact of digital media. Publishers, large and small, are slow to adapt except in rare instances. Non-US publishers appear to be more progressive in adjusting policies.
The Copyright Group, PO Box 5496, Beverly Farms, MA 01915, USA
Publication Habits of Women and Men at Stanford University School of Medicine
To help determine whether and why women academicians in medical schools publish less than their male colleagues.
All active clinical and research faculty at Stanford University School of Medicine (n=491) were sent up to 3 mailings of a 1-page questionnaire in 1992; the response rate was 62.1%.
Women did not have significantly fewer first-authored publications in the past 5 years than did men (6.6 vs 8.0, P=.2), but had fewer co-authorships (13.2 vs 19.8, P=.02). Women had a co-authorship to first-authorship ratio of 2.4 and men of 2.6 (P=.02). Women estimated a mean of 1.4 submissions per acceptance, men estimated 1.5 submissions per acceptance (P=.2). Women were significantly less likely to have achieved more senior faculty status (P<.001). Among 30-39 year olds, there were twice as many male assistant professors (n=53 vs 24), and 5 times as many male associate professors as female (5 vs 1). Among 40-49 year olds, there were 4 times as many male assistant (14 vs 4), 5 times as many male associate (41 vs 9), and 18 times as many male full professors as female (36 vs 2).
These women had marginally fewer publications than their age-matched male colleagues, but were significantly less likely to have achieved high academic status. Their lower publication rate does not appear to be because they are more likely to be rejected by the journals to which they submit; it may be because colleagues are less likely to offer co-authorship or because of other academic impediments.
Department of Family and Preventive Medicine, Emory University School of Medicine, 69 Butler St SE, Atlanta, GA 30303-3219, USA
Peer Review of Commissioned State-of-the-Art Papers
To investigate whether peer review of commissioned review articles differed from peer review of spontaneously submitted review articles in the Danish national medical journal, Ugeskrift for Læger. The hypothesis was that the invited papers caused more trouble.
A comparison study of the rejection rates and the “trouble rates” (defined as correspondence by the authors critical of the peer-review process) for all invited review articles in 1995 and 1996 compared to all spontaneously submitted review articles in the same period. A systematic commissioning of articles was initiated in 1995, when a PC-based system of authors and subjects was established in collaboration with the Danish scientific societies. Both types of papers were subject to the same peer review, as the reviewers were unaware whether the articles were commissioned or not.
We received 220 spontaneously submitted review articles in 1995 and 1996, and 93 commissioned review articles in the same period. There were no differences in the type or quality of the peer reviews of the unsolicited and commissioned articles, apart from the latter being more favorable. The rejection rate was 15% for the spontaneously submitted articles and 2% for the invited papers (P<.05). The “trouble rates” were 2 of 220 (0.9%) for the spontaneously submitted articles and 8 of 93 (8.6%) for the commissioned papers (P<.05).
As expected, the rejection rate was significantly lower for the commissioned articles than for the spontaneously submitted review articles. However, as hypothesized, the “trouble rate” was significantly higher for the commissioned articles.
Ugeskrift for Læger, Trondhjemsgade 9, DK-2100 Copenhagen, Denmark
The Scientific Paper: Fraudulent or Formative?
In 1963, Peter Medawar asked whether the scientific paper was a fraud. He argued that the research article was a “travesty… which editors themselves often insist upon” because it gives “a totally misleading narrative of the processes of thought that go into the making of scientific discoveries.” A paper’s fraud, Medawar insisted, lay mainly in its form. The importance of the form in which research is communicated, rather than its specific content, remains a neglected area of inquiry. Roman Jakobson described 6 components of any communicative event (Hawkes T. Structuralism and Semiotics. London: Routledge, 1977).
                Context
                Message
Addresser ___________________________ Addressee
                Contact
                Code
The addresser is the author; the addressee, a reader; the message, a text’s content; the context, its setting (eg, journal); the contact, its mode of delivery (eg, electronic); and the code, its language. But the roles of author, reader, language, setting, and mode of delivery (mostly issues of form) may have a more important bearing on our interpretation of a scientific paper than hitherto recognized. Form may be especially important as we adopt digital media to distribute research data (Van Alstyne M, Brynjolfsson E. Could the Internet balkanise science? Science 1996;274:1479-80). As research reports become even more structured and our approach to research increasingly ordered (eg, CONSORT), the non-content-related elements of a research communication could mould our interpretations in ways we have yet to explore. The scientific paper, and our understanding of it, is now entering a highly unstable period. An appreciation of the importance of form when interpreting research might lead to a reappraisal of the role of the reader in actively creating a meaning for a scientific text.
The Lancet, 42 Bedford Sq, London WC1B 3SL, UK
The Use of Systematic Reviews for Peer Reviewing
Tom Jefferson,1 Vittorio Demicheli, and John Hutton
The best introduction to a topic of research is represented by a systematic review of that topic. If the results of the systematic review are included in the research protocol, readers, researchers, editors, and peer reviewers can then use the evidence presented to judge the importance of the topic and the contribution of the “new” paper to understanding of the subject (Chalmers, 1991). Equally, a peer reviewer can use systematic reviews to reinforce subjective knowledge of the background to an individual candidate paper and to assess the candidate paper against a population of its peers, thus highlighting the contribution and general soundness of the newcomer to the existing body of knowledge. This study was undertaken to test the feasibility of this novel approach to the use of systematic reviews in peer review.
The pilot study was based on 2 examples involving systematic reviews of hepatitis B vaccines. In the first example, a new cost-benefit analysis of hepatitis B vaccine was tested for methodological consistency against similar papers in a review of economic evaluations of hepatitis B vaccine. In the second example, a Cochrane review of hepatitis B vaccines was used to check candidate papers for duplicate publication.
In the first example, the new study was found to be an outlier in terms of the relationship between incidence of hepatitis B and the benefit-cost ratio, as it produced a high benefit-cost ratio with a low incidence rate. The results of the second exercise revealed that of 59 identified trials, 15 showed evidence of multiple publication, 4 of which were fraudulent (the duplicate was not cross-referenced) and 3 non-fraudulent.
The use of systematic reviews for peer review has limitations because of their format and availability. However, this pilot study showed that important issues can be brought to the attention of peer reviewers by comparison of candidate papers with the results of systematic reviews. The use of systematic reviews to test methodological robustness of candidate papers and prevent multiple publication is worth further testing.
1Army Medical Directorate, Keogh Barracks, Ash Vale, Hants GU12 5RR, UK
Nursing Home Research: The Positive Effect of Instructions for Authors on the Quality of Reported Research Ethics
Jason HT Karlawish,1 Gavin W Hougham, Carol B Stocking, and Greg A Sachs
To determine if an association exists between the quality of a published paper’s reported human subjects research ethics and the research ethics requirements contained in journals’ instructions for authors.
Structured review of published papers identified by MEDLINE search (criteria: January 1992 through June 1996; clinical trial enrolling nursing home residents in the United States), and of the instructions for authors of the journal that published each paper. The review instrument synthesized and operationalized published guidelines for research involving nursing home residents. A panel of knowledgeable colleagues assessed the face validity of the instrument.
We identified 45 papers and their corresponding instructions for authors from 22 journals. Quality measures of reported research ethics showed: justification of use of nursing home residents (45), informed consent obtained or waived (36), institutional review board review (18), nursing home committee review (6). Summed as a 4-point quality score: 6 papers scored 4; 12 scored 3; 18 scored 2; and 9 papers scored 1. The research ethics instructions of each paper’s instructions for authors ranked into 1 of 4 categories: none; less than the “Uniform Requirements for Manuscripts Submitted to Biomedical Journals” (7); the “Uniform Requirements” (24); or the “Uniform Requirements” plus additional instructions (5). An association existed between research ethics instructions category and research ethics quality score (Kruskal-Wallis χ2=12.38, P=.006). This association also held between categories of instructions (P<.03). That is, the more detailed the instructions, the greater the quality score.
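For readers wishing to reproduce this kind of analysis, the Kruskal-Wallis comparison of quality scores across instruction categories can be computed with a short routine. The scores below are illustrative stand-ins, not the study’s data.

```python
from collections import Counter

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic with tie correction (chi-square approx, df = k-1)."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    counts = Counter(pooled)
    # Mid-rank for each distinct value: mean of the ranks it occupies.
    ranks, start = {}, 1
    for value in sorted(counts):
        t = counts[value]
        ranks[value] = start + (t - 1) / 2
        start += t
    h = 12 / (n * (n + 1)) * sum(
        len(g) * (sum(ranks[x] for x in g) / len(g)) ** 2 for g in groups
    ) - 3 * (n + 1)
    # Correct for ties in the pooled ranking.
    correction = 1 - sum(t**3 - t for t in counts.values()) / (n**3 - n)
    return h / correction

# Hypothetical 4-point ethics quality scores by instructions-for-authors category
none_cat = [1, 1, 2, 1]
less_than_uniform = [2, 1, 2, 2]
uniform = [3, 2, 3, 2]
uniform_plus = [4, 3, 4, 3]
print(round(kruskal_wallis_h(none_cat, less_than_uniform, uniform, uniform_plus), 2))
```

The H statistic is then referred to a chi-square distribution with k-1 degrees of freedom to obtain a P value.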
Although basic elements of proposed guidelines for research involving nursing home residents are not routinely reported, the positive association between research ethics instructions category and research ethics quality score suggests that a journal’s instructions for authors can affect the quality of reporting research ethics.
1Department of Medicine, Section of General Internal Medicine, University of Chicago, 5841 South Maryland Ave, MC 6098, Chicago, IL 60637, USA
Information From Reading a Journal Is Not Well Retained and Depends on the Motivation of the Reader
Catherine Kellett,1 Alisa Hart,1 Christopher Bulstrode,1 and Philip Fulford2
Objectives (1) To ascertain the recall performance of journal-browsing doctors. (2) To determine whether the motivation of the reader affects the information retained.
A group of 140 doctors representing all medical specialties was recruited from a teaching hospital. They sat, under exam conditions, a negatively marked multiple-choice questionnaire on a recent issue of the British Medical Journal. In addition, a group of 21 specialists was sent true/false questions on the most recent issue of the Journal of Bone & Joint Surgery. Candidates for the specialty qualifying exam were advised that the written paper would be based on critical evaluation of recent articles appearing in the Journal of Bone & Joint Surgery. The questions required in-depth reading of the papers and tested retention only of those results presented in scientific papers that challenged current dogma. Five of the editors of Journal of Bone & Joint Surgery were sent the same exam as the candidates without prior warning.
Seventy-five percent of doctors claimed to have read the papers in the British Medical Journal. However, only 48% of these readers could correctly answer the questions. In the group of exam candidates, retention of information and understanding were good, with a mean score of 44% (range, 12%-63%) despite negative marking. The editors of the Journal of Bone & Joint Surgery also scored well, achieving an average score of 63%. However, only 1 of the 21 journal-browsing readers of the Journal of Bone & Joint Surgery could recall any information presented in the articles.
Information was poorly retained by journal-browsing doctors. However, readers with high motivation and incentive recalled information well.
1Nuffield Department of Orthopaedic Surgery, The Trauma Service, John Radcliffe II Hospital, Headington, Oxford OX3 9DU, UK; 2Journal of Bone and Joint Surgery, London, UK
Use of Bibliometric Measures to Assist the Peer Review Process in Neurosciences
Grant Lewison and Wendy Ewart
To determine whether detailed bibliometric indicators of applicants’ publication track records would assist a grants panel in making judgements on major awards in neurosciences, and if so, which indicators would be most useful. This panel, of up to 20 experts in mental health and neurosciences, also sees at least 3 referees’ reports on each application.
The publications in the Science Citation Index (or the Social Sciences Citation Index) of some 30 applicants for long-term grants were analysed to show their number, potential impact (from the rating of the journal), actual impact (from counts of citations, compared with those in the applicant’s specific field) and the fractional contribution of the applicant, taking account of the overall number of his/her co-authors. They were compared with similar indicators for about 5 world leading scientists in the field, “peers,” selected by the program manager but usually taken from a list suggested by the applicant.
After 1 year of trialling, some 13 panel members were surveyed and felt that the provision of bibliometric data, especially citation data, put peer-review on a sounder basis. There was a strong correlation between the average scores on several different indicators and combinations thereof and the results of the panel’s deliberations, but there were some exceptions, such as when a proposal was rejected because of its poor design despite the applicant’s reputation.
Bibliometric data assist award panels and multiple indicators are particularly helpful, but only for candidates with substantial numbers of publications (about 20 or more). The use of bibliometric indicators has continued and been extended to other committees of award within the Wellcome Trust.
The Wellcome Trust, 183 Euston Road, London NW1 2BE, UK
Peer Review of Domestic and International Manuscripts in a Journal From the Scientific Periphery
Ana Marusic, Tomislav Mestrovic, Mladen Petrovecki, and Matko Marusic
To compare the peer-review process for national and international manuscripts processed by Croatian Medical Journal from 1992 to 1996.
Retrospective analysis of review forms for 286 manuscripts. Reviewers were asked about manuscript’s structure (7 questions), its scientific value (7-item scale), clarity and length, and final recommendation (5-item scales). International manuscripts had at least 1 author from a non-Croatian institution.
The overall rejection rate was 23%. National and international manuscripts had similar rejection rates except for original articles in clinical sciences (35.2% vs 17.6%, P=.037; Fisher’s test). International manuscripts had shorter median review time (from receipt to decision) and publishing time (from acceptance to publication) than national manuscripts (57 vs 108 days, P<.001, and 54 vs 104 days, P<.009, respectively; Fisher’s test). National articles were more often sent to international reviewers (P=.021; chi-square test). Reviewers’ response rates and judgments of manuscripts did not differ for national and international manuscripts. Agreement between reviewers ranged from 33.3% (scientific value) to 89.1% (reference citations). Reviewers agreed less often on the recommendation for national than for international manuscripts (40.0% vs 81.8% agreement, P<.001; chi-square test). Kappa for interrater agreement was poor to moderate, ranging from 0.08 for originality to 0.59 for soundness of conclusions; there was no difference between national and international manuscripts.
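The kappa statistic used here to measure interrater agreement corrects raw percentage agreement for agreement expected by chance. A minimal sketch, with hypothetical reviewer recommendations rather than the journal’s records:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal frequency per category.
    expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical recommendations by two reviewers on the same 8 manuscripts
first = ["accept", "reject", "revise", "accept", "reject", "accept", "revise", "reject"]
second = ["accept", "revise", "revise", "accept", "reject", "reject", "revise", "reject"]
print(round(cohen_kappa(first, second), 2))  # → 0.63
```

Raw agreement here is 75%, yet kappa is only 0.63, which illustrates the authors’ closing caution: percentage agreement and kappa can tell rather different stories about the same reviewers.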
The peer-review process in a small journal from the scientific periphery can be fair both to national and international manuscripts. Kappa for interrater agreement may not be the best measure of agreement between reviewers.
Croatian Medical Journal, Zagreb University School of Medicine, Salata 3, 10000 Zagreb, Croatia
The Role of Grey Literature in Meta-Analysis
Laura M McAuley, David Moher, and Peter Tugwell
Objectives (1) To characterize the sources of grey literature, defined as difficult-to-find literature, and the prevalence of its citation in a random sample of meta-analyses in the medical literature. (2) To assess and compare the quality of (a) meta-analyses that include grey literature with those that do not, and (b) traditional and grey documents included in meta-analyses. (3) To determine if the inclusion of grey literature in meta-analysis influences the point estimate, the precision, or the heterogeneity of the reported results.
A random sample of 135 meta-analyses was drawn from an existing database of meta-analyses published between 1983 and 1995. For each meta-analysis, data were collected on a variety of parameters, including quality. For the subset of meta-analyses that included grey literature, the included studies were retrieved, and the analyses were repeated and replicated.
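Objective (3), whether adding grey literature shifts the pooled result, can be checked with standard inverse-variance fixed-effect pooling. The sketch below uses invented log odds ratios and standard errors, not the review’s data:

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect pooled estimate and its 95% CI."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical (log odds ratio, SE) pairs: four published trials
# plus one grey-literature abstract.
published = [(-0.40, 0.20), (-0.15, 0.25), (-0.30, 0.18), (-0.05, 0.30)]
grey = [(-0.02, 0.35)]

without_grey, _ = pool_fixed_effect(*zip(*published))
with_grey, _ = pool_fixed_effect(*zip(*(published + grey)))
print(round(without_grey, 3), round(with_grey, 3))
```

Comparing the two pooled values (and their confidence intervals) shows directly how much the grey studies move the point estimate and tighten or widen the precision.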
To date, 132 meta-analyses have been retrieved. Of those, 28% include some form of grey literature. Twenty-two percent include grey literature from more than 1 source. Abstracts comprise the most frequent source of grey literature (56%).
There is much variability in the literature classified as meta-analyses, ranging from publications with no formal literature search to those with an explicit and finely detailed search of the literature. There does not appear to be any trend toward the inclusion or exclusion of grey literature in meta-analyses. This research will provide evidence to those involved in the peer-review process as to the relative importance of grey literature in meta-analysis.
Clinical Epidemiology Unit, Ottawa Civic Hospital, C4, 1053 Carling Ave, Ottawa, Ontario K1Y 4E9, Canada
Realism and Idealism in Instructions to Referees
Sheila M McNab
To analyze the instructions to referees of 5 scientific journals, to assess whether the demands made on referees are reasonable (realistic) or unreasonable (idealistic), and thereby to encourage journals to modify their instructions.
Five sets of instructions (4 in physics, 1 in geophysics) were compared by an experienced authors’ editor. The journals originated in the US (2), the UK (1), and Sweden (1). The 3 selection criteria were: importance of the journal, instructions covering at least 9 items, and availability from referees working in the same department as the authors’ editor.
All journals request timely return of reviews (range: 1-3 weeks). Other items cover: clarity of tables and diagrams (5 journals), originality and soundness of science (4), adequacy of title and abstract (4, only 1 referring to suitability for inclusion in an abstracting service), overlap with published work (3, 1 requesting appropriate references), need to shorten/lengthen certain parts (3, 1 asking the referee to specify what should be omitted), and suitability for the journal (3). All 5 journals ask that proper credit be given to related work, 1 stating specifically that authors (not referees) are responsible for awareness of published research and for deficiencies in references. All 5 journals ask about readability and acceptability of the language; only 1 requests the referee not to correct grammar and punctuation unless the science needs clarification. Although all 5 journals stress confidentiality, 4 permit referees to pass the paper to a colleague.
Reviews should be tackled promptly, but in academia requesting return of the review within a week is unrealistic. It is reasonable to ask a referee to assess the soundness of the science and the adequacy of the title and abstract. Advising the author to shorten/extend parts of the paper should also be part of the referee’s remit. However, it is questionable whether the average referee is really capable of judging originality and identifying overlaps with published work. Since the editor is responsible for journal policy, it is the editor’s task to decide on a paper’s suitability for a particular journal. Checking that credit is given to related work and that references are balanced is more realistic than being held responsible for missing references. The interpretation of readability will depend on the reviewer’s mother tongue. Reviewers who speak English as a second language should not be required to correct the English in a manuscript; a paper in poor English should be submitted to an Anglophone for a language review. Confidentiality could be breached if a referee is allowed to pass on a paper to a colleague. A reference to the minimum time to be spent on a review would increase the credibility of comprehensive instructions.
Buys Ballot Laboratory, Universiteit Utrecht, Princetonplein 5, Postbus 80.000, 3508 TA Utrecht, The Netherlands
Gender Representation in the Editorial and Peer Review Processes
Anastasia L Misakian, Renee Williard, and Lisa A Bero
To determine whether the proportion of female editors, peer reviewers, and authors in biomedical journals is representative of the proportion of women in biomedical research.
Cross-sectional survey of Abridged Index Medicus journals (n=120). The gender of editors, peer reviewers, and authors for 1996 was obtained from published lists of their names. In the case of androgynous names, information was provided by the journal office. Journal characteristics such as 1996 impact factor, year of journal inception, and journal topic were also collected.
While women represent 26% of faculty at US medical schools (AAMC, 1996 data) and 26.5% (OR: 17.5%-35%) of authors in biomedical journals, they constitute 14.5% (OR: 8%-27%) of editors. Although almost half of the doctoral degree recipients (Digest of Education Statistics, 1994 data) in the biological/life sciences (41%) and health sciences (59%) are women, women comprise only 15% (OR: 10%-21%) of peer reviewers. Journals with higher proportions of female editors have higher percentages of female peer reviewers and authors (P<.0001). There was no association between impact factor or year of journal inception and the proportion of female editors, peer reviewers, or authors. Women are better represented as editors, peer reviewers, and authors in general than in specialty journals (P<.0001).
Because female representation on editorial and peer-review boards is lower than in the medical and research fields, women have less opportunity to influence research and policy agendas through the journal editing and peer-review processes.
Institute for Health Policy Studies, University of California, San Francisco, 1338 Sutter St, 11th Fl, San Francisco, CA 94109, USA
Ethical Problems in a Prospective Study of Submissions to a Medical Journal
Carin M Olson,1,2 Richard M Glass,1 Donna F Stroup,3 and Stephen B Thacker3
We are prospectively abstracting characteristics of meta-analyses submitted to a weekly medical journal to correlate with their acceptance for publication. In beginning this study, we encountered several ethical problems. When submitting manuscripts for publication, authors do not consent to have their work used for research. Authors must be free to refuse to participate, without its affecting their chances for publication. Meta-analysts outside the editorial staff who are not peer reviewers are extracting the manuscripts’ characteristics, which breaks the confidentiality of the author-editor-reviewer relationship. We added a statement to our journal’s instructions for authors: “Information from submitted manuscripts may be systematically collected and analyzed as part of research to improve the quality of the editorial or peer-review process.” We mail authors a letter requesting their consent to participate. Authors send their response to an off-site editor. As the only 1 aware of authors’ participation, that editor is not handling any meta-analyses during this study. The meta-analysts are blinded to each manuscript’s author and institution. During the first 18 months, 76 of 78 authors submitting a meta-analysis agreed to participate. We cannot determine whether the study affects turnaround times, since meta-analyses were classified with general reviews before the study. Through obtaining authors’ active consent to participate, keeping editors handling meta-analyses unaware of authors’ participation, and de-identifying manuscripts, we addressed ethical issues encountered in studying manuscripts submitted to a medical journal.
1JAMA, 515 N State St, Chicago, IL 60610, USA; 2University of Washington, Seattle, WA, USA; 3Centers for Disease Control and Prevention, Atlanta, GA, USA
Evaluation of the Peer Review Process in Medicina Clinica (Spanish Journal of Internal Medicine)
J M Ribera, F Cardellach, M Belmonte, F Lozano, E Feliu, C Rey-Joly, F Nonell, M Foz, C Rozman, J Ruiz, and J A Dotu
Peer review is a basic tool for the decision process in biomedical journals. The aim of this study was to evaluate the main characteristics of the peer-review process in Medicina Clinica in 1996.
Review of the database of original articles, case reports, and letters to the editor received by Medicina Clinica in 1996. Parameters evaluated: the number of articles submitted to the peer-review process, the use of reviewers not included on the official reviewer committee of Medicina Clinica, the reviewer workload, the rate and delay of response, and the final decision by the editorial committee.
Of the 1,177 articles received in 1996, 424 were original articles, 175 were case reports, and 518 were letters to the editor (48% related to previous articles published in Medicina Clinica). After initial evaluation of all articles by the editorial committee, 156 (7 original articles, 5 case reports, and 144 letters to the editor) were accepted, 260 were rejected, and 761 (68%) were submitted to a peer-review process (77% of original articles, 63% of case reports, and 41% of the letters to the editor). Of the 424 original articles, 121 (29%) were also evaluated by the reviewers of methodology and statistics; 386 different reviewers (all but one from Spain) were used, 217 (56%) being members of the official reviewer committee of Medicina Clinica. The mean (SD) number of papers reviewed by each expert was 1.97 (3.8), with a median of 2 (range 1-8). Fifty referees evaluated 5 or more articles. At the time of this study, 71 articles were still being reviewed. The mean (SD) delay in response was 44 (25) days (range 11-52) for original articles and 33 (22) days (range 7-147) for case reports and letters to the editor. The delay in response exceeded 2 months for 20% of the referees who reviewed original articles and for 8% of the reviewers of case reports and letters to the editor. On 31 December 1996, 310 of the 1,177 articles were still under review, and 335 (39%) of the remaining 867 articles had been accepted.
A large number of referees not belonging to the official reviewer committee of Medicina Clinica were used in the peer-review process, making enlargement of the committee necessary. Despite the wide use of referees from outside this committee, the workload of a substantial number of referees has been excessive. The number of papers with no response from the reviewers is disappointingly high. The delay in response is great for a large proportion of original articles, with a negative impact on the speed of the editorial process.
Medicina Clinica, Ediciones Doyma, SA, Traversa de Gracia, 17-21, 08021 Barcelona, Spain
Understanding the Process of Critical Reading
Patricia M Shevlin, Julia M Brown, A Vivenne Webb
There may be differences in the depth of critical reading undertaken of funding applications and of papers in the published literature. At the funding application stage a reviewer should appraise the design critically and identify possible biases and problems within the study. When reading a published paper, however, the reader may concentrate on the results presented and pay little attention to other aspects of the study. If this is the case, then peer review of biomedical publications for scientific validity is of prime importance.
Two published papers with design and reporting flaws were selected from the literature. Using only information from these papers, a standard application form for funding was completed as though funding were being sought. All 16 participants on a research methodology course were randomized, using a simple randomization schedule, into 1 of 4 equally sized groups. Each group discussed and reviewed the published paper from 1 study and the grant application from the other, in a predefined order using a semi-structured form.
There appeared to be more emphasis placed on the design of a study when a funding application was being reviewed. In addition, the study design and choice of end points were more likely to provoke questioning. However, issues such as inappropriate methods of randomization, justification of sample size, and potential biases were identified in both the funding application and the published paper.
Yorkshire Clinical Trials and Research Unit, Leeds, Arthington House, Hospital Lane, Leeds LS16 6QB, UK
Computer-Based Structured Reporting of Randomized Trials
Ida Sim,1,2 D K Owens,1,2 and G D Rennels2
Methodologists want structured and standardized reporting of randomized controlled trials (RCTs), but non-methodologists dislike reading structured text reports. We propose that RCTs be reported both as text and as entries in structured databases. Such dual reporting already exists: results of genomic sequencing are published in databases (eg, GenBank) while discussions are published in prose. To allow integration of RCT databases worldwide, we specified a core set of trial information that all RCT databases, or trial banks, should contain and share.
We based this core data set on trial-reporting recommendations from the literature, and on our analysis of 24 randomized trials for meta-analysis. We specified this core data set as an object-oriented database schema of 219 concepts (eg, endpoint, or allocation concealment) that can be modified incrementally to suit future needs. We used this schema as the blueprint for a Web-accessible trial bank containing 2 randomized trials from the Veterans Affairs Cooperative Studies Program. We hyperlinked the CONSORT reporting checklist to the appropriate information about each trial.
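A trial-bank entry of the kind described might look like the following sketch. The field names here are illustrative assumptions, not the authors’ actual 219-concept schema:

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    """One reported trial endpoint with its effect estimate and 95% CI."""
    name: str
    effect_estimate: float
    ci_low: float
    ci_high: float

@dataclass
class TrialRecord:
    """Minimal structured record for one randomized trial (illustrative fields)."""
    title: str
    n_randomized: int
    allocation_concealed: bool      # CONSORT item: allocation concealment
    blinding: str                   # eg, "double", "single", "open"
    endpoints: list = field(default_factory=list)

trial = TrialRecord(
    title="Hypothetical VA cooperative trial",
    n_randomized=240,
    allocation_concealed=True,
    blinding="double",
)
trial.endpoints.append(Endpoint("mortality", 0.82, 0.65, 1.03))
print(trial.n_randomized, len(trial.endpoints))  # → 240 1
```

Because each CONSORT item maps to a named field, a meta-analyst or peer reviewer can query such records directly instead of re-extracting numbers from prose.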
Using the original trial execution records, it took about 15 hours to enter 1 study into our trial bank. The core database schema was able to express all trial information needed for meta-analysis and critiquing, as determined by published trial scoring and reviewing instruments.
Direct reporting of RCTs into trial banks is not prohibitively time-consuming, and may assist peer review and meta-analysis.
Section on Medical Informatics, MSOB X-215, Stanford University School of Medicine, Stanford, CA 94305, USA
Serial Journals of the Chinese Medical Association: The Present Status and the Improvement
To assess the status of peer review in 13 journals published by the Chinese Medical Association.
A retrospective review of the characteristics of 780 peer reviewers of the 13 journals and an analysis of a random sample of peer review records of 100 articles published in the Chinese Journal of Internal Medicine.
Of the 780 reviewers, the mean age was 62.5 years; those aged 45 years or less accounted for 2%, and those aged 60 years or more for 70%. Eighty percent of the reviewers were male, and all were professors. The Chinese Journal of Internal Medicine received 2,600 manuscripts in 1995, and used 170 reviewers. The time required for peer review of 100 articles ranged from 10-82 days, with a mean of 43 days. The journal asks reviewers to return reviewed articles within 14 days, but only 2% were reviewed in the set interval, 45% in 15-30 days, 35% in 31-50 days, and 18% in 50 days or more. There was no significant difference in the quality of articles reviewed in more than 50 days and those reviewed in less than 14 days. Whereas reviewers aged under 60 years mostly reviewed articles in less than 14 days, most older reviewers (aged over 60 years) took more than 50 days. Reviewers’ gender and academic degree made no significant difference.
The results indicate that the relatively older peer reviewers of the medical journals published by the Chinese Medical Association are unable to cope with the heavy task of peer reviewing articles in varied fields within the required time. It is necessary to recruit more young scientists as peer reviewers so as to speed the publication of articles.
Chinese Journal of Internal Medicine, Chinese Medical Association, 42 Dongsi Xidajie, Beijing, 100710, China
Selection of Abstracts for an Otolaryngology Meeting: Correlation of Blinding With Subsequent Publication
N Wendell Todd
To determine whether inclusion in a meeting program was associated with subsequent publication in a MEDLINE-indexed journal.
Retrospective cohort study of 118 abstracts submitted for presentation at a regional otolaryngology meeting. Three practicing otolaryngologists scored each abstract. The abstracts’ author(s), institution, and geographic location were unknown to 2 scorers. Inter-scorer agreement was poor or nil: kappa=0.24 between the blinded scorers, and kappa=0.08 and -0.07 for the unblinded scorer relative to the blinded scorers. All 23 high-scoring abstracts were chosen for presentation at the meeting. Of the next 51 abstracts scored as nearly equivalent, inclusion on the program was determined by (1) balance among the various divisions of otolaryngology, (2) whether the abstract came from within the geographic region, and (3) avoiding multiple presentations by one author. Five years later, a search was conducted for publications by the authors.
Of the 53 abstracts on the program, 35 were subsequently published as full articles in peer-reviewed journals. In contrast, only 8 of the non-selected abstracts were published. Of the near-equivalent abstracts that were included on the program, most (16/28) were published. Conversely, the proportion published for those not on the program was only 4/23 (P<.01). Otolaryngology journals accounted for all but 1 of the published articles. The meeting sponsor’s journal, the third most-distributed US otolaryngology journal, accounted for 24 articles, of which 3 had not been on the program.
Arbitrary program inclusion was associated with subsequent publication. Unblinded scores did not agree with blinded scores.
Department of Surgery and Pediatrics, Emory University School of Medicine, 1365 Clifton Road NE, Atlanta, GA 30322, USA
Research Designs and Statistical Methods Used in Chinese Medical Journals
Wang Qian and Zhang Boheng
We assessed statistical methods and research designs employed in Chinese medical journals over the last decade.
The types of statistical tests and study design used in all original articles published in 5 leading journals (sponsored by the Chinese Medical Association) in 1985 (n=640) and 1995 (n=954) were evaluated. Independent assessments of the design employed in 100 studies showed overall agreement of 84.7% (kappa=0.79).
Among the 573 articles that used statistical methods in 1995, the most commonly employed methods were t-tests (63.0%), contingency tables (38.6%), analysis of variance (16.8%), and Pearson correlation coefficients (16.8%); the most common statistical problems were presentation of probability values without specifying the test used (35.0%), use of multiple t-tests instead of analysis of variance (10.5%), and use of unpaired t-tests when paired tests were required (3.8%). Of the 822 clinical studies published in 1995, 40 (4.9%) were randomized controlled trials, 7 (0.9%) were case-control studies, 452 (55.0%) were cross-sectional studies, 72 (8.8%) were case reports or case series, and 251 (30.5%) employed other designs. Significant improvement was seen from 1985 to 1995: the proportion of papers using statistical tests increased from 40% (257/640) to 60% (573/954) (χ2=60.9, df=1, P<.001); the proportion that had statistical problems decreased from 78% (201/257) to 53% (307/573) (χ2=45.3, df=1, P<.001); and the proportion of clinical studies using RCTs increased from 1.4% (8/588) to 4.9% (40/822) (χ2=12.8, df=1, P<.001).
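The χ2 comparisons of proportions above are Pearson 2×2 tests with df=1. A short routine applied to the 257/640 vs 573/954 comparison gives ≈60.8, essentially the reported 60.9 (the small gap is presumably rounding in the source):

```python
def chi_square_2x2(a_yes, a_n, b_yes, b_n):
    """Pearson chi-square (df=1, no continuity correction) for two proportions."""
    table = [[a_yes, a_n - a_yes], [b_yes, b_n - b_yes]]
    total = a_n + b_n
    col = [table[0][j] + table[1][j] for j in (0, 1)]
    row = [a_n, b_n]
    chi2 = 0.0
    for i in (0, 1):
        for j in (0, 1):
            expected = row[i] * col[j] / total  # expected count under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Papers using statistical tests: 257/640 (1985) vs 573/954 (1995)
print(round(chi_square_2x2(257, 640, 573, 954), 1))  # → 60.8
```

The same function reproduces the other two comparisons by substituting their counts (201/257 vs 307/573, and 8/588 vs 40/822).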
We conclude that the quality of research designs used in Chinese medical research is gradually improving, but the absence or inappropriate use of statistics remains a serious problem.
Clinical Epidemiology Unit, Shanghai Medical University, Hua Shan Hospital, Shanghai, China
Getting the Most Out of Peer Review: Comparing Scientific and Lay Readers of Clinical Review Articles
To determine to what extent scientific knowledge is a determining factor in evaluating primary care clinical review articles.
Clinical review articles submitted to American Family Physician were sent out in routine fashion to scientific reviewers. For a 3-week period, any outgoing reviews were also sent to 1 additional humanities reviewer, selected from among professors and graduate students in humanities departments at several universities. A total of 16 humanities reviews were returned; for the corresponding 16 manuscripts there was a total of 39 scientific reviewers (2-3 per manuscript). Responses to a standard editorial checklist were tallied for the scientific reviewers (subtallied to distinguish family physician from specialist reviewers), for the editor’s initial impression, and for the humanities reviewer. Special attention was given to 2 questions: one regarding editorial disposition and one regarding overall evaluation of the manuscript. Responses were calculated in terms of the absolute number of times there was agreement between individual reviewers as well as between reviewers and editors. Gradations of differences were also calculated using the checklist’s rating scale of 1 to 4, with 1 being highest in any given category (importance, authoritativeness, depth, writing quality, editorial recommendation, and overall evaluation).
Note: not all reviewers responded to all questions; percentages are given in parentheses. On the question of editorial recommendation, humanities reviewers agreed with scientific reviewers 6/29 (21%) of the time, whereas scientific reviewers agreed among each other 10/32 (31%) of the time. In overall evaluation of the manuscript, humanities reviewers agreed with the scientific reviewers 16/26 (66%) of the time, whereas scientific reviewers agreed 14/20 (70%) of the time. Regarding editorial recommendation, humanities reviewers agreed with the editor 4/13 (31%) of the time; scientific reviewers agreed with the editor 15/36 (44%) of the time. Family physicians agreed less often with the editor, at 6/19 (32%) of the time, while specialty reviewers agreed with the editor 7/17 (71%) of the time. Regarding editorial recommendation, scientific reviewers had a concurrence rate of 58/90 (64%). Humanities reviewers and the editor had a concurrence rate of 25/39 (64%), whereas scientific reviewers concurred with the editor at 77/105 (73%). Scientific reviewers tended to give the most negative evaluations, giving lower scores more frequently in nearly all categories than any other reviewer. Family physician reviewers were less critical overall than specialists, but still assigned lower scores than did the editor or the humanities reviewers. The humanities reviewers gave higher scores overall than any other type of reviewer.
Although by all methods of calculating agreement, humanities reviewers disagreed more often, and more markedly, with other evaluations than did the scientific reviewers, the percentage differences between types of reviewers were not great. This indicates that while there may be a trend toward objective agreement among scientific peers, there remains, even among informed reviewers, a considerable amount of disagreement. Perhaps most interesting is the observation that the closer the reviewer is to the scientific content of the manuscript (ie, specialist vs family physician vs editor vs humanities reviewer), the more negative the evaluation tends to be. Given the real but still small difference between scientific and humanities reviewers in this study, it is likely that for clinical review articles the advantages of scientific knowledge are not sufficiently exploited in the peer-review process. This could be corrected by requesting greater emphasis on scientific content when sending out clinical summaries for review.
American Family Physician, Georgetown University, 3800 Reservoir Road, Washington, DC 20007, USA