2001 Abstracts

FRIDAY, SEPTEMBER 14

New Research on Authorship and Contributorship

A Multi-Journal Authorship Database: Variation in Author Contributions by Byline Position

Christine Laine,1 Anthony Proto,2 and Frank Davidoff1

Objective

To develop a multi-journal database of author contributions and use it to explore contributions and fulfillment of International Committee of Medical Journal Editors (ICMJE) authorship criteria.

Design

Annals of Internal Medicine and Radiology developed a taxonomy of author contributions and collected data for a sample of consecutive research reports in each journal (n=106 Annals of Internal Medicine, n=100 Radiology). Contributions by byline position were examined. At the time of data collection, the ICMJE stated that authors should contribute to study conception and design, analysis and interpretation, and manuscript writing.

Results

The 206 publications included 1389 authors (median 6 per article, range 1-19). First authors more often than second or third authors reported contributing to study conception/design (97% vs 77% and 55%), data analysis/interpretation (90% vs 68% and 48%), drafting (95% vs 51% and 27%), revising (92% vs 83% and 62%), and approving the manuscript (93% vs 84% and 70%). Authors in other positions least often reported contributions in all categories but frequently (53%) provided study materials or patients. Last authors made fewer contributions than first authors but generally reported contributions in each area more often than authors in byline positions other than first through third or last. Most first authors satisfied ICMJE criteria (93%), compared with 72% of second authors, 51% of third authors, 54% of last authors, and 33% of authors in other positions. Somewhat higher percentages satisfied a more relaxed definition of the ICMJE authorship criteria (98% of first authors, 86% of second authors, 62% of third authors, and 66% of last authors).

Conclusions

It is feasible for journals to combine authorship data. Contributions vary by byline position. Most first authors satisfy ICMJE criteria, but substantial proportions of authors who are not first on the byline do not.

1Annals of Internal Medicine, 190 N Independence Mall West, Philadelphia, PA 19106, USA, e-mail: claine@mall.acponline.org; 2Radiology, Richmond, VA, USA

Authors Define Their Own Contribution: Who Fulfills the Criteria for Authorship?

Lydia E Vos and A John P M Overbeke

Objective

To study how authors of articles in the Nederlands Tijdschrift voor Geneeskunde (NTvG; Dutch Journal of Medicine) define their own contributions under different headings, and to compare these contributions with the authorship criteria of the International Committee of Medical Journal Editors (ICMJE).

Design

From January 1 to May 1, 1999, all authors of accepted articles were asked to define their contribution to the study design, the performance, and the reporting. Before publication of the article, the written contributions, accompanied by the author's signature, had to be on file at the editorial office. The first study investigator scored the authors' contributions against a 23-item keyword list; unclear contributions were scored by both study investigators. The contributions were compared with the ICMJE criteria: conception, design of the study, collecting data, and (statistical) analysis belonged to criterion 1; writing the first version, rewriting, and critical revision of the paper belonged to criterion 2; and approving the final version to criterion 3.

Results

Of 91 accepted articles, 83 (91.2%) could be analyzed. The 310 authors reported 690 contributions (mean 2.6 per author), with first authors reporting the most. Thirty-one authors (10.0%) fulfilled criterion 1, 151 authors (48.7%) fulfilled criterion 2, and 111 authors (35.8%) fulfilled both criteria 1 and 2. Only 6 authors (1.9%) reported approving the final version (criterion 3). Sixteen authors (5.2%) fell outside the criteria. Patient care is not among the ICMJE criteria but was nevertheless reported by 42 authors (13.5%).

Conclusion

In 35.8% of cases, authors' self-defined contributions were congruent with ICMJE criteria 1 and 2. Although most authors read the final version, this is rarely reported. Patient care is often mentioned. The order of the authors shows that first authors write the first version of the paper and collect the data, while the other authors critically revise it.

Nederlands Tijdschrift voor Geneeskunde (Dutch Journal of Medicine), PO Box 75971, 1070 AZ Amsterdam, the Netherlands, e-mail: overbeke@ntvg.nl

What Makes an Author? A Comparison Between What Authors Say They Did and What Editorial Guidelines Require

Susan van Rooyen,1 Sandra Goldbeck-Wood,1 and Fiona Godlee2

Context

Authorship of research articles is a central currency in biomedical science. But the rules that are supposed to govern who can claim authorship (the so-called Vancouver Group guidelines from the International Committee of Medical Journal Editors [ICMJE]) are considered by many to be unrealistic and unnecessarily restrictive. Studies have found that these guidelines are unknown to many researchers, are often disregarded, and do not reflect the actual contributions as declared by authors.

Objective

To produce a taxonomy of authors’ declared contributions to their research, to assess whether these contributions conform to the ICMJE guidelines, and to recommend changes to the current requirements for authorship.

Design

Observational study of consecutive research papers published in the BMJ from July to December 1998 and the extent to which declared contributions conform to the ICMJE guidelines and to our view of an ideal paper.

Results

One hundred twenty-nine research articles were included in the analysis, with 588 authors. The number of contributions per author ranged from 1 to 9 (mean 3.8). First authors declared most contributions (mean 5.0). The most commonly declared contribution was writing/editing the paper. Since we had no data on whether authors had approved the final version of the paper, we used 4 proxy measures for this criterion. The proportion of authors conforming to the ICMJE guidelines ranged from 24% to 71%, depending on which of these 4 definitions was used. We defined an ideal paper as one in which key contributions were declared by at least 1 author. Only 17% of papers conformed to this definition. The contributions and the proportion of papers for which the contribution was not declared were as follows: initiation (26%), design (21%), data collection or data entry (26%), analysis (6%), interpretation of results (60%), writing/editing (6%), and being a guarantor (18%).

Conclusions

This study confirms the view that the ICMJE guidelines do not reflect current practice among authors. It suggests that we should move away from a prescriptive approach to authorship to a more descriptive approach, using the declared contributions of authors as a basis.

1BMJ, BMA House, Tavistock Sq, London WC1H 9JR, UK, e-mail: svanrooyen@kscable.com; 2BioMed Central, London, UK

Survey of Contributorship in Cochrane Reviews

Graham Mowatt and Jeremy M Grimshaw, for the Survey of Contributorship in Cochrane Reviews Project Team

The Survey of Contributorship in Cochrane Reviews Project Team includes Graham Mowatt, Liz Shirran, Jeremy Grimshaw, Graeme MacLennan, Phil Alderson, Lisa Bero, Iain Chalmers, Annette Flanagin, Peter Gøtzsche, Adrian Grant, Melissa Ober, Drummond Rennie, and Veronica Yank

Objective

A previous survey by Flanagin et al found that 26% of review articles in peer-reviewed medical journals had evidence of honorary authorship and 10% had evidence of ghost authorship (Flanagin et al. JAMA. 1998;280:222-224). Jadad and colleagues observed that Cochrane reviews had fewer authors than reviews published in paper-based journals; this may suggest that honorary authorship is less of a problem in Cochrane reviews (Jadad et al. JAMA. 1998;280:278-280). The objective of this survey was to determine the prevalence of honorary and ghost authorship in Cochrane reviews.

Design

In March 2000, primary contacts for 577 Cochrane reviews published in issues 1 and 2, 1999, of the Cochrane Library were invited to complete, on behalf of their co-reviewers, a 29-question, Web-based questionnaire. Data were analyzed using SPSS.

Results

Primary contacts for 362 reviews (63% response rate) completed the questionnaire. Thirty-nine percent of reviews (141) had honorary authors (based on the International Committee of Medical Journal Editors’ authorship criteria, March 2000); in 31 reviews (9%) primary contacts reported that not all authors would feel comfortable explaining the major conclusions of the review in an oral presentation. Nine percent (32) had ghost authors (defined as individuals not listed as reviewers who made contributions that merited authorship or assisted in drafting the report of the review). The group of authors (31%) or lead author (25%) decided authorship in the majority of reviews. There was no formal mechanism for deciding authorship in 28% of reviews. Authorship order was assigned according to contribution in the majority (75%) of reviews.

Conclusions

The prevalence of ghost authorship in Cochrane reviews was broadly similar to that reported for review articles in peer-reviewed medical journals; the prevalence of honorary authorship was greater in Cochrane reviews (this may be partly explained by the way in which honorary authorship was defined). Honorary authorship was more prevalent than ghost authorship. The Cochrane Collaboration and Cochrane Review Groups need to develop mechanisms to address these issues.

Health Services Research Unit, University of Aberdeen, Polwarth Building, Foresterhill, Aberdeen AB25 2ZD Scotland, UK, e-mail: g.mowatt@abdn.ac.uk

Corporate Authorship: Problems With Current Systems for Indexing and Citation

Kay Dickersin,1 Roberta Scherer,2 Michelle Gil-Montero,3 and Eunike Suci3

Objective

To examine the ways in which reports of controlled trials with corporate authorship (ie, authors listed by research group name) are indexed and citations counted in bibliographic databases. We were interested in whether reports with corporate authors are difficult to identify and whether this may lead to problems with citation. Corporate authorship has been increasingly used, especially for multicenter clinical trials and other types of studies (eg, genomics) with many contributors.

Design

Cross-sectional, descriptive study. All 47 controlled trials funded by the National Eye Institute and their associated reports were identified. MEDLINE and Science Citation Index (SCI) were searched and citing practices were recorded.

Results

A total of 285 published reports were identified; 44% had corporate authorship, 36% had modified corporate authorship (individual names plus name of research group), and 20% had named authors only. MEDLINE listed no corporate authors in the author field; in SCI, corporately authored reports generally were incorrectly listed with named authors (first name on investigator list [35%], first name on writing committee [25%], contact name [17%], other [23%]). Using the SCI “general search,” citations to 17% of corporately authored reports were identified, compared to 97% with modified corporate authorship and 94% with named authors. Other search methods revealed this was a large undercount of actual citations to corporately authored reports (ie, ≥73% of corporately authored reports actually had been cited). Furthermore, corporately authored reports were cited fewer times than reports with other types of authors. SCI listed most citations to corporately authored reports using an abbreviation of the corporate name.

Conclusions

Although corporate authorship allows investigators to share credit, indexing systems are not currently adapted to this approach. This may result in user confusion and fewer citations. Modification of indexing systems to list corporate authorship in the author field, as is currently under way for MEDLINE, may help to correct the problem.

1Department of Community Health, Brown University School of Medicine, 169 Angell St, Box G-S2, Providence, RI 02912, USA, e-mail: kay_dickersin@brown.edu; 2University of Maryland Medical School, College Park, MD, USA; 3Brown University, Providence, RI, USA

The Hidden Research Paper: Evidence of Suppressed Opinion, Censored Criticism, and Serious Bias Among Contributors

Richard Horton

Objective

Contributorship has revealed the range of intellectual and practical input into published medical research. Are the views expressed in a research paper accurate representations of contributors’ opinions?

Design

For this analysis, 10 papers published in the Lancet during 2000 were selected. Articles selected included varying numbers of contributors (range 2-11), subject areas, and methods. Contributors were asked 6 questions about their study: the key findings, strengths, weaknesses, interpretation within the totality of evidence, implications, and future research. The contributors’ answers were compared, individual views were contrasted with those in the discussion sections of the published paper, and private and public opinions were matched with stated contributions.

Results

Complete data for 5 groups of contributors were available. Discussion sections were haphazardly organized and did not deal systematically with important questions about the study. Weaknesses were often admitted on direct questioning but were not included in the published paper. Contributors frequently disagreed about key findings, weaknesses, implications, and future research. This diversity of opinion was commonly excluded from the published report. Contributors who were less involved in writing the paper were more critical of the study’s design, yet their criticism was not included in the published article.

Conclusions

A research paper rarely represents the opinions of scientists whose work it reports. Weaknesses in study design, acknowledged by co-contributors, may be hidden by the contributor who will receive most credit from publication. These data provide empirical support for both structured discussions and for means to recover the plurality of contributors’ opinions.

Lancet, 42 Bedford Sq, London WC1B 3SL, UK, e-mail: r.horton@elsevier.co.uk

Quality Issues and Standards for Published Material

Is the Reporting Quality of Randomized Controlled Trials an Adequate Measure of Their Methodological Quality?

Karin Huwiler,1 Peter Jüni,2,3 Matthias Egger,3 and Christoph A Junker1

Objective

The quality of reporting is often used as a proxy measure for the methodological quality of randomized controlled trials (RCTs). In this study, the relationship between reporting and methodological quality was examined using the example of analysis by intention to treat (ITT).

Design

Sixty RCT reports were randomly selected from a database of 254 reports. Reporting quality was assessed by 2 reviewers using a 30-item scale based on the 1996 CONSORT statement. Each item earned 1 point, for a maximum score of 30 points. Trials with adequate concealment of treatment allocation were considered to be of high methodological quality. Reporting and methodological quality was compared between trials explicitly reporting an ITT analysis, trials reporting exclusions after randomization, and trials with unclear reporting.

Results

The overall quality of reporting was identical for trials with explicit reporting, regardless of whether the analysis was by ITT (n=33, median score 18, range 10-22) or not (n=13, median score 18, range 9-27), but lower for trials with unclear reporting (n=14, median score 11.5, range 4-16) (P=.001 by Kruskal-Wallis test). Conversely, methodological quality decreased from trials with ITT analyses (high in 33.3% of trials) to trials without ITT analyses (high in 23.1%) to trials with unclear reporting (high in 14.3%) (P=.17 by chi-square test for trend).

Conclusions

Similar quality of reporting may hide differences in methodological quality; a distinction between the 2 is therefore required. When assessing methodological quality, a distinction should be made between the presence or absence of a quality criterion and unclear reporting. Indeed, in the landmark study by Schulz et al (JAMA 1995), the larger effect estimates in trials without explicit exclusions may be due to confounding by inadequate reporting.

1Departments of Social and Preventive Medicine and 2Rheumatology and Clinical Immunology/Allergology, University of Berne, Switzerland; 3MRC Health Services Research Collaboration, University of Bristol, Canynge Hall, Whiteladies Rd, Bristol BS8 2PR, UK, e-mail: juni@bristol.ac.uk

Misleading Publications of Major Mammography Screening Trials in Major General Medical Journals

Peter C Gøtzsche and Ole Olsen

Objective

To compare the reporting of crucial information for assessment of bias in the mammography screening trials in high-impact general medical journals with that in specialist journals and gray literature.

Design

Cochrane review of the 7 large randomized trials of screening.

Results

Three of the 7 main publications on the trials appeared in JAMA (New York HIP study), the Lancet (two-county study), and BMJ (Malmö study); a major follow-up report from a fourth trial appeared in the Lancet (Edinburgh study). Three of the reports did not mention that more women had been randomized than were reported. In less prestigious publications, misclassification bias was detected in the assessment of cause of death for 2 of the trials that favored screening, and a significant increase in overall mortality among young screened women, probably related to radiotherapy, was detected in 1 of the trials. Other important problems related to trial design and analysis were not mentioned in the main publications. Crucial information was often unpublished or published only in Swedish, in letters, theses, trial protocols, conference reports, reviews, and journals that are not widely read.

Conclusions

Publications in major medical journals may make major trials look better than they really are. Journal editors should require trial protocols as part of the peer review process and should publish on their Web sites protocols and relevant criticism no matter how long after the publication it appears.

The Nordic Cochrane Centre, Rigshospitalet, Dept 7112, Blegdamsvej 9, Copenhagen DK-2100, Denmark, e-mail: p.c.gotzsche@cochrane.dk

Discussing Trials in Context: The Balkanization of Research Results

Phil Alderson, Mike Clarke, and Iain Chalmers

Objective

Anyone wishing to interpret a trial needs to know how its results compare with those of similar studies. This was recognized by the original CONSORT statement, which recommended that the report of a randomized trial discuss its findings in light of the “totality of relevant evidence.” In a previous study, Annals of Internal Medicine, BMJ, JAMA, the Lancet, and the New England Journal of Medicine published 26 reports of randomized trials in May 1997; reports of apparently similar trials were found for 25 of these, but in only 2 were the trial’s results placed in the context of a systematic review of other relevant studies. Thus, almost all trials did not provide sufficient information to interpret their results reliably.

Design

The study was repeated in May 2001.

Results

Thirty-three reports of randomized trials were identified in the 19 issues of these 5 journals published in May 2001. None of these reports contained a discussion of the trial’s results in the context of an updated systematic review of earlier trials. Reference was made to relevant systematic reviews in 3, but no attempt was made to integrate the results of the new trial in updated versions of these reviews. Four reports claimed to have been the first published trial to address the question studied. Six reports did not claim to be the first trial but did not cite any related randomized trials. The remaining 20 reports cited other trials, but there was no evidence that any systematic attempt had been made to set the new trial’s results in context.

Conclusions

If anything, the situation has worsened during the last 4 years. There is evidence of a Balkanization in the reporting of new trials, with relevant, related research appearing to be ignored. Trials reported in these major medical journals still do not provide sufficient information to interpret their results reliably.

UK Cochrane Centre, NHS R&D Programme, Summertown Pavilion, Middle Way, Oxford, OX2 7LG, UK, e-mail: mclarke@cochrane.co.uk

Reporting Number Needed to Treat and Number Needed to Harm in Randomized Controlled Trials

Denise Cheng, Joy Melnikow, and Jim Nuovo

Objective

To examine the frequency of reporting the number needed to treat (NNT) and the number needed to harm (NNH) in randomized controlled trials (RCTs).

Design

Five journals were selected for investigation: Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine. For each journal, 4 specific years were evaluated: 1989, 1992, 1995, and 1998. A MEDLINE/HEALTHSTAR search was conducted of each journal for the years identified. The search terms used were “randomized controlled trials” and “controlled trials.” The search was supplemented by a manual review of each journal for the specific year of interest. Eligible articles included those in which an RCT was conducted on the use of a medication for any treatment effect. The study needed to demonstrate a significant positive effect of the treatment drug. The following was abstracted from each eligible article: condition investigated, event being treated/prevented, intervention, study results, and reporting methods (relative risk reduction, absolute risk reduction, odds ratio, NNT, NNH, or other).

Results

Of 358 eligible articles, NNT was reported in 7 articles; 6 of these articles were published in 1998. Number needed to harm was not reported.

Conclusions

Despite the usefulness of these reporting methods, few authors expressed their findings in terms of NNT, and none reported NNH. Consideration should be given to including these values in reports of RCTs.
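Both measures are simple reciprocals of absolute risk differences. A minimal sketch, assuming the standard definitions; the function names and event rates below are illustrative, not taken from any trial reviewed here:

```python
# Hedged sketch: NNT and NNH as reciprocals of absolute risk differences.
# The example rates are hypothetical.

def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
    """NNT = 1 / absolute risk reduction (control rate - treated rate)."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction; NNT is undefined")
    return 1.0 / arr

def number_needed_to_harm(control_harm_rate: float, treated_harm_rate: float) -> float:
    """NNH = 1 / absolute risk increase (treated harm rate - control harm rate)."""
    ari = treated_harm_rate - control_harm_rate
    if ari <= 0:
        raise ValueError("no absolute risk increase; NNH is undefined")
    return 1.0 / ari

# Hypothetical trial: events in 10% of controls vs 6% of treated patients,
# so roughly 25 patients must be treated to prevent one event.
print(number_needed_to_treat(0.10, 0.06))
```

Because NNT and NNH scale event-rate differences into counts of patients, they convey clinical effort more directly than relative risk reductions do.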

University of California, Department of Family & Community Medicine, 4860 Y St, Sacramento, CA 95817, USA, e-mail: jim.nuovo@ucdmc.ucdavis.edu

Extrapolation of Association Between Two Variables in Four General Medical Journals

Yen-Hong Kuo

Objective

An estimated association between 2 variables is valid only within the range of the data. Extrapolation is risky and should be handled with caution. This study was undertaken to assess how the extrapolation issue was managed in 4 weekly general medical journals.

Design

All of the articles published from January through June 2000 in the BMJ, JAMA, the Lancet, and the New England Journal of Medicine were reviewed manually. Articles containing a scatterplot with raw data and a corresponding fitted regression line were included for analyses.

Results

A total of 178 articles presenting at least 1 scatterplot were identified. Among them, 37 articles (21%) containing a fitted line were included: 5 from BMJ, 7 from JAMA, 23 from the Lancet, and 2 from the New England Journal of Medicine. Of these, 31 articles (84%) used a simple linear regression line to illustrate the association between 2 variables. Nine articles (24%) provided the confidence interval (CI) for the fitted line. Twenty-two articles (59%; 95% CI, 42%-75%), drawn from all 4 journals, involved extrapolation, that is, extension of the fitted line beyond the observed data. None of them changed the line type to indicate extrapolation. Four articles (11%) contained a plot with the fitted line reaching unreasonable or meaningless values. Three articles (8%) stated explicit conclusions about values outside the range of the observed data.

Conclusions

A high proportion of the reviewed articles contained a fitted line extended beyond the data without indication. This extrapolation problem can be avoided by presenting the fitted line only within the range of the observed data or by changing the line type outside that range.
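The remedy recommended here can be sketched in a few lines: evaluate the fitted regression line only across the observed range of the predictor, and treat anything beyond it as explicitly marked extrapolation. The data, seed, and variable names below are illustrative assumptions, not from any reviewed article:

```python
# Hedged sketch: confine a fitted line to the observed x-range; anything
# beyond it is computed separately so a plot can style it differently
# (eg, dashed) to flag extrapolation. Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(40, 80, size=30)          # observed predictor values
y = 0.5 * x + rng.normal(0, 3, size=30)   # noisy linear response

slope, intercept = np.polyfit(x, y, 1)    # simple linear regression fit

# Evaluate the fitted line strictly within the data range ...
x_line = np.linspace(x.min(), x.max(), 100)
y_line = slope * x_line + intercept

# ... and keep any extension beyond the data as a separate, clearly
# labeled extrapolation segment (which a plot would draw dashed).
x_extrap = np.linspace(x.max(), x.max() + 20, 50)
y_extrap = slope * x_extrap + intercept
```

Keeping the in-range and out-of-range segments as separate arrays makes the extrapolated portion impossible to draw accidentally in the same solid style as the fit.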

Department of Research, Jersey Shore Medical Center, Meridian Health System, 1945 State Route 33, Neptune, NJ 07753, USA, e-mail: yhkuo@monmouth.com

The Nature of Statistical Input to Medical Research

Douglas G Altman,1 Steven N Goodman,2 and Sara Schroter3

Objective

To investigate the nature and frequency of statistician involvement in medical research and its relation to the final editorial decision.

Design

In a prospective cohort study, authors of original research articles consecutively submitted to the BMJ and Annals of Internal Medicine are sent a short questionnaire at the time of manuscript submission. The questionnaire explores whether they received any assistance from a person with statistical expertise, the nature of the statistical contribution, the availability of statistical expertise at their institution, and the qualifications of the person providing that contribution. Where there was no such input, authors are asked for the reasons why. The study is expected to include more than 500 papers submitted during the first half of 2001.

Results

Answers to the questionnaire will be summarized, both overall and in relation to the outcomes of the editorial decision. Analysis will be stratified by design of studies.

Conclusions

Statistical input to medical research is widely recommended but inconsistently obtained. This study will provide information about the timing and nature of statistical input into medical research submitted for publication and will indicate how often those providing such expertise go unrecognized by either authorship or acknowledgment. The study will enhance understanding of the impact of statistical input as well as help to clarify why so much medical research is conducted without such assistance.

1ICRF/NHS Centre for Statistics in Medicine, Institute of Health Sciences, Old Road, Headington, Oxford OX3 7LF, UK, e-mail: altman@icrf.icnet.uk; 2Department of Oncology, Division of Biostatistics, Johns Hopkins School of Medicine, Baltimore, MD, USA; 3BMJ, London, UK

Association of Journal Quality Indicators With Methodological Quality of Clinical Research Articles

Kirby P Lee,1 Marieka Schotland,1,2 Peter Bacchetti,3 and Lisa A Bero1,2

Objective

To assess whether journal quality indicators (peer review status, impact factor, citation rate, acceptance rate, circulation, MEDLINE indexing, and Brandon’s Library List indexing) are valid predictors of research article methodological quality.

Design

Data on 7 quality indices were collected for a random sample of 30 general medical journals. Original research articles involving human subjects published in 1999 were randomly selected from each journal and categorized as either randomized controlled trials (RCTs) or other (NONRCTs). Meta-analyses, qualitative studies, case series, and case reports were excluded. Using a validated quality assessment instrument (quality score range 0 [low] to 2 [high]), 2 coders independently assessed each article for methodological quality with 90% agreement. Repeated measures models tested for associations between journal quality indicators and article quality score.

Results

A total of 228 research articles (RCT, n=92; NONRCT, n=136) were included. The mean quality score was 1.37 (SD, 0.22). All journals reported a peer review process and were indexed in MEDLINE. For the remaining journal quality indicators, univariate regression analyses revealed significant associations between mean quality scores and higher citation rates (P<.0001), lower acceptance rates (P<.0001), higher impact factors (P<.0001), higher circulation (P=.0001), and indexing on Brandon’s List (P<.0001). Multivariate analysis showed that citation rate alone was the most reliable predictor of journal article quality. When the analysis was stratified by article type, citation rate remained the most predictive factor for both RCT and NONRCT quality. The estimated effect is that for every doubling in citation rate, journal article quality score increases by 0.06 point.
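An effect stated per doubling implies a model that is linear in the logarithm of citation rate. A hedged sketch of how to read it; only the 0.06-point-per-doubling coefficient comes from the abstract, while the function name and example rates are illustrative:

```python
# Hedged sketch: interpreting "0.06 quality-score points per doubling of
# citation rate" as a linear effect on log2(citation rate). The coefficient
# is from the abstract; the example rates are hypothetical.
import math

EFFECT_PER_DOUBLING = 0.06  # quality-score points, as reported

def predicted_quality_difference(citation_rate_a: float, citation_rate_b: float) -> float:
    """Predicted quality-score difference between journals with the two rates."""
    doublings = math.log2(citation_rate_b / citation_rate_a)
    return EFFECT_PER_DOUBLING * doublings

# A journal cited 8x as often as another: 3 doublings, so about 0.18 points
# on a 0-2 quality scale.
print(predicted_quality_difference(1.0, 8.0))
```

On the study's 0 to 2 scale, even a large citation-rate gap therefore predicts only a modest quality difference, which is worth keeping in mind when reading the conclusion.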

Conclusions

High citation rates, impact factors and circulation rates, low manuscript acceptance rates, and indexing on Brandon’s List appear to be predictive of higher methodological quality scores for articles published in the journals.

1Department of Clinical Pharmacy, School of Pharmacy, University of California, San Francisco, San Francisco, CA 94143, USA, e-mail: ps98267@itsa.ucsf.edu; 2Institute for Health Policy Studies, School of Medicine, University of California, San Francisco; 3Department of Epidemiology and Biostatistics, University of California, San Francisco

The Quality of Systematic Reviews of Economic Evaluations in Health Care and What They Are Telling Us: It Is Time for Action

Vittorio Demicheli and Tom Jefferson

Objective

To identify and assess systematic reviews of economic evaluations in health care and to evaluate quality of review methods and evidence of changes in conduct and reporting of economic evaluations.

Design

Descriptive Cochrane review of published and unpublished original reports of exploratory or analytical reviews of published and unpublished economic evaluations. To be included, reviews had to have clear aims, methods, inclusion criteria, lists of excluded and included studies, results, and discussion sections commenting on the quality of methods and/or reporting of economic evaluations. Once inclusion criteria had been applied, quality assessment of included reviews was carried out against 8 criteria, each scored from 1 to 4.

Results

Thirty-seven reports of reviews were identified; 18 failed to meet inclusion criteria. The remaining 19 reports were included and grouped by broad topic. The quality of the included reviews was high, with 18 reports having a mean score of at least 3.25. Agreement between the 2 reviewers on the quality criteria was good (Spearman correlation range 0.989-1.000). Consistent evidence of serious methodologic flaws was found in a substantial number of economic evaluations of health care interventions, regardless of publication status, period of preparation or publication, topic, and study design. There is evidence of modest and slow improvement in quality over the last decade.

Conclusions

Quality of systematic reviews of economic evaluations is good, but given the importance of economic evaluations in resource allocation decisions, the quality of published and unpublished economic evaluations must be improved. Editorial teams, peer reviewers, and regulatory bodies should implement quality assurance based on a single widely accepted and validated standard instrument, such as the BMJ checklist.

Cochrane Health Economics Group and Health Reviews Ltd, 35 Minehurst Rd, Mytchett, Surrey GU16 6JP, UK, e-mail: toj1@aol.com

Graphics in Pharmaceutical Advertisements: Are They Truthful, Are They Adequately Detailed?

Richelle J Cooper,1 David L Schriger,1 Roger C Wallace,1 Vladislav J Mikulich,1 and Michael S Wilkes2

Objective

Are pharmaceutical advertisements sophisticated medical communications akin to scientific publications, or hollow slogans akin to popular advertising? If the former, graphs within advertisements should be similar to graphs in scientific manuscripts. This study characterized the quantity and quality of graphs in pharmaceutical advertisements.

Design

All pharmaceutical advertisements in 10 leading US medical journals published in 1999 were reviewed and each data graph was evaluated to characterize its features. Pharmaceutical advertisement graphs were contrasted with graphs in articles of JAMA and Annals of Emergency Medicine.

Results

There were 836 glossy and 455 small-print pages in 484 unique advertisements (of 3190 total advertisements). Forty-nine percent of glossy page area was nonscientific figures/images, 0.4% tables, and 1.6% scientific graphs (74 graphs in 64 advertisements). The remaining 49% was text or blank page. Eight percent of graphs had errors, 5% had visual obfuscation, and 12% used nonstandard graphing conventions. Only 36% of graphs were self-explanatory. No graphs contained advanced features (pairing, symbolic dimensionality, or small multiples). Fifty-eight percent presented data on an outcome relevant to the drug’s indication. When comparing the pharmaceutical advertisement graphs (n=74) with scientific graphs from JAMA (n=64) and Annals of Emergency Medicine (n=128), more simple univariate graphs (96%) were noted in advertisements than in articles published in JAMA (63%) or Annals of Emergency Medicine (53%). Pharmaceutical advertisement graphs had more visual noise (66% vs JAMA 0% or Annals of Emergency Medicine 10%), more numeric distortion (36% vs JAMA 6% or Annals of Emergency Medicine 5%), and more redundancies within the graphs (46% vs JAMA 14% or Annals of Emergency Medicine 16%). The efficiency of data presentation quantified by the data depiction index (area of graph that contains information) was less in the pharmaceutical advertisement graphs; median and interquartile ranges were for pharmaceutical advertisements 0.22 (0.11, 0.43), JAMA 1.1 (0.52, 3.36), and Annals of Emergency Medicine 0.94 (0.54, 1.7).

Conclusions

Graphs in pharmaceutical advertisements were rare and when present were of lower quality than those in journal articles. The pharmaceutical advertisement graphs’ designs frequently resulted in numeric distortion, which is specifically prohibited by US Food and Drug Administration regulations.

1UCLA Emergency Medicine Center, UCLA School of Medicine, 924 Westwood Blvd, #300, Los Angeles, CA 90024, USA, e-mail: richelle@ucla.edu; and 2Department of Medicine, UCLA School of Medicine

Published Peer Review Policies: Determining Journal Peer Review Status from a Non-Expert Perspective

Marieka Schotland,1 Erin VanScoyoc,2 and Lisa Bero1,2

Objective

Policymakers report that they value evidence from peer-reviewed journals over non-peer-reviewed journals. This study examined whether nonclinical experts, such as regulatory health policymakers, can obtain informative descriptions of journals’ peer review processes.

Design

The data source comprised journals from which articles were submitted as evidence in 2 regulations and a risk assessment of passive smoking. A cross-sectional survey of all medical and scientific journals (n=278) submitted as evidence was performed. Whether the journals were peer reviewed was determined from printed descriptions of the journal or instructions to authors. For journals with a peer review policy published in print or on the Internet, the description of the peer review process was categorized as (1) extensive: explicit statement that the journal is peer reviewed along with details about the process; (2) basic: explicit statement that the journal is peer reviewed; (3) hints: no explicit statement that the journal is peer reviewed although peer review is implied. A descriptive analysis of these data was performed.

Results

Of the total sample of journals studied, only 67% (187/278) had a published peer review policy. Of these peer-reviewed journals, 28% (53/187) were categorized as extensive, 40% (75/187) were categorized as basic, and 32% (59/187) were categorized as hints. Thus, only 19% (53/278) of all the journals had a published, comprehensive description of their peer review process.

Conclusion

In many cases, it is difficult to determine whether and how a journal is peer reviewed. For the benefit of nonclinical experts determining peer review status, journals should develop standard reporting requirements for the peer review process.

1Department of Clinical Pharmacy, School of Pharmacy, University of California, San Francisco, San Francisco, CA, USA; and 2 Institute for Health Policy Studies, School of Medicine, and University of California, San Francisco, 3333 California St, Suite 265, San Francisco, CA 94143-0936, USA, e-mail: bero@cardio.ucsf.edu

SATURDAY, SEPTEMBER 15

Research on Peer Review

Effect of Written Feedback by Editors on Quality of Reviews: Two Randomized Trials

Michael L Callaham,1 Robert K Knopp,2 and E John Gallagher3

Objective

To determine whether simple written feedback by editors to peer reviewers improves the quality of subsequent reviews.

Design

All reviews were routinely rated for quality on a 1 to 5 scale by editors blinded to the study. Study 1: Peer reviewers with review volume of 3 or less per year, and median quality scores of 3 or less, were randomized to an intervention group (receiving the editor’s score of their review, plus unscored copies of other reviews of the same manuscript, plus a very brief summary of journal objectives for review content) or to a control group (receiving other unscored reviews). The study was designed to have a power of 0.95 to detect a difference in score of 1, using 52 subjects. Study 2: Same design as study 1 except different reviewers with median score of 4 or less and 3 or more reviews, and the intervention included: receiving the editor’s score of each manuscript review, scored copies of other reviewers’ reviews of the same manuscript, a brief summary of journal objectives for review quality, and a sample outstanding review of another manuscript (power of 0.95 to detect difference of 0.5).

Results

Study 1: 52 active reviewers were eligible and were randomized; 43 had sufficient data (a total of 153 rated reviews) for analysis. The mean rating change during the study was 0.36 (95% confidence interval [CI] -0.18 to 0.9) for controls and -0.31 (-0.78 to 0.15) for intervention. Study 2: Based on 245 rated reviews to date, 54 control group reviewers had a mean rating change of -0.01 (95% CI -0.29 to 0.28) and 51 intervention group reviewers had a mean rating change of 0.03 (-0.21 to 0.27).

Conclusions

In study 1, minimal feedback from editors on review quality had no effect on subsequent performance of low-volume, poor-quality reviewers. This feedback may actually have had a significant negative effect, perhaps due to the discouragement of poor review scores with minimal guidance as to remediation. In study 2, feedback to average reviewers was more extensive and supportive but to date produced no improvement in reviewer performance.

1University of California, San Francisco, Box 0208, San Francisco, CA 94143-0208, USA, e-mail: mlc@itsa.ucsf.edu; 2University of Minnesota and St Paul-Ramsey Medical Center, Rochester, MN, USA; and 3Albert Einstein College of Medicine, New York, NY, USA

Peer Reviewer Opinions About Manuscript-Specific Acknowledgment

Michael Berkwits1 and Frank Davidoff2

Objective

Peer reviewers are currently the only journal contributors whose identities and contributions are unknown to readers. We assessed reviewers’ opinions of a proposal to acknowledge their contributions to published manuscripts by publishing reviewer names with those manuscripts.

Design

In a pilot survey, reviewers for Annals of Internal Medicine were asked what they thought generally about published manuscript-specific acknowledgment with consent, if they would want it personally, if it would be recognized at their institutions, and if they would want it if it were institutionally recognized. Respondents answered on a 5-point scale with 1 representing unfavorable and 5 favorable responses. We hypothesized that junior and clinical more than senior and research reviewers would desire published manuscript-specific acknowledgment and compared these groups using the t test and rank sum test.

Results

Seventy-eight reviewers responded (80%). Reviewers were an average of 22.5 years out of training and spent equivalent mean times of 30% in clinical and research activities. Sixty-six percent (55,76) were indifferent to or against manuscript-specific acknowledgment (mean score and standard deviation was 2.7±1.3); 68% (57,79) said they would not want it personally (2.6±1.4), 84% (73,91) thought it would not be recognized by their institutions (2.2±1.3), but 51% (39,63) said they would want it if it were recognized (3.3±1.3). There were no differences in responses based on years since training or clinical or research focus, but the difference in wishes for acknowledgment without and with institutional recognition was significant (P<.001).

Conclusions

A majority of peer reviewers did not favor publishing manuscript-specific acknowledgment, in part because of perceptions of a lack of institutional recognition for the effort. Half favored the idea if it were linked to recognition, however. Though surveys assessing attitudes are often poorly predictive of actual behaviors, these data suggest that at least some support exists among reviewers for broadening acknowledgment and recognition of peer review activities.

1University of Pennsylvania, Division of General Internal Medicine, 423 Guardian Drive, 1229 Blockley Hall, Philadelphia, PA 19104, USA, e-mail: berkwits@mail.med.upenn.edu; and 2Annals of Internal Medicine, Philadelphia, PA, USA

Peer Review in Small and Big Medical Journals

Ana Marusic,1 Ivan Kresimir Lukic,1 Matko Marusic,1 David McNamee,2 David Sharp,2 and Richard Horton2

Objective

To compare peer reviewers’ recommendations in the Lancet, a high-impact journal, and the Croatian Medical Journal (CMJ), a small general medical journal.

Design

Comparison of peer review forms for all research manuscripts submitted to CMJ in 1999 and part of 2000 (n=140 manuscripts; 292 review forms) and a sample from the Lancet (n=141; 321 review forms), chosen systematically to cover the same period. Statistical reviews were excluded from the analysis.

Results

Lancet reviewers gave more complimentary scores (0-5 scale) to manuscripts for which they recommended rejection or major/minor revision, but they also recommended rejection more often than CMJ reviewers (44%, n=300 vs 17%, n=280; χ²=52.1, P=.029). Among 9 questions about manuscript quality in the review form, there was significant association between editorial decision and scores for the suitability of research design (odds ratio [OR]=2.01, 95% confidence interval [CI]=1.40-2.89, P<.001) and discussion of systematic/random error (OR=1.38, 95% CI=1.03-1.85, P=.031) for the Lancet, and scores for novelty of information (OR=1.75, 95% CI=1.35-2.27, P<.001) for CMJ. The correlation between the summary score or overall recommendation of reviewers and editorial decision was similar and statistically significant for both journals (Spearman’s rho 0.37 to 0.39). For manuscripts with 2 (and only 2) reviews, reviewers for the Lancet (n=52 manuscripts) agreed on recommendation to the editor for 44% (kappa=0.21, 95% CI=0.02-0.40), and reviewers for CMJ (n=54) agreed for 37% of manuscripts (kappa=0.17, 95% CI=-0.01 to 0.34).

Conclusion

Reviewers’ recommendations have similar influence on editorial decision in big and small journals, although reviewers’ assessment of the quality of manuscripts may differ. Agreement between reviewers in general medical journals is poor, regardless of their size and importance.

1Croatian Medical Journal, Salata 3, 1000 Zagreb, Croatia, e-mail: marusica@mef.hr; 2Lancet, London, UK

Effect of Statistical Review on Manuscript Quality in Medicina Clínica

Catalina Arnau,1 Erik Cobo,2 Francesc Cardellach,2 Joseph María Ribera,2 Albert Selva,2 Agustín Urrutia,2 Vicente Fonollosa,2 Celestino Rey-Joly,2 Miquel Vilardell,2 and Josep Lluís Segú2

Objective

To estimate the influence on manuscript quality of adding a statistical and methodological reviewer to the clinical peer review team.

Design

After random allocation, manuscripts were reviewed either by 1 statistical and 2 clinical experts or by 2 clinical reviewers only. Two blinded statistical evaluators measured each paper twice, before and after the journal peer review process, using a slightly modified version of the Manuscript Quality Assessment Instrument. The primary endpoint was the sum of improvements across all study report items, averaged over the 2 evaluators. The postulated effect was 80% of the within-group standard deviation.

Results

Twenty-six manuscripts received clinical review without statistical review and 17 manuscripts received clinical and statistical review. The improvements favored the inclusion of a statistical reviewer, although the difference did not reach statistical significance. Mean quality scores and 95% confidence intervals (CIs) were 3.06 (2.03-4.09) for clinical review only and 4.12 (2.34-5.90) for clinical and statistical review. The 95% CI for the mean difference in improvement was -0.8 to 2.9. The point estimate of the effect was 1.1, or 36% of the pooled standard deviation.

Conclusions

Statistical review improved the quality of manuscripts, although the improvement did not reach statistical significance. A qualitative study is needed to investigate whether the statistical reviewers fail to give accurate advice, whether the authors fail to follow their recommendations, or whether the design of this study was not precise enough to show the postulated effect.

1Statistics and Operational Research, Universitat Politecnica de Catalunya, C/ Pau Gargallo 5, 08028 Barcelona, Spain, e-mail: erik.cobo@upc.es; and 2Medicina Clinica, Barcelona, Spain

Editorial Peer Review for Improving the Quality of Reports of Biomedical Studies: A Cochrane Review

Phil A Alderson,1 Frank Davidoff,2 Tom Jefferson,1 and Elizabeth Wager3

Objective

To estimate the effect of editorial peer review on importance, relevance, usefulness, methodological soundness, ethical soundness, completeness, and accuracy of submissions to journals.

Design

Descriptive systematic review of prospective or retrospective studies with 2 or more comparison groups generated by random or other appropriate methods. Methods consisted of extensive searches, independent application of inclusion criteria, and descriptive quality assessment of included studies.

Results

The well-researched practice of reviewer and/or author concealment, while laborious and expensive, appears to have little effect on the outcome of the quality assessment process (9 studies). Checklists and other standardization media have little reliable evidence to support their use (2 studies). There is no evidence that referees’ training has any effect on the quality of the outcome (2 studies). Communication media do not appear to have an effect on quality (2 studies). On the basis of 1 study little can be said about the ability of the peer review process to detect bias against unconventional drugs. Validity of the process was tested by only 1 small study in a specialist area. Editorial peer review appears to make papers more readable and improve the general quality of reporting (2 studies), but the evidence for this has limited generalizability.

Conclusions

There is only a small and scattered amount of empirical evidence supporting the use of editorial peer review as a mechanism to ensure quality of biomedical research. Higher-sensitivity and lower-sensitivity inclusion criteria will be applied and the validity of these conclusions will be tested by carrying out a sensitivity analysis.

1UK Cochrane Centre, Summertown Pavilion, Middle Way, Oxford OX2 7LG, UK, toj1@aol.com; 2Annals of Internal Medicine, Philadelphia, PA, USA; 3GlaxoSmithKline, Middlesex, UK

Measuring the Quality of Editorial Peer Review

Tom Jefferson,1 Elizabeth Wager,2 and Frank Davidoff3

Objective

To define quality measures by which the effects of editorial peer review as performed by biomedical journals might be tested.

Design

Systematic review performed as part of a series of Cochrane reviews.

Results

Twenty-four studies were identified and the scales and instruments used to assess the effects of peer review were tabulated. All the studies used surrogate outcomes such as agreement between reviewers or subjective ratings of the quality of submissions. Despite posing a broad range of questions, all examined process measures such as the effects of masking author identity or the use of checklists. None directly assessed the effects of peer review on health care systems, health status, or the advancement of scientific knowledge. Similarly, there have been no large-scale, rigorous studies comparing editorial peer review with different methods of selecting and improving submissions.

Conclusions

Studies published to date have concentrated on peripheral or surrogate measures. Future studies should be designed to measure the effects of peer review on the usefulness, relevance, methodological soundness, ethical soundness, completeness, and accuracy of published research reports. Until these are performed, we cannot assert that editorial peer review is a scientific or evidence-based method.

1UK Cochrane Centre, Summertown Pavilion, Middle Way, Oxford OX2 7LG, UK, e-mail: toj1@aol.com; 2GlaxoSmithKline, Middlesex, UK; 3Annals of Internal Medicine, Philadelphia, PA, USA

Communicating to Readers

Paper or Screen, Mother Tongue or English: Which Is Better?

Pål Gulbrandsen,1 Torben Veith Schroeder,2 Josef Milerad,3 and Magne Nylenna1

Objective

To evaluate whether the presentation (paper or computer screen, mother tongue or English) influences Scandinavian general practitioners’ ability to retain information from a scientific paper.

Design

Controlled trial of 111 general practitioners, ranging in age from 28 to 76 years (mean 47.6 years, SD 8.5), including 45 females (41%), in Denmark, Norway, and Sweden. The participants were randomized to read a review article for 10 minutes in either English or Danish/Norwegian/Swedish, and either on paper or on a computer screen. Immediately after reading, they completed a questionnaire related to the article, with a possible score range from 0 (no correct answers) to 13 (all questions answered correctly). A significance level of 5% and a power of 80% were selected and the necessary number of participants was calculated to be 96 before conducting the study. Mean scores of the different groups were compared using independent sample t tests.

Results

The mean score was 3.9 (95% confidence interval [CI] 3.5-4.4, range 0-11). For those reading English the score was 3.4 (95% CI 2.8-4.0) compared to 4.4 (95% CI 3.8-5.0) for those reading the article in their mother tongue (P=.02). There was no difference between those reading the article on paper (score 4.0, 95% CI 3.3-4.7) and those reading on screen (score 3.9, 95% CI 3.3-4.4). Physicians younger than 40 years scored significantly higher (5.7, 95% CI 4.4-7.0) than those 40 years or older (3.6, 95% CI 3.1-4.0) (P<.01), and male physicians scored higher (4.3, 95% CI 3.7-4.8) than female physicians (3.4, 95% CI 2.7-4.1) (P=.05).

Conclusion

Medical scientific information is acquired more easily by Scandinavian general practitioners if presented in their mother tongue compared to English. Whether the information is presented on paper or on computer screen does not seem to have any impact.

1Tidsskrift for Den norske lægeforening, PO Box 1152 Sentrum, NO-0107 Oslo, Norway, e-mail: pal.gulbrandsen@legeforeningen.no; 2Ugeskrift for læger, København, Denmark; 3Läkartidningen, Stockholm, Sweden

Comparing the Quality of Review Articles Published in Peer-Reviewed and Throwaway Journals: Should We Be Throwing Away the Throwaways?

Paula A Rochon,1,2,3 Lisa A Bero,4 Ari M Bay,1 Jennifer L Gold,1 Julie M Dergal,1 Malcolm A Binns,1 David L Streiner,1,2 and Jerry H Gurwitz5

Objective

Throwaway journals are not peer reviewed and not cited in the literature. The quality, presentation, readability, and clinical relevance of review articles published in peer-reviewed journals and throwaway journals were compared to determine why throwaway journals are so popular.

Design

All review articles focusing on the diagnosis or treatment of a medical condition published in the Annals of Internal Medicine, BMJ, JAMA, Lancet, and the New England Journal of Medicine or a high-circulation throwaway journal (ie, Consultant, Hospital Practice, Patient Care, and Postgraduate Medicine) in 1998 were eligible. Our sample included 394 articles. Two reviewers independently assessed the methodologic and reporting quality of each article. A reviewer evaluated the article’s presentation and assigned readability scores using standard instruments. Clinical relevance was evaluated by 6 independent clinically oriented physicians who read a randomly generated list of 394 article titles.

Results

Of the 394 articles, 16 (4.1%) were peer-reviewed systematic reviews, 135 (34.3%) were peer-reviewed nonsystematic reviews, and 243 (61.7%) were nonsystematic reviews published in throwaway journals. The mean (SD) quality scores were highest in peer-reviewed articles: .94 (.09) for systematic reviews, .30 (.19) for nonsystematic reviews, and .23 (.03) for throwaway articles. Throwaway journal articles were significantly more likely to use tables (P=.023), figures (P=.011), photographs (P<.001), color (P<.001), and larger font size (P<.001) than peer-reviewed articles. Readability scores were more often in the college or higher range for peer-reviewed articles compared with throwaway articles (104 [77.0%] vs 156 [64.2%], P=.01). Peer-reviewed article titles were judged to be significantly less relevant to clinical practice than throwaway article titles.

Conclusions

Although lower in methodologic quality, review articles published in throwaway journals appear to be better at communicating their messages than review articles in peer-reviewed journals.

1Baycrest Centre for Geriatric Care, 3560 Bathurst St, Toronto, Ontario M6A 2E1, Canada, e-mail: paula.rochon@utoronto.ca; 2University of Toronto, Ontario, Canada; 3Institute for Clinical Evaluative Sciences, Ontario, Canada; 4University of California, San Francisco, San Francisco, CA, USA; and 5Meyers Primary Care Institute, University of Massachusetts, Worcester, MA, USA

Media Coverage of Scientific Meetings: Too Much, Too Soon?

Lisa M Schwartz,1,2 Steven Woloshin,1,2 and Linda Baczek1

Objective

Although they are preliminary and have undergone limited peer review, research presentations at scientific meetings may receive prominent attention in the news media. This study described that coverage, assessed the quality of the research covered, and determined whether the research was subsequently published in the medical literature.

Design

Lexis-Nexis was searched to identify major US news media coverage in the 2 months following 5 scientific meetings held in 1998 (American Heart Association, 12th World AIDS Conference, American Society of Clinical Oncology, Society for Neuroscience, and the Radiological Society of North America). Abstracts that generated the news stories were located and MEDLINE was searched to identify subsequent publication within the next 2 years.

Results

A total of 255 news stories reporting on 149 research presentations were found, an average of 51 news stories per meeting. With the exception of the Wall Street Journal, 10 or more stories appeared in each of the top 5 US newspapers (USA Today, New York Times, Los Angeles Times, Washington Post). Of the 149 research presentations receiving news media coverage, 76% were nonrandomized, 25% were small (ie, less than 30 human subjects), and 15% did not involve patients (eg, animal studies). Nearly half (48%) of the covered presentations were not subsequently published in journals indexed by MEDLINE. Notably, some of the most provocative headlines came from these unpublished studies: “Experimental Drug Reverses Fatal Outlook for Cancer”; “Laser Treatment Kills Tumor Without Incisions”; and “Cancer Breakthrough Attacks Genetic Flaws.” Even among the 39 presentations covered on page 1 of these newspapers, 17 (44%) were not subsequently published.

Conclusions

Scientific meeting presentations receive substantial, high-profile attention from the US news media. This coverage is concerning because many of the studies are small, have weak designs, and are not subsequently published.

1VA Outcomes Group, 111B, White River Junction, VT 05009, USA, e-mail: lisa.schwartz@dartmouth.edu; and 2Center for the Evaluative Clinical Sciences, Dartmouth Medical School, Hanover, NH, USA

Research on e-Journals and Online Information

Comparison of Editorial Peer Review Practices Among Indexed Health Sciences Electronic Journals

Ann C Weller

Objective

To determine whether editorial peer review practices, or qualitative and quantitative measures, differ among types of indexed health sciences electronic journals (e-journals).

Design

Comparison of 3 types of e-journals indexed during 2000 in MEDLINE: T1: fully electronic with no print counterpart (identifier: location – Internet only); T2: print (T2P) and electronic (T2E) versions not scientifically coequal (identifier: separate ISSNs for the print and electronic versions); T3: print and electronic versions scientifically coequal (identifier: link to an electronic version and a single ISSN). Data were collected from MEDLINE and the journals’ home pages on 11 T1, 16 T2, and 13 T3 journals randomly selected from Abridged Index Medicus.

Results

Most journals in each category (84.6%-93.7%) had a stated editorial peer review policy. T2E and T2P journals always shared a homepage. Significant differences (P<.05, analysis of variance) existed among the e-journals for the presence of editorials (0%-75.0%), letters to the editor (8.3%-87.5%), and case reports (16.7%-93.7%), as well as the average number of items indexed in MEDLINE (22.5-544.5) and the average number of editorials (0.5-18.0), letters (1.5-48.4), and case reports (0.1-47.8). No statistically significant differences existed for the requirement of either structured abstracts (36.6%-68.7%) or original research (81.8%-87.5%), or for the size of editorial boards (43.7-83.5).

Conclusions

As health sciences e-journals evolve, T1 and T2E journals will probably become more numerous, but now neither has the complexity of traditional print journals. While editors’ statements on editorial peer review are similar, there are differences in number and type of material included in the e-journals. Many T2E publications serve a specialized function, publishing short reports, letters, case reports, or rapid communication and do not have the depth or breadth of print journals. The transformation to electronic publication is progressing rapidly with expectations of its increased impact on the indexed medical literature.

Library of the Health Sciences, 1750 W Polk St, University of Illinois at Chicago, Chicago, IL 60612, USA, e-mail: acw@uic.edu

How Aware Are Health Professionals and Consumers of Resources to Find Research Evidence on the Internet?

Christopher Sigouin1 and Alejandro R Jadad1,2

Objective

To examine and compare awareness of sources of research evidence available on the Internet among patients, clinical oncologists, and oncology nurses involved in the management of cancer.

Design

Cross-sectional survey of all physicians and nurses and 1998 patients with cancer affiliated with the Hamilton Regional Cancer Centre, in Ontario, Canada, conducted between July 1998 and January 2000. Descriptive statistics and relevant comparisons were made between and within groups using chi-square tests.

Results

The response rate was 72%, 97%, and 84% for patients, oncologists, and nurses, respectively. More physicians than nurses or patients reported that they looked for health information on the Internet (71%, 50%, and 15%, respectively; P<.01) and that they had heard of the Cochrane Collaboration (75%, 8%, and <1%; P<.01) and MEDLINE (100%, 89%, and 7%; P<.01). More physicians than nurses reported having heard of PubMed (64% vs 28%; P<.01) and CancerLit (100% vs 83%; P=.03), but there were no differences in the proportion that had heard of CancerNet (93% vs 89%; P=.69) or OncoLink (82% vs 69%; P=.38). Overall, 17% of patients said they had not heard of the Internet or the World Wide Web.

Conclusions

Some patients have not heard of the Internet and few use it to find health-related information. Some of the most rigorously developed sources of information are still unknown, even to health care professionals. Awareness of information on the Internet varies between patients and health care professionals and among health care professionals themselves.

1Department of Clinical Epidemiology & Biostatistics, McMaster University, Hamilton, Ontario, Canada; and 2Program in eHealth Innovation, University Health Network, Eaton Wing EN 6-240, 200 Elizabeth St, Toronto, Ontario M5G 2C4, Canada, e-mail: ajadad@uhnres.utoronto.ca

Much Ado About “Nothing”? Looking for Evidence on Harm Resulting From Health Information on the Internet

Anthony George Crocco,1 Miguel Villasis-Keever,2 and Alejandro Jadad3

Objective

Much has been written about the potential harms associated with poor-quality health information on the Internet. The objective of this study was to determine the number and nature of reported cases of harm resulting from alleged misuse of health information obtained on the Internet.

Design

A systematic review of the literature, including MEDLINE, EMBASE, PsycINFO, CINAHL, and HEALTHSTAR, was performed in April 2001 using a refined search strategy. Two authors separately reviewed the abstracts. The goal was to identify articles, published in peer-reviewed and non-peer-reviewed journals, that described at least 1 case of harm attributed to the use of health information found on the Internet. Papers of any format and in any language deemed possibly relevant were obtained. Each full article selected was screened independently and decisions on inclusion were made by consensus. Included articles were divided into those that described harm related to inappropriate use of accurate information and those that described harm related to use of inaccurate information.

Results

The search yielded 1512 citations and 15 were included. Eight articles reported cases of self-injury associated with accurate Internet information, 4 articles reported harm that resulted in adverse effects after illegal substance abuse, and 1 reported emotional harm resulting from misinterpretation of information. Only 2 articles met the criteria of harm associated with inaccurate information: 1 described poisoning in dogs related to misinformation obtained on the Internet, and the other described a lung cancer patient who self-medicated with an unproved drug and died as a result.

Conclusions

Despite the substantial number of publications claiming or suggesting potential harm associated with misuse of health information on the Internet, this search failed to find supporting evidence. It remains to be determined whether this is due to publication bias, to an absence of studies, or to a lack of harm associated with misuse of information found on the Internet.

1Health Information Research Unit, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada; 2Clinical Epidemiology Unit at Pediatric Hospital, Instituto Mexicano del Seguro Social, Mexico; and 3Program in eHealth Innovation, University Health Network, University of Toronto, Eaton Wing EN 6-240, 200 Elizabeth St,Toronto, Ontario M5G 2C4, Canada, e-mail: ajadad@uhnres.utoronto.ca

SUNDAY, SEPTEMBER 16

Publication Bias

Publishing Protocols Electronically: A Way to Reduce or Introduce Bias?

Chris A Silagy,1 Philippa Middleton,2 and Sally Hopewell2

Objective

To assess the nature and extent of changes between published systematic reviews and their previously published protocols and to assess any potential impact these changes may have had in introducing bias to the review.

Design

We identified previously published protocols for all completed new systematic reviews appearing in the Cochrane Library (issue 3, 2000). The text of each published protocol and the completed review were compared electronically. Two raters independently identified changes to the different sections of the protocol. Changes were classified as none, minor, or major, and their potential impact on the review was assessed.

Results

Of 66 completed new reviews, we identified a previously published protocol for 47. Of these, 43 reviews had at least 1 section that had undergone a major change compared with the most recently published protocol. The greatest variation between protocol and review was in the methods section, where 68.1% of reviews had undergone a major change. Changes that may have introduced bias include narrowing the objectives; adding comparisons, subgroup analyses, or new outcome measures; broadening study design criteria; and narrowing the types of participants included.

Conclusions

Although research protocols are likely to remain iterative documents to at least some extent, it is concerning that some of the changes made to Cochrane reviews could be influenced by prior knowledge of the results. Even if many of the changes improve the overall review, the reasons for making them should be clearly documented.

1Monash Institute of Public Health, Monash Medical Centre, Clayton, Victoria 3168, Australia, e-mail: chris.silagy@med.monash.edu.au; 2UK Cochrane Centre, Summertown Pavilion, Middle Way, Oxford OX2 7LG, UK

Publication Bias in Editorial Decision Making: Assessment of Reports of Controlled Trials Submitted to JAMA

Carin M Olson,1,2 Drummond Rennie,1,3 Deborah Cook,1,4 Kay Dickersin,5 Annette Flanagin,1 Joseph W Hogan,5 Qi Zhu,5 Jennifer Reiling,1 and Brian Pace6

Objective

To assess whether submitted manuscripts reporting results of controlled trials are more likely to be published if they report positive results than if they report negative results.

Design

Prospective cohort study of manuscripts submitted to JAMA from February 1996 through August 1999. For inclusion, manuscripts had to report results of a prospective study in which participants were assigned to a treatment or comparison group and in which statistical tests were used to compare differences between groups. Manuscripts were followed up until the publication decision. We classified results as positive if a statistically significant difference (P<.05) was reported for the primary outcome. Manuscripts were further classified according to indicators of quality and other study characteristics.

Results

Among 745 manuscripts meeting inclusion criteria, 133 (17.9%) were published: 78 (20%) of 383 with positive results, 51 (15%) of 341 with negative results, and 4 (19%) of 21 with unclear direction of results. The crude relative risk for studies with positive results being published compared with studies with negative results was 1.36 (95% confidence interval [CI] 0.99-1.88). After adjusting simultaneously for quality indicators and other study characteristics, the odds ratio for studies with positive results being published was 1.30 (95% CI 0.87-1.96).
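
As a sketch of the calculation (assuming the standard log-scale confidence interval for a risk ratio; the authors' exact method is not stated), the crude relative risk can be reproduced from the counts reported above:

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Crude relative risk of an event in group 1 vs group 2,
    with a CI computed on the log scale (Katz method)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Published manuscripts: positive results 78/383, negative results 51/341
rr, lo, hi = relative_risk(78, 383, 51, 341)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 1.36 (95% CI 0.99-1.88)
```

The interval just includes 1, which is why the abstract describes the association as not statistically significant.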

Conclusions

Publication bias may occur even after manuscripts reporting controlled trials have been submitted to a medical journal. However, because the associations were not statistically significant, the possibility that the observed effect was due to chance cannot be ruled out.

1JAMA, Chicago, IL, USA; 2University of Washington Medical Center, Box 356123, Seattle, WA 98195, USA, e-mail: colson@u.washington.edu; 3University of California, San Francisco, San Francisco, CA, USA; 4McMaster University, Hamilton, Ontario, Canada; 5Brown University, Providence, RI, USA; 6News and Information, American Medical Association, Chicago, IL, USA

Speed of Publication for Submitted Manuscripts by Direction of Study Results

Carin M Olson,1,2 Drummond Rennie,1,3 Deborah Cook,1,4 Kay Dickersin,5 Annette Flanagin,1 Qi Zhu,5 Jennifer Reiling,1 and Brian Pace6

Objective

Follow-up of protocols submitted to an ethics committee, of funded studies, and of completed clinical trials shows that studies with positive results are published more rapidly than studies with negative results. There is no evidence that such bias in speed of publication occurs once manuscripts have been submitted to a medical journal. This study determined whether submitted manuscripts reporting positive results are published more rapidly than those reporting negative results.

Design

Prospective cohort study of 133 manuscripts reporting controlled trials, accrued over 40 months (February 1996-August 1999) and accepted for publication at JAMA. Direction of study results was determined by 2 investigators using objective criteria before the publication decision. Intervals from submission to publication were derived from JAMA's database.

Results

Of the 133 manuscripts, 78 reported positive results (median sample size = 242), 51 reported negative results (median sample size = 361), and 4 had unclear results (median sample size = 322). The interval (mean [SD]) from submission to publication was 270 (116) days for manuscripts reporting positive results and 248 (86) days for manuscripts reporting negative results. This difference was not statistically significant (mean difference, 22 days; 95% confidence interval for the difference, −16 to 59 days; P=.26).
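
A pooled-variance t interval reconstructed from these summary statistics (a sketch; the authors' exact method is not stated, and a fixed critical value of 1.98 is assumed as an approximation of t(0.975) for df = 127) gives a difference whose interval spans zero, consistent with the nonsignificant P value:

```python
import math

def mean_diff_ci(m1, s1, n1, m2, s2, n2, tcrit=1.98):
    """95% CI for a difference in means using a pooled SD;
    tcrit approximates t(0.975, n1 + n2 - 2)."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2) * math.sqrt(1 / n1 + 1 / n2)
    d = m1 - m2
    return d, d - tcrit * se, d + tcrit * se

# Positive results: 270 (SD 116) days, n=78; negative: 248 (SD 86) days, n=51
d, lo, hi = mean_diff_ci(270, 116, 78, 248, 86, 51)
# d = 22 days; lo < 0 < hi, so the difference is not statistically significant
```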

Conclusion

Among manuscripts reporting controlled trials submitted to JAMA, there was no evidence that those reporting positive results were published more rapidly than those reporting negative results.

1JAMA, Chicago, IL, USA; 2University of Washington Medical Center, Box 356123 Seattle, WA 98195, USA, e-mail: colson@u.washington.edu; 3University of California, San Francisco, San Francisco, CA, USA; 4McMaster University, Hamilton, Ontario, Canada; 5Brown University, Providence, RI, USA; 6News and Information, American Medical Association, Chicago, IL, USA

Grant Review and Ethical Issues

Of Molecules, Mice, and Men: The Relationship of Biological Complexity of Research Model to Final Rating in the Grant Peer Review Process of the Heart and Stroke Foundation of Canada

Mark Taylor

Objective

To determine whether reductive grant applications (eg, molecular biological studies) are consistently rated more highly than integrative applications (eg, behavioral sciences) in the peer review process of the Heart and Stroke Foundation of Canada (HSFC).

Design

The 7 grant peer review committees of the HSFC are organized primarily by research approach, ranging from molecular biology to clinical research to behavioral science. The average and median ratings assigned by each of these committees were examined from 1995-1996 through 2000-2001 (n = 2646 grant applications). The funding outcome for each committee was also determined on both a relative and an absolute basis.

Results

Committees that reviewed molecular and cellular applications had the highest average ratings, and those reviewing clinical and behavioral science applications had the lowest. This pattern held for all 6 years of the study period, with the difference in average rating between the lowest- and highest-rated committees ranging from 0.4 to 0.6 (on a 0 to 4.9 scale). This had a major impact on the percentage of grants funded per committee in any given year.

Conclusions

Reductive and integrative applications do not fare equally well in the HSFC peer review process. While the reasons for this are likely multifactorial, these results suggest that different review criteria, committee structures, and/or other measures may be required to ensure the appropriate evaluation of all applications.

Research Programs, Heart and Stroke Foundation of Canada, 222 Queen St, Suite 1402, Ottawa, Ontario K1P 5V9, Canada, e-mail: mtaylor@hsf.ca

Reporting of Informed Consent and Ethics Committee Approval in Clinical Trials: Have Journals Improved?

Veronica Yank1 and Drummond Rennie1,2

Objective

A 1997 study found that major medical journals had poor rates of reporting on informed consent and ethics committee approval. The current study assesses whether journals have improved their reporting.

Design

Pre-post comparison of clinical trials published before and after 1997 in the Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine. Three hundred articles per time period, 60 per journal, were randomly selected. Primary outcome measures were rates of reporting on informed consent and on ethics committee approval.

Results

Informed consent was not mentioned in 26.3% of articles published before 1997 vs 17.7% of articles published after 1997 (P=.011). Similarly, ethics committee approval was not mentioned in 31% of articles published before 1997 vs 18.8% of articles published after 1997 (P<.001). There was no mention of either in 16.0% of pre-1997 vs 9.3% of post-1997 articles (P=.014). Vulnerable populations (eg, children) were the study participants in 29.2% of pre-1997 and 17.9% of post-1997 articles that reported neither protection. In subgroup analyses, journals with the worst initial rates of reporting generally improved the most. BMJ did not describe informed consent in 41.7% of pre-1997 vs 25.0% of post-1997 articles (P=.053). JAMA did not describe ethics committee approval in 41.7% of pre-1997 vs 21.7% of post-1997 articles (P=.019). BMJ, JAMA, and Annals of Internal Medicine had the lowest initial rates of reporting both protections (41.7%, 53.3%, and 56.7%, respectively) but improved markedly (to 63.3%, 71.7%, and 75.0%; P=.018, .038, and .053, respectively). The Lancet and New England Journal of Medicine had the best initial rates and showed a nonsignificant trend toward improvement.

Conclusions

Major medical journals have improved their reporting on informed consent and ethics committee approval, but 1 in 10 studies still does not report on either of these protections.

1Institute for Health Policy Studies and School of Medicine, University of California, San Francisco, c/o 1221 Shrader St, San Francisco, CA, USA, e-mail: vyank@itsa.ucsf.edu; 2 JAMA, Chicago, IL, USA

Absence of Associations Between Funding Source, Trial Outcome, and Quality Score: A Benefit of Financial Disclosure?

Tammy J Clifford, David Moher, and Nicholas Barrowman

Objective

To examine the relationships between funding source, trial outcome, and trial quality.

Design

Recent issues of 5 peer-reviewed, high-impact-factor biomedical journals were hand-searched to identify a convenience sample of 100 randomized controlled trials (20 trials per journal). Relevant data, including funding source (private/public/mixed) and primary outcome (positive/negative/neutral, assessed on the statistical interpretation of results rather than the authors' conclusions), were abstracted. Quality scores were assigned using the Jadad scale.
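
The Jadad scale scores trials from 0 to 5 on randomization, double-blinding, and accounting for withdrawals. A simplified sketch of that scoring logic (the published instrument's exact wording and deduction rules are abbreviated here):

```python
def jadad(randomized, rand_method, double_blind, blind_method,
          withdrawals_described):
    """Simplified Jadad quality score (0-5). The *_method arguments take
    'appropriate', 'unclear', or 'inappropriate'."""
    score = 0
    if randomized:
        score += 1
        if rand_method == "appropriate":
            score += 1          # eg, computer-generated sequence
        elif rand_method == "inappropriate":
            score -= 1          # eg, alternation by record number
    if double_blind:
        score += 1
        if blind_method == "appropriate":
            score += 1          # eg, identical placebo
        elif blind_method == "inappropriate":
            score -= 1
    if withdrawals_described:
        score += 1
    return score

# An adequately randomized, open-label trial that reports withdrawals scores 3
jadad(True, "appropriate", False, "unclear", True)  # → 3
```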

Results

More than 60% of trials received some private funding, but trial outcome was not associated with funding source (χ2=7.87, df=9, P=.548). There was a preponderance of favorable statistical conclusions among published trials, with two thirds reporting results that favored a new treatment, whereas less than 5% of trials reported negative results. Jadad scores were not associated with funding source (χ2=9.90, df=12, P=.624) or trial outcome (χ2=5.524, df=12, P=.938), but over 50% of trials received Jadad scores of 3 or less. This may reflect the need for improved methodological rigor and/or for requiring authors to follow a standard format (eg, CONSORT) in reporting results of randomized controlled trials.
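
The association tests above are Pearson chi-square tests of independence on contingency tables. As an illustration only (the counts below are hypothetical, and the degrees of freedom reported in the abstract imply more categories than a simple 3×3 table):

```python
def chi_square(table):
    """Pearson chi-square statistic and df for an r x c contingency table
    of observed counts (list of rows)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = rows[i] * cols[j] / total
            stat += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Hypothetical funding (private/public/mixed) x outcome (pos/neg/neutral) counts
stat, df = chi_square([[45, 2, 14], [20, 1, 9], [5, 1, 3]])
# df = 4 for a 3x3 table; compare stat with the chi-square distribution for P
```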

Conclusions

The observed nonsignificant associations between funding source, trial outcome, and trial quality may reflect inadequate power or a true absence of association. This finding may also reflect limitations inherent in the study's reliance on voluntary disclosure of financial conflicts of interest. For example, it is not known whether the absence of disclosure in a published report reflects the absence of competing interests among authors or less-than-optimal compliance with journal policies. The need for standardized reporting is suggested by discrepancies between journals in the manner in which financial conflicts of interest are reported, along with a preponderance of trials with low Jadad scores. The persistence of an excess of published reports with statistically positive results poses a continued challenge.

Thomas C Chalmers Centre for Systematic Reviews, Children’s Hospital of Eastern Ontario Research Institute, 401 Smyth, Ottawa, Ontario K1H 8L1, Canada, e-mail: TClifford@cheo.on.ca

Disclosure of Financial Conflict of Interest in Published Research: A Study of Adherence to the Uniform Requirements

Anu R Gupta, Cary P Gross, and Harlan M Krumholz

Objective

To analyze the nature of disclosures in published randomized controlled trials (RCTs) and adherence of these disclosures to the Uniform Requirements for Manuscripts Submitted to Biomedical Journals, which recommend disclosures of financial conflict of interest including a specific description of “the type and degree of involvement of the supporting agency.”

Design

Research outcomes and source(s) of support for the study and authors were abstracted from all RCTs published in 5 journals (Annals of Internal Medicine, BMJ, JAMA, the Lancet, and the New England Journal of Medicine) between April 1, 1999, and March 31, 2000. Chi-square and Wilcoxon rank-sum tests were used to compare groups.

Results

Of the 268 RCTs reviewed, 54% were financially supported solely by nonindustry sources, 24% were supported by industry sources alone, 13% by both, and 9% did not identify a source of study support. Of the 100 industry-supported trials, 31% did not provide any information about authors' relations with industry, 48% had an author disclosure printed, and 21% had a presumed conflict based on an author's address affiliation with industry. Only the Annals of Internal Medicine disclosed the type and degree of involvement of the funding source as specified in the Uniform Requirements. Of the 9 industry-supported studies in this journal, 5 disclosed additional industry involvement in the study through trial design and data analysis. Compared with non-industry-supported trials, industry-supported trials were significantly more likely to have more enrollees (378 vs 199, P=.001), to be multicenter (82% vs 58%, P<.0001), and to conclude in the abstract that experimental therapy was better than control therapy (84% vs 71%; P=.02).

Conclusion

While sources of study support and author-specific conflicts of interest are being disclosed, descriptions of the type and degree of involvement of the supporting agency, as specified by the Uniform Requirements, are not routinely published.

Department of Medicine, Yale University School of Medicine, PO Box 208025, IE-61 SHM, New Haven, CT 06520-8025, USA, e-mail: anu.gupta@yale.edu

Editorial Independence at Medical Journals Owned by Professional Associations: A Survey of Editors

Ronald M Davis1,2 and Marcus Müllner2,3

Objective

To assess the degree of editorial independence at a sample of medical journals and the relationship between the journals and their owners.

Design

Survey of the editors of 33 medical journals owned by not-for-profit organizations (associations), including 10 journals represented on the International Committee of Medical Journal Editors (9 of which are general medical journals) and a random sample of 23 specialist journals with high impact factors that are indexed by the Institute for Scientific Information.

Results

Of the 33 editors, 23 (70%) reported having complete editorial freedom, and the remainder reported a high level of freedom (a score of >8, with 10 indicating complete editorial freedom and 1 indicating no editorial freedom). Nevertheless, a substantial minority of editors reported having received at least some pressure in recent years over editorial content from the association’s leadership (42%), senior staff (30%), or rank-and-file members (39%). The association’s board of directors has the authority to hire (48%) or fire (55%) the editor for about half of the journals, and the editor reports to the board for 10 journals (30%). Twenty-three editors (70%) are appointed for a specific term (median term=5 years). Three fifths of the journals have no control over their profit. The majority of journals use the association’s legal counsel and/or media relations staff.

Conclusions

Most editors report having complete or nearly complete editorial freedom, although many receive modest pressure from their owners over editorial content. Stronger safeguards are needed to give editors protection against this pressure, including written guarantees of editorial freedom and governance structures that support those guarantees. Strong safeguards are also needed because editors may have less freedom than they believe (especially if they have not yet tested their freedom in an area of controversy).

1Center for Health Promotion and Disease Prevention, Henry Ford Health System, One Ford Place, 5C, Detroit, MI 48202-3450, USA, e-mail: rdavis1@hfhs.org; 2BMJ; London, UK; 3University of Vienna Medical School, Vienna, Austria

Confidentiality of Manuscripts Submitted to Medical Journals: Lessons From a Case

Debra Parrish1,2 and David Bruns3

A laboratory test led to false diagnoses of cancer, and patients suffered harm from the resulting surgical procedures. Two peer-reviewed journals, A and B, published reports. A third journal, C, had rejected the paper published in journal B. A test manufacturer, sued by patients, sought access to records of manuscripts from journals A and C. A related manuscript was in press at journal A.

Questions considered included: should third parties be provided access to publication-related records (1) from journal A, which published the initial report (a peer-reviewed letter to the editor); (2) from journal C, which rejected 1 paper; and (3) from the author of the unpublished third paper?

Journal A indicated to the court that, by journal policy, the review process was confidential, and that authors submitted their work trusting in that policy. The journal’s motion to quash the subpoena was not contested. Journal C asked permission from the author of the rejected paper to supply the subpoenaed records. The author consented.

In addition, the court ruled that the unpublished paper was confidential: “[The author’s] interest cannot be protected unless he is able to retain control of the article until it is complete; and, where the purpose of the article is publication, complete must be defined as published. Only in this way is [the author] able to preserve the integrity of his work, protect his intellectual property and safeguard his reputation and credibility. Academicians engaged in pre-publication research should be accorded protection commensurate to that which the law provides for journalists.”

These cases reflect the reactions of journals to third-party subpoenas and the confidentiality of peer review accorded by US courts.

1Debra M Parrish, PC, 615 Washington Rd, Suite 200, Pittsburgh, PA 15228, USA, e-mail: debbie@parrishlawoffices.com; 2University of Pittsburgh Program Teaching Survival Skills and Ethics for Scientists, Pittsburgh, PA, USA; 3Clinical Chemistry, Charlottesville, VA, USA

The Work of the Committee on Publication Ethics (COPE)

Mike Farthing,1 Richard Horton,2 Richard Smith,3 and Alex Williamson3

The Committee on Publication Ethics (COPE) is an informal group founded in 1997 as a response to growing anxiety about the integrity of authors submitting studies to medical journals. Founded by British medical editors, including those of the BMJ, Gut, and the Lancet, the committee had 5 aims:

1. To advise on cases brought by editors. Cases are presented anonymously, and full responsibility for action remains with the reporting editor. The committee has so far considered 103 cases. In 80 cases there was evidence of misconduct. Several cases have been referred to employers and to regulatory bodies like Britain’s General Medical Council. The commonest problems were undeclared redundant publication or submission (29 cases), disputes over authorship (18), falsification (15), failure to obtain informed consent (11), performing unethical research (11), failure to gain approval from an ethics committee (10), and fabrication of data.

2. To publish an annual report describing the cases it considers. The committee has published 3 annual reports and established a Web site (www.publicationethics.org.uk).

3. To draft guidance on these issues. The committee drafted guidelines and, after extensive consultation, published them in 1999 (available on the Web site). They have been adopted by many journals.

4. To promote research into publication ethics. Little has been achieved so far.

5. To consider offering teaching and training. The committee has run 2 seminars, and individual members of the committee have lectured and taught on research misconduct.

COPE has also been concerned to ensure that the scientific community in Britain responds to research misconduct. Britain has now had several high-profile cases of research misconduct but has yet to make a coherent response to the problem. Several bodies, including the Royal Society and the General Medical Council, are currently considering the problem, and COPE has been important both in spurring these bodies to action and in contributing to a response. COPE might have proved to be a temporary body, but members of the committee judge that its work must continue. It has thus published a draft constitution and proposes to formalize itself.

1Gut, London, UK; 2Lancet, London, UK; 3BMJ Publishing Group, BMA House, Tavistock Sq, London WC1H 9JR, UK, e-mail: rsmith@bmj.com

Postpublication Issues

Positive Outcome and Other Characteristics Predicting Citation by Other Authors of a Cohort of Original Research Publications in Peer-Reviewed Journals

Michael L Callaham,1 Robert L Wears,2 and Ellen J Weber1

Objective

To identify characteristics that predict citation of individual research studies published in scientific journals.

Design

A cohort of all research submitted (as abstracts) to a 1991 medical specialty meeting was eligible; research subsequently published as a full manuscript was identified using previously published methods. Citations of these publications were then identified via a computerized search of the Science Citation Index and compared using recursive partitioning (nonparametric classification and regression trees), with citations per year and the average impact factor of the citing journals as outcome measures, controlling for study characteristics such as design, size, and positive outcome. Study power to detect an effect of strength r2 = 0.1 was .97.
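
Recursive partitioning grows a regression tree by repeatedly splitting the data at the predictor threshold that most reduces within-group variability. A minimal one-predictor sketch of a single split (the study's analysis used multiple predictors and a full CART procedure; the data below are invented for illustration):

```python
def best_split(x, y):
    """One step of recursive partitioning for a numeric predictor: return the
    threshold that minimizes the within-group sum of squared errors."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best_t, best_cost = None, float("inf")
    for t in sorted(set(x))[:-1]:          # candidate thresholds
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        cost = sse(left) + sse(right)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Toy data: citations/year rising with the publishing journal's impact factor
impact = [1, 2, 3, 10, 12, 15]
cites = [0.5, 1.0, 0.8, 4.0, 5.0, 4.5]
best_split(impact, cites)  # → 3: the tree first separates low- from high-impact journals
```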

Results

A total of 204 studies met entry criteria; the average citation rate was 2.04 per year, across 440 different journals. Only 12% of studies were never cited. The ability of any model to predict citations per year was weak (pseudo-r2 = 0.14). The most powerful predictor of citations was the impact factor of the original publishing journal. When that was removed, only the presence of a control group and a subjective newsworthiness score (determined by a Delphi panel) strongly predicted citation frequency; only the latter strongly predicted the impact factor of the citing journals. Positive-outcome bias was not evident in citations using either outcome measure.

Conclusions

In this cohort of published research, commonly used measures of study methodology and design did not predict the frequency of citations or the impact factor of citing journals, whereas the impact factor of the publishing journal did. Positive-outcome publication bias was not evident in citations by other authors.

1Division of Emergency Medicine, University of California, San Francisco, Box 0208, San Francisco, CA 94143-0208, USA, e-mail: mlc@itsa.ucsf.edu; 2Department of Emergency Medicine, University of Florida Health Center at Jacksonville, Jacksonville, FL, USA

How Important Is the Size of a Reprint Order?

Sally Hopewell and Mike Clarke

Objective

The number of reprints ordered for some journal articles can be very large. One, albeit crude, measure of the importance of these articles is the number of times they are cited. This study aimed to assess the impact of high-reprint articles by measuring their citations in the subsequent literature compared with a control group of articles.

Design

The 21 articles published in the Lancet in 1998 that had the highest individual reprint orders (together representing more than 1 million reprints) were matched with a control set of 21 articles. Where possible, the control articles were from the same section and issue of the Lancet as the high-reprint article. The Science Citation Index was used to obtain the number of citations for each of the 42 articles.

Results

The 21 high-reprint articles were cited 2548 times; the mean number of citations was 121 (range 3 to 499 citations per article). Fourteen of the high-reprint articles were randomized trials, with a mean of 174 citations. Five of the 21 high-reprint articles had more than 200 citations, but 7 (33%) were cited 25 times or fewer. The 21 control articles were cited 986 times; the mean number of citations was 47 (range 1 to 165). Seven of the control articles were randomized trials, with a mean of 77 citations. Fifteen (71%) of the 21 control articles were cited 25 times or fewer. Twenty-five percent of citations for both the high-reprint and the control articles were in journals published in the United Kingdom.

Conclusions

Articles with a high-reprint order were cited more frequently than other articles. However, some high-reprint articles are cited infrequently. Further research is needed to explore other aspects of the relative importance and impact of high-reprint articles.

The UK Cochrane Centre, NHS R&D Programme, Summertown Pavilion, Middle Way, Oxford OX2 7LG, UK, e-mail: shopewell@cochrane.co.uk

The Correspondence Column: How Important Is Post-Publication Criticism in Shaping Clinical Knowledge?

Richard Horton

Objective

Published correspondence is part of the continuous process of peer review. How is this criticism taken into account in practice guidelines and meta-analyses?

Design

The study included 3 randomized trials in hypertension: the Hypertension Optimal Treatment Trial (HOT), the Captopril Prevention Project (CAPP), and the Swedish Trial in Old Patients with Hypertension-2 (STOP-2). For each trial, a taxonomy of criticism from published correspondence was prepared. The authors’ reply was read to identify agreed weaknesses and unanswered criticisms. Finally, PubMed was searched for practice guidelines and meta-analyses published after the primary trial report, and evidence was sought for incorporation of criticism into interpretations of guidelines and meta-analyses.

Results

From publication to October 25, 2000, HOT was cited in 9 of 36 practice guidelines, CAPP in 6, and STOP-2 in none. For HOT, CAPP, and STOP-2 there were 14, 14, and 12 criticisms; 5, 9, and 9 comments; and 3, 3, and 3 questions, respectively, in published letters. For HOT, only 1 criticism (lack of power) was referred to in 1 guideline. For CAPP, only 2 of 6 guidelines identified the study as not “entirely satisfactory.” That HOT and CAPP were randomized trials seemed sufficient to grant them unqualified validity when their results were imported into guidelines. HOT was cited but not included in 2 of 62 meta-analyses, and CAPP was cited but not included in 1 of the 62. Further analyses are planned.

Conclusions

Half of all criticism made in correspondence went unanswered by authors. Important weaknesses in trials were ignored in subsequently published practice guidelines. Failure to recognize the critical footprint of primary research weakens the validity of guidelines and distorts clinical knowledge.

Lancet, 42 Bedford Sq, London WC1B 3SL, UK, e-mail: r.horton@elsevier.co.uk

POSTER SESSION ABSTRACTS

Saturday, September 15

Authorship and Contributorship

Equity and Accountability: Successful Use of the Contributorship Concept in a Multisite Study

E Beth Devine,1 Johnny Beney,2 and Lisa A Bero1,3

Objective

In 1991, Roderick Hunt described the use of an “authorship index” as a scoring schema for ordering contributions. The idea of assigning contributors and guarantors, while maintaining a list of acknowledgments, was proposed by Rennie and Yank in 1996. Contributorship has since been adopted by a few key medical journals. In 1998, a multisite study investigating the impact of a telephone intervention on the quality of life of cancer patients began. This presentation summarizes the scoring methods and outcomes of applying contributorship in this multisite study.

Design

Descriptive study of a consensus process. Shortly after the study launch, the 3 primary investigators convened a meeting of all study investigators. Meeting participants received background reading on contributorship. The concept was discussed and embraced. Participants developed a list of all possible contributions to the study and divided them into those representing “authorship contributions” and those representing “acknowledgment contributions.” Each contribution was ranked 1 to 3 (low to high) according to its level of importance and weighted according to level of participation. At the end of the study, each participant completed a contributorship worksheet that listed his or her contributions, from which a contributorship score was calculated for each individual.
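
The abstract does not report the exact weights used, but a schema of this kind can be sketched as the sum of importance rank times participation weight over a person's contributions (the task names and weight values below are hypothetical):

```python
# Hypothetical importance ranks (1-3) and participation weights (1-3);
# the study's actual lists and values are not reported in the abstract.
IMPORTANCE = {"study design": 3, "data collection": 2, "data entry": 1}
PARTICIPATION = {"high": 3, "medium": 2, "low": 1}

def contributorship_score(contributions):
    """Sum of (importance rank x participation weight) over a person's
    authorship contributions, given as (task, participation-level) pairs."""
    return sum(IMPORTANCE[task] * PARTICIPATION[level]
               for task, level in contributions)

# Someone who led study design and helped a little with data entry
contributorship_score([("study design", "high"), ("data entry", "low")])  # → 10
```

Byline order can then follow the ranked scores, as the abstract describes.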

Results

Three guarantors, 8 authorship-contributors, and 10 acknowledgment-contributors were identified. Investigators welcomed the concept and agreed it clarified potential authorship difficulties. The ordering of contributors in the byline was determined by the contributorship score.

Conclusions

In this multisite study, the concept of contributorship was applied to ensure equity and accountability of contributions. The Department of Clinical Pharmacy at the University of California, San Francisco, is now considering creation of a policy based on this methodology.

1Department of Clinical Pharmacy, University of California San Francisco; 2Institut Central Des Hôpitaux Valaisans, Sion, Switzerland; 3Institute for Health Policy Studies, University of California San Francisco, 3333 California St, Suite 265, San Francisco, CA 94143-0936, USA, e-mail: bero@cardio.ucsf.edu

Authorship Criteria Among Cuban Biomedical Professionals

Guillermo J Padrón1 and Jorge Bacallao2

Objective

To evaluate the level of awareness of authorship criteria among Cuban biomedical professionals and the extent to which they adhere to those criteria.

Design

An anonymous questionnaire was distributed to 532 Cuban professionals from Havana biomedical institutions that agreed to allow all of their professional staff to participate. Each participant was asked about the importance of knowing authorship criteria and asked to rate, on an ordinal 1-to-5 scale, the importance they attributed to 12 of the most commonly considered criteria. They were also asked whether they considered themselves to have been unduly included among, or excluded from, the authors of a published paper, and their opinion about the need for an institutional publication policy.

Results

The response rate was 63%. Of 335 respondents, 67 (20%) reported not being informed about authorship criteria, although 88% of them accepted their importance. Of the 267 participants who responded that they were informed about authorship criteria, 36% failed to show real knowledge of them. Thirty-seven percent of participants reported having been unduly included or excluded among authors of published papers. Real knowledge of authorship criteria did not influence that perception. Among authorship criteria, participation in research design, analysis of the results, and origination of the idea received the highest ratings (>70%). Interestingly, although primary data contribution, manuscript revision, procedure setup, and manuscript approval received the lowest ratings (<36%), each was still considered a valid criterion by at least 25% of participants (23% if only those with real knowledge are considered).

Conclusion: The importance of authorship criteria is duly recognized among participants in our study, though only 52% showed true knowledge of them, regardless of institution, work position, postgraduate experience, or number of published papers.

1Elfos Scientiae, PO Box 6072, Ciudad de La Habana 10600, Cuba, e-mail: gjpg@cigb.edu.cu; 2Havana Medicine School, Havana, Cuba

Authorship of Published Medical Papers in Three Chinese Medical Journals

Li Wenhui,1 Qian Shouchu,2 and Qian Yue3

Objective

To investigate the authorship of papers published in 3 Chinese medical journals and to assess the need for authorship education and improvement.

Design

A questionnaire was designed according to the International Committee of Medical Journal Editors (ICMJE) revised definition of authorship (May 2000) and JAMA's authorship standards for 2001. The questionnaire items comprised 3 major parts: (1) conception and study design, acquisition of data, and data analysis and interpretation; (2) drafting of the manuscript, substantial revision, and final decision for publication; and (3) statistical analysis, obtaining grants, administrative, technical, and material support, supervision and management, and other. Authors were required to meet at least 1 criterion from each major part; otherwise, they were to be listed as contributors in the acknowledgment section of the paper. The papers subjected to authorship investigation were those published in 3 Chinese medical journals in 2000.

Results

Of 70 authors from the Chinese Medical Journal, 42 (60%) responded; of 61 authors from the Chinese Journal of Neurology, 35 (57%) responded; and of 60 authors from the Chinese Journal of Pediatrics, 28 (47%) responded. Of the 105 total respondents, 4 were excluded because of incomplete surveys. The remaining 101 respondents represented a total of 524 authors and coauthors, of whom 219 (42%) contributed to conception and study design, 304 (58%) to acquisition of data, 228 (44%) to data analysis and interpretation, 132 (25%) to drafting manuscripts, 163 (31%) to substantial or important revision, 197 (38%) to final decision for publication, 138 (26%) to statistical analysis, 143 (27%) to obtaining grants, 237 (45%) to administrative, technical, and material support, 135 (26%) to supervision and management, and 37 (7%) to other contributions. Only 197 authors (36%), including 90 first or main authors, met the ICMJE criteria for authorship or the JAMA standards.

Conclusion: The low rate of authors meeting the ICMJE and JAMA criteria for authorship in this study shows that authorship education and improvement are needed.

1Chinese Journal of Neurology, 42 Dongsi Xidajie, Beijing 100710, China; 2Chinese Medical Journal, Beijing, China; 3Chinese Journal of Pediatrics, Beijing, China

Communicating to Readers

Changes in Stage of Learning Associated With Participation in a Journal-Based Continuing Medical Education Activity

Thomas B Cole1,2 and Richard M Glass1,3

Objective

To compare physicians’ stages of learning before and after participating in a journal-based continuing medical education (CME) activity.

Design

A systematic random sample of 170 CME participants who had read 3 journal articles and completed a CME evaluation form received a CME credit certificate with a brief survey appended. The survey asked participants to report their stage of learning on each article topic before and after reading the 3 articles.

Results

Of the 170 CME participants, 138 (81.2%) responded to the survey. For most of the articles read for CME credit (212/414, 51.2%), a progression of 1 or more learning stages was reported. Progression from stage 0 (knowledge of the article topic was not a problem for respondent's practice of medicine) to stage 1 (knowledge of the topic was considered to be a problem, but respondent had not yet made an effort to learn more about it) was reported for 14 articles, progression from stage 1 to stage 2 (respondent reported learning about the topic from articles and other sources) was reported for 46 articles, and progression from stage 2 to stage 3 (respondent reported readiness to apply new knowledge of the topic to patient care) was reported for 70 articles. Most respondents (106, 76.8%) reported a progression of at least 1 learning stage on the topic of at least 1 of the 3 articles read for CME credit. CME participants were not significantly more likely (relative risk, 1.14; 95% confidence interval, 0.87-1.49) to report a progression in stage of learning on a particular article topic if they had recorded a commitment to change practice on the CME evaluation form.

Conclusion: Most physicians participating in a journal-based CME activity reported that they learned enough about at least 1 article topic to progress from one stage of learning to the next.

1JAMA, 515 N State St, Chicago, IL 60610, USA, e-mail: thomas_cole@ama-assn.org; 2University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; 3University of Chicago, Chicago, IL, USA

Press Releases: Translating Research Into News

Steven Woloshin and Lisa M Schwartz

Objective

To describe press release policies at high-profile medical journals and to review recent press releases.

Design

Telephone interview with press officers at 8 prominent journals: Annals of Internal Medicine, BMJ, Circulation, JAMA, Journal of the National Cancer Institute, New England Journal of Medicine, Pediatrics, and Surgery. All journals contacted agreed to participate. We also analyzed press releases for the 6 issues of each journal preceding the interview.

Results

Six of the 8 journals issue press releases (the exceptions are New England Journal of Medicine and Surgery) and use the same general process. The editor (with advice from the press office) selects articles based on perceived newsworthiness and general public interest. Releases are written by press officers trained in communications (only 2 had science degrees). Each of the journals has general guidelines for the press releases (eg, overall length); however, none requires a limitations section or has data presentation standards. Editorial input varies: at 1 extreme the journal editors are uninvolved; at the other, the manuscript editor works with the study author and press office to edit the release, which is then approved by the editor-in-chief (Circulation). The 6 journals issued a total of 104 press releases about research articles during the study period (an average of 3 press releases per issue; range, 2-5). The releases differed markedly in length, from 3 sentences (Annals of Internal Medicine, Pediatrics) to 2 pages (Circulation, JAMA). Only 16% noted any caveats for interpreting results. Sixty-five percent reported main effects using numbers. Among the 51 releases presenting differences between study groups, 43% used only relative terms (eg, relative risk or odds ratio) without providing the base rate.

Conclusions

There is considerable variation in how high-profile journals approach press releases. Data are often presented using formats known to exaggerate the perceived importance of findings, and study limitations are not routinely highlighted. Journals may be missing an important opportunity to enhance the quality of medical reporting.

VA Outcomes Group, 111B, White River Junction, VT 05009, USA, e-mail: lisa.schwartz@dartmouth.edu; and Center for the Evaluative Clinical Sciences, Dartmouth Medical School, Hanover, NH, USA

Editorial Process

The Metaedit: When Is Copyediting Peer Review?

Jane C Lantz

Objective

To quantify and characterize manuscript content deficiencies raised during copyediting that likely should have been addressed during peer review (metaedits).

Design

This retrospective study reviewed edited manuscripts (text, tables, figures) of all original articles published in Mayo Clinic Proceedings between January 2000 and June 2001. Using the author query as a marker for the level of edit, each query was categorized as pertaining to a microedit (basic correction at the paragraph level, eg, checking grammar, spelling, punctuation, and style); a macroedit (revision as necessary at the manuscript level, eg, rewriting for clarity, checking consistency among manuscript elements, and verifying arithmetic); or a metaedit (addition of, or a detailed request to the author for, information that changed content, eg, describing study design or interpreting statistical results). The main outcome measure was the number of manuscripts that contained metaedit-related queries. Secondary outcome measures were the nature of the query (ie, in which section of the manuscript the metaedit was needed) and whether the subject of the metaedit had been addressed during peer review.

Results

Data analysis included all 92 original articles published during the study period. Each manuscript had undergone at least 1 round of peer review with traditional unstructured review techniques, and each had been revised at least once. Eleven manuscripts (12%) needed metaedits: 3 in the methods section, 4 in the results section, and 4 in both. None of the subjects of the metaedits had been addressed during peer review.

Conclusions

Metaediting detected manuscript deficiencies not addressed during peer review in 12% of manuscripts. The deficiencies likely would have had a detrimental effect on the reader’s ability to understand the study’s design, results, or both. Tools to remind reviewers (eg, structured review forms or checklists) may enable careful evaluation of the methods and results of manuscripts under consideration.

Mayo Clinic Proceedings, Mayo Foundation, 200 First St SW, Rochester, MN 55905, USA, e-mail: Lantz.Jane@mayo.edu

Changes in Manuscripts and Quality: The Contribution of Peer Review

Gretchen P Purcell,1 Shannon L Donovan,2 and Frank Davidoff2

Objective

To measure objectively the changes in manuscripts during the editorial process, the sources of those changes, and their relationships to manuscript quality.

Design

Using line-by-line comparison, all substantive modifications to original research articles made between submission and publication by Annals of Internal Medicine were identified. Each change was classified according to our previously published taxonomy based on problems in reporting scientific research. From communications between the author and editorial staff, the principal source of each change was determined (ie, editor, peer reviewer, statistician, or technical editor). Blinded independent reviewers assessed manuscript quality using a modification of a previously published instrument.

Results

Analysis to date has identified 1358 changes in 22 manuscripts (mean: 62 changes per manuscript). Editors prompted the most changes (28%), followed by peer reviewers (14%), statisticians (14%), and technical editors (3%). More than 1 person recommended 6% of the modifications. A source was not identified for 47% of changes. Editors, peer reviewers, and statisticians most often requested revisions because information was missing. Technical editors frequently asked for modifications when the text did not contain enough detail. Changes without an obvious source usually added missing data or removed extraneous text. Overall, the modifications most commonly involved missing (29%) or extraneous (19%) information.

Conclusions

All members of the editorial team for Annals of Internal Medicine contribute significantly to changes in research articles during the editorial process. Over half of the modifications made to manuscripts between submission and publication result from peer review and editorial review. The relationship between these changes and manuscript quality is the subject of ongoing research.

1Duke University Medical Center, DUMC 31134, Durham, NC 27710, USA, e-mail: purcell@duke.edu; 2Annals of Internal Medicine, Philadelphia, PA, USA

Effects of Technical Editing and Other Standardization Processes Applied by Biomedical Journals to Research Reports

Elizabeth Wager,1 Philippa Middleton,2 and the Peer Review and Technical Editing Systematic Review (PIRATES) Group

Other members of the PIRATES group are Phil Alderson, Frank Davidoff, and Tom Jefferson

Objective

To assess the effects of technical editing and other standardization processes on research reports in peer-reviewed biomedical journals.

Design

Systematic review, following Cochrane Collaboration methodology, of comparative studies presenting original data, identified from the Cochrane Library, MEDLINE, 12 other databases, and hand searching of relevant journals from the earliest available entry until July 2000.

Results

Eighteen studies addressing editing performed between acceptance and publication or interventions designed to standardize publications (eg, journal instructions to authors) were identified. Two studies showed improvements in the quality of papers after peer review and editorial processes, with 1 indicating that editing improved style and readability. Two others demonstrated improvements in readability, although papers remained difficult to read. Adequacy of reporting increased over time in 2 of 3 studies, although the overall quality of reporting remained low in 1. Instructions to authors alone were ineffective in improving overall reporting quality in 1 study but were associated with improved reporting of ethical review in another. One study found that providing a worked example improved readers’ ability to make calculations compared with presenting only a theoretical formula in a paper about physiotherapy equipment. Six studies examined structured abstracts and found them to be more readable, easier to search, and of higher quality but longer than unstructured abstracts. Four other studies examined deficiencies in abstracts, showing that instructions to authors were ineffective, but that editing abstracts in-house before publication improved accuracy. Only 1 study (of instructions to authors) was prospective and randomized; the others used before/after designs, and some surveyed different journals, leading to possible bias.

Conclusions

Surprisingly few studies have evaluated the effects of these interventions rigorously. However, there is some evidence that the package of technical editing and other processes used by biomedical journals does improve papers.

1GlaxoSmithKline, International Medical Publications, Greenford Road, Greenford, Middlesex, UB6 0HE, UK, e-mail: liz@sideview.demon.co.uk; 2Royal Australasian College of Surgeons, North Adelaide, Australia

Peer Review of Grants

Characteristics of Successfully Recruited Grant Application Peer Reviewers

William C Grace, Teresa Levitin, and Susan Coyle

Objective

Because of anecdotal reports that many scientists refuse to serve on grant application review panels, with subsequent effects on the quality of review, this analysis of available samples was undertaken to document rates of acceptance of invitations to review applications at the US National Institute on Drug Abuse (NIDA) and to provide an initial analysis of factors commonly thought to be associated with refusals.

Design

Cross-tabulations of responses to invitations to review for the NIDA with demographic variables associated with expertise and professional rewards for reviewing, along with qualitative analysis of reasons for refusal.

Results

Acceptance rates ranged from 49% (36 of 73 contacted) for a relatively nondemanding request to serve on a single panel to 22% (13 of 60) for a complex, multiple-meeting review assignment. For the less demanding review, invitees employed by universities were less likely to agree to serve than those from other organizations (25 of 60, or 42%, vs 11 of 13, or 85%; Fisher's exact test, P=.005). Gender, ethnicity, degree (MD vs PhD), and academic rank were not associated with refusals. Reasons given for refusing indicated high demands on reviewers' time and the availability of other rewarding scientific activities.

Conclusions

Acceptance rates among those typically asked to serve on NIDA review panels were less than half, supporting anecdotal reports of widespread reluctance to serve and the presence of demands that compete with reviewing. Despite the value of study section service to universities, university personnel were less likely to agree to serve. Data did not otherwise suggest systematic differences in types or level of expertise among those who agreed.

Office of Extramural Affairs, National Institute on Drug Abuse, National Institutes of Health, 6001 Executive Blvd, Room 3158, MSC 9547, Bethesda, MD 20892, USA, e-mail: wg15v@nih.gov

Proposal Review Scores as Predictors of Project Performance

Karunesh Tuli and Alan Sorkin

Objective

To identify programmatic factors at the proposal stage that predict child survival project performance in improving health practices and coverage.

Design

This retrospective observational cohort study explored the relationship between proposal review scores and project performance in the context of child survival projects implemented by voluntary organizations and funded by the US Agency for International Development. The study examined 85 projects, located in 21 developing countries, that started in or after 1991 and ended by 1997; comparable baseline and final data on health practices and coverage were available for 65 projects. Three or 4 technical reviewers appraised each proposal. There were 95 reviewers in all; some assessed more than 1 proposal, and review panel composition varied from one proposal to another. Multiple regression analyses were carried out to identify proposal scoring criteria that predicted project performance. The dependent variables were computed as the gap covered between baseline and final evaluations on 10 indicators of child survival project performance.

Results

There was a moderate level of agreement among reviewers in assigning scores to proposals (intraclass correlation coefficient 0.61, 95% confidence interval 0.48-0.72). The proportion of variability in project performance explained by the independent variables ranged from 13% to 44% for the 10 indicators. Positive predictors of project performance included ratings on the quality of proposed staff and plans for community participation, private sector involvement, and revenue generation. Negative predictors of performance included ratings on plans for monitoring and evaluation and plans for collaboration with other health and development agencies.

Conclusion: Greater attention to the predictors of project performance identified in this study may help organizations improve the performance of health projects in developing countries.

Department of International Health, Johns Hopkins University, 5001 Chicago Ave, #203, Lubbock, TX 79414, USA, e-mail: ktuli@hotmail.com

Quality Issues and Standards

Factors Associated With Quality in Randomized Controlled Trials Examining Brain Injury

Chisa Yamada and Mark K Borsody

Objective

To identify characteristics of randomized controlled trials (RCTs) that define quality and to determine if RCT quality in a representative field of medical research is improving over time.

Design

A binary (ie, present or absent) quality scale consisting of 7 criteria of internal validity was developed. The 7 criteria were (1) use of randomization, (2) assessment of prognostic factor distribution, (3) use of intention-to-treat analysis, (4) blinding of the patient, (5) blinding of the health care team, (6) blinding of the outcome observer, and (7) assessment of patient follow-up. Each of these criteria has empirical support in the medical literature as a measure of RCT quality, and their use in a quality scale was pretested for validity and reliability. Next, a PubMed search from 1966-2001 was performed for the medical subject heading (MeSH) "brain injury", limited to RCTs. The resulting manuscripts (n=139) were then graded in a manner blinded to study authors, publication year, and journal. Additionally, the study authors' degrees and specialties and the nature of the findings were noted. The sum of the aforementioned criteria (the RCT score) was analyzed in relation to publication date, author characteristics, and results.

Results

Regression analysis showed no change in RCT score over time (slope, 0.03; 95% confidence interval, -0.07 to 0.14). RCT score was positively associated with having a PhD as the primary author (t=4.7, P<.01) and with having coauthors with degrees or academic appointments related to statistics (t=3.2, P<.01). RCTs reporting negative results had higher RCT scores than RCTs reporting positive results (t=8.4, P<.01).

Conclusions

We found no evidence that RCTs examining brain injury are improving in quality over time. RCT quality was associated only with having authors with scientific and statistical training. Also, studies reporting negative results were of higher quality than those reporting positive results.

Department of Neurology, McGaw Medical Center of Northwestern University, 645 N Michigan Ave, Chicago, IL 60611, USA, e-mail: mborsody@hotmail.com

An Evaluation of the Graphical Literacy of JAMA: Implications for Peer Review

Richelle J Cooper, David L Schriger, and Reb J H Close

Objective

Graphs are an efficient means of presenting data. Despite this, peer review has not emphasized the importance or quality of graphs in medical manuscripts. This study sought to characterize the quantity and quality of graphs in JAMA.

Design

All original research articles in 12 randomly selected issues from 1999-2000 were evaluated using a previously validated 43-item abstraction form. Two reviewers independently rated the type of graph, the clarity of each graph, discrepancies within graphs or between graph and text, use of advanced graphical features, and efficiency of data presentation. Agreement was assessed, and discordant ratings were adjudicated by consensus of the 3 authors.

Results

The 12 issues contained 56 research articles; 37 included graphs, for a total of 64 graphs (the denominator for subsequent percentages unless otherwise stated). Raters agreed on 91% of 2752 items. Simple bar or point charts predominated (63%). Internal errors (8%), contradictions with the text (3%), numeric distortion (6%), lack of visual clarity (5%), nonstandard graphing conventions (11%), and extraneous decoration (0%) were rare. Graphs generally defined all symbols (98%), but 31% were not self-explanatory (despite knowing the study's design and reading the figure's legend, we could not unambiguously interpret the graph). Fourteen percent contained redundancies. Graphs infrequently showed by-subject data (9%) or displayed advanced features (15%) such as pairing, symbolic dimensionality, or small multiples. Forty-eight percent (21/44) of graphs failed to portray the underlying distribution, 48% (26/54) failed to depict stratification on potential confounders, and 67% (14/21) failed to depict pairing inherent in the data. Thirty-eight percent of articles needed more graphs to adequately convey their results. There were no large differences between 1999 and 2000 graphs.

Conclusions

The graphs in JAMA, while generally clear and without errors, often fail to capture important details. Editors should encourage the graphical presentation of data, especially in formats that portray stratified, granular data.

UCLA Emergency Medicine Center, UCLA School of Medicine, 924 Westwood Blvd, #300, Los Angeles, CA 90024, USA, e-mail: richelle@ucla.edu

Toward Complete and Accurate Reporting of Studies on Diagnostic Accuracy: The STARD Statement

Jeroen G Lijmer,1 Hans R Reitsma,2 Patrick M M Bossuyt,3 Les Irwig,4 Paul Glasziou,5 David E Bruns,6 Drummond Rennie,7 David Moher,8 Riekie de Vet,9 and Constantine Gatsonis,10 for the STARD Group

A key element in the evaluation of diagnostic tests is the assessment of diagnostic accuracy. Such an assessment usually consists of a comparison of information from 1 or more tests with information from a reference standard, measured in consecutive patients suspected of having the condition of interest. Accuracy refers to the degree of agreement between test and reference standard and can be expressed in many ways, including sensitivity and specificity. There are several threats to the validity of such a study. A survey of the methodological quality of studies on diagnostic accuracy revealed that key elements of design, conduct, and analysis were often not reported. Another report showed that diagnostic studies with methodological flaws are associated with biased, optimistic estimates of diagnostic accuracy compared with studies without such shortcomings.

The objective of the STAndards for Reporting of Diagnostic accuracy (STARD) initiative is to improve the quality of reporting of studies on diagnostic accuracy. Complete and accurate reporting allows the reader to detect the potential for bias in the study and to judge the generalizability and applicability of the results.

The STARD group began with a literature search to identify publications on the conduct and reporting of diagnostic studies. Potential checklist items were extracted into a list of 75 items. During an international consensus meeting attended by researchers, editors, methodologists, and members of professional organizations, this list was reduced to a 25-item checklist. In addition, a flow diagram was developed to provide information on key numbers. The checklist and flow diagram were tested on a set of diagnostic articles and posted on the Internet, and comments from this phase were used for revisions.

Use of the STARD checklist and flow diagram in the review process is proposed to improve the quality of the reporting of studies of diagnostic accuracy.

1Department of Psychiatry, University Medical Center-Utrecht, Heidelberglaan 100, 3508 GA Utrecht, the Netherlands, e-mail: j.g.lijmer@psych.azu.nl; 2Academic Medical Center, Amsterdam, the Netherlands; 3Academic Medical Center, Amsterdam, the Netherlands; 4University of Sydney, Sydney, Australia; 5Mayne Medical School, Herston, Australia; 6Clinical Chemistry, Charlottesville, VA, USA; 7JAMA, Chicago, IL, USA; 8Thomas C Chalmers Centre for Systematic Reviews, Ottawa, Ontario, Canada; 9Free University, Amsterdam, the Netherlands; 10Brown University, Providence, RI, USA

Appointment of Statistical Editor and Quality of Statistics in a Small Medical Journal

Ivan Kresimir Lukic, Ana Marusic, and Matko Marusic

Objective

To test whether the appointment of a statistical editor improves the quality of manuscripts published in a small general medical journal.

Design

Retrospective review of all research manuscripts containing statistical analysis, published in the Croatian Medical Journal between 1992 and 2000 (n=241), and statistical reviews (n=30) of published manuscripts. Statistical analysis and its presentation were assessed by a blinded observer.

Results

Before the appointment of a statistical editor in 1996, 97 research manuscripts with statistical data were published; the statistical analysis was not satisfactory in 52 of them (54%), including 26 definite errors in analysis and 43 in reporting. After the appointment of the statistical editor (1996-2000), 144 manuscripts with statistical data were published; the statistical analysis was not satisfactory in 91 (53%), with 51 definite errors in analysis and 69 in reporting. Of the 144 published manuscripts, the editor-in-chief sent only 30 (21%) out for statistical review. Statistical analysis was not satisfactory in most manuscripts reviewed by the statistical editor (25/30), including 11 definite errors in analysis and 17 in reporting. Statistical analysis was satisfactory a priori in 2 manuscripts, and the statistical editor's comments improved another 3; however, if the authors had implemented all of the statistical editor's suggestions, 9 more manuscripts would have been improved. The statistical editor made a total of 195 comments on the 30 reviewed manuscripts. Half of the comments (51%) concerned the presentation of data and analysis, followed by general comments (26%) and comments on analysis (11%), study design (8%), and interpretation (4%). No manuscript was rejected specifically because of the statistical review.

Conclusions

Appointment of a statistical editor alone does not guarantee improvement of statistical quality in a small journal. Other measures are necessary, including an editorial policy on sending manuscripts out for statistical review and strict monitoring of revised manuscript versions.

Croatian Medical Journal, Salata 3, 1000 Zagreb, Croatia, e-mail: marusica@mef.hr

The Effect of Dedicated Methodology/Statistical Review on Published Manuscript Quality

David L Schriger,1 Richelle J Cooper,1 Robert L Wears,2 and Joseph F Waeckerle3

Objective

To examine how reviews performed by 2 dedicated methods/statistical reviewers affect the quality of manuscripts published in Annals of Emergency Medicine.

Design

The 2 dedicated reviewers developed a manuscript scoring form based on previously used instruments. The form contained 85 unique items: 8 assessed the presence of state-of-the-art features (eg, formal exploration of sensitivity to assumptions); the remainder assessed substandard quality. Two raters independently scored each original research publication appearing in 4 consecutive issues of Annals of Emergency Medicine for the presence or absence of each relevant item. The formal methods/statistical review and all subsequent correspondence between authors and editors were examined to determine whether the methods/statistical review provided guidance regarding each of the 85 items and whether the advice was incorporated into the final manuscript.

Results

There were 32 original research articles. One was never subjected to methods/statistical review. Reviewers agreed on 91% of all items; single-item agreement ranged from 77% to 100%. State-of-the-art features were present in 31 (13%) of 248 ratings; the methods/statistical review had commented on 13 (42%) of these. State-of-the-art features were absent in 187 (87%) of ratings; the methods/statistical review had commented on 33 (18%) of these. Substandard features were deemed present in 196 (13%) of 1521 ratings; the methods/statistical review had commented on 86 (44%) of these. Substandard features were absent in 1325 (87%) of ratings; the methods/statistical review had commented on 132 (10%) of these. No fatal flaws in the published manuscripts were found.

Conclusions

Methods reviewers often failed to comment on deficiencies that they themselves had classified as substandard when designing this study. Reviews also failed to encourage inclusion of state-of-the-art features. When reviews identified areas in need of improvement, only half of the comments led to improved manuscripts; in the other half, either the authors rebuffed the suggestions or the editors did not act when the suggestions were ignored.

1UCLA Emergency Medicine Center, UCLA School of Medicine, 924 Westwood Blvd, #300, Los Angeles, CA 90024, USA, e-mail: schriger@ucla.edu; 2Department of Emergency Medicine, University of Florida, Jacksonville, FL, USA; 3Department of Emergency Medicine, University of Missouri-Kansas City School of Medicine, Kansas City, MO, USA

Review of Reference Inaccuracies

Lee Ann Riesenberg and Srinvas Dontineni

Objective

To review peer-reviewed studies, published in English, of reference inaccuracies.

Design

The search strategy included a MEDLINE, CINAHL, and HealthSTAR search from 1966 to November 2000 and a hand search of the reference sections of identified articles. Search terms included the text words citation OR quotation OR reference AND error; this search was then combined, using AND, with the exploded terms writing OR publishing OR periodicals. Included studies were published in English and either used a random sampling procedure to select references or included all references from specified issues. Citation errors involved the author's name, article title, journal name, volume number, publication year, or page numbers. Quotation errors ranged from word changes in quotations to misrepresentation of the original author's work.

Results

Thirty studies published (1979 to 2000) in the biomedical literature, including dentistry, nursing, medicine, pharmaceutical, public health, science, and veterinary medicine, were identified. These studies revealed citation error rates of 7% to 60%, with 1% to 24% of citation errors so significant that readers could not immediately locate the articles. Quotation error rates of 0% to 58% were identified.

Conclusions

Despite the call issued over 2 decades ago for improvements, authors continue to make reference errors. Authors can prevent these errors by (1) allowing time in the writing process for adequate reference preparation; (2) checking citations and quotations for interpretation errors; (3) following the reference format of a style manual when citing a secondary source; and (4) using reference software. Publishers can prevent these errors by checking references against MEDLINE and other electronic databases.

Guthrie/Robert Packer Hospital, Guthrie Square, Sayre, PA 18840, USA, e-mail: lriesenb@ghs.guthrie.org; and Upstate Medical University, Binghamton, NY, USA


Content Categorization for a Peer-Reviewed Journal

David Newman,1 Bin Zou,2 and Judith Tintinalli3

Objective

To develop a system for content categorization of manuscripts for a peer-reviewed journal that can also categorize reviewers and editors.

Design

In phase 1, a set of 51 content categories that was developed for reviewer and editor assignments was reorganized by the journal’s editorial board, using the Delphi approach. Decision rules for the content categories were defined. The content categories for 4 months of journal publications were then validated. In phase 2, content categories were refined, granular subcategories were developed, and categories were validated for 24 months of publications. The unweighted κ coefficient was determined to measure agreement between 2 reviewers for the content categorization of articles.
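The unweighted κ coefficient mentioned above measures agreement between 2 raters beyond what chance alone would produce. As a minimal illustrative sketch (not the study’s own computation; the function name and data are hypothetical):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical labels."""
    n = len(rater_a)
    # Observed proportion of exact agreement.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected agreement from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

A value of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.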

Results

In phase 1, 17 content categories were developed with a κ coefficient of 0.79 (95% confidence interval [CI] 0.68-0.89, P<.001) for 4 months of publications. In phase 2, content categories were expanded to 26 with an additional 94 subcategories. A total of 248 articles were classified, with 343 observations. Subcategories were necessary to categorize reviewers with focused areas of interest, but were impractical to use for articles. The κ coefficient for agreement between authors was 0.88 (95% CI 0.84-0.92, P<.0001) for phase 2. The 26 content categories were then used to assign areas of responsibility for editors.

Conclusions

A single system of content categorization was developed to organize the content of articles, reviewer expertise, and editor responsibility. This method is simple and practical and while it was developed for a single journal, the principles could be applied to other journals to categorize their unique content.

1Department of Emergency Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA; 2Department of Biostatistics, University of North Carolina School of Public Health, Chapel Hill, NC, USA; 3Department of Emergency Medicine, University of North Carolina, CB7594, Chapel Hill, NC 27599-7594, USA, e-mail: jet@med.unc.edu


Research on Abstracts

Full Publication of Results Initially Presented in Abstracts: Revisited

Roberta Scherer and Patricia Langenberg

Objective

To re-examine the rate at which results first appearing in abstracts are published in full, the time between presentation and subsequent full publication, and the association between study characteristics and publication.

Design

Systematic review of all reports that examined the subsequent full publication of biomedical results initially presented in abstract or summary form. Reports were found by searching MEDLINE (1965 through June 2000), Science Citation Index (citations to identified reports), reference lists of identified reports, author files, and word-of-mouth. Reports with less than 2 years of abstract follow-up were excluded because previous studies have shown that subsequent publication of most abstract results takes place within this time period. Reports that did not show the numbers of abstracts examined or followed were also excluded. Dichotomous variables were analyzed using relative risk (RR).
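A relative risk for a dichotomous variable, with a 95% confidence interval by the usual log transformation, can be sketched as follows; the function and figures are illustrative, not taken from the report:

```python
import math

def relative_risk(events_1, n_1, events_2, n_2, z=1.96):
    """RR of group 1 vs group 2, with a z-based CI computed on the log scale."""
    rr = (events_1 / n_1) / (events_2 / n_2)
    # Standard error of log(RR).
    se = math.sqrt(1 / events_1 - 1 / n_1 + 1 / events_2 - 1 / n_2)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper
```

A CI that excludes 1 indicates an association; with small counts (eg, 10/100 vs 5/100) the RR can be 2.0 yet the CI still spans 1, so no association would be claimed.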

Results

Data from 46 of 49 reports resulted in a weighted mean rate of publication of 44.8% (95% confidence interval [CI] 44.0 to 45.6). Few abstracts were published more than 3 years after presentation. Positive results, defined in 4 reports (635 abstracts) as a preference for the experimental treatment, were not associated with full publication (RR=0.97; 95% CI, 0.81-1.18). In contrast, positive results defined in 3 reports (458 abstracts) as statistically significant results or a definite preference favoring either treatment arm, were associated with full publication (RR=1.51; 95% CI, 1.27-1.79), thus showing evidence of publication bias. Other factors associated with full publication were randomized trial vs other clinical study designs, acceptance for presentation at a meeting, basic vs clinical research, and sample size at median.

Conclusion

Efforts to collect all the evidence in a field are stymied first by the failure of investigators to take abstract results to publication and second by the bias of taking only significant results to publication.

Department of Epidemiology and Preventive Medicine, University of Maryland School of Medicine, 660 West Redwood Street, Baltimore, MD 21201, USA, e-mail: rscherer@epi.umaryland.edu


Data Inconsistencies in Abstracts of Research Articles in the New Zealand Medical Journal

Robert Siebers

Objective

Previous studies have demonstrated data inconsistencies in medical journal articles. The objective of the study was to determine the abstract data inconsistency rate of the New Zealand Medical Journal, and whether this correlated with reference inaccuracies.

Design

Assuming a 25% (range: 10%-40%) abstract inconsistency rate, 24 research articles (80% power at P=.05) were chosen by random number generation from 91 research articles with abstracts containing data published during 2000. Abstracts were deemed inconsistent if they contained data inconsistent with what was reported in the body of the article (including figures and tables) or not reported in the article at all. References were checked for accuracy against the MEDLINE listing.

Results

Nine articles had abstracts with data inconsistencies, giving an abstract inconsistency rate of 37.5% (95% confidence interval 16.6%-58.4%). Of these 9 articles, 2 had abstracts containing data (n=2) that were absent from the body of the article; the other 7 abstracts contained data (n=14) inconsistent with that in the body of the article. There was no significant difference in the corresponding reference error rates between inconsistent and consistent abstracts (mean 26.7% vs 34.9%, P=.44). The most frequent reference errors were spelling mistakes in author names and in titles.

Conclusions

About one third of articles published in the New Zealand Medical Journal during 2000 had data inconsistencies in their abstracts. All of these articles contained reference errors that were unrelated to the abstract data inconsistencies, suggesting different causes for the two types of error.

Department of Medicine, Wellington School of Medicine, PO Box 7343, Wellington, New Zealand, e-mail: rob@wnmeds.ac.nz


Fate of Abstracts Submitted to Biomedical Meetings: A Systematic Review

Björn Erik von Elm, Greta Poglia, Bernhard Walder, and Martin R Tramèr

Objective

To study the acceptance rate of abstracts at biomedical meetings (meeting acceptance), the rate of subsequent publication (publication rate), and predictive factors for both.

Design

Systematic search (MEDLINE, Embase, Cochrane, CINAHL, BIOSIS, SCI, through March 31, 2001, all languages) for any relevant reports. Dichotomous data on predictive factors were combined using a fixed-effect model with relative risks (RRs) and 95% confidence intervals [CIs]. Kaplan-Meier survival analyses were used to estimate delay until publication.
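A fixed-effect combination of dichotomous data typically weights each study’s log relative risk by its inverse variance, and the variance can be recovered from a reported 95% CI. A minimal sketch under those assumptions (the review’s exact model is not shown):

```python
import math

def pool_fixed_effect(studies, z=1.96):
    """Inverse-variance fixed-effect pooling of relative risks.

    Each study is a tuple (rr, ci_lower, ci_upper); the SE of log(RR)
    is recovered from the width of the 95% CI on the log scale."""
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # SE of log(RR)
        weight = 1.0 / se ** 2                         # inverse-variance weight
        num += weight * math.log(rr)
        den += weight
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(log_pooled),
            math.exp(log_pooled - z * se_pooled),
            math.exp(log_pooled + z * se_pooled))
```

Pooling 2 identical studies returns the same RR with a narrower CI, as expected.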

Results

Twelve studies reported on meeting acceptance, 40 on publication rate, and 8 on both. At 45 meetings, 14 945 abstracts were submitted and 6815 (45.6%) accepted. Predictive factors for meeting acceptance were basic vs clinical research (RR 1.92, 95% CI 1.63-2.27), positive vs negative/equivocal outcome (1.27, 1.08-1.50), and surgical vs nonsurgical meeting (1.10, 1.05-1.16). At 228 meetings, 18 707 abstracts were accepted and 7705 (41.2%) subsequently published (follow-up, 8 to 300 months). Predictive factors for publication were basic vs clinical research (1.46, 1.30-1.63), positive vs negative/equivocal outcome (1.38, 1.23-1.55), surgical vs nonsurgical meeting (1.24, 1.20-1.28), and oral vs poster presentation (1.25, 1.08-1.44). Of all 7705 published abstracts, 14.3% were published within 1 year, 37.4% within 2, 71.4% within 4, and 97.2% within 8 years.

Conclusions

The acceptance rate of abstracts at biomedical meetings is about 45%, and the rate of subsequent publication of accepted abstracts is about 40%. A source of uncertainty remains the cohort of abstracts that are rejected at meetings but may nevertheless be published. Of all accepted and eventually published abstracts, one third are published within 2 years of the meeting, one third in the 3rd and 4th years, and one third in the 5th through 8th years. Predictive factors for both meeting acceptance and subsequent publication are basic research, positive outcome, and surgical meeting.

Division of Anesthesiology, Department APSIC, Geneva University Hospitals, CH-1211 Geneva 14, Switzerland, e-mail: martin.tramer@hcuge.ch


Study of Peer Review

Identifying Manuscript Reviewers: Ask First or Just Send?

Roy M Pitkin and Leon F Burmeister

Objective

Some journals routinely presolicit potential reviewers, seeking their consent before sending the manuscript (ASKFIRST), whereas others simply send the manuscript and allow reviewers to opt out (JUSTSEND). These approaches were compared in a randomized trial.

Design

The setting was the main editorial office of Obstetrics & Gynecology, a monthly medical specialty journal. Two reviewers for each of 283 consecutive research manuscripts were assigned randomly (computer-generated) to either ASKFIRST (queried by fax) or JUSTSEND (sent the manuscript and instructed to phone if unable to review). Those opting out in either group were replaced with substitute reviewers handled according to the original assignment. Outcome variables were (1) proportion of turndowns, (2) time to file the review for individuals who agreed or did not decline, (3) overall time for the review process (from manuscript receipt until review receipt), and (4) review quality (global rating on a 5-point scale with 1 best and 5 worst).

Results

Among initial ASKFIRST reviewers, 15% declined, compared with 8% of JUSTSEND who opted out (P<.01; relative risk 1.95, 99% confidence interval [CI] 1.03-3.70). Additionally, 21% of ASKFIRST candidates failed to respond within 3 working days (necessitating a replacement), so only 64% of ASKFIRST reviewers contacted initially assented. However, once the manuscript was mailed, the mean time to return the review was significantly shorter with ASKFIRST (21.0 ±9.2 vs 25.0 ±10.1 days, P<.001). Thus, the overall time from manuscript receipt to review receipt did not differ significantly (24.7 ±9.7 vs 25.9 ±10.5 days, P=.178). There also was no significant difference in quality scores between the 2 groups (P=.446, Wilcoxon rank-sum test).

Conclusions

ASKFIRST led to a higher turndown rate than JUSTSEND, but once reviewers gave specific assent, they completed reviews sooner. With these offsetting influences, overall review time did not differ between the 2 approaches, nor did review quality.

Obstetrics & Gynecology, PO Box 70410, 409 12th St SW, Washington, DC 20024-0410, USA, e-mail: rpitkin@greenjournal.org


Journal Manuscript Peer Review: Research and Controversies

Charles F Curran

Objective

To evaluate assessments and criticisms of the peer review process.

Design

Searches of MEDLINE (1966-2000), BIOETHICSLINE (1966-2000) using key words peer review together with periodicals or publications and Index Medicus (1960-1965) using the subject terms publishing, ethics, ethics, medical, and writing. Published retrospective and prospective studies, substantive critical commentary, and editors’ experiences with the process of manuscript peer review were evaluated. Historical summaries, descriptions of peer review procedures for individual journals, and ad hominem criticisms were excluded.

Results

Forty prospective and 21 retrospective studies, 136 evaluations and criticisms, and 7 editor commentaries on or experiences with the peer review process were published. Their primary foci were identifying elements of peer review capable of improvement and identifying and reducing possible sources of peer review bias. Of the comments and suggestions for improving peer review, the most persistent have been elimination of peer reviewer anonymity (17 studies and comments) and masking of author identity from reviewers (12 studies and comments). Interreviewer consistency has also been evaluated or commented on frequently (12 studies and comments).

Conclusions

Evaluation of the literature on peer review demonstrates biases against certain author institutions and foreign-originated submissions and toward reports with positive findings. Suggestions, many of which are feasible to implement, have been made for reducing the impact of biases. The peer review process is generally incapable of identifying fraudulent publications. There is nearly universal agreement that peer review remains a useful instrument for improving the quality of biomedical publications.

Professional Affairs, Forest Pharmaceuticals, Inc, 13600 Shoreline Dr, St Louis, MO 63045, USA, e-mail: ccurran@Forestpharm.com


Reviewing Peer Reviews: The Experience of a Public Health Journal

Ana M Garcia,1 Toni Plasencia,2 and Esteve Fernandez,3 for the Editorial Board of Gaceta Sanitaria

Objective

To describe the quality of manuscript reviews in Gaceta Sanitaria, the journal of the Spanish Public Health and Health Administration Society.

Design

The first 40 manuscripts managed by the editors (n=8) of Gaceta Sanitaria during 2000 were included, comprising 69 reviews from 64 different reviewers. Each review was evaluated by only 1 editor. The instrument for the assessment was based on the Review Quality Instrument (J Clin Epidemiol. 1999;52:625-9), adapted and extended. The reviewer’s assessment of the manuscript’s suitability, relevance, originality, format, design, and interpretation of the results was evaluated by each editor through 12 items either scored yes/no or on a 5-point scale (1=nothing/very poor, 5=extensively/excellent). A quality score was calculated as the sum of items 1 to 10 and compared to items 11 (global usefulness, scale 1 to 5) and 12 (global quality, scale 1 to 5).

Results

Mean global usefulness and quality of the reviews were respectively 3.63 (SD=0.88) and 3.46 (SD=0.91). Items reaching the highest gradings were suitability, format, and constructiveness of the review, while reviews were poorer when assessing relevance, originality, and interpretation. Spearman’s correlation coefficients for quality score (ranging from 11 to 39 points) and items 11 and 12 were, respectively, 0.74 and 0.83.

Conclusions

In Gaceta Sanitaria, the selection of reviewers has been based mostly on editors’ personal pools of experts, with unequal results regarding review quality. This study revealed the main skills (assessment of the suitability and format of the manuscript and constructiveness of the review) and deficiencies (assessment of the relevance, originality, and interpretation of the manuscript) of reviewers. Systematically measuring reviewer performance and the factors related to review quality can better support editors’ selection of reviewers and provide further guidance to help reviewers improve their contributions.

1Faculty of Social Science, University of Valencia, Ayda Tarongers, sln 46022, Valencia, Spain, e-mail: anagar@uv.es; 2Municipal Institute of Health, Barcelona, Spain; 3Catalonian Institute of Oncology, Barcelona, Spain


Peer Review: Are There Other Ways?

Philippa Middleton,1 Mike Clarke,2 Philip Alderson,2 and Iain Chalmers2

Objective

The combined editorial load for the 50 Cochrane Review Groups equates to that of some of the largest biomedical journals. As a decentralized and largely voluntary organization, the Cochrane Collaboration must tackle problems of workload, uneven quality, and variable uptake of advances in systematic review methods. Nevertheless, several studies have rated Cochrane reviews, on average, more highly than their counterparts published in print journals. We speculate that a collaborative editing and peer review system is an important reason for this.

Design

Descriptive account of the Cochrane Collaboration’s editorial and peer review processes.

Results

Some of the features and aspirations of the Cochrane Collaboration’s editorial and peer review system are: substantial prepublication support, including advance approval of topics; published peer-reviewed protocols; a process of peer reviewing and editing designed to facilitate publication; standard review structure and format; involvement of consumers, including as peer reviewers; and responses to new data and criticisms in subsequent versions of reviews.

Conclusions

Although the Cochrane processes appear to offer advantages over traditional approaches, the relative contributions of peer review and other editorial processes are not clear. Future challenges include untangling the effects of these processes, finding reliable ways to monitor the quality of Cochrane reviews, providing more training and support for editors and peer reviewers, addressing potential conflicts of interest and other sources of bias, providing more specialized or centralized editing (eg, statistical editing), and supporting authors in updating their reviews.

1Royal Australasian College of Surgeons, 51-54 Palmer Pl, North Adelaide 5006, Australia, e-mail: mpm@ozemail.com.au; 2UK Cochrane Centre, Oxford, UK


A Comparison of the Performance of National and International Peer Reviewers in a Brazilian Journal

Hooman Momen

Objective

To investigate the performance of international reviewers compared with Brazilian reviewers of manuscripts submitted to the Memorias do Instituto Oswaldo Cruz, one of the oldest and most-cited periodicals published in Latin America, with the highest impact factor (0.636 in 1999) of any Brazilian journal.

Design

Manuscripts submitted in 1998 and 1999 were analyzed. The data on the rejected manuscripts were then disaggregated by country of origin of author and of reviewer.

Results

In 1998, 207 manuscripts were submitted to full peer review (at least 2 external reviewers) and 64 were rejected or canceled (rejection rate 30.9%). In 1999, 69 of 208 submitted manuscripts were rejected (rejection rate 33.2%). In 1998 Brazilian reviewers rejected 42.3% of articles submitted by Brazilian authors, while international reviewers rejected only 14.3% of such articles. Manuscripts with both international and Brazilian authors were excluded from the analysis. A specific analysis was then made of those cases where the same manuscript was submitted to both international and Brazilian reviewers (n=69, 1999; n=87, 1998). In this reanalysis the hypothesis that international peer reviewers were more rigorous than Brazilian reviewers was rejected (McNemar chi-square=0.57, df=1, P=.45 for 1999; McNemar chi-square=0, df=1, P=1 for 1998).
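The McNemar chi-square test used here compares paired dichotomous ratings (the same manuscript judged by 2 reviewer groups) using only the discordant pairs. A generic sketch, not the study’s own code:

```python
import math

def mcnemar(b, c):
    """McNemar chi-square (df=1) for paired dichotomous ratings.

    b and c are the counts of discordant pairs, eg, manuscripts rejected
    by one reviewer group but accepted by the other, and vice versa."""
    if b + c == 0:
        return 0.0, 1.0  # no discordant pairs: no evidence of a difference
    chi2 = (b - c) ** 2 / (b + c)
    # Two-sided P value for chi-square with 1 df via the complementary
    # error function: P = erfc(sqrt(chi2 / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p
```

With equal discordant counts the statistic is 0 and P=1, matching the 1998 reanalysis reported above.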

Conclusions

There is no evidence of a difference in rigor, as measured by rejection rates, between Brazilian and international reviewers at the Memorias do Instituto Oswaldo Cruz. The apparent difference in the 1998 rates could be due to an unconscious decision by the editor to submit the better-quality manuscripts to international reviewers.

Bulletin of the World Health Organization, EIP/IMD, World Health Organization, 1211 Geneva 27, Switzerland, e-mail: momenh@who.int


Unblinding by Authors: Incidence and Nature at Two Medical Journals With Double-Blinded Peer Review Policies

Douglas S Katz,1 Anthony V Proto,2 and William W Olmsted3

Objective

To determine the incidence and nature of unblinding by authors of original manuscripts submitted to 2 medical journals with explicitly stated double-blinded peer review policies in their information to authors.

Design

A prospective study of 880 consecutive manuscripts was performed by the editors of 2 major medical journals. Each editor reviewed every major original article submitted to his journal over a 6-month period, blinded to the identities and institution(s) of the authors. Each manuscript was inventoried for the presence or absence of possible author or institutional unblinding as well as the specific types of unblinding violations.

Results

Three hundred manuscripts (34%) contained information that potentially unblinded the identities of the authors and/or their institutions (257 of 734 manuscripts at 1 journal [35%], and 43 of 146 [30%] at the other journal). The 2 blinded editors correctly identified the authors and/or their institutions in 221 (74%) of the 300 manuscripts. There were a total of 374 separate violations of the blinded peer review policy by authors. The most frequent violations were statement of the authors’ initials within the body of the manuscript (107 instances), referencing work “in press” (66 instances), revealing the identity of the institution in the figures (54 instances), identifying references as the authors’ previous work (57 instances), and stating the identity of the authors’ institution(s) within the body of the manuscript (47 instances).

Conclusion

Despite explicit instructions to authors regarding their double-blinded peer review policies, a significant minority of 880 prospectively inventoried manuscripts submitted to 2 medical journals (34%) contained information that potentially or definitely unblinded the identities of the authors and/or their institution(s).

1Department of Radiology, Winthrop, University Hospital, 259 First St, Mineola, NY, 11501, USA, e-mail: dsk2928@pol.net; 2Radiology, Richmond, VA, USA; 3RadioGraphics, Washington, DC, USA


Comparing Author Satisfaction With Signed and Unsigned Reviews

Maeve Rooney,1 Elizabeth Walsh,2 Louis Appleby,3 and Greg Wilkinson4

Objective

To compare the quality of signed and unsigned reviews, as assessed by authors, and to examine whether authors can correctly identify the reviewers of their papers.

Design

Authors of manuscripts submitted to the British Journal of Psychiatry were asked to assess the quality of the reviews they received, using a validated instrument. The reviews had previously been randomized into a group signed by the reviewer and an anonymous group, but authors remained blinded to the randomization status of the reviews. Independent-sample t tests were used to compare mean quality scores for the 2 groups. Authors were also asked to guess the identity of their reviewer.

Results

Of the 408 requests for reviews that were sent to reviewers, 358 (88%) were completed. Of these 358 reviews sent to authors for quality rating, 216 (60%) were returned. Authors were no more likely to respond if their review was signed than if it was not. Authors of manuscripts that were not published were more likely to respond than authors whose papers were published. There was no association between signed or unsigned reviews and eventual publication. Although the mean quality score for signed reviews was slightly higher than for unsigned reviews, the difference did not reach statistical significance. Only 8% of authors were able to guess the identity of their reviewer accurately.

Conclusions

There is no evidence that ending the anonymity of the peer review process at the British Journal of Psychiatry would adversely affect review quality. Most authors whose manuscripts are reviewed by this journal are unable to identify the reviewer correctly.

1Adelaide and Meath Hospital, 270 Charlemont, Griffith Ave, Dublin 9, Ireland, e-mail: me_rooney@yahoo.co.uk; 2Institute of Psychiatry, London, UK; 3School of Psychiatry and Behavioural Sciences, University of Manchester, Manchester, UK; 4University Department of Psychiatry, Royal Liverpool University, Liverpool, UK


Double-Blind and Single-Blind Peer Review: A Comparison of Their Quality

Sui Xingfang, Ren Xiaoli, and Xue Aihua

Objective

To assess statistically the quality of double-blind and single-blind peer review in the Chinese Journal of Radiology.

Design

Included were 210 single-blind peer review sheets (reviewers are anonymous to authors; comments are circulated among the reviewers) collected over 7 months (group I) and 265 double-blind peer review sheets (comments are circulated anonymously between reviewers and authors as well as among the reviewers) collected over 6 months (group II). The criteria for judging peer review quality covered the following 6 aspects of comments or suggestions: (1) publication, quick publication, publication after revision, peer review again after revision, and rejection; (2) important discovery or specific experience and interpretation, implications from theoretical and clinical points of view, resources for further study; (3) clinical or scientific design, shortcomings in design, conclusion; (4) mechanical problems and style; (5) statistical methods, data analysis, and their precision; and (6) bibliographic references, including their relevance. The review sheets were scored against these criteria, 1 point for each criterion, for a total of 6 points. Total points for the quality of double-blind and single-blind review were calculated and the average points per sheet were compared.

Results

Points for criterion 1 were 198 in group I (single-blind review) and 249 in group II (double-blind review) (P>.05); for criterion 2, 116 in group I and 172 in group II (P<.05); for criterion 3, 25 in group I and 30 in group II (P>.05); for criterion 4, 113 in group I and 132 in group II (P>.05); for criterion 5, 4 in group I and 10 in group II (P>.05); and for criterion 6, 22 in group I and 32 in group II. The results showed no significant difference between the 2 groups on any criterion except criterion 2 (scholarly comments).

Conclusion

This study was unable to demonstrate a difference in quality between double-blind and single-blind peer review.

Chinese Journal of Radiology, 42 Dongsi Xidajie, 100710 Beijing, China


Ideas and Assumptions of Peer Review Research: A Short Review

Sven Trelle

Background

Editors regard reviews by peers as help in selecting the best articles. Peer review is considered a nonobjective process, not comparable to scientific research processes. This study was conducted to investigate whether studies of peer review take this view into account and to identify the governing research questions.

Design

MEDLINE search (1966-2000; MeSH and Title/Abstract-Search: “peer-review”) for articles about experimental peer-review research and review of references of all papers found on the subject.

Results

Three major research fields were identified:
(1) improvement of readability of articles by peer review,
(2) quality and quality improvement of review reports, and
(3) evaluation of peer review on the basis of test-theory criteria (ie, objectivity, reliability, and validity). Most of these test-theory studies do not discuss the lower-order test criteria: even though it is agreed that objectivity is not given, they discuss reliability, which has been shown to be low, and validity. Validity-driven studies comparing the citation rates of accepted and rejected articles conclude that peer review is a valid procedure.

Conclusions

Several evaluation studies of journal peer review do not apply test criteria accurately. Because lower-order test criteria determine the maximum values of subsequent criteria, it is not useful to test reliability when objectivity is low, or validity when objectivity and reliability are low. Application of these criteria indicates that researchers think of scientific publishing and journal peer review as a scientific process. This perception, however, is opposed to the statement of the 1995 Bellagio Conference and to several editorials in biomedical journals. In addition, it contrasts with the expressed wish of some editors for conflicting reviews.

University of Hamburg, Division for Technology Assessment of Modern Biotechnology, Falkenried 94, 20251 Hamburg, Germany, e-mail: trelle@uni-hamburg.de


Measuring the Quality of Peer Review in the Nederlands Tijdschrift Voor Geneeskunde (Dutch Journal of Medicine): Development and Testing a New Instrument

Roos A A van Duursen, Peter Hart, and A John P M Overbeke

Objective

To develop and test an instrument for assessing the quality of peer review in the Nederlands Tijdschrift voor Geneeskunde (NTvG).

Design

Descriptive. An instrument containing 11 items was developed; 10 items concerned specific aspects of the review (eg, comments on data analysis, interpretation of results, and suitability for the NTvG), and item 11 rated the global impression of the quality of the review. The first version of the instrument was tested for 2 weeks; after this period a few adjustments were made to the wording of some questions, and the 10-point scale was changed to a 5-point scale. No interrater agreement was measured at this stage. Three raters then assessed the quality of all first-time reviews received consecutively from week 46 (1999) until week 4 (2000). Reviews received in weeks 46 and 47 were assessed a second time after 9 weeks, using the same instrument, and intrarater agreement was calculated.

Results

A total of 140 reviews were assessed; 34 reviews were assessed twice. The interrater and intrarater agreement on specific items was poor (mean κ=0.118 and 0.204). The interrater agreement of the global item after first assessment was poor for raters 1-2 and 2-3 (κ=0.248, κ=0.256) and satisfactory for raters 1-3 (κ=0.622). After second assessment interrater agreement of the global item was satisfactory for raters 1-2, 1-3, and 2-3 (κ=0.599, κ=0.713, κ=0.511). Intrarater agreement of the global item was moderate to satisfactory for individual raters (κ=0.666, 0.291, and 0.763).

Conclusion

The agreement on the global quality of the reviews was satisfactory, which reflects daily practice. However, the instrument is not yet reliable for measuring the quality of specific aspects of reviews; further adjustments need to be made.

Nederlands Tijdschrift voor Geneeskunde PO Box 75971, 1070 AZ, Amsterdam, the Netherlands, e-mail: overbeke@ntvg.nl


Author Perception of Peer Review: Impact of Reviews and Acceptance on Satisfaction

Ellen J Weber,1 Patricia P Katz,2 Joseph F Waeckerle,3 and Michael L Callaham1

Objective

To determine satisfaction of corresponding authors with peer review.

Design

Surveys were sent to corresponding authors of all manuscripts submitted to Annals of Emergency Medicine during an 18-month period. The survey was pilot tested for comprehension and to ensure content validity; satisfaction questions used a 5-point Likert scale. t tests were used to compare satisfaction between authors whose manuscripts were sent out for review and those rejected by the editor without full review, and between those whose manuscripts were accepted and those rejected after full review. Editors of Annals of Emergency Medicine routinely rate reviews for quality. For respondents whose manuscripts underwent full review, correlation and multivariate analyses determined the association of review quality and publication decision with author satisfaction, controlling for author gender, specialty, and previous peer-reviewed publications.

Results

Of the 934 surveys mailed, 634 (67%) were returned, and 597 were analyzed. Of these, 355 were from authors whose manuscripts underwent full review. Overall satisfaction, indicated by agreement with “My experience with the review process will make me more likely to submit to Annals in the future,” was 3.1 (±1.0). Authors of reviewed papers, whether accepted or rejected, were more likely to agree with this statement (P<.05) and were more satisfied with the letter explaining the editorial decision (P<.0001) than those whose papers were rejected without review. Among respondents whose manuscripts underwent full review, authors of accepted papers were more likely than those with rejected papers to submit to the journal in the future (P<.05) and were more satisfied with the decision letter (P<.05) and the reviews (P<.001). Editors’ rating of review quality was not correlated with authors’ overall satisfaction or satisfaction with reviews. Author characteristics did not affect these associations.

Conclusions

Contributor enthusiasm about peer review was modest. Satisfaction is greater when papers receive full review. For reviewed manuscripts, author satisfaction is correlated with acceptance but not with review quality.

1Department of Medicine, Division of Emergency Medicine, University of California, San Francisco, School of Medicine, Box 0208, L-138 San Francisco, CA 94143-0208, USA, e-mail: weber@itsa.ucsf.edu; 2Institute for Health Policy Studies, University of California, San Francisco School of Medicine, Department of Medicine, San Francisco, CA, USA; 3Department of Emergency Medicine, Baptist Medical Center, Menorah Medical Center, University of Missouri-Kansas City School of Medicine, Kansas City, MO, USA

Back to Top

Peer Review of Continuing Professional Development in Medical Journals: Desirable, Yes, But Feasible?

Erica Weir,1,2 John Hoey,1 Harvey Skinner,3 Dave Davis,3 Brian Gibson,3,4 and Ted Haines2

Traditionally, practicing physicians have continued their medical education by reading clinical journals. A new vision of continuing professional development (CPD) requires physicians to pursue other modalities: group learning, journal clubs, and self-assessment (Bennet et al. Acad Med. 2000;75:1167-72). The Internet offers a medium that supports these modalities and potentially threatens the traditional role of the medical journal. Yet, because the Internet is unregulated, the peer review process that has conventionally ensured high-quality CPD has been side-stepped. In an informal review of 53 online academic CPD sites in 1999, fewer than 10% had any peer review process (Peterson. Acad Med. 1999;74:750).

To bring journal-quality peer review to online CPD, the Canadian Medical Association Journal is experimenting with transforming review-type articles. We are developing a series of case-based CPD articles on environmental and occupational exposures. Each article illustrates use of a simple exposure history-taking tool and encourages the reader to change his or her practice by incorporating this tool. Web-based resources and online interactive exercises to reinforce the learning exercise are being developed and peer reviewed. The list of players includes authors, editors, peer reviewers, online moderators, educationalists, and members of the accrediting colleges. A workshop to finalize objectives and online exercises for 5 environmental modules is planned for September 2001. Work on server and software issues is ongoing.

The initiative and results to date are described, and frank discussion is offered of the decisions, obstacles, and lessons involved in developing the potential of an electronic journal to integrate peer review, editing, and publication with online accredited CPD.

1Canadian Medical Association Journal, Ottawa, Ontario, Canada; 2Department of Community Health, McMaster University, Health Sciences Centre, Room 2C8, 1200 Main St West, Hamilton, Ontario L8N 3Z5, Canada, e-mail: eweir@sympatico.ca; 3University of Toronto, Toronto, Ontario, Canada; 4Boston University School of Public Health, Boston, MA, USA

Back to Top

SUNDAY, SEPTEMBER 16

Authorship and Contributorship

Authorship in a Medical Journal From a Developing Country

Humberto Reyes, Marcela Jacard, and Viviana Herskovic

Objective

Revista Médica de Chile is a monthly journal that contains about 40% of the clinical and biomedical manuscripts published annually by Chilean authors. This study was conducted to evaluate (1) temporal trends in the number of authors per article and (2) authors’ compliance with the International Committee of Medical Journal Editors (ICMJE) definition of authorship.

Design

A retrospective analysis of the number of authors per article between 1969 and 2000 and a prospective survey applying a pre-established contribution checklist to every author of manuscripts submitted in the year 2000. Full authors were those who declared having contributed to the design of the study, analysis of the data, and drafting and approval of the final manuscript. Unwarranted authors were those who participated only in data collection, only in diagnostic/therapeutic procedures, or only in statistical analysis. Other combinations qualified as partial authors.

Results

For research articles, the mean (±SD) number of authors per article increased in the last decade: 1969, 3.9 (±1.6); 1979, 4.5 (±1.9); 1984, 3.8 (±1.7); 1989, 4.9 (±2.0); 1994, 5.7 (±2.5)*; 1999, 5.2 (±2.6)*; 2000, 5.4 (±2.2)* (*P<.05 compared to 1969, 1979, and 1984). In contrast, the number of authors per article remained stable in case reports (4.1 [±1.9]) and in reviews, public health reports, and medical education articles (3.3 [±1.8]). Of the 1024 authors surveyed, the response rate was 90%. Twenty-seven percent of respondents qualified as full authors, 63% as partial authors, and 10% as unwarranted authors.

Conclusions

Creditable authorship needs to improve in Revista Médica de Chile, mainly in research articles, where low compliance with the definition criteria of authorship coincided with an increase in the number of authors. Continuing education and a critical attitude on the part of editors and universities should be promoted.

Revista Médica de Chile, Bernarda Morin 488, Providencia, Casilla 168, Correo 55, Santiago 9, Chile, e-mail: hreyes@machi.med.uchile.cl; and University of Chile School of Medicine, Santiago, Chile

Back to Top

What Makes an Author? The Views of Authors, Editors, Reviewers, and Readers

Susan van Rooyen,1 Sandra Goldbeck-Wood,1 and Fiona Godlee2

Objectives

Previous research has found discrepancies between authors’ declared contributions to their research and editors’ guidelines on who should qualify for authorship (the guidelines of the International Committee of Medical Journal Editors [ICMJE], known as the Vancouver group). The reasons for this discrepancy remain unclear. Our objectives were to elicit the views of authors, editors, reviewers, and readers as to what should constitute authorship, using a taxonomy of authors’ declared contributions, and to establish whether the ICMJE guidelines are underpinned by the views of the scientific community.

Design

Questionnaire survey of 100 BMJ authors, 102 BMJ reviewers, 303 BMJ subscribers, and editors of the BMJ (12) and JAMA (22). The survey asked respondents to rate the listed contributions by their importance as a qualification for authorship and to indicate which contributions were, on their own, sufficient to justify authorship.

Results

The response rates were 35% for readers, and 78% for authors, reviewers, and editors combined (total responses, 260). The roles most frequently deemed essential were project initiation, study design, interpretation, writing/editing, and chief investigator. More than half the respondents considered that these same 5 contributions were each sufficient on their own to qualify for authorship. Only 10 (4%) rated as essential all 7 contributions deemed by us to be the essential declared contributions for an ideal paper.

Conclusions

This study confirms that the ICMJE guidelines are not underpinned by the views of the scientific community. It supports calls to abandon the present prescriptive approach in favor of a more descriptive system, using the declared contributions of authors as a basis.

1BMJ, BMA House, Tavistock Sq, London WC1H 9JR, UK, e-mail: svanrooyen@kscable.com; 2BioMed Central, London, UK

Back to Top

Communicating to Readers

Toward Improving the Study of the Usability of Electronic Forms of Scientific Information

Philippa Jane Benson

A tremendous amount of work has been done, particularly in the computer industry, on the readability and usability of information in hard copy and electronic forms. This poster reviews the most current work by interdisciplinary teams of engineers, psychologists, sociologists, and designers investigating where and how professionals read. Studies using laboratory experimentation, ethnographic methods, and field study methods are included, with a particular focus on studies of the advantages and disadvantages of different electronic reading devices in hospitals and on studies of technologies used for data input. Recommendations and additional resources are provided on how to employ simple usability testing, task analysis, and studies of reading and writing processes to improve studies of the effectiveness of electronic forms of scientific information. The purpose of the presentation is to foster discussion on how electronic information can be designed to provide users/readers of scientific information (who have very specific reading and writing tasks) not just with the information they want, but also in the form they want, when they want it.

Center for Applied Biodiversity Science, Conservation International, 1919 M St NW, Suite 600, Washington, DC 20036, USA, e-mail: p.benson@conservation.org

Back to Top

The Importance of Letters to the Editor in the Nederlands Tijdschrift voor Geneeskunde

Shalindra Mahesh, Martin Kabos, Henk C Walvoort, and A John P M Overbeke

Objective

To determine whether serious criticism is formulated, or important mistakes in the original articles are pointed out, in the correspondence section of the Nederlands Tijdschrift voor Geneeskunde (Dutch Journal of Medicine, NTvG).

Design

Descriptive, retrospective bibliometric study of published correspondence letters (n=196; no letters were rejected) from July 5, 1997, to June 27, 1998. Letters were scored by the authors on 10 nonvalidated items and categorized as agree; do not agree (ie, letters criticizing methods, results, or interpretation, or containing unmotivated criticism); or political reaction. Twenty-two letters from the period October-December 1998 were judged separately because the peer review reports of the original articles were still available.

Results

In 115 letters (58.7%), the writers expressed agreement with the original article. Almost 40% (77) of the 196 letters contained scientific discussion on the subject in question. Most reactions concerned “Original Articles” (25%) and “Clinical Lessons” (19.4%). In 8 letters (4.1%), a mistake was revealed; 6 of these reactions led to a published correction (to 3 articles). There was no criticism that would have led to rejection of the article involved had it been known before publication. The letters about articles of which the peer reviews were still available contained no criticism of points the peer reviewers had missed.

Conclusions

Of the correspondence letters of the NTvG, 4.1% contained scientific criticism that could have led to changes in the article if it had been known before publication.

Nederlands Tijdschrift voor Geneeskunde, PO Box 75971, 1070 AZ, Amsterdam, the Netherlands, e-mail: overbeke@ntvg.nl

Back to Top

What Is Newsworthy? A Comparison of Reporting of Medical Research in British Tabloid and Broadsheet Newspapers

Christopher Bartlett, Jonathan Sterne, and Matthias Egger

Objective

To assess the reporting of medical research in 2 British newspapers: a sensationalist tabloid and a serious broadsheet.

Design

Characteristics of all research articles (excluding editorials, discussions, letters) published in the Lancet and the BMJ were recorded, and the Friday and Saturday editions of the Sun (tabloid) and the Times (broadsheet) were searched for reports relating to these articles. Data extracted for each article included study design, population, and topic area. Quality of newspaper reports was determined by recording whether they were full reports or filler items and whether they gave basic, accurate data. The study is ongoing, with 2 years’ coverage envisaged. For this preliminary analysis we calculated relative risks (RR) of reporting, with 95% confidence intervals (CIs), comparing the medical journals with each of the 2 newspapers.
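The relative-risk calculation described here can be sketched as follows. The counts are invented for illustration (they are not the study's data), and the confidence interval uses the standard log-RR approximation:

```python
# Minimal sketch of a relative risk (RR) with a 95% CI computed on the
# log scale. Counts are illustrative only, not the study's data.
import math

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """RR of an event in group 1 (a/n1) vs group 2 (b/n2), with CI."""
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for independent binomial samples
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# eg, 2 of 40 articles of one study design reported, vs 22 of 177 others
rr, lower, upper = relative_risk_ci(2, 40, 22, 177)
print(f"RR={rr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

An RR below 1 with a CI excluding 1 would indicate significant underreporting of that design relative to the rest.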

Results

To date, 217 full articles published in the Lancet and BMJ have been assessed (for August-December 2000) and 24 corresponding newspaper items were identified. The reporting rate in the newspapers was low, 1 per 11 articles. Randomized trials and systematic reviews were underreported in both newspapers (Times, RR=0.21 [95% CI 0.03-1.40]; Sun, RR=0.75 [95% CI 0.28-2.0]). Observational studies (cohort, case-control, and cross-sectional) were overreported in the Times (1.80 [1.13-2.88]), but less so in the Sun (1.33 [0.68-2.61]). Both newspapers, but particularly the Sun, overreported on women’s health (Sun, 3.81 [2.03-7.17]; Times, 1.62 [0.57-4.59]) and on sexual and reproductive health (Sun, 2.96 [1.03-8.47]; Times, 1.67 [0.44-6.38]). Both newspapers largely ignored research specific to men and the elderly. Reporting quality in the 2 newspapers was similar: most reports were full items and provided basic numerical data.

Conclusions

Reporting of medical research is highly selective and clearly biased in both broadsheet and tabloid newspapers. Interestingly, these preliminary results suggest little difference in the quality of reporting.

MRC Health Services Research Collaboration, Department of Social Medicine, Canynge Hall, Whiteladies Rd, University of Bristol, Bristol BS8 2PR, UK, e-mail: m.egger@bristol.ac.uk

Back to Top

Ethical Issues

Guidelines for Good Publication Practice: The COPE Experience

Richard Horton

Objective

The United Kingdom’s Committee on Publication Ethics (COPE) was formed in 1997. It aims to provide a forum for editors to seek advice about allegations of misconduct. Early experience of COPE indicated that there was an urgent need for more formal guidance in matters relating to suspected breaches of research and publication ethics, and COPE set out to produce guidelines for good publication practice.

Design

A conference was held in April 1999 to discuss the creation of guidelines. Participation was inclusive: in addition to COPE members, the meeting was open, with invitations sent to the UK medical licensing authority (the General Medical Council) and the royal colleges. Eighty people took part, and draft guidelines written by COPE members were tabled for discussion.

Results

The final guidelines included sections on study design and ethical approval, data analysis, authorship, conflicts of interest, peer review, redundant publication, plagiarism, duties of editors, media relations, advertising, and dealing with misconduct. The guidelines were published in the 1999 COPE report. Since first publication these guidelines have been republished and endorsed by 28 journals. Two revisions, on ghost authorship and on contacting authors regarding alleged misconduct, were proposed in 2000.

Conclusions

Self-organization by editors to deal with cases of alleged scientific misconduct has led to guidelines that aim to provide a more consistent basis for decision making. In drawing up these guidelines and in securing endorsement for them, we found a large degree of unmet need and enthusiasm among editorial colleagues. A secondary effect was to stimulate statutory national bodies to take misconduct more seriously.

Lancet, 42 Bedford Sq, London WC1B 3SL, UK, e-mail: r.horton@elsevier.co.uk

Back to Top

Informed Consent in Clinical Trials: Survey of Five Chinese Medical Journals

Wang Mouyue

Objective

Informed consent is an ethical problem in biomedical research and biomedical publishing in China. This survey was conducted to determine the frequency of reporting of informed consent in clinical trials published in Chinese medical journals and to remind Chinese biomedical professionals and editors to attach importance to this issue.

Design

Clinical trials published in the National Medical Journal of China (NMJC), Chinese Journal of Internal Medicine (CJIM), Chinese Journal of Surgery (CJS), Chinese Journal of Obstetrics and Gynecology (CJOG), and Chinese Journal of Pediatrics (CJP) from 1996 to 1999 were investigated. All the trials surveyed were published in full text as original articles. These journals were selected because they are leading, representative medical journals in China, all sponsored by the Chinese Medical Association.

Results

The frequencies of reporting of informed consent in NMJC, CJIM, CJS, CJOG, and CJP were 5.1% (7/138), 3.6% (7/195), 0.8% (1/121), 4.7% (11/232), and 2.5% (5/198), respectively. The overall frequency was 3.5% (31/884). None of these journals’ notices to contributors required authors to meet ethical requirements in their research. No approval from any ethics committee was reported in the clinical trials surveyed.

Conclusions

The rates of reporting informed consent in these journals are very low, suggesting that even among top Chinese medical journals, closer attention is needed to the conduct of clinical research and the reporting of its ethical aspects. China should establish a mechanism for ethical oversight of its biomedical research and publication as soon as possible.

Editorial Department of Chinese Journal of Tuberculosis and Respiratory Diseases, Chinese Medical Association, 42 Dongsi Xidajie, Beijing 100710, China, e-mail: cmawmy@263.net

Back to Top

Publication Bias

Effects of Reviewers’ Gender on Assessments of a Gender-Related Standardized Manuscript

Addeane S Caelleigh,1 Mohammedreza Hojat,2 Ann Steinecke,1 and Joseph S Gonnella2

Objective

To investigate the effects of reviewers’ gender on assessment of a gender-related study.

Design

Participants were 100 reviewers (50 women), randomly selected from Academic Medicine’s reviewer pool. Two versions of an empirical study of medical students who had been asked to forecast their incomes after graduation were prepared, and each version was randomly assigned to 25 men and 25 women. The manuscript’s data were real, and women students forecast lower incomes than did men students. The two versions of the manuscript were identical except for 1 sentence in the abstract and 2 sentences in the conclusion. Version 1 attributed the lower forecasted income of women to intrinsic gender factors (eg, lower financial incentives, different negotiation strategies). Version 2 attributed the difference to extrinsic social learning factors (eg, socialization bias, gender discrimination). The evaluation form, designed for this study, consisted of 16 Likert-type items (14 items assessed aspects of the manuscript, 1 asked for an overall recommendation for publication, and 1 asked about revisions). Multivariate analysis of variance was used for statistical analyses. The reviewer pool had been informed of the possibility that a study would be conducted, and a summary of the results was sent to all reviewers after the data were collected.

Results

Seventy-one reviews were received: 37 reviews of version 1 (18 men, 19 women) and 34 reviews of version 2 (16 men, 18 women). Response rates for men and women did not differ significantly, and no significant differences were found in the ratings given by men and women reviewers to the 2 versions of the manuscript.

Conclusion

There was no evidence that reviewers’ gender influenced their assessments of a gender-related study on a controversial gender issue. Studies using standardized manuscripts can help editors to monitor bias in the peer review processes of scholarly journals.

1Academic Medicine, Association of American Medical Colleges, 2450 N St NW, Washington, DC 20037, USA, e-mail: asteinecke@aamc.org; 2Center for Research in Medical Education and Health Care, Jefferson Medical College of Thomas Jefferson University, Philadelphia, PA, USA

Back to Top

Influence of Impact Factors on Scientific Publication

Michael Hart and Eileen S Healy

Objective

The Journal of Neuropathology and Experimental Neurology (JNEN) publishes original scientific articles and 1 comprehensive review article per issue, covering human and experimental nervous system diseases. Review papers summarize subject matter suggested by editorial board members and readers on neuroscience topics of time-honored interest, important neurological diseases about which there is new information, and topics of current heightened interest (hot topics). Overall, JNEN review articles achieve significantly higher impact factors than original articles (10.5 vs 7.7), based largely on numbers of citations. This study addressed the hypothesis that subject matter positively or negatively influences impact factors for review papers.

Design

Fifty review articles published between 1992 and 1998 were separated into 6 categories of subject matter, and mean impact factors were compared across the groups. The articles in all categories were distributed throughout the 7 sampled years, diminishing any overall influence of time that might result from concentration of a category’s articles toward one end of the period.

Results

The 6 categories of subject matter, with the mean number of citations per category over the 7 years, were: degenerative disease (74), cellular reactions (66), infection/immunity (38), developmental/genetics (26), peripheral nerve/muscle (25), and vascular/metabolic (19). These results indicate a significant difference in citations between the first 2 categories (high-impact or hot topics) and the last 4.

Conclusions

Scientific journals are very conscious of stature as measured by impact factors. Thus, these results underscore a possibly unhealthy propensity for journals to favor publication of articles that generate higher impact factors. If this is the case, it is an undesirable product of the Science Citation Index rankings and could result in unintended skewing of scientific direction toward publication of original and review articles believed to generate greater numbers of citations and away from scientifically sound research on less popular topics.

Journal of Neuropathology and Experimental Neurology, Department of Pathology & Laboratory Medicine, University of Wisconsin Medical School, 1300 University Ave, 509 SMI, Madison, WI 53706-1532, USA, e-mail: jnen@pathology.wisc.edu

Back to Top

Evidence of Journal Bias Against Complementary and Alternative Medicine

David Moher,1,3 Terry Klassen,2 Ba’ Pham,1 and Margaret Lawson3

Objective

To evaluate the influence of language restrictions on the results of systematic reviews evaluating the effectiveness of conventional (CM) and complementary and alternative (CAM) medicine interventions.

Design

Forty-two systematic reviews published between 1989 and 1999 were identified through 4 databases. Eligible systematic reviews had to include at least 1 randomized trial published in English and at least 1 published in a language other than English, and had to report binary outcomes. Each trial report was assessed for quality using a validated instrument. Conditional logistic regression with LogXact was used to explore language effects on the systematic review results.

Results

A total of 684 trials (549 CM, 135 CAM) were included. The median proportion of trials published in languages other than English was 13% (range 3%-67%) in CM reviews and 37% (20%-77%) in CAM reviews. Quality of reporting was similar between English-language and other-language trials for both CM (median 2, range 0-5) and CAM (3, 0-5). Allocation concealment was adequately reported for 14% of CM and 30% of CAM trials. On average, English-language and other-language trials in CM reported similar estimates of intervention effect (ratio of odds ratios [ROR] = 1.09; 95% confidence interval [CI]: 0.97-1.22). In contrast, English-language trial reports in CAM, compared with other-language trial reports, yielded intervention effect estimates that were significantly smaller, by 37% (ROR = 1.37; 95% CI: 1.16-1.61).
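A ratio of odds ratios (ROR) of the kind reported above can be illustrated with a minimal sketch. The 2×2 counts are invented (not the study's data), and the direction convention (other-language over English-language odds ratio) is an assumption for illustration only:

```python
# Minimal sketch of a ratio of odds ratios (ROR) comparing trial results
# by publication language. All counts are hypothetical.
def odds_ratio(events_a, n_a, events_b, n_b):
    """Odds ratio from two event/total pairs (treatment vs control)."""
    a, b = events_a, n_a - events_a
    c, d = events_b, n_b - events_b
    return (a * d) / (b * c)

# Hypothetical pooled counts within each language stratum
or_english = odds_ratio(30, 100, 20, 100)  # trials published in English
or_other = odds_ratio(40, 100, 20, 100)    # trials in other languages
ror = or_other / or_english  # >1: other-language trials show larger effects
print(round(ror, 2))  # prints 1.56
```

In the study itself the ROR was estimated by conditional logistic regression rather than this simple pooling, so the sketch shows only the quantity being compared, not the estimation method.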

Conclusion

Trials published in English are likely to report positive findings of CM interventions but are significantly less likely to do so for CAM interventions.

1Thomas C Chalmers Centre for Systematic Reviews, Children’s Hospital of Eastern Ontario Research Institute, 401 Smyth Rd, Room R-226, Ottawa, Ontario K1H 8L1, Canada, e-mail: dmoher@uottawa.ca; 2Department of Pediatrics, University of Alberta, Edmonton, Alberta, Canada; 3Department of Pediatrics, University of Ottawa, Ottawa, Ontario, Canada

Back to Top

Association of Industry Funding With Manuscript Quality Indicators

Carin M Olson,1,2 Drummond Rennie,1,3 Deborah Cook,1,4 Mary Mickel,5 Kay Dickersin,6 Annette Flanagin,1 Qi Zhu,6 Jennifer Reiling,1 and Brian Pace7

Objective

A previous investigation found that published randomized trials funded by a single pharmaceutical company were of lower quality than trials without such funding. This study examined whether manuscripts of controlled trials that reported being funded by pharmaceutical companies or other industry are of similar quality to those without industry funding.

Design

Prospective cohort study of 745 manuscripts reporting controlled trials submitted over 42 months to JAMA, a weekly general medical journal. Trial quality was assessed by 2 investigators.

Results

Among the 745 manuscripts, 298 (40%) reported industry funding. Of the industry-funded manuscripts, 117 (39%) reported having funding from other sources as well. In univariate analysis, trials with industry funding were associated with markers of higher quality than those without industry funding: having a sample size >100 (71% vs 58%, P<.001), being multicenter (62% vs 36%, P<.001), reporting withdrawals (90% vs 83%, P=.006), analyzing results by treatment assignment (20% vs 13%, P=.017), being double-blinded (65% vs 30%, P<.001), having a sample size calculation (45% vs 34%, P=.002), and having a good global quality rating (29% vs 15%, P<.001). In a multivariate logistic regression analysis, only being multicenter and double-blinded were independently associated with industry funding (P<.001 for each). A logistic regression model indicated that only quality of the study (P<.001) and multicenter status (P=.004) were significantly associated with publication status. When industry funding was added to this model, it was nonsignificant (P=.98).

Conclusions

Among manuscripts reporting controlled trials submitted to JAMA, those that had industry funding were associated with higher quality than those without industry funding. However, industry funding is not independently associated with publication status.

1JAMA, Chicago, IL, USA; 2University of Washington, Seattle, WA, USA; 3University of California, San Francisco, San Francisco, CA, USA: 4McMaster University, Hamilton, Ontario, Canada; 5Axio Research Corporation, Seattle, WA, USA; 6Brown University, Providence, RI, USA; 7News and Information, American Medical Association, 515 N State St, Chicago, IL 60610, USA, e-mail: brian_pace@ama-assn.org

Back to Top

Quality Issues and Standards

Relationship Between Manner of Presentation of Illustrations During the Peer Review Process and the Number of Evaluative Responses

Judith A McKay

Objective

Illustrations are an important complement to the text of a scientific article, but are the manuscript element least likely to receive careful consideration during the peer review process. This study was conducted to evaluate whether a change in the method of presentation of illustrations would elicit more comments from reviewers.

Design

Although the reviewers of the Journal of the American Academy of Orthopaedic Surgeons, a generously illustrated, peer-reviewed orthopedic surgery journal, had always been asked to comment specifically on the illustrations, the editorial board and staff found that such comments were relatively infrequent. To call more attention to the figures, some changes were made in preparing manuscripts for review. Each figure was scanned and placed on a separate page, accompanied by its legend and an evaluation box containing questions regarding accuracy, appropriateness, and reproducibility. Rather than placing all the illustrations at the end of the manuscript in the traditional manner, these presentations were interleaved with the manuscript pages to correspond with the first text citation of each illustration. Forty manuscripts published before the inauguration of this new system and 33 manuscripts prepared afterward were evaluated. For each manuscript, the number of illustrations and the number of comments made by the 3 reviewers were recorded.

Results

The 40 manuscripts from the early phase contained 395 illustrations. Only 75 comments were made by the reviewers, for an overall rate of comments per illustration of 19%. The 33 manuscripts from the later period contained 336 illustrations. A total of 230 comments were elicited, for a rate of 68%, an increase of 49 percentage points in the rate of comments per illustration.

Conclusion

The new method of presentation appears to better focus reviewers’ attention on the illustrations.

American Academy of Orthopaedic Surgeons, 6300 N River Rd, Suite 301, Rosemont, IL 60018, USA, e-mail: mckay@aaos.org

Back to Top

What Is the Quality of the Economic Information Provided in Promotional Material for Family Practitioners? The Case of Proton Pump Inhibitors in the United Kingdom

Vittorio Demicheli1 and Tom Jefferson2

Objective

To assess the quality of the economic information on proton pump inhibitor drugs contained in promotional material for British family practitioners.

Design

Review of promotional material (glossy cards, folders, and charts, so-called leave-pieces) for 5 proton pump inhibitors (esomeprazole, lansoprazole, omeprazole, pantoprazole, rabeprazole) produced by 4 pharmaceutical companies. Quality assessment using a short instrument containing the following questions: (1) Were comparisons presented and if so were they appropriate? (2) Were costs and outcomes presented? (3) Were the assumptions underlying the comparisons clear?

Results

Five sets of material (17 items) were assessed. All presented cost comparisons with competitor products. Two sets presented comparisons with different classes of compounds to treat gastroduodenal disturbances. All comparisons presented cost-per-period of acute or maintenance treatment; none presented cost-per-outcome(s). In all cases, the source of the information was either a national formulary or sources such as the “Doctors’ Independent Network” and publications that are difficult to access for family practitioners.

Conclusions

As the proton pump inhibitors each have a different spectrum of licensed indications, cost comparisons per treatment period are at best meaningless and at worst misleading. Given physicians’ limited time, lack of training in health economics, and the high visual impact of the promotional material, it is doubtful whether family practitioners would be able to critically assess the information presented. A higher standard of control and peer review of the economic information in such promotional material is recommended in the United Kingdom. The instrument used in this study could be introduced as a rapid quality test.

1Azienda Sanitaria Locale 20, Alessandria, Italy; 2Health Reviews Ltd, 35 Minehurst Road, Mytchett, Camberley, Surrey GU16 6JP, UK, e-mail: toj1@aol.com

Research Design of Published Diagnostic Experiment Papers in Laboratory Medicine

Yan Zang1 and Qin Xiaoguang2

Objective

To observe problems in research design in some published diagnostic experiment articles so as to improve the quality of manuscripts.

Design

According to methodology of clinical epidemiology, 111 diagnostic experiment articles published in the Chinese Journal of Laboratory Medicine from 1996 to 2000 were critically reviewed. The following aspects were included: (1) diagnostic standards; (2) blinding comparison; (3) amount, composition, and sources of samples; (4) accuracy indices (sensitivity and specificity); and (5) description of new reference values.

Results

Of the 111 articles, 41.5% described no diagnostic criteria or described only non–gold-standard clinical indexes. Only 3.6% of articles described use of a blinding method; 76.5% did not describe the conditions for sample selection. Of 49 articles with quantitative data as reference values in comparison groups, 38 had fewer than 100 cases in the comparison groups. Of all 111 articles, 18.9% had no comparison groups, 33.3% did not state the source of the samples, and 40.0% did not mention the age and gender distribution. Articles without specificity and sensitivity indices accounted for 55.0%. Of the 49 articles with quantitative data as reference values in comparison groups, 44 used the mean and standard deviation to describe the ranges of the reference values; however, whether the reference values followed a normal distribution could not be judged in 5 of the 44 articles, and the cutoff point of the reference values could not be judged in 10.

Conclusion

In reviewing manuscripts that involve diagnostic experiments, the above findings should be considered fundamental requirements for acceptance from an editorial point of view.

1Chinese Journal of Laboratory Medicine, 42 Dongsi Xidajie, 100710 Beijing, China; 2General Hospital of Coalmining, Beijing, China

The Use of Dedicated Methods/Statistical Reviewers for Peer Review: A Content Analysis of Comments to Authors Made by Methods and Regular Reviewers

Frank C Day,1 David L Schriger,1 Christopher Todd,1 and Robert L Wears2

Objective

In 1997, the Annals of Emergency Medicine initiated a protocol by which every original research paper, in addition to regular review, is concurrently evaluated by 1 of 2 methods/statistical reviewers. Comments made by the methods and regular peer reviewers were characterized and contrasted.

Design

After pilot testing, interrater reliability assessment, and revision, we finalized a 99-item taxonomy of reviewer comments organized in 8 categories. Two individuals uninvolved in the writing of reviews classified each comment from 125 randomly selected methods reviews from 1999. For 30 of these manuscripts (15 for each methods reviewer) raters also scored all (range 2-5) regular reviews.

Results

Sixty-five reviews by methodologist A, 60 by methodologist B, and 68 by regular reviewers were analyzed. The median number of comments per review was similar for A, B, and the regular reviewers (9, 9.5, and 10, respectively). Comments by methodologist A most frequently concerned the presentation of results (33% of A’s comments) and the presentation of methods (17%). Methodologist B commented most frequently on the presentation of results (28%) and statistical methods (16%). Statistical methods accounted for 6% of comments by methodologist A and 3% of comments by regular reviewers. Regular reviewers most frequently made comments not pertaining to methods or statistics (45%) and comments on the presentation of results (18%). Of note, methods/statistical comments made by regular reviewers often directly contradicted those of the methodologist.

Conclusions

The distributions of comments made by the 2 methods reviewers were similar, though reviewer A emphasized presentation and the models underlying the research while reviewer B stressed statistical issues. The regular reviewers (most of whom were unaware that a dedicated methods reviewer would be reviewing the paper) paid much less attention to methods issues, and their methodology comments often contradicted those made by the methodologists. More attention is paid to methodology when dedicated methods/statistical reviewers are used.

1UCLA Emergency Medicine Center, UCLA School of Medicine, 924 Westwood Blvd #300, Los Angeles, CA 90024, USA, e-mail: fday@ucla.edu; 2Department of Emergency Medicine, University of Florida, Jacksonville, FL, USA

Reference Accuracy in Peer-Reviewed Journals: A Systematic Review

Elizabeth Wager,1 Philippa Middleton,2 and the Peer Review and Technical Editing Systematic Review (PIRATES) Group

Other members of the PIRATES group are Phil Alderson, Frank Davidoff, and Tom Jefferson

Objective

To investigate the accuracy of references in peer-reviewed biomedical journals.

Design

Systematic review following Cochrane Collaboration methodology. Publications were identified from 14 electronic databases, references in retrieved articles, and hand-searching relevant journals from earliest available entry/volume until July 2000. A broad search strategy was adopted since articles were identified from a larger systematic review on other aspects of editing. Articles relating to reference accuracy in peer-reviewed biomedical journals were included.

Results

Forty articles on this subject were identified, 30 of which presented data gathered using sufficiently similar methods to permit comparisons. They surveyed more than 12 000 references in 64 journals. The majority were descriptive surveys of published articles giving no information about the effects of peer review or editorial processes. More than 9000 references were checked for inaccuracies in bibliographic citations and the error rate ranged from 4% to 67% (median 36%). Nearly 2000 references were checked for quotation errors (inaccurate or misleading representations of the originally cited reference) with error rates ranging from 0% to 47% (median 20%). The total proportion of inaccurate references showed no discernible trend with time. However, some individual journals appeared to show improvements over time when results from different studies were compared. One study showed a significant improvement after the Canadian Journal of Anaesthesia required authors to submit copies of cited references (inaccurate references fell from 48% to 22%). There was some evidence that journals employing in-house checking had lower error rates (eg, New England Journal of Medicine 4%, Annals of Internal Medicine 6%).

Conclusions

These studies show the need for improvement in the accuracy of references in many peer-reviewed journals. Studies of interventions were disappointingly scarce but employing in-house checking or asking authors to provide copies of references may improve accuracy. Further descriptive surveys are not warranted.

1GlaxoSmithKline, International Medical Publications, Greenford Road, Greenford, Middlesex, UB6 0HE, UK, e-mail: ew33645@gsk.com; 2Royal Australasian College of Surgeons, North Adelaide, Australia

Effects of Published Errata on Subsequent Publications

MaryEllen Sievert,1,2 John M Budd,1 Gabriel Peterson,1,2 and Kui Chun Su1,2

Objective

This study explores the role of the published erratum in biomedical communication. It examines the characteristics of errata and the citation patterns of both the erratum and the original article.

Design

MEDLINE was searched for the publication type published erratum. As of June 1, 2001, there were 890 published errata, 94 of which appeared in the Abridged Index Medicus (AIM) subset of MEDLINE. These 94 items form the data for this study. The first step was to locate the MEDLINE record for the earlier publication, and the characteristics of each erratum were examined. Then both the erratum and the original publication were searched in the Web of Science to determine whether citations to the original record also cited the later erratum.

Results

Of the 57 errata examined to date, most were authored by some or all of the authors of the original article; for 14 errata, however, the author of the erratum was not an author of the original article. Nineteen of the original articles had never been cited. Of the remaining articles, 25 had citations to the erratum. For 7 errata, the erratum was cited but the original publication was not. At times there were even errata to errata, so the relation of an erratum to the original article was difficult to ascertain.

Conclusions

Most published errata are authored by 1 or more of the authors of the original article, and the erratum appears fairly soon after the original. Generally, later authors do not recognize the problem articles, and the original article is cited as valid. At times, however, only the erratum is cited. In most cases, therefore, subsequent authors are unaware that the original article contained an error. This lack of awareness is compounded by the fact that the erratum and the original article are not always easily recognized as being related.

1School of Information Science and Learning Technologies College of Education, 324 Clark Hall, University of Missouri-Columbia, Columbia, MO 65211, USA, e-mail: sievertm@health.missouri.edu; 2Department of Health Management and Informatics, School of Medicine, University of Missouri-Columbia, Columbia, MO, USA

Study of Manuscript Submission and Publishing

Medical Journal Publishing: One Culture or Several?

Tim Albert1 and Alex Williamson2

Objective

Medical publishing uses the skills of people from a wide range of backgrounds. This study set out to examine their attitudes and assess the degree of homogeneity.

Design

A questionnaire was designed to (1) explore attitudes to 14 statements about journal publishing, (2) ask recipients to rank 5 statements relating to various roles in medical publishing, and (3) elicit standard demographic data. This questionnaire was sent to 50 amateur editors (medically qualified, part-time), 32 technical editors, and 26 editorial assistants working for BMJ Publishing Group. It was also sent to 50 reviewers chosen at random from the BMJ‘s database and to 34 professional editors (medically qualified, full-time) working for 5 general journals.

Results

The overall response rate was 70%. The most striking finding was the considerable unanimity of attitudes. Strongly held views were that there is too much emphasis on impact factors, that the review process does not guarantee quality, and that ethical values are higher for medical journals than for mainstream magazines and journals. There were contradictions: reviewers thought journals were more interesting than did other groups (t test, 99% confidence); amateur editors thought it less important to keep to production schedules (t test, 95% confidence); and technical editors thought it more important to make the contents reader-friendly (Wilcoxon rank sum test, 99% confidence).

Conclusions

This study found that, on the whole, there was a homogeneous culture, though there were some significant differences. This has important implications for managers and trainers.

1Tim Albert Training, Paper Mews Court, 284 High St, Dorking, Surrey RH4 1QT, England, e-mail: tatraining@compuserve.com; 2BMJ Publishing Group, London, UK

Manuscript Submissions and US Contributions to Medical Journals in an Era of Health System Reform: An Analysis of Trends, 1994-1998

Michael Berkwits,1 Warren B Bilker,2 and Frank Davidoff3

Objective

To determine if manuscript submissions by US contributors to medical journals declined between 1994 and 1998, when changes in health system financing created financial pressures within academic health centers that appeared to threaten academic scholarship.

Design

Information on total, research, and nonresearch manuscript submissions and on contributors’ country of origin was solicited from 55 major medical journals; each journal used its own criteria to define submission categories and contributor nationality. Trends by submission category and contributor nationality, and comparisons between journal categories (general medical vs specialty, US vs international), were analyzed using generalized estimating equation regression for absolute and percent change.

Results

Forty-eight of the 55 journals (87%; 6 US general medical, 22 US specialty, 4 international general medical, and 16 international specialty) provided data. Results are reported as absolute change in manuscripts per year. Total contributions rose significantly at both US journals (75.7 manuscripts, 95% confidence interval [CI] 15.8-135.6, P=.01) and international journals (57.6 manuscripts [16.2-98.9], P=.006), mostly due to research submissions at specialty journals (88.3 manuscripts [25.4-151.2], P=.006 US, and 31 manuscripts [11.6-50.5], P=.002 international; P=.03 for the difference with general medical journals). Nonresearch submissions were static. Submissions did not differ between US and international journals. Contributions from US authors did not change (11.7 manuscripts [-21.2 to 44.6], P>.20), but non-US contributions increased significantly (73.1 manuscripts [24.8-121.4], P=.003); differences between the 2 were significant at US journals (P=.03) and at all journals (P=.02).

Conclusions

US author contributions to major medical journals held steady during a period of national health system restructuring. Non-US contributions increased significantly over the same period, however, a trend that may reflect a relative stasis of US academic output.

1University of Pennsylvania, Division of General Internal Medicine, 423 Guardian Dr, 1229 Blockley Hall, Philadelphia, PA 19104, USA, e-mail: berkwits@mail.med.upenn.edu; 2Center for Clinical Epidemiology and Biostatistics at the University of Pennsylvania, Philadelphia, PA, USA; 3Annals of Internal Medicine, Philadelphia, PA, USA

The Influence of War on Publishing in Peer-Reviewed Journals

Rajko Igic

Objective

To assess the influence of civil war during the recent disintegration of the former Yugoslavia on scientific output, as measured by changes in numbers of articles published in peer-reviewed journals.

Design

In this study, the articles published in journals indexed in the Science Citation Index (SCI) were retrieved for the former Yugoslav republics. According to the census of 1991, the republics’ populations were as follows: Serbia 9.7 million inhabitants, Croatia 4.7, Bosnia and Herzegovina (B&H) 4.3, Macedonia 2.0, Slovenia 1.9, and Montenegro 0.6. The annual numbers of articles from each were determined from 1985 to 2000. This period includes several prewar years, 5 years of civil war from 1991 to 1995, and the NATO military interventions in B&H (1995) and F. R. Yugoslavia (1999), which includes Serbia and Montenegro.

Results

In the late 1980s, Serbia produced more than 900 scientific articles per year and was well ahead, with more than twice as many publications as Slovenia; the number of publications from Croatia fell between those of Serbia and Slovenia. In the prewar period, the remaining republics had a relatively small scientific presence. The output of B&H, 50 articles in 1991, decreased sharply during the war and continued to decrease; during the postwar period only 18 to 27 papers per year were published. In 1995, the output from Serbia dropped 33% in comparison with 1991. Slovenia produced more publications that year while Croatia was stagnant, and the 3 most productive states had similar outputs. In 1998, Serbia produced 1543 publications, Slovenia 1116, Croatia 1103, Macedonia 100, B&H 25, and Montenegro 12. The number of articles from Serbia dropped by 10.2% in 1999 and 27.9% in 2000. Over the same period, in comparison with 1998, the number of publications increased in Croatia (37.3% and 12.5%), Slovenia (10.9% and 52.8%), Macedonia (5% and 6%), and Montenegro (75% and 66%).

Conclusions

Scientific output was hindered by the war in Serbia and B&H, and the war also caused a short stagnation in publishing from Croatia. Serbia improved its scientific production after an initial drop during UN sanctions but did not enjoy the same progress as Slovenia or Croatia, states that were less damaged by the war. The recent decline of scientific production in Serbia seems not to be solely the outcome of the devastating effects of civil war and NATO bombing on the economy and other functions of society: the country was relatively isolated scientifically from Western countries, and the number of internationally coauthored papers with European and US scientists dropped significantly. As shown earlier, bibliometric indicators provide a sensitive mirror of the real world and reveal much of the local and international political situation, including the devastating influences of war.

Department of Anesthesiology and Pain Management, Cook County Hospital, 637 S Wood St, Chicago, IL 60612, USA, e-mail: rigic@hektoen.org

Where Are the High-Quality, Clinically Relevant Studies Published?

K Ann McKibbon, Nancy C Wilczynski, and R Brian Haynes

Objective

To determine which journals publish high-quality, clinically relevant studies from the perspectives of internal medicine, general and family practice, general practice nursing, and mental health care.

Design

A survey of 172 health care journals and the Cochrane Database of Systematic Reviews (CDSR). Seven research assistants with master’s-level education and extensive training to achieve high interrater reliability (κ>0.80) shared the reading of all journal issues for 2000. Each article was categorized as a clinically relevant original study, review article, case report, or general paper. The original and review articles were further categorized as pass or fail by methodologic criteria in the areas of treatment, diagnosis, prognosis, causation, economics, clinical prediction guides, and qualitative analyses. Clinicians affiliated with the editing of evidence-based practice journals (ACP Journal Club, Evidence-Based Medicine, Evidence-Based Nursing, and Evidence-Based Mental Health) determined which high-quality studies and reviews were the most clinically important.

Results

Over half of the high-quality, clinically relevant studies for each discipline were published in a subset of 10 journals, but the concentration of these studies in any single journal was low (the highest, for internal medicine, being 11%). The leading journals differed across the 4 disciplines. The CDSR was a leading contributor to all 4 disciplines. Of the 4 highest-circulation general journals, only the Lancet and JAMA made the top 10 list for each discipline.

Conclusion

Most clinically important studies and reviews in health care in several disciplines were published in a small number of medical journals, differing by discipline, but the concentration of such articles in any one journal was low.

Health Information Research Unit, McMaster University, 1200 Main St W, Hamilton, Ontario L8N 3Z5, Canada; e-mail: mckib@mcmaster.ca

The Coverage of Women’s Health in General Medical vs Women’s Specialty Journals: Not Just “Navel-to-Knees”

Jocalyn P Clark,1 Julie M Dergal,2 Penelope de Nobrega,2 Anjali Misra,2 Georgina D Feldberg,3 and Paula A Rochon4

Objective

Women’s health is traditionally portrayed as “navel-to-knees,” which emphasizes reproductive and maternal functions without consideration of social contexts. The medical literature influences the definitions and delivery of women’s health care. A comparison of how women’s health was represented in leading peer-reviewed general medical (GM) vs women’s specialty (WS) journals was conducted.

Design

Original investigations between January 1 and June 30, 1999, published in GM journals Annals of Internal Medicine, BMJ, JAMA, Lancet, New England Journal of Medicine, and Canadian Medical Association Journal (n=553) were compared to original investigations in leading WS journals: Health Care for Women International, Journal of Women’s Health, Women & Health, and Women’s Health Issues (n=87). Data were collected from articles related broadly to women’s health, identified using 3 strategies: articles that studied women only, keyword searches of all article titles, and a MEDLINE subject heading search. One hundred and five GM and 87 WS women’s health articles were found. Two independent reviewers critically evaluated and categorized each article as “navel-to-knees” (eg, menstruation, cervical cancer), “non-navel-to-knees” (eg, abuse, osteoporosis), or both. Discrepancies were resolved by consensus.

Results

Of the GM articles, 54 (51.4%) focused solely on “navel-to-knees” topics; half were reproductive issues and half female cancers. In contrast, 25 (28.7%) WS articles focused only on “navel-to-knees.” A “non-navel-to-knees” topic was the sole focus of 29 (27.6%) GM articles vs 38 (43.7%) WS articles. One quarter of GM and WS articles addressed both. Of all WS articles, 71% addressed a “non-navel-to-knees” issue; conversely, 72% of GM articles included a “navel-to-knees” issue (P<.02).

Conclusions

Most GM articles drew on a narrow definition of women’s health. Women’s specialty journals provided more balanced coverage, addressing social concerns in addition to reproductive health. Since “navel-to-knees” topics represent only a fraction of women’s lives and GM journals have wide impact, editorial decisions and peer review at leading GM journals should promote broader conceptualizations of women’s health.

1Department of Public Health Sciences, Room 6, McMurrich Bldg, 12 Queen’s Park Crescent West, University of Toronto, Toronto, Ontario M5S 1A8, Canada, e-mail: j.clark@utoronto.ca; 2Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada; 3York University, Toronto, Ontario, Canada; 4Departments of Public Health and Medicine, University of Toronto, Toronto, Ontario, Canada

Study of Peer Review

Prodding Tardy Reviewers: Randomized Comparison of Phone, Fax, and E-mail

Roy M Pitkin and Leon F Burmeister

Objective

When manuscript reviewers fail to file reviews within a specified interval, journals usually prod them to complete their reviews. The objective of this study was to compare the efficacy of phone, fax, and e-mail for this prodding.

Design

The study was conducted in the main editorial office of Obstetrics & Gynecology, a monthly medical specialty journal. The journal routinely asks reviewers to return reviews within 21 days of manuscripts being sent to them. When 28 days had elapsed without a review being received, consecutive tardy reviewers were entered into the study if their phone and fax numbers and e-mail address were known to the journal. Enrollees (N=378) were contacted by phone, fax, or e-mail, assigned randomly (computer-generated), to inquire about the status of the review and urge its completion. Contacts were made by 1 of 2 editorial assistants, and identical wording was used in all 3 approaches. The main outcome variable was receipt of the review within 7 days of contact.

Results

The proportion returning reviews within 7 days was essentially identical for each of the 3 methods studied: phone 85/125 (68%, 95% confidence interval [CI] 60%-76%); fax 86/129 (67%, 95% CI 59%-75%); and e-mail 84/124 (67%, 95% CI 59%-75%). Further, among the two thirds who returned reviews within 7 days, the mean time to return did not differ significantly across the 3 groups. In the phone-contact group, 67% of calls were answered by a secretary, assistant, or receptionist; 7% by the reviewer; and 23% by some type of machine (2% were unanswered).

Conclusion

Contacting tardy reviewers resulted in the review being received within 7 days in about two thirds of cases, and it made no difference whether contact was made by phone, fax, or e-mail.

Obstetrics & Gynecology, PO Box 70410, 409 12th St SW, Washington, DC 20024-0410, USA, e-mail: rpitkin@greenjournal.org

Effect of Voluntary Attendance at a Formal Training Session on Subsequent Performance of Journal Peer Reviewers

Michael L Callaham1 and David L Schriger2

Objective

To determine whether peer reviewers who attend a formal interactive training session thereafter produce better reviews.

Design

Peer reviewers were invited to attend a formal, highly interactive 4-hour workshop on peer review. Attendees received a sample manuscript to read and review in writing in advance. The workshop included presentations on analyzing a study and on the journal’s expectations for a quality review, discussion of the sample manuscript’s flaws and how to address them in a review, discussion of the reviews written by the attendees, and discussion of real reviews of other manuscripts illustrating key points. Attendees’ performance was assessed using standard editor quality ratings (1 to 5) for the 2 years following workshop attendance. Controls matched for previous review quality and volume were selected from nonattenders of the workshop. In study 1, all reviewers were invited; in study 2, only 75 randomly selected reviewers with average scores were invited.

Results

Study 1: Twenty-five reviewers volunteered for the course, were eligible for study, and were compared with 25 matched controls. Of attendees filling out evaluations, 19% thought the workshop somewhat helpful and 81% thought it very helpful. All thought it would improve their subsequent reviews, and 85% thought it would improve their review ratings. The mean change in rating after the workshop was 0.16 (95% CI -0.17 to 0.49) for controls and 0.09 (-0.23 to 0.42) for attendees. Study 2: Of 75 reviewers invited, only 12 attended; 100% thought the workshop would improve their performance and ratings. Test scores at the end of the workshop improved 23% (in 73% of participants) compared with pretests. To date, the controls’ average rating has changed by -0.08 (95% CI -0.6 to 0.5) vs -0.18 (-0.6 to 0.2) for attendees.

Conclusions

Among self-selected peer reviewers, attendance at a highly structured, interactive workshop did not improve the quality of subsequent reviews, contrary to attendees’ predictions. Efforts to target average reviewers were logistically difficult and showed a similar lack of effect on ratings, despite improvement in scores on a test instrument.

1Box 0208, University of California, San Francisco, CA 94143-0208, USA, e-mail: mlc@itsa.ucsf.edu; 2University of California, Los Angeles, Los Angeles, CA, USA

Structured Training Resources for Scientific Peer Reviewers

Richelle J Cooper,1 Michael L Callaham,2 and David L Schriger1

Although peer review is the foundation of scientific publication, most reviewers have no formal training in manuscript review and are chosen solely on the basis of their content expertise. Despite much attention to improving the quality of scientific manuscripts, no systematic method exists to teach individuals how to perform a review. For this reason, an educational package was developed to train novice peer reviewers and provide a reference for those with more experience. This multimedia project is based on experience gained and methods developed during 10 years of training and assessing peer reviewers in a formal environment. Previous training workshops included small group sessions with interactive lectures and real-time review of a manuscript. Participants underwent pre- and post-workshop skill testing. Yearly workshops were modified in response to participant test results and feedback, and to editors’ reports of recurring patterns of deficiency in manuscript reviews.

A CD-ROM contains the core materials. It is a subset of material that will be contained on a Web site that can be regularly updated with additional teaching modules. The CD provides a comprehensive set of audiovisual lessons addressing the purpose of peer review, the peer review process, the role of the reviewer, the essential content of the review, the writing of the review, and common pitfalls in manuscript evaluation and review preparation, using actual manuscripts and reviews as examples. It includes a simulated peer review experience (a manuscript for the participant to formally review, with detailed discussion of the manuscript’s strengths and weaknesses and how to address them). The Web site, when completed, will provide an expanded electronic textbook of peer review that supplements the lecture series and is linked to other relevant Web sites. For example, it will offer more detailed material on design-specific research methodology and statistical issues.

1UCLA Emergency Medicine Center, UCLA School of Medicine, 924 Westwood Blvd, #300, Los Angeles, CA 90024, USA, e-mail: richelle@ucla.edu; 2School of Medicine, University of California, San Francisco, San Francisco, CA, USA

The Quality of Reviewers of the Chinese Journal of Internal Medicine

Ding Yunqiu1 and Qian Shouchu2

Objective

To investigate the quality of reviewers for the Chinese Journal of Internal Medicine and to assess their need for training.

Design

Of 219 reviewers for the year 2000, 73 were surveyed by questionnaire during a meeting held by the Chinese Journal of Internal Medicine; the questionnaire covered 4 categories of qualities that reviewers should possess given the present state of scientific knowledge. These reviewers are mostly members of specialty societies, academicians, and noted scientists in China. Their ages ranged from 39 to 70 years, averaging 54.5 years.

Results

All the reviewers check the originality and creativity of the scientific research in the process of review. Sixty-eight (93.1%) have knowledge of evidence-based medicine and randomized controlled trials. Fifty-five (75.3%) verify the methodology in manuscripts, 43 (59.0%) check the statistics, and 28 (38.4%) consider the importance of the argument. Forty-nine (67.1%) check the statistics in manuscripts themselves, and 33 (45.2%) ask others to do it (17 get help from their students or colleagues). Forty-nine reviewers (71.2%) know what the Vancouver style (Uniform Requirements for Manuscripts Submitted to Biomedical Journals) is about. Twenty-six (47%) consider bibliographic references important, 33 (45.2%) consider their relevance to the review, and 15 consider nothing important. Twenty-five (34.2%) indicate the necessity of quantitative peer review, and 58 (79.5%) have knowledge of editorial ethics. Thirteen (17.8%) agree with disclosure of the reviewer’s name in the paper, and 23 (31.5%) agree with disclosure of each author’s role in the research in the paper, but 1 indicates that such roles are hard to verify. Nineteen (32.9%) consider disclosure of conflicts of interest between reviewers and authors necessary. Fifty reviewers (74.0%) have access to the e-journals of Western countries and 54 (74.0%) to online retrieval systems for help in review. Thirty-seven (50.7%) use e-mail to communicate with the editorial staff. Of the 73 reviewers, 43 publish more than 1 paper every year in major international journals and 62 in domestic journals. One has published 30 papers abroad, 2 have published 20 papers at home, and 3 have published nothing.

Conclusions

Reviewers are prominent persons in their specialties who have usually earned high reputations for their contributions to clinical practice and basic research, but that alone does not make them competent reviewers. Reviewers for medical journals today should possess adequate knowledge of both the sciences and the humanities. To improve the quality of medical journals, reviewers need to be trained or reeducated according to the requirements of medical journals.

1Chinese Journal of Internal Medicine, 42 Dongsi Xidajie, 100710, Beijing, China; 2Chinese Medical Journal, 42 Dongsi Xidajie, 100710, Beijing, China, e-mail: qsc@ht.rol.cn.net

Frequency and Consistency of Reviewers’ Comments on a Methods Section

Erica Frank,1 Lucia McLendon,1 Donna Brogan,2 and Dorothy Fitzmaurice1

Objective

To quantify an aspect of the thoroughness and consistency of peer review, issues frequently raised by authors but poorly quantified.

Design

External reviews were examined regarding the methods section of 58 manuscript submissions (29 published manuscripts submitted an average of 2 times each) from the Women Physicians’ Health Study, a large national study (n=4501 respondents, n=716 variables). Sets of reviews from 34 different journals (n=128 reviews, mean=2.2 reviews per set) were examined. All examined manuscripts contained 3 virtually identical paragraphs that described the basic study methods and 1 or more paragraphs (all written by the same group of authors) describing analytic issues specific to the particular submitted manuscript. Based on feedback from coauthors on these manuscripts and on comments from the reviewers, it was determined that the most important components of the methods sections concerned response rate and data weighting. Thoroughness and consistency were assessed by the extent to which these and other components were regularly addressed in reviews.

Results

Nearly all review sets (91%) included at least 1 comment about the methods, 67% of individual reviews included some methodological comment, and 16% of comments addressed methods. By the time of publication, 97% of manuscripts had received at least 1 reviewer comment about methodology. Among reviews, 26% included 1 comment about the methods section, 16% made 2 separate comments, and 25% made more than 2 comments. There were a total of 217 methods-related comments. Approximately 17% of comments addressed response rate (11% positive, 56% negative, and 33% neutral); 10% addressed data weighting (43% positive, 5% negative, and 52% neutral); 25% considered other statistical methodological issues; and 49% considered other assorted methodological issues.

Conclusion

In the papers examined, methodological issues were nearly always addressed by journal reviewers, although comments were inconsistent.

1Department of Family and Preventive Medicine, Emory University School of Medicine, 69 Butler St SE, Atlanta, GA 30303, USA, e-mail: efrank@fpm.eushc.org; 2Department of Biostatistics, Grace Crum Rollins School of Public Health, Emory University, Atlanta, GA, USA

Reviewers and Reviews Under Scrutiny: An Assessment of Strengths and Weaknesses

Sheila M McNab

Objective

To investigate whether reviewers for 10 biomedical journals performed their task conscientiously and thoroughly and whether their criticism helped authors improve their manuscripts.

Design

An author’s editor, assisted by Dutch graduate authors, scrutinized the initial reviews for 12 articles submitted to 10 international journals covering general biology, medical physics, radiology, virology, and vision research. Whenever possible, critiques were discussed with a manuscript’s principal author. Only 3 of the principal authors were senior researchers who had published widely. Supplementary material comprised (if available) copies of initial letters sent to potential referees, guidelines and instructions for referees, and checklists. Journal selection was governed by the availability of adequate material and authors willing to engage in discussions. Revision of manuscripts had given the author’s editor considerable insight into the fields concerned.

Results

Each manuscript initially received 2 to 4 reviews varying in length from 3.5 lines to 2 pages. The total number of reviews and reviewers involved was 28. Turnaround times were checked where possible; initial reviews for most papers, with 2 notable exceptions, were received within the 3 to 5 weeks specified by most journals. Direct comparison of instructions for reviewers (6 sets available) with corresponding reviews revealed that reviewers ignored much of the advice supplied by journals. They made few or no comments on the suitability of the title, quality of the abstract, or adequacy of references. In 3 cases, however, such aspects were included in checklists. Each critic concentrated mainly on technicalities stemming from his/her special approach; hence, reports on any 1 paper hardly overlapped. Poor English in 4 reviews was not helpful to inexperienced Dutch authors trawling for idiomatic English expressions. Of the extra references proposed by 5 referees, 1 proved inappropriate and 1 inaccessible. Nevertheless, all authors appreciated reviewers’ efforts and the opportunity to rid their manuscripts of errors and ambiguities.

Conclusions

Strengths and weaknesses were fairly evenly balanced. Superficial reviews made little impact on manuscript quality, but the best reviews clarified the science, educated the authors, and advised the editor. For inexperienced authors, responding to criticism was an intellectual challenge and a first step to publication. Experienced authors also benefited, but were more critical about review quality.

Physics and Astronomy Library, c/o Dr A B A Schippers, PO Box 80.000, 3508 TA Utrecht, the Netherlands

Peer Reviewer Recommendations and Ratings of Manuscript Quality for Accepted and Rejected Manuscripts

Robert McNutt1,2 and Richard M Glass1,3

Objective

To study the association of peer reviewer recommendations and quality ratings with editorial decisions, and the level of agreement among reviewers.

Design

Case-control study of original contribution submissions to JAMA during 1999. A random sample of 25 (of 199) accepted manuscripts and 50 (of 966) peer-reviewed rejected manuscripts, matched by topic, was obtained. The reviewers provided recommendations and quality ratings on the manuscript checklist. The editors’ ratings of the quality of each reviewer’s review were also obtained.

Results

Reviewer recommendations were statistically significantly associated with acceptance, but the mean likelihood ratios were modest (2.4 for the recommendation to “accept as is”; 0.25 for “reject”). The intraclass correlation coefficient for reviewer recommendations for all manuscripts was 0.50 (95% confidence interval [CI] 0.21-0.65). However, reviewer agreement was statistically significant only for recommendations made for rejected manuscripts. Agreement on the components of manuscript quality was statistically significant only for ratings of importance. Logistic regression revealed that reviewer recommendations and ratings of importance explained 35% of the variation in acceptance. Controlling for the editors’ quality grades of the reviews did not improve prediction of acceptance or rejection.
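For readers unfamiliar with likelihood ratios in this setting, the quantity reported above is the proportion of eventually accepted manuscripts receiving a given recommendation divided by the proportion of rejected manuscripts receiving it. A minimal sketch with hypothetical counts (these are not the study’s data):

```python
# Illustrative sketch of a reviewer-recommendation likelihood ratio:
# LR = P(recommendation | accepted) / P(recommendation | rejected).
# All counts below are hypothetical, not taken from the study.

def likelihood_ratio(rec_accepted, n_accepted, rec_rejected, n_rejected):
    """Proportion of accepted papers with the recommendation, divided by
    the proportion of rejected papers with it."""
    return (rec_accepted / n_accepted) / (rec_rejected / n_rejected)

# Hypothetical: 12 of 50 reviews of accepted papers recommended
# "accept as is" vs 10 of 100 reviews of rejected papers.
print(round(likelihood_ratio(12, 50, 10, 100), 2))  # 2.4
```

An LR of 2.4 means an “accept as is” recommendation was 2.4 times as likely for a manuscript eventually accepted as for one rejected; an LR of 0.25 for “reject” works the same way in the opposite direction.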

Conclusions

Reviewer recommendations were modestly associated with editorial decisions but showed only fair agreement among reviewers. Agreement was best for recommendations to reject. Editors’ grades for the reviews were not associated with recommendations or the final decision. Summary recommendations and quality ratings may not capture the full value of peer reviewer contributions. Specific comments made by reviewers must be assessed in addition.

1JAMA, 515 N State St, Chicago, IL 60610, USA, e-mail: robert_mcnutt@rush.edu; 2Rush-Presbyterian-St Luke’s Medical Center, Chicago, IL, USA; 3University of Chicago, Chicago, IL, USA

Attitudes Toward Open Peer Review and Electronic Transmission of Papers for Their Review

Remedios Melero1 and F López-Santoveña2

Objective

To assess reviewers’ attitudes and preferences for or against masking reviewer identity, and toward the electronic transmission of papers for review.

Design

A survey was mailed to 293 referees (ages 35-45, 35%; 45-55, 37%; and 55-65, 27%; 93% PhD graduates; 69% male; 98% researchers; 82% also teachers; 85% also reviewing for other journals). The reviewers were mainly from Europe, North America, and South America. The questionnaire was anonymous and asked whether respondents favored open review or masking of reviewers, and whether they agreed with electronic transmission of papers for review (from the points of view of both author and reviewer). Statistical analysis included frequency table analysis, the kappa test of reliability, the Pearson chi-square test, and the test of homogeneity of odds ratios where applicable.

Results

The response rate was 35% (103 respondents). The consistency between respondents’ answers as authors and as reviewers regarding the peer review process was significant (P<.001), with no significant differences by gender or age. Seventy-five percent were in favor of masking reviewers, and 17% favored completely unblinded review. The consistency between the answers on paper transmission was also significant (P<.001), again with no significant differences by gender or age. Seventy-five percent favored electronic transmission and 25% were against it. There was a significant association between answers for or against e-transmission and age, both as reviewers (P=.009) and as authors (P=.031). The other associations between the system of review and gender or age were not significant.

Conclusions

Participants preferred masking the reviewers and tended to favor the Web as the transmission medium because it is considered faster, simpler, and more economical.

1Food Science and Technology International, Instituto de Agoquímica y Tecnología de Alimentos, CSIC, PO Box 73, 46100 Burjasot, Valencia, Spain, e-mail: rmelero@iata.csic.es; 2Computing and Statistics Unit, Instituto de Agoquimica y Tecnología de Alimentos, Valencia, Spain

The Core of Peer Review: Reviewers vs Editors and Future Citation of Manuscripts

Tobias Opthof, Ruben Coronel, Jacques M T de Bakker, Jan W T Fiolet, Marcel M Levi, Martin Pfaffendorf, Marieke W Veldkamp, Arthur A M Wilde, and Michiel J Janse

Objective

Modern developments in electronic publishing may remove limitations to the publication of scientific material. A reappraisal of the value of the peer review process seems justified.

Design

The editorial team of Cardiovascular Research scored the priority of original manuscripts, based on the abstracts of papers submitted between 1998 and 2000. The total score of the whole team was calculated on a scale of 0% to 100%, based on the individual editors’ scores (a recommendation of either pass or reject). The scores of each of 3 reviewers (indicating high or low priority) were likewise combined into an overall reviewers’ score (also ranging from 0% to 100%). For each paper, the citations obtained during the 36 months after publication were cumulated and correlated with reviewers’ and editors’ scores.

Results

Editors’ and reviewers’ ratings were significantly correlated in 1998, 1999, and 2000 (r=0.214, n=922; r=0.234, n=805; and r=0.236, n=835; all P<.0005). Still, these correlations were too weak to have practical meaning for individual manuscripts. The citation accumulation period is still in progress.
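The combination of very small P values with correlations of no practical use follows from the sample sizes: the t statistic for a Pearson r grows with the square root of n. A short sketch using the reported r and n (the two-sided P<.0005 cutoff of roughly t ≈ 3.5 at large df is an approximation):

```python
import math

# Significance test for a Pearson correlation: under the null hypothesis
# t = r * sqrt(n - 2) / sqrt(1 - r^2) follows a t distribution with
# n - 2 degrees of freedom, so t grows with sqrt(n) even for small r.

def t_from_r(r, n):
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

t = t_from_r(0.214, 922)
print(f"t = {t:.2f}")  # about 6.6, well beyond the ~3.5 needed for P < .0005
# Yet r^2 is only about 0.046: ratings share under 5% of their variance,
# which is why the correlation predicts little for any single manuscript.
```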

Conclusion

Editors and reviewers both seem to be poor predictors of future citation. At this stage of the analysis, editors seem to perform better than reviewers, but this may change after the final analysis.

Cardiovascular Research, Academic Medical Center, Meibergdreef 9, Room J1-27, 1105 AZ Amsterdam, the Netherlands, e-mail: t.opthof@med.uu.nl

Peer Reviewers’ Age and Peer Review Quality

Qian Yue, Hou Cunming, Cui Xiaolan, Li Wenhui, Bao Yalin, Huang Yubin, Cai Lifeng, Zhao Hongmei, and Wang Guizhen

Objective

To understand whether the age of peer reviewers is related to the quality of peer review.

Design

A total of 4060 sheets of peer reviewer comments on manuscripts were obtained from 20 issues of the Chinese Journal of Pediatrics, 24 issues of the Chinese Journal of Obstetrics and Gynecology, 10 issues of the Chinese Journal of Neurology, and 10 issues of the Chinese Journal of Ophthalmology. The sheets were divided into 2 groups: group A (reviewers younger than 50 years [ages 35-49]), with 785 sheets, and group B (reviewers 50 years and older), with 327 sheets. The peer review standards comprised more than 10 items (see handout), and the comments made by peer reviewers were classified as excellent, good, or moderate. Excellent comments included substantial or constructive comments such as improvements to design and methodology, statistical analysis, and bibliographic references; good comments included other aspects of writing and interpretation; and moderate comments consisted of nonconstructive comments or requests that others review. The data were processed with a FoxPro database, and the difference between the 2 groups was analyzed with the chi-square test.

Results

Constructive comments on methodology in group A (younger reviewers) accounted for 3.5% (99 sheets) and in group B (older reviewers) 4.3% (335 sheets) (P=.06). Comments on design in group A accounted for 10.1% (288 sheets) and in group B 11.7% (921 sheets) (P=.01). Comments on bibliographic references in group A accounted for 2.1% (59 sheets) and in group B 1.2% (95 sheets) (P=.001), and those on statistical analysis in group A accounted for 4.5% (128 sheets) and in group B 3.4% (266 sheets) (P=.002).
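The group comparisons above rest on the 2x2 chi-square test. A minimal sketch, using illustrative counts back-derived from the reported percentages (the abstract’s sheet totals cannot be reconstructed exactly, so these numbers are approximations):

```python
# Sketch of the 2x2 chi-square comparison between age groups, without
# continuity correction. Counts are illustrative approximations derived
# from the reported percentages, not the study's exact data.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the table [[a, b], [c, d]]:
    n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative: 59 reference-related comment sheets among ~2851 in
# group A vs 95 among ~7872 in group B.
chi2 = chi_square_2x2(59, 2851 - 59, 95, 7872 - 95)
print(round(chi2, 1))  # about 11.0, above the 3.84 cutoff for P = .05 at df = 1
```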

Conclusions

The results show that the age of peer reviewers affects the quality of peer review because younger and older reviewers focus on different aspects of manuscripts. Group A made fewer substantial comments on study design than group B, indicating that younger reviewers commented less fully on study design than older reviewers. Constructive comments on statistics and bibliographic references were more frequent in group A than in group B. It is suggested that younger reviewers be encouraged to review manuscripts so that younger and older reviewers complement one another.

Medical Journal Publishing House, Chinese Medical Association, 42 Dongsi Xidajie, 100710 Beijing, China

Referees’ Opinions About Editorial Policies and Practices of an Academic Medical Journal in Brazil

Julio Cesar Voltarelli,1,2,3 Valderez Aparecida Coelho Falaschi,1 and Maria De Lourdes Veronese Rodrigues1,2

Objective

Revista Medicina-Ribeirão Preto is a quarterly medical journal published since 1961 by a Brazilian institution. The journal is published mainly in Portuguese, emphasizes review papers and topic symposia, and is not indexed in MEDLINE. Authors’ identities are masked from peer reviewers, and the journal is in the process of becoming freely available on the Internet. This study was conducted to assess reviewers’ attitudes toward the journal’s editorial policies and practices.

Design

An electronic survey was sent to 100 reviewers (senior Latin American faculty members who had reviewed articles submitted during the journal’s last 3 years of publication).

Results

Fifty-four reviewers replied. Sixty percent agreed with the journal’s editorial policy even though it prevents indexing in MEDLINE. Sixty-one percent would like a more structured form for paper reviews, and 78% declared that masking the identity of authors is important for impartial judgment of papers. Only 9% reported that both authors and reviewers should be identified. Fifty-three percent favor free availability of the whole medical literature on the Internet, while 15% would restrict this to article summaries and 17% to review articles. Only 37% of the reviewers agreed with abolishing paper correspondence in the editorial process, and a minority think that reimbursement would speed (40%) or improve the quality of (24%) paper reviews.

Conclusions

The reviewers’ opinions may be explained in part by cultural factors and should be taken into consideration when planning the editorial policies and practices of medical journals in that part of the world.

1Revista Medicina-Ribeirão Preto, São Paulo, Brazil; 2Medical School of Ribeirão Preto, University of São Paulo, São Paulo, Brazil; and 3Hospital das Clínicas, Depto Clínica Médica, Campus USP, 14048-900 – Ribeirão Preto, São Paulo, Brazil, e-mail: jcvoltar@fmrp.usp.br
