The NIH peer review system is often severely criticized by unsuccessful applicants (see, e.g., A.D. Hollenbach, ASBMB Today, April 13, 2015) as being unfair to many productive investigators. We suspect that many NIH grant applicants who have received an unfavorable review of their proposal would reach the same conclusion. Although there is little published evidence on this point, we would hypothesize that overall impact scores (or the lack of a score) correlate closely with the level of criticism directed at the peer review process, and that applicants who do not fare well are much more likely to be critical than those whose applications are scored very favorably. Notwithstanding NIH's constant efforts to ensure the validity of the scores assigned to specific proposals, and its extensive efforts to exclude from the review process any individual with a real or potential conflict of interest, independent evidence with which to assess fairness has been scarce.
Of potential importance, therefore, is a recently published study in Science titled “Big Names or Big Ideas: Do Peer-Review Panels Select the Best Science Proposals?” (www.sciencemag.org/content/348/6233/434.full). In this article, the authors, Danielle Li and Leila Agha, critically evaluate how successfully peer-review panels identify the future quality of proposed research, as assessed by quality peer-reviewed publications. To do this, the authors tracked the peer-reviewed publications resulting from each award, as well as citation counts and patent outcomes. The comprehensive nature of the study is underscored by the fact that more than 130,000 research project (R01) grants funded by the NIH from 1980 to 2008 were examined. The authors reported that lower peer-review scores (reflecting higher-quality applications) were consistently associated with more productive research outcomes. Moreover, this relationship held even when detailed controls for an investigator’s publication history, prior grant history, institutional affiliations, career stage, and degree types were considered. Further, among awarded grants, higher peer-review scores (less favorable evaluations) were associated with 15% fewer citations, 7% fewer publications, 19% fewer high-impact publications, and 14% fewer follow-on patents.
It should be clear that the validity of the conclusions reached by the authors of this publication is predicated on the assumption that publications, citations, and patents accurately reflect the quality of science. Of relevance to this latter point, in a recent publication entitled “Publication metrics and success on the academic job market” (Current Biology, 24(11), 2014), the authors (D. van Dijk et al.) provide strong evidence that an individual’s publication record accurately reflects the author’s academic research success. They note that, while publication in the so-called “top-rated peer-reviewed journals” (such as Science and Nature) is important, the total number of peer-reviewed publications, the number of citations relative to the average for a given journal, and the quality of the journal were all strongly predictive of the probability that an individual would become an independent Principal Investigator.
Both of these recent publications underscore the critical importance of frequently publishing research findings in quality peer-reviewed journals. In recognition of that reality, we have created a comprehensive workbook (Writing for Biomedical Publication), available on our website at http://www.grantcentral.com/workbooks/biomedical-publication, written explicitly to help authors understand the intricacies of the peer-review publication process and to develop the skills required to prepare a manuscript for publication efficiently.