Friday, December 13, 2013

A Shot Across the Bows for SCIENCE, NATURE and CELL

In a recent article in the Guardian newspaper, Nobel laureate Randy Schekman castigated the ‘holy trinity’ of prestige journals for their editorial practices (1). He also criticized the academic promotions process for putting too much emphasis on publication in these journals. The journals in turn exploit this for their own ends of increased circulation and profit. While there is a lot of excellent science published in these venues, the prestige journal system has two major flaws. The first, pointed out by Schekman, is that these journals want articles on timely and ‘sexy’ topics only; other equally good science is ignored. The second is that the editorial process really isn’t peer review. Much of the decision making at each journal lies with a small cohort of admittedly very bright, usually young, full-time staff editors (sometimes decried as ‘failed postdocs’). This is especially true of the Nature stable of journals and is quite at odds with the more traditional approach of journals based in scientific societies, where the editors are distinguished investigators in their own right and serve on a part-time basis.

Schekman advocates publishing in open-access journals and ignoring the prestige journals. The trouble with that is that the expanding universe of online journals includes a lot of junk, as a recent experiment showed (2). Personally, I have more faith in some of the old-line conventional journals in the biomedical field that have a long track record of publishing solid science. Despite the emphasis on publication in premiere journals, my experience is that good science published in good mid-level journals eventually gets recognition.


Thursday, December 5, 2013

Peer Review: ‘Herding’ Behavior Versus ‘Gut’ Instincts in Science

An interesting article in NATURE scrutinizes the peer review process via a computer model (1). The authors compare how rapidly a hypothesis gains acceptance in a field when peer review is based solely on objective analysis of the data versus when reviewers also include subjective feelings about the validity of the hypothesis. They find that convergence is more rapid in the first case. Rapid convergence has an element of herd behavior; this may be valuable when the hypothesis under consideration is indeed correct. However, if it is not, then ‘herding’ can lead to premature acceptance of false concepts.
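The herding dynamic can be illustrated with a toy sequential-review simulation. This is my own sketch, not the model in the cited NATURE article; the function, parameters, and numbers are all hypothetical choices for illustration:

```python
import random

def simulate(n_reviewers, herding_weight, p_positive, seed=1):
    """Toy model: reviewers vote one at a time on a hypothesis.
    Each sees one noisy piece of evidence (positive with probability
    p_positive) and, when herding_weight > 0, also weighs the fraction
    of earlier reviewers who accepted."""
    rng = random.Random(seed)
    accepts = 0
    for i in range(n_reviewers):
        evidence = 1.0 if rng.random() < p_positive else 0.0
        prior_majority = accepts / i if i else 0.0
        score = (1 - herding_weight) * evidence + herding_weight * prior_majority
        if score >= 0.5:
            accepts += 1
    return accepts / n_reviewers

# Independent review of a weak hypothesis (40% of experiments positive):
# acceptance simply tracks the evidence, hovering near 0.4.
independent = simulate(500, herding_weight=0.0, p_positive=0.4)

# Equal weight on the running consensus: one early positive result can
# lock the field into unanimous acceptance of the same weak hypothesis.
herded = simulate(500, herding_weight=0.5, p_positive=0.4)
```

The point of the sketch is only the qualitative contrast: once reviewers weigh the running consensus, convergence is fast, but it can converge on a hypothesis the evidence does not actually support.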

The NATURE article reflects a problem that is very familiar to scientists reviewing journal articles (or grant proposals). In the current ideal model of peer review, one is supposed to confine oneself to the data and methods presented in the submitted article. If these all seem consistent, then one should recommend acceptance of the article by the journal. However, often one has the ‘gut’ feeling, based on long experience, that something isn’t right, that the results can’t be valid given the nature of the experiment or the methodology used. But you can’t just come out and state this in your review! In these cases reviewers often hunt for some excuse to reject the article, but are sometimes forced into reluctant acceptance.

If more subjectivity were allowed in peer review, it might reduce the frequency of papers, especially high-profile ones, whose data cannot be reproduced upon subsequent analysis (2). This type of subjective evaluation does take place in hallway conversations at meetings or in the university cafeteria, but perhaps there should be more of a place for it in the formal review process. Eventually flawed research is revealed, especially if the topic is quite important. However, reputations are not made by repeating the work of others, and surely there are many observations and concepts of more modest importance that persist in the literature even though they are basically incorrect.