Editorial Guide: Peer Review

Re-reviewing Peer Review


Science Signaling, 25 Aug 2009:
Vol. 2, Issue 85, pp. eg11
DOI: 10.1126/scisignal.285eg11

Abstract

The peer-review process can be improved by having reviewers focus on improving the work rather than simply noting its flaws.

Michael B. Yaffe, Chief Scientific Editor of Science Signaling

The only thing scarier than being asked to review a paper that is directly in your research area is to actually get back the reviews on your own manuscript submission. Peer review is, of course, one of the cornerstones of modern scientific publishing; its goal is to ensure that inaccurate or sloppy research is weeded out before it can be published and muddle the field for subsequent researchers. But we need to remember that, as reviewers, our primary responsibility should be to make sure that new discoveries—or differences from and corrections to those discoveries that have been previously reported—get disseminated to the scientific community as rapidly as possible. With that in mind, I thought I would use this bully pulpit to share some of my experiences and thoughts about our current system of peer review, with an eye toward how the system might be improved from within.

Most of the style and substance that I learned for reviewing my colleagues’ manuscripts, grants, and promotion packages was based on trial and error, some errors perhaps more egregious than others. I had the opportunity, both as a graduate student and as a postdoc, to help review papers with my advisers. My early reviews were full of piss and vinegar, angrily decrying the missing controls, the improper interpretation of the data, and the fuzzy thinking revealed in the text—pretty much what one would expect from a young scientist with high standards, a distinct lack of experience with how real data often look, and a profound underestimation of how long experiments actually take. (This was back in the days when we sent paper manuscripts to journals by mail and waited months for reviews to come back. Experiments took even longer.) Fortunately, I was rescued from this vitriolic state of affairs by my mentors, who reminded me that we were trying to help the work get published, not trying to prevent it! One of the key lessons I learned from these more experienced scientists was that the most interesting papers were those in which most of the story held together, but some of the data didn’t quite fit perfectly. First, it meant that the data were probably real and not selectively edited; second, it meant that there was still more work for others to do in tying up the remaining loose ends. It wasn’t necessary to put everything together into one completely tight, self-contained, shrink-wrapped package. In fact, we should be skeptical of stories that seem too perfect—or ones in which the data lack the streaks, warts, and bristles that real data have. One of my mentors, a notoriously rigorous but fair reviewer, taught me not to criticize an experiment unless I could tell the authors how they could do it better. “If you just want to throw darts,” he would say, “go to the pub.”

Unfortunately, not all reviews facilitate the process of getting new data published; some seem devoted to actually squelching the new discoveries. Some of the more, shall we say, “creative” approaches I’ve seen in this regard, as the recipient of the products of peer review (or as the colleague of other recipients), include the following: (i) reviews that demand large amounts of additional supporting data, so that, in effect, an entire additional manuscript or two ends up in the Supplemental Data section, where it runs the risk of being forgotten and ignored; (ii) reviews that demand creation of a new transgenic or knockout animal to test the results in a whole-animal setting, or that request an experiment requiring a piece of equipment or a highly specialized technique that isn’t readily available (except maybe to the reviewer); (iii) reviews that focus obsessively on a single fine point of data interpretation as though the validity of the entire paper hinges on it; (iv) reviews that state that everyone already knows what’s in your paper, without bothering to cite any publications to support this claim; (v) reviews that raise arguments about data in a previous publication that already made it through peer review; and (vi) my personal favorite: reviews in which the reviewer has clearly misinterpreted the data or—still worse—misinterpreted a published paper by another group and claimed that it conflicts with the current results. A good editor can often weed out the more unreasonable demands. However, sometimes an editor will abdicate his or her responsibility for making independent decisions and instead demand that a manuscript satisfy every request of three or more reviewers. In my experience, getting three scientists to agree on where to go for lunch is arduous enough—having them all agree on the merits of every single point in a manuscript seems beyond hope at times.

What can we, as scientists, do to improve this situation? If you agree to be a reviewer, focus your critique on improving the work so that the data—assuming they are novel and correctly obtained—can be published somewhere (even if not in the journal for which you are reviewing it). Keep in mind that no one ever built a statue to a critic. If the work is so close to your own that it is hard to be objective, if the paper is so poorly written that reading it makes you want to strangle the dog, or if you are so overwhelmed that you realize you cannot devote the time required to do a good job, then recuse yourself—even if you have put off reading the paper for 6 weeks and have an inbox full of “Review of Manuscript XXXX OVERDUE” notices. Finally, read your review from the perspective of the recipient—if you received this review, would you be able to use the comments constructively, or would you need a punching bag to decompress? Fortunately, the great majority of reviewers are fair, are devoted to the process, and take the job seriously.

Looking to the future, perhaps some formal training in peer-reviewing manuscripts should be considered for our students and postdocs. All graduate programs require students to critically evaluate the published literature, but I suspect that only a few actually teach how to write a formal review. One interesting way to implement this is described in a Science Signaling Teaching Resource on using Web-based discussion forums (1). An alternative, discussed in a second Teaching Resource on training at the undergraduate level, is to have students write up laboratory exercises as though they were research articles and then act as peer reviewers for each other’s “research” (2).

Finally, as editors, we need to ensure that our reviewers don’t make exorbitant demands or “move the goalposts” by requiring new revisions in second or third rounds of review. Once the initial set of criticisms has been addressed, reviewers shouldn’t be permitted to ask for things they missed in the initial review, unless these relate to experiments performed to satisfy the original critiques. Editors have an obligation to get involved; the reviewers’ opinions should not fully dictate the final decision. Reviews are meant to guide editors’ thinking, not rule it. This independence requires a certain amount of scientific courage, particularly when one decides to go against the advice of a highly decorated expert in the field. On the other hand, sometimes the reviews themselves make it easy: Those that drip with vitriol, exuding an excessive amount of anger and aggression, disqualify themselves.

With the ever-increasing flood of new research in the publication pipeline, it is more important now than ever that we truly act as peers in the process of peer review.

References

