I’ve been pretty critical of peer review in the past, arguing that it doesn’t accomplish much, contributes to status quo bias, etc. But a few recent experiences have reminded me of the value that peer review provides: in today’s scientific culture, peer review is essentially the only time that scientists get honest and unbiased feedback on their work.
How can this be true? In experimental science, scientists typically work alongside other students and postdocs under the supervision of a professor. This body of people forms a lab, also known as a research group, and it’s to these people that you present most frequently. Your lab generally knows the techniques and methods that you employ very well: so if you’ve misinterpreted a piece of data or designed an experiment poorly, group meeting is a great place to get feedback.
But a lab is also biased in certain ways. People are attracted to a lab because they think the science is exciting and shows promise, and so they’re likely to be credulous about positive results. Certain labs also develop beliefs or dogmas about how to conduct science: the best ways to perform a mechanistic study, or the most useful reaction conditions. To some extent, every lab is a paradigm unto itself. This means that paradigm-shifting criticism is hard to find among one’s coworkers, even if it’s common in the outside world.
Here are some examples of controversial-in-the-field statements that are unlikely to be controversial within given labs:
In each of these cases, it’s unlikely that criticism along these lines is available internally: people who’ve chosen to do their PhDs studying ML in chemistry aren’t likely to criticize your paper for overemphasizing the importance of ML in chemistry!
More generally, internal criticism works best when a lab serves as a shared repository of expertise, i.e. when everyone in the lab has roughly the same skillset. Some labs focus instead on a single overarching goal and employ many different tools to reach it: a given chemical biology group might have a synthetic chemist, an MS specialist, a genomics guru, a mechanistic enzymologist, and someone specializing in cell culture. If this is the case, your techniques are opaque to your coworkers: what advice can someone who does cell culture give about improving Q-TOF signal-to-noise?
Ideally, one’s professor is well-versed enough in each of the techniques employed that he or she can dispense criticism as needed. But professors are often busy, aren’t always operational experts at each of the techniques they oversee, and suffer from the same viewpoint biases that their students do (perhaps even more so).
So, it’s important to solicit feedback from external sources. Unfortunately, at least in my experience, most external feedback is too positive: “great talk,” “nice job,” etc. Our scientific culture tries so hard to be supportive that I almost never get any meaningful criticism from people outside my group, either publicly or privately. (Ideally one’s committee would help, but I never really got to present research results to my committee, and this doesn’t help postdocs anyhow.)
Peer review, then, serves as the last bastion against low-quality science: reviewers are outside the lab, have no incentive to be nice, and are tasked specifically with poking holes in your argument or pointing out extra experiments that would improve it. Peer review has improved each one of my papers, and I’m grateful for it.1
What’s a little sad is that the excellent feedback that reviewers give only comes at the bitter end of a project, which for me has often meant that the results are more than a year old and my collaborators have moved on. Much more useful would be critical feedback delivered early on in a project, when my own thinking is more flexible and the barrier to running additional experiments is lower. And more useful still would be high-quality criticism available at every step of the project, given not anonymously but by people whom you can talk to and learn from.
What might this practically look like?
I don’t know what the right solution looks like here: the burden of peer review is already substantial, and I don’t mean to suggest that this work ought to be arbitrarily multiplied for free. But I do worry that eliminating peer review, absent other changes, would simply remove one of the only meaningful chances to get unfiltered feedback on one’s science, and that this would be bad.
Thanks to Croix Laconsay and Lucas Karas for helpful feedback on this piece.