Previously, I wrote about various potential future roles for journals. Several of the scenarios I discussed involved journals taking a much bigger role as editors and custodians of science, using their power to shape the way that science is conducted and exerting control over the scientific process.
I was thus intrigued when, last week, The Journal of Chemical Information and Modeling (JCIM; an ACS journal) released a set of guidelines for molecular dynamics simulations that future publications must comply with. These guidelines provoked a reaction from the community: various provisions (like the requirement that all simulations be performed in triplicate) were alleged to be arbitrary or unscientific, and the fact that these standards were imposed by editors and not determined by the community also drew criticism.
The authors say that the editorial “is *not* intended to instruct on how to run MD”, but this defense rings hollow to me. See, for instance, the section about choosing force fields:
> JCIM will not accept simulations with old force field versions unless a clear justification is provided. Specialized force fields should be used when available (e.g., for intrinsically disordered proteins). In the case of the reparametrization or development of new parameters compatible with a given force field, please provide benchmark data to support the actual need for reparameterization, proper validation of novel parameters against experimental or high-level QM data…
I’m not a molecular dynamics expert, so I’m happy to stay out of the scientific side of things (although the editorial’s claim that “MD simulations are not suitable to sample events occurring between free energy barriers” seems clearly false for sufficiently low-barrier processes; see the rough estimate below). Nor do I wish to overstate the size of the community’s reaction: a few people complaining on Twitter doesn’t really matter.
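As a quick sanity check on that parenthetical claim, here’s a back-of-the-envelope Eyring-equation estimate of how often a low-barrier event occurs at room temperature (the barrier heights below are illustrative values for the sake of argument, not numbers from the editorial):

```python
import math

# Transition-state-theory (Eyring) estimate: k = (k_B * T / h) * exp(-dG_ddagger / (R * T))
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 1.987204e-3      # gas constant, kcal/(mol*K)
T = 300.0            # temperature, K

def eyring_rate(barrier_kcal_mol: float, temp: float = T) -> float:
    """Rate constant (1/s) for crossing a free-energy barrier of the given height."""
    prefactor = KB * temp / H  # ~6.3e12 1/s at 300 K
    return prefactor * math.exp(-barrier_kcal_mol / (R * temp))

# Illustrative "low" barriers, in kcal/mol
for barrier in (3.0, 5.0, 7.0):
    k = eyring_rate(barrier)
    print(f"{barrier:.0f} kcal/mol barrier: k ≈ {k:.1e} /s, one crossing every ~{1e9 / k:.2f} ns")
```

Barriers of a few kcal/mol are crossed on picosecond-to-nanosecond timescales, comfortably within a routine unbiased trajectory; it’s only for substantially higher barriers that brute-force sampling becomes impractical. But that’s a quibble.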
Rather, I want to use this vignette to reflect on the nature of scientific authority, and return to a piece I’ve cited before: Geoff Anders’ “The Transformations of Science.” Anders describes how the enterprise of science, initially intended to be free from authority, has evolved into a magisterium of knowledge that governments, corporations, and laypeople rely upon:
> The original ideal of *nullius in verba* sometimes leads people to say that science is a never-ending exploration, never certain, and hence antithetical to claims on the basis of authority. This emphasizes one aspect of science, and indeed in theory any part of the scientific corpus could be overturned by further observations.
>
> There is, however, another part of science—settled science. Settled science is safe to rely on, at least for now. Calling it into question should not be at the top of our priorities, and grant committees, for example, should typically not give money to researchers who want to question it again.
While each of these forms of science is fine on its own, they ought not to be conflated:
> When scientists are meant to be authoritative, they’re supposed to know the answer. When they’re exploring, it’s okay if they don’t. Hence, encouraging scientists to reach authoritative conclusions prematurely may undermine their ability to explore—thereby yielding scientific slowdown. Such a dynamic may be difficult to detect, since the people who are supposed to detect it might themselves be wrapped up in a premature authoritative consensus.
This is tough, because scientists like settled science. We write grant applications describing how our research will bring clarity to long-misunderstood areas of reality, and fancy ourselves explorers of unknown intellectual realms. How disappointing, then, that so often science can only be relied upon when it settles, long after the original discoveries have been made! An intriguing experimental result might provoke further study, but it’s still “in beta” (to borrow the software expression) for years or decades, possibly even forever.
Applying the settled/unsettled framework of science to the JCIM question brings some clarity. I don’t think anyone would complain about settled science being used in editorial guidelines: I wouldn’t want to open JACS and read a paper that questioned the existence of electrons, any more than I want The Economist to publish articles suggesting that Switzerland is an elaborate hoax.
Scientific areas of active inquiry, however, are a different matter. Molecular dynamics might be a decades-old field, but the very existence of journals like JCIM and JCTC points to its unsettled nature—and AlphaFold2, discussed in the editorial, is barely older than my toddler. There are whole hosts of people trying to figure out how to run the best MD simulations, and editors giving them additional guidelines is unlikely to accelerate this process. (This is separate from mandating they report what they actually did, which is fair for a journal to require.)
Scientists, especially editors confronted with an unending torrent of low-quality work, want to combat bad science. This is a good instinct. And I’m sympathetic to the idea that journals need to become something more than a neutral forum in the Internet age—the editorial aspect of journals, at present, seems underutilized. But prematurely trying to dictate rules for exploring the frontier of human knowledge is, in my opinion, the wrong way to do this. What if the rules are wrong?
There may be a time when it’s prudent for editors to make controversial or unpopular decisions: demanding pre-registration in psychology, for instance, or mandating external validation of a new synthetic method. But I’m not sure that “how many replicates MD simulations need” is the hill I would choose to die on. In an age of declining journal relevance, wise editorial decisions might be able to set journals apart from the “anarchic preprint lake”—but poor decisions may simply hasten their decline into irrelevance.