

Crystallization in Microgravity

June 16, 2023

Not Boring recently published a panegyric about Varda, a startup that’s trying to create “space factories making drugs in orbit.” When I first read this description, alarm bells went off in my head—why would anyone try to make drugs in space? Nevertheless, there’s more to this idea than I initially thought. In this piece, I want to dig a little deeper into the chemistry behind Varda, and discuss some potential advantages and challenges of the approach they’re exploring.

Much of my confusion was quickly resolved by realizing that Varda is not actually “making drugs in orbit,” or at least not in the way that an organic chemist would interpret that sentence. Varda’s proposal is much more specific: they aim to crystallize active pharmaceutical ingredients (APIs, i.e. finished drug molecules) in microgravity, allowing them to access crystal forms and particle size distributions which can’t be made under terrestrial conditions. To quote from their website:

Varda’s microgravity platform is grounded in decades of proven research conducted on NASA’s space stations. By eliminating factors such as natural convection and sedimentation, processing in a microgravity environment provides a unique path to formulating small molecules and biologics that traditional manufacturing processes cannot address. The resulting tunable particle size distributions, more ordered crystals, and new forms can lead to improved bioavailability, extended shelf-life, new intellectual property, and new routes of administration.

Crystallization is an excellent target for a new and expensive manufacturing process because it’s at once very important and very hard to control. The goal of crystallization is to grow crystals of a given compound, fitting the component molecules together into a perfect lattice that excludes impurities and can be easily handled. To the best of my knowledge, almost every small-molecule API is crystallized at one stage or another; it’s the best way to ensure that the material is extremely pure.

(Crystallizing proteins for administration is less common, since proteins are really difficult to crystallize, but it’s not unheard of—insulin is often administered subcutaneously as a microcrystalline suspension, which allows higher concentrations to be accessed without excessive viscosity.)

But crystallization is also something we can’t really control. We can’t physically put molecules into a lattice or force them to adopt ordered configurations; all we can do is dissolve them in some mixture, tweak the conditions a little bit, and hope that crystals form. Thus, finding good crystallization conditions basically amounts to randomly screening solvents and additives, leaving the solutions for a long time, and checking to see if crystals grow. In the words of Derek Lowe:

I'd like to open up the floor for nominations for the Blackest Art in All of Chemistry. And my candidate is a strong, strong contender: crystallization. When you go into a protein crystallography lab and see stack after stack after stack of plastic trays, each containing scores of different little wells, each with a slight variation on the conditions, you realize that you're looking at something that we just don't understand very well.

Why does microgravity matter for crystallization? Not Boring says that crystallization occurs “at the mesoscopic scale, the length scale on which objects are larger than nanoscale (on the order of atoms and molecules) but still small enough that quantum mechanical or other non-trivial ‘microscopic’ behavior becomes apparent.” I found this answer a little confusing—doesn’t crystallization begin on the nanoscale and end on the macroscopic scale?

Clearer to me was the explanation from a 2001 review by Kundrot and co-workers:

In zero gravity, a crystal is subject to Brownian motion as on the ground, but unlike the ground case, there is no acceleration inducing it to sediment [fall out of solution]. A growing crystal in zero gravity will move very little with respect to the surrounding fluid. Moreover, as growth units leave solution and are added to the crystal, a region of solution depleted in protein is formed. Usually this solution has a lower density than the bulk solution and will convect upward in a 1g field as seen by Schlerien photography (Figure 1). In zero gravity, the bouyant [sic] force is eliminated and no bouyancy-driven convection occurs. Because the positions of the crystal and its depletion zone are stable, the crystal can grow under conditions where its growing surface is in contact with a solution that is only slightly supersaturated. In contrast, the sedimentation and convection that occur under 1g place the growing crystal surface in contact with bulk solution that is typically several times supersaturated. Lower supersaturation at the growing crystal surface allows more high-energy misincorporated growth units to disassociate from the crystal before becoming trapped in the crystal by the addition of other growth units…

In short, promotion of a stable depletion zone in microgravity is postulated to provide a better ordered crystal lattice and benefit the crystal growth process.

To summarize, microgravity serves to immobilize the crystal with respect to the surrounding solution, preventing convection or sedimentation from bringing highly concentrated solutions into contact with the crystal. This slows crystal growth, which might sound bad but is actually really good: in general, the slower a crystal grows, the higher its purity. (See also this 2015 article for further discussion.)
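
To make the depletion-zone argument a bit more concrete, here is a toy one-dimensional diffusion model (entirely my own construction, not anything from Varda or the Kundrot review): a crystal face at one end of the domain consumes solute in proportion to the local supersaturation, and “convection” is crudely mimicked by periodically stirring the depleted layer back to the bulk concentration. The quiescent case keeps the growing face in contact with much less supersaturated solution than the stirred case does.

```python
import numpy as np

def surface_concentration(mix: bool, c_bulk=5.0, c_sat=1.0, n=200,
                          steps=20000, dt=0.4, k_growth=0.1):
    """Toy 1D model: a crystal face at x = 0 consumes solute; 'mix' mimics convection."""
    c = np.full(n, c_bulk)                        # concentration profile (arbitrary units)
    for step in range(steps):
        lap = np.empty_like(c)
        lap[0] = c[1] - c[0]                      # one-sided stencil at the crystal face
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        lap[-1] = 0.0                             # far boundary pinned at bulk below
        c += dt * lap                             # explicit diffusion step (stable for dt <= 0.5)
        c[0] -= k_growth * max(c[0] - c_sat, 0.0) # crystal growth consumes solute at the face
        c[-1] = c_bulk
        if mix and step % 50 == 0:
            c[:] = c_bulk                         # "convection": sweep the depletion zone away
    return c[0]

print("quiescent (microgravity-like) surface conc.:", round(surface_concentration(False), 2))
print("stirred (1 g convection-like) surface conc.: ", round(surface_concentration(True), 2))
```

In the quiescent case the depletion zone stays put and the surface concentration relaxes toward saturation; with the stirring step, fresh bulk solution keeps washing over the face, which is exactly the situation the review describes under 1 g.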

What practical impact does this have? In most cases, crystals grown in space are better than their terrestrial congeners by a variety of metrics: larger, structurally better, and more uniform. To quote from the Not Boring piece:

Doing crystallization in space is like adding a gravity knob to your instrument—it opens up regions of process design space that would otherwise be inaccessible. Importantly, after the crystallization occurs in space, the drug retains its solid state upon re-entry. Manufacture in space; use on earth.

This is why pharma is going to space to experiment with a wide range of medicines. Formulations made in microgravity could open the door to improvements in drug shelf life, bioavailability, IP expansion, and even better approaches to drug delivery…

To date, there has been a major disconnect between microgravity research and manufacturing. While it’s been possible to hitch a ride to the ISS and collaborate with NASA on PCG experiments, there is no existing commercial offering to actually manufacture drugs in space. Merck used their research results on Keytruda® crystallization to tinker with their terrestrial approaches to formulation. What if they could actually just manufacture the crystals they discovered in microgravity at commercial scale?

This is Varda’s mission—to make widespread research and manufacturing in microgravity a reality.

One concern I have is that, to date, the vast majority of space-based crystallization has been aimed at structural biology (elucidating the structure of a protein via crystallography), which only takes one crystal, one time. What Varda is aiming to do is preparative crystallization: crystallizing proteins and small molecules to isolate large quantities of them. Both processes obviously involve growing crystals, but otherwise they’re pretty different: in structural biology, all you care about is obtaining a single large and very pure crystal, while in preparative crystallization uniformity and reproducibility are paramount.

There’s some precedent for preparative protein crystallization in microgravity: a 1996 Schering-Plough paper studied crystallization of zinc interferon-α2b on the Space Shuttle. The results were excellent: over 95% of the protein crystallized, and the resulting suspension showed good stability and an improved serum half-life in cynomolgus monkeys (ref, ref). The difference in crystal quality is huge:

Space-grown crystals are much, much larger than crystals grown on Earth.

More recently, scientists from Merck found that crystallization of pembrolizumab (Keytruda) in microgravity reproducibly formed a monomodal particle size distribution, as opposed to the bimodal particle size distribution formed under conventional conditions (ref), although the crystals didn’t seem any larger:

Comparison of pembrolizumab crystal size distribution under different conditions.

Of course, until recently access to space was very limited and there wasn’t much reason to study preparative crystallization in microgravity, so the lack of studies is hardly surprising. For all the reasons discussed above, it seems very likely that preparative crystallization will generally work better in microgravity, and that the resulting crystals will be more homogeneous and more pure than the crystals you could grow on Earth. But it’s not 100% certain, and that’s something Varda will have to establish. The word “can” is doing a lot of work in this text from their website:

By eliminating factors such as natural convection and sedimentation, processing in a microgravity environment provides a unique path to formulating small molecules and biologics that traditional manufacturing processes cannot address. The resulting tunable particle size distributions, more ordered crystals, and new forms *can* lead to improved bioavailability, extended shelf-life, new intellectual property, and new routes of administration. (emphasis added)

Another concern is that little work (that I’m aware of) has been done on small-molecule crystallization in microgravity—the very task Varda intends to start with. Small molecules, in general, are much easier to crystallize than proteins, and there are more parameters for the experimental scientist to tune. While there are certainly cases where small-molecule crystals can display unexpected or problematic behaviors (like the famous case of ritonavir, the molecule Varda is investigating first), in general crystallization of small molecules seems like an easier problem, and one for which there are better state-of-the-art workarounds.

The most likely failure mode, to my mind, is simply that microgravity crystallization turns out to be better than crystallization on Earth, but not that much better. Shooting APIs into space, waiting for them to crystallize, and then launching them back to Earth is going to be really expensive, and Varda will have to demonstrate extraordinary results to justify the added hassle—particularly for a technique that they hope to make a key part of the pharmaceutical manufacturing process. Talk about supply chain risk! (Is this all going to be GMP?)

And it’s worth pointing out a fairly obvious consideration too: what Varda is proposing to do in space is only one part of a vast, multi-step operation. No synthesis will take place in space, so all the manufacturing of either proteins or small molecules will happen on Earth. The purified products will then be launched into space and crystallized; after reentry, the crystal-containing solution will be formulated into whatever final form the drug needs to be administered in.

So, although “there are a number of small molecules that fetch hundreds of thousands or millions of dollars per kg” (Not Boring), Varda can address only a small—albeit important—part of the manufacturing process. I doubt this will change: on average, it takes 100–1000 kg of raw materials to manufacture 1 kg of a small-molecule drug (ref), so shipping everything to orbit would massively raise costs without any obvious advantages that I can think of. The TAM for Varda might be large enough to break even, but it’s not going to replace conventional pharmaceutical manufacturing anytime soon.
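
A quick back-of-the-envelope calculation makes the point; the launch cost per kilogram is a round number I am assuming for illustration, not a figure from Varda or Not Boring.

```python
# Rough launch economics; the $/kg figure is an assumption for illustration only.
launch_cost_per_kg = 5_000            # USD per kg to low Earth orbit (assumed round number)
raw_material_factor = (100, 1000)     # kg of inputs per kg of API, the range cited above

api_only = 1 * launch_cost_per_kg
lo, hi = (f * launch_cost_per_kg for f in raw_material_factor)

print(f"launch 1 kg of finished API:            ~${api_only:,}")
print(f"launch the raw materials for 1 kg API:  ~${lo:,} to ~${hi:,}")
```

Whatever the exact launch price, moving the whole synthesis to orbit multiplies the launch bill by two to three orders of magnitude; only a high-value, low-mass step like the final crystallization has a chance of penciling out.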

Varda’s pitch is perfect for venture capital: ambitious, risky, and potentially game-changing if it succeeds. And I wish them luck in their quest, since new and better ways to approach formulation would be awesome. But I can’t shake the nagging suspicion that they’re so excited about the image of space-based manufacturing that they’re trying to invent a problem that their aerospace engineers have a solution for. We’ll find out soon enough if they’ve succeeded.

Seven Degrees of Screening for Generality

June 7, 2023

(with apologies to Maimonides and Nozick)

  1. Screening on only one substrate before assessing the substrate scope. This is the “ordinary means” in methods development.
  2. Screening on one substrate, but choosing a substrate that worked poorly in a previous study (e.g.). This can be thought of as serial multi-substrate screening, where each substrate is a separate project, but the body of work achieves greater generality over time.
  3. Screening on one substrate at a time, but rescreening catalysts when you find problematic substrates (e.g.). This amounts to serial multi-substrate screening within a single project.
  4. Intentionally choosing a variety of catalysts up front and screening this set of catalysts for each new substrate class (e.g.), thus achieving a high degree of generality with a family of catalysts, but without attempting to systematically quantify the generality of each catalyst in this set.
  5. Choosing a handful of model substrates instead of just one, but otherwise doing everything the same as one would normally (e.g., pages S24–S29).
  6. Intentionally choosing a large, diverse panel of substrates and screening against this panel to quantify catalyst generality over a given region of chemical space. This is essentially what we and the Miller group did recently (and others, etc).
  7. The same, but incorporating robotics, fancy analytical methods, generative AI, or whatever else.

When I present the “screening for generality” work, I often get the response “this is cool, but my reaction doesn’t work in 96-well plates/I don’t have an SFC-MS/my substrates are hard to make.” The point of this taxonomy is to illustrate that there are a lot of ways to move towards “screening for generality” that don’t involve 96-well plates.

If you have the time and resources for robotics or SFC-MS, that’s great—you’ll be able to screen more quickly and cover more ground. But you can still start to consider more than a single model substrate even without any specialized equipment. It’s a mindset, not a recipe.
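
To make item 6 above a bit more concrete, here is a minimal sketch (with made-up numbers and placeholder catalyst names, nothing from an actual study) of how one might summarize a catalyst’s generality once screening data over a substrate panel are in hand:

```python
import numpy as np

# Hypothetical screen: rows are catalysts, columns are substrates in the panel
# (values could be yields or ee's); real numbers would come from your own screen.
results = np.array([
    [92, 88, 15, 90, 85],   # "catalyst A": great on most substrates, one failure
    [70, 72, 68, 75, 71],   # "catalyst B": unspectacular but consistent
    [95,  5, 96, 10, 93],   # "catalyst C": superb on its favorites, useless otherwise
])

for name, row in zip("ABC", results):
    # two crude generality metrics: the panel mean, and the fraction of substrates
    # clearing a "synthetically useful" cutoff (here, 60)
    print(f"catalyst {name}: mean = {row.mean():.0f}, "
          f"fraction >= 60 = {(row >= 60).mean():.0%}")
```

Nothing here requires a robot or an SFC-MS; the only change is recording results over a panel rather than a single model substrate.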

Data Are Getting Cheaper

June 2, 2023

Much ink has been spilled on whether scientific progress is slowing down or not (e.g.). I don’t want to wade into that debate today—instead, I want to argue that, regardless of the rate of new discoveries, acquiring scientific data is easier now than it ever has been.

There are a lot of ways one could try to defend this point; I’ve chosen some representative anecdotes from my own field (organic chemistry), but I’m sure scientists in other fields could find examples closer to home.

NMR Spectroscopy

NMR spectroscopy is now the primary method for characterization and structural study of organic molecules, but it wasn’t always very good. The last half-century has seen a steady increase in the quality of NMR spectrometers (principally driven by the development of more powerful magnetic fields), meaning that even a relatively lackluster NMR facility today has equipment beyond the wildest dreams of a scientist in the 1980s:

Most powerful NMR magnet over time (Campbell, Figure 1). Broad adoption lags these records, but the trend is comparable.

Commercially Available Compounds

The number of compounds available for purchase has markedly increased in recent years. In the 1950s, the Aldrich Chemical Company’s listings could fit on a single page, and even by the 1970s Sigma-Aldrich only sold 40,000 or so chemicals (source).

But things have changed. Nowadays new reagents are available for purchase within weeks or months of being reported (e.g.), and companies like Enamine spend all their time devising new and quirky structures for medicinal chemists to buy.

One way to quantify how many compounds are for sale today is the ZINC database, which aims to collect compounds for virtual screening and is updated every few years. The first iteration of ZINC, in 2005, had fewer than a million compounds; now there are almost 40 billion:

Number of compounds in the ZINC database over time (graphic made by me).

(Most compounds in the ZINC database aren’t available on synthesis scale, so it’s not like you can order a gram of all 40 billion compounds—there’s probably more like 3 million “building blocks” today, which is still a lot more than 40,000.)
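
As a rough sanity check on that growth rate (using approximate endpoints of about 10^6 compounds in 2005 and 4 × 10^10 today, so the exact figures are assumptions):

```python
import math

n_2005, n_now, years = 1e6, 4e10, 2023 - 2005   # approximate endpoint counts
annual_growth = (n_now / n_2005) ** (1 / years)
doubling_time = math.log(2) / math.log(annual_growth)

print(f"~{annual_growth:.1f}x growth per year; doubling roughly every {doubling_time:.1f} years")
```

That works out to nearly doubling every year for almost two decades, which is a remarkable pace for a catalog of physical objects.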

Chromatography

Chromatography, the workhorse of synthetic and analytical chemistry, has also gotten a lot better. Separations that took almost an hour can now be performed in well under a minute, accelerating purification and analysis of any new material (this paper focuses on chiral stationary phase chromatography, but many of the advances translate to other forms of chromatography too).

Fastest chiral separation for a given compound by year (Regalado, Figure 1).

Computational Chemistry

Moore’s Law is powerful. In the 1970s, using semiempirical methods and minimal basis set Hartree–Fock to investigate an 8-atom system was cutting-edge, as demonstrated by this paper from Ken Houk:

Figure 3 from the 1977 Houk paper.

Now, that calculation would probably take only a few seconds on my laptop computer, and it’s becoming increasingly routine to perform full density-functional theory or post-Hartree–Fock studies on 200+ atom systems. A recent paper, also from Ken Houk, illustrates this nicely:

Figure 3E from the 2022 Houk paper.

The fact that it’s now routine to perform conformational searches and high-accuracy quantum chemical calculations on catalyst•substrate complexes with this degree of flexibility would astonish anyone from the past. (To be fair to computational chemists, it’s not all Moore’s Law—advances in QM tooling also play a big role.)
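
To give a sense of scale: a minimal-basis Hartree–Fock calculation on a small molecule, the kind of computation that was a publishable result in the 1970s, now runs in seconds with free software. Here’s a sketch using the open-source PySCF package (my choice of tool and geometry, not anything taken from the Houk papers):

```python
from pyscf import gto, scf

# Ethylene at an approximate geometry (Angstroms), minimal STO-3G basis --
# roughly the level of theory that was cutting-edge in the 1970s.
mol = gto.M(
    atom="""
    C  0.000  0.000  0.667
    C  0.000  0.000 -0.667
    H  0.000  0.923  1.238
    H  0.000 -0.923  1.238
    H  0.000  0.923 -1.238
    H  0.000 -0.923 -1.238
    """,
    basis="sto-3g",
)

mf = scf.RHF(mol)        # restricted Hartree-Fock
energy = mf.kernel()     # completes in well under a second on a modern laptop
print(f"RHF/STO-3G energy: {energy:.6f} Hartree")
```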

What Now?

There are lots of advances that I haven’t even covered, like the general improvement in synthetic methodology and the rise of robotics. Nevertheless, I think the trend is clear: it’s easier to acquire data than it’s ever been.

What, then, do we do with all this data? Most of the time, the answer seems to be “not much.” A recent editorial by Marisa Kozlowski observes that the average number of substrates in Organic Letters has increased from 17 in 2009 to 40 in 2021, even as the information contained in these papers has largely remained constant. Filling a substrate scope with almost identical compounds is a boring way to use more data; we can do better.

The availability of cheap data means that scientists can—and must—start thinking about new ways to approach experimental design. Lots of academic scientists still labor under the delusion that “hypothesis-driven science” is somehow superior to high-throughput experimentation (HTE), when in fact the two are ideal complements to one another. “Thinking in 96-well plates” is already common in industry, and should become more common in academia; why run a single reaction when you can run a whole screen?
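
For concreteness, “thinking in 96-well plates” can be as simple as enumerating a full factorial design up front rather than planning one reaction at a time. A minimal sketch (the catalyst and condition names are placeholders, not a real screen):

```python
from itertools import product
from string import ascii_uppercase

catalysts  = [f"cat-{i}"  for i in range(1, 9)]    # 8 rows (A-H), placeholder names
conditions = [f"cond-{j}" for j in range(1, 13)]   # 12 columns (1-12), placeholder names

# one full 96-well plate per substrate: every catalyst crossed with every condition
plate = {
    f"{ascii_uppercase[r]}{c + 1}": (cat, cond)
    for (r, cat), (c, cond) in product(enumerate(catalysts), enumerate(conditions))
}

print(len(plate), "wells:", plate["A1"], "...", plate["H12"])
```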

New tools are needed to design panels, run reactions, and analyze the resultant data. One nice entry into this space is Tim Cernak’s phactor, a software package for high-throughput experimentation, and I’m sure lots more tools will spring up in the years to come. (I’m also optimistic that multi-substrate screening, which we and others have used in “screening for generality,” will become more widely adopted as HTE becomes more commonplace.)

The real worry, though, is that we will refuse to change our paradigms and just use all these improvements to publish old-style papers faster. All the technological breakthroughs in the world won’t do anything to accelerate scientific progress if we refuse to open our eyes to the possibilities they create. If present trends continue, it may be possible in 5–10 years to screen for a hit one week, optimize reaction conditions the next week, and run the substrate scope on the third week. Do we really want a world in which every methodology graduate student is expected to produce 10–12 low-quality papers per year?

Book Review: Chaos Monkeys

May 30, 2023

Recently, I wrote about how scientists could stand to learn a lot from the tech industry. In that spirit, today I want to share a book review of Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley, Antonio García Martínez’s best-selling memoir about his time in tech and “a guide to the spirit of Silicon Valley” (NYT).

Chaos Monkeys is one of the most literary memoirs I’ve read. The book itself is a clear homage to Hunter S. Thompson’s Fear and Loathing in Las Vegas; García Martínez writes in the “gonzo journalism” style, blending larger-than-life personal exploits with a frank accounting of the facts. But his writing, replete with GRE-level words and classical epigraphs, invites further literary comparisons.

Chaos Monkeys as The Odyssey

The first comparison that springs to mind is The Odyssey. Odysseus, the protagonist of The Odyssey, is frequently described as polytropos (lit. “many turns”), which denotes his wily and cunning nature. Antonio García Martínez (or, per the tech fondness for acronyms, “AGM”) certainly deserves the same epithet.

Chaos Monkeys is structured as a recounting of AGM’s escapades in Silicon Valley. In order, he (1) leaves his job at Goldman Sachs and joins an ad-tech company, (2) quits and founds his own company, AdGrok, (3) gets accepted to Y Combinator and survives a lengthy legal battle with his former employer, (4) sells AdGrok simultaneously to both Twitter and Facebook, eventually sending the other employees to Twitter and himself to Facebook, (5) becomes a PM at Facebook and engineers a scheme to fix their ad platform and make them profitable, (6) succeeds, sorta, but pisses people off and gets fired, and then (7) turns around and sells his expertise to Twitter.

His circuitous journey around the Bay Area has the rough form of an ancient epic: at each company, he’s faced with new challenges and new adversaries, and his fractious relationships with his superiors mean that he’s often at the mercy of capricious higher powers, not unlike Odysseus. Nevertheless, through a mixture of cunning and hard work he manages to escape with his skin intact every time, ready for the next episode. (And, best of all, he literally lives on a boat while working at Facebook.)

(His escapades have only continued since this book was published: he got hired at Apple, unceremoniously fired a few weeks later [for passages in Chaos Monkeys], made it on Joe Rogan, and has now founded spindl, a web3 ad-tech startup, while simultaneously converting to Judaism.)

Chaos Monkeys as Moby-Dick

Chaos Monkeys bears a structural resemblance to Moby-Dick. Narrative passages alternate with lengthy technical discussions about the minutiae of Silicon Valley: one chapter, you’re reading about how AGM flooded Mark Zuckerberg’s office trying to brew beer inside Facebook, while the next chapter is devoted to a discussion of how demand-side advertising platforms work.

The similarities run deeper, though. Venture capital, the funding model that dominates Silicon Valley, was originally developed to fund whaling expeditions in the 1800s (ref, ref). Where venture capitalists once listened to prospective whaling captains advertising the quality of their crews in a New England tavern, today VCs hear pitches from thousands of startups hoping to develop the next killer app (or at least the best ChatGPT frontend) and make millions.

This isn’t a coincidence: whaling expeditions and tech startups are both high-risk, high-reward enterprises that require an incredible amount of skill and hard work, but also a healthy dose of luck. Both operations have returns dominated by outliers, making picking winners much more important than making safe bets, and in both cases the investment remains illiquid for a long time, demanding trust from the investor.

Much like Ishmael in Moby-Dick, AGM’s adventures see him join forces with a motley crew of high-performing misfits from around the globe. And just as Ahab’s quest for the whale is foreshadowed to be a ruinous one, so too does the reader quickly come to understand that AGM’s tenure in Silicon Valley will not, ultimately, end well. A fatalistic haze hangs over the book, coloring his various hijinks with a sense of impending loss.

(And did I mention he lives on a boat?)

Chaos Monkeys as The Great Gatsby

The emotional tone of the book, however, is best compared to that favorite of high-school English classes, The Great Gatsby. AGM, like Nick Carraway, is an outsider in the world of the nouveau riche—opulent parties, high-speed road races, conspicuous consumption—and, over the course of the book, is alternately infatuated with and disgusted by his surroundings. When at last AGM retires to a quiet life on his own Ithaca, the San Juan Islands, it’s with feelings of disillusionment, betrayal, and frustration, not unlike Carraway withdrawing to the Midwest. As AGM writes in the penultimate paragraph of Chaos Monkeys’s acknowledgements:

To Paul Graham, Jessica Livingston, Sam Altman, and the rest of the Y Combinator partners and founders involved in the AdGrok saga. In a Valley world awash with mammoth greed and opportunism masquerading as beneficent innovation, you were the only real loyalty and idealism I ever encountered.

But Chaos Monkeys isn’t solely an indictment of Silicon Valley’s worst excesses. Not unlike The Wire, Chaos Monkeys manages to simultaneously portray the positive and negative aspects of its subject matter, refusing to be reduced to “tech good” or “tech bad.” The panoply of grifters, Dilbert-tier bosses, and Machiavellian sociopaths lambasted by AGM can exist only because of the immense value that their enterprises provide to society—and his faith in the ability of tech to create wonders persists even as his own efforts to do so are undermined.

So the underlying message of Chaos Monkeys, ultimately, is one of hope for tech. If the excesses of tech are worse than those of other industries, it’s only because the underlying field is so much more fertile. The depths of the decadence spawned by Silicon Valley, far from condemning it, bear witness to the immense value it creates. Imitation is the highest form of flattery; grift is the surest sign of productivity.

Overall, Chaos Monkeys is an exhilarating and hilarious read, a gentle introduction to the world of term sheets, product managers, and non-competes, and a book replete with anecdotes sure to fulfill the stereotypes of tech-lovers and tech-haters alike.

Editorial Overreach and Scientific Authority

May 25, 2023

Previously, I wrote about various potential future roles for journals. Several of the scenarios I discussed involved journals taking a much bigger role as editors and custodians of science, using their power to shape the way that science is conducted and exerting control over the scientific process.

I was thus intrigued when, last week, The Journal of Chemical Information and Modeling (JCIM; an ACS journal) released a set of guidelines for molecular dynamics simulations that future publications must comply with. These guidelines provoked a reaction from the community: various provisions (like the requirement that all simulations be performed in triplicate) were alleged to be arbitrary or unscientific, and the fact that these standards were imposed by editors rather than arrived at by community consensus also drew criticism.

The authors say that the editorial “is *not* intended to instruct on how to run MD”, but this defense rings hollow to me. See, for instance, the section about choosing force fields:

JCIM will not accept simulations with old force field versions unless a clear justification is provided. Specialized force fields should be used when available (e.g., for intrinsically disordered proteins). In the case of the reparametrization or development of new parameters compatible with a given force field, please provide benchmark data to support the actual need for reparameterization, proper validation of novel parameters against experimental or high-level QM data…

I’m not a molecular dynamics expert, so I’m happy to stay out of the scientific side of things (although the editorial’s claim that “MD simulations are not suitable to sample events occurring between free energy barriers” seems clearly false for sufficiently low-barrier processes). Nor do I wish to overstate the size of the community’s reaction: a few people complaining on Twitter doesn’t really matter.

Rather, I want to use this vignette to reflect on the nature of scientific authority, and return to a piece I’ve cited before: Geoff Anders’ “The Transformations of Science.” Anders describes how the enterprise of science, initially intended to be free from authority, has evolved into a magisterium of knowledge that governments, corporations, and laypeople rely upon:

The original ideal of nullius in verba sometimes leads people to say that science is a never-ending exploration, never certain, and hence antithetical to claims on the basis of authority. This emphasizes one aspect of science, and indeed in theory any part of the scientific corpus could be overturned by further observations.

There is, however, another part of science—settled science. Settled science is safe to rely on, at least for now. Calling it into question should not be at the top of our priorities, and grant committees, for example, should typically not give money to researchers who want to question it again.

While each of these forms of science is fine on its own, they ought not to be conflated:

When scientists are meant to be authoritative, they’re supposed to know the answer. When they’re exploring, it’s okay if they don’t. Hence, encouraging scientists to reach authoritative conclusions prematurely may undermine their ability to explore—thereby yielding scientific slowdown. Such a dynamic may be difficult to detect, since the people who are supposed to detect it might themselves be wrapped up in a premature authoritative consensus.

This is tough, because scientists like settled science. We write grant applications describing how our research will bring clarity to long-misunderstood areas of reality, and fancy ourselves explorers of unknown intellectual realms. How disappointing, then, that so often science can only be relied upon when it settles, long after the original discoveries have been made! An intriguing experimental result might provoke further study, but it’s still “in beta” (to borrow the software expression) for years or decades, possibly even forever.

Applying the settled/unsettled framework of science to the JCIM question brings some clarity. I don’t think anyone would complain about settled science being used in editorial guidelines: I wouldn’t want to open JACS and read a paper that questioned the existence of electrons, any more than I want The Economist to publish articles suggesting that Switzerland is an elaborate hoax.

Scientific areas of active inquiry, however, are a different matter. Molecular dynamics might be a decades-old field, but the very existence of journals like JCIM and JCTC points to its unsettled nature—and AlphaFold2, discussed in the editorial, is barely older than my toddler. There is a whole host of people trying to figure out how to run the best MD simulations, and editors giving them additional guidelines is unlikely to accelerate this process. (This is separate from mandating that authors report what they actually did, which is fair for a journal to require.)

Scientists, especially editors confronted with an unending torrent of low-quality work, want to combat bad science. This is a good instinct. And I’m sympathetic to the idea that journals need to become something more than a neutral forum in the Internet age—the editorial aspect of journals, at present, seems underutilized. But prematurely trying to dictate rules for exploring the frontier of human knowledge is, in my opinion, the wrong way to do this. What if the rules are wrong?

There may be a time when it’s prudent for editors to make controversial or unpopular decisions: demanding pre-registration in psychology, for instance, or mandating external validation of a new synthetic method. But I’m not sure that “how many replicates MD simulations need” is the hill I would choose to die on. In an age of declining journal relevance, wise editorial decisions might be able to set journals apart from the “anarchic preprint lake”—but poor decisions may simply hasten their decline into irrelevance.