Recently, Kelsey Piper shared that o3 (at time of writing, one of the latest reasoning models from OpenAI) could guess where outdoor images were taken with almost perfect accuracy. Scott Alexander and others have since verified this claim.
I’ve been playing around with this too: with the prompt linked in Scott’s post, o3 can guess where my photos were taken almost every time, even when I intentionally avoid anything that looks like it might be too helpful. After inspecting the reasoning, I was surprised to learn that o3 can almost always estimate the latitude to within a few degrees, which vastly restricts the range of potential answers.
I didn’t think this was possible before doing the research for this post. Here are three recent examples—see if you can estimate the latitude yourself. (You may want to open the images in a new tab to zoom in.)
o3 guessed that these were 40–45º N, 25–28º N, and 34–36º S; in every case the answer was within that range. (I make sure to only give o3 screenshots, so it can’t access EXIF data or otherwise cheat at this assessment.)
How is this possible? Here’s my best understanding of what o3 is doing, informed by a bunch of back-and-forth conversations with a friendly neighborhood AI model. (I’ll be assuming that the photo also has compass information, in keeping with standard GeoGuessr rules.)
Let’s make this as simple as possible to start. On the spring equinox, at noon, at the equator, the sun is directly overhead. As a result, tall objects without any overhang won’t cast any shadows.
If you’re not on the equator, then objects will still cast a shadow, and the length of the shadow S relative to the object’s height H tells you what latitude you’re at. Formally, we can define the solar elevation θ := arctan(H/S), and approximate the latitude φ as 90º − θ.
I don’t do much mental trigonometry these days, but it’s pretty easy to make a table with some key values:
S/H | θ (Solar Elevation) | φ (Latitude) |
---|---|---|
2:1 | 26º | 64º |
1.3:1 | 37º | 53º |
1:1 | 45º | 45º |
0.7:1 | 55º | 35º |
0.5:1 | 63º | 27º |
0 (no shadow) | 90º | 0º |
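As a sanity check on the table, here’s a minimal Python sketch (my own, not part of any GeoGuessr workflow) that reproduces these values from the noon-equinox approximation θ = arctan(H/S), φ ≈ 90º − θ:

```python
import numpy as np

# Shadow-to-height ratios from the table above
ratios = [2.0, 1.3, 1.0, 0.7, 0.5]

for s_over_h in ratios:
    theta = np.degrees(np.arctan(1.0 / s_over_h))  # solar elevation, θ = arctan(H/S)
    phi = 90.0 - theta                             # equinox-noon latitude estimate
    print(f"S/H = {s_over_h}:1  ->  θ ≈ {theta:4.1f}°,  φ ≈ {phi:4.1f}°")
```

The printed values match the table above to within rounding.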
With a compass, figuring out which hemisphere you’re in is easy. In the northern hemisphere, the sun is south of you, so the shadows point north; in the southern hemisphere, the shadows point south.
Unfortunately, it’s more complex than this—we’re not always on the equinox, meaning that we also have to account for solar declination (δ). The solar declination reflects how far away from the equator the sun is on any given date; δ is +23.4º on the summer solstice and -23.4º on the winter solstice. We have to adjust the above formula to take this into account: now φ ≈ 90º − θ + δ.
Qualitatively, this means that shadows are shorter in summer than in winter. An S/H ratio of 1:1 implies a latitude of 22º in winter or 68º in summer, which is the difference between Cuba and Iceland. In practice, though, o3 can often figure out the season from how the trees look and how people are dressed.
When we move away from noon, things get a bit more complicated. We have to employ h, the hour angle, which is 0º at local noon (when the sun crosses the local meridian and is at its highest point) and grows by 15º for every hour away from noon. Here’s the full equation:
sin(θ) = sin(φ)*sin(δ) + cos(φ)*cos(δ)*cos(h)
(o3 says “It’s just the spherical-law-of-cosines applied to the right-angled triangle on the celestial sphere.” Trivial! If you’re curious how we go from this to the simplified noon equation above, see Appendix A.)
This is a bit too involved for a GeoGuessr game—even o3 solves this in a Python environment. Qualitatively, though, this means that as we move away from noon and cos(h) becomes smaller, the solar elevation θ shrinks. Within an hour or two of noon, we’re only off by 1–2º, but after three hours we’re overestimating the latitude by 7–10º.
This seems bad, but with a compass it’s relatively easy to check how far from noon it is. Shadows point exactly north–south at local noon, and point increasingly east or west as the hour angle increases, so looking at the shadow orientation can tell you how much to update the latitude. In practice o3 recommends just ignoring shadow-related math after 3:00 PM or before 9:00 AM, since the error becomes too high.
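Here’s a rough sketch of the kind of calculation o3 presumably runs in its Python environment (my reconstruction, not o3’s actual code): given an estimated solar elevation θ from the shadows, a declination δ from the date, and an hour angle h from the shadow direction, scan candidate latitudes for the one that best satisfies the full equation.

```python
import numpy as np

def estimate_latitude(theta_deg, delta_deg, hour_angle_deg):
    """Brute-force the latitude (degrees) that best satisfies
    sin(θ) = sin(φ)·sin(δ) + cos(φ)·cos(δ)·cos(h).
    The hemisphere comes from the compass, so we only scan 0–90º."""
    theta, delta, h = np.deg2rad([theta_deg, delta_deg, hour_angle_deg])
    phi = np.deg2rad(np.linspace(0, 90, 9001))  # candidate latitudes, 0.01º spacing
    rhs = np.sin(phi) * np.sin(delta) + np.cos(phi) * np.cos(delta) * np.cos(h)
    return np.rad2deg(phi[np.argmin(np.abs(np.sin(theta) - rhs))])

# 1:1 shadows (θ ≈ 45º) on the equinox, one hour off local noon (h = 15º):
print(estimate_latitude(45, 0, 15))   # ≈ 42.9; the noon-only estimate would give 45
```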
Here’s another recent GeoGuessr picture. Can we solve this now?
Here’s my attempt to apply what we’ve learned: we’re looking east and we can see that shadows are pointing north, so the sun is south. This means that we’re in the northern hemisphere. The shadow–height ratio is a bit tough to estimate from this picture; based on the Prius, maybe 0.5:1. So that gives us an equinox latitude of 27º N, minus say 5º for time of day (since the shadows seem angled), which leaves us with a latitude of… 22º N ± 23º depending on the season. Not terribly helpful.
I gave o3 the same image, and it told me the latitude was 12–15º N. The correct answer is 11.5º N (Phnom Penh).
So what went wrong? I asked o3 to critique my reasoning, and this is what it told me (lightly edited for clarity):
The last point deserves further discussion. The impact of solar declination on solar elevation is modulated both by the latitude and the hour angle—the 23.4º swing is scaled by cos(h) and by the actual latitude. With some Python code (Appendix B), we can quickly confirm that the effect is smaller at near-equatorial latitudes:
Overall, though, there’s nothing here we haven’t already discussed; o3 just understands this material better than me and can do the math properly.
This is a fun exercise for building AI-related intuition. o3 is very good at this and does something that appears superhuman. Upon close inspection the reasoning is legible, but I can’t follow the same methods myself with any real precision; I’m just not able to do each step accurately enough. I’m hoping to build up my skills over time, since this would be a pretty fun party trick.
Appendix A. The full equation is:
sin(θ) = sin(φ)*sin(δ) + cos(φ)*cos(δ)*cos(h)
If we assume that we’re at local noon, cos(h) = 1. This lets us apply the following identity:
cos(α−β) = cos(α)*cos(β) + sin(α)*sin(β)
To get:
sin(θ) = cos(φ - δ)
sin(θ) = sin(90º - φ + δ)
θ = 90º - φ + δ
Which is the simplified equation I presented above.
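As a quick numerical spot-check of this derivation, plugging h = 0 into the full equation should reproduce the simplified noon formula. Here’s a short verification in Python (the values of φ and δ are arbitrary test inputs):

```python
import numpy as np

phi, delta = np.deg2rad(40), np.deg2rad(10)   # arbitrary test latitude and declination

theta_full = np.arcsin(np.sin(phi) * np.sin(delta)
                       + np.cos(phi) * np.cos(delta) * np.cos(0.0))  # h = 0 (local noon)
theta_noon = np.deg2rad(90) - phi + delta                            # θ = 90º − φ + δ

print(np.rad2deg(theta_full), np.rad2deg(theta_noon))                # both ≈ 60.0
```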
Appendix B. Here's the Python code to generate the above plot.
```python
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline
%config InlineBackend.figure_format = 'retina'


def solar_elevation_vs_month(hour_local: float, latitude_deg: float) -> np.ndarray:
    """
    Return an array of solar-elevation angles (degrees) for each day
    at a given true-solar hour and latitude.
    """
    lat = np.deg2rad(latitude_deg)
    doy = np.arange(365)

    # Approximate solar-declination model (±23.44° sine fit)
    delta = np.arcsin(np.sin(np.deg2rad(23.44)) * np.sin(2 * np.pi / 365 * (doy - 80)))

    # Hour angle: 0° at solar noon; +15° per hour in the afternoon
    H = np.deg2rad((hour_local - 12.0) * 15.0)

    # Solar-elevation formula
    h = np.arcsin(np.sin(lat) * np.sin(delta) + np.cos(lat) * np.cos(delta) * np.cos(H))
    return np.rad2deg(h)


if __name__ == "__main__":
    plt.figure(figsize=(6, 4))
    months = np.array([15, 46, 75, 105, 135, 162, 198, 228, 258, 288, 318, 344])

    for h, l in [(15, 0), (15, 15), (15, 30), (15, 45), (15, 60)]:
        elevations = solar_elevation_vs_month(h, l)
        plt.plot(np.arange(365), elevations, label=f"{h:.1f} h, {l:.1f}° N")

    plt.xticks(months, ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                        'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
    plt.ylabel("Solar elevation (°)")
    plt.xlim(0, 365)
    plt.ylim(0, 60)
    plt.legend()
    plt.tight_layout()
    plt.show()
```
I did not enjoy John Mark Comer’s book The Ruthless Elimination of Hurry.
Comer’s book is written to people trying to find meaning in a world that feels rushed, distracted, and isolated. At the time of writing, Comer was a Protestant pastor in Portland, Oregon, but he’s since stepped back to focus on creating resources for spiritual formation (like this book).
The book takes its title from a quote by Christian philosopher Dallas Willard:
Hurry is the great enemy of spiritual life in our day. You must ruthlessly eliminate hurry from your life.
In the book, Comer argues that hurry is incompatible with love (p. 23), patience (p. 23), joy (p. 25), peace (p. 25), wisdom (p. 52), and gratitude (p. 52). Hurry is a sign that we aren’t accepting our God-given limitations (p. 65), and hurrying causes irritability, restlessness, distraction, and isolation (pp. 27, 58, 89).
The base state of man before the modern era, Comer argues, was unhurried (pp. 42–45). God rests in Genesis 1, and Jesus himself never hurried. The cure for our modern malaise, then, is to embrace a life marked by slowness, solitude, simplicity, prayer, and rest. Comer advocates taking a literal 24-hour sabbath, rejecting consumerism, and trying to find a “perpetual Zen-like state of Jesus-derived joy and peace” (p. 251).
Much of what Comer says is good. But the central argument of his book, that hurrying should be eliminated, doesn’t seem defensible to me. Comer makes very few direct arguments that hurrying itself is bad, instead using hurrying as a metonym for a vaguely defined bucket of negative modern traits: isolation, anxiety, fear, and so on.
This is a classic example of a motte-and-bailey argument, popularized by Scott Alexander on Slate Star Codex. In his post “All In All, Another Brick In The Motte,” he defines a motte-and-bailey argument as follows:
So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you retreat to an obvious, uncontroversial statement, and say that was what you meant all along, so you’re clearly right and they’re silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.
Comer does exactly this. “You must ruthlessly eliminate hurrying” is a bold, controversial statement—but the statement he actually defends is more like “anxiety and isolation are bad,” which doesn’t have quite the same transformative implications for modern Christian living.
I’ll go a step further and try to defend the assertion that hurrying can be good, actually. Here’s Genesis 18:1–8 (ESV), when Abraham receives God at Mamre (emphasis added):
And the Lord appeared to Abraham by the oaks of Mamre, as he sat at the door of his tent in the heat of the day. He lifted up his eyes and looked, and behold, three men were standing in front of him. When he saw them, he ran from the tent door to meet them and bowed himself to the earth and said, “O Lord, if I have found favor in your sight, do not pass by your servant. Let a little water be brought, and wash your feet, and rest yourselves under the tree, while I bring a morsel of bread, that you may refresh yourselves, and after that you may pass on—since you have come to your servant.” So they said, “Do as you have said.” And Abraham went quickly into the tent to Sarah and said, “Quick! Three seahs of fine flour! Knead it, and make cakes.” And Abraham ran to the herd and took a calf, tender and good, and gave it to a young man, who prepared it quickly. Then he took curds and milk and the calf that he had prepared, and set it before them. And he stood by them under the tree while they ate.
In the above passage, Abraham hurries and tells others to hurry. I think it’s pretty clear from context that Abraham’s behavior here is correct hospitality, not sinful, since God immediately blesses Abraham (v. 10) and says he’s been chosen “to keep the way of the Lord by doing righteousness and justice” (v. 19).
Here are a few other passages defending the practice of hurrying, which I will summarize for brevity’s sake:
These points may seem obvious—clearly, if Amalekite raiders carried off your family, you would hurry after them! But even a trivial example like this demonstrates that hurrying is not intrinsically opposed to the will of God.
Comer also argues that Jesus himself never hurried (p. 92). This is debatable—Mark uses the word “immediately” to describe many of Jesus’s actions, but that may reflect Markan narrative style more than the actual pace of events. Jesus himself commands Zacchaeus to hurry (Luke 19:5), and his anger in cleansing the temple and his agony at Gethsemane should be sufficient to dispel the idea that Jesus perpetually existed in a “Zen-like” (p. 251) stoic state. Stress is not incompatible with sanctification.
Comer further argues that we’re called to walk with God, not run with him (p. 23). This is quite literally false! In 1 Corinthians 9:24–27, Paul uses the metaphor of running to convey the discipline, self-control, and perseverance required for mature Christian living:
Do you not know that in a race all the runners run, but only one receives the prize? So run that you may obtain it. Every athlete exercises self-control in all things. They do it to receive a perishable wreath, but we an imperishable. So I do not run aimlessly; I do not box as one beating the air. But I discipline my body and keep it under control, lest after preaching to others I myself should be disqualified.
Equating maturity with a restful, non-stressed life sets Christians up for disappointment. Hebrews 11 specifically highlights the heroes “of whom the world was not worthy” who “went about in skins of sheep and goats, destitute, afflicted, mistreated” (vv. 37–38). A stressful life doesn’t mean, as Comer argues, that “something is out of whack” (p. 85). God’s rest will come, but it might not come today.
The conclusion here is not that we should hurry more. Most people hurry for bad reasons, stuck chasing selfish desires or climbing social ladders in pursuit of an elusive fulfillment. But the call of Christianity is not to abnegate these bad desires but to align them with God’s will. As René Girard says in I Saw Satan Fall Like Lightning (p. 13, emphasis original):
What Jesus invites us to imitate is his own desire, the spirit that directs towards the goal on which his intention is fixed: to resemble God the Father as much as possible.
What we are willing to hurry for reflects what we care about. We see this reflected in the passages above: Abraham hurries to show hospitality and welcome God into his life, David hurries to save the captives, and Mary & Philip hurry to share the good news of the gospel. We should care enough about these things that we’re willing to hurry for them.
Comer’s call to a life of virtue, peace, and rest is excellent—and at the margin, he’s probably right that most people should hurry less in their lives. But the central claim of the book is just not correct. The vision of Christian maturity contained in The Ruthless Elimination of Hurry seems a little too close to California-style Zen Buddhism and other modern mystical practices to fully align with Scripture, and I think this is bad.
Thanks to Taylor Wagen, Tony Robinson, Elias Mann, Jonathon Vandezande, and Chloe Wagen for helpful discussions, and to Ari Wagen for originally prompting me to read Girard.
In scientific computation, where I principally reside these days, there’s a cultural and philosophical divide between physics-based and machine-learning-based approaches.
Physics-based approaches to scientific problems typically start with defining a simplified model or theory for the system. For instance, a scientist attempting to simulate protein dynamics might start by defining how the energy of a system depends on its coordinates by choosing (or creating) a forcefield. Once a model’s been selected, numerical simulations are run using the model and the output of these simulations is analyzed to produce a prediction.
Physical simulations like these are extensively used today in climate modeling, orbital mechanics, computational fluid dynamics, and computational chemistry. These approaches are typically robust and relatively easy to interpret, since the underlying theory is perfectly known. Still, it’s often possible to get very complex behavior from a small set of underlying rules. Application of Newtonian gravitation, for instance, gives rise to the chaotic “three-body problem.” This combination of simple physical rules and complex emergent behavior is what makes physical simulations so powerful: starting from a well-understood model, it’s possible to gain non-trivial insights about complex systems.
ML-based approaches to scientific computation turn all of this upside down. Many ML models directly map input to output without any intermediate subtasks, making it difficult to interpret what’s going on. While physics-based methods can often extrapolate from simple test cases to complex emergent behaviors, ML-based methods frequently struggle to predict behaviors outside of what they’ve seen before. Still, ML-based methods are often far more efficient than physical methods and, since they’re not constrained by a specific theoretical framework, can handle complex phenomena where the underlying functional form is not known.
When these two subcultures collide, things often get nasty. Proponents of physics-based modeling frequently allege that ML is fundamentally unscientific, since science is all about extrapolation, and that the unreliability and uninterpretability of machine learning make it ill-suited for anything except hacking benchmarks.
On the other side of the aisle, machine-learning advocates claim that physics-based modeling is basically a way for scientists to feel smart and look at pretty pictures and will never be able to capture sufficient complexity to be useful. If you think I’m exaggerating, I’m not—here’s Pranam Chatterjee discussing why structure-based models like AlphaFold are “irrelevant” to disease (emphasis added):
…if we believe Anfinsen’s hypothesis: the sequence should encode everything else, including structure. Why do you need to look at the structure of the protein to understand it? I have concluded people just feel more satisfied looking at a protein, rather than trusting an implicit language model to capture its features. Hence the Nobel Prize.
For Pranam, the physical assumptions made by models like AlphaFold—that proteins have a three-dimensional structure that’s relevant to their biology—mean that these models are incapable of describing the complexity of reality. Contra Pranam, the point that I want to make in this piece is that there’s no reason why physics- and ML-based approaches should be so opposed to one another. In fact, I’m becoming convinced that the future looks like some hybrid of both approaches.
Why? One of the big limitations of physics-based methods, at least in drug discovery, is that the underlying models often aren’t expressive enough and can’t easily be made more expressive. Molecular forcefields have only a handful of terms, which means that they can’t easily represent coupled dihedral rotation, π–π stacking, and so on—let alone bond-breaking or bond-forming processes—and even optimizing the empirical parameters for the terms they do have quickly becomes a laborious, hard-to-scale process. In contrast, ML-based methods can scale to millions of empirical parameters or beyond without becoming intractably complex or overfit.
On the other hand, the big advantage of physics-based approaches is their generalizability—even simple rulesets, like cellular automata, can lead to immensely complex emergent behavior. This sort of extrapolation is rarely seen in scientific ML projects. A recent blog post from Pat Walters makes the observation that most cheminformatic models don’t seem capable of extrapolating outside the dynamic range of their training data. This is surprising, since even the simplest of “physical” models (like a linear equation) are capable of this extrapolation.
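To make the interpolation/extrapolation contrast concrete, here’s a toy sketch of my own (not from Walters’ post): a linear fit happily extends beyond its training range, while a nearest-neighbor regressor, standing in for typical cheminformatic models, flatlines at the edge of its training data.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 5, 50)                  # training data covers 0–5 only
y_train = 2.0 * x_train + rng.normal(0, 0.3, 50)

# "Physical" model: a simple linear fit
slope, intercept = np.polyfit(x_train, y_train, 1)

# ML stand-in: 1-nearest-neighbor regression
def knn_predict(x):
    return np.array([y_train[np.argmin(np.abs(x_train - xi))] for xi in np.atleast_1d(x)])

for x in [2.5, 10.0]:                            # inside vs. far outside the training range
    print(f"x = {x}:  linear -> {slope * x + intercept:5.1f},  1-NN -> {knn_predict(x)[0]:5.1f}")
```

Inside the training range both models agree; at x = 10 the linear model keeps tracking the underlying trend while the nearest-neighbor model is pinned to the largest training value it has seen.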
If a hybrid approach is to work, it must be able to capture the strengths of both methods. One of the simplest ways to do this is an energy-based approach, where scientists don’t predict an output parameter directly but instead learn an energy function which can be used to indirectly predict an output. (Yann LeCun has advocated for this.) This adds some test-time complexity, since additional computation is needed after inference to generate the final output, but also constrains the model to obey certain physical laws. (If you want to predict forces, for instance, it’s often best to do so by differentiating the predictions of an energy model, since this guarantees that the outputs form a valid gradient field.)
Physics-informed ML approaches can also resolve the interpolation/extrapolation problem. Since physical equations are well-suited to extrapolation, applying a physical model to the output of an ML layer can convert interpolated intermediate states to extrapolated final results. This sort of extrapolation is well-documented in neural network potentials—for instance, ANI-1 was trained only on molecules with 8 or fewer heavy atoms but proved capable of extrapolating to significantly larger molecules with good accuracy.
One fair criticism of these hybrid physics–ML approaches is that they often require much more test-time compute than pure-ML methods: optimizing a geometry with a neural network potential can require hundreds of intermediate gradient calculations, while in theory an ML method could predict the correct geometry with a single step. But in practice I think this is an advantage. Scaling test-time compute has been an incredible boon for the LLM space, and chain-of-thought reasoning models like OpenAI’s o3 and DeepSeek-r1 are now the gold standard. To the extent that physics-informed ML methods give us the ability to spend more compute to get better answers, I suspect this will mostly be good for the field.
At Rowan, we’ve recently published a few papers in this area—Starling, an energy-based model for predicting microstate distributions and macroscopic pKa, and Egret-1, a family of neural network potentials for bioorganic simulation—and I expect we’ll keep working on hybrid physics–ML approaches in the future. Others seem to be moving in this direction too. The recent Boltz-1x model (now on Rowan!) from Gabri Corso and co-workers incorporates inference-time steering for superior physical accuracy, and Achira’s written about the merits of “a third way” in simulation built around “harmonization of the theoretical and the pragmatic.”
There’s immense room for creativity at the intersection of physics and machine learning. Virtually any part of a complex physics-based workflow can potentially be replaced with a bespoke ML model, leading to almost limitless combinatorial possibilities for experimentation. Which combinations of models, algorithms, and architectures will prove to be dead-ends, and which will unlock order-of-magnitude improvements in performance and accuracy? It’s a fun time to be working in this field.
I asked my co-founder Eli what he thought of this piece. He pointed out that the amount of physics one should incorporate into an ML model depends on the amount of data and the dimensionality of the problem. I asked him to explain more, and here’s what he wrote:
In ML terminology, incorporating information about the modality on which you are predicting into the design of the model is called adding inductive bias. One example of this for atomistic simulation is enforcing Euclidean equivariance into a model. This explicitly ensures that any spatial transformation to a model’s input is reflected in its output. This does ensure that physical laws are followed but also increases the computational complexity of the model, limiting system size at inference time and inference speed.
Some neural network potentials like Equiformer enforce equivariance in every operation within the model to ensure physical laws are always followed, while others enforce it only for inputs and outputs, relaxing constraints for intermediate model operations. Models like Orb-v3 don’t enforce equivariance at all, but incorporate dataset augmentation and unsupervised pre-training techniques to improve sample efficiency and learn physical laws.
As dataset size and diversity increase, we may see less of a need for physical inductive bias in models. One place we’ve seen this is with convolutional neural networks (CNNs) and vision transformers (ViTs) in computer vision: CNNs have built-in translational invariance whereas ViTs do not include any spatial inductive bias but are theoretically more flexible. We see ViTs outperforming CNNs only as dataset size and model size pass a certain threshold. If we assume that a similar pattern will hold for atomistic simulation, we might expect that the dataset size at which this threshold occurs will be greater, as molecular simulation has more degrees of freedom than computer vision.
I think this is a good point—thanks Eli! More abstractly, we might expect that incorporating explicit physics will help more in the low-data regime, while pure ML approaches will be more effective as data and compute approach infinity.
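To make the equivariance discussion concrete, here’s a toy numpy check of my own (not Eli’s): an energy that depends only on interatomic distances is automatically invariant to rotations, and its forces rotate along with the input coordinates, which is the property equivariant architectures guarantee by construction.

```python
import numpy as np

def energy(coords):
    """Toy pairwise energy depending only on interatomic distances."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(coords), k=1)
    return np.sum(1.0 / r[iu])      # repulsive 1/r pair potential

def forces(coords, eps=1e-5):
    """Numerical forces, F = −∂E/∂x (central differences)."""
    f = np.zeros_like(coords)
    for i in range(coords.shape[0]):
        for j in range(3):
            plus, minus = coords.copy(), coords.copy()
            plus[i, j] += eps
            minus[i, j] -= eps
            f[i, j] = -(energy(plus) - energy(minus)) / (2 * eps)
    return f

rng = np.random.default_rng(0)
coords = rng.normal(size=(4, 3))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal transformation

print(np.isclose(energy(coords), energy(coords @ R.T)))                     # invariance: E(Rx) = E(x)
print(np.allclose(forces(coords) @ R.T, forces(coords @ R.T), atol=1e-6))   # equivariance: F(Rx) = R·F(x)
```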
Thanks to Ari Wagen, Jonathon Vandezande, and Eli Mann for reading drafts of this post.
With apologies to Andrew Marvell. If you haven’t read “To His Coy Mistress”, I fear this won’t make much sense.
Had we but funding enough and time,
This coyness, founder, were no crime.
We would sit down, and think which way
To build, and pass our slow run-rate’s day.
Thou by the Potomac’s side
Shouldst SBIRs find; I by the tide
Of El Segundo would complain. I may
Fund you ten years before the first customers pay,
And you should, if you please, refuse
Till the market supports dual-use.
Your annual revenue should grow
Vaster than Boeing’s and more slow;
An hundred years should go to build
Thy initial demo, until the warehouse filled;
Two hundred years for full refactoring,
But thirty thousand for manufacturing;
An age at least for every ambition,
And the last age to show your full vision.
For, founder, you deserve funding most,
Nor would I invest at a lower post.
But at my back I always hear
Palmer’s wingèd chariot hurrying near;
And yonder all before us lie
Investors dreaming only of exits and AI.
Thy product shall no more be found;
Nor, in thy bankrupt state, shall sound
My echoing retweets; then competition shall try
That long-preserved TAM and KPI,
And your quaint slide deck fall to pieces,
And into ashes all my investment theses;
Raytheon’s a fine and funded prime,
But none, I think, work there overtime.
Now therefore, while the youthful hue
Sits on thy skin like morning dew,
And while thy willing employees perspire
At every hour of night with resolute fire,
Now let us push to master while we may,
And now, like YC back in the day
All at once our runway devour
Than languish awaiting pricing power.
Let us roll all our strength and all
Our funding up into one ball,
And push our MVP with rough excess
Through the government procurement process:
Thus, though we cannot make the buyer
Stand still, yet we will make Anduril acqu-hire.
In 2007, John Van Drie wrote a perspective on what the next two decades of progress in computer-assisted drug design (CADD) might entail. Ash Jogalekar recently looked back at this list, and rated the progress towards each of Van Drie’s goals on a scale from one to ten. There’s a lot in Jogalekar’s piece that’s interesting and worth discussing, but I was particularly intrigued by the sixth item on the list (emphasis added):
Outlook 6: today’s sophisticated CADD tools only in the hands of experts will be on the desktops of medicinal chemists tomorrow. The technology will disperse
Twenty-five years ago, modelers worked with million-dollar room-sized computers with 3D display systems half the size of a refrigerator. Today, the computer which sits on my lap is far more powerful, both in computation speed and in 3D display capabilities. Twenty-five years ago, the software running on those computers was arcane, with incomprehensible user interfaces; much of the function of modelers in those days was to serve as a user-friendly interface to that software, and their assistance was often duly noted in manuscripts, if not as a co-author then as a footnote. Today, scientists of all backgrounds routinely festoon their publications with the output of molecular graphics software, running on their desktop/laptop machines with slick easy-to-use graphical user interfaces, e.g. Pymol.
This is a trend that will accelerate. Things that seem sophisticated and difficult-to-use, but are truly useful, will in 20 years be routinely available on desktop/laptop machines (and even laptops may be displaced by palmtops, multi-functional cellphones, etc.). Too many modelers are still in the business of being ‘docking slaves’ for their experimental collaborators (i.e. the experimentalist asks the modeler ‘please dock my new idea for a molecule’, and waits for the result to see if it confirms their design); this will ultimately disappear, as that type of routine task will be handled by more sophisticated user interfaces to current docking algorithms, e.g. the software from Molsoft is well on its way to fill such a role. Whereas the ‘information retrieval specialists’ that once populated corporate libraries have disappeared, replaced by desktop Google searches, this trend of modeling-to-the-desktop should not be a source of job insecurity for CADD scientists—this will free us up from the routine ‘docking slave’ tasks to focus our energies on higher-valued-added work. As a rule, things today that seem finicky and fiddly to use (e.g. de novo design software), or things that take large amount of computer resources (e.g. thermodynamic calculations, or a docking run on the full corporate database) are things that one can easily imagine will in the future sit on the desktops of chemists, used by them with minimal intervention by CADD scientists
Jogalekar gives the field a 6/10 on this goal, which I find optimistic. In his words:
From tools like Schrödinger’s Live Design to ChemAxon’s Design Hub, medicinal chemists now use more computational tools than they ever did. Of course, these tools are used in fundamental part because the science has gotten better, leading to better cultural adoption, but the rapidly dwindling cost of both software and hardware enabled by the cloud has played a huge role in making virtual screening and other CADD tools accessible to medicinal chemists.
It’s true that there are more computational tools available to non-computational scientists than there once were—but based on the conversations we’ve had with industry scientists (which also informed this piece), the role of computational chemists as “docking slaves” (Van Drie’s phrase, not mine) to their experimental colleagues still rings true. The number of experimental scientists able to also run non-trivial computational studies remains vanishingly low, despite the improvements in computing hardware and software that Van Drie and Jogalekar discussed.
Why hasn’t our field made more progress here? In my view, there are three principal reasons: immature scientific tools demand expert supervision, poorly designed technology deters casual usage, and cultural inertia slows adoption even further.
Most scientific tools optimize for performance and tunability, not robustness or ease of use. Quantum chemistry software forces users to independently select a density functional, a basis set, any empirical corrections, and (for the brave) allows them to tune dozens of additional parameters with obscure and poorly documented meanings. (“Oh, the default settings for transition states aren’t very good… you need to configure the initial Hessian guess, the integral tolerance, the optimizer step size, and a few other things… I’ll email you a couple scripts.”)
And these issues aren’t unique to quantum chemistry; virtually every area of scientific simulation or modeling has its own highly specialized set of tools, customs, and tricks, so switching fields even as a PhD-level computational chemist is challenging and treacherous. Some of this complexity is inherent to the subject matter—there are lots of unsolved computational problems out there for which no simple solution is yet known. For instance, handling changes in ionization state or tautomerization during free-energy-perturbation (FEP) simulations is (to my knowledge) just intrinsically difficult right now, and no robust solution exists that can be plainly put into code.
But better hardware and better methods can alleviate these issues. Searching through different conformers of a complex molecule used to be a challenging task that demanded chemical expertise and considerable software skill—now, metadynamics programs like CREST make it possible to run conformer searches simply from a set of starting coordinates. These new “mindless” methods are less efficient than the old methods that relied on chemical intuition, but in many cases the simulations are fast enough that we no longer care.
Similarly, the increasing speed of quantum chemistry makes it simpler to run high-accuracy simulations without extensive sanity checks. In my PhD research, I carefully benchmarked different tiny basis sets against high-level coupled cluster calculations to find a method that was fast enough to let me study the reaction dynamics of a catalytic transition state—now, methods like r2SCAN-3c give better accuracy in virtually every case and avoid the dangerous basis-set pathologies I used to worry about, making it possible to use them as a sane default for virtually every project.
Other fields have undergone similar transformations. Writing assembly code, when done right, produces substantially faster and more efficient programs than writing a compiled language like C, and writing C produces faster code than writing a high-level language like Python. But computers are fast enough that writing assembly code is now uncommon. Python is much more forgiving, and makes it possible for all sorts of non-experts (like me) to write useful code that addresses their problems. Back in the days of the PDP-10, every FLOP was precious—but with today’s computers, it’s worth accepting some degree of inefficiency to make our tools quicker to learn, easier to use, and far more robust.
Computational chemistry needs to make the same transition. There will always be cutting-edge computational problems that demand specific expertise, and these problems will invariably remain the rightful domain of experts. But vast improvements in the speed and accuracy of computational chemistry promise to move more and more problems into a post-scarcity regime where maximum efficiency is no longer required and the field’s impact will no longer predominantly be determined by performance.
Once a method becomes robust enough to be routinely used without requiring expert supervision, it’s safe to turn over to the non-experts. I’d argue that this is true of a decent proportion of computational workflows today, and advances in simulation and machine learning promise to make this true for a much greater proportion in the next decade.
Sadly, scientific considerations aren’t all that prevents molecular modeling from being more widely employed. The second underlying reason limiting the reach of computational tools is that most of the tools are, frankly, just not very good software. Scientific software frequently requires users to find and manage their own compute, write scripts to parse their output files and extract the data, and do plenty of needless work in post-processing—in many respects, being a computational chemist means stepping back in time to 1970s-era software.
These difficulties are considerable even for full-time computational chemists; for experimental scientists without coding experience, they’re insurmountable. No medicinal chemist should need to understand `rsync`, `sed`, or `malloc` to do their job! Some of the error messages from computational chemistry software are so obtuse that there are entire web pages devoted to decrypting them:
```
RFO could not converge Lambda in 999 iterations.
Linear search skipped for unknown reason.
Error termination via Lnk1e in /disc30/g98/l103.exe.
Job cpu time: 0 days 7 hours 9 minutes 17.0 seconds.
File lengths (MBytes): RWF= 21 Int= 0 D2E= 0 Chk= 6 Scr= 1
```
Why is so much scientific software so bad? Academic software development prioritizes complexity and proofs of concept, because these are the features that lead to publications. More prosaic considerations like robustness, maintainability, and ease of use are secondary at best, and it’s hard for academic research groups to attract or retain the sort of engineering talent required for the most impactful work in scientific software. In a piece for New Science, Elliot Hirshberg documents the consequences of this situation (emphasis added):
…most life sciences software development happens in academic labs. These labs are led by principal investigators who spend a considerable portion of their effort applying for competitive grants, and the rest of their time teaching and supervising their trainees who carry out the actual research and engineering. Because software development is structured and funded in the same way as basic science, citable peer-reviewed publications are the research outputs that are primarily recognized and rewarded. Operating within this framework, methods developers primarily work on building new standalone tools and writing papers about them, rather than maintaining tools or contributing to existing projects….
This organizational structure for developing methods and software has resulted in a tsunami of unusable tools…. Scientists need to learn how to download and install a large number of executable programs, battle with Python environments, and even compile C programs on their local machine if they want to do anything with their data at all. This makes scientists new to programming throw up their hands in confusion, and seasoned programmers tear their hair out with frustration. There is a reason why there is a long-running joke that half of the challenge of bioinformatics is installing software tools correctly, and the rest is just converting between different file formats.
Frustratingly, relatively few academic scientists seem to view this as a problem. In a thread discussing the lack of graphical user interfaces (GUIs) for scientific software on the Matter Modeling Stack Exchange, a user writes about how GUIs are not just a distraction but actively harmful for scientific software (emphasis added):
[GUI development takes time] that could be spent on other tasks, like developing more functionality in the core program, developing different programs for different tasks, or even doing other things like lab research that has clearer advantages for one’s career… But then, after the GUI has been designed and created, it’s a new source of maintenance burden. That means a program with a GUI will have to have time dedicated to fixing GUI issues for users, especially if an OS (or other system library) update breaks it. That’s time that could be spent on other things more productive to one’s career or research aspirations.
This is a textbook case of misaligned incentives. Researchers who create scientific software aren’t rewarded for making it easy for others to build on or use, only for making it increasingly powerful and complex—as a result, there are hundreds of complex and impossible-to-use scientific software packages floating around on GitHub. Almost all the scientific software projects which defy this trend are commercial or supported by commercial entities: at least from the users’ point of view, the incentives of a for-profit company seem superior to academic incentives here.
Better tools are the solution to the ever-increasing scientific burden of knowledge. Every day, experimental scientists use tools without fully understanding their internal workings—how many chemists today could build a mass spectrometer from scratch, or an HPLC? We accept that experimental tools can be productively used by non-experts who don’t understand their every detail—but when it comes to computational chemistry, we expect every practitioner to build their own toolkit practically from scratch.
This has to change. If we want scientific software to be more widely used, our field needs to find a way to make software that’s as elegant and user-friendly as the software that comes out of Silicon Valley. This can happen through any number of different avenues—improved academic incentives, increased commercial attention, and so on—but without this change, large-scale democratization of simulation will never be possible.
But even with robust methods and well-designed software products, cultural differences between computational and experimental scientists persist. Generations of PhD students have been taught that they’re either “computational” or “experimental,” with the attendant stereotypes and communication barriers that accompany all such dichotomies. In industry, scientists are hired and promoted within a given skillset; while scientists occasionally hop from experiment to computation, it’s rare to meet truly interdisciplinary scientists capable of contributing original research insights in both areas.
Many scientists, both computational and experimental, are happy with this state of affairs. Experimental scientists can avoid having to learn a set of confusing skills and delegate them to a colleague, while maintaining a comfortable skepticism of any computational predictions. Computational scientists, in contrast, get to serve as “wizards” who summon insights from the Platonic realm of the computer.
Some computational scientists even come to take pride in their ability to navigate a confusing web of scripts, tools, and interfaces—it becomes their craft, and a culture to pass along to the next generation. On Stack Exchange, one professor writes in response to a beginner asking about graphical user interfaces:
Trust me: it is better to learn the command line… I began using UNIX when I was 9 years old. It’s time for you to learn it too.
As Abhishaike Mahajan put it in his post about Rowan—“enough”! It doesn’t have to be this way.
Why care about democratizing simulation? We think that putting simulation into the hands of every scientist will enable innovation across the chemical sciences. As of 2025, it seems clear that computation, simulation, and ML will play a big role in the future of drug discovery. But as long as “computation” remains a siloed skillset distinct from the broader activity of drug discovery, the impact that these breakthroughs can have will remain limited by cultural and organizational factors.
If the importance of computer-assisted drug discovery continues to increase but the tools remain unusable by the masses, will computational chemists and biologists simply grow more and more important? Taken to the extreme, one can envision what Alice Maz terms “a priesthood of programmers,” a powerful caste dedicated to interceding between man and computer. Perhaps computational tools will remain inaccessible forever, and those who excel at drug discovery will be those who can best deploy a litany of arcane scripts. Perhaps the future of chemistry will be run by CS majors, and today’s drug hunters will merely be employed to synthesize compounds and run biological assays in service of the new elite.
But one can envision a future in which computational chemistry becomes a tool to aid drug designers, not supplant them. In 2012, Mark Murcko and Pat Walters (distinguished industry scientists both) wrote “Alpha Shock,” a speculative short story about drug discovery in the year 2037. I want to highlight a scene in which Sanjay (the protagonist) uses structure-based drug design to discover a new candidate and avoid paying his rival Dmitri royalties:
With the structures and custom function in hand, Sanjay was ready to initiate the docking study. But despite recent advances in the TIP32P** water model, Sanjay still didn’t completely trust the predicted protein-ligand binding energetics. Next, he transferred the experimental data into the Google Predictive Analytics engine and quickly designed a new empirical function to fit the experimental data. Now he launched the dynamic docking simulator, dropping the empirical function into the hopper... A progress bar appeared in front of him showing “10^30 molecules remaining, 2,704 h 15 min to completion.” Sanjay quickly stopped the process and constrained the search to only those molecules that fell within the applicability domain of his empirical function. This reduced the search to 10^12 molecules and allowed the analysis to complete in a few minutes.
After a bit of visual inspection to confirm the results of his docking study, Sanjay moved on to the next step. He knew that slow binding kinetics could provide a means of lowering the dose for his compound. To check this, he ran a few seconds of real-time MD on each of the top 50,000 hits from the docking study. A quick scan of the results turned up 620 structures that appeared to have the required residence time. Sanjay submitted all these structures to PPKPDS, the Primate Pharmacokinetic and Pharmacodynamic Simulator, a project developed through a collaboration of industry, academia, and the World Drug Approval Agency. Of the compounds submitted, 52 appeared to have the necessary PK profile, including the ability to be actively transported into the brain. All but a few were predicted to be readily synthesizable.
In “Alpha Shock,” a drug designer like Sanjay can leverage interactive, intuitive software to quickly test his hypotheses and move towards important conclusions. Sanjay’s tools serve to augment his own intuition and vastly increase his productivity, yet don’t require him to use bespoke scripts or memorize arcane incantations. To anyone with any experience with computer-assisted drug design, this will read like science fiction—but that is exactly the point. The world of “Alpha Shock” gives us a vision of where we need to go as a field, and highlights where we’re deficient today.
Better instrumentation and analytical tooling have revolutionized chemistry over the past sixty years, and better design & simulation tools can do the same over the next sixty years. But as we’ve seen with NMR and mass spectrometry, enabling technologies must become commonplace tools usable by lots of people, not arcane techniques reserved for a rarefied caste of experts. Only when computational chemistry undergoes the same transition can we fulfill the vision that Van Drie outlined years ago—one in which every bench scientist can employ the predictive tools once reserved for specialists, and in which computers can amplify the ingenuity of expert drug designers instead of attempting to supplant it.
Thanks to Ari Wagen for feedback on drafts of this piece.