Blog


Why New Ventures Are So Important

April 11, 2023

This Easter week, I’ve been thinking about why new ventures are so important. Whether in private industry, where startups generate a disproportionate share of new ideas, or in academia, where new assistant professors are hired each year, newcomers are often the most consistent source of innovation. Why is this?

One explanation is the Arrow replacement effect (named after Kenneth Arrow), which states that “preinvention monopoly power acts as a strong disincentive to further innovation.” Arrow’s argument goes like this: suppose there’s an organization that is earning profit P_old, and there is some innovation that will increase profit to P_new (P_new > P_old). If the existing organization pursues the innovation, its profits will thus increase by ∆P := P_new − P_old. But a new organization will see its profits increase by the full P_new: since the startup has no existing profit to replace, the rewards to innovation are higher. Thus innovation is more appealing for those without any economic stake in the status quo.1
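
To make the asymmetry concrete, here’s the same arithmetic with made-up numbers (illustrative only, not from Arrow’s paper):

```latex
% Illustrative numbers only:
P_{\text{old}} = 80, \qquad P_{\text{new}} = 100
% The incumbent's incentive is just the increment:
\Delta P = P_{\text{new}} - P_{\text{old}} = 20
% The entrant, with no existing profit to cannibalize, captures the whole thing:
P_{\text{new}} = 100
```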

We can see this play out today in the dynamic between Google and OpenAI/Microsoft: Google already has a virtual monopoly in search, and so is hesitant to replace what they have, whereas Microsoft has already been losing in search and so is eager to replace Bing with Sydney, an AI-powered alternative. (It’s to Apple’s credit that they so eagerly pursued the iPhone when it meant effectively destroying the iPod, one of their top money-makers.2)

One can also see this scenario in academia—plenty of established labs have programs built up around studying specific systems, and are thus disincentivized to study areas which might obviate projects they’ve spent decades working on. For instance, labs dedicated to “conventional” synthetic methodology might be slower to turn to biocatalysis than a new assistant professor with nothing to lose; labs that have spent decades studying protein folding might be slower to turn to AlphaFold than they ought to.

Another reason is that new entrants often have an advantage in understanding the status quo. In The Art of Doing Science and Engineering (book review coming, eventually), computing legend Richard Hamming discusses how there’s often a disadvantage to being a pioneer in a field. Hamming’s argument, essentially, is that those who’ve had to invent something new never understand it as intuitively as those who have simply learned to take it for granted:

The reason this happens is that the creators have to fight through so many dark difficulties, and wade through so much misunderstanding and confusion, they cannot see the light as others can, now the door is open and the path made easy…. in time the same will probably be true of you.

In Hamming’s view, it’s the latecomers to a field who can see more clearly the new possibilities opened by various innovations, and take the next steps towards previously unimaginable frontiers. There’s a deep irony in this: the very act of inventing something new makes you less able to see the innovations enabled by your own work. The process of invention thus acts like a relay race, where newer generations continually take the baton and push things forward before in turn dropping back.

I’ve heard these ideas discussed in terms of naïveté before—the idea being that innovation requires a sort of “beginner’s luck,” a blind optimism about what’s possible that the experienced lack—but I think that’s wrong. A belief in naïveté as the key driver of innovation implies that excessive knowledge is detrimental: that it’s possible to “know too much” and cripple oneself. If anything, the opposite is true in my experience. The most creative and productive people I’ve met are those with an utter mastery of the knowledge in their domain.

Hamming’s proposal, which is more cognitive/subconscious, is thus complementary to the more calculated logic of the Arrow replacement effect: existing organizations are both less incentivized to innovate and less able to see potential innovations. These ideas should be encouraging to anyone at the beginning of their career: you are uniquely poised to discover and exploit new opportunities! So consider this an exhortation to go out and do so now (rather than waiting until you are older and more secure in your field).

Credit to Alex Tabarrok for introducing me to the Arrow replacement effect, and ChatGPT for some edits.

Footnotes

  1. This is a bit of a cartoonish depiction of the Arrow replacement effect—the original paper (linked above) is quite readable, and performs a more sophisticated analysis. See the heading “Competition, Monopoly, and the Incentive to Innovate” on page 12 of the PDF (journal page 619).
  2. Tony Fadell discusses this in Build: suffice it to say this was not an internally popular decision at Apple.

Industry Research Seems Underrated

April 3, 2023

While scientific companies frequently publish their research in academic journals, it seems broadly true that publication is not incentivized for companies the same way it is for academic groups. Professors need publications to get tenure, graduate students need publications to graduate, postdocs need publications to get jobs, and research groups need publications to win grants. So the incentives of everyone in the academic system are aligned towards publishing papers, and lots of papers get published.

In contrast, the success or failure of a private company is—to a first approximation—unrelated to its publication record. Indeed, publication might even be harmful for companies, insofar as time spent preparing manuscripts and acquiring data only needed for publication is time that could be spent on more mission-critical activities.

That’s why I generally believe industry publications, especially those without any academic co-authors, are underrated, and are probably better than the journal they’re in might indicate. Getting a publication into a prestigious journal like Science or Nature is pretty random, requires a lot of effort, and frequently has a slow turnaround time, whereas lower-tier journals are more likely to accept your work and typically review and publish papers much, much faster. (In particular, ACS is among the fastest of all scientific publishers, and is generally a pleasure to work with.)

The above reflections were prompted by reading an absolute gem of a paper in J. Med. Chem., a collaboration between X-Chem, ZebiAI, and Google Research. The paper is entitled “Machine Learning on DNA-Encoded Libraries: A New Paradigm for Hit Finding” and describes how data from DNA-encoded libraries (DELs) can be used to train ML models to predict commercially available compounds with activity against a given target. This is a really, really big deal. As the authors put it in their conclusion:

[Our approach] avoids the time-consuming and expensive process of building new chemical matter into a DEL library and performing new selections or incorporating new molecules into a HTS screening library. This ability to consider compounds outside of the DEL is the biggest advantage of our approach; notably, this approach can be used at a fraction of the cost of a traditional DEL screening follow-up, driven primarily by the large difference in synthesis cost.

Now, the precise impact of this discovery will of course be determined in the years to come. Derek Lowe raises some fair concerns on his blog, pointing out that the targets chosen are relatively easy to drug, and so probably wouldn’t be the subject of a high-tech DEL screen anyway; it’s also entirely possible that this technology has unforeseen complications that will only be revealed in the context of a real-world discovery pipeline. (Given that Relay acquired ZebiAI for $85M in 2021 essentially on the strength of this paper alone, I’m guessing plenty of real-world testing is already underway.)

The point I want to make is that if this paper had come from an academic group, I would be very, very surprised to see it in J. Med. Chem. This project has everything one expects in a Science paper: a flashy new piece of technology, a problem that’s understandable to a broad audience, clear clinical relevance, even a domain arbitrage angle. Yet this paper is not in Science, nor ACS Central Science, nor even JACS, but in J. Med. Chem., a journal I don’t even read regularly.

My conclusions from this are (1) to remember that not everyone is incentivized to market their own findings as strongly as academics are, and (2) to look out for less-hyped industry results that I might otherwise neglect.

You Should Read The Literature More

March 29, 2023

If you are a scientist, odds are you should be reading the literature more. This might not be true in every case—one can certainly imagine someone who reads the literature too much and never does any actual work—but as a heuristic, my experience has been that most people would benefit from reading more than they do, and often much more. Despite the somewhat aggressive title, my hope in writing this is to encourage you to read the literature more: to make you excited about reading the literature, not to guilt you into it or provoke anxiety.

Why You Should Read The Literature More

You should read the literature because you are a scientist, and your business is ideas. The literature is the vast, messy, primal chaos that contains centuries of scientific ideas. If you are an ideas worker, this is your raw material—this is what you work with. Not reading the literature as an ideas worker is like not going to new restaurants as a chef, or not looking at other people’s art as an artist, or not listening to music as a composer. Maybe the rare person has an internal creativity so deep that they don’t need any external sources of inspiration, but I’m not sure I know anyone like that.

If you buy the concept of “domain arbitrage” I outlined last week, then reading the literature becomes doubly important for up-and-coming arbitrageurs. Not only do you need to stay on top of research in your own field, but you also need to keep an eye on other fields, to look for unexpected connections. It was only after months of reading physical chemistry papers about various IR spectroscopy techniques, with no direct goal in mind, that I realized I could use in situ IR to pin down the structure of ethereal HCl; simply reading organic chemistry papers would not have given me that insight.

How You Can Read The Literature More

If you don’t read the literature at all—like me, when I started undergrad—then you should start small. I usually recommend JACS to chemists. Just try to read every paper in your subfield in JACS for a few months; I began by trying to read every organic chemistry paper in JACS. At the beginning, probably only 10–20% will make sense. But if you push through and keep trying to make sense of things, eventually it will get easier. You’ll start to see the same experiments repeated, understand the structure of different types of papers, and even recognize certain turns of phrase. (This happened to me after about a year and a half of reading the literature.)

Reading more papers makes you a faster reader. Here’s Tyler Cowen on how he reads so quickly (not papers specifically, but still applicable):

The best way to read quickly is to read lots. And lots. And to have started a long time ago. Then maybe you know what is coming in the current book. Reading quickly is often, in a margin-relevant way, close to not reading much at all.

Note that when you add up the time costs of reading lots, quick readers don’t consume information as efficiently as you might think. They’ve chosen a path with high upfront costs and low marginal costs. "It took me 44 years to read this book" is not a bad answer to many questions about reading speed.

All of Tyler’s advice applies doubly to scientific writing, which is often jargon-filled and ordered in arcane ways. After 7ish years of reading the scientific literature, I can “skim” a JACS paper pretty quickly and determine what, if anything, is likely to be novel or interesting to me, which makes staying on top of the literature much easier than it used to be.

Once you are good with a single journal, you can expand to multiple journals. A good starting set for organic chemistry is JACS, Science, Nature, Nature Chemistry, and Angewandte. If you already know how to read papers quickly, it will not be very hard to read more and more papers. But expanding to new journals brings challenges: how do you keep up with all of them at once? Lots of people use an RSS feed to aggregate different journals—I use Feedly, as do several of my coworkers. (You can also get this blog on Feedly!)

I typically check Feedly many times a day on my phone; I can look at the TOC graphic, the abstract, and the title, and then if I like how the paper looks I’ll email it to myself. Every other day or so, I sit down at my computer with a cup of coffee and read through the papers I’ve emailed to myself. This is separate from my “pursuing ideas from my research”/“doing a literature dive for group meeting” way of reading the literature—this is just to keep up with all the cool stuff that I wouldn’t otherwise hear about.

“Inbox Zero” often proves elusive.

(I also use Google Scholar Alerts to email me when specific labs publish new work—I have probably 20–30 labs set up this way, just to make sure I don’t miss results that might be important but aren’t in a high-profile journal.)

Keeping track of papers you actually like and want to remember is another challenge. For the past two years, I’ve put the URLs into a Google Sheet, along with a one-sentence summary of the paper, which helps me look through my “most liked” papers when I want to find something. Sadly, I didn’t do this earlier, so I’m often tormented by papers I dimly remember but can no longer locate.

What Literature You Should Read

This obviously depends on what you’re doing, but I tend to think about literature results in three categories:

  1. Things every scientist should know about
  2. Things I am supposed to be an expert on
  3. Things I’m not supposed to be an expert on, but would still like to know about

Category 1 basically covers the highest-profile results (Science and Nature), and these days Twitter makes keeping up with them pretty easy.

Category 2 covers things “in-field” or directly related to my projects—anything it would be somewhat embarrassing not to know about. For me, this means JACS, Angewandte, ACS Catalysis, Org. Lett., OPRD, Organometallics, J. Org. Chem., and Chem. Sci. (I also follow Chem. Rev. and Chem. Soc. Rev., because review articles are nice.)

Category 3 covers things that I am excited to learn about. Right now, that’s JCTC and J. Phys. Chem. A–C. In the past, that’s included ACS Chem. Bio., Nature Chem. Bio., and Inorganic Chemistry. (Writing this piece made me realize I should follow JCIM and J. Chem. Phys., so I just added them to Feedly.)

Conclusion

Reading the literature can feel—in the short term—pointless, frustrating, and like a waste of time. It’s rare that the article you read today will lead to an insight on the problem you’re currently facing! But the gains to knowledge compound over time, so spending time reading the literature today will make you a much better scientist in the long run.

Thanks to Ari Wagen and Joe Gair for reading drafts of this post.

Domain Arbitrage

March 20, 2023

It’s a truth well-established that interdisciplinary research is good, and we all should be doing more of it (e.g. this NSF page). I’ve always found this to be a bit uninspiring, though. “Interdisciplinary research” brings to mind a fashion collaboration, where the project is going to end up being some strange chimera, with goals and methods taken at random from two unrelated fields.

Rather, I prefer the idea of “domain arbitrage.”1 Arbitrage, in economics, is taking advantage of price differences in different markets: if bread costs $7 in Cambridge but $4 in Boston, I can buy in Boston, sell in Cambridge, and pocket the difference. Since this is easy and requires very little from the arbitrageur, physical markets typically lack opportunities for substantial arbitrage. In this case, the efficient market hypothesis works well.

Knowledge markets, however, are much less efficient than physical markets—many skills which are cheap in a certain domain are expensive in other domains. For instance, fields that employ organic synthesis, like chemical biology or polymer chemistry, have much less synthetic expertise than actual organic synthesis groups. The knowledge of how to use a Schlenk line properly is cheap within organic chemistry but expensive everywhere else. And organic chemists certainly don’t have a monopoly on scarce skills: trained computer scientists are very scarce in most scientific fields, as are statisticians, despite the growing importance of software and statistics to almost every area of research.

Domain arbitrage, then, is taking knowledge that’s cheap in one domain to a domain where it’s expensive, and profiting from the difference. I like this term better because it doesn’t imply that the goal of the research has to be interdisciplinary—instead, you’re solving problems that people have always wanted to solve, just now with innovative methods. And the concept of arbitrage highlights how this can be beneficial for the practitioner. You’re bringing new insights to your field so you can help your own research and make cool discoveries, not because you’ve been told that interdisciplinary work is good in an abstract way.

There are many examples of domain arbitrage,2 but perhaps my favorite is the recent black hole image, which was largely due to work by Katie Bouman (formerly a graduate student at MIT, now a professor at Caltech):

The black hole picture, extracted from noisy radio telescope data by Bouman’s new algorithms.

What’s surprising is that Bouman didn’t have a background in astronomy at all: she “hardly knew what a black hole was” (in her words) when she started working on the project. Instead, Bouman’s work drew on her background in computer vision, adapting statistical image models to the task of reconstructing astronomical images. In a 2016 paper, she explicitly credits computer vision with the insights that would later lead to the black hole image, and concludes by stating that “astronomical imaging will benefit from the crossfertilization of ideas with the computer vision community.”

If we accept that domain arbitrage is important, why doesn’t it happen more often? I can think of a few reasons: some fundamental, some structural, and some cultural. On the fundamental level, domain arbitrage requires knowledge of two fields of research at a more-than-superficial level. This is relatively common for adjacent fields of research (like organic chemistry and inorganic chemistry), but becomes increasingly rare as the distance between the two fields grows. It’s not enough to just try and read journals from other fields occasionally—without the proper context, other fields are simply unintelligible. Given how hard achieving competence in a single area of study can be, we should not be surprised that those with a working knowledge of multiple fields are so scarce.

The structure of our research institutions also makes domain arbitrage harder. In theory, a 15-person research group could house scientists from a variety of backgrounds: chemists, biologists, mathematicians, engineers, and so forth, all focused on a common research goal. In practice, the high rate of turnover in academic positions makes this challenging. Graduate students are only around for 5–6 years, postdocs for fewer, and both positions are typically filled by people hoping to learn things, not by already competent researchers. Thus, senior lab members must constantly train newer members in various techniques, skills, and ways of thinking so that institutional knowledge can be preserved.

This is hard but doable for a single area of research; it quickly becomes untenable as the number of fields increases. A lab working in two fields has to pass down twice as much knowledge, with the same rate of personnel turnover. In practice, this often means that students end up deficient in one field (or both). As Derek Lowe put it when discussing chemical biology in 2007:

I find a lot of [chemical biology] very interesting (though not invariably), and some of it looks like it could lead to useful and important things. My worry, though, is: what happens to the grad students who do this stuff? They run the risk of spending too much time on biology to be completely competent chemists, and vice versa.

To me, this seems like a case in which the two goals of the research university—to teach students and to produce research—are at odds. It’s easier to teach students in single-domain labs, but the research that comes from multiple domains is superior. It’s not easy to think about how to address this without fundamental change to the structure of universities (although perhaps others have more creative proposals than I).3

But, perhaps most frustratingly, cultural factors also contribute to the rarity of domain arbitrage. Many scientific disciplines today define themselves not by the questions they’re trying to solve but by the methods they employ, which disincentivizes developing innovative methods. For example, many organic chemists feel that biocatalysis shouldn’t be considered organic synthesis, since it employs enzymes and cofactors instead of more traditional catalysts and reagents, even though organic synthesis and biocatalysis address the same goal: making molecules. While it’s somewhat inevitable that years of lab work leave one with a certain affection for the methods one employs, it’s also irrational.

Now, one might reasonably argue that precisely delimiting where one scientific field begins and another ends is a pointless exercise. Who’s to say whether biocatalysis is better viewed as the domain of organic chemistry or biochemistry? While this is fair, it’s also true that the scientific field one formally belongs to matters a great deal. If society deems me an organic chemist, then overwhelmingly it is other organic chemists who will decide if I get a PhD, if I obtain tenure as a professor, and if my proposals are funded.4

Given that the success or failure of my scientific career thus depends on the opinion of other organic chemists, it starts to become apparent why domain arbitrage is difficult. If I attempt to solve problems in organic chemistry by introducing techniques from another field, it’s likely that my peers will be confused by or skeptical of my work, and will hesitate to accept it as “real” organic chemistry (see, for instance, the biocatalysis scenario above). Conversely, if I attempt to solve problems in other domains with the tools of organic chemistry, my peers will likely be uninterested in the outcome of the research, even if they approve of the methods employed. So from either angle domain arbitrage is disfavored.

The factors discussed here don’t serve to completely halt domain arbitrage, as successful arbitrageurs like Katie Bouman or Frances Arnold demonstrate, but they do act to inhibit it. If we accept the claim that domain arbitrage is good, and we should be working to make it more common, what then should we do to address these problems? One could envision a number of structural solutions, which I won’t get into here, but on a personal level the conclusion is obvious: if you care about performing cutting-edge research, it’s important to learn things outside the narrow area that you specialize in and not silo yourself within a single discipline.

Thanks to Shlomo Klapper and Darren Zhu for helpful discussions. Thanks also to Ari Wagen, Eric Gilliam, and Joe Gair for editing drafts of this post; in particular, Eric Gilliam pointed out the cultural factors discussed in the conclusion of the post.

Footnotes

  1. I did not coin this term—credit goes to Shlomo Klapper. I do think this is the first time it's been used in writing, however.
  2. I'll actually go a step further and propose the Strong Theorem of Domain Arbitrage: All non-incremental scientific discoveries arise either from domain arbitrage or random chance. I don't want to defend it here, but I think there's a reasonable chance that this is true.
  3. Collaborations between differently skilled labs help with this problem, but the logistical and practical challenges involved in collaboration make this an inefficient solution. Plus, the same cultural challenges still confront the individual contributors.
  4. This tweet, quoting the Nielsen/Qiu metascience essay which I wrote about before, seems relevant.

Computational NMR Prediction: A Microreview

March 14, 2023

Recently, I’ve been working to assign the relative configuration of some tricky diastereomers, which has led me to do a bit of a deep dive into the world of computational NMR prediction. Having spent the last week or so researching the current state of the art in simulating experimental 1H NMR spectra, I’m excited to share some of my findings.

My main resource in this quest has been a new NMR benchmarking paper, published on March 7th by authors from Merck (and a few other places). Why this paper in particular? Although there have been many NMR benchmarks, not all of these papers are as useful as they seem. Broadly speaking, there are two ways to benchmark NMR shifts: (1) against high-level computed results or (2) against experimental NMR shifts.

The first strategy seems to be popular with theoretical chemists: NMR shifts at a very high level of theory are presumably very accurate, and so if we can just reproduce those values with a cheap method, we will have solved the NMR prediction problem. Of course, effects due to solvation and vibrational motion will be ignored, but these effects can always be corrected for later. In contrast, the second strategy is more useful for experimental chemists: if the calculation is going to be compared to experimental NMR spectra in CDCl3 solution, the match with experiment is much more important than the gas-phase accuracy of the functional employed.

Not only are these two approaches different in theory, they yield vastly different results in practice, as is nicely illustrated by the case of the double-hybrid functional DSD-PBEP86. Its use for NMR shift prediction was first reported in 2018 by Frank Neese and coworkers, who found it to be much superior to regular DFT methods or MP2-type wavefunction methods at reproducing CCSD(T) reference data.1 A subsequent benchmark by Kaupp and coworkers looked at a much larger set of compounds and confirmed that DSD-PBEP86 was indeed superior at reproducing CCSD(T) data, with a mean absolute error (MAE) for 1H of 0.06 ppm. In contrast, de Oliveira and coworkers found that DSD-PBEP86 and related double-hybrid methods were much worse at predicting experimental 1H NMR shifts, with an MAE of 0.20 ppm, making them no better than conventional DFT approaches.

The difference between these two mindsets is nicely demonstrated by Kaupp’s paper, which dismisses de Oliveira’s work as suffering from “methodological inadequacies” and states:

[Benchmarking] can be done by comparing approximative calculations to experimental data or to data computed using high-level ab initio methodologies. The latter helps to eliminate a number of factors that often complicate the direct comparison against experiment, such as environmental, ro-vibrational, or thermal contributions (possibly also relativistic effects, see below).

While Kaupp is correct that using gas-phase CCSD(T) data does eliminate “environmental” effects (e.g. from solvent), it’s not clear that these effects always ought to be eliminated! Although directly optimizing a computational method to reproduce a bunch of ill-defined environmental effects is perhaps inelegant, it’s certainly pragmatic.

The authors of the 2023 benchmark create a new set of well-behaved reference compounds that avoid troublesome heavy-atom effects (poorly handled by most conventional calculations) and low-lying conformational equilibria, and re-acquire experimental spectra (in chloroform) for every compound in the set. They then score a wide variety of computational methods against this dataset: functionals, basis sets, implicit solvent methods, and more.

In the end, Cramer’s WP04 functional is found to be best, which is perhaps unsurprising given that it was specifically optimized for the prediction of 1H shifts in chloroform.2 The WP04/6-311++G(2d,p)/PCM(chloroform) level of theory is optimal, giving an MAE of 0.08 ppm against experiment, but WP04/jul-cc-pVDZ/PCM(chloroform) is cheaper and not much worse. B3LYP-D3/6-31G(d) works fine for geometry optimization, as do ωB97X-D/6-31G(d) and M06-2X/6-31G(d).

Based on these results, my final workflow for predicting experimental proton spectra is:

  1. Run a conformational search using crest.
  2. Optimize each conformer using B3LYP-D3BJ/6-31G(d).
  3. Remove duplicate conformers with cctk.ConformationalEnsemble.eliminate_redundant().
  4. Predict NMR shifts for each conformer using WP04/6-311++G(2d,p)/PCM(chloroform).
  5. Combine conformer predictions through Boltzmann weighting, and apply a linear correction (see the sketch below).
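
To make step 5 concrete, here’s a minimal numpy sketch of the Boltzmann-weighting and linear-correction step. It assumes the relative conformer energies (in kcal/mol) and per-proton isotropic shieldings have already been parsed out of the NMR outputs (cctk can do this); the function name and the slope/intercept defaults are placeholders of mine, not values from the benchmark paper.

```python
import numpy as np

RT_298 = 0.0019872 * 298.15  # gas constant * temperature, in kcal/mol at 298 K

def boltzmann_weighted_shifts(energies, shieldings, slope=-1.0, intercept=31.0):
    """Average per-proton shieldings over conformers, then convert to shifts.

    energies:   (n_conformers,) relative energies in kcal/mol
    shieldings: (n_conformers, n_protons) isotropic shieldings in ppm
    slope/intercept: placeholder linear-scaling parameters; substitute values
        fitted for your own level of theory.
    """
    energies = np.asarray(energies, dtype=float)
    shieldings = np.asarray(shieldings, dtype=float)

    # Boltzmann weights relative to the lowest-energy conformer
    weights = np.exp(-(energies - energies.min()) / RT_298)
    weights /= weights.sum()

    # Population-weighted average shielding for each proton, then the
    # standard linear correction: delta = (sigma - intercept) / slope
    sigma_avg = weights @ shieldings
    return (sigma_avg - intercept) / slope

# e.g. two conformers (0.0 and 0.8 kcal/mol above the minimum), three protons:
# boltzmann_weighted_shifts([0.0, 0.8], [[24.1, 26.5, 28.0],
#                                        [24.3, 26.2, 28.1]])
```

With the placeholder defaults this reduces to a bare reference-shielding subtraction (δ ≈ 31 − σ); the real accuracy comes from using slope and intercept values actually fitted for WP04/6-311++G(2d,p)/PCM(chloroform).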

For small molecules, this workflow runs extremely quickly (just a few hours from start to finish), and has produced good-quality results that solved the problem I was trying to solve.

Nevertheless, the theoreticians have a point—although WP04 can account for a lot of environmental effects (essentially by overfitting to experimental data), there are plenty of systems where this pragmatic approach cannot succeed. For instance, DELTA50, the dataset from the 2023 benchmark, intentionally excludes molecules that might exhibit concentration-dependent aggregation, which rules out basically anything capable of hydrogen bonding or π–π stacking! If we hope to get beyond a certain level of accuracy, it seems likely that physically correct models of NMR shieldings, solvent effects, and aggregation will be necessary.

Footnotes

  1. CCSD(T) NMR shifts have to be computed in CFOUR.
  2. The WP04 functional is not technically in Gaussian, but can be employed with the following route card: #p nmr=giao BLYP IOp(3/76=1000001189,3/77=0961409999,3/78=0000109999) 6-311++G(2d,p) scrf=(pcm,solvent=chloroform).