samrodriques

Key problems to accelerate medicine

Progress in medicine is slow. Exact numbers depend on subfield and source, but it takes ~5-10 years to test a new drug, ~93% of drug candidates fail, and each failed attempt costs ~$200M-$300M. Developing new medicines is hard for two primary reasons: we hold new medicines to extremely high standards, requiring them to be both provably safe and provably more effective than the status quo; and we have a very limited understanding of human biology, so creating new, highly safe and effective medicines requires significant trial and error. We need to accelerate the design-build-test cycle in biomedicine. To what extent can we do so using technology? And what are the key technologies we would need to develop?


1. We need to be able to conduct experiments in human biology.

The core reason why it is hard to cure disease is that experiments conducted in laboratories do not translate reliably to humans. The fastest way to cure a human disease would be to take humans with the disease and conduct experiments on them directly. (To this point: we can cure just about any disease in the mouse, because it is easy to conduct experiments rapidly on mice.) Instead, all experiments in the lab today are conducted on model systems such as monkeys, mice, cultured human cells, or organoids, which inevitably reproduce some aspects of human biology but not others.


As a specific example: mice do not get Alzheimer’s disease. Modeling Alzheimer’s disease in a mouse involves modifying the mouse in some way to induce the disease, which requires us to make assumptions about how the disease works in humans. If those assumptions are wrong, the resulting model loses its predictive value. Similarly, an organ in a dish may replicate some aspects of the organ in its native environment, but other aspects (for example, delivery of the drug to the organ) may be completely neglected. If you are still skeptical of the value of testing directly on humans, consider that natural experiments in single humans (e.g. brain lesions, genetic disorders) can often tell us more than arbitrarily large numbers of experiments in mice.


One way or another, we need to figure out how to conduct more experiments in human biology. There are a few options:


1A. Identify ethical opportunities to conduct experiments on humans. We should expand opportunities to conduct experiments that would be too dangerous for healthy patients on ethically consented brain-dead patients, or even recently deceased patients. Similarly, we should improve access to organs for experimentation, even if they do not recapitulate full human biology. Progress on either front could greatly accelerate the design-build-test cycle in biomedical research.


To be clear: this topic is very challenging from an ethical and logistical perspective. The families of recently deceased, relatively healthy patients often have concerns other than research; and patients who have been dying for a long time are rarely good models for medical research. But even a handful of relatively healthy, ethically consented patients could be transformative for medicine.


1B. Improve predictions of drug safety. Imagine a world in which we could predict with perfect accuracy whether a drug would be safe for a patient to take, both acutely and chronically. If we had exceptionally good predictive models of safety (and PK/PD), I expect it would be possible to accelerate the design-build-test cycle significantly; if we had perfect models of safety, one could even imagine a “direct-to-phase-2” regime, in which proof-of-concept efficacy in humans is the first step, rather than a step that occurs only after tens of millions of dollars of development.


Existing toxicity datasets are generally of low quality and limited in their coverage of chemical space, so it is unlikely that a high-quality predictive model for toxicity can be trained directly from existing data. Gathering better datasets in animal and in vitro models will be important, but gathering large toxicology datasets for humans is unlikely to be possible. Instead, we may need to leverage inductive biases, for example by making predictions based on molecule-protein interactions. Promising approaches can be validated on new compounds in animals.
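
To make this concrete, here is a minimal baseline sketch (my own illustration, not a method from this post; the dataset file `tox_labels.csv` is hypothetical): train a standard classifier on molecular fingerprints and score it by cross-validation. The paragraph above argues that exactly this kind of model, trained on existing data, will fall short; features encoding molecule-protein interactions would replace the fingerprints here.

```python
# Minimal toxicity-classifier baseline. The input CSV is hypothetical:
# columns "smiles" (structure) and "toxic" (binary label).
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("tox_labels.csv")  # hypothetical dataset

def featurize(smiles: str):
    """2048-bit Morgan fingerprint (radius 2); None for unparseable SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

feats = df["smiles"].map(featurize)
mask = feats.notna()
X = list(feats[mask])
y = df.loc[mask, "toxic"]

model = RandomForestClassifier(n_estimators=500, class_weight="balanced")
# Note: random cross-validation overstates performance; scaffold-based
# splits are more honest about generalization to new chemical space.
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```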


1C. Explore medical uses for drugs that are known to be safe. There is a large category of compounds that are generally recognized as safe (GRAS), or that fall outside the FDA’s definition of novel chemical matter (such as endogenous signaling peptides). Certainly, significant effort has gone into the former category, and the latter has received particular attention recently due to the success of GLP-1 agonists. However, the space of endogenous human signaling peptides remains only very sparsely explored. I suspect that many new therapeutic compounds could be found simply by studying endogenous human regulatory mechanisms.
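
As a toy illustration of what mining endogenous regulatory mechanisms might look like computationally (my own sketch, not a method from this post): prohormone convertases cleave precursor proteins at dibasic residues, and a glycine left at the new C-terminus often signals amidation, so one can scan precursor sequences for candidate peptides. The example sequence and length thresholds below are hypothetical; real discovery efforts would also use secretion signals, expression data, and receptor deorphanization screens.

```python
# Toy scan for candidate signaling peptides in a precursor sequence.
# Heuristic only: prohormone convertases cleave at dibasic motifs
# (KR, RR, KK, RK), and a trailing Gly is a crude hint of C-terminal
# amidation. Sequence and thresholds are hypothetical.
import re

def candidate_peptides(seq: str, min_len: int = 5, max_len: int = 50):
    """Split a precursor at dibasic sites and keep short fragments."""
    for frag in re.split(r"(?:KR|RR|KK|RK)", seq):
        if min_len <= len(frag) <= max_len:
            yield frag, frag.endswith("G")  # (peptide, possible amidation)

precursor = "MKTLLLTLVVVTIVCLDLGYTRRHSDGIFTDSYSRYRKQMAVKKYLAAVLGKRSAG"
for pep, amidated in candidate_peptides(precursor):
    print(f"{pep}  (possible amidation: {amidated})")
```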



2. We need to have access to the ground truth.

Whenever we conduct experiments in biology, we never actually know what is happening at the molecular level. Our readouts are always indirect: if an engineered virus fails to infect a particular target cell, it is exceptionally difficult to tell whether the failure occurred at receptor engagement, internalization, or expression, for example, much less why. To infer the function of a drug, we should be able to “see” what the drug does: where it goes in the body, which molecules it interacts with, and how it changes what those molecules do. Today, however, all three of those experiments are often virtually impossible even in laboratory samples, let alone in clinical samples. The closer we can get to the ground truth of biological systems, the fewer experiments we will have to conduct, and the faster the design-build-test cycle will go.


2A. Develop a molecular microscope. One way to access the ground truth would be to build a microscope capable of resolving it: every molecule, in situ, no exceptions, no labels. Many labs have worked over the past 10-20 years on developing better tools to access the ground truth, and much progress has been made. Efforts in electron microscopy (such as cryoET) and expansion microscopy have been particularly heroic in pushing towards the true ground truth. Borrowing from materials science may also help: recently, atom probe tomography enabled direct imaging of a single protein at atomic resolution for the first time, allowing the structure of IgG to be inferred from a single molecule [1]. If these methods could scale, they would completely transform our understanding of biology at the molecular level. However, such methods, which were developed for conductive metals, require significant additional development to work for non-conductive organics.


2B. Simulate the cell. An alternative way to get to the ground truth is to determine it by simulation. The ability to simulate cells with full molecular detail would have a similar effect to molecular microscopy: it would allow us to “see what is going on down there,” or at least to generate a shortlist of hypotheses for what is going on. With the advent of protein structure models, which can make some predictions about protein-protein interactions, many people are optimistic that full cell simulations may soon be possible. However, significant methodological advances are needed, along with some basic numerical estimates that have not yet been performed. Simulations would have to last milliseconds at least. Today, molecular dynamics simulations on the millisecond scale for individual proteins are just becoming possible, using Anton; docking-based simulations of large protein aggregates on the second timescale have been performed, but likely miss most of the important conformational details of protein-protein interactions [2]. It is still unclear what bounds, if any, can be placed on the amount of compute needed to simulate an entire cell at sufficient resolution and detail over millisecond timescales.
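
For a sense of scale, here is one back-of-envelope estimate (my own rough numbers, not from this post) of the compute required for brute-force all-atom molecular dynamics of a single bacterial cell over one millisecond:

```python
# Back-of-envelope: all-atom MD of one bacterial cell for 1 ms.
# All constants are rough, order-of-magnitude assumptions.

atoms = 1e10               # ~atoms in an E. coli cell (mostly water)
timestep_s = 2e-15         # typical MD integration step: 2 femtoseconds
sim_time_s = 1e-3          # target: one millisecond of biological time
flops_per_atom_step = 1e3  # rough cost of force evaluation per atom per step

steps = sim_time_s / timestep_s              # 5e11 integration steps
total_flops = steps * atoms * flops_per_atom_step

exaflop_machine = 1e18                       # FLOP/s of an exascale machine
seconds = total_flops / exaflop_machine

print(f"{total_flops:.0e} FLOPs total")                       # ~5e24
print(f"~{seconds / 86400:.0f} days on an exascale machine")  # ~58 days
```

Even with these charitable assumptions (no communication overhead, no enhanced sampling, no replicates), a single millisecond whole-cell trajectory sits at the edge of today’s largest machines, which is roughly the point of the paragraph above.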



3. We need to make better use of the experiments we are able to perform. 

Even when we have something close to the ground truth, and even when we are conducting experiments in human biology, it is very difficult for humans to integrate all of the available evidence on a specific topic and draw conclusions about the “optimal” experiment. Often, critical information from past experiments is missing or difficult to retrieve. Practicing biologists quickly learn that they are always some minor optimization away from a method that works 10x better, and that the information they need is inevitably contained in the literature somewhere, if only they could find it.

We need to be able to record more information about the experiments we perform, and we need better systems for integrating available data to generate hypotheses. Here, automation will help substantially. Automation in the clinic and in the wet lab will help us gather substantially more information about every detail of every experiment, and document it in a way that makes it easy to retrieve later. Automation will also help to assemble and analyze raw data, gather relevant literature, and generate informed hypotheses.
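
As a small, concrete illustration of recording experiments in a retrievable form (my own sketch; all names and field choices are hypothetical), one step is to log every protocol step as a structured, timestamped record rather than a free-text notebook entry:

```python
# Sketch of structured experiment logging: every step gets a typed,
# timestamped record so details (lot numbers, temperatures, deviations)
# are queryable later instead of buried in free-text notes.
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class StepRecord:
    protocol: str        # e.g. "plasmid_miniprep_v3" (hypothetical)
    step: str            # e.g. "elution"
    parameters: dict     # anything varied: volumes, temperatures, timings
    reagent_lots: dict   # catalog/lot numbers actually used
    deviations: str = "" # free text reserved for the unexpected
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = StepRecord(
    protocol="plasmid_miniprep_v3",
    step="elution",
    parameters={"elution_volume_ul": 30, "buffer_temp_c": 50},
    reagent_lots={"elution_buffer": "EB-2024-117"},
    deviations="buffer preheated 5 min longer than specified",
)

# Append-only JSONL keeps the log trivially greppable now and easy to
# index for retrieval or hypothesis-generation systems later.
with open("experiment_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```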

This is what we are working on at FutureHouse. More on this part soon...





