Derek Lowe over at In the Pipeline (which I would unhesitatingly recommend if you’re remotely interested in how medicinal chemistry happens) discusses a new paper which addresses some of the systemic issues in drug discovery, namely the inconsistent use of predictive models (PMs) for making decisions about how drugs proceed through the discovery pipeline.
A big chunk of the problem is that drug companies look to diseases with no effective treatments, but part of the reason no effective treatments exist is that the models of the disease are poorly constructed or even plain wrong (because biology). One of the big culprits here is Alzheimer’s, whose mechanism is still not well understood, but that doesn’t stop Eli Lilly (for example) from throwing billions of dollars at it in the hope that something will stick (spoiler: it probably won’t).
I’m familiar with the drug discovery industry: I spent 10 years developing and working with chemical informatics (cheminformatics) tools to support research chemists who were working in precisely this muddy world.
But since 2010, I’ve been working in the world of genetic testing which, for a variety of reasons, seems to have dealt with the problem of getting its PMs right far better than its cousins in the pharmaceutical industry. This is partly because in genetic testing, you’re not looking for a cure; you’re looking for a genetic signal that strongly correlates with a disease state, which in the world of high-throughput DNA sequencing is almost trivial to do. The hard part is, as always, getting enough statistical support for your model from independent clinical trials.
It’s interesting to see discussions of positive and negative predictive values (PPVs and NPVs) from the perspective of the drug discovery model, though, and I think this paper is going to end up being cited almost as much as Lipinski’s “Rule of Five” paper in the world of medicinal chemistry.
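For readers who don’t live in diagnostics, it may help to spell out what PPV and NPV actually measure. Here’s a minimal sketch in Python, using an invented confusion matrix for a hypothetical compound screen (none of these numbers come from the paper):

```python
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: of all positive calls, the fraction that were right."""
    return tp / (tp + fp)

def npv(tn: int, fn: int) -> float:
    """Negative predictive value: of all negative calls, the fraction that were right."""
    return tn / (tn + fn)

# Hypothetical screen of 1000 compounds; the model flags 120 as active.
tp, fp = 90, 30   # flagged active: 90 truly active, 30 false alarms
tn, fn = 870, 10  # flagged inactive: 870 truly inactive, 10 missed actives

print(f"PPV = {ppv(tp, fp):.2f}")   # 90/120  -> 0.75
print(f"NPV = {npv(tn, fn):.3f}")   # 870/880 -> 0.989
```

The asymmetry is the point: with mostly-inactive libraries, even a decent model can have a mediocre PPV while its NPV looks excellent, which is exactly the kind of thing that gets glossed over when PM output drives pipeline decisions.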