Biopharma & Investing
Alexion is kicking butt. Being this good is almost boring.
Gilead, both Johns are stepping down. Running a big pharma and being that good is boring too, I guess.
Keytruda approved in China. Big problem for Beigene.
BAN2401 is not a viable drug. It does not work. No way, no how. I think people who write about biotech for a living should write less quickly (and less from a company script) and read some textbooks on statistics, medicine and pharmacology. Friends email me news here and I was aghast to see nearly every biotech ‘observer’ pathetically record that BAN2401 produced a promising result. “Stat” (whatever that is), CNBC (predictable), Endpoints (this is that weird guy’s blog), etc. All the usual suspects just don’t know how to read clinical data. Why even try to chronicle the history of an industry if you just don’t get it? Maybe wait until the dust settles before writing an embarrassing headline. It is really emblematic of an epidemic: media companies don’t have big budgets, so they hire whomever they can to write whatever they want about a field they don’t comprehend. I recently saw some financial journalists completely fail to comprehend amortization. A silent smile is all I can produce. So, I understand that predicting the future is too tough for biotech writers, but at least chronicle the past correctly.
Anyway, the stock market promptly reacted to the BAN2401 data for the failure that it was. Antibodies don’t enter the brain, let alone the cortical areas, the parenchyma, etc. It’s fucking physics. F=MA? Tight junctions? Next, we have seen what happens when a-beta antibodies are dosed in AD: bapi, sola, etc. Finally, THIS data is a piece of work. Don’t trust any p > 0.01; for Christ’s sake, in practice it’s no better than p > 0.05. Drugs don’t work by chance, ever. At least not drugs I care about. Next, the idea that one drug dose worked and another didn’t is humorous. Unless you have a clear explanation for why one dose would work and another wouldn’t, you have to group and average the cohorts. The company wouldn’t have wasted precious power and resources if they thought a dose cohort would show no activity. Most antibodies stick around. It’s plausible that more frequent dosing would do the trick, but unlikely. Same thing with the timing of the therapeutic effect: the separation that appears at 18 months has no trace at 12 months. Is there a plausible reason for that? Sure, but I’d be more convinced if it persisted at 24 months. Very, very few drugs have a 12-month delayed therapeutic onset. Finally, this isn’t a clinically meaningful result (hence the p=0.016 or whatever it was). ADAS-COG is a 70-point scale. A 2-point improvement is exactly what these antibodies were invented NOT to produce. I’ve wasted enough transistor state changes writing this. I’m sorry, electrons.
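The multiple-dose point above is easy to demonstrate with a toy simulation. This is a sketch of my own (the function names and numbers are mine, not from any trial): take a drug with NO effect, test it against placebo in 5 independent dose cohorts with a two-sided z-test at alpha = 0.05, and count how often at least one dose “hits”.

```python
# Sketch: why one "winning" dose among several is weak evidence.
# Assumes a null drug (zero effect) tested in 5 independent dose cohorts,
# each compared to placebo with a two-sided z-test at alpha = 0.05.
import math
import random

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic."""
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # normal CDF
    return 2.0 * (1.0 - phi)

def simulate(n_trials: int = 100_000, n_doses: int = 5, alpha: float = 0.05) -> float:
    """Fraction of null trials where at least one dose shows p < alpha."""
    random.seed(0)
    hits = 0
    for _ in range(n_trials):
        # each dose cohort yields an independent null test statistic
        if any(two_sided_p(random.gauss(0.0, 1.0)) < alpha for _ in range(n_doses)):
            hits += 1
    return hits / n_trials

# Analytically the chance of at least one false positive is
# 1 - 0.95**5, roughly 0.226 -- almost one trial in four.
print(round(simulate(), 3))
```

In other words, with five uncorrected dose comparisons, a nominally “significant” dose shows up by chance alone in roughly a quarter of trials, which is why the glossary below insists that alpha be split across doses.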
Book Review – How Not to Be Wrong: The Power of Mathematical Thinking – Jordan Ellenberg
It’s hard for me to review a book like HNTBW. One lens to look at it through is: “could I have done better?” I’ll go through the areas I think were lacking, but this is a paean to math that deserves your attention. It is, of course, one of Bill Gates’s “10 Favorite Books”, which I’m sure is a subset of a larger list of his favored reads, which is, apparently, everything he reads.
The title of this book could not have been written by its author. It is largely meaningless. The title should have been: “My random thoughts on Math and Statistics, which will hopefully get you interested in Math”. There is no theme I could discern other than the author’s obsession with math history. Ellenberg’s structural organization, in short, is very poor. He meanders from topic to topic, staying far too long on some (statistics) and ignoring others completely. On the plus side, he is imaginative, with references to F. Scott Fitzgerald, the erstwhile mathematician Wallace (David Foster!) and various other compelling orthogonals. His actual writing style is excellent. Clearly a keen mind, he restrains himself from overpowering the reader with the standard overwrought philosophical/mathematical vocabulary. “He’s just an average-joe math professor!” is the feeling you get, and it keeps you engaged.
Ellenberg tries to do a good deed. His message is that this book will somehow help you think more structurally. It won’t do so, directly. There are very few (maybe two?) proofs in the book, and other than a brief explanation of reductio ad absurdum, very few logical techniques are actually employed. Despite that, Ellenberg tackles hundreds of problems with a sneaky mathematical armamentarium. I fear the sneakiness could have been dialed back: a little more actual math wouldn’t have scared the reader away and would have empowered the work. Some of HNTBW feels like a parlor trick, with the reader forced to trust Ellenberg that “there’s math in here, don’t worry! I’m not going to show it to you, but it’s there!”.
Understandably, HNTBW has a strong focus on statistics, but here Ellenberg makes a very poor showing. In the classic example of multiplicity errors gone haywire, Ellenberg introduces the GWAS experiments that yours truly reviews on a daily basis, yet never describes p-value correction. Addressing this and other glaring omissions, like any discussion of why people insist on making post-hoc observations that then fail to repeat themselves, would have served readers well.
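For the curious, the simplest p-value correction Ellenberg could have shown is Bonferroni. Here is a minimal sketch (the function names are mine): divide alpha by the number of tests, so that the family-wise error rate stays at alpha no matter how many SNPs you interrogate.

```python
# Minimal sketch of Bonferroni correction, the simplest multiplicity fix.
# A GWAS testing ~1,000,000 SNPs at a per-test alpha of 0.05 would throw
# off ~50,000 false positives under the null; dividing alpha by the number
# of tests controls the family-wise error rate.

def bonferroni_threshold(alpha: float, n_tests: int) -> float:
    """Per-test significance threshold keeping family-wise error at alpha."""
    return alpha / n_tests

def bonferroni_hits(p_values, alpha: float = 0.05):
    """Indices of p-values that survive Bonferroni correction."""
    cutoff = bonferroni_threshold(alpha, len(p_values))
    return [i for i, p in enumerate(p_values) if p < cutoff]

# 0.05 / 1,000,000 = 5e-8, which is in fact the conventional
# genome-wide significance threshold used in GWAS.
print(bonferroni_threshold(0.05, 1_000_000))

# With 4 tests the cutoff is 0.0125; only the tiny p-values survive.
print(bonferroni_hits([0.04, 3e-9, 0.5, 1e-8]))  # [1, 3]
```

Bonferroni is conservative (Holm or false-discovery-rate methods recover some power), but it is the one-line idea the GWAS chapter was begging for.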
The second half of the book is a more poetic journey through math history. While he is no Newman and this is no anthology, Ellenberg’s near-lyricism is enchanting and awe-inspiring. The last chapter of the book is a monument to humility, creativity and achievement in maths. Still, HNTBW is not Godel-Escher-Bach, nor does it try to be. Ellenberg just teases us with math, often namedropping the greats and taking us on a tour meant to enthrall us and make us want to learn more. The promise of a much-needed manual on how to actually think in a structurally correct way was a titular trick I’m happy I fell for. I highly recommend this book. 9/10.
Glossary – Dedicated to various journalists at Bloomberg, CNBC, Stat, etc. who I wouldn’t hire to change my cat litter because they apparently are unaware of the following:
a priori – generally used as a synonym for “pre-specified” in statistics.
alpha – the likelihood of making a type I error, or rejecting the null hypothesis when it is true
beta – the likelihood of making a type II error, or failing to reject a false null hypothesis
clinical significance – as opposed to statistical significance, the degree to which a medicine is clinically relevant to a patient. Two points on a 70-point scale, for instance, is not clinically relevant.
co-primary endpoint – if you split alpha a priori, you can examine two endpoints at once. However, both endpoints must be met at the reduced alpha to infer the rejection of ANY null hypothesis.
deductive reasoning – using the rules of logic to form inferences with certain conclusions.
Fisher, Sir Ronald – British statistician widely regarded as the father of modern statistics, and one of the first people you could call a statistician by trade.
Fisher’s Exact Test – A personal favorite, a categorical statistical test for contingency tables.
Gauss, Carl Friedrich – mathematical deity after whom the normal distribution is named.
inductive reasoning – finding patterns in empirical data; any inference where the premises give some evidence for the conclusion, resulting in a probabilistic inference.
inference – something many, many liberal arts majors are incapable of
mechanistic plausibility – The plausibility of an investigational drug’s mechanism of action. Similar drugs having failed to elicit a beneficial response in a similar patient population would impinge negatively on plausibility.
null hypothesis – the hypothesis we seek to invalidate with an experiment, a reductio ad absurdum technique
Pearson, Karl – another father of statistics, see Pearson’s chi-squared test.
p-value – the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true; statistical significance requires that the p-value be less than alpha.
pre-specified endpoint – Typically, a between-group comparison using a statistical method that is articulated in the SAP prior to trial initiation.
primary endpoint – The ONE a priori statistical test hypothesized in the SAP. A clinical trial can only interrogate ONE hypothesis so as to avoid unduly respecting post hoc observations. IF the primary endpoint is met with statistical significance, a secondary endpoint may be evaluated as per the SAP with the same alpha level as the primary endpoint (no alpha is considered spent). Dose-ranging studies make pre-specified endpoints extremely hard to meet given the limited power of making each dose a co-primary endpoint. One may group all or some doses and retain full alpha, but one may not assign full alpha (0.05) for all doses. If 5 doses are being interrogated, the alpha must be SPLIT between these doses (roughly 0.01 each).
post hoc analysis – An after-the-fact analysis of data which is hypothesis-generating ONLY. Typically used by companies and characters of ill repute to bolster clinical trials which have failed to reach statistical significance. “Shooting an arrow and painting the bullseye after”.
power – 1 − beta
probability distribution – a description of the probabilities of all possible outcomes of an experiment
statistical analysis plan (SAP) – the statistical protocol for a clinical trial
statistical significance – achieved when p < alpha; results this extreme would then have been sufficiently unlikely to arise by chance were the null hypothesis true
type I error – rejecting a true null hypothesis
type II error – failing to reject a false null hypothesis
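Since the glossary name-checks Fisher’s exact test as a personal favorite, here is a minimal pure-stdlib sketch of the two-sided version for a 2×2 contingency table. The function name and the example numbers are mine, invented for illustration.

```python
# Sketch of a two-sided Fisher's exact test for a 2x2 contingency table.
# Rows are (responders, non-responders) for the drug and placebo arms.
from math import comb

def fisher_exact_two_sided(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact p-value: the sum of the probabilities of
    every table (with the same margins) no more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def hyper(x: int) -> float:
        # P(top-left cell = x) under the null: hypergeometric probability
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = hyper(a)
    lo = max(0, col1 - row2)          # feasible range for the top-left cell
    hi = min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(hyper(x) for x in range(lo, hi + 1)
               if hyper(x) <= p_obs * (1 + 1e-12))

# Hypothetical trial: 9/12 responders on drug vs 2/13 on placebo.
print(fisher_exact_two_sided(9, 3, 2, 11))
```

Note this enumerates all tables with the observed margins, so it is exact (no normal approximation) and appropriate for the small cell counts where a chi-squared test breaks down.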
Spend more time reading books and less time giving out unearned opinions. I doubt many “communications” majors (or most other liberal arts majors) are intelligent enough (yes, I am going there) to have done well in mathematics and statistics. As Dalio says, ask yourself if you’ve earned the right to have an opinion. You should not opine on biopharmaceuticals unless the above is facile and simple to you. Statistics is the lens through which we see the modern, data-driven world. Go back to school and actually learn something, if you have to. The above is trivially basic; we didn’t even get into Bayes vs. frequentist, ANOVA, the actual math of a statistical test, stratification methods, parameterization, multiplicity correction techniques, LOCF/BOCF and missing data, and other still-simple topics.