Paul Alper points to this news article in Health News Review, which says:
A news release or story that proclaims a new treatment is “just as effective” or “comparable to” or “as good as” an existing therapy might spring from a non-inferiority trial.
Technically speaking, these studies are designed to test whether an intervention is “not acceptably worse” in terms of its effectiveness than what’s currently used. . . .
These trials have proliferated as drug and device makers find it harder to improve upon existing treatments. So instead, they devise products they hope work just as well but with an extra benefit, such as more convenient dosing, lower cost, or fewer side effects.
If a company can show its product is just as effective as the current standard treatment but with an added perk, it might gain a marketing edge.
Sounds like no problem so far: Why not have some drug that performs as well as its competitor but is better in some secondary way?
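To make the "not acceptably worse" idea concrete, here is a minimal sketch (my illustration, not from the article) of how a non-inferiority comparison typically works: the new treatment passes if the lower confidence bound on its effect difference from the standard treatment stays above a pre-chosen margin. The function name, the two-proportion setup, and the numbers are all hypothetical.

```python
# Sketch of a non-inferiority check on two success proportions.
# "Non-inferior" means the lower 95% confidence bound on
# (p_new - p_std) stays above -margin, for a pre-specified margin.
import math

def noninferior(p_new, p_std, n_new, n_std, margin, z=1.96):
    """Return True if the new treatment is declared non-inferior."""
    diff = p_new - p_std
    se = math.sqrt(p_new * (1 - p_new) / n_new
                   + p_std * (1 - p_std) / n_std)
    lower = diff - z * se  # lower confidence bound on the difference
    return lower > -margin

# With a generous 10-point margin, a slightly worse drug "passes":
print(noninferior(0.68, 0.70, 500, 500, margin=0.10))  # True
# With a stricter 2-point margin, the same data fail:
print(noninferior(0.68, 0.70, 500, 500, margin=0.02))  # False
```

Note that everything hinges on the analyst's choice of `margin`, which is exactly the kind of "murky assumption" the article goes on to criticize: a generous margin makes passing easy.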
But the article continues:
Problem is, the studies used to generate that edge often aren’t considered trustworthy.
Generally speaking, non-inferiority trials are considered less credible than a more common trial design, the superiority trial, which determines whether one treatment outperforms another treatment or a placebo. That’s because non-inferiority trials are often based on murky assumptions that could favor the new product being tested.
Rarely do non-inferiority trials conclude that a new treatment is not non-inferior . . . That scarcity of negative findings “raises the provocative questions of whether industry-sponsored non-inferiority trials offer any value—aside from capturing market share,” wrote Vinay Prasad, MD, in an editorial in the Journal of Internal Medicine entitled “Non-Inferiority Trials in Medicine: Practice Changing or a Self-Fulfilling Prophecy?”
In a separate concern, ethical issues have been raised about whether some non-inferiority trials should be conducted at all, because they might expose patients to potentially worse treatments in order to advance a commercial goal.
From an article, “Non-inferiority trials: are they inferior? A systematic review of reporting in major medical journals,” by Sunita Rehal et al.:
Reporting and conduct of non-inferiority trials is inconsistent and does not follow the recommendations in available statistical guidelines, which are not wholly consistent themselves.
There’s a lot of discussion of “type 1 error rate,” which I don’t care about. True effects, or population differences, are never zero.
The general point is that non-inferiority trials, like clinical trials in general, can be gamed, and they are gamed.
The way it looks to me is that non-inferiority trials do have a lot of problems, but these are problems that “regular” clinical trials share as well. The problems include:
1. A statistical framework that is focused on the uninteresting question of zero true effect and zero systematic error,
2. A desire and an expectation to come up with certain conclusions from noisy data,
3. Incentives to cheat.
Regarding point 2: it’s worse than you might think. It’s not just that “statistical significance” is typically taken as tantamount to a certain claim that a treatment is effective. It’s also that non-significance is commonly taken as a certain claim that a treatment has no effect (see for example our discussion of stents). Since every result is either statistically significant or not, this gives you automatic certainty, no matter what the data are!
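A small simulation (my illustration, not from the post) of point 2: run a fixed, modest, nonzero true effect through many noisy trials and dichotomize each at the conventional significance cutoff. Every trial then produces a "certain" conclusion, "it works" or "no effect," even though the underlying truth never changes. The effect size, sample size, and cutoff here are arbitrary choices for the demonstration.

```python
# Dichotomizing noisy data: one constant true effect, many trials,
# each declared either "effective" (significant) or "no effect" (not).
import math
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.3   # small but real; never exactly zero
N = 50              # per-trial sample size
SIGMA = 1.0         # noise scale

def one_trial():
    """One noisy trial: True if the t-statistic clears the 5% cutoff."""
    xs = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N)]
    t = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(N))
    return abs(t) > 2.01  # approx. two-sided 5% cutoff for df = 49

results = [one_trial() for _ in range(1000)]
print(f"declared 'effective': {sum(results)} of 1000 trials")
print(f"declared 'no effect': {1000 - sum(results)} of 1000 trials")
```

The split lands somewhere in the middle: identical truth, yet each individual trial hands you one of two opposite "certain" claims, which is the automatic-certainty problem in miniature.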
P.S. Full disclosure: I’ve had business relationships with Novartis, AstraZeneca, and other drug companies.
The post These 3 problems destroy many clinical trials (in context of some papers on problems with non-inferiority trials, or problems with clinical trials in general) appeared first on Statistical Modeling, Causal Inference, and Social Science.