Paul Alper points us to this article in Health News Review—I can’t figure out who wrote it—warning of problems with the use of surrogate outcomes for policy evaluation:
“New drug improves bone density by 40%.”
At first glance, this sounds like great news. But there’s a problem: We have no idea if this means the drug also cuts the risk of bone fractures, which is the outcome that we really care about.
So why do researchers measure bone density instead of fractures? For several reasons, it can be difficult to determine whether a treatment will result in a clear benefit for patients, such as preventing death or improving quality of life. It may take decades, for example, to see if a new osteoporosis drug ultimately reduces fractures, so researchers look for what are hopefully reliable indirect markers to measure, such as bone density.
These substitutes, which go by several names (surrogate measures, markers, or endpoints), ideally can be assessed quickly and easily and are expected to correlate with a meaningful outcome.
OK, so what’s the problem? The article explains:
Not all surrogate measures have turned out to be good ones. Often a drug that influences a surrogate measure turns out to produce no meaningful result for patients, referred to as a clinical outcome.
In some cases, there is even harm. In the landmark Cardiac Arrhythmia Suppression Trial (CAST), drugs approved for their ability to suppress a surrogate marker — abnormal heartbeats — were found to actually increase rather than decrease the risk of death. . . .
But surrogate markers are common: Between 2010 and 2012, the FDA says it approved 45 percent of new drugs on the basis of a surrogate endpoint. . . . One analysis showed that 67 percent of cancer drug approvals over a five-year period were based on surrogates. From 2003 to 2012 the FDA used surrogates for seven out of nine drugs approved for chronic obstructive pulmonary disease, all 26 approved drugs for diabetes, and all nine drugs approved for glaucoma . . .
The use of weak surrogate-based evidence has flooded the market with expensive duds, many argue. . . .
Here are some examples:
Drugs are frequently approved on the basis of uncertain markers such as “progression-free survival,” which is the amount of time between treatment and worsening of symptoms. The drug Avastin won accelerated FDA approval to treat metastatic breast cancer based on its ability to delay tumor growth, but that approval was revoked when multiple randomized trials showed the drug didn’t improve survival and had significant side effects. . . .
Several stories reporting on a drug called evolocumab, known as a PCSK9 inhibitor, said it dramatically lowered LDL cholesterol in a 24-week trial, but didn’t note that LDL is a surrogate marker for heart disease. . . .
A news release claiming that blueberry concentrate improves brain function in older people failed to point out that brain blood flow and other biomarkers were “not a measurable clinical benefit,” according to our review.
This is not to say that it’s a bad idea to measure surrogate outcomes, just that we should keep our eye on the ball and we should report things accurately. From a statistical perspective, the challenge is to build and estimate models connecting background variables, treatments, intermediate outcomes, and ultimate outcomes.
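To make the statistical point concrete, here is a minimal simulation sketch (not from the post; all variable names and coefficients are hypothetical) of the failure mode the article describes: a randomized treatment that clearly moves a surrogate marker while leaving the clinical outcome untouched, because the outcome is driven by a pathway the treatment doesn’t affect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: the treatment shifts the surrogate (think bone
# density) through one pathway, but the clinical outcome (think fracture
# risk) depends mostly on a separate background pathway, so the surrogate
# improvement never translates into a clinical benefit.
treatment = rng.integers(0, 2, size=n)       # randomized 0/1 assignment
background = rng.normal(size=n)              # baseline risk factor
surrogate = 0.8 * treatment + 0.3 * background + rng.normal(size=n)
clinical = 0.5 * background + rng.normal(size=n)  # no treatment pathway

def mean_diff(y, t):
    """Difference in means, treated minus control."""
    return y[t == 1].mean() - y[t == 0].mean()

print(f"effect on surrogate: {mean_diff(surrogate, treatment):+.2f}")  # ~ +0.8
print(f"effect on clinical:  {mean_diff(clinical, treatment):+.2f}")   # ~  0.0
```

A trial (or regulator) looking only at the surrogate would call this drug a success; a joint model of treatment, surrogate, and clinical outcome would not.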
The post Problems with surrogate markers appeared first on Statistical Modeling, Causal Inference, and Social Science.