The Case for More False Positives in Anti-doping Testing

December 8, 2012
(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Kaiser Fung was ahead of the curve on Lance Armstrong:

The media has gotten the statistics totally backwards.

On the one hand, they faithfully report the colorful stories of athletes who fail drug tests and plead their innocence. (I have written about the Spanish cyclist Alberto Contador here.) On the other hand, they unquestioningly report athletes who claim that “hundreds of negative tests” prove their honesty. Putting these two together implies that the media believes that negative test results are highly reliable while positive test results are unreliable.

The reality is just the opposite. When an athlete tests positive, it’s almost sure that he/she has doped. Sure, most of the clean athletes will test negative, but what is often missed is that the majority of dopers will also test negative.

We don’t need to do any computation to see that this is true. In most major sports competitions, the proportion of tests declared positive is typically below 1%. If you believe that the proportion of dopers is higher than 1%, then it is 100% certain that some dopers got away. If you believe 10% are dopers, then at least 9 out of 10 dopers will test negative!
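The arithmetic behind that last claim fits in a few lines. Here is a back-of-the-envelope sketch in Python; the 1% positive rate and 10% doper share are the hypothetical figures from the paragraph above, not real anti-doping data:

```python
# Hypothetical figures from the text, not real anti-doping data.
positive_rate = 0.01  # fraction of all tests declared positive
doper_share = 0.10    # assumed fraction of athletes who dope

# Best case for the test: every positive is a true doper.
# Even then, the share of dopers caught can be no higher than
# positive_rate / doper_share.
caught = positive_rate / doper_share
missed = 1 - caught

print(f"share of dopers caught: {caught:.0%}")  # 10%
print(f"share of dopers missed: {missed:.0%}")  # 90%
```

If the true doper share were higher than 10%, the missed fraction would be higher still; the 1% positive rate puts a hard ceiling on how many dopers can possibly be caught.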

As Kaiser points out in the case of Lance Armstrong, passing 500 tests is not as impressive as it might sound:

The independence assumption is the key here. If I were a doper, and I pass the test, this tells me that my doping regimen is pretty good; if I pass two tests, it increases my confidence that my doping regimen is good; the more tests I pass, the more I feel good about the expertise of my doping advisors.
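One way to see the force of this is to compare the independence assumption with a model in which a doper's test results are correlated through the quality of the doping regimen. A minimal sketch, where all detection probabilities are made-up illustrative numbers:

```python
n_tests = 500

# Model A: independent tests, each catching a doper with prob 0.01.
p_catch = 0.01
p_all_negative_indep = (1 - p_catch) ** n_tests
print(f"P(doper passes all {n_tests} tests, independent): "
      f"{p_all_negative_indep:.3f}")  # roughly 0.007

# Model B: results are correlated through regimen quality. Suppose
# (illustratively) 90% of dopers have a regimen that cuts per-test
# detection to 0.0001, while the rest face the full 0.01 rate.
p_good_regimen = 0.9
p_all_neg_good = (1 - 0.0001) ** n_tests  # roughly 0.95
p_all_neg_bad = (1 - 0.01) ** n_tests     # roughly 0.007
p_all_negative_mixed = (p_good_regimen * p_all_neg_good
                        + (1 - p_good_regimen) * p_all_neg_bad)
print(f"P(doper passes all {n_tests} tests, correlated): "
      f"{p_all_negative_mixed:.3f}")  # roughly 0.86
```

Under independence, a doper passing 500 tests would be a rare event, so the string of negatives would carry real evidence. Once the tests share a common cause (a regimen that defeats them), the same record becomes unremarkable, which is Kaiser's point.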

Another way to think about this is that every athlete who has confessed and/or failed a test will have had a long string of negative tests before failing. Unless one believes those athletes (like Andy Pettitte) who claim that the only time they took steroids was the time they got caught, it is very difficult to make the case that a string of negatives means much.

Also here:

The anti-doping agencies are so concerned about not falsely accusing anyone that they leave a gigantic hole for dopers to walk through. . . . While we think about Armstrong’s plight, let’s not forget about this fact: every one of those who now confessed passed hundreds of tests in their careers, just like Armstrong did. In fact, fallen stars like Tyler Hamilton and Floyd Landis also passed lots of tests before they got caught. In effect, dopers face a lottery with high odds of winning and low odds of losing. . . .

Another myth shattered by this scandal is the idea that stars don’t need to cheat. It is most likely the opposite. At the very top of any sport, especially a sport that pays, the difference between number 1 and number 2 is vast in terms of financial reward but infinitesimal in terms of physical performance. Every little advantage counts. Placebos count.

It’s hard to imagine why someone who has no chance of winning anything would take drugs that might kill them. So, when they say everyone was cheating, I [Kaiser] wonder if they meant everyone who was competitive was cheating.



