Posts Tagged ‘p values’

Canada Day Reading List

July 1, 2017

I was tempted to offer you a list of 150 items, but I thought better of it! Hamilton, J. D., 2017. Why you should never use the Hodrick-Prescott filter. Mimeo., Department of Economics, UC San Diego. Jin, H. and S. Zhang, 2017. Spurious regression betwee...

Read more »

If you’re seeing limb-sawing in P-value logic, you’re sawing off the limbs of reductio arguments

April 15, 2017

I was just reading a paper by Martin and Liu (2014) in which they allude to the “questionable logic of proving H0 false by using a calculation that assumes it is true” (p. 1704). They say they seek to define a notion of “plausibility” that “fits the way practitioners use and interpret p-values: a small p-value means […]
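
To make the reductio reading concrete, here is a minimal sketch (mine, not Martin and Liu's or the post's) of the calculation being discussed: the p-value below is computed entirely under the assumption that H0 is true, which is exactly what gives a small value its reductio force.

```python
# Hypothetical illustration: a one-sample t-test of H0: mu = 0.
# Both the reference distribution of the t statistic and the tail
# probability are computed *assuming H0 is true*.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=30)   # simulated data; true mean is 0.5

t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# If p is small, the reductio reads: either H0 is false, or an event at
# least this extreme occurred despite being improbable under H0.
```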

Read more »

Er, about those other approaches, hold off until a balanced appraisal is in

April 1, 2017

I could have told them that the degree of accordance enabling the ASA’s “6 principles” on p-values was unlikely to be replicated when it came to most of the “other approaches” with which some would supplement or replace significance tests, notably Bayesian updating, Bayes factors, or likelihood ratios (confidence intervals are dual to hypothesis tests). [My commentary […]

Read more »

The ASA Document on P-Values: One Year On

March 8, 2017

I’m surprised it’s a year already since posting my published comments on the ASA Document on P-Values. Since then, there have been a slew of papers rehearsing the well-worn fallacies of tests (a tad bit more than the usual rate). Doubtless, the P-value Pow Wow raised people’s consciousnesses. I’m interested in hearing reader reactions/experiences in connection with […]

Read more »

Hocus pocus! Adopt a magician’s stance, if you want to reveal statistical sleights of hand

February 8, 2017

Here’s the follow-up post to the one I reblogged on Feb 3 (please read that one first). When Uri Geller was to be subjected to the scrutiny of scientists, magicians had to be brought in, because only they were sufficiently trained to spot the subtle sleight-of-hand shifts by which a magician deceives through misdirection. We, […]

Read more »

Specification Testing With Very Large Samples

December 27, 2016

I received the following email query a while back: "It's my understanding that in the event that you have a large sample size (in my case, > 2 million obs) many tests for functional form mis-specification will report statistically significant results ...
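
For illustration only, a sketch with simulated data (nothing here comes from the emailer's problem): at n = 2,000,000 a RESET-style F-test flags a practically negligible quadratic term with an enormous test statistic and a p-value of essentially zero.

```python
# Simulated example (not the emailer's data): a tiny departure from
# linearity becomes overwhelmingly "significant" at n = 2,000,000.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2_000_000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + 0.01 * x**2 + rng.normal(size=n)   # negligible quadratic term

# Restricted model: y on [1, x]
X_r = np.column_stack([np.ones(n), x])
beta_r, rss_r = np.linalg.lstsq(X_r, y, rcond=None)[:2]

# RESET-style augmentation: add squared fitted values as an extra regressor
fitted = X_r @ beta_r
X_u = np.column_stack([X_r, fitted**2])
_, rss_u = np.linalg.lstsq(X_u, y, rcond=None)[:2]

# F-test for the added term
q, df_resid = 1, n - X_u.shape[1]
F = ((rss_r[0] - rss_u[0]) / q) / (rss_u[0] / df_resid)
p = stats.f.sf(F, q, df_resid)
print(f"F = {F:.1f}, p = {p:.3g}")   # F in the hundreds, p effectively zero
```

With samples this large, the question shifts from whether a departure from the specification is detectable to whether it matters in practice.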

Read more »

Be careful evaluating model predictions

December 3, 2016

One thing I teach is: when evaluating the performance of regression models you should not use correlation as your score. This is because correlation tells you if a re-scaling of your result is useful, but you want to know if the result in your hand is in fact useful. For example: the Mars Climate Orbiter …
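
A toy illustration of that point (mine, not from the post): predictions that are perfectly correlated with the outcome can still be useless as delivered, because correlation is blind to scale and offset.

```python
# Toy example: perfect correlation, terrible predictions as-is.
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = 100.0 * y_true + 50.0        # badly mis-scaled "predictions"

corr = np.corrcoef(y_true, y_pred)[0, 1]
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(f"correlation = {corr:.3f}")    # 1.000: looks perfect
print(f"RMSE        = {rmse:.1f}")    # hundreds of units off on a 1-to-5 scale
```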

Read more »

The unfortunate one-sided logic of empirical hypothesis testing

October 17, 2016

I’ve been thinking a bit on statistical tests, their absence, abuse, and limits. I think much of the current “scientific replication crisis” stems from the fallacy that “failing to fail” is the same as success (in addition to the forces of bad luck, limited research budgets, statistical naiveté, sloppiness, pride, greed and other human qualities …
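
As a hedged sketch of the "failing to fail" point (my illustration, not the post's): in an underpowered design a real effect routinely produces a non-significant result, so failing to reject H0 is not the same as evidence for it.

```python
# Simulation: a real effect (d = 0.3) tested with n = 20 per study.
# Most studies "fail to fail", i.e. do not reject H0, even though H0 is false.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect, n, n_sims = 0.3, 20, 10_000

rejections = 0
for _ in range(n_sims):
    x = rng.normal(loc=true_effect, scale=1.0, size=n)
    _, p = stats.ttest_1samp(x, popmean=0.0)
    rejections += (p < 0.05)

power = rejections / n_sims
print(f"estimated power ≈ {power:.2f}")                       # around 0.25 here
print(f"P(non-significant | real effect) ≈ {1 - power:.2f}")  # around 0.75
```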

Read more »

Proofing statistics in papers

October 2, 2016

Recently saw a really fun article making the rounds: “The prevalence of statistical reporting errors in psychology (1985–2013)”, Nuijten, M.B., Hartgerink, C.H.J., van Assen, M.A.L.M. et al., Behav Res (2015), doi:10.3758/s13428-015-0664-2. The authors built an R package to check psychology papers for statistical errors. Please read on for how that is possible, some tools, and …
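
A minimal Python sketch of the kind of check the authors' R package (statcheck) performs; the real package parses APA-formatted results out of paper text, and the numbers below are invented: recompute the p-value implied by a reported test statistic and its degrees of freedom, then flag reports that don't match.

```python
# Sketch of a statcheck-style consistency check (illustrative only).
from scipy import stats

def check_t_report(t_value, df, reported_p, tol=0.01, two_sided=True):
    """Recompute the p-value from a reported t(df) and compare with the report."""
    p = stats.t.sf(abs(t_value), df)
    if two_sided:
        p *= 2
    return p, abs(p - reported_p) <= tol

# A hypothetical reported result: "t(28) = 2.20, p = .02"
recomputed, consistent = check_t_report(2.20, 28, 0.02)
print(f"recomputed p = {recomputed:.3f}, consistent: {consistent}")
# Recomputed two-sided p is about .036, so this report would be flagged.
```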

Read more »

Little Debate: defining baseline

June 5, 2016

In an April 30, 2015 note in Nature (vol 520, p. 612), Jeffrey Leek and Roger Peng point out that p-values get intense scrutiny, while all the decisions that lead up to the p-values get little debate. I wholeheartedly agree, and so I'm creating a Littl...

Read more »

