Posts Tagged ‘Miscellaneous Statistics’

Forking paths vs. six quick regression tips

February 8, 2016

Bill Harris writes: I know you’re on a blog delay, but I’d like to vote to raise the odds that my question in a comment to http://andrewgelman.com/2015/09/15/even-though-its-published-in-a-top-psychology-journal-she-still-doesnt-believe-it/ gets discussed, in case it’s not in your queue. It’s likely just my simple misunderstanding, but I’ve sensed two bits of contradictory advice in your writing: fit one complete model all at […] The post Forking paths vs. six quick regression tips appeared first on Statistical Modeling,…


You’ll never guess what I say when I have nothing to say

February 7, 2016

A reporter writes: I’m a reporter working on a story . . . and I was wondering if you could help me out by taking a quick look at the stats in the paper it’s based on. The paper is about paedophiles being more likely to have minor facial abnormalities, suggesting that paedophilia is a […] The post You’ll never guess what I say when I have nothing to say…


What’s the difference between randomness and uncertainty?

February 6, 2016

Julia Galef mentioned “meta-uncertainty,” and how to characterize the difference between a 50% credence about a coin flip coming up heads, vs. a 50% credence about something like advanced AI being invented this century. I wrote: Yes, I’ve written about this probability thing. The way to distinguish these two scenarios is to embed each of […] The post What’s the difference between randomness and uncertainty? appeared first on Statistical Modeling,…
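The distinction can be made concrete with a small simulation (a generic sketch of the idea, not the post’s own example): a coin whose probability of heads is known to be exactly 0.5, versus an event whose probability is itself unknown, here drawn uniformly on [0, 1], which also averages 0.5. Both warrant 50% credence up front, but they respond differently to evidence.

```python
import random

random.seed(1)

# Scenario A: a fair coin -- p is known to be exactly 0.5.
# Scenario B: an uncertain event -- p itself is unknown, here p ~ Uniform(0, 1),
#             which also has mean 0.5.
# Both assign 50% credence to the next outcome, but after observing one
# "success" the updated credence stays 0.5 in A while rising toward 2/3
# in B (Laplace's rule of succession).

def predictive_after_success(n_sims=200_000):
    known = 0.5  # Scenario A: conditioning on a success changes nothing
    # Scenario B: Monte Carlo estimate of E[p | one success], p ~ U(0, 1)
    num = den = 0.0
    for _ in range(n_sims):
        p = random.random()
        if random.random() < p:   # observe a success under this p
            num += p              # accumulate p over the retained draws
            den += 1
    return known, num / den

a, b = predictive_after_success()
print(round(a, 3), round(b, 3))  # roughly 0.5 and 0.667
```

Embedding each 50% credence in a larger model, as the excerpt suggests, is what separates the two: only in scenario B does the credence move when related evidence arrives.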


The Notorious N.H.S.T. presents: Mo P-values Mo Problems

February 4, 2016

Alain Content writes: I am a psycholinguist who teaches statistics (and also sometimes publishes in Psych Sci). I am writing because as I am preparing for some future lessons, I fall back on a very basic question which has been worrying me for some time, related to the reasoning underlying NHST [null hypothesis significance testing]. […] The post The Notorious N.H.S.T. presents: Mo P-values Mo Problems appeared first on Statistical…
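One basic fact behind the NHST reasoning in question can be checked by direct simulation (a generic sketch, not the post’s content): when the null hypothesis is true, p-values are uniformly distributed, so a fixed cutoff such as 0.05 flags about 5% of null datasets as “significant” even though nothing is going on.

```python
import math
import random

random.seed(2)

def z_test_pvalue(xs):
    """Two-sided p-value for H0: mean = 0, with known sd = 1."""
    n = len(xs)
    z = sum(xs) / math.sqrt(n)
    # standard normal tail probability via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Draw 10,000 datasets from the null itself and test each one.
pvals = []
for _ in range(10_000):
    xs = [random.gauss(0, 1) for _ in range(20)]
    pvals.append(z_test_pvalue(xs))

# Under H0 the p-value is uniform on [0, 1], so about 5% of the
# tests reject at the 0.05 level by construction.
frac_sig = sum(p < 0.05 for p in pvals) / len(pvals)
print(round(frac_sig, 3))  # close to 0.05
```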


“Null hypothesis” = “A specific random number generator”

January 26, 2016

In an otherwise pointless comment thread the other day, Dan Lakeland contributed the following gem: A p-value is the probability of seeing data as extreme or more extreme than the result, under the assumption that the result was produced by a specific random number generator (called the null hypothesis). I could care less about p-values […] The post “Null hypothesis” = “A specific random number generator” appeared first on Statistical…
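Lakeland’s definition can be taken literally in code (a toy sketch with made-up numbers, not from the comment thread): specify the null hypothesis as an actual random number generator, then estimate the p-value as the fraction of its runs that come out at least as extreme as the observed result.

```python
import random

random.seed(3)

# Take the "null hypothesis" literally as a random number generator:
# here, 100 flips of a fair coin.

def null_rng(n=100):
    return sum(random.random() < 0.5 for _ in range(n))  # number of heads

observed = 62  # hypothetical observed heads out of 100 (illustrative)

# The p-value is just the probability that this RNG produces a result
# as extreme or more extreme than the observation -- estimated by
# running the RNG many times.
sims = [null_rng() for _ in range(100_000)]
p_value = sum(abs(s - 50) >= abs(observed - 50) for s in sims) / len(sims)
print(round(p_value, 3))  # a small two-sided p-value
```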


The time-reversal heuristic—a new way to think about a published finding that is followed up by a large, preregistered replication (in context of Amy Cuddy’s claims about power pose)

January 26, 2016

[Note to busy readers: If you’re sick of power pose, there’s still something of general interest in this post; scroll down to the section on the time-reversal heuristic. I really like that idea.] Someone pointed me to this discussion on Facebook in which Amy Cuddy expresses displeasure with my recent criticism (with Kaiser Fung) of […] The post The time-reversal heuristic—a new way to think about a published finding that…


2 new reasons not to trust published p-values: You won’t believe what this rogue economist has to say.

January 24, 2016

Political scientist Anselm Rink points me to this paper by economist Alwyn Young which is entitled, “Channelling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results,” and begins, I [Young] follow R.A. Fisher’s The Design of Experiments, using randomization statistical inference to test the null hypothesis of no treatment effect in a […] The post 2 new reasons not to trust published p-values: You won’t believe what…
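The randomization-inference idea Young follows can be sketched in a few lines (illustrative made-up data and a plain difference-in-means statistic, not Young’s actual procedure): under the sharp null of no treatment effect, the group labels are arbitrary, so reshuffling them generates the reference distribution of the test statistic.

```python
import random

random.seed(4)

# Made-up outcomes for a tiny two-group experiment.
treated = [2.1, 3.4, 1.8, 4.0, 2.9]
control = [1.2, 2.0, 1.5, 2.2, 1.9]

def diff_in_means(t, c):
    return sum(t) / len(t) - sum(c) / len(c)

obs = diff_in_means(treated, control)

# Under the sharp null, treatment labels are exchangeable: shuffle
# them many times and see how often a difference this large appears.
pooled = treated + control
n_t = len(treated)
n_perm = 20_000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    if abs(diff_in_means(pooled[:n_t], pooled[n_t:])) >= abs(obs):
        count += 1

p_value = count / n_perm
print(round(obs, 2), round(p_value, 3))
```

The appeal of this approach, as in Fisher’s The Design of Experiments, is that the reference distribution comes from the randomization itself rather than from a distributional assumption about the outcomes.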


Paxil: What went wrong?

January 12, 2016

Dale Lehman points us to this news article by Paul Basken on a study by Joanna Le Noury, John Nardo, David Healy, Jon Jureidini, Melissa Raven, Catalin Tufanaru, and Elia Abi-Jaoude that investigated what went wrong in the notorious study by Martin Keller et al. of the GlaxoSmithKline drug Paxil. Lots of ethical issues here, […] The post Paxil: What went wrong? appeared first on Statistical Modeling, Causal Inference, and…


Read this to change your entire perspective on statistics: Why inversion of hypothesis tests is not a general procedure for creating uncertainty intervals

January 8, 2016

Dave Choi writes: A reviewer has pointed me something that you wrote in your blog on inverting test statistics. Specifically, the reviewer is interested in what can happen if the test can reject the entire assumed family of models, and has asked me to consider discussing whether it applies to a paper that I am […] The post Read this to change your entire perspective on statistics: Why inversion of…
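The failure mode at issue is easy to demonstrate with a toy model (my own illustrative example, not from Choi’s paper): two observations assumed to be N(theta, 1) measurements of the same quantity. Inverting a test that rejects theta when either observation is too far from it gives a sensible interval for compatible data, but an empty set when the two observations contradict the assumed model itself.

```python
# Invert a test to build an interval: the "interval" is the set of
# theta values the test fails to reject. When the data are
# inconsistent with the whole assumed family (two supposed
# N(theta, 1) measurements of the same theta that lie far apart),
# every theta is rejected and the set is empty.

def inverted_interval(x1, x2, crit=1.96, grid_step=0.01):
    """Grid-search the theta values not rejected by the test that
    rejects theta when max(|x1 - theta|, |x2 - theta|) > crit."""
    lo = min(x1, x2) - 3
    hi = max(x1, x2) + 3
    accepted = []
    theta = lo
    while theta <= hi:
        if max(abs(x1 - theta), abs(x2 - theta)) <= crit:
            accepted.append(theta)
        theta += grid_step
    return (min(accepted), max(accepted)) if accepted else None

print(inverted_interval(0.0, 1.0))  # compatible data: a genuine interval
print(inverted_interval(0.0, 6.0))  # contradictory data: None (empty set)
```

Under the assumed model the two observations can differ by at most about 2 × 1.96; beyond that, test inversion returns nothing rather than an honest statement of uncertainty, which is the post’s point.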


