Posts Tagged ‘Miscellaneous Statistics’

Some natural solutions to the p-value communication problem—and why they won’t work

March 21, 2017

Blake McShane and David Gal recently wrote two articles (“Blinding us to the obvious? The effect of statistical training on the evaluation of evidence” and “Statistical significance and the dichotomization of evidence”) on the misunderstandings of p-values that are common even among supposed experts in statistics and applied social research. The key misconception has nothing […]
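
As a hedged illustration of the dichotomization problem these articles discuss (a sketch with made-up numbers, not data from McShane and Gal): two studies with nearly identical estimates can land on opposite sides of the 0.05 line, while the difference between them is nowhere near significant.

    # Sketch: "significant" vs. "not significant" is not itself a significant difference.
    # Estimates and standard errors below are hypothetical.
    from scipy.stats import norm

    est_a, se_a = 2.0, 1.0   # study A: z = 2.0, two-sided p ~ 0.046 ("significant")
    est_b, se_b = 1.7, 1.0   # study B: z = 1.7, two-sided p ~ 0.089 ("not significant")

    p_a = 2 * norm.sf(abs(est_a / se_a))
    p_b = 2 * norm.sf(abs(est_b / se_b))

    # Comparing the two studies directly:
    diff = est_a - est_b
    se_diff = (se_a**2 + se_b**2) ** 0.5
    p_diff = 2 * norm.sf(abs(diff / se_diff))   # ~0.83: the estimates barely differ

    print(p_a, p_b, p_diff)

Treating 0.046 and 0.089 as categorically different kinds of evidence is exactly the dichotomization at issue.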

Read more »

Lady in the Mirror

March 14, 2017

In the context of a report from a drug study, Stephen Senn writes: The bare facts they established are the following: The International Headache Society recommends the outcome of being pain free two hours after taking a medicine. The outcome of being pain free or having only mild pain at two hours was reported by […]

Read more »

“Beyond Heterogeneity of Effect Sizes”

March 12, 2017

Piers Steel writes: One of the primary benefits of meta-analytic syntheses of research findings is that researchers are provided with an estimate of the heterogeneity of effect sizes. . . . Low values for this estimate are typically interpreted as indicating that the strength of an effect generalizes across situations . . . Some have […]
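
For readers who want the mechanics behind that heterogeneity estimate, here is a minimal sketch of the usual summaries (Cochran's Q, the DerSimonian-Laird tau-squared, and I-squared) computed from a set of effect sizes and standard errors; the numbers are invented for illustration.

    # Minimal random-effects heterogeneity calculation (DerSimonian-Laird estimator).
    # Effect sizes and standard errors are made up for illustration.
    import numpy as np

    y  = np.array([0.30, 0.10, 0.45, 0.05, 0.25])   # study effect estimates
    se = np.array([0.10, 0.12, 0.15, 0.08, 0.11])   # their standard errors

    w = 1 / se**2                          # inverse-variance (fixed-effect) weights
    y_fixed = np.sum(w * y) / np.sum(w)    # fixed-effect pooled estimate

    k = len(y)
    Q = np.sum(w * (y - y_fixed)**2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)               # between-study variance
    I2 = max(0.0, (Q - (k - 1)) / Q)                 # share of variation beyond sampling error

    print(Q, tau2, I2)

A low tau-squared or I-squared is often read as "the effect generalizes," but with only a handful of studies these estimates are themselves noisy, so a low value is weak evidence that the effect truly generalizes.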

Read more »

How is preregistration like random sampling and controlled experimentation?

March 9, 2017

In the discussion following my talk yesterday, someone asked about preregistration and I gave an answer that I really liked, something I’d never thought of before. I started with my usual story that preregistration is great in two settings: (a) replicating your own exploratory work (as in the 50 shades of gray paper), and (b) […]

Read more »

How to do a descriptive analysis using regression modeling?

March 7, 2017

Freddy Garcia writes: I read your post “Vine regression?”, and your phrase “I love descriptive data analysis!” makes me wonder: how do you do a descriptive analysis using regression models? Maybe my question could be misleading to a statistician, but I am an economics student, so we are accustomed to thinking in causal terms when we […]
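
One way to read the question (a sketch, not a quote from the post): a regression can be used purely descriptively, with coefficients summarizing average differences in the data rather than causal effects. A minimal example with invented data and plain least squares:

    # Descriptive use of regression: coefficients as adjusted average comparisons,
    # not causal effects. The data are simulated purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    education = rng.normal(12, 3, n)      # years of schooling (hypothetical)
    experience = rng.normal(10, 5, n)     # years of experience (hypothetical)
    log_wage = 1.0 + 0.08 * education + 0.02 * experience + rng.normal(0, 0.5, n)

    X = np.column_stack([np.ones(n), education, experience])
    beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

    # Descriptive reading: among people in these data with the same recorded experience,
    # those with one more year of education have log wages higher by about beta[1] on
    # average. No claim that changing education would *cause* that difference.
    print(beta)

Whether such a summary answers a causal question is a separate matter; the descriptive reading makes no claim either way.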

Read more »

Advice when debugging at 11pm

March 6, 2017

Add one feature to your model and test and debug with fake data before going on. Don’t try to add two features at once.
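
A concrete version of this advice, as a sketch: before adding the next feature, simulate data from the model you currently have, with known parameters, and check that your fitting code recovers them.

    # Fake-data check for the current model before adding another feature.
    # Simulate from known parameters, fit, and confirm rough recovery.
    import numpy as np

    rng = np.random.default_rng(1)
    true_alpha, true_beta, true_sigma = 2.0, -0.5, 1.0

    n = 1000
    x = rng.normal(0, 1, n)
    y = true_alpha + true_beta * x + rng.normal(0, true_sigma, n)

    X = np.column_stack([np.ones(n), x])
    est, *_ = np.linalg.lstsq(X, y, rcond=None)

    # If the estimates aren't close to (2.0, -0.5), debug this model first;
    # only then add the next feature and run the same check again.
    print(est)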

Read more »

Checkmate

March 5, 2017

Sandro Ambuehl writes: As an avid reader of your blog, I thought you might like (to hate) the attached PNAS paper with the following findings: (i) sending two flyers about the importance of STEM fields to the parents of 81 kids improves ACT scores by 12 percentile points (intent-to-treat effect… a bit large, perhaps?) and […]
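
A rough back-of-envelope check of why that estimate raises eyebrows (every number below is an assumption for illustration, not taken from the paper): with roughly 40 kids per arm and a percentile-scale outcome, the standard error of a difference in means is several points, so only implausibly large estimates would clear the significance bar.

    # Back-of-envelope standard error for an intent-to-treat effect on a percentile outcome.
    # The sample split and outcome SD are assumptions, not figures from the PNAS paper.
    n_treat, n_control = 40, 41     # 81 kids, assumed roughly even split
    sd_percentile = 28.9            # SD of a uniform 0-100 percentile score

    se_diff = sd_percentile * (1 / n_treat + 1 / n_control) ** 0.5
    print(se_diff)                  # about 6.4 percentile points

    # A 12-point estimate is then only about 1.9 standard errors from zero, and any
    # estimate much below 12-13 points would not reach significance at all, so a
    # design like this tends to yield either "nothing" or an exaggerated effect.
    print(1.96 * se_diff)           # roughly 12.6 points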

Read more »

Yes, it makes sense to do design analysis (“power calculations”) after the data have been collected

March 3, 2017

This one has come up before but it’s worth a reminder. Stephen Senn is a thoughtful statistician and I generally agree with his advice but I think he was kinda wrong on this one. Wrong in an interesting way. Senn’s article is from 2002 and it is called “Power is indeed irrelevant in interpreting completed […]
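
For a concrete sketch of what design analysis after data collection can look like, in the spirit of the type S (sign) and type M (magnitude) error framing, with an assumed true effect and standard error rather than any particular study's numbers:

    # Retrospective design analysis by simulation: given an assumed true effect and the
    # standard error implied by the design, what do "significant" estimates look like?
    import numpy as np

    rng = np.random.default_rng(2)
    true_effect = 2.0    # assumed plausible true effect (illustrative units)
    se = 4.0             # standard error implied by the design (assumed)

    sims = rng.normal(true_effect, se, 1_000_000)   # hypothetical replications
    signif = np.abs(sims) > 1.96 * se               # significant at the 5% level

    power = signif.mean()
    type_s = (sims[signif] < 0).mean()                          # significant, wrong sign
    exaggeration = np.abs(sims[signif]).mean() / true_effect    # type M (exaggeration ratio)

    print(power, type_s, exaggeration)

These quantities depend only on the assumed true effect and the standard error, not on the observed estimate, so nothing stops you from computing them after the data have been collected.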

Read more »

Theoretical statistics is the theory of applied statistics: how to think about what we do (My talk Wednesday—today!—4:15pm at the Harvard statistics dept)

March 1, 2017

Theoretical statistics is the theory of applied statistics: how to think about what we do. Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University. Working scientists and engineers commonly feel that philosophy is a waste of time. But theoretical and philosophical principles can guide practice, so it makes sense for us to […]

Read more »

Ethics and the Replication Crisis and Science (my talk Tues 6pm)

February 27, 2017

I’ll be speaking on Ethics and the Replication Crisis and Science tomorrow (Tues 28 Feb) 6-7:30pm at room 411 Fayerweather Hall, Columbia University. I don’t plan to speak for 90 minutes; I assume there will be lots of time for discussion. Here’s the abstract that I whipped up: Busy scientists sometimes view ethics and philosophy […]

Read more »

