Posts Tagged ‘Bayesian statistics’

The Night Riders

December 14, 2017

Gilbert Chin writes: After reading this piece [“How one 19-year-old Illinois man is distorting national polling averages,” by Nate Cohn] and this Nature news story [“Seeing deadly mutations in a new light,” by Erika Hayden], I wonder if you might consider blogging about how this appears to be the same issue in two different disciplines. […]

Read more »

Ed Jaynes outta control!

December 9, 2017

A commenter points to a chapter of E. T. Jaynes’s book on probability and inference that contains the following amazing bit: The information we get from the TV evening news is not that a certain event actually happened in a certain way; it is that some news reporter has claimed that it did. Even seeing […]

Read more »

“Little Data” etc.: My talk at NYU this Friday, 8 Dec 2017

December 5, 2017

I’ll be talking at the NYU business school, in the department of information, operations, and management sciences, this Fri, 8 Dec 2017, at 12:30, in room KMC 4-90 (wherever that is): Little Data: How Traditional Statistical Ideas Remain Relevant in a Big-Data World; or, The Statistical Crisis in Science; or, Open Problems in Bayesian Data […]

Read more »

Oooh, I hate all talk of false positive, false negative, false discovery, etc.

November 30, 2017

A correspondent writes: I think this short post on p-values, Bayes, and false discovery rates contains some misinterpretations. My reply: Oooh, I hate all talk of false positive, false negative, false discovery, etc. I posted this not because I care about someone, somewhere, being “wrong on the internet.” Rather, I just think there’s so […]

Read more »

Computational and statistical issues with uniform interval priors

November 28, 2017

There are two anti-patterns* for prior specification in Stan programs that can be traced directly to idioms developed for BUGS. One is the diffuse gamma prior that Andrew has already written about at length. The second is interval-based priors, which brings us to today’s post. Interval priors: an interval prior is something like this in Stan […]
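Here is a minimal sketch of the kind of Stan program the excerpt seems to be describing (my own illustration, not code from the post): a parameter declared with hard bounds, which, with or without an explicit sampling statement, amounts to a uniform prior on that interval.

data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  // Interval prior: the hard constraint alone already implies a
  // uniform density on (0, 100) for sigma.
  real<lower=0, upper=100> sigma;
}
model {
  // Writing the uniform out explicitly changes nothing.
  sigma ~ uniform(0, 100);
  y ~ normal(mu, sigma);
}

As I read the title, the computational worry is that posterior mass piling up against a hard boundary like this can make sampling difficult, and the statistical worry is that such bounds are rarely a genuine statement of prior knowledge.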

Read more »

Asymptotically we are all dead (Thoughts about the Bernstein-von Mises theorem before and after a Diamanda Galás concert)

November 27, 2017

They say I did something bad, then why’s it feel so good–Taylor Swift It’s a Sunday afternoon and I’m trying to work myself up to the sort of emotional fortitude where I can survive the Diamanda Galás concert that I was super excited about a few months ago, but now, as I stare down the […]

Read more »

Poisoning the well with a within-person design? What’s the risk?

November 25, 2017

I was thinking more about our recommendation that psychology researchers routinely use within-person rather than between-person designs. The quick story is that a within-person design is more statistically efficient because, when you compare measurements within a person, you should get less variation than when you compare different groups. But researchers often use between-person designs out […]
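For what it’s worth, here is a back-of-the-envelope version of the efficiency argument, in my own notation (not taken from the post): let σ² be the measurement variance, ρ the within-person correlation, and suppose each design uses 2n measurements.

\mathrm{Var}\big(\bar{y}_T - \bar{y}_C\big) = \frac{2\sigma^2}{n} \quad \text{(between-person: two independent groups of } n \text{ people)}

\mathrm{Var}\Big(\frac{1}{n}\sum_{i=1}^{n} (y_{iT} - y_{iC})\Big) = \frac{2\sigma^2(1-\rho)}{n} \quad \text{(within-person: } n \text{ people each measured twice)}

So the within-person comparison is more precise whenever ρ > 0, that is, whenever two measurements on the same person are more alike than measurements on different people.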

Read more »

Using output from a fitted machine learning algorithm as a predictor in a statistical model

November 24, 2017

Fred Gruber writes: I attended your talk at Harvard where, regarding the question of how to deal with complex models (trees, neural networks, etc.), you mentioned the idea of taking the output of these models and fitting a multilevel regression model. Is there a paper you could refer me to where I can read about […]
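A hedged sketch of what that could look like in the simplest case (my own construction, not code from the talk or any paper): the output of the fitted machine-learning algorithm enters a multilevel regression as an ordinary predictor, alongside group-level intercepts.

data {
  int<lower=1> N;                        // observations
  int<lower=1> J;                        // groups
  array[N] int<lower=1, upper=J> group;  // group membership
  vector[N] ml_score;                    // output of the fitted ML algorithm, held fixed
  vector[N] y;                           // outcome
}
parameters {
  real a;                     // overall intercept
  real b;                     // coefficient on the ML score
  vector[J] eta;              // group-level intercept deviations
  real<lower=0> sigma_group;
  real<lower=0> sigma_y;
}
model {
  // weakly informative priors, assuming roughly standardized inputs
  a ~ normal(0, 1);
  b ~ normal(0, 1);
  sigma_group ~ normal(0, 1);
  sigma_y ~ normal(0, 1);
  eta ~ normal(0, sigma_group);
  y ~ normal(a + eta[group] + b * ml_score, sigma_y);
}

The obvious caveat is that this treats the machine-learning output as a fixed, error-free predictor; propagating its uncertainty into the regression would take additional modeling.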

Read more »

Stan is a probabilistic programming language

November 23, 2017

See here: Stan: A Probabilistic Programming Language. Journal of Statistical Software. (Bob Carpenter, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, Allen Riddell) And here: Stan is Turing Complete. So what? (Bob Carpenter) And, the pre-Stan version: Fully Bayesian computing. (Jouni Kerman and Andrew Gelman) Apparently […]

Read more »

Wine + Stan + Climate change = ?

November 22, 2017

Pablo Almaraz writes: Recently, I published a paper in the journal Climate Research in which I used RStan to conduct the statistical analyses: Almaraz P (2015) Bordeaux wine quality and climate fluctuations during the last century: changing temperatur...

Read more »

