I happened to come across this post from 2012 and noticed a point I’d like to share again. I was discussing an article by David Cox and Deborah Mayo, in which Cox wrote: [Bayesians’] conceptual theories are trying to do two entirely different things. One is trying to extract information from the data, while the […]
Dean Eckles pointed me to this recent report by Andrew Mercer, Arnold Lau, and Courtney Kennedy of the Pew Research Center, titled, “For Weighting Online Opt-In Samples, What Matters Most? The right variables make a big difference for accuracy. Complex statistical methods, not so much.” I like most of what they write, but I think […]
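The report's headline claim — that the choice of weighting variables matters more than the sophistication of the weighting method — can be illustrated with the simplest such method. Below is a minimal sketch of raking (iterative proportional fitting) on simulated opt-in-style data; the variables and population targets are made up for illustration and are not from the Pew report.

```python
import numpy as np

# Minimal raking (iterative proportional fitting) sketch on simulated
# opt-in-style data. Variables and population margins are hypothetical.
rng = np.random.default_rng(0)
n = 1000
age = rng.binomial(1, 0.7, n)  # sample skews toward group 1
edu = rng.binomial(1, 0.6, n)

target_age = {0: 0.5, 1: 0.5}    # assumed population margins
target_edu = {0: 0.65, 1: 0.35}

w = np.ones(n)
for _ in range(20):  # alternate over variables until both margins match
    for var, target in ((age, target_age), (edu, target_edu)):
        total = w.sum()
        for level, share in target.items():
            mask = var == level
            w[mask] *= share * total / w[mask].sum()

# Weighted margins now (approximately) match the targets.
print(round(w[age == 1].sum() / w.sum(), 3),
      round(w[edu == 1].sum() / w.sum(), 3))
```

However fancy the adjustment, it can only correct for the variables you feed it — which is the report's point.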
The post “The most important aspect of a statistical analysis is not what you do with the data, it’s what data you use” (survey adjustment edition) appeared first on Statistical Modeling, Causal Inference, and Social Science.
Introduction Zacco asked on the Stan discourse whether LOO is valid for phylogenetic models. He also referred to Dan’s excellent blog post, which mentioned the iid assumption. Instead of iid, it would be better to talk about the exchangeability assumption, but I (Aki) got a bit lost in my discourse answer (so don’t bother to go read it). […]
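For intuition about what LOO estimates, here is a brute-force leave-one-out computation for a toy conjugate normal-mean model: drop one observation, refit, and score the held-out point under the posterior predictive. The model and numbers are hypothetical; the relevant point is that the resulting score is an estimate of predictive performance for a new observation exchangeable with the one held out — which is exactly the assumption at issue.

```python
import numpy as np
from math import log, pi

# Brute-force LOO for a conjugate model: y_i ~ N(mu, 1), mu ~ N(0, tau^2).
# Hypothetical minimal illustration, not the phylogenetic setting.
rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, 30)  # simulated data, sigma = 1 known
tau = 10.0                    # prior sd on the mean

def log_pred(y_new, y_train, sigma=1.0):
    # Posterior for mu is normal, so the predictive is normal too.
    n = len(y_train)
    post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)
    post_mean = post_var * y_train.sum() / sigma**2
    pred_var = post_var + sigma**2
    return -0.5 * log(2 * pi * pred_var) - (y_new - post_mean)**2 / (2 * pred_var)

# Sum of leave-one-out log predictive densities.
loo = sum(log_pred(y[i], np.delete(y, i)) for i in range(len(y)))
print(round(loo, 2))
```

In practice this refit-n-times loop is replaced by importance-sampling approximations, but the quantity being estimated is the same.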
The post When LOO and other cross-validation approaches are valid appeared first on Statistical Modeling, Causal Inference, and Social Science.
Yuling prepared this poster summarizing our recent work on path sampling using a continuous joint distribution. The method is really cool and represents a real advance over what Xiao-Li and I were doing in our 1998 paper. It’s still gonna have problems in high or even moderate dimensions, and ultimately I think we’re gonna need […]
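The basic path-sampling identity can be checked on a toy problem where the answer is known exactly. Below, a geometric path q_lam ∝ q0^(1-lam) q1^lam connects two unnormalized Gaussians, and log(Z1/Z0) is the integral over lam of E_{q_lam}[log q1(θ) − log q0(θ)]. This 1-D example is a hypothetical illustration of the classical identity, not the continuous-tempering method in the poster.

```python
import numpy as np

# Toy path-sampling (thermodynamic integration) check:
#   log(Z1/Z0) = integral_0^1 E_{q_lam}[ log q1(th) - log q0(th) ] d lam
rng = np.random.default_rng(2)
m, s = 2.0, 0.5  # q1 is an unnormalized N(m, s^2); q0 is N(0, 1)

def log_q0(th):
    return -0.5 * th**2                # Z0 = sqrt(2*pi)

def log_q1(th):
    return -0.5 * (th - m)**2 / s**2   # Z1 = sqrt(2*pi)*s

lams = np.linspace(0.0, 1.0, 51)
u = []
for lam in lams:
    # For this toy path q_lam is itself Gaussian, so we can sample exactly.
    prec = (1 - lam) + lam / s**2
    mu = (lam * m / s**2) / prec
    th = rng.normal(mu, 1.0 / np.sqrt(prec), 20000)
    u.append(np.mean(log_q1(th) - log_q0(th)))

u = np.array(u)
est = float(np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(lams)))  # trapezoid rule
print(round(est, 3), round(float(np.log(s)), 3))  # estimate vs. exact log(Z1/Z0)
```

In one dimension everything here is easy; the interesting work is in making the path and the sampling behave in high dimensions.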
The post Continuous tempering through path sampling appeared first on Statistical Modeling, Causal Inference, and Social Science.
Sean Talts and Bob Carpenter pointed us to this awesome MCMC animation site by Chi Feng. For instance, here’s NUTS on a banana-shaped density. This is indeed super-cool, and maybe there’s a way to connect these with Stan/ShinyStan/Bayesplot so as to automatically make movies of Stan model fits. This would be great, both to help […]
The post Awesome MCMC animation site by Chi Feng! On Github! appeared first on Statistical Modeling, Causal Inference, and Social Science.
tl;dr If you have bad models, bad priors, or bad inference, choose the simplest possible model. If you have good models, good priors, and good inference, use the most elaborate model for predictions. To make interpretation easier, you may use a smaller model with predictive performance similar to that of the most elaborate model. Merijn Mestdagh emailed me […]
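The "smaller model with similar predictive performance" idea can be sketched mechanically: score a set of nested models by out-of-sample error and, among those close to the best, prefer the simplest. The example below uses leave-one-out squared error on polynomial regressions; it is a hypothetical stand-in for illustration, not the Bayesian LOO workflow discussed in the post.

```python
import numpy as np

# Score nested models by leave-one-out error; among models near the
# best, pick the simplest. Data and tolerance are made up.
rng = np.random.default_rng(3)
n = 60
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)  # truth is linear

def loo_mse(deg):
    errs = []
    for i in range(n):
        tr = np.delete(np.arange(n), i)
        coef = np.polyfit(x[tr], y[tr], deg)
        errs.append((np.polyval(coef, x[i]) - y[i])**2)
    return float(np.mean(errs))

scores = {deg: loo_mse(deg) for deg in (1, 2, 3, 5)}
best = min(scores.values())
# Smallest degree whose LOO error is within 10% of the best.
chosen = min(d for d, s in scores.items() if s <= 1.1 * best)
print(chosen, {d: round(s, 3) for d, s in scores.items()})
```

The elaborate models predict about as well as the linear one here, so the rule settles on the small model — easier to interpret, at essentially no predictive cost.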
Rolf Zwaan (whom we last encountered here in “From zero to Ted talk in 18 simple steps”), Alexander Etz, Richard Lucas, and M. Brent Donnellan wrote an article, “Making replication mainstream,” which begins: Many philosophers of science and methodologists have argued that the ability to repeat studies and obtain similar results is an essential component […]
The post “The idea of replication is central not just to scientific practice but also to formal statistics . . . Frequentist statistics relies on the reference set of repeated experiments, and Bayesian statistics relies on the prior distribution which represents the population of effects.” appeared first on Statistical Modeling, Causal Inference, and Social Science.
Chad Kiewiet De Jonge, Gary Langer, and Sofi Sinozich write: This paper presents state-level estimates of the 2016 presidential election using data from the ABC News/Washington Post tracking poll and multilevel regression with poststratification (MRP). While previous implementations of MRP for election forecasting have relied on data from prior elections to establish poststratification targets for […]
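The poststratification step of MRP is just a population-weighted average of cell-level model estimates. Here it is in miniature, with made-up cell estimates and counts; in a real MRP analysis the estimates come from a multilevel regression and the counts from the census or another poststratification frame.

```python
# Poststratification step of MRP, in miniature. All numbers are
# hypothetical, for illustration only.
cells = {
    # (age group, education): (modeled support, population count)
    ("18-34", "no college"): (0.44, 120_000),
    ("18-34", "college"):    (0.58, 80_000),
    ("35+",   "no college"): (0.39, 260_000),
    ("35+",   "college"):    (0.52, 140_000),
}
est = {k: v[0] for k, v in cells.items()}
N = {k: v[1] for k, v in cells.items()}
total = sum(N.values())

# Population estimate: average the cell estimates, weighted by counts.
mrp_estimate = sum(est[k] * N[k] for k in cells) / total
print(round(mrp_estimate, 4))
```

The multilevel regression's job is to produce stable estimates even for cells with few respondents; the weighting step itself is this simple.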
The post Mister P wins again appeared first on Statistical Modeling, Causal Inference, and Social Science.
Pierre Jacob, Lawrence Murray, Chris Holmes, and Christian Robert write: In modern applications, statisticians are faced with integrating heterogeneous data modalities relevant for an inference, prediction, or decision problem. In such circumstances, it is convenient to use a graphical model to represent the statistical dependencies, via a set of connected “modules”, each relating to a specific […]
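The joint-versus-modular distinction can be seen in a two-module toy example: a trusted module estimates a shared parameter, and a suspect second module either does ("joint") or does not ("cut") feed back into that estimate. The setup below is a hypothetical sketch with flat-prior point estimates, far simpler than the paper's general framework.

```python
import numpy as np

# Two modules sharing a parameter phi. Module 1: y1 ~ N(phi, 1), trusted.
# Module 2: y2 ~ N(phi, 1) on paper, but its data carry an unmodeled bias.
# Made-up numbers, for illustration only.
rng = np.random.default_rng(4)
y1 = rng.normal(1.0, 1.0, 50)
y2 = rng.normal(1.8, 1.0, 50)  # +0.8 bias the model ignores

# Modular ("cut") inference: phi is estimated from module 1 alone.
phi_cut = y1.mean()

# Joint inference: both modules inform phi, so the misspecified
# module drags the shared estimate toward its biased data.
phi_joint = np.concatenate([y1, y2]).mean()

print(round(phi_cut, 2), round(phi_joint, 2))
```

When every module is well specified, joint inference is more efficient; when one module is suspect, cutting the feedback protects the rest — which is the trade-off the paper analyzes.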
The post Joint inference or modular inference? Pierre Jacob, Lawrence Murray, Chris Holmes, Christian Robert discuss the strengths and weaknesses of these choices appeared first on Statistical Modeling, Causal Inference, and Social Science.
The basic identity of Bayesian inference is p(parameters|data) ∝ p(parameters) p(data|parameters). And, for predictions, p(predictions|data) = ∫ p(predictions|parameters, data) p(parameters|data) d(parameters). In these expressions (and the corresponding simpler versions for maximum likelihood), “parameters” and “data” are unitary objects. Yes, it can be helpful to think of the parameter object as a list or vector of individual parameters; and […]
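Both displayed identities can be computed directly on a discrete grid, which makes the "unitary object" reading concrete: the whole parameter enters as one thing. A minimal coin-flip sketch, with made-up numbers:

```python
import numpy as np

# Posterior ∝ prior × likelihood, and the posterior predictive as an
# average of p(new data | theta) over the posterior, on a grid.
theta = np.linspace(0.01, 0.99, 99)       # grid over the parameter
prior = np.ones_like(theta) / len(theta)  # flat prior
k, n = 7, 10                              # observed: 7 heads in 10 flips

like = theta**k * (1 - theta)**(n - k)    # p(data | theta)
post = prior * like
post /= post.sum()                        # p(theta | data), normalized

# p(next flip is heads | data) = sum over theta of theta * p(theta | data)
pred_heads = float(np.sum(theta * post))
print(round(pred_heads, 3))
```

The grid sum is the discrete version of the integral over parameters in the second identity; with a flat prior it recovers the usual Beta(8, 4) posterior mean of 8/12.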
The post Divisibility in statistics: Where is it needed? appeared first on Statistical Modeling, Causal Inference, and Social Science.