Posts Tagged ‘ Statistical computing ’

Fitting hierarchical GLMs in package X is like driving car Y

April 17, 2017

Given that Andrew started the Gremlin theme (the car in the image at the right), I thought it would only be fitting to link to the following amusing blog post: Chris Brown: Choosing R packages for mixed effects modelling based on the car you drive (on the Seascape Models blog). It's exactly what it says […]

Read more »

Bayesian Posteriors are Calibrated by Definition

April 12, 2017

Time to get positive. I was asking Andrew whether it's true that I have the right coverage in Bayesian posterior intervals if I generate the parameters from the prior and the data from the parameters. He replied that yes, indeed, that is true, and directed me to: Cook, S.R., Gelman, A. and Rubin, D.B. 2006. […]
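The check described in the excerpt can be sketched in a few lines (my own toy example, not from the post): draw a parameter from the prior, draw data from the likelihood given that parameter, and count how often the central 90% posterior interval covers the parameter. A conjugate normal-normal model keeps the posterior exact:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_sims, z90 = 5, 20_000, 1.6449  # z90: central 90% normal quantile
covered = 0
for _ in range(n_sims):
    theta = rng.normal(0.0, 1.0)             # parameter drawn from the prior
    y = rng.normal(theta, 1.0, size=n_obs)   # data drawn given the parameter
    # Conjugate posterior for N(0,1) prior and unit-variance likelihood:
    post_var = 1.0 / (1.0 + n_obs)
    post_mean = y.sum() * post_var
    half = z90 * np.sqrt(post_var)
    covered += (post_mean - half <= theta <= post_mean + half)
print(covered / n_sims)  # close to 0.90
```

Repeating with any interval level gives the same agreement, which is the sense in which the posteriors are calibrated by definition.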

Read more »

Stacking, pseudo-BMA, and AIC type weights for combining Bayesian predictive distributions

April 11, 2017

This post is by Aki. We have often been asked in the Stan user forum how to do model combination for Stan models. Bayesian model averaging (BMA) by computing marginal likelihoods is challenging in theory and even more challenging in practice using only the MCMC samples obtained from the full model posteriors. Some users have […]
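As a rough illustration (my own sketch, not the paper's full recipe, which prefers stacking of predictive distributions and a Bayesian-bootstrap-regularized pseudo-BMA+), plain pseudo-BMA weights are just a softmax over the models' estimated expected log predictive densities:

```python
import numpy as np

def pseudo_bma_weights(elpd):
    """Plain pseudo-BMA weights: softmax of estimated expected log
    predictive densities (elpd), one value per candidate model."""
    elpd = np.asarray(elpd, dtype=float)
    w = np.exp(elpd - elpd.max())  # subtract max for numerical stability
    return w / w.sum()

# Hypothetical elpd estimates (e.g. from cross-validation) for three models
print(pseudo_bma_weights([-120.3, -118.1, -125.0]))
```

The model with the highest elpd gets the largest weight, and differences of a few log-density units already make the weights very lopsided, which is one reason the paper regularizes them.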

Read more »

“Scalable Bayesian Inference with Hamiltonian Monte Carlo” (Michael Betancourt’s talk this Thurs at Columbia)

April 4, 2017

Despite the promise of big data, inferences are often limited not by sample size but rather by systematic effects. Only by carefully modeling these effects can we take full advantage of the data—big data must be complemented with big models and the algorithms that can fit them. One […]

Read more »

Running Stan with external C++ code

March 31, 2017

Ben writes: Starting with the 2.13 release, it is much easier to use external C++ code in a Stan program. This vignette briefly illustrates how to do so. He continues: Suppose that you have (part of) a Stan program that involves Fibonacci numbers, such as functions { int fib(int n); int fib(int n) { if […]
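The excerpt's Stan snippet is cut off before the function body, so the base cases are an assumption on my part; the recursion its fib declaration sets up is the standard one, sketched here in Python:

```python
def fib(n: int) -> int:
    """Naive recursive Fibonacci (assumed base cases fib(0)=0,
    fib(1)=1; the truncated Stan excerpt does not show them)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

The point of the vignette is that the body of such a function can live in external C++ rather than in the Stan program itself.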

Read more »

Ensemble Methods are Doomed to Fail in High Dimensions

March 15, 2017

By ensemble methods, I (Bob, not Andrew) mean approaches that scatter points in parameter space and then make moves by interpolating or extrapolating among subsets of them. Two prominent examples are Ter Braak's differential evolution and Goodman and Weare's walkers. There are extensions and computer implementations of these algorithms. For example, the Python […]
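The problem with interpolation is easy to see numerically (a standard-normal example of my own, not from the post): in high dimensions, draws from N(0, I_d) concentrate on a thin shell of radius about sqrt(d), while the midpoint of two independent draws has radius about sqrt(d/2) and so lands in a region carrying almost no probability mass:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 1000, 2000
x = rng.standard_normal((n, d))  # n independent draws from N(0, I_d)
y = rng.standard_normal((n, d))
mid = 0.5 * (x + y)              # interpolated "ensemble" proposals

r_sample = np.linalg.norm(x, axis=1).mean()   # ~ sqrt(1000) ≈ 31.6
r_mid = np.linalg.norm(mid, axis=1).mean()    # ~ sqrt(500)  ≈ 22.4
print(r_sample, r_mid)
```

The midpoints sit closer to the mode, but far from the typical set where essentially all of the posterior's mass lives, so such proposals are systematically off target as d grows.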

Read more »

Expectation propagation as a way of life: A framework for Bayesian inference on partitioned data

March 11, 2017

After three years, we finally have an updated version of our “EP as a way of life” paper. Authors are Andrew Gelman, Aki Vehtari, Pasi Jylänki, Tuomas Sivula, Dustin Tran, Swupnil Sahai, Paul Blomstedt, John Cunningham, David Schiminovich, and Christian Robert. Aki deserves credit for putting this all together into a coherent whole. Here’s the […]

Read more »

A fistful of Stan case studies: divergences and bias, identifying mixtures, and weakly informative priors

March 7, 2017

Following on from his talk at StanCon, Michael Betancourt just wrote three Stan case studies, all of which are must-reads. Diagnosing Biased Inference with Divergences: this case study discusses the subtleties of accurate Markov chain Monte Carlo estimation and how divergences can be used to identify biased estimation in practice. Identifying Bayesian Mixture […]

Read more »

Advice when debugging at 11pm

March 6, 2017

Add one feature to your model and test and debug with fake data before going on. Don’t try to add two features at once. The post Advice when debugging at 11pm appeared first on Statistical Modeling, Causal Inference, and Social Science.
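A minimal sketch of that workflow (my own toy example, with ordinary least squares standing in for the model): simulate fake data from known parameter values and confirm the fit recovers them before adding the next feature.

```python
import numpy as np

# Fake-data check: simulate from known parameters, refit, and
# confirm the fit recovers them before adding anything else.
rng = np.random.default_rng(2)
true_alpha, true_beta, sigma = 1.5, -0.7, 0.5
x = rng.uniform(-2, 2, size=500)
y = true_alpha + true_beta * x + rng.normal(0, sigma, size=500)

X = np.column_stack([np.ones_like(x), x])      # intercept + slope design
alpha_hat, beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(alpha_hat, beta_hat)  # should be near 1.5 and -0.7
```

If the recovered values are far from the ones you simulated, the bug is in the model or the fitting code, not the real data, which is exactly what makes this check worth running before midnight.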

Read more »

Facebook’s Prophet uses Stan

March 1, 2017

Sean Taylor, a research scientist at Facebook and a Stan user, writes: I wanted to tell you about an open source forecasting package we just released called Prophet. I thought the readers of your blog might be interested in both the package and the fact that we built it on top of Stan. Under the hood, […]

Read more »

