# Category: Bayesian Statistics

## Active learning and decision making with varying treatment effects!

In a new paper, Iiris Sundin, Peter Schulam, Eero Siivola, Aki Vehtari, Suchi Saria, and Samuel Kaski write: Machine learning can help personalized decision support by learning models to predict individual treatment effects (ITE). This work studies the reliability of prediction-based decision-making in a task of deciding which action a to take for a target […]

## Some Stan and Bayes short courses!

Robert Grant writes: I have a couple of events coming up that people might be interested in. They are all at bayescamp.com/courses Stan Taster Webinar is on 15 May, runs for one hour and is only £15. I’ll demo Stan through R (and maybe PyStan and CmdStan if the interest is there on the day), […]

## What’s a good default prior for regression coefficients? A default Edlin factor of 1/2?

The punch line: “Your readers are my target audience. I really want to convince them that it makes sense to divide regression coefficients by 2 and their standard errors by sqrt(2). Of course, additional prior information should be used whenever available.” The background: It started with an email from Erik van Zwet, who wrote: In […]
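Van Zwet's proposed default amounts to one line of arithmetic; here is a minimal sketch (the function name, interface, and example numbers are mine, not from the post):

```python
import math

def edlin_shrink(beta_hat, se):
    """Apply the proposed default shrinkage (Edlin factor 1/2): halve the
    regression coefficient and divide its standard error by sqrt(2).
    Function name and interface are illustrative, not from the post."""
    return beta_hat / 2, se / math.sqrt(2)

# Example: a noisy estimate of 3.0 with standard error 1.4
beta, se = edlin_shrink(3.0, 1.4)
```

The se adjustment by sqrt(2) rather than 2 keeps the shrunken estimate's implied uncertainty from collapsing as fast as the point estimate.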

## Here’s an idea for not getting tripped up with default priors . . .

I put this in the Prior Choice Recommendations wiki a while ago: “The prior can often only be understood in the context of the likelihood”: http://www.stat.columbia.edu/~gelman/research/published/entropy-19-00555-v2.pdf Here’s an idea for not getting tripped up with default priors: For each parameter (or other quantity of interest), compare the posterior sd to the prior sd. If the posterior sd for […]
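One way to operationalize that check, as a sketch; the 0.1 threshold and all names here are illustrative assumptions, not necessarily the wiki's exact rule:

```python
def prior_informative_flag(posterior_sd, prior_sd, threshold=0.1):
    """Compare posterior sd to prior sd for a parameter (or other quantity
    of interest). If the posterior sd is more than `threshold` times the
    prior sd, the data have not overwhelmed the prior, so the prior is
    doing real work and deserves scrutiny. Threshold and names are
    illustrative assumptions."""
    ratio = posterior_sd / prior_sd
    return ratio, ratio > threshold

# A parameter whose posterior sd is half its prior sd gets flagged:
ratio, flagged = prior_informative_flag(posterior_sd=0.5, prior_sd=1.0)
```

The logic runs in the direction the wiki excerpt suggests: when the posterior sd is tiny relative to the prior sd, the likelihood dominates and the default prior is harmless; when the two are comparable, the prior matters.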

## Ben Lambert. 2018. A Student’s Guide to Bayesian Statistics.

Ben Goodrich, in a Stan forums survey of Stan video lectures, points us to the following book, which introduces Bayes, HMC, and Stan: Ben Lambert. 2018. A Student’s Guide to Bayesian Statistics. SAGE Publications. If Ben Goodrich is recommending it, it’s bound to be good. Amazon reviewers seem to really like it, too. You may […]

## Mister P for surveys in epidemiology — using Stan!

Jon Zelner points us to this new article in the American Journal of Epidemiology, “Multilevel Regression and Poststratification: A Modelling Approach to Estimating Population Quantities From Highly Selected Survey Samples,” by Marnie Downes, Lyle Gurrin, Dallas English, Jane Pirkis, Dianne Currier, Matthew Spittal, and John Carlin, which begins: Large-scale population health studies face increasing difficulties […]

## My two talks in Montreal this Friday, 22 Mar

McGill University Biostatistics seminar, Purvis Hall, 102 Pine Ave. West, Room 25, 1-2pm Fri 22 Mar: Resolving the Replication Crisis Using Multilevel Modeling In recent years we have come to learn that many prominent studies in social science and medicine, conducted at leading research institutions, published in top journals, and publicized in respected news outlets, […]

Ed Bein writes: I’m hoping you can clarify a Bayesian “metaphysics” question for me. Let me note I have limited experience with Bayesian statistics. In frequentist statistics, probability has to do with what happens in the long run. For example, a p value is defined in terms of what happens if, from now till eternity, […]

## Estimating treatment effects on rates of rare events using precursor data: Going further with hierarchical models.

Someone points to my paper with Gary King from 1998, Estimating the probability of events that have never occurred: When is your vote decisive?, and writes: In my area of early childhood intervention, there are certain outcomes which are rare. Things like premature birth, confirmed cases of child-maltreatment, SIDS, etc. They are rare enough that […]

## R package for Type M and Type S errors

Andy Garland Timm writes: My package for working with Type S/M errors in hypothesis testing, ‘retrodesign’, is now up on CRAN. It builds on the code provided by Gelman and Carlin (2014) with functions for calculating type S/M errors across a variety of effect sizes as suggested for design analysis in the paper, a function […]
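The underlying calculations from Gelman and Carlin (2014) are easy to sketch outside R as well; this Python version mirrors what the package computes, with my own names and a simulation-based rather than closed-form exaggeration ratio:

```python
import random
from statistics import NormalDist

def retrodesign_sketch(true_effect, se, alpha=0.05, n_sims=10_000, seed=1):
    """Power, Type S rate, and Type M (exaggeration) ratio for an estimate
    with standard error `se` when the true effect is `true_effect`, after
    Gelman and Carlin (2014). A sketch of what the 'retrodesign' R package
    computes; names and the simulated Type M are my own choices."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)           # significance cutoff, ~1.96
    lam = true_effect / se
    power = 1 - nd.cdf(z - lam) + nd.cdf(-z - lam)
    type_s = nd.cdf(-z - lam) / power       # P(wrong sign | significant)
    rng = random.Random(seed)
    draws = [true_effect + rng.gauss(0, se) for _ in range(n_sims)]
    significant = [abs(d) for d in draws if abs(d) > z * se]
    type_m = (sum(significant) / len(significant)) / abs(true_effect)
    return power, type_s, type_m

# Illustrative numbers (mine): a true effect of 2 measured with se 8.1
power, type_s, type_m = retrodesign_sketch(2, 8.1)
```

With an effect that small relative to the noise, power is near the 5% floor, roughly a quarter of significant results have the wrong sign, and significant estimates overstate the effect several-fold.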

## “We’ve Got More Than One Model: Evaluating, comparing, and extending Bayesian predictions”

I was asked to speak at the American Association of Pharmaceutical Scientists Predictive Modeling Workshop, and a title was needed. This is what I came up with:
We’ve Got More Than One Model: Evaluating, comparing, and extending Bayesian predic…

## HMC step size: How does it scale with dimension?

A bunch of us were arguing about how the Hamiltonian Monte Carlo step size should scale with dimension, and so Bob did the Bob thing and just ran an experiment on the computer to figure it out. Bob writes: This is for standard normal independent in all dimensions. Note the log scale on the x […]
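For the record, the known theoretical answer for an iid target is that the step size must shrink like d^(-1/4) to hold the acceptance rate constant. Here is a sketch of the kind of experiment described (my code, not Bob's): leapfrog HMC on an iid standard normal target, comparing average Metropolis acceptance at a fixed step size as dimension grows.

```python
import math
import random

def avg_accept(dim, step, n_leap=10, n_trials=200, seed=0):
    """Average Metropolis acceptance for HMC on an iid standard normal
    target in `dim` dimensions (potential U(q) = sum(q_i^2)/2, so the
    gradient of U is just q). Illustrative sketch, not the post's code."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        q = [rng.gauss(0, 1) for _ in range(dim)]   # start at stationarity
        p = [rng.gauss(0, 1) for _ in range(dim)]
        h0 = 0.5 * (sum(x * x for x in q) + sum(x * x for x in p))
        for _ in range(n_leap):                     # leapfrog integrator
            p = [pi - 0.5 * step * qi for pi, qi in zip(p, q)]
            q = [qi + step * pi for qi, pi in zip(q, p)]
            p = [pi - 0.5 * step * qi for pi, qi in zip(p, q)]
        h1 = 0.5 * (sum(x * x for x in q) + sum(x * x for x in p))
        total += min(1.0, math.exp(h0 - h1))        # accept prob exp(-dH)
    return total / n_trials

# Acceptance degrades at fixed step size as dimension grows; scaling the
# step as dim ** -0.25 roughly restores it.
```

The energy error per dimension accumulates across independent dimensions, which is where the d^(-1/4) scaling comes from.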

## “Do you have any recommendations for useful priors when datasets are small?”

Someone who wishes to remain anonymous writes: I just read your paper with Daniel Simpson and Michael Betancourt, The Prior Can Often Only Be Understood in the Context of the Likelihood, and I find it refreshing to read that “the practical utility of a prior distribution within a given analysis then depends critically on both […]

## I’m getting the point

A long-winded X validated discussion on the [textbook] mean-variance conjugate posterior for the Normal model left me [mildly] depressed at the point and use of answering questions on this forum. Especially as it came at the same time as a catastrophic outcome for my mathematical statistics exam. Possibly an incentive to quit X validated as […]

## Our hypotheses are not just falsifiable; they’re actually false.

Everybody’s talkin bout Popper, Lakatos, etc. I think they’re great. Falsificationist Bayes, all the way, man! But there’s something we need to be careful about. All the statistical hypotheses we ever make are false. That is, if a hypothesis becomes specific enough to make (probabilistic) predictions, we know that with enough data we will be […]

## Fitting multilevel models when the number of groups is small

Matthew Poes writes: I have a question that I think you have answered for me before. There is an argument to be made that HLM should not be performed if a sample is too small (too few level-2 units and too few level-1 units). Lots of papers written with guidelines on what those should […]
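For intuition about why a multilevel model can still make sense with few groups: the multilevel estimate for each group is a precision-weighted average of that group's own estimate and the overall mean. A minimal sketch of the normal-normal case (with the between-group sd tau treated as known, which sidesteps the real small-sample difficulty of estimating it):

```python
def partial_pool(y_j, sigma_j, mu, tau):
    """Posterior mean for group j's effect in a normal-normal hierarchical
    model: a precision-weighted average of the group estimate y_j (with
    standard error sigma_j) and the population mean mu (with between-group
    sd tau). With few groups the hard part is estimating tau, which this
    sketch takes as given."""
    w_data = 1.0 / sigma_j**2
    w_prior = 1.0 / tau**2
    return (w_data * y_j + w_prior * mu) / (w_data + w_prior)

# A noisy group estimate (sigma_j = 2) is pulled most of the way toward mu:
est = partial_pool(y_j=10.0, sigma_j=2.0, mu=0.0, tau=1.0)
```

Precisely because the weights adapt to each group's precision, partial pooling degrades gracefully rather than failing outright when group counts are small.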

## Of multiple comparisons and multilevel models

Kleber Neves writes: I’ve been a long-time reader of your blog, eventually becoming more involved with the “replication crisis” and such (currently, I work with the Brazilian Reproducibility Initiative). Anyway, as I’m now going deeper into statistics, I feel like I still lack some foundational intuitions (I was trained as a half computer scientist/half experimental […]

## Transforming parameters in a simple time-series model; debugging the Jacobian

So. This one is pretty simple. But the general idea could be useful to some of you. So here goes. We were fitting a model with an autocorrelation parameter, rho, which was constrained to be between 0 and 1. The model looks like this: eta_t ~ normal(rho*eta_{t-1}, sigma_res), for t = 2, 3, … T […]
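Since rho lives in (0, 1), a typical reparameterization samples an unconstrained x and sets rho = inv_logit(x); the log density then needs the log Jacobian log(rho) + log(1 - rho), which is the adjustment Stan applies automatically for a parameter declared with `<lower=0, upper=1>`. A sketch, including the finite-difference check that is a standard way to debug a Jacobian (my code, not the post's):

```python
import math

def inv_logit(x):
    """Map an unconstrained x to rho in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def log_jacobian_inv_logit(x):
    """log |d rho / d x| for rho = inv_logit(x), which works out to
    log(rho) + log(1 - rho). Illustrative sketch for intuition."""
    rho = inv_logit(x)
    return math.log(rho) + math.log(1.0 - rho)

# Finite-difference check of the Jacobian, the kind of debugging the post
# describes: the analytic log-derivative should match log of a central
# difference.
x, h = 0.3, 1e-6
fd = (inv_logit(x + h) - inv_logit(x - h)) / (2 * h)
```

If the analytic and finite-difference values disagree, the Jacobian term in the transformed density is wrong.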

## Data partitioning as an essential element in evaluation of predictive properties of a statistical method

In a discussion of our stacking paper, the point came up that LOO (leave-one-out cross validation) requires a partitioning of data—you can only “leave one out” if you define what “one” is. It is sometimes said that LOO “relies on the data-exchangeability assumption,” but I don’t think that’s quite the right way to put it, […]
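The point that "one" must be defined before LOO is even defined can be made concrete: the same dataset yields different leave-one-out partitions depending on what counts as a unit. A sketch (function names and toy data are mine):

```python
def loo_splits(data):
    """Enumerate leave-one-out partitions of a list: each split holds out
    one 'unit' and trains on the rest. The point from the post: you must
    first decide what a unit is (a row, a group, a time point) before LOO
    is defined. Illustrative sketch."""
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        held_out = data[i]
        yield train, held_out

# With grouped data, 'one' might instead mean one whole group:
groups = {"a": [1, 2], "b": [3], "c": [4, 5]}
group_splits = [({k: v for k, v in groups.items() if k != g}, groups[g])
                for g in groups]
```

Leaving out single rows versus whole groups answers different predictive questions (new observation from an existing group versus a new group), which is the partitioning choice the post is pointing at.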