Everybody’s talkin’ ’bout Popper, Lakatos, etc. I think they’re great. Falsificationist Bayes, all the way, man! But there’s something we need to be careful about. All the statistical hypotheses we ever make are false. That is, if a hypothesis becomes specific enough to make (probabilistic) predictions, we know that with enough data we will be […]

# Category: Bayesian Statistics

## Fitting multilevel models when the number of groups is small

Matthew Poes writes: I have a question that I think you have answered for me before. There is an argument to be made that HLM should not be performed if a sample is too small (too few level-2 and too few level-1 units). Lots of papers written with guidelines on what those should […]

## Of multiple comparisons and multilevel models

Kleber Neves writes: I’ve been a long-time reader of your blog, eventually becoming more involved with the “replication crisis” and such (currently, I work with the Brazilian Reproducibility Initiative). Anyway, as I’m now going deeper into statistics, I feel like I still lack some foundational intuitions (I was trained as a half computer scientist/half experimental […]

## Transforming parameters in a simple time-series model; debugging the Jacobian

So. This one is pretty simple. But the general idea could be useful to some of you. So here goes. We were fitting a model with an autocorrelation parameter, rho, which was constrained to be between 0 and 1. The model looks like this: eta_t ~ normal(rho*eta_{t-1}, sigma_res), for t = 2, 3, … T […]
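The general pattern here can be sketched in a few lines. Below is a minimal illustration (not the actual model from the post) of evaluating an unnormalized log posterior for an AR(1) model on the unconstrained scale, with the log-Jacobian adjustments for mapping rho from the real line to (0, 1) via the inverse logit and sigma from the real line to (0, ∞) via exp; flat priors on the constrained scale are assumed for simplicity.

```python
import numpy as np

def inv_logit(theta):
    return 1.0 / (1.0 + np.exp(-theta))

def log_posterior(theta, log_sigma, eta):
    """Unnormalized log posterior for eta_t ~ normal(rho * eta_{t-1}, sigma),
    t = 2..T, with rho = inv_logit(theta) in (0, 1) and sigma = exp(log_sigma) > 0.
    Sampling is done on the unconstrained (theta, log_sigma) scale, so we must
    add the log absolute Jacobian of each transform."""
    rho = inv_logit(theta)
    sigma = np.exp(log_sigma)
    resid = eta[1:] - rho * eta[:-1]
    log_lik = np.sum(-0.5 * (resid / sigma) ** 2 - np.log(sigma))
    # Jacobian adjustments for the change of variables:
    # d(rho)/d(theta) = rho * (1 - rho);  d(sigma)/d(log_sigma) = sigma
    log_jacobian = np.log(rho) + np.log(1.0 - rho) + log_sigma
    return log_lik + log_jacobian
```

Forgetting the `log_jacobian` term is exactly the kind of bug the post's title alludes to: the sampler still runs, but it targets the wrong distribution.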

## Data partitioning as an essential element in evaluation of predictive properties of a statistical method

In a discussion of our stacking paper, the point came up that LOO (leave-one-out cross validation) requires a partitioning of data—you can only “leave one out” if you define what “one” is. It is sometimes said that LOO “relies on the data-exchangeability assumption,” but I don’t think that’s quite the right way to put it, […]
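The partitioning point can be made concrete with a toy sketch: once you define what "one" observation is, LOO is just a loop over those units. This is an illustrative implementation (the scoring function `normal_plugin` is a made-up plug-in predictive density, not anything from the stacking paper).

```python
import numpy as np

def loo_log_scores(y, log_pred_density):
    """Leave-one-out cross validation: for each i, 'fit' on y without y[i]
    and score the held-out y[i]. The partition of the data into units
    (here, individual elements of y) must be chosen by the analyst."""
    n = len(y)
    scores = np.empty(n)
    for i in range(n):
        train = np.delete(y, i)
        scores[i] = log_pred_density(train, y[i])
    return scores

def normal_plugin(train, y_i):
    # Plug-in normal predictive density with sample mean and sd of the
    # training fold (a crude stand-in for a full posterior predictive)
    mu, sd = train.mean(), train.std(ddof=1)
    return -0.5 * np.log(2 * np.pi) - np.log(sd) - 0.5 * ((y_i - mu) / sd) ** 2
```

Changing what counts as "one" unit (e.g., leaving out whole groups rather than single observations) changes the predictive task being evaluated, which is the point of the post.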

## AIQ [book review]

AIQ was my Christmas day read, which I mostly read while the rest of the household was still sleeping. The book, written by two Bayesians, Nick Polson and James Scott, was published before the ISBA meeting last year, but I only bought it on my last trip to Warwick [as a Xmas present]. This is […]

## “The Book of Why” by Pearl and Mackenzie

Judea Pearl and Dana Mackenzie sent me a copy of their new book, “The book of why: The new science of cause and effect.” There are some things I don’t like about their book, and I’ll get to that, but I want to start with a central point of theirs with which I agree strongly. […]

The post “The Book of Why” by Pearl and Mackenzie appeared first on Statistical Modeling, Causal Inference, and Social Science.

## Did she really live 122 years?

Even more famous than “the Japanese dude who won the hot dog eating contest” is “the French lady who lived to be 122 years old.” But did she really? Paul Campos points us to this post, where he writes: Here’s a statistical series, laying out various points along the 100 longest known durations of a […]

## Objective Bayes conference in June

Christian Robert points us to this Objective Bayes Methodology Conference in Warwick, England in June. I’m not a big fan of the term “objective Bayes” (see my paper with Christian Hennig, Beyond subjective and objective in statistics), but the conference itself looks interesting, and there are still a few weeks left for people to submit […]

## “Principles of posterior visualization”

What better way to start the new year than with a discussion of statistical graphics? Mikhail Shubin has this great post from a few years ago on Bayesian visualization. He lists the following principles: Principle 1: Uncertainty should be visualized Principle 2: Visualization of variability ≠ Visualization of uncertainty Principle 3: Equal probability = Equal […]

## “Check yourself before you wreck yourself: Assessing discrete choice models through predictive simulations”

Timothy Brathwaite sends along this wonderfully-titled article (also here, and here’s the replication code), which begins: Typically, discrete choice modelers develop ever-more advanced models and estimation methods. Compared to the impressive progress in model development and estimation, model-checking techniques have lagged behind. Often, choice modelers use only crude methods to assess how well an estimated […]
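The article's central idea, checking a discrete choice model by simulating choices from it, can be sketched as follows. This is not the authors' code; the multinomial-logit utilities and "observed" choices are invented here purely to show the mechanics of comparing observed choice shares against the spread of replicated shares.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(u):
    e = np.exp(u - u.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical fitted utilities: 500 choice situations, 3 alternatives
utilities = rng.normal(size=(500, 3))
probs = softmax(utilities)

# "Observed" choices (here drawn from the same model, for illustration)
observed = np.array([rng.choice(3, p=p) for p in probs])
obs_shares = np.bincount(observed, minlength=3) / len(observed)

# Predictive simulation: draw replicated choice vectors and record the
# market share of each alternative in each replication
n_rep = 200
rep_shares = np.empty((n_rep, 3))
for r in range(n_rep):
    rep = np.array([rng.choice(3, p=p) for p in probs])
    rep_shares[r] = np.bincount(rep, minlength=3) / len(rep)

# If observed shares fall well outside the replicated spread, the model
# is failing to reproduce even this coarse feature of the data
lo, hi = rep_shares.min(axis=0), rep_shares.max(axis=0)
```

Market shares are only one test statistic; the same loop works for any summary of the choices one cares about.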

## What is probability?

This came up in a discussion a few years ago, where people were arguing about the meaning of probability: is it long-run frequency, is it subjective belief, is it betting odds, etc? I wrote: Probability is a mathematical concept. I think Martha Smith’s analogy to points, lines, and arithmetic is a good one. Probabilities are […]

## Binomial vs Bernoulli

An interesting confusion on X validated where someone was convinced that using the Bernoulli representation of a sequence of Bernoulli experiments led to different posterior probabilities of two possible models than when using their Binomial representation. The confusion actually stemmed from using different conditionals, namely N¹=4,N²=1 in the first case (for a model M¹ with […]
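The resolution of the confusion is that the two representations give likelihoods differing only by a constant in the parameter, the binomial coefficient, which cancels in any posterior or Bayes factor computed from properly matched conditionals. A small numerical check (with made-up data of 4 successes in 5 trials, not the N¹, N² setup from the thread):

```python
from math import comb

def bernoulli_lik(p, data):
    # Likelihood of an ordered sequence of individual Bernoulli outcomes
    out = 1.0
    for x in data:
        out *= p if x else (1 - p)
    return out

def binomial_lik(p, n, k):
    # Likelihood of observing the count k out of n, order forgotten
    return comb(n, k) * p**k * (1 - p)**(n - k)

data = [1, 1, 0, 1, 1]  # 4 successes out of 5 trials
n, k = len(data), sum(data)
# The two likelihoods differ only by the constant C(n, k), so posteriors
# and Bayes factors built from matched conditionals are identical
for p in (0.2, 0.5, 0.8):
    assert abs(binomial_lik(p, n, k) / bernoulli_lik(p, data) - comb(n, k)) < 1e-12
```

With a Beta(a, b) prior, both representations yield the same Beta(a + k, b + n − k) posterior; discrepancies arise only when the two models are conditioned on different information.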

## Exploring model fit by looking at a histogram of a posterior simulation draw of a set of parameters in a hierarchical model

Opher Donchin writes in with a question: We’ve been finding it useful in the lab recently to look at the histogram of samples from the parameter combined across all subjects. We think, but we’re not sure, that this reflects the distribution of that parameter when marginalized across subjects and can be a useful visualization. It […]
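The subtlety in the question is that pooling draws across subjects mixes two sources of spread: between-subject variation and per-subject posterior uncertainty. A sketch with fake simulated draws (the sizes and distributions here are invented for illustration, not from the lab's model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior draws: 1000 simulation draws of one parameter
# for each of 20 subjects in a hierarchical model
n_draws, n_subjects = 1000, 20
subject_means = rng.normal(0.0, 1.0, size=n_subjects)  # between-subject variation
draws = subject_means + rng.normal(0.0, 0.3, size=(n_draws, n_subjects))  # posterior uncertainty

# Pooling across subjects within a single draw approximates the population
# distribution of the parameter for that draw; pooling over all draws as
# well additionally smears in posterior uncertainty about each subject
pooled = draws.ravel()
counts, edges = np.histogram(pooled, bins=30)
```

Comparing the histogram of a single draw (one row of `draws`) with the histogram of `pooled` makes the distinction visible: the former is one posterior realization of the population distribution, the latter its average over the posterior.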
