Posts Tagged ‘ Bayesian statistics ’

Representists versus Propertyists: RabbitDucks – being good for what?

April 19, 2017

It is not that unusual in statistics to get the same statistical output (uncertainty interval, estimate, tail probability, etc.) for every sample, or some samples, or the same distribution of outputs, or the same expectations of outputs, or just close-enough expectations of outputs. Then, I would argue, one has a variation on a DuckRabbit. In […]


The Efron transition? And the wit and wisdom of our statistical elders

April 15, 2017

Stephen Martin writes: Brad Efron seems to have transitioned from “Bayes just isn’t as practical” to “Bayes can be useful, but EB is easier” to “Yes, Bayes should be used in the modern day” pretty continuously across three decades. http://www2.stat.duke.edu/courses/Spring10/sta122/Handouts/EfronWhyEveryone.pdf http://projecteuclid.org/download/pdf_1/euclid.ss/1028905930 http://statweb.stanford.edu/~ckirby/brad/other/2009Future.pdf Also, Lindley’s comment in the first article is just GOLD: “The last example […]


Causal inference conference at Columbia University on Sat 6 May: Varying Treatment Effects

April 14, 2017

Hey! We’re throwing a conference: Varying Treatment Effects. The literature on causal inference focuses on estimating average effects, but the very notion of an “average effect” acknowledges variation. Relevant buzzwords are treatment interactions, situational effects, and personalized medicine. In this one-day conference we shall focus on varying effects in social science and policy research, with […]


Bayesian Posteriors are Calibrated by Definition

April 12, 2017

Time to get positive. I was asking Andrew whether it’s true that I have the right coverage in Bayesian posterior intervals if I generate the parameters from the prior and the data from the parameters. He replied that yes indeed that is true, and directed me to: Cook, S.R., Gelman, A. and Rubin, D.B. 2006. […]
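The Cook, Gelman, and Rubin procedure the excerpt describes can be checked directly in a toy conjugate model. The sketch below is my own illustration, not code from the post: draw θ from a N(0, 1) prior, draw one observation y | θ from N(θ, 1), form the conjugate posterior N(y/2, 1/2), and verify that 90% central posterior intervals cover the true θ about 90% of the time.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_sims = 2000
z = norm.ppf(0.95)  # multiplier for a 90% central interval

covered = 0
for _ in range(n_sims):
    theta = rng.normal(0.0, 1.0)   # parameter drawn from the prior
    y = rng.normal(theta, 1.0)     # data drawn given the parameter
    # conjugate posterior for prior N(0,1) and likelihood N(theta,1): N(y/2, 1/2)
    post_mean, post_sd = y / 2.0, np.sqrt(0.5)
    lo, hi = post_mean - z * post_sd, post_mean + z * post_sd
    covered += (lo <= theta <= hi)

coverage = covered / n_sims
print(coverage)  # ≈ 0.90 by construction
```

The coverage is exact only because the parameters really were drawn from the prior; the same simulation is what underlies the software-validation method in the cited Cook, Gelman, and Rubin paper.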


Stacking, pseudo-BMA, and AIC type weights for combining Bayesian predictive distributions

April 11, 2017

This post is by Aki. We have often been asked in the Stan user forum how to do model combination for Stan models. Bayesian model averaging (BMA) by computing marginal likelihoods is challenging in theory and even more challenging in practice using only the MCMC samples obtained from the full model posteriors. Some users have […]
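For readers wondering what stacking of predictive distributions amounts to computationally, here is a minimal sketch (mine, not Aki's code): given a matrix of leave-one-out log predictive densities, one column per model, the stacking weights maximize the average log score of the weighted predictive mixture over the simplex. The function name and toy data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax


def stacking_weights(lpd):
    """lpd: (n_points, n_models) array of leave-one-out log predictive densities.

    Returns simplex weights maximizing sum_n log sum_k w_k p_k(y_n | y_-n),
    using an unconstrained softmax parameterization of the simplex.
    """
    n_points, n_models = lpd.shape

    def neg_log_score(a):
        w = softmax(a)
        # log of the weighted mixture density at each held-out point
        return -np.sum(logsumexp(lpd + np.log(w), axis=1))

    res = minimize(neg_log_score, np.zeros(n_models), method="BFGS")
    return softmax(res.x)


# toy check: model 0 predicts every held-out point much better than model 1,
# so stacking should put essentially all weight on model 0
lpd = np.column_stack([np.full(50, -1.0), np.full(50, -5.0)])
w = stacking_weights(lpd)
print(w)
```

In practice the leave-one-out densities would come from something like PSIS-LOO rather than exact refits; this sketch only shows the weight-optimization step.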


Beyond subjective and objective in statistics: my talk with Christian Hennig tomorrow (Wed) 5pm in London

April 11, 2017

Christian Hennig and I write: Decisions in statistical data analysis are often justified, criticized, or avoided using concepts of objectivity and subjectivity. We argue that the words “objective” and “subjective” in statistics discourse are used in a mostly unhelpful way, and we propose to replace each of them with broader collections of attributes, with objectivity […]


Combining independent evidence using a Bayesian approach but without standard Bayesian updating?

April 9, 2017

Nic Lewis writes: I have made some progress with my work on combining independent evidence using a Bayesian approach but eschewing standard Bayesian updating. I found a neat analytical way of doing this, to a very good approximation, in cases where each estimate of a parameter corresponds to the ratio of two variables each determined […]


Tech company wants to hire Stan programmers!

April 5, 2017

Ittai Kan writes: I started life as an academic mathematician (chaos theory) but have long since moved into industry. I am currently Chief Scientist at Afiniti, a contact center routing technology company that connects agents and callers on the basis of various factors in order to globally optimize contact center performance. We have 17 […]


It’s not so hard to move away from hypothesis testing and toward a Bayesian approach of “embracing variation and accepting uncertainty.”

April 5, 2017

There’s been a lot of discussion, here and elsewhere, of the problems with null hypothesis significance testing, p-values, deterministic decisions, type 1 error rates, and all the rest. And I’ve recommended that people switch to a Bayesian approach, “embracing variation and accepting uncertainty,” as demonstrated (I hope) in my published applied work. But we recently […]


“Scalable Bayesian Inference with Hamiltonian Monte Carlo” (Michael Betancourt’s talk this Thurs at Columbia)

April 4, 2017

Scalable Bayesian Inference with Hamiltonian Monte Carlo: Despite the promise of big data, inferences are often limited not by sample size but rather by systematic effects. Only by carefully modeling these effects can we take full advantage of the data—big data must be complemented with big models and the algorithms that can fit them. One […]


