Incorporating Bayes factor into my understanding of scientific information and the replication crisis

March 10, 2018

(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

I was having this discussion with Dan Kahan, who was arguing that my ideas about type M and type S error, while mathematically correct, represent a bit of a dead end in that, if you want to evaluate statistically-based scientific claims, you’re better off simply using likelihood ratios or Bayes factors. Kahan would like to use the likelihood ratio to summarize the information from a study and then go from there. The problem with type M and type S errors is that, to determine these, you need some prior values for the unknown parameters in the problem.

I have a lot of problems with how Bayes factors are presented in textbooks and articles by various leading Bayesians, but I have nothing against Bayes factors in theory.

So I thought it might help for me to explain, using an example, how I’d use Bayes factors in a scenario where one could also use type M and type S errors.

The example is the beauty-and-sex-ratio study described here, and the key point is that the data are really weak (not a power=.06 study but a power=.0500001 study, or something like that). The likelihood for the parameter is something like normal(.08, .03^2), that is, a point estimate of 0.08 (an 8 percentage point difference in Pr(girl birth), comparing children of beautiful parents to others) with a standard error of 0.03 (that is, 3 percentage points). From the literature and some mathematical reasoning (not shown here) having to do with measurement error in the predictor, reasonable effect sizes are anywhere between -0.001 and +0.001 (that is, within one-tenth of a percentage point of zero); see the above-linked paper.
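To make this concrete, here is a minimal sketch of the type M and type S error calculation, in the spirit of the retrodesign idea, assuming a true effect of 0.001 and a standard error of 0.03 as above. The code (Python, using numpy and scipy) is my illustration, not anything from the original post:

```python
import numpy as np
from scipy import stats

# A minimal sketch of the type M / type S calculation (in the spirit of
# Gelman and Carlin's "retrodesign"), assuming a true effect of 0.001
# and a standard error of 0.03, as in the sex-ratio example above.
def retrodesign(true_effect, se, alpha=0.05, n_sims=1_000_000, seed=1):
    z = stats.norm.ppf(1 - alpha / 2)
    lam = true_effect / se
    # Power: probability the estimate comes out statistically significant
    power = stats.norm.cdf(lam - z) + stats.norm.cdf(-lam - z)
    # Type S error: probability a significant estimate has the wrong sign
    type_s = stats.norm.cdf(-lam - z) / power
    # Type M error (exaggeration ratio), by simulation: mean |estimate| among
    # significant replications, divided by the true effect
    rng = np.random.default_rng(seed)
    est = rng.normal(true_effect, se, n_sims)
    signif = np.abs(est) > z * se
    exaggeration = np.abs(est[signif]).mean() / abs(true_effect)
    return power, type_s, exaggeration

print(retrodesign(0.001, 0.03))
# roughly (0.05, 0.46, 70): barely more than 5% power, about a 46% chance that
# a "significant" estimate has the wrong sign, and an exaggeration ratio around 70
```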

The relevant Bayes factor here is not theta=0 vs. theta!=0. Rather, it's theta=-0.001 (say) vs. theta=0 vs. theta=+0.001. The result will show Bayes factors very close to 1 (i.e., essentially zero evidence); also relevant is the frequentist calculation of how variable the Bayes factors might be under the null hypothesis that theta=0.
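For illustration, here is how that comparison could be computed, treating the likelihood as normal with the point estimate 0.08 and standard error 0.03 from above. Again, this is a sketch under those assumptions, not code from the post:

```python
from scipy import stats

# Likelihood of the data under a point hypothesis theta, summarized by the
# normal approximation: estimate ~ normal(theta, se)
estimate, se = 0.08, 0.03

def lik(theta):
    return stats.norm.pdf(estimate, loc=theta, scale=se)

bf_pos = lik(0.001) / lik(0.0)    # theta = +0.001 vs. theta = 0
bf_neg = lik(-0.001) / lik(0.0)   # theta = -0.001 vs. theta = 0
print(bf_pos, bf_neg)
# both come out within about 10% of 1: essentially no evidence either way
```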

I'd better clarify that last point: The null hypothesis is not scientifically interesting, nor do I learn anything useful about sex ratios from learning that the p-value of the data relative to the null hypothesis is 0.20, or 0.02, or 0.002, or whatever. However, the null hypothesis can be useful as a device for approximating the sampling distribution of a statistical procedure.
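One way to sketch that frequentist calculation: simulate replicated estimates under theta=0 with the same standard error and look at how the Bayes factor varies. A hedged illustration, under the same assumed numbers as above:

```python
import numpy as np
from scipy import stats

# How variable is the Bayes factor (theta = +0.001 vs. theta = 0) across
# hypothetical replications in which theta = 0 and the standard error stays 0.03?
rng = np.random.default_rng(2018)
se = 0.03
sim_est = rng.normal(0.0, se, 10_000)  # replicated point estimates under the null
bf = stats.norm.pdf(sim_est, 0.001, se) / stats.norm.pdf(sim_est, 0.0, se)
print(np.quantile(bf, [0.025, 0.5, 0.975]))
# the Bayes factors stay tightly clustered around 1; a study this noisy
# can't distinguish effects of the plausible size
```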

P.S. See here for more from Kahan.
