Nick Patterson writes:
I am a scientist/data analyst, still working, who has been using Bayesian methods since 1972 (getting on for 50 years). I was initially trained at the British code-breaking establishment GCHQ, by intellectual heirs of Alan Turing.
I’ve been accused of being a Bayesian fanatic, but in fact a good deal of my work is frequentist—either because this is easier and “good enough” or because specifying a decent model seems too hard. Currently I work in genetics, and here is an example of the latter problem. We want to know if there is strong evidence against a null that 2 populations have split off with basically no gene-flow after the split. Already this is a bit hard to handle with Bayesian methods, as we haven’t really got any kind of prior on how much flow might occur. But it is even harder than that. It turns out that the distribution of relevant statistics depends on the extent of “LD” in the genome, and the detailed structure of LD is uncertain. [LD is jargon for non-independence of close genomic regions.] I use a “block jackknife” which (largely) deals with this issue but is a frequentist technique. This is a case where frequentist methods are simple and mostly work well, and the Bayesian analogs look unpleasant, requiring inference on lots of nuisance parameters that frequentists can bypass.
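[For readers unfamiliar with the technique: the block jackknife recomputes a statistic with one contiguous block of the genome deleted at a time, and uses the spread of those leave-one-block-out estimates to get a standard error that is robust to local correlation such as LD. A minimal sketch, with made-up data and equal-sized blocks for simplicity:]

```python
import numpy as np

def block_jackknife(values, n_blocks, stat=np.mean):
    """Delete-one-block jackknife: bias-corrected estimate and standard error.

    Deleting contiguous blocks (rather than single observations) makes the
    variance estimate robust to short-range dependence, e.g. LD in genomes.
    """
    blocks = np.array_split(np.asarray(values), n_blocks)
    theta_full = stat(np.concatenate(blocks))
    # Recompute the statistic leaving out one block at a time.
    loo = np.array([
        stat(np.concatenate(blocks[:j] + blocks[j + 1:]))
        for j in range(n_blocks)
    ])
    m = n_blocks
    theta_jack = m * theta_full - (m - 1) * loo.mean()  # bias correction
    se = np.sqrt((m - 1) / m * np.sum((loo - loo.mean()) ** 2))
    return theta_jack, se

# Toy usage: correlated "genomic" signal, 10 blocks.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=1000)) * 0.01 + rng.normal(size=1000)
est, se = block_jackknife(x, n_blocks=10)
```

Note that nothing here needs a model of the dependence structure; the blocking alone absorbs it, which is the appeal Patterson describes.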
I’d love to have a dialog here. In general I feel a lot of practical statistical problems/issues are messier than the textbook or academic descriptions imply.
I don’t have experience in this genetics problem, but speaking in general terms I think the Bayesian version with lots of so-called nuisance parameters can work fine in Stan. The point is that by modeling these parameters you can do better than methods that don’t model them. Or, to put it another way, how does the non-Bayesian method finesse those “nuisance parameters”? If this is done by integrating them out, then you already have a probability model for these parameters, and you’re already doing some form of Bayesian inference, so then it just comes down to computation. If you’re bypassing the nuisance parameters without modeling them, then I suspect you’re throwing away some information.
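[To make the "integrating them out" point concrete, here is a toy illustration, not Patterson's actual genetics setup: data modeled as Normal(mu, sigma) where sigma is the nuisance parameter. Putting a prior on sigma and integrating it out numerically yields a marginal likelihood for mu alone. The half-normal prior and the data values are invented for the example; the point is that the marginalization step already requires a probability model for sigma.]

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Hypothetical data; mu is the parameter of interest, sigma a nuisance.
y = np.array([0.3, -0.1, 0.4, 0.2])

def marginal_loglik(mu):
    """Log of the likelihood of mu with sigma integrated out.

    Assumes a half-normal(scale=1) prior on sigma -- an assumption made
    purely for illustration. Integrating over sigma is already a Bayesian
    move, even if mu is then handled by non-Bayesian machinery.
    """
    def integrand(sigma):
        prior = stats.halfnorm.pdf(sigma, scale=1.0)
        lik = np.prod(stats.norm.pdf(y, loc=mu, scale=sigma))
        return lik * prior
    val, _ = quad(integrand, 1e-6, 10.0)
    return np.log(val)
```

A value of mu near the sample mean gets a higher marginal likelihood than one far away, with no point estimate of sigma ever being formed.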
This is not to say that the existing methods have problems. If your existing methods work well, fine. The statement, though, is that the existing methods only “mostly” work well. So maybe the starting point is to focus on the problems where the existing methods don’t work well, as these are the problems where an investment in Bayesian modeling could pay off.
And recall footnote 1 of this paper.