Andrew Gelman’s new favorite example of the hidden dangers of noninformative priors is the following. If we observe data y ~ N(theta,1) and get y=1, then this is consistent with being pure noise, but the posterior probability for theta>0 is .84. ...
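The .84 figure is just the standard normal CDF evaluated at 1: with a flat prior and y ~ N(theta, 1), the posterior is theta | y ~ N(y, 1), so P(theta > 0 | y = 1) = Phi(1). A quick check (a Python sketch, not from the original post):

```python
from math import erf, sqrt

def std_normal_cdf(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Flat prior + y ~ N(theta, 1) gives theta | y ~ N(y, 1),
# so P(theta > 0 | y = 1) = Phi(1).
posterior_prob = std_normal_cdf(1.0)
print(round(posterior_prob, 2))  # 0.84
```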

I sent this rejection letter this morning about a paper submitted to the International Journal of Forecasting. Dear XXXXX. I am writing to you regarding manuscript ????? entitled “xxxxxxxxxxxx” which you submitted to the International Journal of Forecasting. It so happens that I am aware that this paper was previously reviewed for the YYYYYYY journal. It seems that you have not bothered to make any of the changes recommended by…

Here are two exercises I wrote for my R mid-term exam in Paris-Dauphine around Buffon’s needle problem. In the end, the problems sounded too long and too hard for my 3rd year students so I opted for softer questions. So recycle those if you wish (but do not ask for solutions!) Filed under: Books, Kids, […]
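The exam questions themselves are not reproduced here, but the simulation at the heart of Buffon's needle is easy to sketch. The following Python version (an illustration, not one of the actual exercises) drops needles at random and estimates pi from the crossing frequency, using P(cross) = 2l / (pi d) for needle length l no greater than line spacing d:

```python
import random
from math import sin, pi

def buffon_pi_estimate(n_drops, needle_len=1.0, spacing=1.0, seed=42):
    """Estimate pi by simulating Buffon's needle (needle_len <= spacing)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_drops):
        # Distance from the needle's centre to the nearest line, and its angle.
        x = rng.uniform(0, spacing / 2)
        theta = rng.uniform(0, pi / 2)
        if (needle_len / 2) * sin(theta) >= x:
            crossings += 1
    # P(cross) = 2 * needle_len / (pi * spacing); invert to recover pi.
    return 2 * needle_len * n_drops / (spacing * crossings)

print(buffon_pi_estimate(100_000))  # roughly 3.14
```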

auto·di·dact n. A self-taught person. From Greek autodidaktos, self-taught : auto-, auto- + didaktos, taught; + sim·u·late v. To create a representation or model of (a physical system or particular situation, for example). From Latin simulāre, simulāt-, from similis, like; = (If you can get past the mixing of Latin and Greek roots) sim·u·di·dactic adj. To learn by creating a representation or model of a physical system or […]

I received the following email from someone who would like to remain anonymous: A journal editor made me change all my figures into tables. I complied, but I sent along one of your papers on the topic of figures versus tables. I got the following email in response which I thought you’d find funny: Yes, […] The post Tables > figures yet again appeared first on Statistical Modeling, Causal Inference, and…

Another number theory Le Monde mathematical puzzle: Find 2≤n≤50 such that the sequence {1,…,n} can be permuted into a sequence in which the sum of any two consecutive terms is a prime number. Now this is a problem with an R code solution, which returns the solution; so it seems there is no solution beyond N=12… […]
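The R code and its output are elided from the excerpt, but one way to attack the puzzle is a backtracking search that only ever extends the sequence with a value whose sum with the last term is prime. A Python sketch (not the original R solution):

```python
def is_prime(m):
    """Trial-division primality test, adequate for the small sums here."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def prime_chain(n):
    """Return a permutation of 1..n whose consecutive sums are all prime,
    or None if the backtracking search finds none."""
    def extend(path, remaining):
        if not remaining:
            return path
        for v in sorted(remaining):
            if is_prime(path[-1] + v):
                result = extend(path + [v], remaining - {v})
                if result:
                    return result
        return None

    for start in range(1, n + 1):
        result = extend([start], set(range(1, n + 1)) - {start})
        if result:
            return result
    return None

print(prime_chain(12))  # one valid permutation of 1..12, if any exists
```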

I’ve always thought that it’s silly, in most cases, to compile software from source when it’s already available in binary form. To make more binary packages available to Mac users, I just started contributing to a project that is creating a repository of 64-bit builds of the more than 12,000 packages in pkgsrc (NetBSD's portable package manager).

Jeff and I had an opportunity to sit down with Daphne Koller, Co-Founder of Coursera and Rajeev Motwani Professor of Computer Science at Stanford University. Jeff and I both teach massive open online courses using the Coursera platform and it was …

Following up on yesterday’s post, here’s David Chudzicki’s story (with graphs and Stan/R code!) of how he fit a model for an increasing function (“isotonic regression”). Chudzicki writes: This post will describe a way I came up with of fitting a function that’s constrained to be increasing, using Stan. If you want practical help, standard […] The post A Bayesian model for an increasing function, in Stan! appeared first on Statistical…
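The excerpt omits the Stan code, but a standard way to constrain a function to be increasing is to model positive increments and accumulate them: exponentiate unconstrained parameters so each step is positive, then take a running sum. A minimal Python sketch of that parameterization (an illustration of the general trick, not Chudzicki's model):

```python
from itertools import accumulate
from math import exp

def increasing_values(raw):
    """Map unconstrained reals to a strictly increasing sequence:
    exponentiate (so every step is positive) and take a running sum."""
    return list(accumulate(exp(r) for r in raw))

f = increasing_values([-1.0, 0.5, -2.0, 1.0])
print(all(b > a for a, b in zip(f, f[1:])))  # True: increasing by construction
```

Because the `raw` parameters are unconstrained, a sampler can explore them freely while the resulting function values are guaranteed to be monotone.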

The New York Times is recruiting a chief data scientist.

Andrew Gelman has some interesting comments on non-informative priors this morning. Rather than thinking of the prior as a static thing, think of it as a way to prime the pump. … a non-informative prior is a placeholder: you can use the non-informative prior to get the analysis started, then if your posterior distribution is […]

Following up on Christian’s post [link fixed] on the topic, I’d like to offer a few thoughts of my own. In BDA, we express the idea that a noninformative prior is a placeholder: you can use the noninformative prior to get the analysis started, then if your posterior distribution is less informative than you would […] The post Hidden dangers of noninformative priors appeared first on Statistical Modeling, Causal Inference, and…

Today is Erich Lehmann’s birthday. The last time I saw him was at the Second Lehmann conference in 2004, at which I organized a session on philosophical foundations of statistics (including David Freedman and D.R. Cox). I got to know Lehmann, Neyman’s first student, in 1997. One day, I received a bulging, six-page, handwritten letter […]

Tenure track faculty opening at the Center for the Promotion of Research Involving Innovative Statistical Methodology, with Jennifer Hill, Marc Scott, and other world-class researchers. It looks like a great opportunity. The post That’s crazy ta...