A glimpse of Inverse Problems

November 15, 2012

Hi folks!

Last Tuesday, a seminar on Bayesian procedures for inverse problems took place at CREST. We had time for two presentations by young researchers, Bartek Knapik and Kolyan Ray. Both presentations dealt with the problem of observing a noisy version of a linear transform of the parameter of interest,

Y = K\mu + \frac{1}{\sqrt{n}} Z,
where K is a linear operator and Z is a Gaussian white noise. Both presentations considered asymptotic properties of the posterior distribution (their papers can be found on arXiv, here for Bartek’s and here for Kolyan’s). There is a wide literature on asymptotic properties of the posterior distribution in direct models. When looking at the concentration of f toward a true distribution f_0 given the data, with respect to some distance d(.,.), a well-known problem is to derive concentration rates, that is, the rate \epsilon_n such that

\pi(d(f,f_0) > \epsilon_n | X^n) \to 0.
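To make this a bit more concrete, here is a small numerical sketch (my own illustration, not taken from either paper): the model is reduced to its sequence form by working in the singular basis of K, a conjugate Gaussian prior is placed on the coefficients, and the posterior mass outside a ball of radius proportional to \epsilon_n around the truth is estimated by Monte Carlo. The decay exponents, the choice \kappa_k = k^{-p} for the operator, and the constant in \epsilon_n are all illustrative assumptions.

```python
# Minimal sketch (illustrative only): mildly ill-posed sequence-space version
# of the white noise model Y = K mu + Z / sqrt(n), with a conjugate Gaussian
# prior, so the posterior is available in closed form coordinate by coordinate.
import numpy as np

rng = np.random.default_rng(0)

J = 2000                          # number of retained coefficients
k = np.arange(1, J + 1)

beta = 1.0                        # smoothness of the "true" parameter (assumption)
p = 0.5                           # degree of ill-posedness of K (assumption)
alpha = 1.0                       # prior regularity (assumption)

mu0 = k ** (-(beta + 0.5))        # true coefficients, Sobolev-type decay
kappa = k ** (-p)                 # singular values of K
lam = k ** (-(1.0 + 2 * alpha))   # prior variances

def posterior_summary(n, eps, n_draws=2000):
    """For one data set of size n: posterior median of d(mu, mu0) and the
    posterior mass outside the ball of radius eps around mu0."""
    y = kappa * mu0 + rng.standard_normal(J) / np.sqrt(n)
    post_var = lam / (1.0 + n * kappa ** 2 * lam)
    post_mean = n * kappa * lam * y / (1.0 + n * kappa ** 2 * lam)
    draws = post_mean + np.sqrt(post_var) * rng.standard_normal((n_draws, J))
    dist = np.linalg.norm(draws - mu0, axis=1)
    return np.median(dist), np.mean(dist > eps)

for n in (10**2, 10**4, 10**6):
    # minimax-type rate for this mildly ill-posed setting; the constant is ad hoc
    eps_n = 2.0 * n ** (-beta / (1 + 2 * beta + 2 * p))
    med, tail = posterior_summary(n, eps_n)
    print(f"n={n:>9d}  eps_n={eps_n:.4f}  median d={med:.4f}  pi(d > eps_n | Y)={tail:.3f}")
```

Running it for increasing n, the posterior median of d(\mu, \mu_0) shrinks roughly like n^{-\beta/(1+2\beta+2p)} and the posterior mass outside the \epsilon_n-ball stays negligible, which is exactly the kind of behaviour the concentration-rate statements quantify.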

For inverse problems, the standard techniques introduced by Ghosal, Ghosh and van der Vaart (2000) usually fail, so results in this setting are in general difficult to obtain.

Bartek presented some very refined results in the conjugate case. He managed to obtain concentration rates for the posterior distribution, results on Bayesian credible sets, and Bernstein-von Mises theorems (which state that the posterior is asymptotically Gaussian) when estimating a linear functional of the parameter of interest. Kolyan derived general conditions on the prior under which a given concentration rate is achieved, and proved that these techniques lead to optimal concentration rates for classical models.
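As a companion illustration of the credible-set and Bernstein-von Mises part (again my own sketch, not something from Bartek's paper): in the conjugate sequence model above, the posterior of a linear functional L(\mu) = \sum_k a_k \mu_k is exactly Gaussian, so one can check by simulation whether the 95% posterior credible interval for L(\mu) also has roughly 95% frequentist coverage over repeated data sets. The smooth functional a_k = k^{-2} and all tuning constants below are arbitrary choices made for the illustration.

```python
# Small self-contained simulation (illustrative only): frequentist coverage of
# the 95% posterior credible interval for a smooth linear functional in the
# conjugate Gaussian sequence model.
import numpy as np

rng = np.random.default_rng(1)

J = 2000
k = np.arange(1, J + 1)
mu0 = k ** -1.5               # true coefficients (assumption)
kappa = k ** -0.5             # singular values of K, mildly ill-posed (assumption)
lam = k ** -3.0               # prior variances (assumption)
a = k ** -2.0                 # coefficients of the linear functional L (assumption)
L0 = a @ mu0                  # true value of the functional

def credible_interval(y, n):
    """Exact 95% posterior credible interval for L(mu) given data y."""
    post_var = lam / (1.0 + n * kappa ** 2 * lam)
    post_mean = n * kappa * lam * y / (1.0 + n * kappa ** 2 * lam)
    m = a @ post_mean
    s = np.sqrt(np.sum(a ** 2 * post_var))
    return m - 1.96 * s, m + 1.96 * s

n = 10_000
n_rep = 2000
covered = 0
for _ in range(n_rep):
    y = kappa * mu0 + rng.standard_normal(J) / np.sqrt(n)
    lo, hi = credible_interval(y, n)
    covered += (lo <= L0 <= hi)

print("empirical coverage of the 95% credible interval:", covered / n_rep)
```

With a functional this smooth the empirical coverage comes out close to the nominal 95%; whether and when credible sets behave this well is precisely the question studied in Bartek's work.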

I knew only a little about inverse problems, but both talks were very accessible, and I will surely get more involved in this field!



