**O**n X validated, I was pointed to this recent paper by He, Wang, Lee and Tian, which proposes a new form of Bayesian GAN, although I do not see it as really Bayesian, for reasons explained below.

“[The] existing Bayesian method (Saatchi & Wilson, 2017) may lead to incompatible conditionals, which suggest that the underlying joint distribution actually does not exist.”

The difference with the Bayesian GANs of Saatchi & Wilson (2017) [with Saatchi’s name being consistently misspelled throughout] lies in the definition of the likelihood function, which is a function of both the generative and the discriminative parameters. As in Bissiri et al. (2013), the likelihood is replaced by the exponentiated loss function, or rather functions, which are computed with expected or plug-in distributions or discriminating functions. Expectations under the respective priors and for the observed data (?). Meaning there are “two likelihoods” for the same data, one being the inverse of the other in the minimax GAN case.

Further, the prior on the generative parameter is actually of the prior feedback category: at each iteration, the authors “use the generator distribution in the previous time step as a prior for the next time step”. Which makes me wonder how they avoid ending up with a Dirac “prior”. (Even curiouser, the prior on the discriminating parameter, which is not a genuine component of the statistical model, is a flat prior across iterations.)

The convergence result established in the paper is that, if the (true) data-generating model can be written as the marginal of the assumed parametric generative model against an “optimal” distribution, then the discriminating function converges to non-discrimination between the (true) data-generating model and the assumed parametric generative model. This somehow negates the Bayesian side of the affair, as convergence to a point mass does not produce a Bayesian outcome on the parameters of the model, or on the comparison between the true and assumed models. The paper also demonstrates the incompatibility of the two conditionals used by Saatchi & Wilson (2017) and provides a formal example [missing any form of data?] where the associated Bayesian GAN does not converge to the true value behind the model.
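On the prior-feedback worry, a toy conjugate sketch (my own hypothetical setting, not the authors’ algorithm) shows what happens when the posterior is recycled as the next prior while the same data set is reused at every iteration: the prior variance shrinks like 1/(tn) and the “prior” collapses towards a Dirac mass.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=50)  # one fixed data set, reused at each step
n, xbar, sigma2 = len(data), data.mean(), 1.0

mu, tau2 = 0.0, 10.0  # initial prior N(mu, tau2) on the Gaussian mean
for t in range(20):
    # conjugate normal-normal update on the SAME data...
    tau2_post = 1.0 / (1.0 / tau2 + n / sigma2)
    mu = tau2_post * (mu / tau2 + n * xbar / sigma2)
    # ...then feed the posterior back as the next prior (prior feedback)
    tau2 = tau2_post

# after 20 reuses the prior precision is 0.1 + 20*50, i.e. tau2 ~ 1e-3:
# the iterated "prior" is essentially a point mass at the sample mean
```

This is of course the intended behaviour of prior feedback as a maximum-likelihood device, but it hardly leaves a non-degenerate prior for a Bayesian analysis.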
But the issue is more complex in my opinion, in that using two incompatible conditionals does not necessarily mean that the associated Markov chain is transient (as, e.g., on a compact state space).