Category: Books

truncated Normal moments

An interesting if presumably hopeless question spotted on X validated: a lower-truncated Normal distribution is parameterised by its location, scale, and truncation values, μ, σ, and α. There exist formulas to derive the mean and variance of the resulting distribution, that is, when α=0, but there is no easy way to choose (μ, σ) […]
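
For the record, the closed-form moments of a lower-truncated Normal are easy to check against simulation; a quick R sketch of mine (not part of the original question), writing β=(α−μ)/σ and λ=φ(β)/(1−Φ(β)):

# mean and variance of a N(mu, sigma²) restricted to (alpha, Inf)
trunc_moments = function(mu, sigma, alpha){
  beta = (alpha - mu) / sigma
  lambda = dnorm(beta) / pnorm(beta, lower.tail = FALSE)  # Normal hazard at beta
  c(mean = mu + sigma * lambda,
    var = sigma^2 * (1 + beta * lambda - lambda^2))
}
# Monte Carlo check by rejection from the untruncated Normal
mu = 1; sigma = 2; alpha = 0
x = rnorm(1e6, mu, sigma); x = x[x > alpha]
trunc_moments(mu, sigma, alpha)
c(mean(x), var(x))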

selecting summary statistics [a tale of two distances]

As Jonathan Harrison came to give a seminar in Warwick [which I could not attend], I became aware of his paper with Ruth Baker on the selection of summaries in ABC. The setting is an ABC-SMC algorithm and it relates to Fearnhead and Prangle (2012), Barnes et al. (2012), our own random forest approach, […]

Garrigue administrative

A central page in Le Monde of this week (May 08), about the conundrum of dealing with the tens of thousands of handwritten pages left by Alexandre Grothendieck, from trying to make sense of the contents to assessing the monetary value (!) of such documents. Mentioning that the most reasonable solution would be to extend […]

Bertrand-Borel debate

On her blog, Deborah Mayo briefly mentioned the Bertrand-Borel debate on the (in)feasibility of hypothesis testing, as reported [and translated] by Erich Lehmann. A first interesting feature is that both mathematicians discuss the probability of causes in the Bayesian spirit of Laplace, with Bertrand considering that the prior probabilities of the different causes are […]

abolitionist

A very interesting piece about prison abolition in the NYT, centering on Ruth Wilson Gilmore, a US advocate for the abolition of prison sentences and a geographer at Berkeley. Interesting because the very notion of abolition sounds anathema to many and I rarely meet people sharing the conviction that prison sentences are counter-productive, often in […]

MCMC importance samplers for intractable likelihoods

Jordan Franks just posted on arXiv his PhD dissertation at the University of Jyväskylä, where he discusses several of his works: M. Vihola, J. Helske, and J. Franks. Importance sampling type estimators based on approximate marginal MCMC. Preprint arXiv:1609.02541v5, 2016. J. Franks and M. Vihola. Importance sampling correction versus standard averages of reversible MCMCs in […]
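
As a toy rendition of the importance-sampling correction these papers deal with (a minimal sketch of mine, with a Normal stand-in for the approximate target and a Student t for the exact one, nothing taken from the dissertation itself), one can run a Metropolis chain on the approximation and reweight its output:

# approximate target: N(0,1); exact target: Student t with 5 df (arbitrary toy choices)
log_approx = function(x) dnorm(x, log = TRUE)
log_exact = function(x) dt(x, df = 5, log = TRUE)
# random-walk Metropolis on the approximate target
T = 1e5; x = numeric(T)
for (t in 2:T){
  prop = x[t-1] + rnorm(1)
  x[t] = ifelse(log(runif(1)) < log_approx(prop) - log_approx(x[t-1]), prop, x[t-1])
}
# importance-sampling correction towards the exact target
w = exp(log_exact(x) - log_approx(x))
c(standard = mean(x^2), corrected = sum(w * x^2) / sum(w))  # E[X²] is 1 vs 5/3 under the two targets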

did variational Bayes work?

An interesting ICML 2018 paper by Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman I missed last summer on [the fairly important issue of] assessing the quality or lack thereof of a variational Bayes approximation, in the sense of being near enough to the true posterior. The criterion that they propose in this paper […]
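
Without pretending to reproduce their criterion, the gist of such importance-ratio checks can be illustrated on a toy example of mine, comparing draws from a too-narrow Normal approximation against a known Normal posterior (all numbers arbitrary):

# exact posterior N(1, 0.5²) versus a too-narrow approximation N(1, 0.3²)
log_post = function(th) dnorm(th, 1, 0.5, log = TRUE)
log_vb = function(th) dnorm(th, 1, 0.3, log = TRUE)
th = rnorm(1e5, 1, 0.3)           # draws from the approximation
lw = log_post(th) - log_vb(th)    # log importance ratios
w = exp(lw - max(lw))
sum(w)^2 / sum(w^2) / length(w)   # relative effective sample size, close to 1 only if q ≈ p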

over-confident about mis-specified models?

Ziheng Yang and Tianqi Zhu published a paper in PNAS last year that criticises Bayesian posterior probabilities used in the comparison of models under misspecification as “overconfident”. The paper is written from a phylogeneticist point of view, rather than from a statistician’s perspective, as shown by the Editor in charge of the paper [although I […]
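
The phenomenon itself is easy to reproduce outside phylogenetics; in this toy simulation of mine, the data come from a t distribution while the two candidate models (both wrong, both without free parameters) are a Normal and a Cauchy, and the posterior model probability still rushes towards 0 or 1:

# data from a t(4) distribution, never among the candidates
set.seed(1)
x = rt(500, df = 4)
# cumulative log Bayes factor of M1: N(0, 1.2²) against M2: Cauchy(0, 1)
logB = cumsum(dnorm(x, 0, 1.2, log = TRUE) - dcauchy(x, 0, 1, log = TRUE))
postM1 = 1 / (1 + exp(-logB))   # posterior probability of M1 under equal prior weights
range(tail(postM1, 100))        # typically stuck next to 0 or 1, despite both models being wrong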

visualising bias and unbiasedness

A question on X validated led me to wonder at the point made by Christopher Bishop in his Pattern Recognition and Machine Learning book about the MLE of the Normal variance being biased, as illustrated by the above graph that opposes the true and green distribution of the data (made of two points) […]
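
The bias is also immediate to check by simulation: for samples of size n the variance MLE has expectation (n−1)σ²/n, hence σ²/2 with two points (a quick R check of mine, not Bishop's figure):

# bias of the Normal variance MLE with samples of size n = 2
set.seed(1)
n = 2
mle = replicate(1e5, {x = rnorm(n); mean((x - mean(x))^2)})
mean(mle)             # about (n-1)/n = 0.5 instead of 1
mean(mle) * n/(n-1)   # the usual correction restores unbiasedness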

Le Monde puzzle [#1099]

A simple 2×2 Le Monde mathematical puzzle: Arielle and Brandwein play a game out of two distinct even integers between 1500 and 2500, x and y, providing one another with either the pair (x/2,y+x/2) or the pair (x+y/2,y/2) until they run out of even possibilities or exceed 6 rounds. When x=2304, what is the value of […]
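
Since the question itself is cut short above, here is only a sketch of mine of the game dynamics in R, with 1600 an arbitrary stand-in for the unknown y, returning the longest feasible run of moves (capped at 6 rounds, as in the puzzle):

# possible moves from a pair (x, y): halve an even component and add the half to the other one
moves = function(x, y){
  out = list()
  if (x %% 2 == 0) out = c(out, list(c(x/2, y + x/2)))
  if (y %% 2 == 0) out = c(out, list(c(x + y/2, y/2)))
  out
}
# longest run of moves from (x, y), capped at 6 rounds
depth = function(x, y, d = 0){
  if (d == 6) return(d)
  nxt = moves(x, y)
  if (length(nxt) == 0) return(d)
  max(sapply(nxt, function(p) depth(p[1], p[2], d + 1)))
}
depth(2304, 1600)   # 1600 is an arbitrary placeholder for the unknown y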

likelihood free nested sampling

A recent paper by Mikelson and Khammash found on bioRxiv considers the (paradoxical?) mixture of nested sampling and intractable likelihood. They however cover only the case when a particle filter or another unbiased estimator of the likelihood function is available. Unless I am missing something in the paper, this seems a very costly and convoluted approach when […]
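
To fix ideas on what such a combination involves [a minimal toy sketch of mine only, not the algorithm of the paper], here is a nested sampling run on a Uniform(0,1) prior with a single N(θ,0.1²) observation at 0.5, where the log-likelihood is perturbed by lognormal noise (the −0.005 term keeping the likelihood estimator unbiased) and live points are refreshed by brute-force rejection from the prior:

loglik_hat = function(th) dnorm(0.5, th, 0.1, log = TRUE) + rnorm(1, 0, 0.1) - 0.005
lse = function(a, b) max(a, b) + log1p(exp(-abs(a - b)))    # log(exp(a)+exp(b))
N = 100; live = runif(N); llive = sapply(live, loglik_hat)
logZ = -Inf; logX = 0
for (t in 1:500){
  i = which.min(llive)
  logZ = lse(logZ, llive[i] + logX + log(1 - exp(-1/N)))    # evidence increment
  logX = logX - 1/N                                         # prior mass shrinkage
  repeat{                                                   # costly refresh step: rejection from the prior
    prop = runif(1); lp = loglik_hat(prop)
    if (lp > llive[i]) break
  }
  live[i] = prop; llive[i] = lp
}
logZ   # roughly log(1)=0 for this toy, up to Monte Carlo error and the ignored final live-point term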

posterior distribution missing the MLE

An X validated question as to why the MLE is not necessarily (well) covered by a posterior distribution. Even for a flat prior… Which in retrospect highlights the fact that the MLE (and the MAP) are invasive species in a Bayesian ecosystem. Since they do not account for the dominating measure. And hence do not […]
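
A toy rendering of the point [mine, not the X validated example]: with a flat prior on a d-dimensional Normal mean, the posterior is the Normal centred at the MLE, and yet in large dimension essentially no posterior mass sits anywhere near the MLE:

# flat prior on theta, x ~ N_d(theta, I): posterior is N_d(x, I), with MAP = MLE = x
d = 100
x = rnorm(d)                                               # one observation per coordinate
post = sweep(matrix(rnorm(1e4 * d), ncol = d), 2, x, "+")  # 10⁴ posterior draws
dist = sqrt(rowSums(sweep(post, 2, x)^2))                  # distances to the MLE
quantile(dist, c(.01, .5, .99))                            # concentrates near sqrt(d)=10, never near 0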

a resolution of the Jeffreys-Lindley paradox

“…it is possible to have the best of both worlds. If one allows the significance level to decrease as the sample size gets larger (…) there will be a finite number of errors made with probability one. By allowing the critical values to diverge slowly, one may catch almost all the errors.” (p.1527) When commenting […]
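
Reading the quoted argument as a Borel–Cantelli statement, a quick simulation of mine (independent level-αₙ tests under the null, with the arbitrary summable choice αₙ=1/n²) shows the finite error count, in contrast with the linear growth obtained under a fixed level:

# independent tests under H0, each falsely rejecting with probability alpha_n
set.seed(1)
n = 1:1e5
alpha = 1/n^2                 # summable sequence of levels
errors = runif(length(n)) < alpha
sum(errors)                   # small and finite, versus about 5000 with a fixed 5% level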

Le Monde puzzle [#1094]

A rather blah number Le Monde mathematical puzzle: Find all integer multiples of 11111 with exactly one occurrence of each decimal digit. Which I solved by brute force, by looking at the possible range of multiples (and borrowing stringr::str_count from Robin!)

> combien=0
> for (i in 90001:900008){
+   j=i*11111
+   combien=combien+(min(stringr::str_count(j,paste(0:9)))==1)}
> combien
[1] 3456

And […]

survivalists [a Riddler’s riddle]

A neat question from The Riddler on a multi-probability survival rate: Nine processes are running in a loop with fixed survival rates .99,…,.91. What is the probability that the first process is the last one to die? Same question with probabilities .91,…,.99 and the probability that the last process is the last one to die. […]
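
A direct Monte Carlo check is immediate (my own quick sketch, counting only strictly-last deaths, i.e. dismissing ties), with each process surviving every round of the loop independently with its own rate:

# nine processes with per-round survival rates .99,…,.91
set.seed(1)
p = seq(.99, .91, by = -.01)
last = replicate(1e5, {
  rounds = rgeom(9, prob = 1 - p)   # number of rounds survived before dying
  rounds[1] > max(rounds[-1])       # first process strictly the last to die
})
mean(last)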

holistic framework for ABC

An AISTATS 2019 paper was recently arXived by Kelvin Hsu and Fabio Ramos, proposing an ABC method “…consisting of (1) a consistent surrogate likelihood model that modularizes queries from simulation calls, (2) a Bayesian learning objective for hyperparameters that improves inference accuracy, and (3) a posterior surrogate density and a super-sampling inference algorithm using its […]