I love Lego. And I love making up mathematics and statistics activities for people of all levels of attainment. So it makes sense that I would make up maths discussion activities using Lego. Whenever I have posted my ideas on … Continue reading →

The University of Melbourne is advertising for a “Professor in Statistics (Data Science)”. Melbourne (the city) is fast becoming a vibrant centre for data science and applied statistics, with more than 4700 people signed up for the Data Science Meetup Group, a thriving start-up scene, the group at Monash Business School (including Di Cook and […]

Metropolis Adjusted Langevin Algorithm (MALA) Haven’t dumped much code here in a while. Here’s a Julia implementation of MALA with an arbitrary preconditioning matrix M. I might use this in the future. Generic Julia Implementation Arguments are a function to evaluate the log-density, a function to evaluate the gradient, a step size h, a preconditioning […] The post MALA – Metropolis Adjusted Langevin Algorithm in Julia appeared first on Lindons…
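The excerpt lists the arguments (log-density, gradient, step size h, preconditioner) but the Julia code itself is truncated here. As a rough sketch of preconditioned MALA with that same argument list — in Python/NumPy, not the post's actual implementation, and with an illustrative `mala` function name — something like:

```python
import numpy as np

def mala(logpi, grad_logpi, x0, h, M, n_iter, rng):
    """Metropolis-adjusted Langevin sampler with preconditioner M.

    Proposal: x' = x + (h/2) M grad(x) + sqrt(h) chol(M) xi,  xi ~ N(0, I),
    accepted/rejected with the usual Metropolis-Hastings correction.
    """
    d = len(x0)
    Minv = np.linalg.inv(M)
    L = np.linalg.cholesky(M)
    x = np.asarray(x0, dtype=float)
    lp_x, g_x = logpi(x), grad_logpi(x)
    samples = np.empty((n_iter, d))
    n_accept = 0

    def log_q(y, x, g_x):
        # log N(y; x + (h/2) M g_x, h M), dropping constants that cancel
        r = y - x - 0.5 * h * (M @ g_x)
        return -(r @ Minv @ r) / (2.0 * h)

    for i in range(n_iter):
        prop = x + 0.5 * h * (M @ g_x) + np.sqrt(h) * (L @ rng.standard_normal(d))
        lp_p, g_p = logpi(prop), grad_logpi(prop)
        # MH ratio includes the asymmetric Langevin proposal densities
        log_alpha = lp_p + log_q(x, prop, g_p) - lp_x - log_q(prop, x, g_x)
        if np.log(rng.uniform()) < log_alpha:
            x, lp_x, g_x = prop, lp_p, g_p
            n_accept += 1
        samples[i] = x
    return samples, n_accept / n_iter
```

The gradient term drifts proposals toward high-density regions, and the preconditioner M rescales both drift and noise; M = I recovers plain MALA.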

Nina Zumel prepared an excellent article on the consequences of working with relative-error-distributed quantities (such as wealth, income, sales, and many more) called “Living in A Lognormal World.” The article emphasizes that if you are dealing with such quantities you are already seeing the effects of relative error distributions (so it isn’t an exotic … Continue reading Relative error distributions, without the heavy tail theatrics
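One way to see the article's point: a quantity hit by many small multiplicative (relative) errors ends up roughly lognormal, with the mean pulled above the median by the heavy right tail. A minimal simulation sketch — mine, not Zumel's; the `simulate_relative_error` name and parameters are illustrative:

```python
import numpy as np

def simulate_relative_error(base, rel_sd, n_steps, rng):
    """Apply n_steps small multiplicative shocks of relative size rel_sd.

    The product of many near-1 factors is approximately lognormal
    (sums of their logs are approximately normal).
    """
    factors = 1.0 + rel_sd * rng.standard_normal(n_steps)
    return base * np.prod(factors)
```

Repeating this many times gives a right-skewed sample whose mean exceeds its median — the signature of relative-error (lognormal-like) data.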

Beginning analysts and data scientists often ask: “How does one remember and master the seemingly endless number of classifier metrics?” My concrete advice is: Read Nina Zumel’s excellent series on scoring classifiers. Keep notes. Settle on one or two metrics as you move from project to project. We prefer “AUC” early in a project (when you … Continue reading A budget of classifier evaluation measures
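One reason AUC is easy to remember: it is the probability that a randomly chosen positive example scores above a randomly chosen negative one (ties counted half). A small self-contained sketch of that rank formulation — illustrative, not code from Zumel's series:

```python
import numpy as np

def auc(y_true, scores):
    """AUC as P(random positive outranks random negative), ties counting half."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y == 1], s[y == 0]
    # all positive/negative score pairs, compared at once
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

The pairwise form is O(n²) and only meant to make the definition concrete; production code would use a sorting-based implementation such as `sklearn.metrics.roc_auc_score`.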