# Posts Tagged ‘ statistics ’

## Machine Learning Lesson of the Day – Introduction to Linear Basis Function Models


Given a supervised learning problem of using inputs to predict a continuous target, the simplest model to use would be linear regression.  However, what if we know that the relationship between the inputs and the target is non-linear, but we are unsure of exactly what form this relationship has? One way to overcome […]
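The idea the excerpt opens with can be sketched in a few lines: expand the inputs through basis functions, then fit an ordinary linear regression on the expanded features. The data, seed, and polynomial basis below are illustrative choices, not taken from the lesson.

```python
import numpy as np

# Illustrative data: a non-linear relationship between input x and target y.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(50)

# Design matrix of polynomial basis functions: phi_j(x) = x**j for j = 0..3.
Phi = np.vander(x, N=4, increasing=True)

# Ordinary least squares on the transformed inputs -- the model stays
# linear in the weights even though the fit is non-linear in x.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
```

Swapping the polynomial columns for Gaussian or sigmoidal basis functions changes the shape of the fit without changing the linear-in-the-weights estimation.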

## where did the normalising constants go?! [part 1]

March 10, 2014

When listening this week to several talks in Banff on handling large datasets or complex likelihoods by parallelisation, splitting the posterior into a product and handling each term of this product on a separate processor or thread as proportional to a probability density, then producing simulations from the m_i's and attempting to derive simulations from the original product, […]
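The splitting the excerpt describes can be illustrated numerically (a minimal sketch with made-up Gaussian shards, not the algorithm from any particular talk): each shard term m_i is only *proportional* to a density, so their pointwise product recovers the full posterior only up to a constant factor -- the normalising constant the post's title asks about.

```python
import numpy as np

theta = np.linspace(-5, 5, 1001)

def gauss(t, mu, sigma):
    """Normalised Gaussian density on the grid t."""
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Two hypothetical "subposteriors" m_1, m_2 from two data shards.
m1 = gauss(theta, mu=0.5, sigma=1.0)
m2 = gauss(theta, mu=-0.3, sigma=1.5)

# Their pointwise product is an *unnormalised* density ...
product = m1 * m2

# ... proportional to a Gaussian whose precision is the sum of precisions.
prec = 1 / 1.0**2 + 1 / 1.5**2
mu_comb = (0.5 / 1.0**2 + (-0.3) / 1.5**2) / prec
combined = gauss(theta, mu=mu_comb, sigma=prec**-0.5)

# The ratio product/combined is constant in theta: that constant is the
# missing normalising constant.
ratio = product / combined
```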

## Andrew Gelman, the Early Years

March 9, 2014

Andrew Gelman recently reminisced about some early research (see here, here, and here). One of those links mentioned a conference Gelman attended early in his career which included Jaynes. I have the proceedings of that conference and was able to grab th...

## Can a classifier that never says “yes” be useful?

March 8, 2014

Many data science projects and presentations are needlessly derailed by not having set shared, business-relevant quantitative expectations early on (for some advice, see Setting expectations in data science projects). One of the most common issues is the layman expectation of “perfect prediction” from classification projects. It is important to set expectations correctly so […] Related posts: Setting expectations in data science projects, More on ROC/AUC, On Being a…

## Applied Statistics Lesson of the Day – Additive Models vs. Interaction Models in 2-Factor Experimental Designs


In a recent “Machine Learning Lesson of the Day”, I discussed the difference between a supervised learning model in machine learning and a regression model in statistics.  In that lesson, I mentioned that a statistical regression model usually consists of a systematic component and a random component.  Today’s lesson strictly concerns the systematic component. An […]
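The contrast the lesson draws can be sketched with two design matrices (illustrative simulated data, not from the lesson): the additive systematic component sums the two factor effects, while the interaction model adds a product term for their joint effect.

```python
import numpy as np

# Simulated 2-factor design: two binary factors and a true interaction.
rng = np.random.default_rng(1)
x1 = rng.integers(0, 2, size=200).astype(float)   # factor A (two levels)
x2 = rng.integers(0, 2, size=200).astype(float)   # factor B (two levels)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + 4.0 * x1 * x2     # true model has an interaction

# Additive systematic component: beta0 + beta1*x1 + beta2*x2.
X_add = np.column_stack([np.ones_like(x1), x1, x2])
# Interaction model: the same columns plus the x1*x2 product term.
X_int = np.column_stack([X_add, x1 * x2])

beta_add, res_add, *_ = np.linalg.lstsq(X_add, y, rcond=None)
beta_int, res_int, *_ = np.linalg.lstsq(X_int, y, rcond=None)
```

Because the data were generated with an interaction, the additive fit leaves a substantial residual while the interaction model recovers the coefficients exactly.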

## Advances in scalable Bayesian computation [day #4]

March 7, 2014

Final day of our workshop Advances in Scalable Bayesian Computation already, since tomorrow morning is an open research half-day! Another “perfect day in paradise”, with the Banff Centre campus covered by a fine snow blanket, still falling…, and making work in an office at BIRS a dream-like moment. Still looking for a daily theme, […]

## On replacing calculus with statistics

March 7, 2014

Russ Roberts had this to say about the proposal to replace the calculus requirement with statistics for students: Statistics is in many ways much more useful for most students than calculus. The problem is, to teach it well is extraordinarily…

## The Secret to Entropy’s role in Statistics

March 6, 2014

Entropy is the single most powerful statistical tool discovered to date. That’s a bold claim. I intend to back that claim up in this post. Forget for a moment about entropy and statistics and start fresh. Suppose we know two variables are relat...
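For readers coming in fresh, the quantity at the centre of the post's claim is Shannon entropy; a minimal sketch of the definition (the example distributions are hypothetical):

```python
import math

def entropy(p, base=2):
    """Shannon entropy H(p) = -sum p_i * log(p_i), with 0*log(0) := 0."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

uniform = [0.25] * 4              # maximally uncertain over 4 outcomes
peaked = [0.97, 0.01, 0.01, 0.01]  # nearly deterministic

h_uniform = entropy(uniform)   # highest possible for 4 outcomes: 2 bits
h_peaked = entropy(peaked)     # much lower: little remaining uncertainty
```

Entropy is largest for the uniform distribution and shrinks as the distribution concentrates, which is what lets it measure how much a variable tells us.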

## Advances in scalable Bayesian computation [day #3]

March 6, 2014

We have now gone past the midpoint of our workshop Advances in Scalable Bayesian Computation, with three talks in the morning and an open research or open air afternoon. (Maybe surprisingly, I chose to stay indoors and work on a new research topic rather than trying cross-country skiing!) If I must give a theme for […]