# Posts Tagged ‘ statistics ’

## Read sas7bdat files in R with GGASoftware Parso library

September 12, 2014

... using the new R package sas7bdat.parso. The software company GGASoftware has extended the work of myself and others on the sas7bdat R package by developing a Java library called Parso, which also reads sas7bdat files. They have worked out most of the remaining kinks. For example, the Parso library reads sas7bdat files with compressed […]

## Mathematics and Mathematical Statistics Lesson of the Day – Convex Functions and Jensen’s Inequality


Consider a real-valued function $f(x)$ that is continuous on the interval $[x_1, x_2]$, where $x_1$ and $x_2$ are any 2 points in the domain of $f(x)$.  Let $x_3 = \frac{x_1 + x_2}{2}$ be the midpoint of $x_1$ and $x_2$.  Then, if $f(x_3) \leq \frac{f(x_1) + f(x_2)}{2}$, then $f(x)$ is defined to be midpoint convex. More generally, let’s consider any point within the interval $[x_1, x_2]$.  We can denote this arbitrary point as $x_\lambda = \lambda x_1 + (1 - \lambda) x_2$, where $0 < \lambda < 1$. […]
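For reference, the general convexity condition this definition builds toward, together with Jensen’s inequality itself (both standard statements):

```latex
f\big(\lambda x_1 + (1-\lambda) x_2\big) \le \lambda f(x_1) + (1-\lambda) f(x_2),
\qquad 0 \le \lambda \le 1,
```

and, for a convex $f$ and a random variable $X$ with finite expectation,

```latex
f\big(E[X]\big) \le E\big[f(X)\big].
```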

## Generalized Double Pareto Priors for Regression

September 11, 2014

This post is a review of the “GENERALIZED DOUBLE PARETO SHRINKAGE” Statistica Sinica (2012) paper by Armagan, Dunson and Lee. Consider the regression model $$Y=X\beta+\varepsilon$$ where we put a generalized double Pareto distribution as the prior on the regression coefficients $$\beta$$. The GDP distribution has density $$f(\beta|\xi,\alpha)=\frac{1}{2\xi}\left( 1+\frac{|\beta|}{\alpha\xi} \right)^{-(\alpha+1)}.$$ GDP as Scale […] The post Generalized Double Pareto Priors for Regression appeared first on Lindons Log.
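As a quick sanity check on the density above (a side calculation, not a claim from the paper), letting $\alpha \to \infty$ with $\xi$ fixed recovers the Laplace (lasso) prior, since $(1 + x/\alpha)^{-\alpha} \to e^{-x}$:

```latex
\lim_{\alpha\to\infty} \frac{1}{2\xi}\left(1+\frac{|\beta|}{\alpha\xi}\right)^{-(\alpha+1)}
= \frac{1}{2\xi}\, e^{-|\beta|/\xi}.
```

This makes the GDP a heavy-tailed relaxation of the double-exponential prior underlying the Bayesian lasso.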

## Neurostats 2014 Highlights

September 10, 2014

Last week the Neurostats 2014 workshop took place at the University of Warwick (co-organised by Adam Johansen, Nicolas Chopin, and myself). The goal was to put some neuroscientists and statisticians together to talk about neural data and what to do with it. General impressions: The type of Bayesian hierarchical modelling that Andrew Gelman has been […]

## Mathematical Statistics Lesson of the Day – The Glivenko-Cantelli Theorem


In 2 earlier tutorials that focused on exploratory data analysis in statistics, I introduced the conceptual background behind empirical cumulative distribution functions (empirical CDFs) and how to plot empirical CDFs in 2 different ways in R. There is actually an elegant theorem that provides a rigorous basis for using empirical CDFs to estimate the true CDF – and […]
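A minimal base-R illustration of the empirical CDF (this is my own sketch, not the plotting code from the tutorials mentioned above): `ecdf()` returns the step function $F_n$, and the same values can be built by hand from sorted data.

```r
set.seed(1)
x  <- rnorm(100)
Fn <- ecdf(x)            # F_n as a step function

# Manual construction: at the i-th sorted observation, F_n = i/n
xs      <- sort(x)
Fmanual <- seq_along(xs) / length(xs)

# Both constructions agree at the observed points
agree <- isTRUE(all.equal(Fn(xs), Fmanual))

# Glivenko-Cantelli in action: the worst-case gap to the true CDF
sup_gap <- max(abs(Fn(xs) - pnorm(xs)))
```

Re-running with larger `n` shrinks `sup_gap`, which is exactly the uniform convergence the Glivenko-Cantelli theorem guarantees.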

## Generating quantile forecasts in R

September 8, 2014

From today’s email: I have just finished reading a copy of ‘Forecasting: Principles and Practice’ and I have found the book really interesting. I have particularly enjoyed the case studies and focus on practical applications. After finishing the book I have joined a forecasting competition to put what I’ve learnt to the test. I do have […]
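One simple way to produce quantile forecasts in base R (a hypothetical sketch, not the answer from the post: it assumes an ARIMA fit with approximately Gaussian forecast errors, so quantiles come from `qnorm` applied to the point forecast and its standard error):

```r
# Fit a seasonal ARIMA to a built-in monthly series
fit <- arima(AirPassengers, order = c(1, 1, 1),
             seasonal = list(order = c(0, 1, 1), period = 12))

# Point forecasts and standard errors for the next 12 months
pr <- predict(fit, n.ahead = 12)

# Quantile forecasts at the 10%, 50%, and 90% levels
probs <- c(0.1, 0.5, 0.9)
qf <- sapply(probs, function(p) qnorm(p, mean = pr$pred, sd = pr$se))
colnames(qf) <- paste0("q", probs * 100)
```

For non-Gaussian errors, simulating future sample paths from the fitted model and taking empirical quantiles is a more robust alternative.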

## Statistical Science: The Likelihood Principle issue is out…!

September 7, 2014

Abbreviated Table of Contents: Here are some items for your Saturday-Sunday reading.  Link to complete discussion:  Mayo, Deborah G. On the Birnbaum Argument for the Strong Likelihood Principle (with discussion & rejoinder). Statistical Science 29 (2014), no. 2, 227-266. Links to individual papers: Mayo, Deborah G. On the Birnbaum Argument for the Strong Likelihood Principle. Statistical […]

## Mathematical and Applied Statistics Lesson of the Day – The Motivation and Intuition Behind Chebyshev’s Inequality


In 2 recent Statistics Lessons of the Day, I introduced Markov’s inequality and explained the motivation and intuition behind it. Chebyshev’s inequality is just a special version of Markov’s inequality; thus, their motivations and intuitions are similar. Markov’s inequality roughly says that a random variable is most frequently observed near its expected value, $E(X)$.  Remarkably, it quantifies just […]
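The “special version of Markov’s inequality” claim can be made concrete in one line (a standard derivation, assuming $X$ has mean $\mu$ and finite variance $\sigma^2$): apply Markov’s inequality to the non-negative random variable $(X-\mu)^2$,

```latex
P\big(|X - \mu| \ge k\sigma\big)
= P\big((X-\mu)^2 \ge k^2\sigma^2\big)
\le \frac{E\big[(X-\mu)^2\big]}{k^2\sigma^2}
= \frac{\sigma^2}{k^2\sigma^2}
= \frac{1}{k^2}.
```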

## EM Algorithm for Bayesian Lasso R Cpp Code

September 5, 2014

Bayesian Lasso \begin{align*} p(Y_{o}|\beta,\phi)&=N(Y_{o}|1\alpha+X_{o}\beta,\phi^{-1} I_{n_{o}})\\ \pi(\beta_{i}|\phi,\tau_{i}^{2})&=N(\beta_{i}|0, \phi^{-1}\tau_{i}^{2})\\ \pi(\tau_{i}^{2})&=Exp \left( \frac{\lambda}{2} \right)\\ \pi(\phi)&\propto \phi^{-1}\\ \pi(\alpha)&\propto 1\\ \end{align*} Marginalizing over $$\alpha$$ equates to centering the observations and losing a degree of freedom and working with the centered $$Y_{o}$$. Mixing over $$\tau_{i}^{2}$$ leads to a Laplace or Double Exponential prior on $$\beta_{i}$$ with rate parameter $$\sqrt{\phi\lambda}$$ […] The post EM Algorithm for Bayesian Lasso R Cpp Code appeared first on Lindons…
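The mixing step mentioned above is the standard normal–exponential (Andrews–Mallows) identity; written out for the priors in this excerpt, it yields the double-exponential marginal with the stated rate:

```latex
\pi(\beta_i \mid \phi)
= \int_0^\infty N\!\big(\beta_i \mid 0, \phi^{-1}\tau_i^2\big)\,
  \frac{\lambda}{2}\, e^{-\lambda \tau_i^2 / 2}\, d\tau_i^2
= \frac{\sqrt{\phi\lambda}}{2}\, e^{-\sqrt{\phi\lambda}\,|\beta_i|}.
```

This is why maximizing the resulting posterior (e.g. via EM over the latent $\tau_i^2$) connects the Bayesian formulation to the lasso’s $\ell_1$ penalty.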

## All She Wrote (so far): Error Statistics Philosophy Contents-3 years on

September 5, 2014

Error Statistics Philosophy: Blog Contents By: D. G. Mayo[i] Each month, I will mark (in red) 3 relevant posts (from that month 3 yrs ago) for readers wanting to catch-up or review central themes and discussions. September 2011 (9/3) Frequentists in Exile: The Purpose of this Blog (9/3) Overheard at the comedy hour at the Bayesian retreat (9/4) Drilling Rule #1 […]