This is cross-posted on my two blogs. For my fans on either of my two blogs, I'm giving away a free signed copy of my new book, Numbersense. All you have to do is to answer 3 questions, based on a few sample pages (see the PDF here; also on Slideshare). Click on the quiz to enter. The contest is open until Friday, July 19, 2013 (11:59 PM PST). This…

It is often useful to partition observations for a continuous variable into a small number of intervals, called bins. This familiar process occurs every time that you create a histogram, such as the one on the left. In SAS you can create this histogram by calling the UNIVARIATE procedure. Optionally, [...]
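PROC UNIVARIATE performs that binning internally when it draws the histogram. Purely as an illustration of the same partitioning step, here is a minimal sketch in Python using `numpy.histogram`; the sample data and the choice of 10 equal-width bins are invented for the example, not taken from the post:

```python
import numpy as np

# Hypothetical sample of a continuous variable
rng = np.random.default_rng(0)
x = rng.normal(loc=50, scale=10, size=1000)

# Partition the observations into 10 equal-width bins,
# the same bookkeeping a histogram procedure does internally
counts, edges = np.histogram(x, bins=10)

# Each bin is the half-open interval [edges[i], edges[i+1])
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"[{lo:6.2f}, {hi:6.2f}): {n}")
```

Every observation falls in exactly one bin, so the counts sum back to the sample size.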

About 100 attendees, three keynotes, five short talks, demos, discussions, food, music, and a fantastic atmosphere: the Tapestry conference for storytelling with data took place on February 27 in Nashville, TN. Here is a conference report with links to talk videos, as well as some first news on Tapestry 2014. Setting and Format Conference hotels tend to all look the same: nondescript, badly lit, depressing ballrooms, terrible acoustics, and just way too many…

Linear regression is a very basic technique that we use a lot in machine learning. In a lot of cases (and I have been guilty of this), we just use it without much thought as to how the internals actually work. In a 2-D coordinate system, we can plot observations (e.g., a child’s age is 1) and associated dependent variables (i.e., the child has 1 friend) on an x/y…
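As a minimal sketch of those internals, a straight line can be fitted to toy (age, friends) pairs by solving the normal equations directly; the data values below are invented for illustration:

```python
import numpy as np

# Toy data: x = child's age, y = number of friends (illustrative values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 3.0, 2.0, 5.0, 4.0])

# Ordinary least squares via the normal equations:
# beta = (X^T X)^{-1} X^T y, with a column of ones for the intercept
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
intercept, slope = beta  # -> 0.6 and 0.8 for this toy data
```

This is exactly what the usual library calls do under the hood (up to numerically safer factorizations than an explicit solve of the normal equations).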

Contents: 1.1. Parallel Tempering Theory; 1.2. Physics Origins; 2.1. Intra-Thread Metropolis Move; 2.2. Inter-Thread Parallel Tempering; 2.3. OpenMP Parallelization; 3. Full Code; 4. Simulation Study; 5. On the Future Use of Parallel Tempering with OpenMP. Parallel tempering is one of my favourite sampling algorithms to improve MCMC mixing times. This algorithm seems to be used exclusively […] The post Parallel Tempering Algorithm with OpenMP / C++ appeared first on Lindons Log.
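The post's implementation is in C++ with OpenMP. Purely as a language-agnostic illustration of the idea, here is a serial Python sketch of parallel tempering on a hypothetical bimodal target; the temperature ladder, proposal scale, and iteration count are assumptions for the example, not the post's values:

```python
import math
import random

random.seed(1)

def neg_log_target(x):
    # Energy of a bimodal target: mixture of two well-separated Gaussians
    return -math.log(math.exp(-0.5 * (x - 4.0) ** 2) +
                     math.exp(-0.5 * (x + 4.0) ** 2))

temps = [1.0, 4.0, 16.0]             # temperature ladder (assumed)
chains = [0.0 for _ in temps]        # one state per temperature

def metropolis_step(x, T):
    # Within-chain random-walk Metropolis on the tempered target p(x)^(1/T)
    prop = x + random.gauss(0.0, 1.0)
    if math.log(random.random()) < (neg_log_target(x) - neg_log_target(prop)) / T:
        return prop
    return x

samples = []
for _ in range(5000):
    chains = [metropolis_step(x, T) for x, T in zip(chains, temps)]
    # Inter-chain swap move between a random adjacent temperature pair;
    # accept with probability min(1, exp((1/T_i - 1/T_j) * (E_i - E_j)))
    i = random.randrange(len(temps) - 1)
    e_i, e_j = neg_log_target(chains[i]), neg_log_target(chains[i + 1])
    log_acc = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (e_i - e_j)
    if math.log(random.random()) < log_acc:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
    samples.append(chains[0])        # keep only the cold chain's draws
```

The hot chains cross between modes easily, and the swap move lets those crossings propagate down to the cold chain; in the OpenMP version each chain's Metropolis step runs in its own thread, with swaps at synchronization points.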

Question: Do clinical trials work? Answer: Yes. Clinical trials are one of the defining success stories in the process of scientific inquiry. Do they work as fast/efficiently as a pharma company with potentially billions on the line would like? That is definitely …

Introduction I had my natural predilection towards math crushed out of me at some point in school, and after that point, Math (yes, we are referring to the higher power of math) and I had a wary understanding. I dabbled quietly, and Math turned a blind eye to me ignoring some of its deeper theory. When I struggled loudly, Math did its best to hide its smirks. I generally refrained…

I’ve been trying to reduce my American accent when speaking French. I tried taping my voice and playing it back, but that didn’t help. I couldn’t actually tell that I had a strong accent by listening to myself. My own voice is just too familiar to me. Then Malecki told me about the international phonetic […] The post Learning how to speak appeared first on Statistical Modeling, Causal Inference, and Social…

Stephen Senn Head, Methodology and Statistics Group, Competence Center for Methodology and Statistics (CCMS), Luxembourg At a workshop on randomisation I attended recently I was depressed to hear what I regard as hackneyed untruths treated as if they were important objections. One of these is that of indefinitely many confounders. The argument goes that although […]

I’ve said it here so often, this time I put it on the sister blog. . . . The post Meritocracy rerun appeared first on Statistical Modeling, Causal Inference, and Social Science.

LOST CAUSES IN STATISTICS II: Noninformative Priors. I thought I would post at a higher frequency in the summer, but I have been working hard to finish some papers, which has kept me quite busy. So, apologies for the paucity of posts. Today I’ll discuss another lost cause: noninformative priors. I like to say that …

Even if a policymaker is sure of the ideal economic policy, he or she can only implement it with the help of some of the other political players. But I’m saying something different, echoing what I wrote a couple days ago. I thought of this the other day after seeing this recent quote from Paul […] The post Economic policy does not occur in a political vacuum appeared first on Statistical…

Bayesian statistics is to Python as frequentist statistics is to Perl. Perl has the slogan “There’s more than one way to do it,” abbreviated TMTOWTDI and pronounced “tim toady.” Perl prides itself on variety. Python takes the opposite approach. The Zen of Python says “There should be one — and preferably only one — obvious […]

This week I’ve been at the R Users conference in Albacete, Spain. These conferences are a little unusual in that they are not really about research, unlike most conferences I attend. They provide a place for people to discuss and exchange ideas on how R can be used. Here are some thoughts and highlights of the conference, in no particular order. Håvard Rue spoke on Bayesian computing with INLA and…