Posts Tagged ‘ Significance ’

Gelman speed read

April 23, 2015

For those who have found it tough to keep up with Andrew Gelman's prolific output, here are brief summaries of several recent posts. On people obsessed with proving the statistical significance of tiny effects: "they are trying to use a bathroom scale to weigh a feather—and the feather is resting loosely in the pouch of a kangaroo that is vigorously jumping up and down." (link) [I left a comment. In…

Read more »
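Gelman's "bathroom scale" line is about statistical power: the smaller the effect, the larger the sample needed to detect it. A minimal sketch using Lehr's rule of thumb (roughly 16/d² subjects per group for 80% power at a two-sided alpha of 0.05, where d is the standardized effect size). This is a standard approximation, my own illustration rather than anything taken from the post:

```python
def n_per_group(d):
    """Lehr's rule of thumb: approximate sample size per group to detect
    a standardized effect size d with ~80% power at alpha = 0.05."""
    return 16 / d ** 2

# A moderate effect is cheap to detect; a "feather" effect is not.
assert n_per_group(0.5) == 64              # moderate effect: 64 per group
assert round(n_per_group(0.01)) == 160_000 # tiny effect: 160k per group
```

The quadratic blow-up is the point: halving the effect size quadruples the required sample.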

Yet another popular nutrition headline doesn’t stand up to scrutiny

April 1, 2015

Are science journalists ever required to take a good statistics course? That is the question that comes to mind when I read this Science Times article, titled "One Cup of Coffee Could Offset Three Drinks a Day" (link). We are used to seeing rather tenuous conclusions such as "Four Cups of Coffee Reduces Your Risk of X". This headline takes it up another notch. A result is claimed about the substitution effect…

Read more »

One place not to use the Sharpe ratio

March 23, 2015

Having worked in finance, I am an avowed fan of the Sharpe ratio. I have written about this here and here. One thing I have often forgotten (leading to some bad analyses) is: the Sharpe ratio isn’t appropriate for models of repeated events that already have linked mean and variance (such as Poisson or Binomial models) … Continue reading One place not to use the Sharpe ratio → Related posts: A…

Read more »
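The point about linked mean and variance can be seen directly: for a Poisson count, the variance equals the mean, so a Sharpe-style mean-over-volatility ratio collapses to a function of the mean alone. A minimal sketch (my own illustration, not code from the post):

```python
import math

def sharpe_like(mean, sd):
    """Sharpe-style ratio with a zero baseline: mean over volatility."""
    return mean / sd

# For Poisson(lam), mean = variance = lam, so the ratio is just sqrt(lam):
# it merely re-encodes the mean and carries no independent risk information.
for lam in [1.0, 4.0, 25.0]:
    assert math.isclose(sharpe_like(lam, math.sqrt(lam)), math.sqrt(lam))
```

In other words, ranking Poisson-like processes by this ratio is the same as ranking them by their means, which defeats the purpose of a risk-adjusted measure.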

Optimizely Stats Engine 2: what about advanced users?

February 9, 2015

In Part 1, I covered the logic behind recent changes to the statistical analysis used in standard reports by Optimizely. In Part 2, I ponder what this change means for more sophisticated customers--those who are following the proper protocols for classical design of experiments, such as running tests of predetermined sample sizes, adjusting for multiple comparisons, and constructing and analyzing multivariate tests using regression with interactions. For this segment, the…

Read more »
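One of the "proper protocols" mentioned above, adjusting for multiple comparisons, can be sketched with the simplest correction, the Bonferroni adjustment (my choice of example; the post itself may discuss other methods):

```python
def bonferroni_threshold(alpha, m):
    """Per-test significance threshold that controls the familywise
    error rate at alpha across m simultaneous comparisons."""
    return alpha / m

# Testing 20 variations against a control at familywise alpha = 0.05
# means each individual p-value must fall below 0.0025.
assert abs(bonferroni_threshold(0.05, 20) - 0.0025) < 1e-12
```

The practical consequence for a sophisticated A/B tester: the more variants you compare at once, the stricter the bar each one must clear.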

Deflate-gate, Part 2: not average != extreme, and Sunday talk shows

February 2, 2015

Last week, I pointed out the futility of using data as proof or disproof in Deflate-gate. Emphatically, a case of "N=All" does not make things better. I later edited the post for HBR (link). In this post, I want to address a couple of subtler technical issues related to the Sharp analysis, which can be summarized as follows: 1. New England is an outlier in the plays per fumbles…

Read more »

How Optimizely will kill your winning percentage, and why that is a great thing for you (Part 1)

January 23, 2015

In my HBR article about A/B testing (link), I described one of the key managerial problems related to A/B testing--the surplus of “positive” results that don’t quite seem to add up. In particular, I mentioned this issue: When managers are reading hour-by-hour results, they will sometimes find large gaps between Groups A and B, and demand prompt reaction. Almost all such fluctuations result from temporary imbalance between the two groups,…

Read more »
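The hour-by-hour gaps described above are mostly noise from temporary imbalance between the groups. A minimal simulation sketch (the conversion rate and traffic numbers are made up for illustration): two arms with an identical true conversion rate still show a fluctuating gap early on, which shrinks as the sample accumulates.

```python
import random

random.seed(7)
p = 0.05  # identical true conversion rate for both arms: any gap is noise

def cumulative_rates(n_hours, visitors_per_hour):
    """Cumulative conversion rates for two identical arms, hour by hour."""
    conv_a = conv_b = 0
    rates = []
    for hour in range(1, n_hours + 1):
        conv_a += sum(random.random() < p for _ in range(visitors_per_hour))
        conv_b += sum(random.random() < p for _ in range(visitors_per_hour))
        n = hour * visitors_per_hour
        rates.append((conv_a / n, conv_b / n))
    return rates

rates = cumulative_rates(48, 200)
gap_hour_2 = abs(rates[1][0] - rates[1][1])     # gap after 400 visitors/arm
gap_hour_48 = abs(rates[-1][0] - rates[-1][1])  # gap after 9,600 visitors/arm
```

By the end of the run both arms sit close to the true rate, so any early "large gap" a manager reacts to would have evaporated on its own.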

Figuring out what data supports the argument, and what is just window-dressing

January 8, 2015

That is the question that comes to mind when I read an article like USA Today's "Jobless Claims Fall, Suggests Strong Hiring" (link). The headline makes the connection between newly-released jobless claims data and the conclusion of "strong hiring". But it turns out the new data is merely window-dressing, and the conclusion is based on longer-term trends. Here is the new data, as reported by the USA Today reporter: applications for…

Read more »

How to face the mid-life crisis in A/B Testing

December 10, 2014

There have been few updates lately, as I have been working on projects for other people. One of those projects appeared today. Here is an excerpt from the beginning of my new article on HBR: For over 10 years and at three companies, I set up and ran A/B testing programs, in which we test a new offer with half a sample against a control group which doesn’t get a new…

Read more »

Gelman explains why using massive sample sizes to chase tiny effects is silly

November 21, 2014

What a lucky day: I found time to catch up on some Gelman. He posted about the Facebook research ethics controversy, and I'm glad to see that he and I have pretty much the same attitude (my earlier post is here). It's a storm in a teacup. Gelman makes two other points about the Facebook study--unrelated to the ethics--which are very important. First, he said: if we happen to see…

Read more »

Binge Reading Gelman

June 23, 2014

As others binge-watch Netflix, I binge-read Gelman posts while riding a train with no wifi and a dying laptop battery. (This entry was written two weeks ago.) Andrew Gelman is statistics’ most prolific blogger. Gelman-binging has become a necessity since I have not managed to keep up with his accelerated posting schedule. Earlier this year, he began publishing previews of future posts, one week in advance, and…

Read more »

