# Posts Tagged ‘ science ’

## Showing three dimensions using a ternary plot

February 4, 2016

Long-time reader Daniel L. isn't a fan of this chart, especially when it is made to spin, as you can see at this link. Like other 3D charts, this one is hard to read. The vertical lines are both good...
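The post's title refers to the ternary plot, which shows three proportions that sum to one in a flat 2D triangle rather than a spinning 3D view. A minimal sketch of the standard barycentric-to-Cartesian mapping (the function name is my own, not from the post):

```python
import math

def ternary_to_cartesian(a, b, c):
    # Map a composition (a, b, c), with a + b + c > 0, onto an
    # equilateral triangle with corners A=(0, 0), B=(1, 0),
    # C=(0.5, sqrt(3)/2). Each corner represents 100% of one component.
    total = a + b + c
    a, b, c = a / total, b / total, c / total
    x = b + c / 2.0
    y = c * math.sqrt(3) / 2.0
    return x, y

# Pure components land on the corners; mixtures land inside.
print(ternary_to_cartesian(1, 0, 0))        # corner A
print(ternary_to_cartesian(1/3, 1/3, 1/3))  # centroid
```

Because the three values are constrained to sum to one, two coordinates suffice, which is why no rotation or perspective is needed.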

## Reproducible randomized controlled trials

February 1, 2016

“Reproducible” and “randomized” don’t seem to go together. If something was unpredictable the first time, shouldn’t it be unpredictable if you start over and run it again? As is often the case, we want incompatible things. But the combination of reproducible and random can be reconciled. Why would we want a randomized controlled trial (RCT) to […]
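One standard way to reconcile the two is to seed the random number generator: the allocation is unpredictable before a seed is chosen, yet anyone re-running the script with the same seed recovers the identical assignment. A minimal sketch (participant IDs and the seed are invented for illustration):

```python
import random

def randomize(participants, seed):
    # A fixed seed makes the "random" allocation exactly reproducible,
    # while the assignment itself is still arbitrary with respect to
    # any participant characteristic.
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

ids = [f"P{i:02d}" for i in range(1, 9)]
first = randomize(ids, seed=2016)
second = randomize(ids, seed=2016)
assert first == second  # same seed, identical allocation
```

Publishing the seed alongside the analysis script lets reviewers reproduce the entire randomization procedure.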

## New digital feature editorship at the Psychonomic Society

January 19, 2016

As of January 1, 2016, I am the new methods editor for the Psychonomic Society digital features. Steve Lewandowsky has written an introductory post with a bit of background, and my first post -- about Arsenault and Buchsbaum's recent article in Psychon...

## The failure to replicate scientific findings

January 19, 2016

Andrew Gelman and I have published a piece in Slate discussing the failure to replicate scientific findings, using the recent example of the so-called power pose. The idea of the "power pose" is that striking this pose before walking into business meetings produces psychological and hormonal changes, whereupon these changes make people more powerful. As you often read here and at Gelman's blog, the fact that someone…

## Asymmetric funnel plots without publication bias

January 9, 2016

In my last post about standardized effect sizes, I showed how averaging across trials before computing standardized effect sizes such as partial $$\eta^2$$ and Cohen's d can produce arbitrary estimates of those quantities. This has drastic implications...

## Averaging can produce misleading standardized effect sizes

January 7, 2016

Recently, there have been many calls for a focus on effect sizes in psychological research. In this post, I discuss how naively using standardized effect sizes with averaged data can be misleading. This is particularly problematic for meta-analysis, where differences in number of trials across studies could lead to very misleading results. There are two main types of effect sizes in typical use: raw effect sizes and standardized effect sizes. Raw…
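The core problem can be shown in a short simulation (the numbers and function below are illustrative, not from the post): averaging each subject's trials before standardizing shrinks the noise in the averages, so Cohen's d grows with the number of trials even though the raw effect is fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def cohens_d_from_averages(n_subjects, n_trials, raw_effect=0.5, trial_sd=2.0):
    # Each subject contributes n_trials noisy measurements per condition.
    # We average within subject first, then standardize the difference,
    # which is the "naive" practice the post warns about.
    a = rng.normal(0.0, trial_sd, (n_subjects, n_trials)).mean(axis=1)
    b = rng.normal(raw_effect, trial_sd, (n_subjects, n_trials)).mean(axis=1)
    diff = b - a
    return diff.mean() / diff.std(ddof=1)

# The raw effect (0.5) never changes, but d inflates roughly by
# sqrt(n_trials), since averaging divides the trial noise by sqrt(n).
for n_trials in (1, 10, 100):
    d = cohens_d_from_averages(n_subjects=500, n_trials=n_trials)
    print(f"{n_trials:4d} trials per subject: d = {d:.2f}")
```

Two studies of the same phenomenon that differ only in trials per subject will therefore report very different standardized effect sizes, which is exactly the meta-analytic hazard described above.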

## Happy new year. Did you have a white Christmas?

January 4, 2016

Happy 2016. I spent time with the family in California, wiping out any chance of a white Christmas, although I hear that the probability would have been minuscule even had I stayed. I did come across a graphic that tried...

## Scorched by the heat in Arizona

December 18, 2015

Reader Jeffrey S. saw this graphic inside a Dec 2 tweet from the National Weather Service (NWS) in Phoenix, Arizona. In a Trifecta checkup (link), I'd classify this as Type QV. The problems with the visual design are numerous and...

## Statbusters: standing may or may not stand a chance

December 7, 2015

In our latest Statbusters column for the Daily Beast, we read the research behind the claim that "standing reduces odds of obesity". Especially at younger companies, it is trendy to work at standing desks because of findings like this. We find a variety of statistical issues calling for better studies. For example, the observational dataset used provides no clue as to whether sitting causes obesity or obesity leads to more…

## Reviewers and open science: why PRO?

December 2, 2015

As of yesterday, our paper outlining the PRO Initiative for open science was accepted for publication in the journal Royal Society Open Science. It marks the end of many tweaks to the basic idea, and hopefully the beginning of a new era in peer reviewi...
