Posts Tagged ‘Miscellaneous Science’

The statistical significance filter leads to overoptimistic expectations of replicability

May 22, 2018

Shravan Vasishth, Daniela Mertzen, Lena Jäger, et al. write: Treating a result as publishable just because the p-value is less than 0.05 leads to overoptimistic expectations of replicability. These overoptimistic expectations arise due to Type M(agnitude) error: when underpowered studies yield significant results, effect size estimates are guaranteed to be exaggerated and noisy. These effects […] The post The statistical significance filter leads to overoptimistic expectations of replicability appeared first…

Read more »

How to think about research, and research criticism, and research criticism criticism, and research criticism criticism criticism?

May 17, 2018

Some people pointed me to this article, “Issues with data and analyses: Errors, underlying themes, and potential solutions,” by Andrew Brown, Kathryn Kaiser, and David Allison. They discuss “why focusing on errors [in science] is important,” “underlying themes of errors and their contributing factors,” “the prevalence and consequences of errors,” and “how to improve conditions […]

Read more »

Evaluating Sigmund Freud: Should we compare him to biologists or economists?

May 10, 2018

This post is about how we should think about Freud, not about how we should think about biology or economics. So. There’s this whole thing about Sigmund Freud being a bad scientist. Or maybe I should say a bad person and a terrible scientist. The “bad person” thing isn’t so relevant, but the “terrible scientist” […]

Read more »

What killed alchemy?

May 8, 2018

Here’s the answer according to David Wootton’s 2015 book, “The invention of science: a new history of the scientific revolution” (sent to me by Javier Benitez): What killed alchemy was the insistence that experiments must be openly reported in publications which presented a clear account of what had happened, and they must then be replicated, […] The post What killed alchemy? appeared first on Statistical Modeling, Causal Inference, and Social…

Read more »

Why is the replication crisis centered on social psychology?

May 7, 2018

We had a post on this a couple years ago, but the topic came up again, and here are my latest thoughts. Psychology has several features that contribute to the replication crisis: – Psychology is a relatively open and uncompetitive field (compared for example to biology). Many researchers will share their data. – Psychology is […]

Read more »

A model for scientific research programmes that include both “exploratory phenomenon-driven research” and “theory-testing science”

May 1, 2018

John Christie points us to an article by Klaus Fiedler, “What Constitutes Strong Psychological Science? The (Neglected) Role of Diagnosticity and A Priori Theorizing,” which begins: A Bayesian perspective on Ioannidis’s (2005) memorable statement that “Most Published Research Findings Are False” suggests a seemingly inescapable trade-off: It appears as if research hypotheses are based either […]

Read more »

Carol Nickerson investigates an unfounded claim of “17 replications”

April 20, 2018

Carol Nickerson sends along this report in which she carefully looks into the claim that the effect of power posing on feelings of power has replicated 17 times. Also relevant to the discussion is this post from a few months ago by Joe Simmons, Leif Nelson, and Uri Simonsohn. I am writing about this because […]

Read more »

Tools for detecting junk science? Transparency is the key.

April 12, 2018

In an article to appear in the journal Child Development, “Distinguishing polemic from commentary in science,” physicist David Grimes and psychologist Dorothy Bishop write: Exposure to nonionizing radiation used in wireless communication remains a contentious topic in the public mind—while the overwhelming scientific evidence to date suggests that microwave and radio frequencies used in modern […]

Read more »

The all-important distinction between truth and evidence

April 6, 2018

Yesterday we discussed a sad but all-too-familiar story of a little research project that got published and hyped beyond recognition. The published paper was called, “The more you play, the more aggressive you become: A long-term experimental study of cumulative violent video game effects on hostile expectations and aggressive behavior,” but actually that title was […]

Read more »

More bad news in the scientific literature: A 3-day study is called “long term,” and nobody even seems to notice the problem. Whassup with that??

April 5, 2018

Someone pointed me to this article, “The more you play, the more aggressive you become: A long-term experimental study of cumulative violent video game effects on hostile expectations and aggressive behavior,” by Youssef Hasan, Laurent Bègue, Michael Scharkow, and Brad Bushman. My correspondent was suspicious of the error bars in Figure 1. I actually think […]

Read more »

