Blog Archives

A voluntary commitment to research transparency

September 29, 2016

The Reproducibility Project: Psychology was published last week, and it was another blow to the overall credibility of the current research system’s output. Some interpretations of the results were along the lines of “Hey, it’s all fine; nothing to...


Putting the ‘I’ in open science: How you can change the face of science

September 29, 2016

If we want to shift from a closed science to an open science, there has to be change at several levels. In this process, it’s easy to push the responsibility (and the power) for reform onto “the system”: “If only journals changed their ...


Changing hiring practices towards research transparency: The first open science statement in a professorship advertisement

September 29, 2016

Engaging in open science practices increases knowledge as a common good and ensures the reproducibility, verifiability, and credibility of research. But some fear that on an individual strategic level (in particular from an early-career perspe...


What’s the probability that a significant p-value indicates a true effect?

September 29, 2016

If the p-value is < .05, then the probability of falsely rejecting the null hypothesis is < 5%, right? That would mean that at most 5% of all significant results are false positives (that’s what we control with the α rate). Well, no. As you...

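The distinction this teaser draws — between the α rate and the probability that a significant result reflects a true effect — can be made concrete with the standard positive-predictive-value formula, PPV = (prior × power) / (prior × power + (1 − prior) × α). This is a minimal sketch, not code from the post; the function name and the example numbers (30% of tested hypotheses true, 35% power) are illustrative:

```python
def ppv(prior, power, alpha=0.05):
    """Probability that a significant result reflects a true effect.

    prior: proportion of tested hypotheses that are actually true
    power: probability of a significant result when the effect is real
    alpha: significance threshold (false-positive rate under the null)
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# With 30% true hypotheses and 35% power, 1 in 4 significant
# results is still a false positive:
print(round(ppv(0.30, 0.35), 2))  # → 0.75
```

Note that the answer depends on the prior and the power, not only on α — which is the point the post goes on to make.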

Introducing the p-hacker app: Train your expert p-hacking skills

September 29, 2016

[This is a guest post by Ned Bicare, PhD] Start the p-hacker app! My dear fellow scientists! “If you torture the data long enough, it will confess.” This aphorism, attributed to Ronald Coase, has sometimes been used in a disrespectful manner, ...


Optional stopping does not bias parameter estimates (if done correctly)

September 29, 2016

tl;dr: Optional stopping does not bias parameter estimates from a frequentist point of view if all studies are reported (i.e., no publication bias exists) and effect sizes are appropriately meta-analytically weighted. Several recent discussions on the ...

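The teaser’s claim can be checked with a small simulation: run many studies that use an optional-stopping rule, report every study (no publication bias), and combine the estimates with inverse-variance weights — which, for a mean with known sd, are proportional to sample size. This sketch is not from the post; it assumes a one-sample z-test with known sd = 1, and all names and thresholds are illustrative:

```python
import random

def simulate_study(true_mean, n_min=10, n_max=100, step=10, z_crit=1.96):
    """One study with optional stopping: peek after every `step`
    observations and stop early if |z| exceeds the critical value
    (sd assumed known and equal to 1)."""
    data = []
    while len(data) < n_max:
        data.extend(random.gauss(true_mean, 1) for _ in range(step))
        n = len(data)
        mean = sum(data) / n
        z = mean * n ** 0.5  # z-statistic with known sd = 1
        if n >= n_min and abs(z) > z_crit:
            break  # early stop: this single estimate is biased
    return mean, n

def meta_estimate(true_mean, n_studies=2000, seed=1):
    """Report ALL studies and weight each estimate by its n
    (inverse-variance weighting for known sd = 1)."""
    random.seed(seed)
    results = [simulate_study(true_mean) for _ in range(n_studies)]
    total_n = sum(n for _, n in results)
    return sum(m * n for m, n in results) / total_n
```

Individual early-stopped studies overestimate the effect, but because studies that stop early contribute fewer observations (and thus less weight), the weighted combination across all reported studies lands close to the true value — which is the post’s point.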

LMU psychology department distributes funding based on criteria of research transparency

September 29, 2016

The Psychology Department at LMU Munich continues to change the incentive structure towards reproducible and open science. The internal distribution of funding is now partly based on transparency criteria: Publications with open data, open material and...


Open Science and research quality at the German conference on psychology (DGPs congress in Leipzig)

September 29, 2016

From 17th to 22nd September, the 50th anniversary congress of the German psychological association takes place in Leipzig. At previous conferences in Germany over the last two or three years, the topic of the credibility crisis and research transparency ...


About replication bullies and scientific progress …

September 7, 2015

These days psychology really is exciting, and I do not mean the Förster case … In May 2014, a special issue full of replication attempts was released – all open access, all raw data released! This is great work, powered by the open scien...


In the era of #repligate: What are valid cues for the trustworthiness of a study?

September 7, 2015

[Update 2015/1/14: I consolidate feedback from Twitter, comments, email, and real life into the main text (StackExchange-style), so that we get a good and continually improving answer. Thanks to @TonyLFreitas, @PhDefunct, @bahniks, @JoeHilgard, @_r_c_a, @richardmor...

