Anna Dreber writes:
Replication Markets (RM) invites you to help us predict outcomes of 3,000 social and behavioral science experiments over the next year. We actively seek scholars with different voices and perspectives to create a wise and diverse crowd, and hope you will join us.
We invite you, your students, and any other interested parties to join our crowdsourced prediction platform. By mid-2020 we will rate the replicability of claims from more than 60 academic journals. The claims were selected by an independent team that will also randomly choose about 200 for testing (replication).
• RM’s forecasters bet on the chance that a claim will replicate and may adjust their assessment after reading the original paper and discussing results with other players. Previous replication studies have demonstrated prediction accuracy of about 75% with these methods.
• RM’s findings will contribute to the wider body of scientific knowledge with a high-quality dataset of claim reliabilities, comparisons of several crowd aggregation methods, and insights about predicting replication. Anonymized data from RM will be open-sourced to train artificial intelligence models and speed future ratings of research claims.
• RM’s citizen scientists predict experimental results in a play-money market with real payouts totaling over $100K*. Payouts will be distributed among the most accurate of its anticipated 500 forecasters. There is no cost to play the Replication Markets.
Our project needs forecasters like you with knowledge, insight, and expertise in fields across the social and behavioral sciences. Please share this invitation with colleagues, students, and others who might be interested in participating.
Here’s the link to their homepage. And here’s how to sign up.
I know about Anna from this study from 2015, where she and her colleagues tried and failed to replicate a much-publicized psychology experiment (“The samples were collected in privacy, using passive drool procedures, and frozen immediately”), and then from a later study that she and some other colleagues did, using prediction markets to estimate the reproducibility of scientific research.
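As a side note on what “prediction accuracy of about 75%” means in that literature, here is a minimal sketch of the usual bookkeeping: a Brier score for the probabilistic bets, plus the binary hit rate you get by counting any forecast above 50% as a prediction that the claim will replicate. The forecasts and outcomes below are invented for illustration; this is not the Replication Markets team’s actual scoring rule.

```python
# Minimal sketch (invented numbers, not RM's actual scoring rule) of how
# crowd forecasts of replication are typically evaluated.

forecasts = [0.85, 0.30, 0.60, 0.10, 0.75]  # predicted P(claim replicates)
outcomes  = [1,    0,    1,    0,    0]     # 1 = replicated, 0 = did not

# Brier score: mean squared error of the probabilistic forecasts (lower is better).
brier = sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

# Binary "accuracy" in the sense often reported: treat a forecast above 50%
# as a prediction that the claim will replicate, then count how often that
# call matches the replication outcome.
hits = sum((p > 0.5) == bool(y) for p, y in zip(forecasts, outcomes))
accuracy = hits / len(forecasts)

print(f"Brier score: {brier:.3f}")      # 0.169 with these made-up numbers
print(f"Binary accuracy: {accuracy:.0%}")  # 80% with these made-up numbers
```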
P.S. I do have some concerns regarding statements such as, “we will rate the replicability of claims from more than 60 academic journals.” I have no problem with the 60 journals; my concern is with the practice of declaring a replication a “success” or “failure.” And, yes, I know I just did this in the paragraph above! It’s a problem. We want to get definitive results, but definitive results are not always possible. A key issue here is the distinction between truth and evidence. We can say confidently that a particular study gives no good evidence for its claims, but that doesn’t mean those claims are false. Etc.
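To see a cartoon version of that truth-versus-evidence point, here is a toy calculation with made-up numbers (a 50/50 prior, 80% power, 5% false-positive rate): even after a replication “fails,” the probability that the underlying claim is true can remain well above zero. The binary true/false framing is itself a simplification, which is part of the point, but the arithmetic shows why a failed replication is not proof that a claim is false.

```python
# Toy calculation (all numbers invented for illustration) of why a "failed"
# replication does not establish that the original claim is false.

prior_true = 0.5   # hypothetical prior probability the claim is true
power = 0.8        # P(replication "succeeds" | claim is true)
alpha = 0.05       # P(replication "succeeds" | claim is false)

# Probability the replication comes up non-significant ("fails"):
p_fail = prior_true * (1 - power) + (1 - prior_true) * (1 - alpha)

# Posterior probability the claim is true, given the failed replication:
posterior_true = prior_true * (1 - power) / p_fail

print(f"P(claim true | replication 'fails') = {posterior_true:.2f}")
# About 0.17 with these numbers: weak evidence, but far from proof of falsity.
```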