(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)
Richard Morey writes:
Rink Hoekstra and I are undertaking some research to explore how people use classical statistical results to evaluate the weight of evidence. Bayesians often critique classical techniques for being difficult to interpret in terms of what scientists want to know, but there is clearly information in the statistics themselves. We wonder how people extract that information. Below is our official announcement; it would be great if you could let people on your blog know about the survey, as we want to get a wide variety of statistical users to take the survey.
Empirical science is grounded on the belief that data can be used as evidence. The convincingness of data — the “weight” of the evidence they provide — is crucial to deciding between rival scientific positions. In situations with no uncertainty, reasoning about evidence is often straightforward; in practice, however, most conclusions from data involve uncertainty. In these situations, we obviously prefer strong evidence to weak evidence, but beyond this, strikingly little is known about how scientists actually evaluate the strength of evidence and to what extent scientists differ in their evaluations.
We are looking for researchers with experience using statistics to complete a short survey about the weight of evidence provided by statistics. Participants are asked to assess the weight of evidence in several research scenarios. The survey takes about 15 minutes to complete; if you would like to participate, click or copy/paste the link.