# Category: Miscellaneous Statistics

## Yes, I really really really like fake-data simulation, and I can’t stop talking about it.

Rajesh Venkatachalapathy writes: Recently, I had a conversation with a colleague of mine about the virtues of synthetic data and their role in data analysis. I think I’ve heard a sermon/talk or two where you mention this and also in your blog entries. But having convinced my colleague of this point, I am struggling to […]
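The core of fake-data simulation is easy to sketch: draw data from a model with known parameters, fit the model, and check that the fitting procedure recovers what you put in. A minimal sketch in Python (the linear model and the numbers here are illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1: simulate fake data from a model with KNOWN parameters.
a_true, b_true, sigma = 1.0, 2.0, 0.5
n = 1000
x = rng.uniform(0, 1, n)
y = a_true + b_true * x + rng.normal(0, sigma, n)

# Step 2: fit the model to the fake data via least squares.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a_hat, b_hat = coef

# Step 3: check that the estimates are close to the known truth.
# If they are not, either the fitting code or the model is wrong.
```

The same loop works for any estimator: if the procedure cannot recover parameters from data it generated itself, it has no hope on real data.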

## “Retire Statistical Significance”: The discussion.

So, the paper by Valentin Amrhein, Sander Greenland, and Blake McShane that we discussed a few weeks ago has just appeared online as a comment piece in Nature, along with a letter with hundreds (or is it thousands?) of supporting signatures. Following the first circulation of that article, the authors of that article and some […]

## My two talks in Montreal this Friday, 22 Mar

McGill University Biostatistics seminar, Purvis Hall, 102 Pine Ave. West, Room 25, 1-2pm Fri 22 Mar: Resolving the Replication Crisis Using Multilevel Modeling In recent years we have come to learn that many prominent studies in social science and medicine, conducted at leading research institutions, published in top journals, and publicized in respected news outlets, […]

## Statistical-significance thinking is not just a bad way to publish, it’s also a bad way to think

Eric Loken writes: The table below was on your blog a few days ago, with the clear point about p-values (and, even worse, significance versus non-significance) being a poor summary of data. The thought I’ve had lately, working with various groups of really smart and thoughtful researchers, is that Table 4 is also a […]

## R package for Type M and Type S errors

Andy Garland Timm writes: My package for working with Type S/M errors in hypothesis testing, ‘retrodesign’, is now up on CRAN. It builds on the code provided by Gelman and Carlin (2014) with functions for calculating type S/M errors across a variety of effect sizes as suggested for design analysis in the paper, a function […]
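For readers who want the gist before installing the package: Type S is the probability that a statistically significant estimate has the wrong sign, and Type M (the exaggeration ratio) is the factor by which significant estimates overstate the true effect on average. A simulation sketch in Python following the logic of Gelman and Carlin (2014); the function name and defaults below are made up, not the actual `retrodesign` API:

```python
import numpy as np

def design_analysis(true_effect, se, n_sims=1_000_000, seed=0):
    # Hypothetical helper, not the CRAN retrodesign interface.
    rng = np.random.default_rng(seed)
    z = 1.96  # two-sided critical value at alpha = 0.05
    est = rng.normal(true_effect, se, n_sims)  # hypothetical replications
    signif = np.abs(est) > z * se              # replications reaching significance
    power = signif.mean()
    # Type S: among significant results, the fraction with the wrong sign.
    type_s = np.mean(est[signif] * np.sign(true_effect) < 0)
    # Type M: average factor by which significant estimates exaggerate the truth.
    type_m = np.mean(np.abs(est[signif])) / abs(true_effect)
    return power, type_s, type_m
```

With a small true effect and a large standard error (say, `design_analysis(0.1, 0.5)`), power sits near the 5% floor, significant results have the wrong sign roughly a quarter of the time, and they exaggerate the true effect by an order of magnitude, which is exactly the paper's warning about underpowered studies.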

## A corpus in a single survey!

This was something we used a few years ago in one of our research projects and in the paper, Difficulty of selecting among multilevel models using predictive accuracy, with Wei Wang, but didn’t follow up on. I think it’s such a great idea I want to share it with all of you. We were applying […]

## “Abandon / Retire Statistical Significance”: Your chance to sign a petition!

Valentin Amrhein, Sander Greenland, and Blake McShane write: We have a forthcoming comment in Nature arguing that it is time to abandon statistical significance. The comment serves to introduce a new special issue of The American Statistician on “Statistical inference in the 21st century: A world beyond p < 0.05” […]

## (back to basics:) How is statistics relevant to scientific discovery?

Someone pointed me to this remark by psychology researcher Daniel Gilbert: Publication is not canonization. Journals are not gospels. They are the vehicles we use to tell each other what we saw (hence “Letters” & “proceedings”). The bar for communicating to each other should not be high. We can decide for ourselves what to make […]

## Yes on design analysis, No on “power,” No on sample size calculations

Kevin Lewis points us to this paper, “Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty,” by Samantha Anderson, Ken Kelley, and Scott Maxwell. My reaction: Yes, it’s reasonable, but I have two big problems with the general approach: 1. I don’t like talk of power […]

## My talk this coming Monday in the Columbia statistics department

Monday 4 Mar, 4pm in room 903 Social Work Bldg: We’ve Got More Than One Model: Evaluating, comparing, and extending Bayesian predictions Methods in statistics and data science are often framed as solutions to particular problems, in which a particular model or method is applied to a dataset. But good practice typically requires multiplicity, in […]

## My talk today (Tues 19 Feb) 2pm at the University of Southern California

At the Center for Economic and Social Research, Dauterive Hall (VPD), room 110, 635 Downey Way, Los Angeles: The study of American politics as a window into understanding uncertainty in science Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University We begin by discussing recent American elections in the context of political […]

## More on that horrible statistical significance grid

Regarding this horrible Table 4: Eric Loken writes: The clear point of your post was that p-values (and, even worse, significance versus non-significance) are a poor summary of data. The thought I’ve had lately, working with various groups of really smart and thoughtful researchers, is that Table 4 is also a model of their […]

## Simulation-based statistical testing in journalism

Jonathan Stray writes: In my recent Algorithms in Journalism course we looked at a post which makes a cute little significance-type argument that five Trump campaign payments were actually the $130,000 Daniels payoff. They summed to within a dollar of $130,000, so the simulation recreates sets of payments using bootstrapping and asks how often there’s […]
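The structure of that simulation test is easy to reproduce: resample sets of payments from an observed pool and count how often their sum lands within a dollar of the target. A hedged sketch in Python; the payment pool below is synthetic, while the real analysis used the actual campaign disclosure data:

```python
import numpy as np

def close_sum_rate(payments, k, target, tol=1.0, n_sims=100_000, seed=1):
    """Bootstrap-style null check: resample k payments (with replacement)
    from the observed pool and record how often their sum lands within
    `tol` dollars of `target`. Illustrative sketch, not the original code."""
    rng = np.random.default_rng(seed)
    pool = np.asarray(payments, dtype=float)
    sims = rng.choice(pool, size=(n_sims, k)).sum(axis=1)  # simulated payment sets
    return np.mean(np.abs(sims - target) <= tol)
```

A small hit rate says that five payments summing to within a dollar of $130,000 would be surprising under random resampling, which is the significance-type argument the post makes.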

## Michael Crichton on science and storytelling

Javier Benitez points us to this 1999 interview with techno-thriller writer Michael Crichton, who says: I come before you today as someone who started life with degrees in physical anthropology and medicine; who then published research on endocrinology, and papers in the New England Journal of Medicine, and even in the Proceedings of the Peabody […]

## Should he go to grad school in statistics or computer science?

Someone named Nathan writes: I am an undergraduate student in statistics and a reader of your blog. One thing that you’ve been on about over the past year is the difficulty of executing hypothesis testing correctly, and an apparent desire to see researchers move away from that paradigm. One thing I see you mention several […]

## Our hypotheses are not just falsifiable; they’re actually false.

Everybody’s talkin bout Popper, Lakatos, etc. I think they’re great. Falsificationist Bayes, all the way, man! But there’s something we need to be careful about. All the statistical hypotheses we ever make are false. That is, if a hypothesis becomes specific enough to make (probabilistic) predictions, we know that with enough data we will be […]

## When doing regression (or matching, or weighting, or whatever), don’t say “control for,” say “adjust for”

This comes up from time to time. We were discussing a published statistical blunder, an innumerate overconfident claim arising from blind faith that a crude regression analysis would control for various differences between groups. Martha made the following useful comment: Another factor that I [Martha] believe tends to promote the kind of thing we’re talking […]

## How post-hoc power calculation is like a shit sandwich

Damn. This story makes me so frustrated I can’t even laugh. I can only cry. Here’s the background. A few months ago, Aleksi Reito (who sent me the adorable picture above) pointed me to a short article by Yanik Bababekov, Sahael Stapleton, Jessica Mueller, Zhi Fong, and David Chang in Annals of Surgery, “A Proposal […]

## Published in 2018

R-squared for Bayesian regression models. *American Statistician*. (Andrew Gelman, Ben Goodrich, Jonah Gabry, and Aki Vehtari) Voter registration databases and MRP: Toward the use of large scale databases in public opinion research. *Political Analysis*. (Yair Ghitza and Andrew Gelman) Limitations of “Limitations of Bayesian leave-one-out cross-validation for model selection.” *Computational Brain and* […]
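The Bayesian R-squared from the first paper on that list has a simple form: for each posterior draw, take the variance of the predicted values divided by the variance of the predicted values plus the variance of the residuals, yielding a distribution of R-squared values rather than a single number. A minimal sketch in Python, assuming you already have a matrix of posterior predictions with one row per draw:

```python
import numpy as np

def bayes_r2(y_pred_draws, y):
    """Bayesian R-squared per posterior draw, in the spirit of
    Gelman, Goodrich, Gabry, and Vehtari: var(fit) / (var(fit) + var(res)).
    y_pred_draws has shape (n_draws, n_obs); y has shape (n_obs,)."""
    var_fit = np.var(y_pred_draws, axis=1)       # variance of the fit, per draw
    res = y[np.newaxis, :] - y_pred_draws        # residuals, per draw
    var_res = np.var(res, axis=1)                # residual variance, per draw
    return var_fit / (var_fit + var_res)         # one R^2 value per posterior draw
```

Because each value lies between 0 and 1 by construction, the result can be summarized like any other posterior quantity, for example by its median and an uncertainty interval.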

The post Published in 2018 appeared first on Statistical Modeling, Causal Inference, and Social Science.
