Bert Gunter points us to this editorial:
So, researchers using these data to answer questions about the effects of technology [screen time on adolescents] need to make several decisions. Depending on the complexity of the data set, variables can be statistically analysed in trillions of ways. This makes almost any pattern of results possible. As a result, studies have suggested both the existence of and the lack of an association between screen time and well-being, even when analysing the same data set. Naturally, it’s the research that highlights possible dangers that receives the most public attention and helps to set the policy agenda.
It’s the multiverse. Good to see people recognizing this. As always, I think the right way to go is not to apply some sort of multiple comparison correction or screen for statistical significance or preregister or otherwise choose some narrow subset of results to report. Instead, I recommend studying all comparisons of interest using a multilevel model and displaying all these inferences together, accepting that there will be uncertainty in conclusions.
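To make the recommendation concrete, here is a minimal sketch (my illustration, not from the post) of the contrast between selective reporting and partial pooling. It simulates many small effects estimated with noise, then shrinks all of them toward the grand mean using a hierarchical normal model with the variance components treated as known; in real work you would estimate those jointly, e.g. in Stan or lme4, and report all the inferences with their uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)

J = 50       # number of comparisons ("forking paths" in the data)
tau = 0.1    # true between-comparison sd: effects are real but small
sigma = 0.5  # per-comparison standard error: estimates are noisy

theta = rng.normal(0.0, tau, J)        # true effects
y = theta + rng.normal(0.0, sigma, J)  # noisy estimate for each comparison

# Selective reporting: keep only comparisons with |z| > 1.96.
# These survivors are exactly the most extreme, overestimated effects.
significant = np.abs(y / sigma) > 1.96

# Multilevel alternative (known tau, sigma; flat prior on the mean):
# each posterior mean shrinks y_j toward the grand mean, with the
# shrinkage weight set by the two variance components.
mu_hat = y.mean()
shrink = tau**2 / (tau**2 + sigma**2)
theta_pooled = mu_hat + shrink * (y - mu_hat)

print("raw estimate sd:    ", y.std())
print("pooled estimate sd: ", theta_pooled.std())
print("n 'significant':    ", significant.sum())
```

The pooled estimates are much less dispersed than the raw ones, and nothing is hidden: all J inferences get reported together, rather than whichever extreme results clear a significance threshold.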