Posts Tagged ‘Miscellaneous Statistics’

best algorithm EVER !!!!!!!!

December 6, 2016

Someone writes: On the website https://odajournal.com/ you find a lot of material for Optimal (or “optimizing”) Data Analysis (ODA) which is described as: In the Optimal (or “optimizing”) Data Analysis (ODA) statistical paradigm, an optimization algorithm is first utilized to identify the model that explicitly maximizes predictive accuracy for the sample, and then the resulting […]
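The idea quoted above, in its simplest form (one continuous predictor, a binary outcome), amounts to searching directly for the cut point that maximizes in-sample classification accuracy. Here is a minimal sketch of that kind of search; it is only an illustration of the "optimize accuracy directly" idea, not the ODA software, and the data and function name are made up:

```python
import numpy as np

def best_cutpoint(x, y):
    """Brute-force search for the cut point on a single predictor x that
    maximizes in-sample classification accuracy for a binary outcome y."""
    xs = np.sort(np.unique(x))
    cuts = (xs[:-1] + xs[1:]) / 2          # midpoints between distinct values
    best_acc, best_cut, best_rule = -np.inf, None, None
    for c in cuts:
        for rule in ("high_is_1", "low_is_1"):
            pred = (x > c).astype(int) if rule == "high_is_1" else (x <= c).astype(int)
            acc = np.mean(pred == y)
            if acc > best_acc:
                best_acc, best_cut, best_rule = acc, c, rule
    return best_cut, best_rule, best_acc

# Toy data. The in-sample accuracy found this way is optimistic by construction,
# so any such search needs out-of-sample or permutation-based checks.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = (x + rng.normal(size=200) > 0).astype(int)
print(best_cutpoint(x, y))
```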

Read more »

How can you evaluate a research paper?

December 1, 2016

Shea Levy writes: You ended a post from last month [i.e., Feb.] with the injunction to not take the fact of a paper’s publication or citation status as meaning anything, and instead that we should “read each paper on its own.” Unfortunately, while I can usually follow e.g. the criticisms of a paper you might […]

Read more »

“A bug in fMRI software could invalidate 15 years of brain research”

November 29, 2016

About 50 people pointed me to this press release or the underlying PPNAS research article, “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates,” by Anders Eklund, Thomas Nichols, and Hans Knutsson, who write: Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated […]

Read more »

Reminder: Instead of “confidence interval,” let’s say “uncertainty interval”

November 26, 2016

We had a vigorous discussion the other day on confusions involving the term “confidence interval,” what does it mean to have “95% confidence,” etc. This is as good a time as any for me to remind you that I prefer the term “uncertainty interval”. The uncertainty interval tells you how much uncertainty you have. That […]
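For posterior simulations, the interval in question is just a pair of quantiles of the draws. A minimal sketch, where simulated draws stand in for real posterior output (e.g., from MCMC):

```python
import numpy as np

# Stand-in for posterior draws of a parameter from a fitted model.
rng = np.random.default_rng(0)
draws = rng.normal(loc=1.2, scale=0.4, size=4000)

# The 95% uncertainty interval is the central 95% of the draws.
lower, upper = np.percentile(draws, [2.5, 97.5])
print(f"95% uncertainty interval: [{lower:.2f}, {upper:.2f}]")
```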

Read more »

Discussion on overfitting in cluster analysis

November 25, 2016

Ben Bolker wrote: It would be fantastic if you could suggest one or two starting points for the idea that/explanation why BIC should naturally fail to identify the number of clusters correctly in the cluster-analysis context. Bob Carpenter elaborated: Ben is finding that using BIC to select number of mixture components is selecting too many […]
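Here is a minimal sketch of the setup being debated, using scikit-learn's GaussianMixture as a stand-in for whatever mixture software Ben and Bob were actually using (an assumption on my part): simulate data with a known number of clusters, fit mixtures with increasing numbers of components, and let BIC pick.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulate data from 3 well-separated Gaussian clusters in 2 dimensions.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(200, 2)) for m in (-4, 0, 4)])

# Fit mixtures with K = 1..8 components and record BIC (lower is better).
bic = {}
for k in range(1, 9):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    bic[k] = gm.bic(X)

print({k: round(v, 1) for k, v in bic.items()})
print("K selected by BIC:", min(bic, key=bic.get))
```

Whether the selected K tracks the data-generating number of clusters, particularly when the component distributions are misspecified, is exactly what the discussion is about.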

Read more »

“Breakfast skipping, extreme commutes, and the sex composition at birth”

November 24, 2016

Bhash Mazumder sends along a paper (coauthored with Zachary Seeskin) which begins: A growing body of literature has shown that environmental exposures in the period around conception can affect the sex ratio at birth through selective attrition that favors the survival of female conceptuses. Glucose availability is considered a key indicator of the fetal environment, […]

Read more »

Abraham Lincoln and confidence intervals

November 23, 2016

Our recent discussion with mathematician Russ Lyons on confidence intervals reminded me of a famous logic paradox, in which equality is not as simple as it seems. The classic example goes as follows: Abraham Lincoln is the 16th president of the United States, but this does not mean that one can substitute the two expressions […]

Read more »

How best to partition data into test and holdout samples?

November 22, 2016

Bill Harris writes: In “Type M error can explain Weisburd’s Paradox,” you reference Button et al. 2013. While reading that article, I noticed figure 1 and the associated text describing the 50% probability of failing to detect a significant result with a replication of the same size as the original test that was just significant. […]
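The 50% figure follows from a standard back-of-envelope calculation: if the original study was just barely significant (z = 1.96) and the true effect is assumed to equal the observed effect, then a same-sized replication has its z-statistic centered at 1.96 and clears the significance threshold only about half the time. A quick check of that arithmetic (one-sided approximation; the two-sided correction is negligible here):

```python
from scipy.stats import norm

# Replication z-statistic is centered at the originally observed z = 1.96,
# so the probability it again exceeds 1.96 is P(Z > 0) = 0.5.
z_observed = 1.96
replication_power = 1 - norm.cdf(1.96 - z_observed)
print(f"Probability the replication is significant: {replication_power:.2f}")  # ~0.50
```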

Read more »

Deep learning, model checking, AI, the no-homunculus principle, and the unitary nature of consciousness

November 21, 2016

Bayesian data analysis, as my colleagues and I have formulated it, has a human in the loop. Here’s how we put it on the very first page of our book: The process of Bayesian data analysis can be idealized by dividing it into the following three steps: 1. Setting up a full probability model—a joint […]
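As a toy sketch of that human-in-the-loop workflow (a made-up binomial example with a grid posterior, not anything from the book): set up a probability model, condition on the data, then check the fit, which is the step where the human stays in the loop.

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.binomial(1, 0.3, size=50)            # "observed" binary data (simulated here)

# Set up a full probability model: binomial likelihood, flat prior on theta.
theta = np.linspace(0.001, 0.999, 999)
prior = np.ones_like(theta)

# Condition on the data: grid approximation to the posterior.
log_like = y.sum() * np.log(theta) + (len(y) - y.sum()) * np.log(1 - theta)
post = prior * np.exp(log_like - log_like.max())
post /= post.sum()

# Check the fit: posterior predictive replications vs. the observed data.
theta_draws = rng.choice(theta, size=2000, p=post)
y_rep = rng.binomial(len(y), theta_draws)
print("observed successes:", y.sum())
print("posterior predictive 95% interval:", np.percentile(y_rep, [2.5, 97.5]))
```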

Read more »

Thinking more seriously about the design of exploratory studies: A manifesto

November 17, 2016

In the middle of a long comment thread on a silly Psychological Science paper, Ed Hagen wrote: Exploratory studies need to become a “thing.” Right now, they play almost no formal role in social science, yet they are essential to good social science. That means we need to put as much effort in developing standards, […]

Read more »

