Personal perils: are numbers needed to treat misleading us as to the scope for personalised medicine?
A common misinterpretation of Numbers Needed to Treat is causing confusion about the scope for personalised medicine.
Stephen Senn, Consultant Statistician, Edinburgh

Introduction
Thirty years ago, Laupacis et al. [1] proposed an intuitively appealing way that physicians could decide how […]
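The Numbers Needed to Treat idea mentioned above comes down to one calculation: NNT is the reciprocal of the absolute risk reduction between control and treatment arms. A minimal sketch (the event rates below are made-up illustrative numbers, not figures from Senn's article):

```python
# Number Needed to Treat (NNT): on average, how many patients must be
# treated for one additional patient to benefit.
# NNT = 1 / ARR, where ARR is the absolute risk reduction.

def nnt(control_event_rate: float, treatment_event_rate: float) -> float:
    """Return the number needed to treat given two event rates."""
    arr = control_event_rate - treatment_event_rate  # absolute risk reduction
    if arr <= 0:
        raise ValueError("treatment shows no risk reduction")
    return 1.0 / arr

# Illustrative (invented) rates: events in 20% of controls vs 15% of treated.
print(nnt(0.20, 0.15))
```

Note that the NNT is a property of the *group* comparison; as Senn's title hints, reading it as a statement about identifiable individual patients is exactly the misinterpretation at issue.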
I read BDs (bandes dessinées or, as we say in English, graphic literature or picture storybooks) to keep up with my French. Regular books are too difficult for me. When it comes to BDs, some of the classic kids' strips and albums are charming, but the ones for adults, which are more like Hollywood movies, […]
The post BD reviews appeared first on Statistical Modeling, Causal Inference, and Social Science.
This post is by Phil Price, not Andrew. Waaaay back in 2010, I wrote a blog entry entitled “Exercise and Weight Loss.” I had added high-intensity interval training back into my exercise regime, and had lost 12 pounds in about 12 weeks; but around the same time, some highly publicized studies were released that claimed […]
The post Exercise and weight loss: long-term follow-up appeared first on Statistical Modeling, Causal Inference, and Social Science.
Joël Gombin writes: I’m wondering what your take would be on the following problem. I’d like to model a proportion (e.g., the share of the vote for a given party at some territorial level) as a function of some compositional data (e.g., the sociodemographic makeup of the voting population), and to do this in a multilevel fashion (allowing […]
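One simple (non-multilevel) starting point for the problem Gombin describes is to regress the logit of the vote share on the compositional predictors. This sketch is only an illustration of that setup, not a recommendation from the post, and all the data in it are synthetic:

```python
import numpy as np

# Synthetic example: model logit(vote share) linearly in the composition
# of the electorate. Everything here is invented for illustration.
rng = np.random.default_rng(0)

n = 200
# Compositional predictor: shares of three sociodemographic groups per
# territorial unit; each row sums to 1 (drawn from a Dirichlet distribution).
composition = rng.dirichlet([2.0, 3.0, 5.0], size=n)

# Invented "true" relationship on the logit scale, plus a little noise.
true_coef = np.array([-1.0, 0.5, 1.5])
logit_share = composition @ true_coef + rng.normal(0.0, 0.05, size=n)
vote_share = 1.0 / (1.0 + np.exp(-logit_share))

# Fit by least squares on the logit scale. Because the composition columns
# sum to 1, an intercept would be perfectly collinear with them, so we fit
# the three group coefficients with no separate intercept.
y = np.log(vote_share / (1.0 - vote_share))
coef, *_ = np.linalg.lstsq(composition, y, rcond=None)
print(coef)  # estimates should land near true_coef
```

The multilevel part of the question (letting coefficients vary by territorial level) would replace this single least-squares fit with a hierarchical model, e.g. in Stan; the logit transform and the collinearity of compositional columns carry over unchanged.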
Pierre Jacob, Lawrence Murray, Chris Holmes, Christian Robert write: In modern applications, statisticians are faced with integrating heterogeneous data modalities relevant for an inference, prediction, or decision problem. In such circumstances, it is convenient to use a graphical model to represent the statistical dependencies, via a set of connected “modules”, each relating to a specific […]
The post Joint inference or modular inference? Pierre Jacob, Lawrence Murray, Chris Holmes, Christian Robert discuss conditions on the strength and weaknesses of these choices appeared first on Statistical Modeling, Causal Inference, and Social Science.
The basic formulas of Bayesian inference are p(parameters|data) proportional to p(parameters) * p(data|parameters) and, for predictions, p(predictions|data) = integral over parameters of p(predictions|parameters, data) * p(parameters|data). In these expressions (and in the corresponding simpler versions for maximum likelihood), “parameters” and “data” are unitary objects. Yes, it can be helpful to think of the parameter object as being a list or vector of individual parameters; and […]
The post Divisibility in statistics: Where is it needed? appeared first on Statistical Modeling, Causal Inference, and Social Science.
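The two formulas quoted in that excerpt can be made concrete with a tiny grid approximation. Everything here (the coin-flip data, the uniform prior) is an invented illustration, not an example from the post:

```python
import numpy as np

# Grid approximation of p(theta|data) proportional to p(theta) * p(data|theta)
# for a coin-flip example: 7 heads in 10 tosses, uniform prior on theta.
theta = np.linspace(0.001, 0.999, 999)        # grid over the parameter
prior = np.ones_like(theta)                   # uniform prior p(theta)
likelihood = theta**7 * (1 - theta)**3        # p(data|theta), up to a constant
unnormalised = prior * likelihood
posterior = unnormalised / unnormalised.sum() # normalise so it sums to 1

# Posterior predictive probability that the next toss is heads:
# p(heads|data) = integral of theta * p(theta|data) d(theta),
# approximated here by a sum over the grid.
p_next_heads = (theta * posterior).sum()
print(round(p_next_heads, 3))
```

With a uniform prior this posterior is Beta(8, 4), whose mean is 8/12, so the grid estimate of the next-toss probability should sit very close to 0.667; the point of the sketch is just that "parameters" enters both formulas as a single object, however many components it has.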