Posts Tagged ‘Statistical computing’

Static sensitivity analysis: Computing robustness of Bayesian inferences to the choice of hyperparameters

January 16, 2018

Ryan Giordano wrote: Last year at StanCon we talked about how you can differentiate under the integral to automatically calculate quantitative hyperparameter robustness for Bayesian posteriors. Since then, I’ve packaged the idea up into an R library that plays nice with Stan. You can install it from this github repo. I’m sure you’ll be pretty […]
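The "differentiate under the integral" trick in this post reduces, for a hyperparameter ε in the log joint density, to the identity d/dε E[g(θ) | y] = Cov(g(θ), ∂ log p(θ, y; ε)/∂ε) under the posterior. Here is a minimal NumPy sketch of that identity on a conjugate toy model, where the exact answer is known; this is purely illustrative and is not the API of Giordano's R package.

```python
import numpy as np

# Toy conjugate model: y_i ~ N(theta, 1), prior theta ~ N(mu0, tau2).
# Local sensitivity identity:
#   d/d(mu0) E[theta | y] = Cov(theta, d log p / d mu0),
# where d log p(theta, y; mu0) / d mu0 = (theta - mu0) / tau2.

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=20)
mu0, tau2 = 0.0, 2.0

# Exact conjugate posterior (a stand-in for MCMC draws from Stan)
prec = 1.0 / tau2 + len(y)
post_mean = (mu0 / tau2 + y.sum()) / prec
post_sd = np.sqrt(1.0 / prec)
draws = rng.normal(post_mean, post_sd, size=200_000)

# Sensitivity of the posterior mean to mu0, estimated from draws alone
score = (draws - mu0) / tau2          # d log p / d mu0 at each draw
sens_mc = np.cov(draws, score)[0, 1]  # Cov(theta, score)

# Analytic derivative of the posterior mean w.r.t. mu0, for comparison
sens_exact = (1.0 / tau2) / prec
print(sens_mc, sens_exact)  # the two estimates should agree closely
```

The point is that the covariance is computed from the same posterior draws you already have, so no refitting is needed to assess how an inference shifts with the hyperparameter.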


Three new domain-specific (embedded) languages with a Stan backend

January 9, 2018

One is an accident. Two is a coincidence. Three is a pattern. Perhaps it’s no coincidence that there are three new interfaces that use Stan’s C++ implementation of adaptive Hamiltonian Monte Carlo (currently an updated version of the no-U-turn sampler). ScalaStan embeds a Stan-like language in Scala. It’s a Scala package largely (if not entirely […]


“Each computer run would last 1,000-2,000 hours, and, because we didn’t really trust a program that ran so long, we ran it twice, and it verified that the results matched. I’m not sure I ever was present when a run finished.”

January 6, 2018

Bill Harris writes: Skimming Michael Betancourt’s history of MCMC [discussed yesterday in this space] made me think: my first computer job was as a nighttime computer operator on the old Rice (R1) Computer, where I was one of several students who ran Monte Carlo programs written by (the very good) chemistry prof Dr. Zevi Salsburg […]


How does probabilistic computation differ in physics and statistics?

January 5, 2018

[image of Schrödinger’s cat, of course] Stan collaborator Michael Betancourt wrote an article, “The Convergence of Markov chain Monte Carlo Methods: From the Metropolis method to Hamiltonian Monte Carlo,” discussing how various ideas of computational probability moved from physics to statistics. Three things I wanted to add to Betancourt’s story: 1. My paper with Rubin […]


R-squared for Bayesian regression models

December 21, 2017

Ben, Jonah, Imad, and I write: The usual definition of R-squared (variance of the predicted values divided by the variance of the data) has a problem for Bayesian fits, as the numerator can be larger than the denominator. We propose an alternative definition similar to one that has appeared in the survival analysis literature: the […]
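One common form of the proposed alternative computes, for each posterior draw, the variance of the fitted values divided by that same variance plus the residual variance, which is bounded above by 1 by construction. A minimal NumPy sketch, with a hypothetical `bayes_r2` helper and fake posterior draws standing in for output from a real fit:

```python
import numpy as np

# Bayesian R^2, one posterior draw at a time: var(fit) / (var(fit) + var(residuals)).
# Unlike var(pred) / var(data), this ratio can never exceed 1.
def bayes_r2(y, y_pred_draws):
    """y: (n,) observed data; y_pred_draws: (S, n) posterior predicted values."""
    fit_var = y_pred_draws.var(axis=1)        # per-draw variance of the fits
    res_var = (y - y_pred_draws).var(axis=1)  # per-draw residual variance
    return fit_var / (fit_var + res_var)      # (S,) posterior distribution of R^2

# Tiny simulated example with fake posterior draws of a regression slope
rng = np.random.default_rng(1)
n, S = 50, 1000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
betas = 2.0 + 0.1 * rng.normal(size=(S, 1))   # stand-in for sampler output
r2 = bayes_r2(y, betas * x)
print(r2.mean())                               # posterior mean of R^2
```

A nice side effect of the per-draw computation is that you get a full posterior distribution for R-squared, not just a point estimate.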


Burn-in for MCMC, why we prefer the term warm-up

December 16, 2017

Here’s what we say on p.282 of BDA3: In the simulation literature (including earlier editions of this book), the warm-up period is called burn-in, a term we now avoid because we feel it draws a misleading analogy to industrial processes in which products are stressed in order to reveal defects. We prefer the term ‘warm-up’ […]


Workflow, baby, workflow

December 11, 2017

Bob Carpenter writes: Here’s what we do and what we recommend everyone else do: 1. code the model as straightforwardly as possible 2. generate fake data 3. make sure the program properly codes the model 4. run the program on real data 5. *If* the model is too slow, optimize *one step at a time* […]
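Steps 2 and 3 of that workflow can be sketched as a fake-data recovery check: simulate data from known parameter values, fit, and confirm the fitting code recovers them before running on real data. This is an illustrative Python/NumPy stand-in (the post is about Stan models; least squares substitutes for the real sampler here):

```python
import numpy as np

# Fake-data check: generate data from known parameters, then verify
# the fitting code recovers them. A big discrepancy with plenty of
# simulated data means the model code, not the data, is wrong.
rng = np.random.default_rng(42)

true_alpha, true_beta, true_sigma = 1.0, 2.5, 0.5

def simulate(n):
    x = rng.normal(size=n)
    y = true_alpha + true_beta * x + rng.normal(0, true_sigma, size=n)
    return x, y

def fit(x, y):
    # Least-squares stand-in for the real fitting program
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [alpha_hat, beta_hat]

x, y = simulate(n=10_000)
alpha_hat, beta_hat = fit(x, y)
print(alpha_hat, beta_hat)  # should sit near the true 1.0 and 2.5
```

The same check scales up naturally: if the fit fails to recover known parameters on clean simulated data, there is no point debugging it on real data.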


Interactive visualizations of sampling and GP regression

December 9, 2017

You really don’t want to miss Chi Feng‘s absolutely wonderful interactive demos. (1) Markov chain Monte Carlo sampling. I believe this is exactly what Andrew was asking for a few Stan meetings ago: Chi Feng’s Interactive MCMC Sampling Visualizer. This tool lets you explore a range of sampling algorithms including random-walk Metropolis, Hamiltonian Monte Carlo, […]


Bin Yu and Karl Kumbier: “Artificial Intelligence and Statistics”

December 8, 2017

Yu and Kumbier write: Artificial intelligence (AI) is intrinsically data-driven. It calls for the application of statistical concepts through human-machine collaboration during generation of data, development of algorithms, and evaluation of results. This paper discusses how such human-machine collaboration can be approached through the statistical concepts of population, question of interest, representativeness of training […]


How not to compare the speed of Stan to something else

November 30, 2017

Someone’s wrong on the internet, and I have to do something about it. Following on from Dan’s post on Barry Gibb statistical model evaluation, here’s an example inspired by a paper I found on Google Scholar searching for Stan citations. The paper (which there is no point in citing) concluded that JAGS was faster than […]


