(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)
Brendan Nyhan sends along this paper by Domenico Giannone, Michele Lenza, and Giorgio Primiceri:
Vector autoregressions are flexible time series models that can capture complex dynamic interrelationships among macroeconomic variables. However, their dense parameterization leads to unstable inference and inaccurate out-of-sample forecasts, particularly for models with many variables. A solution to this problem is to use informative priors, in order to shrink the richly parameterized unrestricted model towards a parsimonious naive benchmark, and thus reduce estimation uncertainty. This paper studies the optimal choice of the informativeness of these priors, which we treat as additional parameters, in the spirit of hierarchical modeling. This approach is theoretically grounded, easy to implement, and greatly reduces the number and importance of subjective choices in the setting of the prior. Moreover, it performs very well, both in terms of out-of-sample forecasting (where it is competitive with factor models) and in terms of the accuracy of the estimated impulse response functions.
I have no experience with vector autoregressions and have not read the article carefully, but I love how they frame the problem above. This is looking to me like another brick in the wall of weakly informative priors. Excellent.
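To make the hierarchical idea concrete, here is a minimal sketch (my own illustration, not the authors' implementation) of treating the informativeness of a shrinkage prior as a parameter: simulate a small VAR(1), place a Gaussian prior centered at zero on the coefficients of each equation, and select the prior standard deviation `lam` by maximizing the marginal likelihood, in the empirical-Bayes spirit the abstract describes. The noise variance is assumed known here to keep the conjugate algebra short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small 3-variable VAR(1): y_t = A y_{t-1} + noise
n, T = 3, 200
A_true = np.array([[0.5, 0.1, 0.0],
                   [0.0, 0.4, 0.1],
                   [0.1, 0.0, 0.3]])
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + 0.1 * rng.standard_normal(n)

X, targets = Y[:-1], Y[1:]  # lagged regressors and one-step-ahead targets

def log_marginal_likelihood(lam, X, y, sigma2=0.01):
    """Log p(y | lam) for one equation under the conjugate Gaussian
    prior beta ~ N(0, lam^2 I) with known noise variance sigma2:
    marginally, y ~ N(0, sigma2 I + lam^2 X X')."""
    m = X.shape[0]
    S = sigma2 * np.eye(m) + lam**2 * (X @ X.T)
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (m * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(S, y))

# Hierarchical step: choose the shrinkage level lam that maximizes the
# marginal likelihood summed across equations (empirical Bayes).
grid = np.logspace(-3, 1, 50)
scores = [sum(log_marginal_likelihood(lam, X, targets[:, i]) for i in range(n))
          for lam in grid]
lam_hat = grid[int(np.argmax(scores))]
print(f"selected shrinkage level lam = {lam_hat:.3f}")
```

A tiny `lam` shrinks every coefficient hard toward zero (the parsimonious benchmark); a huge `lam` reproduces unrestricted least squares. The marginal likelihood trades these off automatically, which is the "one level up" hierarchical move the paper formalizes for the Minnesota-style priors used in large VARs.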