Bayesian vector autoregression


In statistics and econometrics, Bayesian vector autoregression uses Bayesian methods to estimate a vector autoregression (VAR). It differs from standard VAR models in that the model parameters are treated as random variables, with prior probabilities assigned to them.
Vector autoregressions are flexible statistical models that typically include many free parameters. Given the limited length of standard datasets relative to the vast number of parameters available, Bayesian methods have become an increasingly popular way of dealing with the problem of over-parameterization. As the ratio of variables to observations increases, the role of prior probabilities becomes increasingly important.
The general idea is to use informative priors to shrink the unrestricted model towards a parsimonious naïve benchmark, thereby reducing parameter uncertainty and improving forecast accuracy.
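This shrinkage can be illustrated with a minimal sketch, assuming a known error variance and a normal prior centred on a naïve random-walk benchmark (the specific numbers and hyperparameter values here are illustrative, not from the source):

```python
# Sketch: shrinking unrestricted VAR estimates towards a random-walk benchmark.
# Assumes a known error variance and a simple isotropic normal prior (illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t
A_true = np.array([[0.6, 0.1],
                   [0.0, 0.4]])
T = 50
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.5, size=2)

X, Y = y[:-1], y[1:]                  # regressors and targets

# Unrestricted OLS estimate of the coefficient matrix
A_ols = np.linalg.solve(X.T @ X, X.T @ Y).T

# Normal prior centred on the parsimonious random-walk benchmark A = I,
# with prior precision lam * I controlling the amount of shrinkage.
lam = 10.0                            # larger lam -> stronger shrinkage
A_prior = np.eye(2)
sigma2 = 0.25                         # assumed known error variance

# Conjugate posterior mean: a precision-weighted combination of prior and data
post_prec = X.T @ X / sigma2 + lam * np.eye(2)
post_mean = np.linalg.solve(post_prec, X.T @ Y / sigma2 + lam * A_prior).T
```

The posterior mean lies between the OLS estimate and the benchmark: as `lam` grows, the estimate is pulled towards the random walk, which is the sense in which informative priors reduce parameter uncertainty.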
A typical example is the shrinkage prior proposed by Robert Litterman, and subsequently developed by other researchers at the University of Minnesota, which is known in the BVAR literature as the "Minnesota prior". The informativeness of the prior can be set by treating it as an additional parameter, based on a hierarchical interpretation of the model.
In particular, the Litterman/Minnesota prior places a normal prior on a set of parameters with a fixed and known covariance matrix, which is estimated with one of three techniques: (1) univariate AR, (2) diagonal VAR, or (3) full VAR.
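A common form of the Minnesota prior can be sketched as follows, using residual standard deviations from univariate AR fits (the first of the scaling options above) to scale the cross-variable terms; the hyperparameter names and values are illustrative assumptions, not fixed by the source:

```python
# Sketch of the Litterman/Minnesota prior moments for an n-variable VAR(p).
# Prior mean: 1 on each variable's own first lag (random walk), 0 elsewhere.
# Prior variances tighten with lag length; cross-variable terms are shrunk harder.
import numpy as np

def minnesota_prior(sigmas, p, lam=0.2, theta=0.5):
    """Return prior mean and variance for VAR coefficients A_l[i, j], l = 1..p.

    sigmas: residual std. deviations from univariate AR fits, one per variable
            (used to put coefficients on a comparable scale).
    lam:    overall tightness hyperparameter (illustrative default).
    theta:  extra shrinkage on lags of *other* variables (illustrative default).
    """
    n = len(sigmas)
    mean = np.zeros((p, n, n))
    mean[0] = np.eye(n)                       # own first lag centred on 1
    var = np.empty((p, n, n))
    for l in range(1, p + 1):                 # variance shrinks as 1/l**2
        for i in range(n):
            for j in range(n):
                if i == j:
                    var[l - 1, i, j] = (lam / l) ** 2
                else:
                    var[l - 1, i, j] = (lam * theta / l * sigmas[i] / sigmas[j]) ** 2
    return mean, var

mean, var = minnesota_prior(sigmas=[1.0, 2.0], p=2)
```

The design encodes the prior beliefs behind the Minnesota prior: each series behaves roughly like a random walk, a variable's own lags are more informative than other variables' lags, and more distant lags matter less.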
This type of model can be estimated with standard statistical packages such as EViews or Stata.
Recent research has shown that Bayesian vector autoregression is an appropriate tool for modelling large data sets.