Modified Burg algorithms for multivariate subset autoregression

Abstract

We devise an algorithm that extends Burg's original method for recursive modeling of univariate autoregressions on a full set of lags to multivariate modeling on a subset of lags. The key step in the algorithm involves minimizing the sum of the norms of the forward and backward prediction error residual vectors, as a function of the reflection coefficient matrices. We show that this sum has a global minimum, and give an explicit expression for the minimizer. By modifying the manner in which the reflection coefficients are calculated, this algorithm will also give the well-known Yule-Walker estimates. Based on recently proposed subset extensions to existing full set counterparts, two other algorithms that involve modifying the reflection coefficient calculation are also presented. Using simulated data, all four algorithms are compared with respect to the size of the Gaussian likelihood produced by each respective model. We find that the Burg and Vieira-Morf algorithms tend to perform better than the others for all configurations of roots of the autoregressive polynomial, averaging higher likelihoods with smaller variability across a large number of realizations. We extend existing asymptotic central limit type results for three common vector autoregressive process estimators to the subset case. First, consistency and asymptotic normality are established for the least squares estimator. This is extended to Yule-Walker by virtue of the similarity in the closed forms for the two estimators. Taking advantage of the fact that the Yule-Walker and Burg estimates can be calculated recursively via nearly identical algorithms, we then show that these two differ by terms of order at most Op(1/n). In this way the Burg estimator inherits the same asymptotics as both Yule-Walker and least squares.
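For orientation, the minimization described above has a well-known closed form in the simplest setting: for a univariate autoregression on a full set of lags, the stage-m reflection coefficient minimizing the summed squared forward and backward prediction errors is k = 2 Σ f·b / (Σ f² + Σ b²). The following NumPy sketch of that classical univariate recursion is illustrative only (it is not the multivariate subset algorithm of the thesis; the function name is ours):

```python
import numpy as np

def burg_ar(x, order):
    """Univariate full-lag Burg fit: at each stage, pick the reflection
    coefficient minimizing the sum of squared forward and backward
    prediction errors, then update coefficients by the Levinson recursion."""
    f = np.asarray(x, dtype=float)[1:]   # forward errors, aligned at time t
    b = np.asarray(x, dtype=float)[:-1]  # backward errors, aligned at t - 1
    a = np.empty(0)                      # AR coefficients built up per stage
    for _ in range(order):
        # closed-form global minimizer of the summed squared errors
        k = 2.0 * f.dot(b) / (f.dot(f) + b.dot(b))
        # update both error sequences with the new reflection coefficient
        f, b = f - k * b, b - k * f
        # Levinson update: a_i <- a_i - k * a_{m-i}, and append a_m = k
        a = np.concatenate((a - k * a[::-1], [k]))
        # re-align: drop the first forward and last backward error
        f, b = f[1:], b[:-1]
    return a
```

By the AM-GM inequality, 2|Σ f·b| ≤ Σ f² + Σ b², so every reflection coefficient satisfies |k| ≤ 1 and the fitted model is stationary, a standard advantage of Burg over least squares.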
Saddlepoint approximations to the distributions of the Yule-Walker and Burg autoregressive coefficient estimators, when sampling from a subset Gaussian AR(p) with only one non-zero lag, are given. In this context, each estimator can be written as a ratio of quadratic forms in normal random variables. The larger bias and variance in the distribution of the Yule-Walker estimator, particularly evident at low sample sizes and when the AR coefficient is close to ±1, agree with its tendency to give lower likelihoods, as noted earlier. Empirical probability density histograms of the sampling distributions, for these two as well as the maximum likelihood estimator, provide further confirmation of the superiority of Burg over Yule-Walker in the vicinity of ±1. Relative error comparisons between the saddlepoint approximations and the empirical cumulative distribution functions show close agreement.

In conclusion, we elaborate on the logic employed in writing the computer programs that implement the Burg algorithm in the univariate and bivariate modeling settings. The central idea involves the formation of a tree of nodes connected by pointers. Due to the recursive nature of the algorithm, where modeling on larger subset sizes relies on that of smaller ones, each node harbors information on the modeling problem as applied to the particular subset of lags it represents. Starting at nodes of just one lag, the program follows pointers to those of larger numbers of lags, pausing at each to build the necessary modeling information. Termination occurs at the top of the tree, when applying the algorithm to the unique node that represents the subset of lags upon which modeling was originally desired.
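To make the ratio-of-quadratic-forms representation concrete, consider the simplest such subset model, an AR(1) (single non-zero lag at 1). The two estimators then reduce to the ratios below; this is a minimal illustrative sketch (function names are ours), not the thesis's general-p derivation:

```python
import numpy as np

def yule_walker_ar1(x):
    """Yule-Walker AR(1) estimate: sample autocovariance ratio r(1)/r(0),
    i.e. a ratio of quadratic forms in the data vector."""
    x = np.asarray(x, dtype=float)
    return x[1:].dot(x[:-1]) / x.dot(x)

def burg_ar1(x):
    """Burg AR(1) estimate: same numerator (doubled), but the denominator
    drops the end terms x_1^2 and x_n^2, so it shrinks less toward zero."""
    x = np.asarray(x, dtype=float)
    num = 2.0 * x[1:].dot(x[:-1])
    den = x[1:].dot(x[1:]) + x[:-1].dot(x[:-1])
    return num / den
```

Since the Burg denominator equals 2 Σ x_t² minus the two end terms, the Burg estimate is never closer to zero than Yule-Walker's when the sample lag-1 autocovariance is positive, which is consistent with the larger bias of Yule-Walker reported above when the coefficient is near ±1.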
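The node-and-pointer tree described in the concluding paragraph might be sketched as follows. This is a hypothetical minimal structure of our own, not the thesis's actual implementation; in particular, the choice of which smaller subsets a node points to (here, every subset obtained by dropping one lag) is an assumed illustrative dependence:

```python
class LagNode:
    """One node per subset of lags. `children` are pointers to the
    smaller-subset nodes whose modeling information this node builds on;
    `model` holds that information once the recursion reaches this node."""
    def __init__(self, lags):
        self.lags = tuple(sorted(lags))
        self.children = []   # pointers to smaller-subset nodes
        self.model = None    # filled in bottom-up, single-lag nodes first

def build_tree(lags):
    """Recursively form the tree whose root is the full requested subset.
    Single-lag nodes are the leaves; the root is where modeling terminates."""
    node = LagNode(lags)
    lags = node.lags
    if len(lags) > 1:
        # assumed dependence: each node points to the subsets obtained
        # by deleting one lag from its own subset
        for i in range(len(lags)):
            node.children.append(build_tree(lags[:i] + lags[i + 1:]))
    return node
```

Walking this tree leaves-first mirrors the recursion in the abstract: information is built at each node visited, and the process terminates at the unique root node representing the originally requested subset of lags.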

Subject

statistics
algorithms
studies
sample size
forecasting techniques
simulation
regression analysis
histograms
copyright
approximation
