Next: Recursions: time updating Up: THE LSL ALGORITHM Previous: Backward residuals

Recursions: order updating

Here, $T$ is fixed and the order of the prediction filter increases from 1 to the desired order. The recursions for the backward residuals require the weighting to be uniform (or exponential), essentially because they involve a shift of these residuals; this condition is also explained in Lee et al. (1981). These recursions are similar to those of Burg's algorithm and are derived directly from equations (1) and (2):
\begin{displaymath}
\left\{
\begin{array}{l}
\varepsilon_{k+1,T}=\varepsilon_{k,T}-{\Delta_{k+1,T}\over R^{r}_{k,T-1}}\,r_{k,T-1} \\[2mm]
r_{k+1,T}=r_{k,T-1}-{\Delta_{k+1,T}\over R^{\varepsilon}_{k,T}}\,\varepsilon_{k,T}
\end{array}
\right.
\qquad
\Delta_{k+1,T}=r'_{k,T-1}\varepsilon_{k,T}\,,\quad
R^{r}_{k,T-1}=r'_{k,T-1}r_{k,T-1}\,,\quad
R^{\varepsilon}_{k,T}=\varepsilon'_{k,T}\varepsilon_{k,T}
\end{displaymath} (2)
$\Delta_{k+1,T}$ is called the partial correlation of the residuals; $R^{r}_{k,T-1}$ is the covariance of the backward residuals, and $R^{\varepsilon}_{k,T}$ is the covariance of the forward residuals. Notice that the updating of the forward residuals has already been illustrated in Figure [*]. The covariances themselves can also be updated as follows:
\begin{displaymath}
R^{\varepsilon}_{k+1,T}=R^{\varepsilon}_{k,T}-{\Delta^2_{k+1,T}\over R^{r}_{k,T-1}}\;,\qquad
R^{r}_{k+1,T}=R^{r}_{k,T-1}-{\Delta^2_{k+1,T}\over R^{\varepsilon}_{k,T}}
\end{displaymath} (3)
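Equations (2) and (3) amount to a single order-update step on vectors of residuals. As a minimal numerical sketch (the function name `lsl_order_update` and the storage of residuals as NumPy vectors over time are assumptions for illustration, not part of the algorithm's original statement):

```python
import numpy as np

def lsl_order_update(eps_k, r_k_prev):
    """One order-update step of the lattice, following equations (2)-(3).

    eps_k    : forward residuals  epsilon_{k,T}, a vector over time
    r_k_prev : backward residuals r_{k,T-1} (shifted by one sample)
    Returns the order-(k+1) residuals and the updated covariances.
    """
    # Partial correlation and covariances of the order-k residuals.
    delta = r_k_prev @ eps_k      # Delta_{k+1,T} = r'_{k,T-1} eps_{k,T}
    R_r   = r_k_prev @ r_k_prev   # R^r_{k,T-1}
    R_eps = eps_k @ eps_k         # R^eps_{k,T}

    # Order-update of the residuals, equation (2).
    eps_next = eps_k - (delta / R_r) * r_k_prev
    r_next   = r_k_prev - (delta / R_eps) * eps_k

    # Order-update of the covariances, equation (3).
    R_eps_next = R_eps - delta**2 / R_r
    R_r_next   = R_r - delta**2 / R_eps
    return eps_next, r_next, R_eps_next, R_r_next
```

A quick consistency check of the two recursions: the squared norm of the updated forward residual should equal the updated forward covariance, since $\|\varepsilon_{k+1,T}\|^2 = R^{\varepsilon}_{k,T} - \Delta^2_{k+1,T}/R^{r}_{k,T-1}$.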
Finally, we can define two reflection coefficients $K^{\varepsilon}_{k+1,T}$ and $K^{r}_{k+1,T}$ to give a compact expression of equation ([*]):

\begin{displaymath}
K^{\varepsilon}_{k+1,T}={\Delta_{k+1,T}\over R^{\varepsilon}_{k,T}}\;,\hspace{0.5cm}
K^{r}_{k+1,T}={\Delta_{k+1,T}\over R^{r}_{k,T-1}}\;,
\end{displaymath}

\begin{displaymath}
\varepsilon_{k+1,T}=\varepsilon_{k,T}-K^{r}_{k+1,T}\,r_{k,T-1}\;,\hspace{0.5cm}
r_{k+1,T}=r_{k,T-1}-K^{\varepsilon}_{k+1,T}\,\varepsilon_{k,T}\;.
\end{displaymath}
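The compact reflection-coefficient form can be sketched the same way (again, `lattice_stage` and the vector storage are illustrative assumptions). A useful property it makes visible: after the update, the forward residual is orthogonal to the shifted backward residual it was regressed against, and vice versa.

```python
import numpy as np

def lattice_stage(eps_k, r_k_prev):
    """One lattice stage in reflection-coefficient form.

    K_eps = Delta / R^eps scales the forward residual in the backward
    update; K_r = Delta / R^r scales the backward residual in the
    forward update.
    """
    delta = r_k_prev @ eps_k
    K_eps = delta / (eps_k @ eps_k)      # K^eps_{k+1,T}
    K_r   = delta / (r_k_prev @ r_k_prev)  # K^r_{k+1,T}
    eps_next = eps_k - K_r * r_k_prev
    r_next   = r_k_prev - K_eps * eps_k
    return eps_next, r_next, K_eps, K_r
```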

Notice also that we have two reflection coefficients, whereas Burg's algorithm uses only one. For an infinite stationary sequence $y_T$ the two coefficients would be equal, but in most practical applications they are not. Burg's algorithm corresponds in fact to a particular choice of the reflection coefficient, namely the one that minimizes the total energy $\sum_t(\varepsilon^2_{p+1}(t)+r^2_{p+1}(t))$:

\begin{displaymath}
{2\over K_{p+1}}={1\over K^{\varepsilon}_{p+1}}+{1\over K^{r}_{p+1}}\;,\qquad
K_{p+1}={2\sum_t\varepsilon_p(t)r_p(t)\over \sum_t\varepsilon^2_p(t)+\sum_t r^2_p(t)}\;.
\end{displaymath}

The advantage of Burg's algorithm is that the reflection coefficients $K_p$ are always bounded by 1 in magnitude, which prevents the algorithm from exploding numerically. On the other hand, the backward residuals it uses do not form an orthogonal basis of the space of the regressors, so we cannot use projection operators to solve the prediction problem.
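Both properties of Burg's coefficient are easy to check numerically: it is the harmonic mean of the two LSL coefficients, and the termwise inequality $2|\varepsilon_p(t)r_p(t)|\le\varepsilon^2_p(t)+r^2_p(t)$ bounds it by 1. A minimal sketch (the function name `burg_reflection` is an assumption for illustration):

```python
import numpy as np

def burg_reflection(eps_p, r_p):
    """Burg's single reflection coefficient: minimizes the total energy
    sum(eps_{p+1}^2 + r_{p+1}^2), and equals the harmonic mean of the
    two LSL coefficients K^eps and K^r."""
    num = 2.0 * (eps_p @ r_p)
    den = eps_p @ eps_p + r_p @ r_p
    return num / den
```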


Stanford Exploration Project
1/13/1998