
Is this algorithm rigorous?

My concern about the rigor of the algorithm comes mainly from the form of the recursions (10). Hale assumes that the exact recursions of the non-adaptive Burg algorithm can be extended to the adaptive case. However, some contradictions appear.

First, to minimize the energy ${\cal E}(K_k)$, we assume that the same reflection coefficient applies to all the residuals at different times; however, the minimization yields a time-dependent coefficient. Second, the recursions (10) on the backward residuals are valid only under uniform or exponential weighting; here the weights are pseudo-exponential (because they are symmetric), so the recursions should not hold. My impression is therefore that this algorithm effectively decreases the energy of the residuals, but may not actually minimize it, because we do not give the residuals enough degrees of freedom to evolve from iteration to iteration.
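The first contradiction is easy to see numerically. In the sketch below (the residuals are random stand-ins, not Hale's data), minimizing the weighted Burg energy with symmetric weights $\lambda^{\vert T-t\vert}$ gives a closed-form reflection coefficient that depends on the center $T$ of the weights, so the "single" coefficient assumed in the minimization is in fact time-dependent:

```python
import numpy as np

# Illustrative order-(k-1) residuals (random stand-ins, not Hale's data)
rng = np.random.default_rng(0)
f = rng.standard_normal(50)   # forward residuals
b = rng.standard_normal(50)   # backward residuals
lam = 0.9
t = np.arange(50)

def burg_reflection(T):
    """Reflection coefficient minimizing the weighted Burg energy
    sum_t lam**|T-t| * [(f_t + K*b_t)**2 + (b_t + K*f_t)**2]."""
    w = lam ** np.abs(T - t)
    return -2.0 * np.sum(w * f * b) / np.sum(w * (f**2 + b**2))

# The minimizer depends on where the symmetric weights are centered,
# i.e. the reflection coefficient is time-dependent:
print(burg_reflection(10), burg_reflection(40))
```

Note that the harmonic-mean form of the estimate still guarantees $\vert K\vert \le 1$ for any nonnegative weighting, since $2\vert f b\vert \le f^2+b^2$ term by term.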

I have tried to use the same formalism as for the LSL algorithm, but I could not find any logical relation between the two algorithms. The ideal would be to reuse the residuals $\varepsilon_{k,T}(t)$ and $r_{k,T}(t)$, where the subscript $T$ reappears, because we want to minimize the energy:

\begin{displaymath}
\sum_{t=0}^{T_{max}}\lambda^{\vert T-t\vert}\varepsilon^2_{k,T}(t) \;.
\end{displaymath}

Unlike in Burg's adaptive filtering, I prefer to keep the backward residuals outside the summation, so that we minimize their energy at the same time, but separately: they still form an orthogonal basis of the space of regressors. The final output is again $\varepsilon_{n,T}(T)$: the residual at time $T$ for the minimization problem with the weights centered on $T$. Actually, the definitions of the forward and backward residuals in terms of projection operators are still the same. We should just use a different scalar product:

\begin{displaymath}
x\cdot y=\sum_{t=0}^{T_{max}}\lambda^{\vert T-t\vert}x(t)y(t)
\hspace{1.0cm} \mbox{instead of} \hspace{1.0cm}
x\cdot y=\sum_{t=0}^{T}\lambda^{T-t}x(t)y(t) \;.
\end{displaymath}
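The difference between the two scalar products is concrete: the symmetric weights $\lambda^{\vert T-t\vert}$ involve all samples up to $T_{max}$, while the LSL weights $\lambda^{T-t}$ use only the past, $t \le T$. A minimal sketch (the signals are arbitrary examples):

```python
import numpy as np

T_max, T, lam = 100, 60, 0.95
t = np.arange(T_max + 1)
x = np.sin(0.2 * t)   # arbitrary example signals
y = np.cos(0.2 * t)

# Symmetric ("pseudo-exponential") product of the adaptive Burg problem:
# weights lam**|T - t| over all t = 0..T_max
dot_sym = np.sum(lam ** np.abs(T - t) * x * y)

# One-sided exponential product of the LSL problem:
# weights lam**(T - t) over the past only, t = 0..T
dot_exp = np.sum(lam ** (T - t[:T + 1]) * x[:T + 1] * y[:T + 1])

print(dot_sym, dot_exp)
```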

However, some problems appear in the order-updating recursions. First, as I said, the updating of the backward residuals is not rigorous: it implies a shift of these residuals, and we need an orthogonality condition to be preserved even after this shift. This does not hold with the weights of Burg's adaptive algorithm. This problem was already addressed by Claerbout (1977), who derived a Burg-type algorithm for weighted prediction problems. However, Claerbout acknowledged that he did not know exactly what problem this algorithm was solving. I tried it on a simple numerical problem and found a $20\%$ error between the exact solution and the solution of the algorithm, which confirms that the solution of an arbitrary weighted problem cannot be reached with a fully recursive algorithm.
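The gap between the exact weighted solution and a Burg-type estimate shows up already at order one. The sketch below is not Claerbout's algorithm, just an illustrative check: under symmetric weights, the exact weighted least-squares prediction coefficient and the symmetrized Burg-type estimate of the same coefficient generally differ.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200)                  # arbitrary test signal
w = 0.9 ** np.abs(100 - np.arange(1, 200))    # symmetric weights on t = 1..199

f, b = x[1:], x[:-1]   # samples and their one-step-lagged versions

# Exact weighted least squares: minimize sum_t w_t * (f_t + a*b_t)**2
a_exact = -np.sum(w * f * b) / np.sum(w * b**2)

# Burg-type (symmetrized forward/backward) estimate of the same coefficient
a_burg = -2.0 * np.sum(w * f * b) / np.sum(w * (f**2 + b**2))

# Relative discrepancy between the two estimates
print(abs(a_exact - a_burg) / abs(a_exact))
```

No claim is made that this reproduces the $20\%$ figure quoted above; it only shows that the symmetrized estimate does not solve the exact weighted problem.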

Moreover, the recursions in Hale's algorithm involve the values $r_{k-1,T-1}(t)$ ($t=0,\cdots,T_{max}$) to compute the reflection coefficient $K^r_{k,T}$, while the recursion (11) uses the values $r_{k-1,t-1}(t)$ (for all $t$).

So, I must concede that I could not derive a mathematical formulation of Hale's algorithm comparable to that of the LSL algorithm. This lack of formulation leads me to say that Burg's adaptive algorithm is rather intuitive.

To conclude this part, I recognize that this algorithm, which is exact for $\lambda=1$, is a good approximation of the ideal solution (which may not be recursive) for $\lambda$ close to 1. Indeed, some theoretical obstacles disappear for such values of $\lambda$. First, in the minimization of ${\cal E}(K_k)$, we can, as a first approximation, consider that $K_k$ is the same for all times $t$, even if the resulting expression is time-dependent: the reflection coefficient will actually be slowly varying. Then, when deriving the recursions, the weighting is nearly uniform over a large time scale, which makes Burg-type recursions more plausible.
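The slow variation of the reflection coefficient for $\lambda$ close to 1 can be checked numerically. In this illustrative sketch (white-noise stand-ins for the residuals), the spread of the weighted Burg reflection coefficient over different weight centers $T$ shrinks as $\lambda \to 1$:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(200)   # hypothetical order-(k-1) forward residuals
b = rng.standard_normal(200)   # hypothetical order-(k-1) backward residuals
t = np.arange(200)

def K(T, lam):
    """Weighted Burg reflection coefficient with weights lam**|T - t|."""
    w = lam ** np.abs(T - t)
    return -2.0 * np.sum(w * f * b) / np.sum(w * (f**2 + b**2))

# Spread of K over the weight centers T: it shrinks as lam approaches 1,
# i.e. the reflection coefficient becomes slowly varying
spread = {lam: max(K(T, lam) for T in range(50, 150))
               - min(K(T, lam) for T in range(50, 150))
          for lam in (0.7, 0.99)}
print(spread)
```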

So, even if this algorithm does not exactly minimize the residuals, it gives a good approximation of them. Indeed, Hale himself noted in his paper (1981) that it is necessary to keep $\lambda$ close to 1.

Stanford Exploration Project