
Adaptive prediction error filtering

We still have a data sequence $y(t)$ ($t=0,\cdots,T_{max}$). For each time $T$ and order $k$, we define the forward and backward residuals $\varepsilon_{k}(T)$ and $r_{k}(T)$, computed with a single reflection coefficient $K_k$ through Burg's recursions:
\begin{displaymath}
\left\{\begin{array}{ll}
&\varepsilon_{0}(T)=y(T) \;, \quad r_{0}(T)=y(T) \\
&\varepsilon_{k}(T)=\varepsilon_{k-1}(T)-K_k\, r_{k-1}(T-1) \\
&r_{k}(T)=r_{k-1}(T-1)-K_k\, \varepsilon_{k-1}(T)
\end{array}\right.
\end{displaymath} (9)
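For concreteness, a minimal sketch of one lattice stage of recursion (9) is given below in Python/NumPy; the function name burg_stage and the convention that residuals are zero before $t=0$ are illustrative assumptions rather than part of the original formulation. In the adaptive version developed below, $K_k$ becomes a time-dependent array $K_{k,T}$, which the same element-wise update accepts unchanged.

\begin{verbatim}
import numpy as np

def burg_stage(eps_prev, r_prev, K):
    """One lattice stage of recursion (9): residuals of order k from
    residuals of order k-1 and a reflection coefficient K.
    K may be a scalar (classical Burg) or an array over T (adaptive case).
    Residuals before t=0 are taken as zero (assumed convention)."""
    # r_{k-1}(T-1): backward residual delayed by one sample
    r_delayed = np.concatenate(([0.0], r_prev[:-1]))
    eps_new = eps_prev - K * r_delayed   # forward residual of order k
    r_new   = r_delayed - K * eps_prev   # backward residual of order k
    return eps_new, r_new

# Order zero: both residuals equal the data itself
y = np.random.randn(101)
eps0, r0 = y.copy(), y.copy()
\end{verbatim}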
To compute the reflection coefficients $K_k$, we minimize the weighted energy of the forward and backward residuals of order $k$, with weights centered on the particular time $T$:
\begin{eqnarray*}
{\cal E}(K_k)&=&\sum_{t=0}^{T_{max}-1}\lambda^{\vert T-t\vert}\left[\varepsilon_{k}^{2}(t)+r_{k}^{2}(t)\right] \\
&=&\sum_{t=0}^{T_{max}-1}\lambda^{\vert T-t\vert}\left[(\varepsilon_{k-1}(t)\!-\!K_k r_{k-1}(t\!-\!1))^2+(r_{k-1}(t\!-\!1)\!-\!K_k\varepsilon_{k-1}(t))^2\right]\;.
\end{eqnarray*}
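Since ${\cal E}(K_k)$ is quadratic in $K_k$, its minimum is reached where $\partial{\cal E}/\partial K_k=0$, that is, where
\begin{displaymath}
K_k\sum_{t=0}^{T_{max}-1}\lambda^{\vert T-t\vert}\left(\varepsilon_{k-1}^{2}(t)+r_{k-1}^{2}(t\!-\!1)\right)
\;=\;2\sum_{t=0}^{T_{max}-1}\lambda^{\vert T-t\vert}\,\varepsilon_{k-1}(t)\,r_{k-1}(t\!-\!1)\;.
\end{displaymath}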

It is important to notice that the residuals here depend on the whole data set, since the energy ${\cal E}(K_k)$ is a summation over the whole time window. However, this expression of the energy induces an adaptive formalism, since more emphasis is put on the residuals near the time $T$ of interest than on the residuals far from this time. Minimizing this expression with respect to $K_k$ leads to a time-dependent reflection coefficient:
\begin{displaymath}
K_{k,T}=\frac{2\sum_{t=0}^{T_{max}-1}\lambda^{\vert T-t\vert}\varepsilon_{k-1}(t)\,r_{k-1}(t-1)}
             {\sum_{t=0}^{T_{max}-1}\lambda^{\vert T-t\vert}(\varepsilon_{k-1}^2(t)+r^2_{k-1}(t-1))} \;.
\end{displaymath} (10)
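As a sanity check on expression (10), a brute-force evaluation might look like the following sketch (again Python/NumPy; the helper name reflection_coeff_direct and the zero-padding of $r_{k-1}$ before $t=0$ are assumptions). Its cost is proportional to $T_{max}$ for every time sample $T$, which is what the recursive splitting below avoids.

\begin{verbatim}
import numpy as np

def reflection_coeff_direct(eps_prev, r_prev, T, lam):
    """Evaluate K_{k,T} of expression (10) by direct weighted sums.
    eps_prev, r_prev : residuals of order k-1
    T                : time of interest
    lam              : exponential weight, 0 < lambda < 1"""
    t = np.arange(len(eps_prev))
    w = lam ** np.abs(T - t)                            # weights centered on time T
    r_delayed = np.concatenate(([0.0], r_prev[:-1]))    # r_{k-1}(t-1)
    num = 2.0 * np.sum(w * eps_prev * r_delayed)
    den = np.sum(w * (eps_prev**2 + r_delayed**2))
    return num / den
\end{verbatim}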

Hale (1981) showed that the reflection coefficients $K_{k,T}$ can be computed efficiently if the numerator and denominator of expression (10) are split into past and future summations. For example, the numerator becomes:
\begin{eqnarray*}
N_{k}(T)&=&2\sum_{t=0}^{T_{max}-1}\lambda^{\vert T-t\vert}\varepsilon_{k-1}(t)\,r_{k-1}(t-1) \\
&=&2\sum_{t=0}^{T-1}\lambda^{T-t}\varepsilon_{k-1}(t)\,r_{k-1}(t-1)
\;+\;2\sum_{t=T}^{T_{max}-1}\lambda^{t-T}\varepsilon_{k-1}(t)\,r_{k-1}(t-1) \\
&=&N^{-}_{k}(T)\ +\ N^{+}_{k}(T) \;.
\end{eqnarray*}
In the same way, we can split the denominator into $D^{-}_{k}(T)$ (summation up to $T\!-\!1$) and $D^{+}_{k}(T)$ (summation from $T$ to $T_{max}\!-\!1$). It is then straightforward to show that:
\begin{displaymath}
\left\{\begin{array}{lllll}
N^{-}_{k}(T+1)&=&\lambda\left(2\varepsilon_{k-1}(T)\,r_{k-1}(T-1)+N^{-}_{k}(T)\right)&;&N^{-}_{k}(0)=0 \\
D^{-}_{k}(T+1)&=&\lambda\left(\varepsilon_{k-1}^{2}(T)+r_{k-1}^{2}(T-1)+D^{-}_{k}(T)\right)&;&D^{-}_{k}(0)=0 \\
N^{+}_{k}(T-1)&=&2\varepsilon_{k-1}(T-1)\,r_{k-1}(T-2)+\lambda N^{+}_{k}(T)&;&N^{+}_{k}(T_{max})=0 \\
D^{+}_{k}(T-1)&=&\varepsilon_{k-1}^{2}(T-1)+r_{k-1}^{2}(T-2)+\lambda D^{+}_{k}(T)&;&D^{+}_{k}(T_{max})=0.
\end{array}\right.
\end{displaymath} (11)
These recursions, together with the lattice recursions (9) and the expression (10) of $K_{k,T}$, form the adaptive version of Burg's algorithm.
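Putting the pieces together, one adaptive stage might be sketched as follows (Python/NumPy; the function name adaptive_burg_stage and the zero residuals before $t=0$ are assumptions): two sweeps evaluate $N^{\pm}_{k}$ and $D^{\pm}_{k}$ through recursions (11), their ratio gives $K_{k,T}$ as in expression (10), and the lattice update (9) then produces the residuals of order $k$.

\begin{verbatim}
import numpy as np

def adaptive_burg_stage(eps_prev, r_prev, lam):
    """One order of the adaptive Burg algorithm.
    eps_prev, r_prev : residuals of order k-1
    lam              : exponential weight, 0 < lambda < 1
    Returns the time-dependent K_{k,T} and the residuals of order k."""
    n = len(eps_prev)
    r_delayed = np.concatenate(([0.0], r_prev[:-1]))   # r_{k-1}(T-1)
    cross = 2.0 * eps_prev * r_delayed                 # 2 eps_{k-1}(T) r_{k-1}(T-1)
    power = eps_prev**2 + r_delayed**2                 # eps_{k-1}^2(T) + r_{k-1}^2(T-1)

    # Past sums: forward recursion with N^-(0) = D^-(0) = 0
    N_minus = np.zeros(n); D_minus = np.zeros(n)
    for T in range(n - 1):
        N_minus[T + 1] = lam * (cross[T] + N_minus[T])
        D_minus[T + 1] = lam * (power[T] + D_minus[T])

    # Future sums: backward recursion with N^+(Tmax) = D^+(Tmax) = 0
    N_plus = np.zeros(n); D_plus = np.zeros(n)
    for T in range(n - 1, 0, -1):
        N_plus[T - 1] = cross[T - 1] + lam * N_plus[T]
        D_plus[T - 1] = power[T - 1] + lam * D_plus[T]

    # Expression (10): one reflection coefficient per time sample
    # (a small constant could be added to the denominator to avoid 0/0)
    K = (N_minus + N_plus) / (D_minus + D_plus)

    # Lattice update (9) with the time-dependent coefficient
    eps_new = eps_prev - K * r_delayed
    r_new   = r_delayed - K * eps_prev
    return K, eps_new, r_new
\end{verbatim}

Starting from $\varepsilon_0(T)=r_0(T)=y(T)$ and applying this stage repeatedly gives the time-dependent reflection coefficients $K_{1,T},K_{2,T},\ldots$ of the adaptive prediction error filter.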

