
The basic LSL algorithm: concepts and formalism

Suppose we want to predict a data sequence y(t) ($t=0,\cdots,T_{max}$) from its own past values. The basic concept of the LSL algorithm is that, to estimate the optimal residual at time T, we solve the prediction problem in a least-squares sense on the window [0,T] and keep only the residual at time T. To compute the residual at time $T\!+\!1$, we solve the problem again on the window $[0,T\!+\!1]$ and take the residual at time $T\!+\!1$ from that solution. This procedure explains why the final residuals depend only on past data.
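To make the windowing idea concrete, here is a minimal sketch using an ordinary least-squares solver; the function name, the test signal, and the prewindowing convention y(t)=0 for t<0 are illustrative assumptions, not taken from the text, and the brute-force refit at every T is exactly what the lattice recursions will later avoid.

\begin{verbatim}
import numpy as np

def lsl_residual(y, n, T):
    """Order-n prediction residual at time T, fit on the window [0, T]."""
    yp = np.concatenate([np.zeros(n), y[:T + 1]])   # assume y(t) = 0 for t < 0
    # Row t holds the past samples y(t-1), ..., y(t-n)
    A = np.column_stack([yp[n - i:n - i + T + 1] for i in range(1, n + 1)])
    f, *_ = np.linalg.lstsq(A, y[:T + 1], rcond=None)   # solve on [0, T]
    return float(y[T] - A[-1] @ f)                      # keep only time T

rng = np.random.default_rng(0)
y = np.sin(0.3 * np.arange(50)) + 0.1 * rng.standard_normal(50)
print([round(lsl_residual(y, 4, T), 4) for T in range(4, 8)])
\end{verbatim}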

In mathematical terms, we seek n filter coefficients $f_{n,T}(1),\cdots,f_{n,T}(n)$ minimizing:

\begin{displaymath}
\sum_{t=0}^T[\varepsilon_{n,T}(t)]^2 \mbox{\hspace{0.5cm}where\hspace{0.5cm}} \varepsilon_{n,T}(t)=y(t)-\sum_{i=1}^nf_{n,T}(i)y(t-i) \;.\end{displaymath}

I define the data vector $y_T$, the residual vector $\varepsilon_{n,T}$, and the convolution matrix $A_{1,n,T}$:

\begin{displaymath}
y_T = \left(\!\begin{array}{c}
 y(0) \\ y(1) \\ \vdots \\ y(T)
\end{array}\!\right), \hspace{0.5cm}
\varepsilon_{n,T} = \left(\!\begin{array}{c}
 \varepsilon_{n,T}(0) \\ \varepsilon_{n,T}(1) \\ \vdots \\ \varepsilon_{n,T}(T)
\end{array}\!\right), \hspace{0.5cm}
A_{1,n,T} = \pmatrix{
 y(-1)&y(-2)&\cdots&y(-n)\cr
 y(0)&y(-1)&\cdots&y(1-n)\cr
 \vdots&\vdots&\vdots&\vdots\cr
 y(T-1)&y(T-2)&\cdots&y(T-n)\cr} \;,
\end{displaymath}

where, by convention, y(t)=0 for t<0 (the prewindowed formulation).
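As a sketch, this prewindowed matrix can be assembled directly; conv_matrix is a hypothetical helper name, and the zero padding encodes the assumed convention y(t)=0 for t<0.

\begin{verbatim}
import numpy as np

def conv_matrix(y, n, T):
    """Rows t = 0..T; column i holds y(t-i), with y(t) = 0 for t < 0."""
    yp = np.concatenate([np.zeros(n), y[:T + 1]])
    return np.column_stack([yp[n - i:n - i + T + 1] for i in range(1, n + 1)])

y = np.arange(1.0, 9.0)        # y(0) = 1, ..., y(7) = 8
print(conv_matrix(y, 3, 5))    # the first rows show the zero prewindow
\end{verbatim}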

The prediction coefficients minimizing the residual energy are then equal to:

\begin{displaymath}
f_{n,T}=[f_{n,T}(1),\cdots,f_{n,T}(n)]' = (A'_{1,n,T}A_{1,n,T})^{-1}A'_{1,n,T}y_T \;.\end{displaymath}
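A quick numerical check of this formula, reusing the assumed conv_matrix helper from the sketch above: forming $(A'A)^{-1}A'y$ explicitly gives the same coefficients as np.linalg.lstsq, which is the numerically safer route in practice.

\begin{verbatim}
import numpy as np

def conv_matrix(y, n, T):      # repeated so the snippet runs on its own
    yp = np.concatenate([np.zeros(n), y[:T + 1]])
    return np.column_stack([yp[n - i:n - i + T + 1] for i in range(1, n + 1)])

rng = np.random.default_rng(1)
y = rng.standard_normal(100)
n, T = 4, 80
A = conv_matrix(y, n, T)
f_normal = np.linalg.solve(A.T @ A, A.T @ y[:T + 1])      # (A'A)^{-1} A'y
f_lstsq, *_ = np.linalg.lstsq(A, y[:T + 1], rcond=None)   # same solution
print(np.allclose(f_normal, f_lstsq))                     # True
\end{verbatim}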

We are solving a minimization problem on the data window [0,T], as indicated by the subscript T. In particular, the filter coefficients $f_{n,T}$ depend on T: if we work on the window $[0,T\!+\!1]$, the least-squares solution will be different, giving $f_{n,T+1}$. The second subscript n means that we are solving the problem for a filter of length n.
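A small check of this T-dependence, again with the assumed helper: refitting on $[0,T\!+\!1]$ yields coefficients that differ from the fit on [0,T].

\begin{verbatim}
import numpy as np

def conv_matrix(y, n, T):
    yp = np.concatenate([np.zeros(n), y[:T + 1]])
    return np.column_stack([yp[n - i:n - i + T + 1] for i in range(1, n + 1)])

rng = np.random.default_rng(5)
y = rng.standard_normal(40)
for Tk in (20, 21):            # windows [0, T] and [0, T+1]
    A = conv_matrix(y, 3, Tk)
    f, *_ = np.linalg.lstsq(A, y[:Tk + 1], rcond=None)
    print(Tk, np.round(f, 4))  # f_{n,T} differs from f_{n,T+1}
\end{verbatim}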

In the classical approach, the final outputs are $\varepsilon_{n,T_{max}}(T)$: we solve the problem on the whole window $[0,T_{max}]$. In this case, the final residuals also depend on future data, because the filter coefficients are computed from the whole data set ($t=0,\cdots,T_{max}$). The concept of the LSL algorithm, on the other hand, is that the final outputs are the residuals $\varepsilon_{n,T}(T)$ for all $T=0,\cdots,T_{max}$: this enables the algorithm to adjust to variations in the statistics of the data.
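The contrast between the two conventions can be sketched by brute force (assumed helper and test data as before): one filter fit on the whole window versus a refit at every T, the latter being what the lattice recursions compute efficiently.

\begin{verbatim}
import numpy as np

def conv_matrix(y, n, T):
    yp = np.concatenate([np.zeros(n), y[:T + 1]])
    return np.column_stack([yp[n - i:n - i + T + 1] for i in range(1, n + 1)])

rng = np.random.default_rng(2)
y = rng.standard_normal(60)
n, Tmax = 3, len(y) - 1

# Classical: one filter f_{n,Tmax}; residuals also depend on future data.
A = conv_matrix(y, n, Tmax)
f, *_ = np.linalg.lstsq(A, y, rcond=None)
eps_classical = y - A @ f

# LSL concept: refit on [0, T] and keep only the residual at time T.
eps_lsl = np.empty(Tmax + 1)
for T in range(Tmax + 1):
    AT = conv_matrix(y, n, T)
    fT, *_ = np.linalg.lstsq(AT, y[:T + 1], rcond=None)
    eps_lsl[T] = y[T] - AT[-1] @ fT

print(np.round(eps_classical[-5:], 3))
print(np.round(eps_lsl[-5:], 3))
\end{verbatim}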

The LSL algorithm therefore uses recursions for the residuals when we increase the order n (order-updating) and the time T (time-updating). We can also include exponential weighting, using a positive factor $\lambda\leq 1$ to minimize:

\begin{displaymath}
\sum_{t=0}^T\lambda^{T-t}[\varepsilon_{n,T}(t)]^2 \ = \ \sum_{t=0}^T\lambda^{T-t}[y(t)-\sum_{i=1}^nf_{n,T}(i)y(t-i)]^2 \;.\end{displaymath}
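A sketch of this weighted fit under the same assumptions: scaling row t of the system by $\sqrt{\lambda^{T-t}}$ turns the exponentially weighted problem into an ordinary least-squares one.

\begin{verbatim}
import numpy as np

def conv_matrix(y, n, T):
    yp = np.concatenate([np.zeros(n), y[:T + 1]])
    return np.column_stack([yp[n - i:n - i + T + 1] for i in range(1, n + 1)])

rng = np.random.default_rng(3)
y = rng.standard_normal(100)
n, T, lam = 4, 90, 0.98

w = np.sqrt(lam ** (T - np.arange(T + 1)))   # sqrt of lambda^(T-t), t = 0..T
A = conv_matrix(y, n, T)
f_w, *_ = np.linalg.lstsq(w[:, None] * A, w * y[:T + 1], rcond=None)
print(np.round(f_w, 3))    # exponentially weighted coefficients
\end{verbatim}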

The matrix $A'_{1,n,T}A_{1,n,T}$ would be the usual Toeplitz matrix involved in the Wiener-Levinson algorithm if we took only the data up to t=T-n into account, but the form of $A_{1,n,T}$ is necessary for the time-update recursions. In practice, the matrix $A_{1,n,T}$ is never actually built, because the aim of lattice methods is to work only with the residuals $\varepsilon_{n,T}$, and not with the filter coefficients.
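This remark can be checked numerically, at least in a hedged way: for a long window, $A'_{1,n,T}A_{1,n,T}$ differs from the Toeplitz autocorrelation matrix only through boundary terms near the ends of the window. The comparison below reuses the assumed helper.

\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(y, n, T):
    yp = np.concatenate([np.zeros(n), y[:T + 1]])
    return np.column_stack([yp[n - i:n - i + T + 1] for i in range(1, n + 1)])

rng = np.random.default_rng(4)
y = rng.standard_normal(500)
n = 4
A = conv_matrix(y, n, len(y) - 1)
r = np.array([y[:len(y) - k] @ y[k:] for k in range(n)])  # autocorrelations
print(np.max(np.abs(A.T @ A - toeplitz(r))) / r[0])       # small relative gap
\end{verbatim}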