In mathematical formulation, we try to find the $n$ filter coefficients $f_{n,T} = (f_1, \dots, f_n)'$, minimizing the residual energy:
$$ E_{n,T} = \sum_{t=0}^{T} \Big( y(t) - \sum_{i=1}^{n} f_i \, y(t-i) \Big)^2 $$
We define the data vector $y_T$, the residual vector $e_{n,T}$ and the convolution matrix $A_{1,n,T}$:
$$ y_T = \big( y(0), \dots, y(T) \big)', \qquad e_{n,T} = y_T - A_{1,n,T}\, f_{n,T}, $$
$$ A_{1,n,T} = \begin{pmatrix} 0 & \cdots & 0 \\ y(0) & \ddots & \vdots \\ \vdots & \ddots & 0 \\ y(T-1) & \cdots & y(T-n) \end{pmatrix}, \qquad \text{with } y(t) = 0 \text{ for } t < 0. $$
The prediction coefficients minimizing the residual energy are then equal to:
$$ f_{n,T} = \big( A'_{1,n,T} A_{1,n,T} \big)^{-1} A'_{1,n,T}\, y_T $$
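As a numerical illustration of the normal equations above (not the lattice recursions themselves), the following sketch builds the prewindowed convolution matrix $A_{1,n,T}$ and solves for $f_{n,T}$ with a least-squares solver; the names `conv_matrix` and `predictor` are illustrative, not from the text.

```python
import numpy as np

def conv_matrix(y, n, T):
    # Entry (t, i-1) = y(t - i) for t = 0..T, i = 1..n, with y(t) = 0 for t < 0
    # (prewindowed data, matching the zeros in the first rows of A_{1,n,T}).
    A = np.zeros((T + 1, n))
    for i in range(1, n + 1):
        A[i:, i - 1] = y[:T + 1 - i]
    return A

def predictor(y, n, T):
    # f_{n,T} = (A' A)^{-1} A' y_T, computed via a numerically stable LS solve.
    A = conv_matrix(y, n, T)
    yT = y[:T + 1]
    f, *_ = np.linalg.lstsq(A, yT, rcond=None)
    return f, yT - A @ f      # coefficients f_{n,T} and residual vector e_{n,T}

rng = np.random.default_rng(0)
y = rng.standard_normal(200)
f, e = predictor(y, n=4, T=199)
```

By the normal equations, the residual vector $e_{n,T}$ is orthogonal to the columns of $A_{1,n,T}$, which gives a quick correctness check on the solve.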
We are solving a minimization problem on the data window $[0,T]$, as indicated by the subscript $T$. In particular, the filter coefficients $f_{n,T}$ depend on $T$: if we work on the window $[0,T+1]$, the least-squares solution will be different, giving $f_{n,T+1}$. The second subscript $n$ means that we are solving the problem for a filter of length $n$.
In the classical approach, the final outputs are the residuals $e_{n,T_{\max}}(t)$ for $t \in [0,T_{\max}]$: we solve the problem on the whole window $[0,T_{\max}]$. In this case, the final residuals also depend on future data, because the filter coefficients are computed from the whole data set ($f_{n,T_{\max}}$). The concept of the LSL algorithm, on the other hand, is that the final outputs are the residuals $e_{n,T}(T)$ for all $T$: this enables the algorithm to adapt to variations in the statistics of the data.
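The two kinds of output can be contrasted numerically. The sketch below (illustrative names, not from the text) computes the block residuals obtained with the single filter $f_{n,T_{\max}}$, and the sequential residuals $e_{n,T}(T)$ where each sample is predicted with the filter fitted on $[0,T]$ only; on generic data the two sequences differ.

```python
import numpy as np

def conv_matrix(y, n, T):
    # Prewindowed convolution matrix: entry (t, i-1) = y(t - i), zero for t < i.
    A = np.zeros((T + 1, n))
    for i in range(1, n + 1):
        A[i:, i - 1] = y[:T + 1 - i]
    return A

def coeffs(y, n, T):
    # Least-squares filter f_{n,T} fitted on the window [0, T].
    f, *_ = np.linalg.lstsq(conv_matrix(y, n, T), y[:T + 1], rcond=None)
    return f

rng = np.random.default_rng(1)
y = rng.standard_normal(100)
n, Tmax = 3, 99

# Classical (block) residuals: one filter f_{n,Tmax} applied to the whole window.
f_block = coeffs(y, n, Tmax)
e_block = y[:Tmax + 1] - conv_matrix(y, n, Tmax) @ f_block

# Sequential residuals e_{n,T}(T): at each T, predict y(T) with the filter
# fitted on [0, T] only -- no future data enters the coefficients.
e_seq = np.array([y[T] - conv_matrix(y, n, T)[-1] @ coeffs(y, n, T)
                  for T in range(n, Tmax + 1)])
```

This brute-force loop refits the filter at every $T$; the point of the LSL recursions is precisely to avoid that cost.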
The LSL algorithm therefore uses recursions for the residuals when we increase the order $n$ (order update) and the time $T$ (time update). We can also include exponential weighting, using a positive factor $\lambda \le 1$ to minimize:
$$ E_{n,T} = \sum_{t=0}^{T} \lambda^{T-t} \Big( y(t) - \sum_{i=1}^{n} f_i \, y(t-i) \Big)^2 $$
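The exponentially weighted criterion can be solved directly by scaling each row of $A_{1,n,T}$ and $y_T$ with $\sqrt{\lambda^{T-t}}$; with $\lambda = 1$ this reduces to the unweighted problem. A minimal sketch (illustrative names, assumed signature):

```python
import numpy as np

def conv_matrix(y, n, T):
    # Prewindowed convolution matrix: entry (t, i-1) = y(t - i), zero for t < i.
    A = np.zeros((T + 1, n))
    for i in range(1, n + 1):
        A[i:, i - 1] = y[:T + 1 - i]
    return A

def weighted_predictor(y, n, T, lam):
    # Minimize sum_t lam**(T - t) * e(t)**2 by row-scaling with sqrt(lam**(T-t)):
    # recent samples (t near T) get weight ~1, old samples are discounted.
    w = np.sqrt(lam ** (T - np.arange(T + 1)))
    A = conv_matrix(y, n, T)
    f, *_ = np.linalg.lstsq(w[:, None] * A, w * y[:T + 1], rcond=None)
    return f

rng = np.random.default_rng(1)
y = rng.standard_normal(300)
f_flat = weighted_predictor(y, 4, 299, lam=1.0)   # unweighted limit
f_fade = weighted_predictor(y, 4, 299, lam=0.95)  # exponential forgetting
```

The weighting gives the filter a finite effective memory of roughly $1/(1-\lambda)$ samples, which is what lets the recursions track slowly varying statistics.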
The matrix $A'_{1,n,T} A_{1,n,T}$ would be the usual Toeplitz matrix involved in the Wiener-Levinson algorithm if we took only the data up to $t = T-n$ into account, but the form of $A_{1,n,T}$ is necessary for the time-update recursions. This matrix will actually never be built, because the aim of lattice methods is to work only with the residuals $e_{n,T}$, and not with the filter coefficients.
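As a hedged numerical check of this remark: with the prewindowed $A_{1,n,T}$, the product $A'_{1,n,T} A_{1,n,T}$ is only approximately Toeplitz, differing from the lag-autocorrelation (Toeplitz) matrix by edge terms involving the last few samples. The comparison below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.standard_normal(500)
n, T = 4, 499

# Prewindowed convolution matrix: entry (t, i-1) = y(t - i), zero for t < i.
A = np.zeros((T + 1, n))
for i in range(1, n + 1):
    A[i:, i - 1] = y[:T + 1 - i]

G = A.T @ A                                          # A'_{1,n,T} A_{1,n,T}
r = [y[:T + 1 - k] @ y[k:T + 1] for k in range(n)]   # lag-k autocorrelations
R = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])  # Toeplitz

# G and R agree except for a handful of boundary products near t = T,
# so the relative deviation is small for long windows.
edge = np.abs(G - R).max() / np.abs(R).max()
```

For a window of 500 samples the deviation `edge` is tiny, but it is exactly these boundary terms that the time-update recursions must keep track of.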